The following are the guidelines and requirements for the Automation framework:
1. Adhere to CI/CD (Continuous Integration/Continuous Deployment) for Automation framework.
2. Use an easy-to-understand framework, preferably BDD (Behaviour-Driven Development) implemented in the Given/When/Then format. This format helps both technical and non-technical users understand the test scenarios in plain language.
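As an illustrative sketch of the Given/When/Then structure, the steps below are written as plain Python functions. A real suite would use a BDD tool such as Cucumber or behave; the login scenario and all names here are purely illustrative, not part of the guidelines.

```python
# Minimal sketch of a Given/When/Then scenario as plain Python functions.
# The login example and the "context" dictionary are illustrative only.

def given_a_registered_user(context):
    # Given: a known user exists in a stubbed user store.
    context["users"] = {"alice": "s3cret"}

def when_the_user_logs_in(context, username, password):
    # When: the user submits credentials.
    context["authenticated"] = context["users"].get(username) == password

def then_access_is_granted(context):
    # Then: the outcome is asserted in plain terms.
    assert context["authenticated"], "expected the user to be logged in"

context = {}
given_a_registered_user(context)
when_the_user_logs_in(context, "alice", "s3cret")
then_access_is_granted(context)
```

The point of the format is that each step reads as a sentence, so non-technical stakeholders can review the scenario without reading the implementation.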
3. Provide comprehensive reporting and an overview of what was tested, including the number of tests executed as part of a release, covering regression and smoke testing scenarios. All issues raised during testing must be reported in Jira.
4. Maintain a common, accessible repository for all test cases in Jira, including both manual and automated tests.
5. Implement a regression suite maintenance process that includes handling production defects to prevent the recurrence of bugs in production.
(Atlassian) Jira / Confluence: Confluence is used for all our KT and documentation, including post-mortems, operational processes and procedures, the release/change calendar, and technical architecture.
6. Do not maintain separate smoke and regression suites but use test case tagging to run only smoke or only regression tests, or both.
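The single-suite-with-tags approach can be sketched as below. Real frameworks provide this directly (for example pytest markers with `-m`, or Cucumber tags); the small registry here only illustrates the idea, and the test names are invented.

```python
# Illustrative sketch of one suite filtered by tags, instead of maintaining
# separate smoke and regression suites. Test names are invented examples.

TESTS = []

def tag(*tags):
    # Decorator that registers a test together with its tags.
    def decorator(func):
        TESTS.append((set(tags), func))
        return func
    return decorator

@tag("smoke", "regression")
def test_home_page_loads():
    assert True  # placeholder for a real check

@tag("regression")
def test_discount_applied():
    assert True  # placeholder for a real check

def run(selected):
    # Execute only the tests whose tags intersect the selected set.
    ran = []
    for tags, func in TESTS:
        if tags & selected:
            func()
            ran.append(func.__name__)
    return ran

print(run({"smoke"}))                 # smoke-only run
print(run({"smoke", "regression"}))   # full run
```

With pytest the same selection would be done from the command line, e.g. `pytest -m smoke` or `pytest -m "smoke or regression"`.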
7. Implement unit testing during the development cycle, and share the details.
8. Optimise the handling of the Automation backlog, which includes tests and bugs.
9. Manage defects efficiently through effective Root Cause Analysis (RCA) sessions and take action to prevent their recurrence.
10. Provide brand-wise reporting for regression testing to allow better insights into the testing coverage.
11. Focus on introducing API testing to reduce the reliance on UI testing for functionality checks.
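An API-level check asserts directly on the service response instead of driving the UI. The sketch below validates a stubbed JSON payload; in a real test the payload would come from an HTTP client call, and the field names here are illustrative assumptions.

```python
# Hedged sketch of an API-level functional check: assert on the JSON
# response rather than on rendered UI. The payload shape is illustrative.
import json

# In a real test this string would be the body of an HTTP response.
raw = '{"status": "ok", "basket_total": 42.50, "currency": "GBP"}'
payload = json.loads(raw)

assert payload["status"] == "ok"
assert payload["currency"] == "GBP"
assert payload["basket_total"] > 0
```

Checks like this run far faster and more reliably than the equivalent UI flow, which is why the guideline favours shifting functionality checks to the API layer.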
12. Maintain the regression suite daily to address failing and flaky test cases.
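One common way to triage a failing case during daily maintenance is a retry wrapper: a test that passes on retry is flaky, while one that fails every attempt is a genuine failure. The retry count and the simulated test below are illustrative assumptions, not part of the guidelines.

```python
# Illustrative retry wrapper for distinguishing flaky tests from genuine
# failures. The retry count and the simulated flaky test are examples only.

def retry(times=3):
    def decorator(func):
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return func(*args, **kwargs)
                except AssertionError as err:
                    last_error = err
            raise last_error  # genuine failure: failed on every attempt
        return wrapper
    return decorator

attempts = {"count": 0}

@retry(times=3)
def flaky_check():
    attempts["count"] += 1
    # Simulated flakiness: fails on the first attempt only.
    assert attempts["count"] >= 2

flaky_check()  # passes on the second attempt, so the case is flaky
```

Flaky cases identified this way should still be fixed or quarantined; retries are a diagnostic aid, not a cure.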
13. Implement Requirement Traceability to map requirements to test cases.
14. Have dedicated test environments with production data, configuration, and content synchronised with the production environment.
15. Update the regression suite based on the progression of test cases as new functionalities are added or existing ones are modified.
16. Designate clear Single Points of Contact (SPOCs) for specific queries within the AMS team; this is essential for effective communication and issue resolution. One SPOC should handle all questions and concerns related to deployments, and another should address any issues or inquiries concerning environments.
*By following these guidelines, the Automation framework will be robust, efficient, and aligned with testing standards and practices.
*Documenting these designated SPOCs on a Confluence page will provide visibility to everyone involved in the project/support process. This documentation will serve as a quick reference for team members, enabling them to know exactly who to reach out to for specific types of questions or problems.
*Having clear communication channels and SPOCs will streamline the support process and enhance collaboration within the AMS team. It also ensures that important information and knowledge are
shared effectively, leading to better incident management and issue resolution.
QA process:
QA should be involved in all meetings: refinement, sprint planning, RCA meetings, and retrospectives.
A test management tool (e.g. Jira with Zephyr) should be used for sprint test execution.
A requirement traceability matrix should map requirements to test cases.
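At its simplest, a traceability matrix is a mapping from requirement IDs to test-case IDs, which also makes coverage gaps easy to spot. The IDs below are invented examples; in practice the mapping would live in Jira/Zephyr rather than in code.

```python
# Minimal sketch of a requirement traceability matrix as a plain mapping.
# Requirement and test-case IDs are illustrative examples only.
matrix = {
    "REQ-101": ["TC-001", "TC-002"],
    "REQ-102": ["TC-003"],
    "REQ-103": [],  # coverage gap: no test case mapped yet
}

# Requirements with no mapped test cases surface as coverage gaps.
uncovered = [req for req, tests in matrix.items() if not tests]
print(uncovered)
```

A report built from this mapping answers both "which tests cover requirement X?" and "which requirements have no coverage?", which is the core value of traceability.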
Test cases should be drafted before the sprint starts and reviewed by the QA team.
Final QA sign-off should come from the QA team.
Test cases should include detailed test steps, expected results, and actual results.
Provide clear visibility of what has been tested, with evidence (screenshots, video recordings); any related bugs should be linked to the Jira ticket (story/epic).
Run an RCA process for every production bug, and prevent recurrence by adding a covering test to the automation regression suite.
Defect management:
A defect backlog should be maintained and reduced with each sprint.
Every defect should include the following details:
- Steps to reproduce
- Evidence (screenshots, logs)
- Environment details
- Test data
- Expected and actual results
- Requirement reference
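The checklist above can be enforced before a defect is filed. The sketch below validates a defect dictionary against the required fields; the field names mirror the list and the sample defect is invented for illustration.

```python
# Sketch of validating that a defect report carries every detail listed
# above before it is filed. Field names and the sample defect are examples.
REQUIRED_FIELDS = {
    "steps_to_reproduce",
    "evidence",
    "environment",
    "test_data",
    "expected_result",
    "actual_result",
    "requirement_reference",
}

def missing_fields(defect: dict) -> set:
    # Report any required field that is absent or empty.
    return {f for f in REQUIRED_FIELDS if not defect.get(f)}

defect = {
    "steps_to_reproduce": "1. Open checkout 2. Apply coupon",
    "evidence": "screenshot.png",
    "environment": "staging / Chrome 126",
    "expected_result": "10% discount applied",
    "actual_result": "no discount applied",
}
print(sorted(missing_fields(defect)))  # fields still to be filled in
```

A check like this could sit in a Jira workflow validator or a filing script, so incomplete defects are bounced back before they enter the backlog.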