Zapp.ie Test Strategy

Document History

Review

| Reviewer | Reviewed Date | Comment |
| --- | --- | --- |
| Ben Weeks | 1st October 2024 | |
| Akash Jadhav | | |

Approval

| Approver | Approval Date | Status | Comment |
| --- | --- | --- | --- |
| Ben Weeks | | Pending | |

Introduction

Overview

The Test Strategy document provides a comprehensive overview of the testing approach and scope for this project. It outlines the testing processes, methodologies, constraints, and tools to ensure the quality and reliability of the project deliverables. It serves as a guiding framework for the entire testing phase and aligns the team's efforts with the project's goals and objectives.

Document Purpose

The purpose of this document is to provide a structured Test Strategy for the project. It outlines the testing approach, methodologies, and scope that will be employed to ensure the quality, reliability, and functionality of the project deliverables. It serves as a framework for communication among team members and stakeholders, guiding efforts towards delivering a high-quality product that meets user requirements and complies with industry standards.

Key Objectives

The Test Strategy covers the following areas:

  • Testing Guidelines: Establishes principles for testing activities throughout the project lifecycle.
  • Scope Definition: Defines specific features to be tested and areas that will not undergo formal testing.
  • Test Types and Techniques: Lists the test types and techniques for functional, regression, performance, and acceptance testing.
  • Risk Management: Outlines risk management strategies.
  • Resource Allocation: Details the personnel, tools, and environments required.
  • Legal and Compliance Considerations: Addresses legal and regulatory constraints.
  • Testing Artifacts and Deliverables: Defines deliverables such as test cases and bug reports.
  • Environments and Tools: Describes the test environments and tools required.
  • Continuous Improvement: Promotes feedback and lessons learned for future iterations.

Scope

This section of the Test Strategy outlines the extent and boundaries of the testing activities to be conducted for the project. It defines what aspects of the product will be tested, as well as what falls outside the testing scope. This section ensures that all stakeholders have a clear understanding of the areas covered and the limitations of the testing process.

By clearly defining the testing scope, this section ensures that the testing efforts are focused on the critical functionalities of the product while providing transparency on areas that are not part of the formal testing process. This helps streamline the testing process and allows stakeholders to align their expectations accordingly.

Testing Scope: Features to be tested

The following features will be thoroughly tested to ensure their functionality, reliability, and compliance with defined requirements:

| ID | Feature |
| --- | --- |
| 83586 | Enabler: Lightning Liquidity (Funding Source) |
| 89508 | React App - Total Zaps Sent Component |
| 89507 | React App - Leaderboard Component |
| 90545 | SSO Authentication |
| 89192 | React App - Rewards Carousel Component |
| 89522 | React App - Admin - LNBits Settings |
| 89551 | React App - Admin - Permissions |
| 89525 | React App - Allowance Component |
| 89982 | React App - Wallet - Send & Receive Payment |
| 89981 | React App - Wallet - Transaction History |
| 83588 | Bot - Notification - Zap Received |
| 83562 | Bot - Send Zap Pop-Up |
| 89513 | LNBits Extension - Allowance Schedule |

Testing Scope: Features not to be tested

The following aspects fall outside the testing scope and will not be subject to formal testing (primarily as these Features are enablers rather than functional features):

| ID | Feature |
| --- | --- |
| 83477 | Enabler: Project Instantiation |
| 83490 | Enabler: Application Lifecycle Management |
| 83517 | Enabler: Technical Solution Design |
| 83500 | Enabler: Test Strategy |
| 83586 | Enabler: Lightning Liquidity (Funding Source) |
| 83584 | Enabler: LNBits Setup |

Configuration Scope

The application is deployed for the Evo Labs team. The main user interface is the browser (latest Google Chrome, Edge, and Firefox) on a standard office equipment configuration; the application is also accessible on iOS and Android devices.

Devices

The following devices will be used for testing:

| Operating System | Hardware | Browsers |
| --- | --- | --- |
| Windows 11 | | Google Chrome |

For the avoidance of doubt, the solution will not be tested on mobile devices and is intended for desktop use in this release.

Types of Testing

Scope Decisions

| Test Type | Scope | Responsibility | Notes |
| --- | --- | --- | --- |
| Requirements Testing | Yes | Prepper | |
| Design Testing | Limited | Technical Team Lead | |
| Manual Functional Testing | Yes | Test Engineer | |
| Regression Testing | Yes | Test Engineer | |
| Security Testing | Limited | Test Engineer | |
| Code Quality Testing | Yes | Technical Team Lead | |
| Automated Functional Testing | No | | |
| Automated UI Testing | No | | |
| Automated Unit Testing | No | | |
| Integration Testing | No | | |
| Deployment Testing | Yes | Technical Team Lead | |
| Performance Testing | Limited | Test Engineer | Test basic performance issues |
| Load Testing | No | | Expected standard platform behaviour |
| Stress Testing | No | | Expected standard platform behaviour |
| UI/UX Testing | Yes | Test Engineer | |
| Chaos Monkey Testing | No | Test Engineer | |
| Acceptance Testing | Yes | Client | |
| Pull Request Review Testing | Yes | Test Engineer | |

Requirements Testing

Requirements Testing should be executed by the Prepper and Technical Team Lead with the Test Engineer:

  • Check alignment with applicable standards
  • Test requirements quality (verify the requirements backlog)
  • Test traceability between system requirements and business requirements
  • Test traceability between user stories and automated business processes

Design Testing

A single Design review will be performed once per project and should cover key design risks. Design Testing should be executed by one or more external Architects and Consultants and will cover:

  • Verify the components used (React apps, LNbits, Power Automate, Azure, etc.): their necessity, the alternatives reviewed, and the architecture decisions that justify their selection.
  • Verify licensing requirements for the solution.
  • Verify the integration schema (master data systems, formats, security implications).

Manual Functional Testing

Manual Functional Testing is conducted by the Test Engineer with the following approach:

  • Test Plan and Test Cases: Manual testing is performed according to the Test Plan, encompassing the Test Cases specified within it.
  • Test Case Definitions and Checklist: QA combines unique Test Case Definitions with a Checklist for standard test activities, ensuring comprehensive test coverage.
  • Constant Case Actualization: Test cases are regularly updated so that all manual steps reflect the latest system changes and support requirements, keeping every test case accurate and up to date.
  • Test Execution on Completed User Stories: Tests are executed on completed User Stories, tracking the progress using Test Points to mark completed testing activities.
  • Re-Testing for Reopened Issues: If a developer reopens a bug or enhancement, Test engineers must re-test the related Test Cases to validate the fixes or changes.
  • Problem Reporting: Any issues discovered during testing are documented and linked to relevant Bugs or Enhancements for proper tracking and resolution.

Regression Testing

Regression tests will be performed by the Test Engineer to ensure that any new changes or updates to the system do not negatively impact existing functionalities. The regression testing process includes:

  • Reusing existing test material to validate the system against known test cases and scenarios.
  • Extensively automating functional regression tests (where applicable) to efficiently validate repetitive test cases.
  • Including regression testing as part of the next test plan, with a specific test suite dedicated to highlighting regression tests.

The regression testing needs will be determined based on several factors, including:

  • Test case bug statistics, such as the number of bugs and reopened issues, to identify areas that may require additional scrutiny.
  • Checking test cases against related platform thresholds or throttles, such as security and size limits, to ensure compliance.
  • Addressing any specific risks associated with the User Story or Feature being developed.
  • Taking into consideration known platform stability issues (known issues) that could impact the application.
  • Considering the priority and severity of test cases to prioritize critical areas during regression testing.
  • Factoring in decisions made by the Product Owner regarding features and changes.

By conducting regression testing, the project ensures that updates and enhancements are implemented without introducing unintended consequences to the existing functionality, maintaining the overall quality and stability of the system.

Security Testing

Security testing will be performed by the Test Engineer to confirm that each role has the appropriate and correct access levels. Security testing is aimed at identifying and verifying the effectiveness of security control. It is essential to ensure that applications and data are protected from unauthorized access, data breaches, and other potential security risks. The testing process includes:

Role-based access control is properly implemented:

  • Verify that different roles (e.g., administrators, users, guests) have assigned specific access rights based on their responsibilities and privileges.
  • Ensure that only authorized users can access certain functionalities or data within the application.

Data access restrictions are enforced:

  • Confirm that users can only access the data they are authorized to view, edit, or delete based on their roles and permissions.

It should be noted that we assume Microsoft has performed extensive security testing of the underlying platform; platform security is therefore out of the scope of our testing, which will focus on users' roles and permissions.
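
As a minimal sketch of how such role checks might be exercised, assuming a simple role-to-permission mapping (the role names, permissions, and `can` helper below are illustrative assumptions, not the application's actual API):

```typescript
// Hypothetical role/permission model for illustration only;
// the application's real roles and permissions may differ.
type Role = "Admin" | "Teammate" | "Guest";
type Permission =
  | "viewLeaderboard"
  | "sendZap"
  | "editLNBitsSettings"
  | "managePermissions";

const rolePermissions: Record<Role, Permission[]> = {
  Admin: ["viewLeaderboard", "sendZap", "editLNBitsSettings", "managePermissions"],
  Teammate: ["viewLeaderboard", "sendZap"],
  Guest: ["viewLeaderboard"],
};

// Returns true if the given role holds the given permission.
function can(role: Role, permission: Permission): boolean {
  return rolePermissions[role].includes(permission);
}

// Assertions a role-based access test case might make:
console.assert(can("Admin", "managePermissions"), "Admin should manage permissions");
console.assert(!can("Teammate", "editLNBitsSettings"), "Teammate must not edit LNBits settings");
console.assert(!can("Guest", "sendZap"), "Guest must not send zaps");
```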

Code Quality Testing

Code quality testing involves evaluating the quality and efficiency of the underlying code and logic used to build applications and solutions within the platform. Code quality testing ensures that the custom code adheres to best practices, is maintainable, and performs optimally.

Code quality testing will be completed by the Technical Team Lead and will be facilitated using pull requests (PR) in our version control system. While pull requests are primarily associated with code reviews, they can also support code quality testing by enabling collaboration, validation, and improvement of the codebase. The testing process includes code reviews by PR:

  • Pull requests serve as a formal mechanism for submitting code changes to the main codebase for review by other developers.
  • Reviewers can assess the code for readability, maintainability, adherence to coding standards, and overall quality.

Using pull requests for code quality testing helps maintain a clean and robust codebase by catching potential issues early in the development process. It promotes collaboration among developers, fosters continuous improvement, and enforces code quality standards within the development team.

Integration Testing

Features that require testing of integrated systems will be tested by the Test Engineer in the Integration environment after a successful deployment to that environment. Specific steps for testing the integrations will be highlighted in the relevant test cases.

It should be noted that the Test Engineer will perform "smoke testing" of deployments to the Integration environment in addition to the tests for the Integration Features.

Deployment Testing

Deployment Testing is a critical phase in the software release process, focusing on ensuring a seamless and reliable deployment of the components to the production environment. This testing phase involves a series of carefully designed tests and validations to verify that the deployment is successful and that the solution functions as expected in the live environment.

The key activities in Deployment Testing include:

  • Testing in the Testing Environment: The solution will be thoroughly tested in a test environment that mirrors the production environment as closely as possible. This ensures that the solution operates correctly in an environment that closely resembles the live setup.
  • Performance and Behaviour Monitoring: During deployment, the performance and behaviour of the solution will be continuously monitored to identify any anomalies or issues. This real-time monitoring helps detect potential problems early on and allows for prompt remediation.
  • Default Values/Settings Testing: Default values and settings of the solution will be rigorously tested to ensure that it functions optimally with the initial configurations. This helps verify that the solution behaves as intended right from the start.
  • Users Setup Testing: The setup of users, including enabled users, licenses, and security roles, will be tested to ensure that the solution is accessible and usable by the designated users with the appropriate permissions.

For deployments to production, a screen-sharing session will be scheduled to confirm the solution was successfully deployed.

By conducting thorough Deployment Testing, the project team can have confidence in the successful deployment of the solution to the production environment. This testing phase plays a pivotal role in minimizing the risk of disruptions and ensuring a smooth transition to the live environment, ultimately providing a positive experience for end-users and stakeholders.
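
A minimal smoke-test sketch of the kind the Test Engineer might run after a deployment, assuming a Playwright setup; the base URL, title pattern, and button label below are placeholders rather than the real deployment values:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical Test environment URL; replace with the actual deployment target.
const BASE_URL = 'https://zappie-test.example.com';

test('deployment smoke test: app loads and sign-in is reachable', async ({ page }) => {
  const response = await page.goto(BASE_URL);
  expect(response?.ok()).toBeTruthy(); // the app responds without a server error
  await expect(page).toHaveTitle(/zapp/i); // the landing page renders (placeholder title check)
  await expect(page.getByRole('button', { name: /sign in/i })).toBeVisible(); // SSO entry point is present
});
```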

Performance Testing

In the context of a hosted platform where extensive load testing is restricted by Microsoft's terms of service, Performance Testing will be conducted in a limited manner, focusing on specific aspects that impact the overall performance of the solution. The objective is to ensure optimal functioning and efficiency within the given constraints.

The limited Performance Testing activities include:

  • Client Connection Verification: The client's network interface and current speed to the target domain will be verified to ensure stable and reliable connections, minimizing potential bottlenecks.
  • Optimized Client Machines and Software Settings: The client machines and software settings will be checked and optimized for performance to ensure smooth interactions with the solution.
  • Efficiency of Caching: The efficiency of caching mechanisms will be assessed to determine how effectively the solution utilizes caching to enhance performance and reduce processing overhead.

By focusing on these specific performance aspects, limited Performance Testing aims to ensure that the solution operates efficiently and optimally within the platform's restrictions. The insights gained from this testing will inform optimization efforts and contribute to delivering a reliable and responsive solution for the project.
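
As an illustration of the first and third activities, a lightweight probe (Node 18+ with the global `fetch`; the target URL is a placeholder) can record response time and inspect cache headers:

```typescript
// Minimal latency and caching probe; illustrative only.
const TARGET = 'https://zappie-test.example.com';

async function probe(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // force the full body to download
  const elapsed = performance.now() - start;

  console.log(`${url} -> HTTP ${res.status} in ${elapsed.toFixed(0)} ms`);
  // A Cache-Control header is a rough proxy for how well responses can be cached.
  console.log(`cache-control: ${res.headers.get('cache-control') ?? '(none)'}`);
}

probe(TARGET).catch(console.error);
```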

Chaos Monkey Testing

Chaos Monkey testing is a type of software testing that involves intentionally introducing failures into a system to observe how well it can withstand and recover from them. The goal is to ensure that the system is resilient, fault-tolerant, and capable of handling unexpected disruptions without significant downtime or loss of functionality.

Key Aspects of Chaos Monkey testing:

  • Random Failure Injection: Chaos Monkey randomly disables or terminates instances, services, or other components within a system to see how it responds.
  • Monitoring and Observing: After injecting a failure, the team monitors the system's behaviour to ensure that it can recover gracefully. This might involve triggering fallback mechanisms, rerouting traffic, or automatically spinning up new instances.
  • Learning and Improving: The insights gained from Chaos Monkey testing are used to improve the system’s design and implementation, making it more robust and resilient.

Benefits:

  • Increased Resilience: By regularly subjecting a system to failures, organizations can identify and fix potential issues before they cause real problems.
  • Preparedness: Teams become better prepared for real-world outages or disruptions, as they have already practiced responding to similar situations.
  • Confidence in Deployments: Chaos Monkey helps build confidence in the stability of new deployments by ensuring they can handle failures from the start.

Chaos Monkey testing is a proactive approach to identifying and mitigating risks in a system by intentionally causing failures. It ensures that systems are built to be resilient, minimizing the impact of unexpected issues in production environments.
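
Although Chaos Monkey Testing is out of scope for this release (see Scope Decisions), the random-failure-injection idea can be sketched as follows; the component names are hypothetical:

```typescript
// Illustrative only: randomly "disable" one component and observe how the rest behave.
const services = ['react-app', 'bot', 'lnbits']; // hypothetical component names

function injectRandomFailure(): string {
  const victim = services[Math.floor(Math.random() * services.length)];
  console.log(`Chaos: disabling ${victim}; now monitor fallbacks, rerouting, and recovery`);
  return victim; // in a real run, this would trigger an actual shutdown of the component
}

injectRandomFailure();
```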

Acceptance Testing

Acceptance Testing is a dynamic and collaborative phase in the software development process, where the client and Test Engineer work closely to verify the solution's readiness for production deployment. Iterative deployments to production, such as feature-by-feature or week-by-week releases, are a key aspect of our process, ensuring incremental improvements and timely feedback incorporation.

Key points about Acceptance Testing:

  • Client's Integration Environment: The client conducts Acceptance Testing in their dedicated integration environment. This environment allows the client to assess the solution's behaviour within their specific setup and test real-world scenarios. The client will be informed once a deployment has been made from our Test environment to their Integration environment and the solution has satisfactorily passed initial integration tests and “smoke tests” by the Test Engineer.
  • Collaborative Testing with Added Domain Knowledge: During Acceptance Testing, the Test Engineer collaborates closely with the client, leveraging their domain knowledge about the business. These testing sessions actively involve end users from the client's organization, providing valuable insights into specific business requirements and user expectations. The Test Engineer's expertise guides the testing process, while the client's and end users' feedback and domain insights help identify areas for improvement. This collaborative approach ensures that the solution aligns with the client's specific business needs, delivering a valuable and tailored solution that meets their expectations.
  • Iterative Deployments to Production: The solution is not confined to one large deployment at the end of the Epic. Instead, iterative deployments, such as feature-by-feature or week-by-week releases, are made to production. This approach allows for incremental improvements and continuous validation of new functionalities.
  • Client Decision-Making Authority: The client retains the ultimate decision-making authority for production deployment. They assess the solution's performance, functionality, and stability during Acceptance Testing and decide whether it meets their expectations and criteria for each iterative release. It is an expectation that a release will be approved for deployment to Production if it contains no critical or major bugs.
  • Continuous Improvement: The Acceptance Testing process is iterative, allowing for ongoing refinements based on client feedback. This collaborative approach ensures that the solution continually evolves to meet changing needs and expectations.

By involving the client in iterative Acceptance Testing within their integration environment and emphasizing the role of the Test Engineer, the project team fosters transparency, client satisfaction, and a seamless transition to production. This collaborative and iterative approach ensures that the software consistently delivers value and high-quality performance throughout its development journey.

Pull Request Review Testing

Pull Request (PR) review testing is a critical step in the development process where changes are carefully examined before they are merged into the main codebase. During a PR review, we check for bugs and enhancements, readability, performance, and alignment with project requirements. The goal is to ensure that the proposed changes are of high quality and do not introduce regressions or vulnerabilities into the project before final approval and merging.

The key aspects of the PR testing:

  • Code Review and Validation: This highlights the focus on both reviewing the code quality and validating its functionality and alignment with the project goals.
  • Code Quality Assurance: This emphasizes the assurance of code quality, encompassing the review and validation processes. It highlights the responsibility to ensure code meets the required standards.
  • PR Validation: Validation covers correctness, style, and adherence to project requirements, and involves validating the entire change set.
  • Merge Readiness Review: This process checks if the pull request is ready to be merged into the main branch, ensuring that it has passed all necessary checks.
  • Peer Review and Verification: This combines the collaborative aspect of peer code review with the verification of functionality through testing.

Regression Testing Matrix

We created a Regression Testing Matrix to provide a clear and concise way to understand the dependencies between features in the solution. This helps in identifying which features need to be retested when changes are made to specific parts of the application.

| Feature ID | Feature Title | Dependent Feature(s) |
| --- | --- | --- |
| 89508 | React App - Total Zaps Sent Component | React App - Leaderboard Component, React App - Wallet - Send & Receive Payment |
| 89507 | React App - Leaderboard Component | React App - Total Zaps Sent Component, React App - Wallet - Send & Receive Payment |
| 90545 | SSO Authentication | |
| 89192 | React App - Rewards Carousel Component | React App - Leaderboard Component, React App - Total Zaps Sent Component |
| 89522 | React App - Admin - LNBits Settings | React App - Admin - Permissions |
| 89551 | React App - Admin - Permissions | React App - Admin - LNBits Settings |
| 89525 | React App - Allowance Component | React App - Leaderboard Component, React App - Total Zaps Sent Component, React App - Wallet - Send & Receive Payment |
| 89982 | React App - Wallet - Send & Receive Payment | React App - Leaderboard Component, React App - Total Zaps Sent Component |
| 89981 | React App - Wallet - Transaction History | React App - Leaderboard Component, React App - Total Zaps Sent Component, React App - Wallet - Send & Receive Payment |
| 83588 | Bot - Notification - Zap Received | React App - Leaderboard Component, React App - Total Zaps Sent Component, React App - Wallet - Send & Receive Payment |
| 83562 | Bot - Send Zap Pop-Up | React App - Leaderboard Component, React App - Total Zaps Sent Component |
| 89513 | LNBits Extension - Allowance Schedule | React App - Allowance Component |
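
A minimal sketch of how this matrix can drive retest selection; the feature IDs are taken from the table above, and the transitive expansion is our assumption about how the matrix would be applied:

```typescript
// Dependents per feature, keyed by feature ID, from the Regression Testing Matrix above.
const dependents: Record<string, string[]> = {
  '89508': ['89507', '89982'],
  '89507': ['89508', '89982'],
  '90545': [],
  '89192': ['89507', '89508'],
  '89522': ['89551'],
  '89551': ['89522'],
  '89525': ['89507', '89508', '89982'],
  '89982': ['89507', '89508'],
  '89981': ['89507', '89508', '89982'],
  '83588': ['89507', '89508', '89982'],
  '83562': ['89507', '89508'],
  '89513': ['89525'],
};

// Features to retest when the given features change (transitive and cycle-safe).
function retestSet(changed: string[]): Set<string> {
  const result = new Set<string>(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const next = queue.pop()!;
    for (const dep of dependents[next] ?? []) {
      if (!result.has(dep)) {
        result.add(dep);
        queue.push(dep);
      }
    }
  }
  return result;
}

// A change to the Allowance Schedule (89513) pulls in 89525, 89507, 89508, and 89982.
console.log(retestSet(['89513']));
```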

Testing Schedule

| Session | Date | Persons | Features |
| --- | --- | --- | --- |
| UAT session I | 25th September | Ben Weeks | Feature 90545: SSO Authentication; Feature 89507: React App - Leaderboard Component; Feature 89508: React App - Total Zaps Sent Component; Feature 89192: React App - Rewards Carousel Component |
| UAT session II | 2nd October | Ben Weeks | Feature 89981: React App - Wallet - Transaction History; Feature 89982: React App - Wallet - Send & Receive Payment; Feature 89525: React App - Allowance Component |

Constraints and Rules

This section outlines the constraints and rules that govern the testing process for the project. By addressing these constraints and rules in the Test Strategy, the testing process will be well-aligned with project goals, priorities, and legal requirements. This ensures that the testing efforts are focused on critical areas, delivering a high-quality product that meets user expectations and complies with relevant regulations.

Project Constraints

  • Time Constraints: The testing activities will be aligned with the project's planned sprint iterations. That is, the Features in the current Sprint will also be tested in that Sprint or Sprints. Test efforts will be conducted within the designated timeframes to ensure timely delivery of high-quality software.
  • Cost Constraints: The testing activities will be managed and executed in accordance with the defined test budget, ensuring efficient resource allocation and cost-effectiveness.

Project Priorities

QA priorities will be closely aligned with the priority of User Stories. Testing efforts will be directed toward high-priority User Stories to address critical functionalities first and ensure that the most important features are thoroughly validated.

Test Coverage

All backlog items will be covered in the testing process. Test scenarios and cases will be designed to encompass all identified backlog items, ensuring comprehensive coverage of the project's requirements and functionalities.

Test coverage will be controlled by work item traces, establishing traceability between test cases and corresponding requirements or User Stories.

Legal Constraints

GDPR Compliance: The testing process will adhere to all applicable General Data Protection Regulation (GDPR) rules and guidelines to ensure the protection of user data and maintain regulatory compliance.

Resource Constraints

The availability of testing resources, including personnel, hardware, and software tools, will be considered during the planning and execution of testing activities. Efforts will be made to optimize resource utilization for efficient testing.

Risk Management

Risk-based testing strategies will be employed to identify and prioritize critical areas for testing based on potential impact and probability of occurrence.

QA Process

The QA Process is a vital phase in solution delivery that ensures the quality, reliability, and functionality of the deliverable. Through rigorous testing and evaluation, it identifies and addresses issues, leading to a high-quality and user-friendly solution that is of value to the client. Our Test Engineers collaborate to plan, design, and execute tests, driving continuous improvement and delivering a high-value solution at a high cadence.

Inputs

General inputs

The following general inputs are essential for initiating the QA process:

  • Initial Project Documentation: The Epic proposal, including the objective and lean business case, technical solution, and delivery plans, will be reviewed to gain a comprehensive understanding of the Epic's objectives and scope.
  • Initial Backlog: The initial backlog of User Stories and Features will serve as a foundation for developing test scenarios and test cases aligned with the Epic's functionalities.
  • Applicable Legal and Internal Rules and Constraints: Any legal requirements or internal rules that pertain to the project will be considered during the QA process to ensure compliance and adherence to industry standards.

Data inputs

During the QA process, test data will be inserted into the solution to test the performance of the system.

UX related inputs

The following UX-related inputs will be considered during the QA process:

  • UX Standards/Guidelines: UX standards and guidelines, if provided, will be considered to assess the application's user interface and user experience against established best practices.
  • Brand Definition / Brand Books: The brand definition and brand books will be referred to when evaluating the application's visual elements and ensuring adherence to the brand's identity and guidelines.

Process

| Process | Scope | Approach |
| --- | --- | --- |
| Strategy Definition | Yes | Define the Test Strategy (this document) |
| Test Design | Yes | Test Cases based on the initial requirements and on the DevOps backlog populated by the Product Owner |
| Test Planning | Yes | Iteration-based: test cases are selected based on the current iteration scope; regression selection is based on the Regression mark from the previous iteration's scope |
| Test Execution | Yes | Manual test execution according to the plan |
| Bug & Enhancement Tracking | Yes | Bugs and Enhancements are tracked as separate records and must be subordinated to User Stories |
| QA Reporting | Yes | Testing Dashboard |

Roles and Responsibilities

| Role | Inputs from Test Engineer | Outputs for Test Engineer |
| --- | --- | --- |
| Product Owner | Test Strategy | Testing Constraints, Priorities |
| Stakeholders | Test Strategy | Bugs, Enhancements, Requirements Clarification |
| Client Testers | Test Strategy, Test Plan | Bugs, Enhancements |
| Project Manager / Scrum Master | Test Strategy, Estimation, Test Plan, Backlog | Testing Constraints, Priorities, Resource Allocation |
| Business Analyst | Test Strategy | Stakeholder Profiles, Business Processes, Business Cases, Business Requirements |
| Architect | Test Strategy | Environment Specification |
| Functional Consultant | Test Strategy | Detailed Solution Design, Fit & Gap Analysis |
| Developer | Test Strategy, Bugs, Enhancements | Resolutions |
| DevOps Engineer | Test Strategy | Release Pipeline, Accounts |

Deliverables

  • Test Strategy – this document
  • Testing Dashboard – a DevOps dashboard giving an overview of QA
  • Test Plans – per iteration, in Azure DevOps, with execution reports
  • Test Cases – work items in Azure DevOps

Acceptance Criteria

The acceptance criteria for the project in the Test Strategy are as follows:

Bugs and Enhancements resolution:

  • All critical and major issues identified during testing will be addressed and fixed before release to ensure a high-quality and reliable solution.
  • Clients can prioritize up to three bugs or enhancements per Sprint using the "Priority Override" tag. These flagged bugs/enhancements will be treated with the same priority as Critical or Major items, ensuring prompt attention and resolution.
  • At least 75% of minor bugs will be resolved, enhancing the overall user experience and system performance.
  • A minimum of 25% of cosmetic bugs will be fixed to improve the system's visual appeal and user interface.

In addition to the above, the following criteria must also be met for acceptance:

  • All tests have been executed, and test results have been documented.
  • Successful deployment to Test and Integration (if applicable) environments has been completed without critical issues.

By adhering to these acceptance criteria, the project aims to deliver a robust and efficient solution that meets user requirements and ensures high accuracy in data extraction while addressing and resolving identified issues to deliver a reliable and seamless user experience.
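
A minimal sketch of the release gate these criteria imply; the bug counts are illustrative inputs, and for brevity Priority Override items are not folded into the critical/major buckets here:

```typescript
interface BugCounts {
  open: number;
  resolved: number;
}

// True when the release meets the acceptance thresholds defined above.
function releaseAccepted(
  critical: BugCounts,
  major: BugCounts,
  minor: BugCounts,
  cosmetic: BugCounts,
): boolean {
  const allCriticalAndMajorFixed = critical.open === 0 && major.open === 0;
  const minorRate = minor.resolved / Math.max(1, minor.open + minor.resolved);
  const cosmeticRate = cosmetic.resolved / Math.max(1, cosmetic.open + cosmetic.resolved);
  return allCriticalAndMajorFixed && minorRate >= 0.75 && cosmeticRate >= 0.25;
}

console.log(
  releaseAccepted(
    { open: 0, resolved: 3 }, // critical: all fixed
    { open: 0, resolved: 5 }, // major: all fixed
    { open: 2, resolved: 8 }, // minor: 80% resolved, meets the 75% bar
    { open: 3, resolved: 1 }, // cosmetic: 25% resolved, meets the 25% bar
  ),
); // -> true
```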

QA Model

Artifacts Overview

| Artifact | Approach |
| --- | --- |
| Test Plan | Created per iteration |
| Test Suites | Created inside the test plan |
| Test Cases | Linked to user stories |
| Shared Steps | Applied to test cases with repetitive steps |
| Test Parameters | Used as test data |
| Test Runs | Test executions |
| Bugs | Linked to user stories |
| Enhancements | Linked to user stories |

Tags

The following DevOps tags will be used to classify items for the testing process:

| Tag | Applies To | Description |
| --- | --- | --- |
| UI | Bug or Enhancement | Identifies issues or enhancements related to user interface, layout, design, and usability. |
| Data | Bug or Enhancement | Focuses on data handling, validation, storage, or retrieval within the application. |
| Functionality | Bug or Enhancement | Relates to bugs or enhancements in the core functionalities of the software. |
| Performance | Bug or Enhancement | Highlights bugs or enhancements regarding speed, responsiveness, or overall performance. |
| Compatibility | Bug or Enhancement | Deals with compatibility issues or enhancements across devices, browsers, or operating systems. |
| Security | Bug or Enhancement | Concerns security vulnerabilities or measures within the application. |
| Documentation | Bug or Enhancement | Refers to issues or enhancements in software documentation, such as user manuals or technical guides. |
| Localization | Bug or Enhancement | Covers bugs or enhancements related to localization and internationalization. |
| Integration | Bug or Enhancement | Focuses on issues or enhancements related to external system or service integration. |
| Accessibility | Bug or Enhancement | Addresses accessibility issues or enhancements for users with disabilities. |
| Acceptance Testing | Bug or Enhancement | Bugs or enhancements raised by the client during acceptance testing. |
| Priority Override | Bug or Enhancement | Non-critical items prioritized by the client to be treated with higher importance. |

Bugs vs Enhancements vs Features

In the QA Model, we differentiate between Bugs, Enhancements, and Features based on their characteristics and impact on the system:

| Type | Definition |
| --- | --- |
| Bug | A bug is a defect in the system that causes a User Story to fail its acceptance criteria. Bugs directly impact the functionality and usability of the system. When a User Story does not meet its specified acceptance criteria due to a defect, it is considered a bug and must be addressed and resolved before release. |
| Enhancement | An enhancement is similar to a bug but does not cause a failure in the acceptance criteria of a User Story. Enhancements are typically improvements or optimizations to existing functionalities, often categorized as Minor or Cosmetic in severity. They aim to enhance the overall user experience or system performance. NB: If an Enhancement requires an effort exceeding 4 hours, it will typically be redefined as a Feature. |
| Feature | A Feature involves a set of User Stories (requirements) that introduce new functionalities or significant improvements to the system. New Feature requests are scheduled for subsequent Sprints or phases of the project to accommodate their complexity and scope. |

By classifying and handling Bugs, Enhancements, and Features accordingly, the team ensures that critical items are addressed promptly, while improvements and new functionalities are effectively managed and planned to enhance the overall system over time. This approach allows us to maintain a high-quality and stable system throughout the project lifecycle.

NB: Issues referenced in the project are project-level issues, as opposed to Bugs or Enhancements, and are therefore outside the scope of this document.
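
As a minimal illustration of these classification rules (the data shape is an assumption; the 4-hour threshold comes from the table above):

```typescript
type WorkItemType = 'Bug' | 'Enhancement' | 'Feature';

// Classify an item per the definitions above.
function classify(failsAcceptanceCriteria: boolean, estimatedHours: number): WorkItemType {
  if (failsAcceptanceCriteria) return 'Bug'; // a defect breaking a User Story's acceptance criteria
  if (estimatedHours > 4) return 'Feature'; // enhancements over 4 hours are redefined as Features
  return 'Enhancement'; // a small improvement or optimization
}

console.log(classify(true, 1));  // 'Bug'
console.log(classify(false, 2)); // 'Enhancement'
console.log(classify(false, 6)); // 'Feature'
```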

Bugs and Enhancements Model

This section outlines our approach to managing and resolving bugs and enhancements throughout the development lifecycle. It provides guidelines for tracking, prioritizing, and addressing issues, ensuring a clear focus on delivering a high-quality product. Effective collaboration and efficient issue resolution contribute to an exceptional deliverable that meets client expectations and requirements.

Tracking Rules

To maintain a structured and efficient bug and enhancement tracking process, the following rules will be adhered to:

  1. Linkage to User Stories: Bugs or Enhancements must be linked to the relevant User Story as child items, ensuring clear traceability and alignment with project requirements.
  2. Steps to Reproduce: Bugs must include detailed steps to reproduce. This information is crucial for the development team to accurately identify and resolve the problem.
  3. Log Files (if available): Bugs or Enhancements should include log files, where applicable, to provide additional diagnostic information that aids in understanding and resolving the issue.
  4. Screen Capture (where applicable): Whenever possible, include screen captures that visually demonstrate the encountered bug or enhancement. These visuals assist the development team in quickly grasping the nature of the issue.
  5. Customer Priority Override: Clients have the option to flag up to three Bugs or Enhancements per sprint with the "Priority Override" tag (more detail in the next section). This enables customers to highlight specific bugs or enhancements they consider critical or major, regardless of their initial classification by the Test Engineer. These flagged items will receive immediate attention and priority for resolution, ensuring customer-driven focus and satisfaction.

By following these bug-tracking rules, we ensure a comprehensive and effective approach to managing bugs and enhancements throughout the development process. These guidelines facilitate clear communication, streamlined issue resolution, and a high level of customer engagement in the project's success.
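
The required fields above can be pictured as a minimal work item shape; the field names are hypothetical and do not reflect the actual Azure DevOps schema:

```typescript
// Illustrative shape of a well-formed bug report under the tracking rules above.
interface BugReport {
  parentUserStoryId: number;  // rule 1: linked to a User Story
  stepsToReproduce: string[]; // rule 2: required for bugs
  logFiles?: string[];        // rule 3: attached where available
  screenshots?: string[];     // rule 4: attached where applicable
  priorityOverride?: boolean; // rule 5: at most three flagged items per sprint
}

const example: BugReport = {
  parentUserStoryId: 89507,
  stepsToReproduce: ['Open the Leaderboard', 'Send a zap', 'Observe that the total does not update'],
  priorityOverride: true,
};
console.log(example);
```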

Priority Override

To empower customers and address their specific concerns, we introduce the concept of “Priority Override”. As stated, this initiative allows customers to flag certain bugs or enhancements as critical or major, even if they are initially classified as minor or cosmetic by the development team. The objective is to ensure that the customer's priorities are addressed promptly, resulting in a highly satisfactory user experience.

Key points about “Priority Override”:

  1. Priority Override Option: Customers can flag up to three bugs or enhancements per sprint, regardless of their original classification. These flagged items receive immediate attention and become the primary focus for resolution.
  2. Enhancing User Satisfaction: By giving customers the ability to directly influence priority, we acknowledge their unique perspective and understanding of their business needs. This results in a solution that better aligns with their requirements and enhances overall user satisfaction.
  3. Rapid Resolution: Customer-flagged items are treated as top priorities, ensuring swift resolution and deployment of bug fixes or enhancements. This approach demonstrates our commitment to delivering a solution that fully addresses customer expectations.
  4. Balancing Priority: While customer feedback is crucial, the development team also maintains a balanced approach to prioritize other critical defects impacting the project. This ensures that the overall stability and quality of the solution are not compromised.

By implementing the Priority Override process, we reinforce our customer-centric approach, fostering collaboration, and enabling seamless integration of customer feedback into the development process. This initiative significantly contributes to the success of the project and the creation of a product that aligns precisely with the customer's vision and requirements.

Management

To ensure effective management and resolution of bugs and enhancements, the following guidelines will be followed:

  1. Bug/Enhancement Analysis: Analysing bugs is a crucial step in the testing process. Proper bug analysis helps identify the root causes, understand their impact, and facilitate effective bug and enhancement resolution. Bugs and Enhancements will be reviewed by the Test Engineer and Prepper to understand the reported symptoms, steps to reproduce the problem, and any additional information provided by the reporter.
  2. Work Estimation: Bugs and Enhancements will be estimated in a manner similar to tasks, with time allocated in hours for each item. The work estimate will be populated by the respective Engineer.
  3. Severity Designation: The Severity designation for each Bug and Enhancement will be decided by the Test Engineer responsible for the testing process. The Test Engineer will carefully assess the impact on the overall system and user experience, considering factors such as functionality, performance, and usability. This standardized approach to Severity designation enhances communication within the team and enables focused resolution efforts, ultimately contributing to a more efficient and effective development process.
  4. Assignment to Sprint Iteration: Bugs and Enhancements must be assigned to the relevant Sprint iteration to be addressed by Engineers during that Sprint. Bugs and Enhancements will be assigned to the current Sprint if capacity allows, with Critical, Major, and Priority Override items taking precedence.
  5. Engineer Assignment: Bugs and Enhancements must be assigned to the respective Engineer responsible for fixing them. If a Bug is assigned to the Test Engineer, it indicates that the Bug is under clarification or review and is not yet ready to be fixed.
  6. Prepper Involvement: For complex Bugs that require substantial reworking of a User Story, the Prepper may be involved in the resolution process.
  7. Closure by Test Engineer: Only the Test Engineer is authorized to close Bugs and Enhancements once they have been resolved and verified.
  8. Priority Override Handling: Bugs or Enhancements flagged as "Priority Override" will be treated with the same priority as Critical or Major items, ensuring prompt attention and resolution.
  9. Enhancements with Spelling or Grammatical Errors: Enhancements primarily related to spelling or grammatical errors, which do not impact core functionality or user experience, will not incur resolution time billing. The development team will address these minor items as part of their routine review and improvement process, maintaining a high level of product quality without additional costs for minor text-related enhancements.
  10. Enhancements Over 4 Hours: If an Enhancement is estimated to take over 4 hours to implement, it should be re-evaluated and converted into a Feature. Features require detailed User Stories and planning, and they may be scheduled for subsequent Sprints or phases of the project. This approach ensures that the current sprint remains focused on achievable tasks and prevents scope creep, allowing for more effective planning and execution.

By adhering to these guidelines, we promote a systematic and collaborative approach to managing bugs and enhancements, ensuring timely resolution and an exceptional level of product quality.

Lifecycle

| Status | Description |
| --- | --- |
| New | Assignment rules: a new Bug or Enhancement created by the Test Engineer may be assigned to the Engineer responsible for the related User Story. If a Bug or Enhancement has been created by an Engineer, it should be assigned to the Test Engineer. If there is no linked User Story, it must be assigned to the Product Owner / Scrum Master. |
| Active | The item is currently being fixed by an Engineer. The Engineer should mark the item Active as soon as work starts. |
| Resolved | Only the responsible Engineer should mark a Bug as Resolved. Before doing so, the Engineer must test at least once to confirm that the Bug is fixed and no obvious side effects appear: the Engineer should attempt to reproduce the Bug and look for further unexpected behaviour. If problems remain, the Bug should be reactivated rather than marked as Resolved. |
| Closed | Only the Test Engineer should close Bugs or Enhancements. Before closing an item, the Test Engineer verifies it by attempting to reproduce the Bug or Enhancement and looking for further unexpected behaviour. If necessary, the Test Engineer reactivates the item. |
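
The lifecycle above implies a small set of allowed status transitions, sketched here; the reopening paths are our reading of the reactivation rules:

```typescript
type Status = 'New' | 'Active' | 'Resolved' | 'Closed';

// Allowed transitions implied by the lifecycle table above (illustrative).
const allowed: Record<Status, Status[]> = {
  New: ['Active'],
  Active: ['Resolved'],
  Resolved: ['Closed', 'Active'], // Test Engineer closes, or reactivates if verification fails
  Closed: ['Active'],             // reopened if the issue resurfaces
};

function canTransition(from: Status, to: Status): boolean {
  return allowed[from].includes(to);
}

console.assert(canTransition('Resolved', 'Closed'));
console.assert(!canTransition('New', 'Closed'), 'an item cannot jump straight from New to Closed');
```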

Closing Reasons

| Reason | Description |
| --- | --- |
| Fixed | The bug is fixed. |
| Deferred | The fix is deferred until the next product release. |
| Duplicate | The bug has already been reported; link the two bugs with the Duplicate/Duplicate link type and close one of them. |
| As Designed | The feature works as designed. |
| Cannot Reproduce | The Test Engineer cannot reproduce the bug or enhancement (and ideally can show it no longer exists). |
| Obsolete | The feature related to the bug is no longer in the product. |

Bug Severity Qualification

| Severity | Description |
| --- | --- |
| Critical | Indicates a complete shutdown of the process or application; nothing can proceed further. An example of a critical defect is an application returning a server error message after a login attempt. |
| Major | Affects key functionality of an application or process; the application does not behave in line with the requirements. For example, an email provider does not allow adding more than one email address to the recipient field. |
| Minor | A minor function does not behave as expected or as outlined in the requirements. An example is a broken link. |
| Cosmetic | Primarily related to the application UI. For example, a typo or a button in the wrong colour. |

Test Plan Composition

The "Test Plan" composition is structured using the following approaches:

  1. Iteration-based: The Test Plan is organized around project iterations or sprints. Each iteration focuses on specific development and testing activities, ensuring that features are thoroughly tested within a defined time frame.
  2. Area-based: The Test Plan is categorized based on different areas or modules of the application. This approach allows for targeted testing of individual components or functionalities, ensuring comprehensive coverage of the entire system.
  3. Composed by Test Suites: The Test Plan is composed of various Test Suites, each containing a set of related test cases. These Test Suites are designed to efficiently manage and execute specific types of testing activities.
  4. Backlog-based Feature-based Test Suites: The Test Plan aligns with the backlog and is organized into Feature-based Test Suites. Each suite corresponds to specific backlog items (features or user stories) and is designed to thoroughly test the functionality and acceptance criteria associated with those items.

By adopting these different approaches in the Test Plan composition, the testing process becomes more structured, organized, and targeted, allowing for efficient testing and delivery of a high-quality product.

Environments

The Environments section of the Test Strategy focuses on the setup and configuration of different project environments to facilitate thorough testing and validation of the software solution. These environments play a crucial role in simulating real-world scenarios and ensuring that the application functions as intended across various stages of the development lifecycle.

General Principles

The following general principles will be applied to the various environments:

  • Development Engineers do not change the Test environment.
  • Only the Test Engineer manages/approves any changes to the Test and Integration environments.

Environments

| Type | Scope | Domain | Quantity | Description |
| --- | --- | --- | --- | --- |
| Engineering | Yes | Eir Evo | 1 | Individual sandboxes. This environment is where the initial development and coding of the solution take place. It provides developers with a sandbox to create, test, and iterate on features without affecting the live or test systems. |
| Test | Yes | Eir Evo | 1 | Testing environment. The Test environment is dedicated to testing the solution against defined test cases and scenarios. It ensures that the application meets quality standards and requirements before moving to the next phase. |
| Production | Yes | Eir Evo | 1 | Live environment. This is the live environment where the final, fully tested solution is deployed and made available to end-users. It is the operational environment where the application runs in a production-ready state. |

Supplemental

Test Accounts

| Account | Role/Persona | Environment | Domain | Notes |
| --- | --- | --- | --- | --- |
| Edi Weeks | Teammate | Test | Eir Evo | Test Engineer |
| Akash Jadhav | System Admin | Test | Eir Evo | Tech Lead |
| Edi Weeks | Teammate | Production | Client | Test Engineer |

Client Testers

| Name | Role (Persona) | Notes |
| --- | --- | --- |
| Ben Weeks | Teammate, Admin | Ben will perform end-user testing in the UAT environment. |

CI/CD

The solution will be deployed to the various test environments using a DevOps CI/CD deployment pipeline configured by the Operations Engineer, using separate service accounts. For the avoidance of doubt, these service accounts will not be used for testing.

Tools

Test Tools

| Tool | Scope | Purpose |
| --- | --- | --- |
| Azure DevOps | Yes | Test management and execution tracking. |
| GitHub | Yes | Version control, collaboration, and code review. |
| Visual Studio Code | Yes | Validating code quality directly through extensions and tools. |

Risks

| ID | Title | Description | Mitigation |
| --- | --- | --- | --- |
| 90564 | HR or Accounts may put blockers on Zapp.ie | HR or Accounts may raise blockers | Start a conversation with HR and Accounts |
| 90882 | API endpoint may not return required data | Looping through multiple endpoints may be required | Accept the performance hit and raise the issue |

Issues

| ID | Title | Description | Mitigation |
| --- | --- | --- | --- |
| 90948 | Edi is over capacity on testing | Edi is over capacity on testing | Assign testing tasks to Anton and Natalia B |
| 90953 | Developers are not raising their own enhancements | Developers should propose feature improvements | Lead devs to log improvements; automate testing |
| 91574 | Too many Critical and Major items are open | Excessive critical and major issues remain unresolved | Focus on resolving critical/major items |

Conclusion

The test strategy laid out in this document provides the framework for thorough, efficient, and well-structured testing activities. This document ensures that all stakeholders are aligned on testing efforts, risk management, and resource allocation to deliver a high-quality solution. Continuous collaboration, communication, and iteration will drive improvement and ensure the success of this project.