Internal Documentation

Software Development Lifecycle

The technology team is committed to a regular and predictable release cycle. This document describes the processes that ensure the software meets the required quality threshold while the team continues to deliver new product features.

Ticketing and Ticket States

Tasks are managed as tickets in Zenhub. Tickets are filed in the initial Open state, and then progressed through the subsequent states as they are worked on.

The states that we use are defined as follows:

  • Open: Indicates that the task is assigned but no work has been done on it yet.

  • Code in progress: Indicates that someone is assigned and actively working on the task.

  • Code Review: Indicates that a PR is ready for review, and is waiting for someone to review it. After the pull request is approved, the owner merges their change into main, waits for the deploy, tests their change in staging, then labels the ticket for the current release.

  • Ready for QA: Indicates that the code has been merged, is deployed to the staging environment, has been tested by the owner, and is labeled for the current release. The owner is prepared to hand it over to QA, with the developer confident that the ticket won’t be reopened.

  • In test: Indicates that testing is currently underway, performed by the QA team or any other designated tester.

  • QA verified: Indicates that all specifications have been met satisfactorily and the ticket has been approved. The ticket has been reviewed from the customer’s perspective, and any additional issues or edge cases are documented for tracking or added to the backlog for prioritization.

  • Reopened: Indicates that the ticket has been reopened due to unmet requirements or issues. It remains in this state until the developer addresses the concerns and the ticket progresses back to the “Code in progress” stage.

  • Released to prod or closed: This stage occurs only after the changes have been deployed to production. QA performs a post-release check to verify the changes, ensuring they are implemented correctly before planning for the next batch of tasks.
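The lifecycle above can be sketched as a simple state machine. The transition map below is illustrative only: the state names are taken from this document, but the helper function and the exact set of allowed transitions are an approximation, not a formal specification.

```python
# Illustrative sketch of the ticket lifecycle described above.
# State names come from this document; the transition map is an
# approximation of the process, not an exhaustive rule set.
ALLOWED_TRANSITIONS = {
    "Open": {"Code in progress"},
    "Code in progress": {"Code Review"},
    "Code Review": {"Ready for QA"},
    "Ready for QA": {"In test"},
    "In test": {"QA verified", "Reopened"},
    "Reopened": {"Code in progress"},
    "QA verified": {"Released to prod or closed"},
    "Released to prod or closed": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a ticket may move from `current` to `target`."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

Note that “Reopened” feeds back into “Code in progress”, so a ticket may loop through review and test more than once before it is released.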

Sprint Cycle

Work is planned and executed in “sprints”. For the timing of the recurring sprint cycle, see the Sprint Cycle document.

Tracking work and peer review

Tickets are moved through the cycle described above. The QA team is responsible for ensuring that tickets are delivered according to their requirements, and will reopen tickets that do not meet the quality threshold defined by the team.

As developers begin work on a ticket, they create a new git branch from main to track their work.

Developers are required to obtain peer review on their work. New work is submitted to GitHub as a pull request to main, and the review-request feature is used to tag one or more peers as reviewers. Once objections have been addressed and at least one approval is obtained, the developer is responsible for merging their work into main.
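As a concrete sketch of the branch-per-ticket flow, the commands below simulate it in a throwaway local repository. The branch name and commit messages are hypothetical; in real work the branch is pushed to GitHub and a pull request to main is opened from it.

```shell
# Illustrative only: simulate the branch-per-ticket flow in a
# throwaway local repository. Names are hypothetical.
set -e
tmp=$(mktemp -d)
git -C "$tmp" init -q -b main
git -C "$tmp" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit on main"
git -C "$tmp" checkout -q -b ticket-1234-fix-login   # one branch per ticket
git -C "$tmp" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "work for ticket 1234"
git -C "$tmp" log --oneline main..ticket-1234-fix-login  # commits the PR would contain
```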

At the end of the sprint, the sprint’s “release conductor” is responsible for merging the sprint’s work from main to prod for the momentum codebase, and for performing a release of the latest calculation service to the production environment. After the deploy, manual smoke tests are performed to ensure that the product continues to function.

Continuous Integration and Continuous Deployment

GitHub Actions workflows are set up to support the release cadence.

Merging a pull request to main in either the momentum or momentum-calcs repository triggers an automatic deploy to the staging environment; this is expected to happen multiple times a day. Merging a pull request to prod in the momentum repository triggers an automatic deploy of the application to the production environment. Calculation service deployments are performed via a manual trigger of a GitHub Actions workflow, which is expected to happen once per sprint.
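A minimal trigger for the staging deploy might look like the fragment below. This is a hypothetical sketch: only the push-to-main trigger reflects the behavior described above, while the workflow name, job layout, and deploy script are placeholders. The calculation service’s manual production deploy would use a `workflow_dispatch` trigger instead.

```yaml
# Hypothetical sketch: deploy to staging whenever a PR is merged to main
# (a merge lands as a push to main). The deploy script is a placeholder.
name: deploy-staging
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to staging
        run: ./scripts/deploy.sh staging   # placeholder deploy step
```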

Automatic deploys trigger any database migrations that are necessary to bring the environment’s associated database to the current revision.

Tests are automatically run when pull requests are opened. Pull requests cannot be merged if there are failing tests.

Likewise, static code analyzers run on the codebase when pull requests are opened, and failing static analysis blocks a pull request from being merged.

Monitoring and Observability

After each deployment, automated monitoring tools are used to track system performance, error rates, and user behavior. This helps quickly identify any issues that may have been introduced by the new changes.

Any exception triggered in production results in an alert to the development team for immediate investigation.

Quality Assurance

The QA team is responsible for maintaining the quality of the product. A range of tooling and processes are involved.

As work is performed by developers, merged into main, and thereby deployed to staging, the QA team manually tests it to verify that it meets the ticket’s requirements and that no additional bugs have been introduced. Once a ticket is verified, the QA team moves it to the subsequent state, indicating its readiness to be deployed to production.

At the end of the release cycle, a regression test is performed to check that no features have been inadvertently degraded. These regression tests are formally tracked in test management software.

Automated tests are created by developers, and (as above) are run automatically when pull requests are submitted. It is the responsibility of the QA team to indicate to the development team if test coverage is lower than required.

Security testing is regularly performed using an analysis tool named Zed Attack Proxy, which tests server, application, and network configuration for weaknesses. The QA team also tests for OWASP-related vulnerabilities such as SQL injection. On a regular cadence, a dedicated penetration tester is contracted to conduct a formal test and report.

Load testing is regularly performed to uncover weaknesses in the scaling architecture of the application.

User acceptance testing is performed continually: the QA team and business users of the product provide feedback when the implementation of a feature is confusing or cumbersome.

Visibility

This document is confidential and is a proprietary work product of Cadence OneFive. The information contained herein may not be copied or distributed without the specific written consent of Cadence OneFive.