We use CircleCI to host our CI/CD environment. Our entire platform is built with test-driven development, with as much (or more) code written as tests as in the software under test. All of this is verifiable, since the tests, the code, and the entire CI/CD pipeline are open source. The build pipeline — from issue to pull request to merge to testing to packaging — is process-driven and publicly referenceable in the tools that we use:
- Jira: Issue management and agile development (weekly sprints)
- GitHub: Source code management; the pull request workflow is integrated with Jira and CircleCI
- CircleCI: Our build toolchain, which runs a full suite of tests: unit, integration (mock environments), system (dockerized live systems), and smoke (web UI).
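The four test tiers above could be wired into CircleCI roughly as follows. This is a minimal sketch, not the actual OpenNMS `.circleci/config.yml`: the job names, Docker image, Maven profile, and script names are all illustrative assumptions.

```yaml
version: 2.1
jobs:
  unit-tests:
    docker:
      - image: cimg/openjdk:11.0        # illustrative image, not the real build image
    steps:
      - checkout
      - run: mvn test                   # unit tests
  integration-tests:
    docker:
      - image: cimg/openjdk:11.0
    steps:
      - checkout
      - run: mvn verify -P integration  # hypothetical profile for mock-environment tests
  system-tests:
    machine: true                       # VM executor so dockerized live systems can run
    steps:
      - checkout
      - run: ./run-system-tests.sh      # hypothetical script wrapping system + smoke tests
workflows:
  build-and-test:
    jobs:
      - unit-tests
      - integration-tests:
          requires: [unit-tests]
      - system-tests:
          requires: [integration-tests]
```

Gating each stage on the previous one (`requires:`) is what makes the pipeline catch regressions before a change can be merged and packaged.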
How do you ensure that there are no regressions with new code additions? (And is this automatic?)
We have been doing test-driven development on the project for more than 10 years now, and yes, the process is intended to be automatic.
Here's a link to the CI/CD toolchain: https://circleci.com/gh/OpenNMS/opennms.
How do you measure code coverage for tests?
We integrate with SonarCloud.
Here's the coverage report for the latest code in active development: https://sonarcloud.io/component_measures?id=OpenNMS_opennms&metric=coverage#
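For a Maven-based Java project like this one, the coverage flow into SonarCloud typically looks like the sketch below. The goals come from the standard `jacoco-maven-plugin` and `sonar-maven-plugin`; the organization key and the exact OpenNMS invocation are assumptions, not the project's actual build command.

```shell
# Run the test suite with the JaCoCo agent attached so coverage is recorded.
mvn verify

# Push the analysis (including the JaCoCo coverage report) to SonarCloud.
# The organization key below is hypothetical.
mvn sonar:sonar \
    -Dsonar.host.url=https://sonarcloud.io \
    -Dsonar.organization=example-org \
    -Dsonar.login="$SONAR_TOKEN"
```

Running this in CI on every merge is what keeps the published coverage number current.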