Automated Testing
Automated testing is a fundamental requirement for ensuring software quality, stability, and long-term maintainability. All applications developed at DIT must include a structured and reliable automated testing approach that validates core functionality, prevents regressions, and supports safe continuous integration and deployment.
Automated tests must be treated as first-class citizens in the development process. They are not optional; they are an integral part of delivering production-grade software.
Testing Requirements
Every software project must include automated tests covering, at minimum, the following layers:
1. Unit Tests
Unit tests validate individual functions, classes, or modules in complete isolation; a minimal sketch follows the requirements below.
- Must cover critical logic and computational paths
- Must not rely on external services or I/O
- Must run quickly and deterministically
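For illustration, a minimal unit test written with Python and pytest might look like the following; `calculate_discount` is a hypothetical pure function standing in for any piece of business logic:

```python
# test_pricing.py -- minimal unit-test sketch (pytest).
# `calculate_discount` is a hypothetical function used for illustration.
import pytest


def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_discount_applies_correctly():
    assert calculate_discount(200.0, 25) == 150.0


def test_zero_discount_returns_original_price():
    assert calculate_discount(99.0, 0) == 99.0


def test_invalid_percentage_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```

Note that these tests touch no network, disk, or clock, so they run in milliseconds and always produce the same result.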
2. Integration Tests
Integration tests validate interactions between multiple components, such as:
- database access
- external API calls (mocked where appropriate)
- storage or cache interactions
- message broker behavior
These tests ensure that combined functionality behaves correctly.
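As a sketch of the database case, the example below exercises a small repository against an isolated in-memory SQLite database; `UserRepository` is a hypothetical data-access class, and real projects would substitute their own driver and schema:

```python
# test_user_repository.py -- integration-test sketch using an in-memory
# SQLite database. `UserRepository` and the schema are hypothetical.
import sqlite3

import pytest


class UserRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def add(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get(self, user_id: int) -> str | None:
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


@pytest.fixture
def repo():
    conn = sqlite3.connect(":memory:")  # isolated database per test
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield UserRepository(conn)
    conn.close()


def test_roundtrip_persists_user(repo):
    user_id = repo.add("Alice")
    assert repo.get(user_id) == "Alice"
```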
3. End-to-End (E2E) or Functional Tests
E2E tests simulate real user workflows and validate system behavior as a whole (see the sketch after this list).
- Should cover core user and API flows
- Should be few in number but high-value
- Must not replace proper unit or integration coverage
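A hedged sketch of an API-level E2E test is shown below, using the `requests` library against a deployed test environment; the endpoints, payloads, and the `STAGING_URL` variable are illustrative assumptions rather than a real DIT API:

```python
# test_signup_flow.py -- E2E sketch for a core API flow, assuming a deployed
# test environment reachable via the hypothetical STAGING_URL variable.
# Endpoints and payloads are illustrative only.
import os

import requests

BASE_URL = os.environ.get("STAGING_URL", "http://localhost:8000")


def test_user_can_sign_up_and_log_in():
    # Step 1: create an account through the public API.
    signup = requests.post(
        f"{BASE_URL}/api/users",
        json={"email": "e2e-test@example.com", "password": "s3cret!"},
        timeout=10,
    )
    assert signup.status_code == 201

    # Step 2: log in with the same credentials and receive a token.
    login = requests.post(
        f"{BASE_URL}/api/sessions",
        json={"email": "e2e-test@example.com", "password": "s3cret!"},
        timeout=10,
    )
    assert login.status_code == 200
    assert "token" in login.json()
```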
Code Coverage Requirement
DIT enforces the following coverage rule:
New or modified code must meet a minimum of 70% automated test coverage.
- Coverage is measured per Pull Request.
- Code that does not meet the 70% threshold cannot be merged without explicit approval from the Head of Digital Development.
- Coverage should be meaningful: teams must avoid writing trivial or superficial tests simply to satisfy the requirement.
- Critical logic paths should be thoroughly tested beyond the minimum threshold.
While 70% is the baseline for new code, teams are encouraged to exceed this whenever feasible.
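As one possible way to enforce the threshold (the tooling choice is an assumption, not a mandate), a Python project using pytest-cov can fail the test run whenever coverage drops below 70%:

```toml
# pyproject.toml -- sketch of enforcing the 70% floor with pytest-cov.
# `myapp` is a placeholder package name; any coverage tool with a
# fail-under option achieves the same effect.
[tool.pytest.ini_options]
addopts = "--cov=myapp --cov-fail-under=70"
```

Note that `--cov-fail-under` gates total project coverage; measuring only the lines changed in a Pull Request typically requires a diff-aware tool (for example diff-cover), which is left as a team-level choice.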
Test Automation in CI/CD
All automated tests must:
- run on every Pull Request
- block merges if any test fails
- be fully deterministic and self-contained
- produce clear and actionable output for developers
No manual steps should be required to execute the full test suite.
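For example, a minimal PR-gating job might look like the following sketch; GitHub Actions and the Python toolchain are assumptions here, and the same shape applies to GitLab CI, Jenkins, or any other runner:

```yaml
# .github/workflows/tests.yml -- sketch of a PR-gating test job.
name: tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      # A non-zero exit code here fails the check and blocks the merge.
      - run: pytest --cov=myapp --cov-fail-under=70
```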
Test Data and Fixtures
Testing environments must:
- use isolated test databases
- reset data between tests
- avoid reliance on production data
- seed predictable datasets
- use factories/builders for clean, repeatable test data (sketched below)
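The sketch below shows one way to satisfy these points with pytest: a fixture that hands every test a fresh, isolated in-memory database, plus a small factory with predictable defaults; the `users` table and `make_user` helper are hypothetical:

```python
# conftest.py -- sketch of isolated, repeatable test data. The `users` table
# and `make_user` factory are hypothetical placeholders.
import sqlite3

import pytest


@pytest.fixture
def db():
    """Fresh in-memory database per test: isolation and reset come for free."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
    yield conn
    conn.close()


def make_user(db, name="Test User", role="member"):
    """Factory with predictable defaults that individual tests can override."""
    cur = db.execute("INSERT INTO users (name, role) VALUES (?, ?)", (name, role))
    db.commit()
    return cur.lastrowid


def test_role_is_persisted(db):
    admin_id = make_user(db, role="admin")
    row = db.execute("SELECT role FROM users WHERE id = ?", (admin_id,)).fetchone()
    assert row[0] == "admin"
```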
Mocking and Stubs
External dependencies (message brokers, APIs, storage services) should be mocked or stubbed in unit tests; a sketch follows the guidelines below.
Guidelines:
- mock only when necessary
- avoid over-mocking, which reduces test realism
- use integration tests to validate real interactions
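The sketch below replaces only the outbound dependency and keeps the logic under test real; `NotificationService` and its injected client are hypothetical names:

```python
# test_notifications.py -- sketch of stubbing a single external dependency
# (an API client) with unittest.mock while testing real logic.
from unittest.mock import Mock


class NotificationService:
    def __init__(self, api_client):
        self.api_client = api_client

    def notify(self, user_id: int, message: str) -> bool:
        # Real logic under test; only the outbound call is replaced.
        if not message.strip():
            return False
        self.api_client.send(user_id, message)
        return True


def test_notify_sends_via_api():
    api = Mock()
    service = NotificationService(api)

    assert service.notify(42, "build passed") is True
    api.send.assert_called_once_with(42, "build passed")


def test_blank_message_is_not_sent():
    api = Mock()
    assert NotificationService(api).notify(42, "   ") is False
    api.send.assert_not_called()
```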
Performance and Concurrency Testing (Optional but Recommended)
For systems handling significant load or concurrency, teams are encouraged to include:
- concurrency tests
- race-condition detection tests
- load or stress tests
- queue/worker throughput tests
These tests are especially important for applications dealing with message brokers or high-frequency operations.
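As a small illustration, the sketch below hammers a shared counter from several threads and asserts that no increments are lost; `SafeCounter` is a hypothetical example of guarding shared state:

```python
# test_counter_concurrency.py -- sketch of a race-condition check: increment a
# shared counter from many threads and assert no updates are lost.
import threading


class SafeCounter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:  # without the lock, updates can be lost intermittently
            self.value += 1


def test_concurrent_increments_are_not_lost():
    counter = SafeCounter()
    threads = [
        threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
        for _ in range(8)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.value == 8 * 1000
```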
Summary
Automated testing ensures:
- correctness
- reliability
- maintainability
- safe deployments
- prevention of regressions
- confidence in system evolution
All DIT software must implement a comprehensive automated testing strategy, with new code meeting a minimum of 70% test coverage and all tests integrated into the CI/CD pipeline.
