Concurrency
Concurrency is a foundational aspect of modern, cloud-native and horizontally scaled applications. It enables systems to handle multiple operations simultaneously—but it also introduces complexity and the risk of race conditions. When building concurrent applications, developers must design their systems so that simultaneous operations do not interfere with each other and do not lead to inconsistent or unpredictable outcomes.
This requires careful consideration of sequencing, timing, and shared state.
Techniques to ensure safe concurrency include:
- thread-safe programming practices
- atomic operations
- optimistic concurrency controls
- transactional boundaries
- safe locking mechanisms (only when necessary)
Applications must be designed so that concurrency does not introduce ambiguity or nondeterministic behavior.
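Of these, thread-safe programming is the most local concern: any shared, mutable, in-process state must be guarded so that concurrent read-modify-write sequences cannot interleave. A minimal sketch in Python (the counter and function names are invented purely for illustration):

    import threading

    counter = 0
    counter_lock = threading.Lock()

    def record_request():
        global counter
        with counter_lock:  # the read-modify-write below executes as one unit
            counter += 1

    threads = [threading.Thread(target=record_request) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # always 100; without the lock the increments could interleave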
Optimistic Concurrency Control (OCC)
One of the most common causes of concurrency bugs is the “check-then-act” anti-pattern.
In such scenarios:
- Multiple processes read a condition or a piece of shared state.
- They all observe the same value.
- They decide the condition is met and proceed.
- Meanwhile, another process changes the underlying state.
- The original processes act based on stale information, producing incorrect results.
This pattern appears frequently in APIs, database updates, message processing, and state transitions.
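The sketch below makes the failure mode concrete: two concurrent callers both pass the check before either performs the act, and the shared state ends up in an impossible condition. (The inventory dictionary, SKU, and sleep are invented purely to make the race reproducible.)

    import threading
    import time

    stock = {"sku-1": 1}

    def reserve(sku):
        if stock[sku] > 0:      # check: read shared state
            time.sleep(0.01)    # widen the window in which another caller can act
            stock[sku] -= 1     # act: based on a now-stale read

    callers = [threading.Thread(target=reserve, args=("sku-1",)) for _ in range(2)]
    for t in callers:
        t.start()
    for t in callers:
        t.join()
    print(stock)  # {'sku-1': -1}: both callers saw stock available and both acted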
How OCC Solves This
Optimistic Concurrency Control avoids this problem by removing the dependency on pre-checks.
Instead of:

    if condition_is_true:
        perform_action()

It relies on:
- a version number,
- a timestamp, or
- an ETag (in HTTP systems)
to detect whether the underlying resource changed between reading and writing.
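In HTTP systems, this check is typically expressed with the ETag response header and the If-Match request header: the server rejects the write with 412 Precondition Failed if the resource has changed since it was read. A minimal sketch using the requests library (the URL and payload are placeholders, not a real DIT endpoint):

    import requests

    ORDER_URL = "https://api.example.internal/orders/42"  # placeholder endpoint

    # Read the resource and remember the version the server reported.
    resp = requests.get(ORDER_URL, timeout=5)
    resp.raise_for_status()
    etag = resp.headers["ETag"]
    order = resp.json()

    # Ask the server to apply the change only if the resource is unchanged.
    order["status"] = "confirmed"
    update = requests.put(
        ORDER_URL, json=order, headers={"If-Match": etag}, timeout=5
    )

    if update.status_code == 412:  # Precondition Failed: someone changed it first
        ...  # re-read and retry, or report a conflict to the caller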
The flow becomes:
- Read the resource and its version.
- Attempt to update it.
- The database or storage layer checks if the version is unchanged.
- If the version is unchanged → the update is applied and the version advances.
- If the version has changed → the operation fails → the caller retries or aborts.
This prevents race conditions without locking, and without forcing requests to wait on each other.
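Against a relational store, the version check is usually folded into the WHERE clause of the UPDATE itself, so the check and the write happen atomically on the database side. A sketch using Python's built-in sqlite3 module (the documents table, its columns, and the retry limit are illustrative assumptions, not an existing DIT schema):

    import sqlite3

    def update_with_occ(conn, doc_id, new_body, max_retries=3):
        for _ in range(max_retries):
            # 1. Read the resource and its current version.
            row = conn.execute(
                "SELECT version FROM documents WHERE id = ?", (doc_id,)
            ).fetchone()
            if row is None:
                raise LookupError(f"document {doc_id} not found")
            (version,) = row

            # 2. Attempt the update; the WHERE clause is the version check.
            cur = conn.execute(
                "UPDATE documents SET body = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_body, doc_id, version),
            )
            conn.commit()

            # 3. rowcount == 0 means another writer changed the row first.
            if cur.rowcount == 1:
                return True
        return False  # the caller decides whether to surface a conflict

Retries are cheap because nothing was locked: a failed attempt simply re-reads the current state and tries again.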
Why We Prefer OCC
DIT strongly recommends using Optimistic Concurrency Control (OCC) whenever possible because:
- it scales extremely well
- it avoids locking resources
- it works naturally in stateless, horizontally scaled environments
- it minimizes contention under normal loads
- it greatly simplifies system behavior
Pessimistic Concurrency Control (PCC)
Pessimistic Concurrency Control uses locks to prevent concurrent modifications.
While effective in certain scenarios, PCC:
- introduces blocking and waiting
- reduces throughput
- complicates error handling
- increases operational fragility
- can lead to deadlocks if mishandled
- is poorly suited for microservices and cloud-native environments
For these reasons, PCC must be avoided unless absolutely required.
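For recognition purposes only, pessimistic locking typically looks like the sketch below: the row is locked at read time and every other writer blocks until the transaction ends. (PostgreSQL via psycopg2 is assumed here; the connection string and table are placeholders.)

    import psycopg2

    conn = psycopg2.connect("dbname=example")  # placeholder connection string

    with conn:  # commit on success, roll back on error
        with conn.cursor() as cur:
            # FOR UPDATE locks the row until the transaction ends;
            # concurrent writers wait (or deadlock) instead of failing fast.
            cur.execute(
                "SELECT body FROM documents WHERE id = %s FOR UPDATE", (42,)
            )
            row = cur.fetchone()
            cur.execute(
                "UPDATE documents SET body = %s WHERE id = %s", ("new body", 42)
            )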
Approval Requirement
If the use of pessimistic locking or PCC-based architecture is unavoidable:
- you must obtain explicit approval from the Head of Digital Development
- the design must include a formal justification and risk analysis
Designing for concurrency through OCC, atomic operations, and stateless application logic makes systems more predictable, more resilient, and more scalable across DIT infrastructure.
