
Statelessness

Intro

Applications deployed in modern cloud-native environments—especially those running on Kubernetes and scaled horizontally—must be designed to operate in a stateless manner. Statelessness ensures that any instance of an application can handle any request at any time, enabling reliable scaling, resilience, rolling updates, and fault tolerance.

Stateful behavior inside application instances creates fragility, complicates deployments, and breaks core operational guarantees expected in a containerized environment.

HTTP

HTTP is inherently a stateless protocol. Each request is independent and contains all the information needed for the server to process it. The server does not (and should not) rely on previous requests unless state is explicitly introduced by the application.

Servers that associate persistent or implicit state with incoming requests (for example, session-based authentication) violate HTTP statelessness and make the application dependent on server-side state.
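
To make this concrete, here is a minimal sketch of a stateless endpoint. It assumes Flask and a hypothetical verify_token helper (neither is mandated by this guideline); the point is that every request carries its own credentials, so any replica can serve it.

    # Minimal sketch of a stateless endpoint (Flask is assumed for illustration).
    # Each request carries everything needed to authorize it; the server keeps
    # no per-user state between requests.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def verify_token(token: str) -> dict | None:
        """Hypothetical stateless check, e.g. validating a signed token's
        signature and expiry. No server-side session lookup is involved."""
        return {"user": "alice"} if token else None

    @app.get("/profile")
    def profile():
        auth_header = request.headers.get("Authorization", "")
        token = auth_header.removeprefix("Bearer ").strip()
        claims = verify_token(token)
        if claims is None:
            return jsonify(error="unauthorized"), 401
        return jsonify(user=claims["user"])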

Sessions

Traditional server-side sessions store per-user data in-memory or on the local server instance. This creates a stateful application runtime. In horizontally scaled environments, this leads to serious operational issues.

Why local server sessions are problematic

In a cloud environment (e.g., Kubernetes), multiple replicas of the same application run simultaneously. Incoming HTTP requests are load-balanced and may be routed to any instance.

Example problem:

  • Request #1 goes to Instance A → session state is stored there
  • Request #2 goes to Instance B → session state is missing, causing broken behavior

This creates inconsistent user experiences and breaks the expectation that all application replicas are interchangeable.
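
The failure mode can be sketched in a few lines. The snippet below assumes Flask and an illustrative X-Session-Id header; the essential problem is the in-process dictionary, which exists only on the replica that handled the login.

    # Anti-pattern: per-user state kept in this process's memory.
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Visible only to this instance; lost on restart, missing on other replicas.
    SESSIONS: dict[str, dict] = {}

    @app.post("/login")
    def login():
        session_id = uuid.uuid4().hex
        SESSIONS[session_id] = {"user": "alice"}  # stored on Instance A only
        return jsonify(session_id=session_id)

    @app.get("/me")
    def me():
        session_id = request.headers.get("X-Session-Id", "")
        session = SESSIONS.get(session_id)  # None if the LB routed to Instance B
        if session is None:
            return jsonify(error="session not found"), 401
        return jsonify(user=session["user"])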

Acceptable Solutions

To maintain statelessness, one of the following approaches must be used:

  1. Use an external, centralized session store
    (Preferred: Redis, Memcached, or any distributed cache; see the sketch after this list)

    • All instances share the same session data
    • Fully compatible with horizontal scaling
  2. Use sticky sessions (session affinity)

    • Requests from the same user always go to the same instance
    • Only allowed with explicit approval from the Head of DevOps
    • Not recommended due to poor scalability and fragility
  3. Avoid server-side sessions entirely

    • Keep any per-user state in the request itself (e.g., a signed token) or in an external service
    • The simplest way to keep the application stateless
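
A minimal sketch of option 1 follows, assuming the redis-py client and an environment variable named SESSION_REDIS_URL (both are illustrative choices, not mandated names). Because every replica reads and writes the same store, any instance can serve any request.

    # Shared session store: all replicas talk to the same Redis instance.
    import json
    import os
    import uuid

    import redis

    # Connection details come from the environment, never from code.
    store = redis.Redis.from_url(os.environ["SESSION_REDIS_URL"])

    SESSION_TTL_SECONDS = 3600  # sessions expire automatically

    def create_session(user: str) -> str:
        session_id = uuid.uuid4().hex
        store.setex(f"session:{session_id}", SESSION_TTL_SECONDS,
                    json.dumps({"user": user}))
        return session_id

    def load_session(session_id: str) -> dict | None:
        raw = store.get(f"session:{session_id}")
        return json.loads(raw) if raw else None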

Storage

Applications deployed in a cloud-native or containerized environment must not store files locally on the container’s filesystem. Containers are ephemeral by design—files written to the local filesystem may be lost at any time due to restarts, scaling events, rolling updates, or pod rescheduling.

All persistent or shared file storage must use external storage services.

Supported Storage Options

  1. DIT On-Prem S3-Compatible Object Storage
    This is the recommended and default storage solution for handling:

    • file uploads
    • documents
    • media assets
    • long-term or shared storage
    It offers durability, high availability, and compatibility with standard S3 tooling and SDKs (see the usage sketch after this list).
  2. Kubernetes Persistent Volumes (PV/PVC) (Use only when required)
    Persistent volumes may be attached to containers when an application explicitly needs filesystem-level persistence.
    They should be used sparingly and only for valid technical reasons—such as databases or applications that cannot operate using object storage.
    PV usage must align with DevOps guidelines to avoid lock-in or fragility in deployments.
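
As a usage sketch for the object storage option, the snippet below uses boto3 against an S3-compatible endpoint. The environment variable names and bucket are illustrative assumptions; the actual endpoint, credentials, and recommended SDKs come from the Head of DevOps (see below).

    # Write files to external object storage instead of the container filesystem.
    import os

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url=os.environ["S3_ENDPOINT_URL"],  # on-prem S3-compatible endpoint
        aws_access_key_id=os.environ["S3_ACCESS_KEY"],
        aws_secret_access_key=os.environ["S3_SECRET_KEY"],
    )

    def store_file(local_path: str, key: str) -> None:
        """Persist a file to a bucket rather than the local filesystem."""
        s3.upload_file(local_path, os.environ["S3_BUCKET"], key)

    def fetch_file(key: str, local_path: str) -> None:
        """Retrieve a stored object to a temporary local path for processing."""
        s3.download_file(os.environ["S3_BUCKET"], key, local_path)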

Requirements

  • Applications must be designed to write files to external storage instead of the container’s local filesystem.
  • Any exception to this rule requires a documented justification and approval from the Head of DevOps.
  • Ensure that all storage endpoints, credentials, and SDK configuration are supplied through environment variables rather than hardcoded in the application (see the sketch below).
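
The sketch below illustrates the environment-variable requirement: all external connection details are read from the environment at startup (the variable names are illustrative), and the application fails fast if one is missing instead of falling back to a hardcoded value.

    # Centralized, environment-driven configuration; nothing is hardcoded.
    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StorageConfig:
        s3_endpoint_url: str
        s3_bucket: str
        session_redis_url: str

    def load_config() -> StorageConfig:
        try:
            return StorageConfig(
                s3_endpoint_url=os.environ["S3_ENDPOINT_URL"],
                s3_bucket=os.environ["S3_BUCKET"],
                session_redis_url=os.environ["SESSION_REDIS_URL"],
            )
        except KeyError as missing:
            raise RuntimeError(f"Missing required environment variable: {missing}") from None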

Please contact the Head of DevOps to obtain connection details, access credentials, recommended SDKs, and configuration requirements for using the supported storage services.