A backing service is any resource or service that an application depends on but that is not part of the application itself. Common examples include databases and message queues. Treating backing services as attached resources means the application receives the service's details through its configuration, eliminating any hardcoded dependencies within the application.

For instance, the application might be supplied with a uniform resource identifier (URI) and credentials for a particular database. This approach allows the backing service to be scaled and managed independently of the application, offering flexibility and seamless transitions between environments. A database used in development, for example, can be deliberately much smaller than its production counterpart, aiding resource management and cost efficiency.
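As a concrete sketch, an application might locate its database through an environment variable rather than a hardcoded address. The variable name `DATABASE_URL` and the helper below are illustrative conventions, not something prescribed by the 12-factor methodology itself:

```python
import os

def get_database_url() -> str:
    """Return the database URI supplied by the environment.

    Swapping this one value is all it takes to point the same code
    at a small development database or a large production one.
    """
    url = os.environ.get("DATABASE_URL")
    if url is None:
        raise RuntimeError("DATABASE_URL is not set; attach a backing service")
    return url

# The platform (not the code) decides which database is attached:
os.environ["DATABASE_URL"] = "postgres://user:secret@db.example.com:5432/app"
print(get_database_url())
```

Because the URI arrives from outside, swapping a local database for a managed cloud one requires no code change at all.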

V. Build, Release, Run: Strictly Separate Build and Run Stages

The principle of strictly separating the build and run stages of the software development life cycle aligns with the earlier discussion of the factory and the citadel.

The factory stage is focused on building the application efficiently. The objective here is to minimize build time and to ensure the application's reliability through rigorous testing before it is released. This phase is fully automated and produces an immutable artifact, which guarantees reproducibility. That, in turn, makes debugging simpler: an artifact that never changes removes a whole class of variables from the investigation.

The citadel stage, on the other hand, is where the artifact from the factory is run. It is optimized for security, providing only what is necessary to run the application artifact. Including build tools at this stage would only create additional avenues for security vulnerabilities.

By separating the build and run stages, you can optimize each for different factors—efficiency and reliability in the build stage, and security in the run stage. This clear division minimizes the risk of security breaches when the application is operating within the citadel.
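To make the division concrete, here is a toy sketch (not a real build system): the factory packages the source into an immutable artifact and records its digest, while the citadel verifies that digest before running and never rebuilds anything itself, so no build tooling needs to exist at run time.

```python
import hashlib
import io
import tarfile

def build(source: bytes) -> tuple[bytes, str]:
    """Factory stage: package the source into an artifact and fingerprint it."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        info = tarfile.TarInfo(name="app.py")
        info.size = len(source)
        tar.addfile(info, io.BytesIO(source))
    artifact = buf.getvalue()
    return artifact, hashlib.sha256(artifact).hexdigest()

def run(artifact: bytes, expected_digest: str) -> None:
    """Citadel stage: refuse to run anything that differs from what was built."""
    if hashlib.sha256(artifact).hexdigest() != expected_digest:
        raise RuntimeError("artifact was modified after build; refusing to run")
    # ...launch the application from the verified artifact...
```

The digest check is the mechanical expression of immutability: if debugging is needed, the citadel is provably running the exact bytes the factory tested.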

VI. Processes: Execute the App as One or More Stateless Processes

At the beginning of this chapter, I compared the architecture of cloud native applications to Alan Kay's analogy of cells: individual, loosely coupled components working together. Building on that, this principle recommends that each component run as an independent, stateless process.

Stateless doesn't mean a process has no state at all. Rather, it means that any state a component maintains should be stored externally, such as in a database, rather than in the process's memory or on its local disk. This matters especially when a process restarts, because locally held state would be lost. Notably, the Heroku platform offered no persistent storage, so holding state locally was simply impossible and this approach was a necessity. Even on cloud native platforms where persistence is available, the principle remains important and beneficial.

Adherence to statelessness makes processes easy to scale, since there is no state to lose when scaling down or to replicate when scaling up. It also removes the risk of process failures caused by a corrupted internal state, enhancing reliability. Most importantly, the absence of local state means a process can be relocated to different hardware or a different environment after a failure, enabling cloud native applications that are more reliable than the infrastructure on which they run.
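A minimal sketch of what externalizing state buys you. `ExternalStore` here is a stand-in for a real backing service such as Redis or a database; the names are illustrative. The point is that a replacement worker picks up exactly where a failed one left off, because nothing lived inside the worker:

```python
class ExternalStore:
    """Stand-in for a backing service that outlives any single process."""
    def __init__(self) -> None:
        self._data: dict[str, int] = {}

    def incr(self, key: str) -> int:
        self._data[key] = self._data.get(key, 0) + 1
        return self._data[key]

class StatelessWorker:
    """Holds no state of its own; every request reads and writes the store."""
    def __init__(self, store: ExternalStore) -> None:
        self.store = store

    def handle_request(self) -> int:
        return self.store.incr("requests")

store = ExternalStore()
worker = StatelessWorker(store)
worker.handle_request()
worker.handle_request()

# Simulate the worker crashing and a fresh instance taking over:
replacement = StatelessWorker(store)
count = replacement.handle_request()  # the count continues from where it left off
```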

Although the 12-factor app principles predate microservices, the notion of a cloud native application as multiple stateless processes is foundational to the convergence of microservices and cloud native. It’s worth noting that the Heroku platform was built upon small instances called dynos, essentially small stateless processes, allowing for no alternative. While loose coupling is a key attribute of cloud native applications, the question of whether each component must be deployed as a separate microservice is a topic I will explore further.

Addressing a common question: statelessness doesn't necessarily preclude all forms of state. Buffering unsent data for efficiency, such as batching insert statements, might seem to violate the principle. However, this is temporary, transient state, intended to be cleared in the short term, and it can be lost without compromising the overall functionality or correctness of the application. Such practices therefore remain consistent with stateless principles, provided they don't affect overall functionality or lead to data loss.
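Such transient buffering might be sketched as follows. `BatchWriter` and `flush_fn` are illustrative names, not a real library API; the buffer exists purely for efficiency, and a crash before a flush loses at most one unflushed batch, which the application must be prepared to resend or tolerate:

```python
from typing import Any, Callable

class BatchWriter:
    """Buffers rows in memory and flushes them in batches.

    The buffer is transient state: it holds data only briefly,
    for efficiency, and correctness never depends on it surviving.
    """
    def __init__(self, flush_fn: Callable[[list], None], batch_size: int = 3) -> None:
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer: list[Any] = []

    def write(self, row: Any) -> None:
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.flush_fn(self.buffer)  # e.g. one multi-row INSERT statement
            self.buffer = []

written: list[str] = []
writer = BatchWriter(written.extend, batch_size=3)
for row in ("a", "b", "c", "d"):
    writer.write(row)
# "a", "b", "c" were flushed as one batch; "d" is still transient in memory
```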
