You have a container nicely packaging your microservice; now you need to choose where to run it—a container runtime. Many people automatically associate Kubernetes, the container orchestrator, with cloud native. “Containerized microservices running on Kubernetes” is almost the definition of cloud native for some. However, in this book’s definition of cloud native, I prioritize efficiency, and Kubernetes will not always be the most efficient way of running a container on Google Cloud.
To illustrate this, consider a container as a person and the container runtime as a vehicle. If you have a family of four and use a car multiple hours each day, owning a standard five-seater car makes sense. At times, the car may transport all four individuals; other times, only one or two. While the car may not be fully utilized at all times, it is always ready and possesses adequate space. Occasionally, you might even squeeze an additional passenger into the middle seat. This scenario reflects the versatility of Kubernetes.
However, if there are times when you need to accommodate more people—say, when grandparents visit monthly, and you need six seats—you might consider purchasing a seven-seater car, perhaps with two extra seats that fold away when not needed. This mirrors the functionality of Kubernetes with autoscaling, where the capacity can expand to accommodate additional load when required.
On the flip side, consider a different scenario where:
- You are living in a city.
- You undertake short, solo journeys every other day.
- Once a week, you travel for an outing with a group of friends.
- A few times a year, you travel a hundred miles to visit family with your partner.
In such a situation, does owning a car make sense? Probably not.
Now imagine an Uber-like service offering on-demand transportation, where you can order a self-driving car to accommodate anything from one to twenty people. You merely specify the number of passengers, a suitable car arrives within 30 seconds, and you pay per minute of use. Would you still own a car, or would you prefer this “carless” service?
This scenario is akin to running containers using a serverless container runtime. Yes, there may be a cluster of machines (perhaps a Kubernetes cluster) somewhere behind the scenes, but they’re so abstracted that you don’t need to worry about them.
However, imagine you changed jobs and now have a 60-minute daily commute where punctuality is critical. You might opt to own a two-seater sports car for commuting and use the on-demand service for larger, less frequent trips. Here, owning the car provides guaranteed capacity, and the on-demand service covers the situations where you don’t need that assurance.
Likewise, different services exist on Google Cloud for running containers. Although Kubernetes is a powerful abstraction, it has a steep learning curve, and running a cluster 24/7 will not always be efficient. You can use Kubernetes directly on Google Cloud with Google Kubernetes Engine (GKE), but you don’t need to. If your services run all the time with occasional spikes, a Kubernetes cluster may make sense; if they run intermittently, serverless container runtimes alone may be the better fit; often a combination of the two works best.
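One practical consequence of choosing a serverless container runtime is the contract your container must honor: the platform tells the container which port to listen on through the `PORT` environment variable, a convention Cloud Run follows. As a minimal sketch (the handler class, greeting text, and `make_server` helper here are illustrative, not from any Google library), a container-friendly service in Python might look like this:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloHandler(BaseHTTPRequestHandler):
    """Responds to every GET request with a plain-text greeting."""

    def do_GET(self):
        body = b"Hello from a containerized service\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep per-request logging quiet


def make_server() -> HTTPServer:
    # Serverless container runtimes such as Cloud Run inject the port
    # to listen on via the PORT environment variable; fall back to
    # 8080 when running locally.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("", port), HelloHandler)


# In the container's entrypoint you would call:
#     make_server().serve_forever()
```

Because the port is read from the environment rather than hardcoded, the same image runs unchanged on a laptop, in a Kubernetes pod, or on a serverless runtime.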
Cloud Foundry, an earlier cloud native platform, popularized a haiku that elegantly captures the essence of cloud native services:
Here is my source code
Run it on the cloud for me
I do not care how
@onsijoe
Drawing parallels with Cloud Foundry’s approach, once you have a service running in a container on Google Cloud, you can adopt a similar philosophy: “Here is my containerized service, ensure it’s ready to handle incoming requests.” This approach offers immense flexibility. The specifics and trade-offs of various services will be explored in detail in later chapters.