The 12 Factors Governing Container Services… (1/4)
Before we begin decoding the logic behind the guidelines, we must first establish some facts.
The set of twelve factors that we have mentioned in a couple of our blogs has been formalized into a methodology, called the twelve-factor application methodology.
A methodology is “a collection of effective ways or methods to do something”.
Some of you might even recognize this name — Heroku.
Simply put, Heroku is a Platform as a Service (PaaS), which lets developers build, run, and operate applications entirely in the cloud.
This methodology was first drafted by the developers at Heroku.
And the most important thing about these guidelines is that the twelve factors are not specific to any cloud provider, platform, or language.
These factors represent a set of guidelines, or best practices, for developing and scaling portable, resilient applications that thrive in cloud environments (specifically Software as a Service applications).
So what are these 12 factors?
1. Codebase
2. Dependencies
3. Config
4. Backing Services
5. Build, Release, Run
6. Processes
7. Port Binding
8. Concurrency
9. Disposability
10. Dev/Prod Parity
11. Logs
12. Admin Processes
1. Codebase
Kubernetes relies heavily on declarative constructs.
Applications built on Kubernetes are described using text-based representations in YAML or JSON.
It is very common that we reference containers while developing our application to add functionality.
These referenced containers themselves are described in the source code, traditionally as a Dockerfile.
The build process then converts that Dockerfile (text) into a container image.
Those who are familiar with Docker will know what the term “image” means.
For simplicity’s sake, consider the term “image” as capturing a ‘moment’ in time and then ‘storing’ that ‘moment’ for later use when we might need to access that ‘moment’ again in the future.
So what we mean by a Codebase is that, because everything from the image to the container deployment behavior is captured in text, we gain the ability to source control everything in the system, typically using git.
Git is a natural fit here: it is the de facto standard for version control.
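As a minimal sketch of this idea (all names here, such as my-app and registry.example.com, are illustrative), a Kubernetes Deployment manifest is just text that can be committed to git next to the application source and its Dockerfile:

```yaml
# deployment.yaml: a declarative, text-based description of how the
# application should run. This file lives in the same repository as
# the source code and Dockerfile, so the whole system is versioned.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # The image built from the Dockerfile in this repository.
          image: registry.example.com/my-app:1.0.0
```

Because the manifest is plain text, every change to the deployment behavior shows up in git history just like a code change.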
2. Dependencies
Keep in mind that implicitly depending on system-level tools or libraries is never a good option.
It is considered good practice to explicitly declare every dependency required by the application.
In addition, Kubernetes provides its users with “readiness probes” and “liveness probes” that facilitate checking dependencies at runtime.
As the name suggests, readiness probes let the platform check whether an instance is currently able to accept requests.
Liveness probes, on the other hand, confirm that the microservice process itself is still healthy.
If a liveness probe fails over a given window of time and threshold of attempts, the container is restarted; if a readiness probe fails, the pod is simply taken out of the request path until it recovers.
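A hedged sketch of what those probes look like in a pod spec (the endpoint paths, port, and timings are illustrative, not prescribed):

```yaml
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0.0
    readinessProbe:            # "can this instance accept requests right now?"
      httpGet:
        path: /healthz/ready
        port: 8080
      periodSeconds: 5
      failureThreshold: 3      # removed from Service endpoints after 3 failures
    livenessProbe:             # "is the process itself still healthy?"
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3      # container is restarted after 3 failures
```

Keeping the two endpoints separate lets a service report “alive but not yet ready” during startup or while a dependency is briefly unavailable.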
3. Config
When we separate configuration from the code, the microservice becomes independent of its environment; it can be moved from its existing environment to another without any change to the source code.
The Config factor advises storing configuration in the process environment (e.g. environment variables).
To follow this factor, Kubernetes provides its users with ConfigMaps and Secrets, both of which can be managed in source repositories.
However, keep in mind that they are called “Secrets” for a reason: they should never be source controlled without an additional layer of encryption protecting them from unwanted eyes.
Containers can then retrieve these config details at runtime.
Storing configuration in the environment also helps with scalability as the number of services grows.
In short: any configuration that varies between deployment environments should live in the environment, specifically in environment variables.
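A minimal sketch of this pattern, assuming illustrative names (my-app-config, my-app-secrets, LOG_LEVEL, DB_PASSWORD): non-sensitive settings go in a ConfigMap, sensitive ones in a Secret, and both are injected into the container as environment variables.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"           # safe to keep in source control
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
type: Opaque
stringData:
  DB_PASSWORD: "change-me"    # placeholder; never commit real secrets unencrypted
---
# Inside the pod spec, both are surfaced as environment variables:
# containers:
#   - name: my-app
#     envFrom:
#       - configMapRef:
#           name: my-app-config
#     env:
#       - name: DB_PASSWORD
#         valueFrom:
#           secretKeyRef:
#             name: my-app-secrets
#             key: DB_PASSWORD
```

The code only ever reads `LOG_LEVEL` and `DB_PASSWORD` from its environment, so moving the service to a new environment is purely a matter of swapping these objects.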
4. Backing Services
Backing services are classified as attached resources, which are managed (attached and detached) by the execution environment.
When an application has a network dependency, we classify that dependency as a “Backing Service”.
At any given moment, a backing service can be attached or detached, and our microservice must be able to respond appropriately.
Let’s try to understand using an example.
Assume you are developing an application that interacts with a database. The first thing to do is isolate all interactions with that database behind connection details, supplied either via dynamic service discovery or via Config (for example, a Kubernetes Secret).
We must also build fault tolerance into those network requests, so that if the backing service fails at runtime, the microservice does not trigger a cascading failure across the system.
That targeted service may also be running in a separate container, somewhere.
Your microservice should not be affected, as all interactions with the database happen only through its API.
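Tying this back to the Config factor, here is a minimal sketch (the Secret name, credentials, and hostname are all illustrative): the database is attached purely through configuration, so re-pointing the microservice to a different database instance means changing this Secret, not the source code.

```yaml
# The database is a backing service attached via configuration only.
# Swapping the URL below detaches one database and attaches another
# without touching application code. All values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-db
type: Opaque
stringData:
  DATABASE_URL: "postgres://app:change-me@db.example.internal:5432/appdb"
```

Whether the database runs in another container, in another cluster, or as a managed cloud service, the microservice only ever sees `DATABASE_URL`.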
Alright, guys! 👋
We have now expanded on 4 of the 12 factors that govern the use of microservice containers.
We will cover the other 8 factors in our upcoming blogs.
So stay tuned.
Check out our free Zero to Hero Essentials Program at LetsUpgrade
Take care and stay safe.