Overview of deployment with Azure Kubernetes Service
In this article, I will show you a solution for zero-downtime deployment in Azure Kubernetes Service. To put it in context, we first go through some deployment strategies, and I will pick the one that fits our needs; some of them are supported by Kubernetes natively, some are not (yet). Next, I will outline a system overview by showing you the necessary Kubernetes objects in our AKS cluster. The following part of the article presents our Azure DevOps deployment pipeline and briefly goes through the scripts and other settings that do the main thing: zero-downtime deployment. Finally, I will wrap things up.
Deployment strategies in Azure services
Many deployment strategies can help you deploy your application to production or any other environment. They all have their own usage scenarios, benefits, and drawbacks. As a precondition, consider that you might have multiple running instances of your application that need to be deployed.
Let us see some of them:
Recreate: First, every instance of the old version is removed, then the instances of the new version are rolled out.
We use this technique during development, when downtime does not matter.
- Kubernetes services support this strategy out of the box.
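As a minimal sketch of what this looks like in a Kubernetes manifest (the names and the image below are placeholders, not taken from our actual setup), setting the strategy type to Recreate is all that is needed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # placeholder name
spec:
  replicas: 3
  strategy:
    type: Recreate             # terminate all old pods before starting the new ones
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.azurecr.io/myapp:1.0.0   # placeholder image
```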
Ramped: The central concept of this strategy is to replace the instances of the old version with instances of the new version one by one.
The main gain of this solution is that there is no downtime. In contrast, it has some serious drawbacks: the rollout and rollback are time-consuming, and since you have no influence on the traffic, requests hit both versions during the rollout, which can lead to version problems.
- Natively available in Kubernetes services.
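A sketch of a ramped (rolling) update, again with placeholder names; maxSurge and maxUnavailable control how many instances are replaced at a time:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 0        # never go below the desired replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.azurecr.io/myapp:2.0.0   # placeholder image of the new version
```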
Blue/Green: The instances of the new version are deployed to the target environment while the traffic is still routed to the old instances. Once the new instances have been verified, the traffic is switched to them. Lastly, the old instances are deleted.
This wins us three things: no downtime, fast rollout/rollback, and control over the traffic, so no version problems. The downside of this technique is the cost of running both the old and the new instances of the application at the same time. We can use this approach in a production environment.
- Not supported by Kubernetes services out of the box.
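One common way to implement Blue/Green with plain Kubernetes objects (a sketch with placeholder names, not our production manifests) is to run two Deployments labelled version: blue and version: green and let a single Service decide which one receives the traffic; switching is then a one-line selector change:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp                  # placeholder name
spec:
  selector:
    app: myapp
    version: blue              # change to "green" to switch all traffic at once
  ports:
    - port: 80
      targetPort: 8080
```

Because the switch is a single selector update, there is no moment when the Service points at nothing, which is exactly the kind of zero-downtime cut-over we are after.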
Canary: With this strategy, you also run the old and the new instances side by side. However, the switch of the traffic from the old instances to the new ones is different: only a weighted portion of the traffic is routed to the new instances. After some iterations, you send the whole traffic to the new instances, and the old version can be terminated. In effect, a subset of the users is testing the new release.
The pros here are fast rollback, measurable performance and failures, and more control over the traffic. The cons: slow rollout, it can be expensive, and there is no control over the traffic at the level of individual users. By using an ingress controller like NGINX, the weighted traffic can be routed far more precisely and cost-effectively. This approach can be used in a production environment as well.
- Not supported by Kubernetes services out of the box.
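With the NGINX ingress controller, a weighted canary can be expressed declaratively. The sketch below assumes the ingress-nginx canary annotations; the host and Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary                                  # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # roughly 10% of the requests
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com                         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v2                        # Service of the new version (placeholder)
                port:
                  number: 80
```

Raising the weight step by step and finally deleting the canary Ingress would complete such a rollout.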
A/B testing: Pretty much the same as canary. The difference is that instead of using a weight for switching the traffic, you use a so-called canary cookie or header, so you can specify very precisely the subset of users who are routed to the new instances. As you might know, A/B testing is originally a technique for making business decisions by rolling out the version that converts best.
A remarkable benefit over the weighted canary is the complete control over the traffic. The drawbacks are still the slow rollout and the need for a Layer-7 load balancer like NGINX. The strategy can be very useful in production environments.
- Not supported by Kubernetes services out of the box.
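The same ingress-nginx canary mechanism can route by a cookie or a header instead of a weight, which is the basis of the A/B style routing described above (again a sketch with placeholder names):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ab                                       # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # requests carrying the cookie "canary=always" go to the new version,
    # "canary=never" forces the old one, everything else follows the default route
    nginx.ingress.kubernetes.io/canary-by-cookie: "canary"
    # alternatively, route by a custom header:
    # nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com                          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v2                         # Service of the new version (placeholder)
                port:
                  number: 80
```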
Shadow: The new instances are deployed along with the old instances. After rollout, the production traffic is still served by the old version, but it is also mirrored (copied) to the new version, e.g., with the help of NGINX, while only the old version's responses reach the users.
With this strategy, performance tests with full production traffic can be run quickly; furthermore, there is no impact on the users. On the other hand, it is expensive since we are doubling the required resources.
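For completeness, a possible way to mirror traffic with ingress-nginx is sketched below. The mirror-target-uri annotation is an assumption about the controller version in use, and the names and host are placeholders, so treat this as an illustration rather than a recipe:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                                          # placeholder name
  annotations:
    # a copy of every request is sent here; the mirror's responses are discarded
    nginx.ingress.kubernetes.io/mirror-target-uri: "http://myapp-v2.default.svc.cluster.local$request_uri"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com                          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp                            # the old version still answers the users
                port:
                  number: 80
```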
You can read more about these deployment strategies at https://thenewstack.io/deployment-strategies/.
Now that we have seen some exciting deployment strategies, it is time to choose the right one for our needs. We needed a deployment strategy that satisfies the following requirements:
- Fast rollout and rollback.
- The ability to check the newly deployed version while the users are still routed to the old version.
- Zero-downtime switch to the new version.
- Can be used in the production environment.
The requirements above imply that we need to mix the Blue/Green and the A/B testing strategies to fit our needs.
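To make the mix concrete, here is one possible (hypothetical) way to wire the two strategies together with the building blocks sketched earlier: the blue and green Deployments run side by side, testers reach the green version through a cookie-gated canary Ingress (a regular, non-canary Ingress for the same host pointing at the main Service is assumed), and the final zero-downtime switch is a single selector change on the main Service. All names and hosts are placeholders:

```yaml
# Main Service: all regular users hit the blue version until the switch.
apiVersion: v1
kind: Service
metadata:
  name: myapp                  # placeholder name
spec:
  selector:
    app: myapp
    version: blue              # flip to "green" once the new version is verified
  ports:
    - port: 80
      targetPort: 8080
---
# Canary Ingress: only requests carrying the cookie "canary=always" reach green.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-green-test       # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "canary"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-green        # Service selecting only the green pods (placeholder)
                port:
                  number: 80
```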
System overview for zero-downtime deployment using Kubernetes services
Now let me show you an overview of the relevant components of our system. The figure below shows the infrastructure requirements regarding the