
Conduct Vulnerability Management for Your Kubernetes Applications

Kubernetes is an open source container orchestration tool initially developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Kubernetes offers a highly resilient infrastructure for managing containerized application deployments, with auto-scaling and self-healing capabilities that minimize downtime.

By default, Kubernetes ships with minimal security hardening, which means we have to harden every component associated with it. Securing Kubernetes is a complex task, considering the underlying components and their interdependencies. We need to consider various factors when securing Kubernetes, starting with secure development, application building, and deployment.

This article will take you through the things that can be done to secure Kubernetes clusters and workloads using best practices and OSS tools.

We can inject security at three different levels:

  1. Security in the Build Phase
  2. Security in the Deploy/Run Phase
  3. Security in the Infra & Platform

1. Security in the Build Phase

Before we go into the details, please note that the CI/CD pipeline used in the example is from Azure Pipelines.

a. Software Composition Analysis (SCA)
SCA is a part of the application security testing tool suite, which helps manage open source libraries used during application development. There are many OSS libraries available on the internet, but when you use them for enterprise application development, you might be infringing the licenses written by the OSS companies. Luckily, we have WhiteSource, a prominent player in open source software security and compliance management.

WhiteSource Bolt is part of the WhiteSource security suite and has been specifically developed to integrate with Azure DevOps, Azure DevOps Server, and GitHub Actions. Bolt works seamlessly in the continuous integration (CI) phase: it scans all the libraries used in the application source code, generates a detailed summary of libraries with security flaws, and lists the licenses being consumed. A free WhiteSource Bolt extension is available in the Azure DevOps Marketplace, which gives you 5 scans per day; we have used this version for the illustration. Bolt can be installed from the Azure DevOps Marketplace.
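As a sketch, a Bolt scan can be wired into Azure Pipelines as an ordinary task after the dependency-restore step, so every resolved package is visible to the scanner. The project glob and the task version below are assumptions; verify them against the extension actually installed from the Marketplace.

```yaml
# Azure Pipelines sketch: run WhiteSource Bolt after restoring dependencies.
# The project glob and the task version (@20) are assumptions; check the
# Marketplace extension you installed.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: DotNetCoreCLI@2          # restore dependencies so Bolt can see them
    inputs:
      command: 'restore'
      projects: '**/*.csproj'

  - task: WhiteSource Bolt@20      # scans resolved libraries for vulnerabilities and licenses
```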

Marketplace in Azure DevOps

Before we dockerize an application, we should scan the source code for vulnerabilities in the libraries it uses. This fosters a shift-left approach. Below is a result generated after integrating Bolt with Azure Pipelines.

WhiteSource scan report

The report tells us about the security flaws in the OSS libraries used and flags any associated licenses we may be infringing. Suggested remediations are also displayed for action.


b. Linting Docker Files
Linting is an essential practice that must be embedded within CI/CD pipelines. It helps to identify whether Docker files have been written using best practices and evaluates the security posture. Hadolintis a good tool for linting Docker files and can be executed inside a container as well.



Hadolint scan results published in the console

c. Container Image Scanning
The base image you use to containerize an application can carry many vulnerabilities. It may also be loaded with unwanted tools and configurations, which widens the attack surface. Multiple tools in the market scan for and catch such flaws. The tool I like is Trivy by Aqua Security, a vulnerability scanner that can detect vulnerabilities in OS packages and application dependencies.
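A Trivy scan can be added as a pipeline step after the image is built. This is a sketch: the registry and image name are hypothetical, and gating on HIGH/CRITICAL severities is one possible policy, not the only one.

```yaml
# Azure Pipelines step: scan the built image with Trivy.
# The registry/image name is hypothetical; tune the severity gate to your policy.
steps:
  - script: |
      curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b ./bin
      ./bin/trivy image --exit-code 1 --severity HIGH,CRITICAL \
        myregistry.azurecr.io/myapp:$(Build.BuildId)
    displayName: 'Scan container image with Trivy'
```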

The scan results summarize the flaws inside the Docker image and list them by priority.

Console output after scanning the Docker image


The YAML file for the entire pipeline can be referred to below.

2. Security in the Deploy/Run Phase

After the build phase, the next layer of security applies to the manifest files that will be deployed. Best practices must be followed to reduce the attack surface. Below are a few methods we can follow.

a. Enforce resource limits to prevent denial-of-service (DoS) attacks
When you deploy an application to a Kubernetes cluster, resource limits are unbounded by default, which means that if the application has a memory leak, it could consume an entire worker node's resources and cause an outage. A similar scenario occurs when a DoS attack takes place, potentially bringing down the cluster itself. A good approach is to enforce resource limits in the Kubernetes manifest.
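A minimal sketch of such a manifest follows; the workload name, image, and numbers are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp                               # hypothetical workload name
spec:
  containers:
    - name: webapp
      image: myregistry.azurecr.io/myapp:1.0 # hypothetical image
      resources:
        requests:                            # minimum the scheduler reserves for the pod
          cpu: "250m"
          memory: "128Mi"
        limits:                              # hard ceiling; exceeding the memory limit
          cpu: "500m"                        # gets the container OOM-killed
          memory: "256Mi"
```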

b. Enforce non-root permissions in the security context
By default, containers in Kubernetes run with root permissions unless the image specifies otherwise. Following the principle of least privilege, we should grant only the permissions that are required; excess authorization only increases the security flaws in a system.
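In the manifest this looks like the container-level fragment below; the UID is an arbitrary example and must correspond to a user that actually works inside your image:

```yaml
      # container-level securityContext fragment
      securityContext:
        runAsNonRoot: true       # kubelet refuses to start the container as UID 0
        runAsUser: 10001         # example non-root UID; must be usable in the image
```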

c. Enforce read-only file systems in the security context
If an application does not require write access inside the container, it is better to downgrade the file system to read-only. In short, excess permission is dangerous from a security perspective.
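A sketch of the corresponding fragment; the emptyDir mount is an assumption for apps that still need a writable scratch directory such as /tmp:

```yaml
      securityContext:
        readOnlyRootFilesystem: true   # container cannot write to its root file system
      volumeMounts:
        - name: tmp                    # writable scratch space only where needed
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```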

d. Restrict all access to the OS layer by default
We need to restrict default permissions to the underlying OS, and it is best to specify the capabilities field in the manifest. This segregates what actions the deployment can perform against the container OS.
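For example, we can drop all Linux capabilities and add back only what the workload needs; NET_BIND_SERVICE here is purely an illustration:

```yaml
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                 # start with zero Linux capabilities
          add: ["NET_BIND_SERVICE"]     # add back only what the app truly needs (example)
```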

3. Security in the Infra & Platform

The Kubernetes infrastructure comprises multiple critical components across the control plane and worker nodes, so ensuring security in the infrastructure is crucial. There are several options for deploying Kubernetes: on bare metal, on-premises, and in the public cloud. Kubernetes was designed to be highly flexible, and users can choose the option that suits their needs.


Kubernetes offers many out-of-the-box customizations that benefit users. At the same time, this flexibility introduces weaknesses when it comes to security. Engineers maintaining and deploying Kubernetes need to know all the potential attack vectors and vulnerabilities that poor configuration can bring.


Kubernetes is an evolving space, and so are its security threats. One approach is to stay current with the latest Kubernetes releases and perform continuous security evaluation, yet many operational hurdles exist in working that way. It is evident that one cannot be even slightly lax about security practices, given the business-critical nature of the applications running on Kubernetes.

