August 7, 2025
Docker images are blueprints that define the files, libraries, and configurations needed to launch containers, while containers are the live, isolated processes created from those images.
Containers provide lightweight, portable, and secure environments for running applications, making them central to modern cloud-native and microservices architectures.
Starting with minimal, hardened images is critical to reducing attack surfaces and avoiding unnecessary risks introduced by shells, package managers, or bloated Linux distributions.
Chainguard offers secure, zero-CVE images designed for production, helping organizations streamline development and strengthen container security.
Docker is a system for deploying and running software within what are called containers. A container is essentially a lightweight alternative to a virtual machine. A container doesn’t have its own operating system, but it does have its own file system and is isolated from other containers running on the same computer. Thus a container is an isolated process for each of your applications.
An image is a blueprint for a container, providing a package of all the files, binaries, libraries, and configurations required to run a container. Typically, containers should be minimal, providing only the main components required by the deployed application.
In summary, containers are what you get when you run an image. Let’s take a closer look at the differences between Docker images and containers, and then dive a bit deeper into a discussion on architecture.
An image serves as a blueprint for a container, and it specifies the components the container needs to run. Typically a container will run a single application, such as a web server. But the web server might need additional files and apps to run - these are called dependencies, and all of this would be included in the image.
When you launch a container, the container system or engine (usually Docker) creates an isolated environment for the container, setting up a file system and memory. The file system is populated with files from the image, hence the use of the image as a blueprint. Docker then launches a program specified by the image, such as the web server application.
Because each container is isolated, you can reuse an image to launch multiple containers, and each container will run independently.
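As a concrete illustration, here is a minimal sketch using the publicly available nginx image; the tag, container names, and port numbers are examples only, and any web server image would behave the same way:

```
# Download an image (the blueprint); nothing is running yet.
docker pull nginx:1.27

# List the images stored locally.
docker images

# Launch two independent containers from the same image,
# each mapped to a different port on the host.
docker run -d --name web1 -p 8080:80 nginx:1.27
docker run -d --name web2 -p 8081:80 nginx:1.27

# List the running containers created from that image.
docker ps
```

The image appears once in `docker images`, while `docker ps` shows two separate containers created from it.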
Through the principle of process isolation, containerized software can run securely without other software interacting with it except under well-defined circumstances. For example, in the event that one container is compromised, the intruder won’t automatically have full access to the other containers or to the host system. Containerization is also beneficial in that the containers can be deployed on nearly any type of hardware or instance type, meaning you can optimize for cost while targeting optimal performance.
Images can be general-purpose (such as the base files for an operating system such as Ubuntu) or specialized (such as an image that runs MySQL or other third-party applications). Because images are isolated, they can even run different versions of an application; for example, one image might have an older version of MySQL and another might have a newer one. When launching containers from each, you would then have two independent versions of MySQL running simultaneously on your computer.
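For example, the sketch below runs an older and a newer MySQL release side by side, each in its own container. The specific tags, names, password, and host ports are illustrative only:

```
# An older MySQL release in one container...
docker run -d --name mysql-old -e MYSQL_ROOT_PASSWORD=example -p 3306:3306 mysql:5.7

# ...and a newer release in another container, on a different host port.
docker run -d --name mysql-new -e MYSQL_ROOT_PASSWORD=example -p 3307:3306 mysql:8.4
```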
Since containers are easy to deploy and manage, developers have embraced containerization engines, like Docker, as a de facto standard for running microservices, and have shared publicly available images on sites such as Docker Hub and GitHub. Individual developers can upload their images to these repositories, as can organizations, including those, like Chainguard, that have become a Docker Verified Publisher (DVP), a program Docker began in 2021.
However, a public repo such as Docker Hub isn’t required to run containers. Typically an engineering org will deploy a completed version of their software in the form of an image, and upload that image to a cloud service. The cloud service will then deploy as many containers based on that image as needed, depending on what the developers specify, taking into consideration current usage volume. During spikes of high usage, the cloud system might deploy more containers to handle the additional load, and then shut them down when the load slows.
By architecting software as containers, developers can have those containers work together as microservices: smaller applications that each provide a service to the others, resulting in a robust architecture that's quick and easy to deploy. Developers have tools for managing large systems of containers - these tools are called orchestration software. Popular container orchestration engines include Kubernetes and Docker Swarm.
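In Kubernetes, for instance, scaling a containerized service up or down is a single command. The sketch below assumes a Deployment named web already exists in the cluster; the name and replica counts are placeholders:

```
# Run five replicas of the "web" containers during a traffic spike...
kubectl scale deployment/web --replicas=5

# ...and scale back down when the load drops.
kubectl scale deployment/web --replicas=2
```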
Docker containers are not virtual machines, and as such they require a host operating system to run. Typically that host operating system will be a specific Linux distribution such as Ubuntu, Debian, or CentOS, and it provides the underlying kernel and resources the containers need to run.
Another important aspect of understanding the difference between images and containers is that images can be layered on top of each other. For example, when building an image that contains a web server, a developer would likely start with a base image such as Ubuntu Linux, which defines the base operating system files provisioned for the container. From there, the developer would add the web server onto the base image, creating what's called a second layer in their Dockerfile. Then the developer might add custom first-party software that runs on the web server, and put that software into another layer. The final output is a stack of layers that behaves as a single image containing the Linux files, the web server files, and the application files. When this final image is run, the result is an active container.
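In a Dockerfile, each instruction adds a layer on top of the previous one. The sketch below is illustrative only; the base image, web server, and file paths are assumptions, not a recommended starting point:

```dockerfile
# Layer 1: the base operating system files.
FROM ubuntu:24.04

# Layer 2: the web server installed on top of the base.
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*

# Layer 3: the first-party application files served by the web server.
COPY ./site /var/www/html

# The program launched when a container is started from this image.
CMD ["nginx", "-g", "daemon off;"]
```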
It's important to recognize that starting with a container image that defines a full Linux distribution such as Ubuntu means you are including a lot of unnecessary files and programs. These are software components required for older paradigms of operating system deployments, like virtual machines, that are not required for containers. One such program is called a shell, which is an interactive program whereby you can type Linux commands and run them right inside the container. Additionally, such Linux distributions usually contain what are called package managers, which allow a user running the shell to install additional software inside the container. This is a substantial security risk; an intruder could gain access to the container, open up a shell, install software, and then do malicious work.
As such, it’s important to start with a container image that is stripped down and doesn’t contain any shells or package managers.
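One common approach is a multi-stage build: compile the application in a full-featured build image, then copy only the resulting binary into a minimal runtime image that ships no shell or package manager. The image names and paths below (including the Chainguard static image) are examples, not prescriptions:

```dockerfile
# Build stage: a full toolchain image used only at build time.
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: a minimal, shell-free image containing only the binary.
FROM cgr.dev/chainguard/static:latest
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because the runtime stage contains no shell or package manager, an intruder who reaches the container has far fewer tools to work with.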
Both images and containers are required as part of the orchestration of applications as microservices. The developers build their software upon the layered images; these images are then deployed to the cloud, from which containers are run.
But typically during development, the developers will run Docker on their own local workstations, and run their applications as containers. This allows them to work in an environment that’s similar to the production environment in the cloud.
Here’s how it goes:
During development, the developers will likely start with a base image that they run as a container; they then deploy their application inside the container as they develop and test their application.
Once their project is complete, they will build a new image from their environment, starting with the base image as the first layer, and their software added as the next layer.
The resulting image will be deployed to the cloud (example commands follow these steps).
This image will serve as the blueprint for containers that are deployed into staging and production.
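Assuming a hypothetical application called myapp and a registry at registry.example.com, the build-and-publish step sketched above might look like this:

```
# Build the image from the Dockerfile in the current directory.
docker build -t myapp:1.0.0 .

# Tag it for the registry the cloud environment pulls from.
docker tag myapp:1.0.0 registry.example.com/myapp:1.0.0

# Push the image; the cloud service launches containers from it.
docker push registry.example.com/myapp:1.0.0
```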
Additionally, the developers will likely use a version control system tied closely to the creation of the images and containers. Before final deployment, they will keep their code on a separate branch with a name such as "development." Here, they'll follow the same outline above, except instead of deploying to production, they'll deploy to containers meant for internal testing, often called staging. They'll use their built images to deploy containers in this special environment, run tests, and make sure everything is correct. If they find problems, they'll fix them locally and repeat all the steps, deploying their images and containers to the staging area again.
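A hedged sketch of that staging loop, assuming a git branch named development and the same hypothetical registry as above:

```
# Work on the development branch.
git checkout development

# Build an image identified by the commit it was built from.
docker build -t registry.example.com/myapp:dev-$(git rev-parse --short HEAD) .

# Push it so the staging environment can launch containers for testing.
docker push registry.example.com/myapp:dev-$(git rev-parse --short HEAD)
```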
After they are given the go-ahead that the bugs are fixed, they’ll move their development files to the main version control branch used for production deployments. Then they’ll repeat the steps above, create the images for final deployment, push the images to the cloud, and launch the containers based on those final images. The result is several running containers working together as microservices.
This entire process can then be repeated over and over as they build new versions of the software. Each version of the software will have its own set of images, meaning they can simultaneously run different versions for different customers and clients as necessary. For example, some clients might want to get in on a beta program, trying out the newer version before it’s ready. Or some customers might not have purchased the newest version and still want to use the older version, which is also running in the cloud.
By managing their images and containers this way, the developers can also patch earlier versions of the software when bugs are found, by again following the above steps, and then applying the resulting new images to the older versions running in the cloud.
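Because each release has its own images, an older version can be patched without touching the newer one. The branch name and version numbers below are hypothetical:

```
# Check out the branch for the older release and apply the fix there.
git checkout release-1.8

# Build and publish a new patch image for that release line only.
docker build -t registry.example.com/myapp:1.8.5 .
docker push registry.example.com/myapp:1.8.5

# The 2.x images already in the cloud are unaffected; containers running
# version 1.8 can now be redeployed from the patched 1.8.5 image.
```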
Understanding the difference between containers and images is vital not only to developers but also to managers and executives as they navigate the complex world of containerization. They need to be sure their teams are starting with minimal container images that have secure-by-design traits such as NIST-level hardening, zero CVEs, and locked-down configurations that follow security best practices. This ensures that developers build images without the extra overhead, attack surface, and security risks of large base images such as Ubuntu and other full Linux distros.
Chainguard’s images have low-to-zero CVEs and integrate natively with your CI/CD to make containerization seamless. Ready to learn more? Contact us.