Mastering Containerization: Building and Running Your First Secure Docker Container on Linux
A Step-by-Step Guide to Building and Deploying Secure Docker Containers for Junior DevOps Engineers
Introduction
As a DevOps engineer, one of the foundational skills you'll need is containerization. Containers allow you to package applications with their dependencies, ensuring consistency across development, testing, and production environments. Docker is the most popular tool for this, and it's deeply integrated with Linux kernels via features like cgroups and namespaces.
This article is tailored for junior DevOps or Linux engineers who are new to Docker. We'll walk through installing Docker on Ubuntu Linux (a common server OS), building a simple Node.js application container, running it securely, and deploying it with best practices in mind. By the end, you'll understand how to avoid common pitfalls like running containers as root or exposing unnecessary ports.
Why focus on security? In DevOps, "shift-left" security means embedding safeguards early. Poorly configured containers can lead to vulnerabilities like privilege escalation or data leaks. We'll follow guidelines from the Docker Bench for Security and CIS benchmarks.
This guide assumes basic Linux command-line knowledge (e.g., using sudo, navigating directories). If you're a senior engineer, you can use this as a reference for mentoring or auditing setups.
Prerequisites
Before starting, ensure you have:
A fresh Ubuntu 22.04 LTS or 24.04 LTS installation (virtual machine via VirtualBox/VMware or a cloud instance like AWS EC2 works fine). We'll use 22.04 for this example.
Internet access for package downloads.
A non-root user with sudo privileges (best practice: never run as root to minimize risks).
At least 2GB RAM and 10GB free disk space.
Basic tools: Install them with sudo apt update && sudo apt install -y curl git vim (or your preferred editor).
Verify your setup:
Check Ubuntu version: lsb_release -a.
Ensure you're not root: whoami should show your username.
Update packages: sudo apt update && sudo apt upgrade -y.
Step 1: Installing Docker Securely on Ubuntu
Docker offers a one-line convenience install script, but we'll set up the apt repository manually for transparency and security.
Step 1.1: Uninstall Old Versions (If Any)
Old packages like docker.io can conflict. Remove them:
sudo apt remove -y docker docker-engine docker.io containerd runc
Step 1.2: Set Up the Docker Repository
Add Docker's GPG key and repository to ensure packages are signed and trusted.
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Add the repo ($(lsb_release -cs) expands to your release codename automatically, e.g., jammy on 22.04):
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
Step 1.3: Install Docker Packages
Install the core components:
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Step 1.4: Start and Enable Docker Service
sudo systemctl start docker
sudo systemctl enable docker
Verify installation:
sudo docker --version
sudo docker run hello-world
The hello-world image should download and print a message. If it fails, check logs with sudo journalctl -u docker.
Step 1.5: Add Your User to the Docker Group (Security Note)
To run Docker without sudo (convenient but risky: membership in the docker group grants effective root access), add your user:
sudo usermod -aG docker $USER
Log out and back in, then test: docker ps. Security Best Practice: In production, use sudo or tools like podman (rootless). Avoid this on shared systems to prevent privilege escalation.
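If you want to try the rootless route mentioned above, podman offers a largely Docker-compatible CLI. A minimal sketch, assuming podman is available from your Ubuntu release's repositories (image names may need to be fully qualified):
# Install podman and run a container as your normal user (no daemon, no docker group)
sudo apt install -y podman
podman run --rm docker.io/library/hello-world
The rest of this guide sticks with Docker, but most docker commands map one-to-one to podman.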
Step 2: Creating a Simple Application for Containerization
We'll build a basic Node.js "Hello World" web server. This simulates a real app.
Step 2.1: Set Up Your Project Directory
mkdir ~/my-docker-app && cd ~/my-docker-app
Step 2.2: Create the Application Code
Create app.js with vim or nano:
vim app.js
Paste this:
const http = require('http');
const hostname = '0.0.0.0';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello from Docker on Linux!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Step 2.3: Create a Package.json for Dependencies
vim package.json
Paste:
{
"name": "docker-hello",
"version": "1.0.0",
"main": "app.js",
"scripts": {
"start": "node app.js"
},
"dependencies": {}
}
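If you happen to have Node.js installed on the host, you can sanity-check the app before containerizing it (entirely optional; the container image ships its own Node.js runtime):
node app.js &                 # start the server in the background
curl http://localhost:3000    # should print "Hello from Docker on Linux!"
kill %1                       # stop the background server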
Step 3: Writing a Secure Dockerfile
The Dockerfile defines how to build your image.
Step 3.1: Create the Dockerfile
vim Dockerfile
Paste this secure version:
# Use an official Node.js runtime as a parent image (slim variant for smaller size)
FROM node:20-slim
# Set the working directory in the container
WORKDIR /app
# Copy package.json and install dependencies (layer caching for faster builds)
COPY package*.json ./
RUN npm install
# Copy the application code
COPY . .
# Expose the port the app runs on (informational, doesn't publish)
EXPOSE 3000
# Run as a non-root user for security (create user and switch)
RUN useradd -m appuser
USER appuser
# Command to run the app
CMD ["npm", "start"]
Why this is secure:
Uses node:20-slim (minimal OS, reduces attack surface).
Installs deps before code (caching optimization).
Creates and switches to appuser (no root privileges).
Exposes port but doesn't auto-publish (we'll handle that later).
Step 3.2: Build the Docker Image
docker build -t my-docker-app:latest .
Check: docker images should list my-docker-app.
Best Practice: Tag images with versions (e.g., -t my-docker-app:v1.0) and use multi-stage builds for production to strip build tools.
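As a sketch of the multi-stage idea (our hello-world app has no real build step, so treat this as a pattern rather than a necessity here): dependencies are installed in a throwaway builder stage, and only the finished app is copied into the runtime image.
# Stage 1: install dependencies with the full toolchain available
FROM node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .

# Stage 2: clean runtime image containing only the built app
FROM node:20-slim
WORKDIR /app
RUN useradd -m appuser
COPY --from=build /app /app
USER appuser
EXPOSE 3000
CMD ["npm", "start"]
Build it the same way (docker build -t my-docker-app:v1.0 .); only the final stage ends up in the image you ship.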
Step 4: Running the Container Securely
Step 4.1: Run the Container
docker run -d --name my-app-container -p 8080:3000 my-docker-app:latest
-d: Detached mode (background).
--name: Easy reference.
-p 8080:3000: Publish container port 3000 on host port 8080 (stick to unprivileged ports above 1024).
Test: curl http://localhost:8080 should return "Hello from Docker on Linux!"
Step 4.2: Inspect and Manage
Logs: docker logs my-app-container.
Stop: docker stop my-app-container.
Remove: docker rm my-app-container.
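A few more commands that are handy when something looks off (the container name matches the one used above):
docker inspect my-app-container              # full JSON config: ports, mounts, user, restart policy
docker stats --no-stream my-app-container    # one-shot CPU and memory usage
docker exec -it my-app-container sh          # open a shell inside the running container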
Security Additions (combined into a single run command after this list):
Add --read-only to make filesystem read-only (if app doesn't write files).
Use --cap-drop=ALL to drop Linux capabilities.
Scan for vulnerabilities: Install trivy (it isn't in Ubuntu's default repositories; add Aqua Security's apt repo or grab the .deb from the project's releases page), then run trivy image my-docker-app:latest.
For secrets (e.g., API keys), use Docker secrets where possible; environment variables (docker run -e MY_SECRET=value ...) work but are visible to anyone who can run docker inspect.
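Putting those flags together, a hardened version of the run command from Step 4.1 might look like this (a sketch: --read-only is safe here only because the app writes nothing to disk; add --tmpfs /tmp if your process needs scratch space, and --security-opt no-new-privileges is an extra, commonly recommended option beyond the flags listed above):
# Hardened run: read-only filesystem, all capabilities dropped, no privilege escalation
docker run -d --name my-app-container \
  -p 8080:3000 \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  my-docker-app:latest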
Step 5: Pushing to a Registry and Basic Deployment
For DevOps workflows, push to Docker Hub or a private registry.
Step 5.1: Create a Docker Hub Account
Sign up at hub.docker.com (free tier).
Step 5.2: Tag and Push
docker tag my-docker-app:latest yourusername/my-docker-app:latest
docker login
docker push yourusername/my-docker-app:latest
Security: Use access tokens instead of passwords. In CI/CD, store creds in secrets.
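For example, with a Docker Hub access token exported into an environment variable (DOCKERHUB_TOKEN is just an illustrative name):
# Log in non-interactively using an access token instead of your password
echo "$DOCKERHUB_TOKEN" | docker login -u yourusername --password-stdin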
For deployment, use Docker Compose for multi-container apps or Kubernetes for orchestration (advanced topic).
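As a taste of Compose, here is a minimal sketch of a compose.yaml for this single-service app, written as a shell heredoc so you can paste it straight into the project directory (the service name web is arbitrary):
# Create a minimal Compose file and start the stack
cat > compose.yaml <<'EOF'
services:
  web:
    image: my-docker-app:latest
    ports:
      - "8080:3000"
    read_only: true
    cap_drop:
      - ALL
EOF
docker compose up -d     # and later: docker compose down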
Best Practices and Security Governance
Image Management: Prefer official or verified base images. Regularly update with docker pull and rebuild.
Least Privilege: Run as non-root, drop capabilities, and use seccomp/AppArmor profiles. Docker applies a default seccomp profile; avoid --security-opt seccomp=unconfined, and pass a custom profile with --security-opt seccomp=/path/to/profile.json when you need to tailor it.
Scanning and Auditing: Integrate tools like Docker Scout or Clair in pipelines. Run docker scout cves my-docker-app (if enabled).
Networking: Use custom networks: docker network create mynet and --network mynet.
Volumes: For persistent data, use -v /host/path:/container/path:ro (read-only where possible).
Monitoring: Set up Prometheus with exporters (e.g., Node Exporter for host metrics) to keep an eye on your container hosts.
Governance: Follow the CIS Docker Benchmark; enable content trust (export DOCKER_CONTENT_TRUST=1) and sign your images.
Common Pitfalls for Juniors: Don't hardcode secrets in Dockerfiles. Avoid latest tags in production. Clean up dangling images with docker system prune.
In a real DevOps setup, integrate this with Git for version control and CI tools like Jenkins or GitHub Actions to automate builds.
Conclusion
You've now built and run a secure Docker container on Linux! This is the gateway to advanced DevOps: orchestrating with Kubernetes, automating with Ansible, or scaling on clouds. Practice by containerizing a personal project, then explore Docker Compose for multi-service apps.
If issues arise, check Docker docs or forums like Stack Overflow. Remember, DevOps is iterative—start simple, secure early, and automate everything. Happy containerizing!