
Implementing Centralized Logging with the ELK Stack on Linux: A Secure, Scalable Guide for DevOps Engineers

Hey there, little buddy! Imagine all your toys are like little computers, and every time they play or bump into something, they write it down in their own tiny notebooks. But if you have lots of toys, reading all those notebooks one by one would be super tiring! So, what if you had a magic big book that collects all the stories from every toy's notebook, keeps them organized, and even alerts you if a toy is about to break? That's what centralized logging is! We use tools called ELK Stack—Elasticsearch (the searcher), Logstash (the collector), and Kibana (the picture viewer)—on Linux to gather all these "stories" (logs) from servers and apps, make sense of them, and keep everything safe with locks and secret codes so no bad guys can read your toys' secrets.

This article covers fresh ground: we're diving into log management, which complements monitoring (like we covered before with Prometheus) but focuses on event-based records for debugging, auditing, and compliance. Juniors will benefit by learning real-world troubleshooting; seniors can optimize for high-volume production. We'll follow best practices: Docker for isolation, TLS for encryption, RBAC for access, and governance such as retaining logs for 90 days per regulations (e.g., GDPR/HIPAA). We'll go deep with configs, integrations, and advanced filtering. We assume Ubuntu 22.04 LTS; scaling to clusters comes later.

Why ELK Stack for Centralized Logging? Basics for Juniors

Logs are records of events (e.g., errors, user actions) from apps/servers. Centralized logging collects them in one place for search, analysis, and alerts. ELK is open-source, scalable, and Linux-friendly.

  • Elasticsearch: Stores and searches logs fast (like a smart library).

  • Logstash: Processes and filters logs (the organizer).

  • Kibana: Visual dashboards (the storyteller).

Benefits: Faster issue resolution, security auditing (spot attacks), and compliance. Security governance: Encrypt data, control access, and monitor access to the logs themselves.

Prerequisites: Secure Your Linux Foundation

  1. Update Ubuntu:

    sudo apt update && sudo apt upgrade -y

    sudo apt install curl gnupg apt-transport-https -y

Security: Enable auto-updates: sudo apt install unattended-upgrades -y && sudo dpkg-reconfigure unattended-upgrades.

  2. Install Docker (for Containerized ELK): Best practice: Containerize to isolate—prevents conflicts.

    sudo apt install docker.io -y

    sudo systemctl start docker && sudo systemctl enable docker

    sudo usermod -aG docker $USER # Logout/login

Governance: Use rootless Docker if possible; scan images with Trivy (curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin). A scan example follows this list.

  3. Firewall Setup:

    sudo ufw allow 9200/tcp # Elasticsearch

    sudo ufw allow 5601/tcp # Kibana

    sudo ufw allow 5044/tcp # Beats (later)

    sudo ufw enable

    Limit to trusted IPs: sudo ufw allow from 192.168.1.0/24 to any port 9200.

  4. Java for Elasticsearch (if not Docker): But we'll use Docker, so skip unless bare-metal.
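
For example, once Trivy is installed (item 2 above), you can scan the exact images this guide uses before running them:

    trivy image docker.elastic.co/elasticsearch/elasticsearch:8.14.3   # report CVEs in the ES image
    trivy image docker.elastic.co/logstash/logstash:8.14.3
    trivy image docker.elastic.co/kibana/kibana:8.14.3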

Step 1: Set Up Elasticsearch – The Log Library

Elasticsearch indexes logs for quick searches.

  1. Run in Docker: Create network: docker network create elk-net.

    docker run -d --name elasticsearch --net elk-net -p 9200:9200 -p 9300:9300 \
      -e "discovery.type=single-node" -e "xpack.security.enabled=true" \
      docker.elastic.co/elasticsearch/elasticsearch:8.14.3

    -e xpack.security.enabled=true enables security. Get initial password: docker logs elasticsearch | grep "generated password".

  2. Set Passwords Securely: Exec into container: docker exec -it elasticsearch /bin/bash. Run /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic (note new password). Governance: Use strong passwords (20+ chars, mix types); store in Vault (later integration).

  3. Enable TLS: Generate certs on host:

    # CN matches the container's hostname on the elk-net Docker network
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=elasticsearch" -keyout es.key -out es.crt
    sudo chown 1000:1000 es.{crt,key} # Elasticsearch runs as UID 1000 in the container

Mount in Docker: Add -v $(pwd)/es.crt:/usr/share/elasticsearch/config/es.crt -v $(pwd)/es.key:/usr/share/elasticsearch/config/es.key. Update elasticsearch.yml (also via volume mount); certificate paths resolve relative to the config directory:

    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.certificate: es.crt
    xpack.security.http.ssl.key: es.key

Restart the container; a consolidated run command is sketched after this list.

  4. Test: curl -u elastic:password -k https://localhost:9200 (should show cluster info).
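
Putting the pieces together, here is a minimal sketch of the full run command with the TLS mounts from above (the heap setting previews the note below):

    docker rm -f elasticsearch   # remove the earlier container first
    docker run -d --name elasticsearch --net elk-net -p 9200:9200 -p 9300:9300 \
      -e "discovery.type=single-node" -e "xpack.security.enabled=true" \
      -e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
      -v $(pwd)/es.crt:/usr/share/elasticsearch/config/es.crt \
      -v $(pwd)/es.key:/usr/share/elasticsearch/config/es.key \
      -v $(pwd)/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
      docker.elastic.co/elasticsearch/elasticsearch:8.14.3

If you mount your own elasticsearch.yml, keep network.host: 0.0.0.0 in it (the image's default) so the published ports stay reachable.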

Deep: For production, use multi-node cluster with discovery.seed_hosts. Set JVM heap: -e ES_JAVA_OPTS="-Xms4g -Xmx4g".
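
As a sketch of what that looks like, each node's elasticsearch.yml in a three-node cluster might contain (node names and hosts are illustrative):

    cluster.name: elk-prod
    node.name: es-node-1   # unique per node
    network.host: 0.0.0.0
    discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
    cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]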

Step 2: Configure Logstash – The Log Organizer

Logstash parses and filters incoming logs.

  1. Run in Docker (we'll re-create the container with config mounts once the pipeline file exists; see the sketch after this list):

    docker run -d --name logstash --net elk-net -p 5044:5044 -p 9600:9600 \
      docker.elastic.co/logstash/logstash:8.14.3

  2. Create Config (logstash.conf on host). Note the filter matches on [log][file][path] (Filebeat 7+ replaced the old source field), and avoid literal passwords in production; the Logstash keystore can supply them:

    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/config/ls.crt"
        ssl_key => "/config/ls.key"
      }
    }
    filter {
      if [log][file][path] =~ "nginx" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
    }
    output {
      elasticsearch {
        hosts => ["https://elasticsearch:9200"]
        user => "elastic"
        password => "yourpassword"
        ssl => true
        cacert => "/config/es.crt"
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }

    Mount: place ls.crt, ls.key, and es.crt in a certs/ directory, then -v $(pwd)/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -v $(pwd)/certs:/config (full run command sketched after this list).

  3. Generate TLS for Logstash: Similar to ES: openssl req -x509 ... -keyout ls.key -out ls.crt (set the CN to the hostname Filebeat will dial, e.g., localhost, so hostname verification passes).

  4. Security: Enable X-Pack in logstash.yml: xpack.monitoring.enabled: true. Use roles for outputs.
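
Tying the mounts together, a minimal sketch of re-creating the Logstash container with the config and certs from above:

    docker rm -f logstash   # replace the container started earlier
    docker run -d --name logstash --net elk-net -p 5044:5044 -p 9600:9600 \
      -v $(pwd)/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
      -v $(pwd)/certs:/config \
      docker.elastic.co/logstash/logstash:8.14.3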

Deep: Custom filters with Ruby code for complex parsing. Scale with multiple pipelines.
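
As a hedged example of that Ruby escape hatch, this filter stamps each event with an ingest timestamp (the field name ingested_at is an assumption):

    filter {
      ruby {
        # event is the Logstash event object; set() adds or overwrites a field
        code => 'event.set("ingested_at", Time.now.utc.iso8601)'
      }
    }

For multiple pipelines, list each one in config/pipelines.yml with its own pipeline.id and path.config.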

Step 3: Launch Kibana – The Dashboard Storyteller

Kibana for visualization.

  1. Run in Docker: Kibana 8 refuses the elastic superuser for its own Elasticsearch connection, so first set a password for the built-in kibana_system user (docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system), then (env names per the Kibana Docker image's config mapping):

    docker run -d --name kibana --net elk-net -p 5601:5601 \
      -e "ELASTICSEARCH_HOSTS=https://elasticsearch:9200" \
      -e "ELASTICSEARCH_USERNAME=kibana_system" -e "ELASTICSEARCH_PASSWORD=yourkibanapassword" \
      -e "ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/config/es.crt" \
      -v $(pwd)/es.crt:/config/es.crt \
      docker.elastic.co/kibana/kibana:8.14.3

  2. Enable TLS: Update kibana.yml (mount it like the other configs): server.ssl.enabled: true, with certs; a minimal sketch follows this list.

  3. Access and Secure: Browser: https://localhost:5601 (http:// until you enable server TLS). Log in as elastic in the UI. Governance: Create users/roles in Kibana > Management > Security. RBAC: Read-only for devs.

  4. Create Data View: Kibana > Stack Management > Data Views > Create data view (e.g., filebeat-*, matching the index name set in the Logstash output; Kibana 8 renamed index patterns to data views).
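
A minimal kibana.yml sketch for step 2, assuming you generate kb.crt/kb.key the same way as the earlier certs and mount them at /config:

    server.ssl.enabled: true
    server.ssl.certificate: /config/kb.crt
    server.ssl.key: /config/kb.key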

Deep: Dashboards with Vega for custom viz. Alerts via Watcher.

Step 4: Add Filebeat – Shipping Logs from Sources

To collect logs from Linux hosts/apps.

  1. Install Filebeat:

    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.14.3-amd64.deb

    sudo dpkg -i filebeat-8.14.3-amd64.deb

  2. Configure /etc/filebeat/filebeat.yml (the log input still works in 8.x, though filestream is its successor):

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/*.log

    output.logstash:
      hosts: ["localhost:5044"]
      ssl.enabled: true
      ssl.certificate_authorities: ["/path/to/ls.crt"]

  3. Start: sudo systemctl start filebeat && sudo systemctl enable filebeat.

Security: Run as non-root where possible, use modules (e.g., nginx: sudo filebeat modules enable nginx), and verify the setup with Filebeat's built-in checks (below).
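
Filebeat ships with self-test subcommands that catch config and connectivity mistakes before any logs flow:

    sudo filebeat test config   # validate /etc/filebeat/filebeat.yml syntax
    sudo filebeat test output   # attempt a TLS connection to the configured Logstash endpoint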

Step 5: Advanced Features and Governance

  1. Scaling: Docker Compose for all: Create a docker-compose.yml with the three services (a trimmed sketch follows this list).

  2. Retention Policies: In Elasticsearch, use ILM (Index Lifecycle Management): Kibana > Stack Management > Index Lifecycle Policies > Create policy (e.g., delete after 90 days, matching the retention target above).

  3. Auditing: Enable in elasticsearch.yml: xpack.security.audit.enabled: true.

  4. Integrations: With SIEM for security events; Alerting rules in Kibana for anomalies (e.g., failed logins >10/min).

  5. Backup/Restore: Snapshot API (after registering a repository, e.g., an fs repository with path.repo set in elasticsearch.yml): curl -u elastic:password -k -X PUT "https://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true".

  6. Troubleshooting: Logs: docker logs <container>. Common: Cert mismatches—verify chains. High load: Tune shards/replicas.

  7. Compliance: Encrypt at rest (LUKS on volumes), anonymize PII in filters.
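
For item 1, a trimmed docker-compose.yml sketch (the cert, config, and credential mounts from the earlier steps are omitted for brevity and should be added back):

    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
        environment:
          - discovery.type=single-node
          - xpack.security.enabled=true
        ports: ["9200:9200"]
        networks: [elk-net]
      logstash:
        image: docker.elastic.co/logstash/logstash:8.14.3
        volumes:
          - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
        ports: ["5044:5044"]
        networks: [elk-net]
        depends_on: [elasticsearch]
      kibana:
        image: docker.elastic.co/kibana/kibana:8.14.3
        environment:
          - ELASTICSEARCH_HOSTS=https://elasticsearch:9200
        ports: ["5601:5601"]
        networks: [elk-net]
        depends_on: [elasticsearch]
    networks:
      elk-net: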

Deep for seniors: Ingest pipelines in ES for processing, Beats modules for AWS/K8s logs, Machine Learning for anomaly detection.
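
As a taste of ingest pipelines, a hedged example that tags every document at index time (the pipeline name add-env and the field value are assumptions):

    curl -u elastic:password -k -X PUT "https://localhost:9200/_ingest/pipeline/add-env" \
      -H 'Content-Type: application/json' \
      -d '{"processors": [{"set": {"field": "environment", "value": "production"}}]}'

Reference it from the Logstash elasticsearch output (pipeline => "add-env") or via an index setting.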

Step 6: Testing and Production Tips

  • Test: Generate logs (echo "Test log" | sudo tee -a /var/log/test.log — a plain >> redirect runs in your shell, not under sudo), then search in Kibana or query Elasticsearch directly (sketch below).

  • Prod: Use Elastic Cloud or self-managed cluster. Monitor ELK with its own metrics.

  • Cost: Optimize storage with hot/warm/cold nodes.
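
To confirm the pipeline end to end without the UI, query Elasticsearch for the test entry (the index name follows the pattern set in the Logstash output):

    curl -u elastic:password -k "https://localhost:9200/filebeat-*/_search?q=message:%22Test%20log%22&pretty"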

You've built a secure ELK logging system! Juniors: Parse your app logs next. Seniors: Integrate with prior topics like Kubernetes.