
Using `otelcol` (OpenTelemetry Collector) to collect docker logs and send them to a self-hosted signoz

Assuming that your self-hosted signoz is at http://signoz.example.com:4317 (gRPC) or at http://signoz.example.com:4318 (HTTP), the following is a docker compose setup that scrapes logs from docker and sends them to it. Prefer gRPC, since it is more efficient for shipping logs.

Using Logspout, a log forwarder

I am going to use gliderlabs/logspout. It collects logs from running containers (via docker.sock) and forwards them over a TCP socket, from which the otel-collector can read them. One could also use fluentd or similar; I found this to be a simpler solution for my needs.

Set up the tcplog receiver in your otel-collector-config.yaml to listen for logs. I am going to use port 2255.

otel-collector-config.yaml
YAML
receivers:
  tcplog:
    listen_address: "0.0.0.0:2255"

processors:
  batch:
    send_batch_size: 512

exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: http://signoz.example.com:4317
    tls:
      insecure: true

service:
  pipelines:
    logs:
      receivers: [tcplog]
      processors: [batch]
      exporters: [otlp]
  extensions: []
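
If gRPC to your signoz host is blocked, the collector also ships an `otlphttp` exporter that can be swapped in. A minimal sketch, assuming the same host and the default OTLP HTTP port 4318:

```yaml
exporters:
  otlphttp:
    endpoint: http://signoz.example.com:4318

service:
  pipelines:
    logs:
      receivers: [tcplog]
      processors: [batch]
      exporters: [otlphttp]
```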

Run the Log Forwarder

In the following docker compose file, a logspout service collects logs from docker containers and forwards them to port 2255 on the otel-collector service. The collector then uses the configuration above to read logs from port 2255 and send them to the self-hosted signoz.

compose.yaml
YAML
services:
  logspout:
    image: docker.io/gliderlabs/logspout
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - otel
    depends_on:
      - otel-collector
    command: tcp://otel-collector:2255
  otel-collector:
    image: docker.io/otel/opentelemetry-collector-contrib
    restart: unless-stopped
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    networks:
      - otel
    ports:
      - 4317:4317 # OTLP gRPC receiver
      - 4318:4318 # OTLP HTTP receiver
      - 2255:2255 # tcplog receiver

networks:
  otel:
    driver: bridge

That’s it. Here is a screenshot of the collected logs.

Logs inside self-hosted signoz


Storing environment variables for CI/CD pipelines

Instead of storing environment variables in GitLab, we encrypt the .env file using gpg and commit the encrypted file to the repository. Say the passphrase is stored in an environment variable KEY; encrypting is easy: gpg -c .env. It prompts for the passphrase and creates an encrypted file .env.gpg.

To decrypt in the pipeline,

gpg --yes --batch --passphrase-file <(echo "$KEY") -d ./.env.gpg > .env
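
The whole round trip can be exercised locally. A sketch with a throwaway .env and an example passphrase in KEY (both hypothetical); --pinentry-mode loopback keeps gpg 2.1+ from trying to open a pinentry dialog in batch mode, and the process substitution <(...) requires bash.

```shell
KEY='example-passphrase'                 # hypothetical; in CI this comes from a secret
printf 'DB_URL=postgres://localhost/app\n' > .env
# Encrypt non-interactively (the interactive `gpg -c .env` would prompt instead).
gpg --yes --batch --pinentry-mode loopback --passphrase "$KEY" -c .env
rm .env                                  # only .env.gpg gets committed
# Decrypt the way the pipeline does.
gpg --yes --batch --pinentry-mode loopback --passphrase-file <(echo "$KEY") -d ./.env.gpg > .env
cat .env
```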

I am usually nervous talking about security since there are many ways things can go wrong. But here is my take on the pros of this approach.

  • The environment remains part of the code and can be changed easily by the developer.
  • You can also run the pipelines locally without downloading many environment variables and files from GitLab.
  • It will be easy to migrate to a more secure setup, e.g. a password vault and its CLI.

Cons

  • One secret to rule them all! Keep it secure please.