Poor Man's Log Viewer with OpenSearch and FluentBit
Parminder Singh
Over the weekend I was talking to a friend about remote troubleshooting, logging and monitoring. It would be cool if we could spin up a temporary log viewer on the fly for fast troubleshooting, and then tear it down just as easily once the issue is resolved. In this article, I will demonstrate how to set up a simple logging pipeline using OpenSearch and FluentBit.

Log analysis is the most basic and often the first step in troubleshooting any application. Cloud-based log analysis tools like DataDog, Microsoft Sentinel, etc. are great and very well integrated into cloud platforms. But they can drive cloud costs up and may not always be the best option for small teams, or for pre-prod and other use cases. I want to present an on-demand logging pipeline that can be set up and torn down quickly, using OpenSearch, FluentBit and Docker.
OpenSearch is a community-driven, open source observability suite. With OpenSearch Dashboards, we can visualize logs, create alerts, run anomaly detection, and more. FluentBit is an open source, lightweight log processor and shipper/forwarder.
Docker Compose for OpenSearch
Here's a working docker-compose.yml for OpenSearch and Dashboards:
version: '3.8'
services:
  opensearch:
    image: opensearchproject/opensearch:2.11.1
    container_name: opensearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=_PutAPasswordHere_
      - plugins.security.ssl.http.enabled=false
      - plugins.security.disabled=false
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - opensearch-data:/usr/share/opensearch/data
    ports:
      - '9200:9200'
      - '9600:9600'
    networks:
      - opensearch-net
  dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.1
    container_name: dashboards
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch:9200"]'
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=false
    ports:
      - '5601:5601'
    depends_on:
      - opensearch
    networks:
      - opensearch-net

volumes:
  opensearch-data:

networks:
  opensearch-net:
    driver: bridge
Start it with:
docker compose up -d
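OpenSearch can take a minute or two to bootstrap. If you want to watch it come up, tail the container logs:
docker compose logs -f opensearch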
The opensearch-data volume in the Compose file ensures your data persists even if you restart or tear down the containers. If you're using this setup temporarily, you can delete this volume to start fresh each time:
docker volume rm your_project_name_opensearch-data
If you're using this in a dev/test/staging pipeline, keeping the volume will help retain logs between sessions.
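Alternatively, docker compose down -v tears down the containers and removes the named volumes in one step.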
Once started, OpenSearch will be available at http://localhost:9200.
Output from my OpenSearch instance:
curl http://admin:your-password-here@localhost:9200
{
  "name" : "7eac53284afe",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "UUs6nDxJQEmFuQ2iKjHTqA",
  "version" : {
    "distribution" : "opensearch",
    "number" : "2.11.1",
    "build_type" : "tar",
    "build_hash" : "6b1986e964d440be9137eba1413015c31c5a7752",
    "build_date" : "2023-11-29T21:43:10.135035992Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
OpenSearch Dashboards will be available at http://localhost:5601.

Log in to Dashboards at http://localhost:5601 with admin and the password you set.
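If you'd rather verify from the terminal first, the Dashboards status endpoint should respond once the UI is ready (assuming basic auth with the same admin credentials works against your instance):
curl -u admin:your-password-here http://localhost:5601/api/status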
Fluent Bit Setup
There are multiple ways to install FluentBit based on your environment. I installed it with Homebrew:
brew install fluent-bit
I could have used Docker or Kubernetes to run FluentBit, but I stuck with a native install for simplicity.
A quick note about Fluentd: Fluentd is a full-fledged log collector and processor ecosystem. FluentBit is built on the same architecture but is more performant and lightweight, and is considered the next-generation solution. Read more here.
To upload logs using FluentBit, we need to configure it to read logs from a given directory. Here's the configuration I used. You can click here to read more details on FluentBit configuration.
fluentbit.conf
[SERVICE]
    flush             5
    daemon            Off
    log_level         info

[INPUT]
    Name              tail
    Path              /Users/psingh/dev/pml/logs/*.log
    # Apply the parser defined in parsers.conf below
    Parser            multiline-regex
    Read_from_Head    true
    Skip_Long_Lines   On
    Refresh_Interval  5
    Tag               batch.log
    DB                /tmp/fluentbit.db
    DB.Sync           Normal

[FILTER]
    Name              record_modifier
    Match             *
    Record            hostname ${HOSTNAME}

[OUTPUT]
    Name              es
    Match             *
    Host              localhost
    Port              9200
    # Name of the index that will be created in OpenSearch
    Index             singh-app-logs
    HTTP_User         admin
    HTTP_Passwd       your-password-here
    Suppress_Type_Name On
    Logstash_Format   Off
    Buffer_Size       256k
    Retry_Limit       10
We also need a parser to parse the log lines. This helps structure the logs so they can be searched and filtered efficiently in OpenSearch. Here's a simple regex parser that captures the timestamp and log message:
parsers.conf
[PARSER]
    Name          multiline-regex
    Format        regex
    Regex         ^(?<time>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}) (?<log>.*)
    Time_Key      time
    Time_Format   %Y-%m-%d %H:%M:%S
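For reference, a made-up log line like the one below would match this parser: the leading timestamp is captured into time and the rest of the line into log.
2025-01-15 10:42:07 ERROR AuthService: login failed for user jdoe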
Run Fluent Bit manually:
fluent-bit -c fluentbit.conf -R parsers.conf
This will start Fluent Bit and upload logs to OpenSearch (configured in the output section). It will create an index called singh-app-logs (or whatever name you configured) in OpenSearch.
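To confirm ingestion worked, you can list the index and peek at a document (same admin credentials as above):
curl -u admin:your-password-here 'http://localhost:9200/_cat/indices/singh-app-logs?v'
curl -u admin:your-password-here 'http://localhost:9200/singh-app-logs/_search?size=1&pretty'
Each document should contain the parsed log field plus the hostname field added by the record_modifier filter.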

OpenSearch Dashboards has powerful search and filtering capabilities. Dashboards Query Language (DQL) can be used to write queries to search logs. Read more here.
For example, the query log: *AuthService* AND log: *ERROR* will return only error logs coming from the AuthService.
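The same filter can also be run against the REST API directly; here's a rough equivalent using a query_string query (index name and credentials as configured earlier):
curl -u admin:your-password-here -H 'Content-Type: application/json' 'http://localhost:9200/singh-app-logs/_search?pretty' -d '
{
  "query": {
    "query_string": {
      "default_field": "log",
      "query": "*AuthService* AND *ERROR*"
    }
  }
}'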
Anomaly Detection
OpenSearch has a built-in anomaly detection plugin that can be used to detect anomalies in the logs: for example, spikes in log volume, bursts of errors, or other unusual patterns.
To detect spikes or anomalies, create a detector. A detector is a configuration that defines how to detect anomalies in the logs. You can also create detectors using the OpenSearch Dashboards UI.
Create a detector
curl -u admin:your-password-here -X POST http://localhost:9200/_plugins/_anomaly_detection/detectors -H 'Content-Type: application/json' -d '
{
  "name": "singh-app-log-spike-detector",
  "description": "Detects spikes in log volume",
  "time_field": "@timestamp",
  "indices": ["singh-app-logs"],
  "filter_query": {"match_all": {}},
  "detection_interval": {"period": {"interval": 1, "unit": "Minutes"}},
  "window_delay": {"period": {"interval": 1, "unit": "Minutes"}},
  "feature_attributes": [
    {
      "feature_name": "log_volume",
      "feature_enabled": true,
      "aggregation_query": {
        "log_volume": {
          "value_count": {
            "field": "_id"
          }
        }
      }
    }
  ]
}'
The create call returns a JSON response that includes the detector's _id; use it as <detector_id> below. To start the detector, run:
curl -u admin:your-password-here -X POST http://localhost:9200/_plugins/_anomaly_detection/detectors/<detector_id>/_start
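Once started, you can check the detector's state (initializing, running, etc.) via the profile API:
curl -u admin:your-password-here http://localhost:9200/_plugins/_anomaly_detection/detectors/<detector_id>/_profile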

Cleanup & Reset
To re-run ingestion of logs, delete the index and FluentBit database:
- Delete index:
curl -u admin:your-password-here -X DELETE http://localhost:9200/singh-app-logs
- Delete Fluent Bit DB:
rm /tmp/fluentbit.db
- Re-run Fluent Bit
I've only scratched the surface of OpenSearch and FluentBit; there's a lot more to both platforms. The idea of this post was to show a use case: quickly standing up a logging pipeline for troubleshooting.
If you have worked with OpenSearch, FluentBit or other logging platforms, I'd love to hear your thoughts.