This guide covers configuring log sources, setting up Fluent Bit collection, and monitoring data ingestion in Sentinel-Kit.
Sentinel-Kit uses Fluent Bit as the primary log collection engine, feeding data into Elasticsearch for storage and analysis:
Log Sources → Fluent Bit → Elasticsearch → Sentinel-Kit frontend and Kibana
Supported log sources include:
- File-based logs (Apache, Nginx, system logs)
- Syslog streams (network devices, servers)
- HTTP endpoints (webhook integration)
- Database logs (MySQL, PostgreSQL)
- Windows Event Logs (via agent or export)
- Custom JSON/CSV formats
This document outlines the various methods available for ingesting logs into the Sentinel-Kit Elastic Stack.
Logs can be funneled into the Elastic Stack using one of the following methods:
- Direct Indexing: Placing pre-extracted log files directly into a specific directory.
- SFTP Transfer: Routing logs via the built-in SFTP service.
- HTTP Forwarding: Sending logs using a forwarder (like Logstash, Fluent Bit, etc.) to Sentinel-Kit's dedicated HTTP ingestion service.
This is the fastest way to index data. To use this method, place the data you wish to index into the following directory: `./data/log_ingest_data`
By default, the stack is configured for rapid indexing of the following log types:
- Linux Audit Logs
- Windows Logs in the `.evtx` format
- JSON Line (`jsonl`) files (one JSON object per line)
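As a quick illustration, a supported file can simply be copied into the ingest directory; this is a minimal sketch, and `auth_audit.jsonl` is a hypothetical example filename:

```sh
# Copy a JSON Lines file into the watched ingest directory;
# the stack picks it up and indexes it automatically.
cp /tmp/auth_audit.jsonl ./data/log_ingest_data/
```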
All indexed data is placed into Elasticsearch indices following the format: `sentinelkit-ingest-<TYPE>-<YY>-<MM>-<DD>`.
You can access and visualize this data via Kibana:
- URL: `https://kibana.sentinel-kit.local`
- Credentials: Use the `elastic` user account and the password defined in your `.env` file (refer to the `ELASTICSEARCH_PASSWORD` variable).
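To verify that dropped files were indexed, you can query the index pattern above from Kibana's Dev Tools console; this is a minimal sketch using the naming scheme described earlier:

```
GET _cat/indices/sentinelkit-ingest-*?v
```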
Sentinel-Kit exposes an SFTP service for data transfer.
⚠️ Note on Accessibility: To make the SFTP service accessible over the internet, you must configure your network equipment (e.g., firewall rules).
The SFTP credentials are found in your local `.env` file:

```
SFTP_USER=sentinel-kit_sftp_user
SFTP_PASSWORD=sentinel-kit_sftp_passwd
```

Once uploaded, the files will be available in: `./data/ftp_data`
Important: Data placed here is NOT automatically indexed. You must manually move or copy the desired logs from this location into the `./data/log_ingest_data` directory to trigger direct indexing (Method 1).
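For example, after uploading a file over SFTP you would move it into the ingest directory yourself. This is a sketch: the SFTP hostname `sentinel-kit.local` and the filename `exported_logs.evtx` are assumptions for illustration.

```sh
# Upload a file via the built-in SFTP service (credentials from .env).
sftp sentinel-kit_sftp_user@sentinel-kit.local
sftp> put exported_logs.evtx
sftp> exit

# The upload lands in ./data/ftp_data but is NOT indexed yet;
# move it into the watched directory to trigger direct indexing.
mv ./data/ftp_data/exported_logs.evtx ./data/log_ingest_data/
```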
You can send logs, primarily in JSON format, using a dedicated forwarder. This requires setting up a data source via the backend console.
Execute the following command to enter the backend console application:

```
./launcher console
```

Once inside the container, run the following command to create a new ingestion endpoint:

```
sentinel-kit> app:datasource:create <name> <index> [<validFrom> [<validTo>]]
```

This command takes four arguments, the last two optional:
- `name`: the name of the datasource (must be unique).
- `index`: the name of the Elasticsearch index (several datasources can specify the same index if you want to).
- `validFrom` (optional): the initial date of validity for log ingestion (logs dated before this will be rejected).
- `validTo` (optional): the final date of validity for log ingestion (logs dated after this will be rejected).
Example:
```
sentinel-kit> app:datasource:create MyIngestName temp_index 2020-01-01 2030-01-01

MyIngestName - temp_index

 [OK] Datasource "MyIngestName" created successfully
      Valid from 2020-01-01
      Valid to 2030-01-01
      Ingest key (header X-Ingest-Key): M2VmYjRiZTMtYThmNi00ZDhlLTliZTQtMGFjYWNhZDVjY2Mw

Forwarder URL: https://backend.sentinel-kit.local/ingest/json
```

Once the source is created, logs must be sent to the Forwarder URL displayed in the console output.
- Format: Logs must be sent in JSON format, either as a single JSON object or a batch (array of JSON objects).
- Authentication: The `X-Ingest-Key` header must be included with the key value provided during creation (e.g., `M2VmYjRiZTMtYThmNi00ZDhlLTliZTQtMGFjYWNhZDVjY2Mw`).
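As a quick end-to-end test, a batch of events can be posted with curl. This is a minimal sketch: the key and URL come from the example output above, and the event fields are made up.

```sh
# Send a batch (JSON array) of two hypothetical test events
# to the forwarder URL, authenticated with the ingest key.
curl -X POST https://backend.sentinel-kit.local/ingest/json \
  -H "Content-Type: application/json" \
  -H "X-Ingest-Key: M2VmYjRiZTMtYThmNi00ZDhlLTliZTQtMGFjYWNhZDVjY2Mw" \
  -d '[{"timestamp":"2024-06-01T12:00:00Z","message":"test event 1"},
       {"timestamp":"2024-06-01T12:00:05Z","message":"test event 2"}]'
```

For a permanent forwarder, Fluent Bit's standard `http` output plugin can target the same endpoint. This is a minimal sketch and assumes the backend terminates TLS on port 443:

```
[OUTPUT]
    # Forward all matched records to the Sentinel-Kit ingest endpoint.
    Name    http
    Match   *
    Host    backend.sentinel-kit.local
    Port    443
    URI     /ingest/json
    Format  json
    Header  X-Ingest-Key M2VmYjRiZTMtYThmNi00ZDhlLTliZTQtMGFjYWNhZDVjY2Mw
    Tls     On
```

`Format json` sends records as a JSON array per request, which matches the batch format accepted by the endpoint.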
Use these commands to manage your data sources:
- List sources: `app:datasource:list`
- Delete a source: `app:datasource:delete <name>`
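For example, to remove the datasource created earlier:

```
sentinel-kit> app:datasource:delete MyIngestName
```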
With data ingestion configured:
- Advanced ingestion - Extend core capabilities with advanced log ingestion
- Create Detection Rules - Build Sigma rules for your data
- Investigate Alerts - Learn alert analysis workflows
- Monitor Platform Health - Set up monitoring alerts