Suricata + Redis Instead of eve.json: A Practical Approach from Suri Oculus
Introduction
By default, Suricata writes events to eve.json. It is a standard and convenient mechanism, but under real load it quickly becomes a bottleneck.
If you are building not just an IDS, but a traffic analysis system, the file-based model starts to limit your architecture.
In the Suri Oculus project, we deliberately moved away from eve.json and adopted a streaming approach using Redis. Below is why we made this decision and how it is implemented in practice.
What’s wrong with eve.json
eve.json works well as:
- a source of logs for later analysis
- integration with ELK / Splunk
- offline processing
However, in runtime scenarios, several structural limitations appear.
1. Disk I/O
Suricata generates a continuous stream of events, and writing every one of them to a file means:
- constant disk writes
- increased latency
- filesystem pressure
2. JSON parsing overhead
Any processing service must:
- read the file
- track offsets
- parse JSON
This adds an unnecessary layer that increases latency and complexity.
3. Lack of true real-time processing
A file is not a queue:
- no delivery guarantees
- no clean on-the-fly processing
- scaling becomes harder
Suri Oculus architecture
We moved from a file-based model to a stream-based pipeline.
High-level flow
Suricata → Redis → daemonmove → analytics / UI / AI
Key components:
- Suricata — event source
- Redis — transport and buffer
- daemonmove — central processor (C++)
- analytics modules and web interface
Why Redis
The choice of Redis was driven by practical constraints.
1. Performance
- in-memory processing
- minimal latency
- high throughput
2. Simple queue model
We use LIST:
- LPUSH — write
- BRPOP — blocking read
This provides a simple and predictable processing model.
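The LPUSH/BRPOP pair behaves as a FIFO queue: producers push onto the head of the list, the consumer pops from the tail. A minimal in-memory stand-in, using std::deque in place of a real Redis connection, illustrates the ordering (the class name and non-blocking pop are illustrative assumptions, not the project's actual client code):

```cpp
#include <deque>
#include <string>

// In-memory stand-in for a Redis LIST; a real deployment would use a
// Redis client (e.g. hiredis) with the same LPUSH/BRPOP semantics.
class ListQueue {
public:
    // LPUSH: the producer (Suricata) pushes new events onto the head.
    void lpush(const std::string& event) { items_.push_front(event); }

    // BRPOP (non-blocking sketch): the consumer (daemonmove) pops from
    // the tail, so events come out in the order they were pushed (FIFO).
    bool rpop(std::string& out) {
        if (items_.empty()) return false;
        out = items_.back();
        items_.pop_back();
        return true;
    }

    std::size_t size() const { return items_.size(); }

private:
    std::deque<std::string> items_;
};
```

With events pushed in the order e1, e2, rpop returns e1 first — the same delivery order BRPOP would give the consumer.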
3. Minimal complexity
We intentionally avoided Kafka or RabbitMQ:
- fewer dependencies
- simpler deployment
- suitable for low-end hardware
(This is important because Suri Oculus is designed to run even on constrained systems.)
Practical implementation
1. Event stream
Suricata sends events to Redis instead of writing to a file.
Key: suricata (a Redis LIST; the same key daemonmove reads from)
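Suricata's eve-log output can write to Redis directly via its built-in redis filetype. A minimal sketch of the relevant suricata.yaml section (server address, event types, and key name are deployment-specific assumptions):

```yaml
outputs:
  - eve-log:
      enabled: yes
      filetype: redis      # write events to Redis instead of eve.json
      redis:
        server: 127.0.0.1  # assumed local Redis instance
        port: 6379
        mode: list         # LPUSH onto a LIST (consumer uses BRPOP)
        key: suricata      # must match the key the consumer reads
      types:
        - flow
        - dns
        - http
        - tls
```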
2. Event format
Example event:
{
  "event_type": "flow",
  "src_ip": "192.168.1.10",
  "dest_ip": "8.8.8.8",
  "proto": "TCP",
  "bytes_toserver": 1234,
  "bytes_toclient": 4321,
  "in_iface": "eth0"
}
We store the full JSON without dropping fields.
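Classification hinges on reading the event_type field. A dependency-free sketch of that step (in practice a full JSON parser such as nlohmann/json would be used; this naive string scan assumes the flat, string-valued layout shown above):

```cpp
#include <string>

// Naive extraction of the "event_type" value from a flat JSON event.
// Assumes the field is present and string-valued, as in Suricata output;
// returns "" when the field is missing.
std::string extract_event_type(const std::string& json) {
    const std::string key = "\"event_type\"";
    auto pos = json.find(key);
    if (pos == std::string::npos) return "";
    auto start = json.find('"', pos + key.size()); // opening quote of value
    if (start == std::string::npos) return "";
    auto end = json.find('"', start + 1);          // closing quote of value
    if (end == std::string::npos) return "";
    return json.substr(start + 1, end - start - 1);
}
```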
3. daemonmove — central component
daemonmove is the core of the architecture.
It performs:
- reading from Redis
- parsing events
- classification
- writing into structured storage
Example logic:
while (true) {
    auto event = redis.brpop("suricata");
    auto json = parse(event);

    // C++ cannot switch on strings, so dispatch by comparison
    const std::string type = json["event_type"];
    if (type == "flow")      store("flow", json);
    else if (type == "dns")  store("dns", json);
    else if (type == "http") store("http", json);
    else if (type == "tls")  store("tls", json);
}
4. Separation by event type
After processing, events are stored by type:
- flow
- dns
- http
- tls
This enables:
- fast access
- independent analytics
- straightforward aggregation
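Per-type separation amounts to routing each event into its own keyed bucket. The sketch below models this with an in-memory map; a real deployment might use one Redis LIST or table per type (key names like "events:flow" are an assumption, not the project's actual schema):

```cpp
#include <map>
#include <string>
#include <vector>

// Route events into per-type buckets, e.g. "events:flow", "events:dns".
// In production each bucket could be its own Redis LIST or storage table.
class TypedStore {
public:
    void store(const std::string& type, const std::string& event) {
        buckets_["events:" + type].push_back(event);
    }

    // Number of events held for a given type (0 if none seen yet).
    std::size_t count(const std::string& type) const {
        auto it = buckets_.find("events:" + type);
        return it == buckets_.end() ? 0 : it->second.size();
    }

private:
    std::map<std::string, std::vector<std::string>> buckets_;
};
```

Because each type lives under its own key, analytics over flows never touch DNS or TLS data, which is what makes the independent aggregation above cheap.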
What this gives in practice
1. True real-time
Events are available immediately:
- the UI can operate in live mode
- AI analysis runs without delay
2. Full control over the pipeline
We control:
- filtering
- routing
- storage structure
3. Performance
We eliminate:
- disk I/O
- file parsing
- offset tracking
The system remains stable even on low-end hardware.
4. Flexibility
Easy to extend with:
- filtering (e.g., LAN-only traffic)
- enrichment (GeoIP, tagging)
- behavioral analytics (HBF)
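The LAN-only filter from the first bullet can be sketched as a check against the RFC 1918 private ranges (the octet parsing here is deliberately minimal and assumes well-formed dotted-quad input; it is an illustration, not the project's filter):

```cpp
#include <cstdio>
#include <string>

// True if the address falls in an RFC 1918 private range:
// 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
bool is_lan(const std::string& ip) {
    unsigned a = 0, b = 0, c = 0, d = 0;
    if (std::sscanf(ip.c_str(), "%u.%u.%u.%u", &a, &b, &c, &d) != 4)
        return false;
    if (a == 10) return true;
    if (a == 172 && b >= 16 && b <= 31) return true;
    if (a == 192 && b == 168) return true;
    return false;
}
```

A filter like this would sit in the processing loop, dropping (or tagging) events whose src_ip and dest_ip are both outside the ranges of interest.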
Limitations
1. No built-in historical storage
If Redis is cleared, data is lost.
Possible solutions:
- periodic dumps
- external storage
2. Not a full SIEM
Long-term retention and complex search may require additional systems.
3. Requires custom logic
You need:
- your own processor
- a well-designed data model
When this approach makes sense
This architecture is appropriate if:
- you are building your own analysis system
- you need real-time processing
- you have resource constraints
- you want full control over the pipeline
Conclusion
eve.json is a solid default.
But for real analysis systems, it becomes a limitation.
In Suri Oculus, moving to Redis allowed us to:
- turn Suricata from a log generator into a streaming data source
- build a controlled pipeline
- achieve true real-time processing
This is a different class of architecture — faster, more flexible, and fully controllable.
What’s next
Next topics to explore:
- Redis key design for IDS systems
- Host Behavior Fingerprint (HBF)
- AI-based traffic analysis
These will be covered in future articles.