# Sinja: A High-Performance JSON Templating Server
Sinja is a blazing-fast, stable, and production-grade HTTP server written in C++. Its sole purpose is to render templates using JSON data with maximum performance and concurrency. It is built on a modern, multi-threaded architecture designed to saturate CPU cores and handle thousands of simultaneous requests with predictable, low latency.
This server is not a web framework; it is a specialized tool designed to be a highly efficient microservice in a larger system.
## Core Features
- **Massively Concurrent:** Built on an `SO_REUSEPORT` architecture that lets multiple threads accept connections in parallel, eliminating the single shared accept loop that bottlenecks traditional servers.
- **Blazing Fast:** Written in modern C++ and built on the high-performance Inja templating library for efficient rendering.
- **Stable & Robust:** Designed for production workloads with graceful shutdown, robust error handling, and a focus on reliability over premature optimization.
- **Simple, Focused API:** A single POST /render endpoint that accepts a JSON payload, making it easy to integrate with any language or service.
- **Modern C++:** Uses C++17 for clean, efficient, and maintainable code.
## Performance Profile
Benchmarks against the final, stable version of the server (sinja/5.0-stable) demonstrate its key characteristics. Under a high-concurrency load, the server achieves:
- Zero I/O or HTTP Errors: 100% of requests are handled successfully without dropping connections.
- Consistent Low Latency: Mean and median client-observed latency are nearly identical (~10ms), indicating a lack of "long-tail" delays and highly predictable performance.
- High Throughput: Capable of handling a high volume of requests per second, limited primarily by the complexity of the templates being rendered.
- Efficient CPU-Bound Work: Server-side render times are consistently low (~9ms for the benchmark template), showcasing the efficiency of the Inja library and the C++ implementation.
The server's strength lies in its ability to maintain this performance profile across thousands of concurrent connections, a scenario where single-threaded or GIL-bound application servers would falter.
## Architecture: The SO_REUSEPORT Model
The core of Sinja's performance comes from its networking architecture. Unlike traditional servers, which use a single main thread to `accept()` all connections and dispatch them to a worker queue (a classic bottleneck), Sinja uses the `SO_REUSEPORT` socket option.
This allows every worker thread to create its own listening socket on the same port. The Linux kernel then acts as the load balancer, distributing incoming connections directly and efficiently across the worker threads.
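The following is a minimal sketch of this pattern, not Sinja's actual source: it assumes plain blocking POSIX sockets on Linux and omits error handling, HTTP parsing, and the rendering step.

```cpp
// Per-thread listener sketch: every worker opens its own socket on the same
// port with SO_REUSEPORT set, and the kernel spreads connections across them.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <thread>
#include <vector>

static void worker(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    // SO_REUSEPORT lets every thread bind its own listener to the same port.
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, SOMAXCONN);

    // Independent accept loop: no shared queue, no cross-thread contention.
    for (;;) {
        int client = accept(fd, nullptr, nullptr);
        if (client < 0) continue;
        // ... read the request, render the template, write the response ...
        close(client);
    }
}

int main() {
    const uint16_t port = 8080;
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        threads.emplace_back(worker, port);
    for (auto& t : threads) t.join();
}
```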
### Benefits of this Architecture:
- True Parallelism: Each thread runs an independent accept loop. There is no contention for a shared queue.
- No Head-of-Line Blocking: A slow or complex request being handled by Thread 1 has zero impact on Thread 2, which can continue to accept and process new requests at full speed.
- Near-Linear Scalability: The server's capacity scales with the number of available CPU cores, since each core can run its own independent accept loop.
- Resilience: It is inherently more resilient to connection bursts, as multiple threads are available to immediately handle the load.
## API Usage
The server exposes a single, simple endpoint.
- **Endpoint:** `POST /render`
- **Content-Type Header:** Must be `application/json`.
- **Body:** A JSON object with two keys:
- `template` (string): The path to the template file, relative to the `--templates` directory.
- `context` (object): A JSON object containing the data to be used for rendering.
### Example with curl
```bash
curl -X POST http://localhost:8080/render \
  -H "Content-Type: application/json" \
  --data '{
    "template": "welcome_email.txt",
    "context": {
      "username": "Alex",
      "items": [
        {"name": "Apples", "price": 1.5},
        {"name": "Oranges", "price": 2.0}
      ],
      "is_premium": true
    }
  }'
```
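For reference, the sketch below shows roughly what such a request triggers server-side, assuming the Inja and nlohmann/json headers are available. It is illustrative rather than Sinja's actual handler code, and the `./templates/` path stands in for whatever `--templates` points to.

```cpp
// Parse a /render payload and render the named template with its context.
#include <inja/inja.hpp>
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>

int main() {
    // The JSON payload from the request body (same shape as the curl example).
    nlohmann::json request = nlohmann::json::parse(R"({
        "template": "welcome_email.txt",
        "context": {
            "username": "Alex",
            "items": [
                {"name": "Apples", "price": 1.5},
                {"name": "Oranges", "price": 2.0}
            ],
            "is_premium": true
        }
    })");

    // Environment rooted at the templates directory; Inja parses template files
    // found relative to this path.
    inja::Environment env{"./templates/"};

    // Render the requested template file with the supplied context; the result
    // becomes the HTTP response body.
    std::string body = env.render_file(request["template"].get<std::string>(),
                                       request["context"]);
    std::cout << body << std::endl;
    return 0;
}
```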
## Building and Running
### Dependencies
- A C++17 compliant compiler (GCC 8+, Clang 6+)
- CMake (3.10+)
- Inja & nlohmann/json (handled via Git submodules or manual download)
### Build Instructions
```bash
# 1. Clone the repository and submodules
git clone <repository_url>
cd sinja
git submodule update --init --recursive
# 2. Build the project
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
# 3. The executable will be in the build directory
./sinja --help
```
### Running the Server
```bash
./sinja --templates /path/to/your/templates --threads 8 --port 8080
```
### Command-line arguments:
- `--templates`, `-t`: (Required) The directory where your template files are stored.
- `--address`, `-a`: The IP address to bind to (default: 0.0.0.0).
- `--port`, `-p`: The port to listen on (default: 8080).
- `--threads`, `-w`: The number of worker threads to spawn (default: hardware concurrency).
## Critical Advice: When to Use Sinja
Sinja is a specialized tool, not a general-purpose solution.
### Ideal Use Cases:
- High-Traffic Email/Notification Service: Rendering thousands of unique emails or push notifications per minute from a user database.
- Dynamic HTML Snippet Generation: Acting as a microservice for a front-end framework, rendering server-side components that are too complex for client-side logic.
- Report Generation: Quickly creating text-based reports (CSV, XML, formatted text) from large JSON data sources.
- Offloading CPU Work: Augmenting an application server written in a GIL-bound language (like Python or Ruby). The application server can handle I/O, while Sinja handles the heavy, CPU-bound template rendering.
### When NOT to Use Sinja:
- As a Full Web Framework: It has no routing, no database layer, no session management, and no authentication.
- For Simple, Low-Traffic Sites: The complexity of a C++ service is overkill if a simple PHP or Node.js script would suffice.
- If Your Bottleneck is I/O: If your service spends most of its time waiting for a slow database, making the rendering part faster with Sinja will have minimal impact.
## Design Rationale
- **Why C++?** For direct memory control, system-level API access (socket, epoll), and the ability to achieve true, lock-free parallelism without a Global Interpreter Lock.
- **Why Inja?** It's a modern, header-only C++ library with a simple API, excellent performance, and native support for nlohmann::json, the de facto standard for JSON in C++. Its internal caching of parsed templates is highly efficient.
- **Why was the external render cache removed?** During development, a complex LRU cache for rendered output was implemented. However, rigorous benchmarking revealed that it made the server less stable and slower on cache misses. The stability, predictability, and simplicity of the core SO_REUSEPORT architecture with Inja's built-in template cache proved to be the superior production-ready solution. This reflects a philosophy of prioritizing robustness over complex, marginal optimizations.