CQ1 Data Layer
The data layer ensures the routing engine always operates on fresh, queryable state. It combines binary-native storage with real-time ingestion to eliminate the latency and parsing overhead found in traditional architectures.
Binary State Management
Most DEX aggregators store pool data as JSON or serialized cached objects, so every quote request must first parse or deserialize that data into usable structures.
CQ1 stores pool state as raw binary buffers — the exact byte layout used on-chain.
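A minimal sketch of the difference, assuming a hypothetical pool layout with a little-endian u64 reserve at byte offset 64 (the offset and field names are illustrative, not CQ1's actual layouts):

```ts
// Hypothetical layouts only: offset 64 and "baseReserve" are placeholders.

// Traditional: text -> object -> field on every quote
function reserveFromJson(raw: string): bigint {
  const pool = JSON.parse(raw);        // allocate and parse an intermediate object
  return BigInt(pool.baseReserve);
}

// CQ1-style: read the field straight out of the on-chain byte layout
function reserveFromBuffer(buf: Buffer): bigint {
  return buf.readBigUInt64LE(64);      // no intermediate object, no text parsing
}
```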
Comparison
flowchart LR
subgraph Traditional["Traditional"]
T1["JSON String"] --> T2["Parse"] --> T3["Object"] --> T4["Compute"]
end
subgraph CQ1["CQ1"]
C1["Binary Buffer"] --> C2["Compute"]
end
style Traditional fill:#fef3c7,stroke:#f59e0b
style CQ1 fill:#ccfbf1,stroke:#0d9488
Performance Impact
| Approach | Storage | Read Overhead | Freshness |
|---|---|---|---|
| JSON/REST | Text | Parse + deserialize | 200-500ms stale |
| Cached Objects | Serialized | Deserialize | 50-100ms stale |
| CQ1 Binary | Raw bytes | Zero-copy | Real-time |
Storage Architecture
CQ1 maintains two storage layers optimized for different access patterns.
Hot Path: Redis Buffers
Pool state is stored as raw binary buffers in Redis; a read sketch follows the list below.
Key: buffer:{pool_type}:{pool_id}
Value: Raw bytes (on-chain layout)
Characteristics:
- Sub-millisecond reads
- Zero serialization overhead
- Matches on-chain data structures exactly
- Updated in real-time via gRPC stream
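A hot-path read sketch, assuming an ioredis client; the key follows the buffer:{pool_type}:{pool_id} scheme above, while the connection details and example IDs are placeholders:

```ts
import Redis from "ioredis";

const redis = new Redis(); // connection details are deployment-specific

// getBuffer returns the raw Node Buffer exactly as stored, so nothing is
// parsed or deserialized between Redis and the math core.
async function loadPoolBuffer(poolType: string, poolId: string): Promise<Buffer | null> {
  return redis.getBuffer(`buffer:${poolType}:${poolId}`);
}

// Example (placeholder IDs): const buf = await loadPoolBuffer("clmm", "EXAMPLE_POOL_ID");
```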
Cold Path: MongoDB + In-Memory Cache
Structural data (pool metadata, token relationships, graph edges) is persisted in MongoDB; a loading sketch follows the list below.
Startup:
MongoDB → Load → In-Memory Map (poolsCache)
Runtime:
Request → poolsCache.get(id) → O(1) lookup
Characteristics:
- Graph topology and relationships
- Loaded into memory at startup
- O(1) lookups during routing
- No database calls during quote generation
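A startup-load sketch using the official mongodb Node driver; the database name, collection name, and document fields are assumptions, but the pattern matches the flow above: one bulk read at boot, then O(1) Map lookups with no database calls at quote time.

```ts
import { MongoClient } from "mongodb";

// Assumed document shape; CQ1's actual metadata schema may differ.
interface PoolMeta {
  _id: string;       // pool address
  tokenA: string;
  tokenB: string;
  poolType: string;
}

const poolsCache = new Map<string, PoolMeta>();

// Startup: MongoDB -> in-memory map (runs once).
async function warmPoolsCache(uri: string): Promise<void> {
  const client = await MongoClient.connect(uri);
  const pools = client.db("cq1").collection<PoolMeta>("pools"); // names assumed
  for await (const pool of pools.find()) {
    poolsCache.set(pool._id, pool);
  }
  await client.close();
}

// Runtime: poolsCache.get(poolId) is the O(1) lookup used during routing.
```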
Real-Time Ingestion
Binary buffers require continuous synchronization with on-chain state. CQ1 uses a dual-strategy ingestion system.
Fast Path: gRPC Push Stream
Instead of polling RPC nodes, CQ1 subscribes to validator streams via Yellowstone gRPC.
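A listener sketch under stated assumptions: it uses the Yellowstone TypeScript client (@triton-one/yellowstone-grpc), @solana/web3.js, and ioredis; the subscription request shape, program filter, and time-based queue score are illustrative and may differ from CQ1's listener.

```ts
import Client, { CommitmentLevel } from "@triton-one/yellowstone-grpc";
import { PublicKey } from "@solana/web3.js";
import Redis from "ioredis";

const redis = new Redis();

async function listen(endpoint: string, xToken?: string): Promise<void> {
  const client = new Client(endpoint, xToken, undefined);
  const stream = await client.subscribe();

  stream.on("data", (update: any) => {
    const acc = update?.account?.account;
    if (!acc) return; // ignore pings and slot messages
    const poolId = new PublicKey(acc.pubkey).toBase58();
    // Score by arrival time so the worker pops the freshest pools first.
    redis.zadd("pool_priority_queue", Date.now(), poolId);
  });

  // Subscribe to account updates for the AMM programs of interest.
  // Field set abbreviated; the client's SubscribeRequest expects the full shape.
  stream.write({
    accounts: { pools: { account: [], owner: [/* AMM program IDs */], filters: [] } },
    slots: {}, transactions: {}, blocks: {}, blocksMeta: {}, entry: {},
    accountsDataSlice: [],
    commitment: CommitmentLevel.PROCESSED,
  } as any);
}
```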
sequenceDiagram
participant Chain as Solana
participant gRPC as Yellowstone gRPC
participant Listener
participant Queue as Redis Queue
participant Worker as Update Worker
participant Cache as Redis Cache
Chain->>gRPC: State change (Slot N)
gRPC->>Listener: Push notification
Listener->>Queue: ZADD pool_priority_queue
loop Every tick
Worker->>Queue: ZRANGE (pop batch)
Queue-->>Worker: [pool_A, pool_B]
Worker->>Chain: getMultipleAccountsInfo
Chain-->>Worker: Latest data
Worker->>Cache: SET buffer:{type}:{id}
end
Note over Cache: Ready for queries<br/>~10ms end-to-end
End-to-end latency: ~10ms from on-chain change to queryable state.
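A worker-loop sketch assuming ioredis and @solana/web3.js: the queue name, buffer keys, and batched getMultipleAccountsInfo call follow the diagram, while the batch size, tick interval, ZPOPMAX (in place of the diagram's ZRANGE-based pop), and pool-type lookup are simplifications.

```ts
import { Connection, PublicKey } from "@solana/web3.js";
import Redis from "ioredis";

const redis = new Redis();
const connection = new Connection("https://api.mainnet-beta.solana.com"); // endpoint assumed

// Placeholder: CQ1 would resolve the pool type from its metadata cache.
const poolTypeOf = (_poolId: string): string => "clmm";

async function tick(batchSize = 100): Promise<void> {
  // Pop the highest-scored (most recently touched) batch from the queue.
  const popped = await redis.zpopmax("pool_priority_queue", batchSize);
  const poolIds = popped.filter((_, i) => i % 2 === 0); // [member, score, member, score, ...]
  if (poolIds.length === 0) return;

  // One batched RPC call for the whole batch.
  const accounts = await connection.getMultipleAccountsInfo(
    poolIds.map((id) => new PublicKey(id))
  );

  // Write the fresh raw bytes straight back into the hot-path buffers.
  const pipeline = redis.pipeline();
  accounts.forEach((acc, i) => {
    if (acc) pipeline.set(`buffer:${poolTypeOf(poolIds[i])}:${poolIds[i]}`, acc.data);
  });
  await pipeline.exec();
}

setInterval(() => tick().catch(console.error), 10); // tick interval is an assumption
```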
Slow Path: Weight Recalibration
Every ~50 seconds, a background scheduler recalibrates routing weights.
Loop (every 50s):
1. Load full graph topology
2. Batch fetch all pool reserves
3. Update routing costs and weights
└── cost:{pool_id}
└── weight:{pool_id}
Purpose: Ensures routing heuristics reflect the actual TVL distribution across all pools, not just the actively traded ones.
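A recalibration sketch assuming ioredis; the cost:{pool_id} and weight:{pool_id} keys come from the loop above, while fetchAllPoolReserves and the TVL-based heuristic are placeholders for CQ1's actual weighting.

```ts
import Redis from "ioredis";

const redis = new Redis();

// Placeholder for step 2 above: a batched fetch of every pool's reserves.
async function fetchAllPoolReserves(): Promise<Map<string, { tvlUsd: number }>> {
  return new Map();
}

// Slow path: refresh routing costs and weights for every pool, hot or not.
async function recalibrate(): Promise<void> {
  const reserves = await fetchAllPoolReserves();
  const pipeline = redis.pipeline();
  for (const [poolId, { tvlUsd }] of reserves) {
    const cost = 1 / Math.max(tvlUsd, 1); // illustrative: deeper pools cost less to traverse
    pipeline.set(`cost:${poolId}`, cost.toString());
    pipeline.set(`weight:${poolId}`, tvlUsd.toString());
  }
  await pipeline.exec();
}

setInterval(() => recalibrate().catch(console.error), 50_000); // every ~50s, per the scheduler
```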
Dual Strategy Summary
| Strategy | Trigger | Updates | Purpose |
|---|---|---|---|
| Fast Path | On-chain event | Hot pool buffers | Real-time accuracy |
| Slow Path | Timer (50s) | Routing weights | TVL alignment |
Both run simultaneously: the fast path absorbs the high-frequency update volume, while the slow path maintains routing quality across the full graph.
Data Flow Summary
flowchart TB
subgraph Write["WRITE PATH"]
Solana["Solana"] --> gRPC["gRPC Stream"]
gRPC --> Listener["Listener"]
Listener --> Queue["Queue"]
Queue --> Worker["Worker"]
Worker --> Redis1["Redis Buffers"]
Scheduler["Scheduler (50s)"] --> RPC["RPC Batch"]
RPC --> Redis1
end
subgraph Read["READ PATH"]
Request["Quote Request"] --> Engine["Routing Engine"]
Engine --> Redis2["Redis Read"]
Redis2 --> Math["Math Core"]
Math --> Response["Quote Response"]
end
Redis1 -.->|"same store"| Redis2
style Write fill:#fef3c7,stroke:#f59e0b
style Read fill:#ccfbf1,stroke:#0d9488
Write path and read path never block each other.
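Tying the read path together, a sketch assuming ioredis; the candidate-key list and the quoteFromBuffers stub standing in for the math core are placeholders. The point is structural: between request and response there is one pipelined Redis read and pure computation, with no parsing, no MongoDB, and no waiting on the write path.

```ts
import Redis from "ioredis";

const redis = new Redis();

// Stub standing in for the math core, which computes directly on raw buffers.
function quoteFromBuffers(buffers: (Buffer | null)[], amountIn: bigint): bigint {
  return amountIn; // placeholder
}

// Read path: one pipelined read of buffer:{type}:{id} keys, then pure math.
async function quote(candidateKeys: string[], amountIn: bigint): Promise<bigint> {
  const pipeline = redis.pipeline();
  for (const key of candidateKeys) pipeline.getBuffer(key);
  const results = (await pipeline.exec()) ?? [];
  const buffers = results.map(([, buf]) => buf as Buffer | null);
  return quoteFromBuffers(buffers, amountIn);
}
```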
Key Takeaways
- Binary storage eliminates parsing overhead
- gRPC push provides real-time state (~10ms)
- Dual ingestion balances speed and accuracy
- Decoupled paths ensure quotes never wait on updates