ankuranand's comments
ankuranand | 3 months ago | on: Show HN: KV and wide-column database with CDN-scale replication
ankuranand | 4 months ago | on: Show HN: UnisonDB – B+Tree DB with sub-second replication to 100+ nodes
It uses WAL-based replication with B+Tree storage to fan out writes to 100+ edge nodes with sub-second latency. Every write is durable, queryable, and instantly available as a replication stream.
Built for Edge AI and distributed systems where data needs to live close to computation. Supports:
- Multi-model storage (KV, Wide-Column, LOB)
- Atomic multi-key transactions
- Real-time change notifications
- Namespace isolation for multi-tenancy
We benchmarked it against BadgerDB and BoltDB using redis-benchmark — results in the README show competitive write/read throughput with consistent replication performance even at 100+ concurrent relayers.
Open source (Apache 2.0): https://github.com/ankur-anand/unisondb
Would love feedback on the architecture and use cases!
ankuranand | 4 months ago | on: UnisonDB – A Log-Native Database for Edge AI and Edge Computing
I’ve been experimenting with an idea that combines a database and a message bus into one system — built specifically for Edge AI and real-time applications that need to scale across 100+ nodes.
Most databases write to a WAL (Write-Ahead Log) for recovery.
UnisonDB treats the log as the database itself — making replication, streaming, and durability all part of the same mechanism.
Every write is:
* Stored durably (WAL-first design)
* Streamed instantly (no separate CDC or Kafka)
* Synced globally across replicas
It’s built in Go and uses a B+Tree storage engine on top of a streaming WAL, so edge nodes can read locally while syncing in real time with upstream hubs.
No external brokers, no double-pipeline — just a single source of truth that streams.
Writes on one node replicate like a message bus, yet remain queryable like a database — instantly and durably.
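The "log as the database" idea can be sketched in a few dozen lines of Go. This is a toy illustration, not UnisonDB's actual API or internals (all type and method names here are hypothetical, and a slice stands in for the durable WAL and a map for the B+Tree): a single write path appends to the log, applies to the read index, and streams the same entry to subscribers, so storage and replication share one mechanism.

```go
package main

import (
	"fmt"
	"sync"
)

// Entry is one durable log record; the log is the source of truth.
type Entry struct {
	Offset int64
	Key    string
	Value  string
}

// LogDB sketches a log-native store: writes hit the log first,
// then an index for reads, then replication subscribers.
type LogDB struct {
	mu    sync.Mutex
	log   []Entry           // stand-in for a durable WAL on disk
	index map[string]string // stand-in for a B+Tree used for point reads
	subs  []chan Entry      // replication streams tailing the log
}

func NewLogDB() *LogDB {
	return &LogDB{index: make(map[string]string)}
}

// Put appends to the log, applies to the index, and streams the same
// entry to every subscriber: one write path, no separate CDC pipeline.
func (db *LogDB) Put(key, value string) int64 {
	db.mu.Lock()
	defer db.mu.Unlock()
	e := Entry{Offset: int64(len(db.log)), Key: key, Value: value}
	db.log = append(db.log, e)
	db.index[key] = value
	for _, ch := range db.subs {
		ch <- e // buffered here; a real system must not block writers
	}
	return e.Offset
}

// Get serves reads from the local index, not from the log.
func (db *LogDB) Get(key string) (string, bool) {
	db.mu.Lock()
	defer db.mu.Unlock()
	v, ok := db.index[key]
	return v, ok
}

// Subscribe returns a channel that receives every future write.
func (db *LogDB) Subscribe() <-chan Entry {
	db.mu.Lock()
	defer db.mu.Unlock()
	ch := make(chan Entry, 64)
	db.subs = append(db.subs, ch)
	return ch
}

func main() {
	db := NewLogDB()
	stream := db.Subscribe()
	db.Put("sensor/1", "42.5")
	v, _ := db.Get("sensor/1")
	e := <-stream
	fmt.Println(v, e.Offset) // the write is both queryable and streamed
}
```

The point of the sketch is that the subscriber channel and the index are fed from the same log append, which is what makes a separate broker unnecessary.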
GitHub: github.com/ankur-anand/unisondb
Deployment Topologies
UnisonDB supports multiple replication setups out of the box:
* Hub-and-Spoke – for edge rollouts where a central hub fans out data to 100+ edge nodes
* Peer-to-Peer – for regional datacenters that replicate changes between each other
* Follower/Relay – for read-only replicas that tail logs directly for analytics or caching
Each node maintains its own offset in the WAL, so replicas can catch up from any position without re-syncing the entire dataset.
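Offset-based catch-up can be illustrated like this (a minimal sketch with hypothetical names, not UnisonDB's code): a replica persists the last offset it applied, and on reconnect asks only for entries at or after that offset instead of re-syncing the full dataset.

```go
package main

import "fmt"

// Entry is one WAL record, identified by a monotonically increasing offset.
type Entry struct {
	Offset int64
	Data   string
}

// ReadFrom returns all entries at or after the given offset. A replica
// that went down resumes from here rather than replaying the whole log.
func ReadFrom(wal []Entry, offset int64) []Entry {
	for i, e := range wal {
		if e.Offset >= offset {
			return wal[i:]
		}
	}
	return nil // replica is already caught up
}

func main() {
	wal := []Entry{{0, "a"}, {1, "b"}, {2, "c"}, {3, "d"}}
	lastApplied := int64(2) // replica crashed after applying offset 1
	for _, e := range ReadFrom(wal, lastApplied) {
		fmt.Println(e.Offset, e.Data) // replays only offsets 2 and 3
	}
}
```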
UnisonDB’s goal is to make log-native databases practical for both the core and the edge — combining replication, storage, and event propagation in one Go-based system.
I’m still exploring how far this log-native approach can go. Would love to hear your thoughts, feedback, or any edge cases you think might be interesting to test.
ankuranand | 6 months ago | on: From O(n) to O(1): Expiring 10M TTL Keys in Go
The naive approach — scanning every key every second — works fine at small scale but collapses once you hit millions of entries.
So I implemented a Timing Wheel in Go — the same idea used in Kafka, Netty, and the Linux kernel — replacing the O(n) scan loop with an O(1) tick-based expiration model.
Here’s what I found when comparing both approaches at 10 million keys:
Avg Read Latency:
• Naive Scan → 4.68 ms
• Timing Wheel → 3.15 µs

Max Read Stall:
• Naive Scan → 500 ms
• Timing Wheel → ≈ 2 ms
At that scale, the naive loop stalls reads for half a second. The timing wheel glides through them in microseconds.
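For readers unfamiliar with the structure, here is a minimal single-level timing wheel in Go. This is a teaching sketch, not the taskwheel implementation: keys due at a given tick are bucketed into a slot, so each tick expires exactly one slot in O(1) amortized time instead of scanning all keys.

```go
package main

import "fmt"

// Wheel is a minimal single-level timing wheel: keys expiring at a given
// tick are bucketed into slots, so a tick touches one slot, not every key.
type Wheel struct {
	slots [][]string // slots[i] holds keys expiring when tick lands on i
	tick  int
}

func NewWheel(size int) *Wheel {
	return &Wheel{slots: make([][]string, size)}
}

// Add schedules key to expire after ttlTicks ticks. In this single-level
// sketch ttlTicks must be < len(slots); real wheels (Kafka, Netty, the
// Linux kernel) use rounds or hierarchical wheels to lift that limit.
func (w *Wheel) Add(key string, ttlTicks int) {
	slot := (w.tick + ttlTicks) % len(w.slots)
	w.slots[slot] = append(w.slots[slot], key)
}

// Tick advances the wheel one step and returns the keys that expired.
func (w *Wheel) Tick() []string {
	w.tick = (w.tick + 1) % len(w.slots)
	expired := w.slots[w.tick]
	w.slots[w.tick] = nil
	return expired
}

func main() {
	w := NewWheel(8)
	w.Add("session:a", 1)
	w.Add("session:b", 3)
	fmt.Println(w.Tick()) // [session:a]
	fmt.Println(w.Tick()) // []
	fmt.Println(w.Tick()) // [session:b]
}
```

The cost per tick is proportional to the keys actually expiring in that slot, which is why read latency stays flat even at 10M keys.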
GitHub repo: https://github.com/ankur-anand/taskwheel
ankuranand | 4 years ago | on: A high-performance, zero allocation, dynamic JSON Threat Protection in pure Go
There are situations where you do not want to parse the JSON but do want to ensure that it is not going to cause a problem, such as in an API gateway. It would be a PITA for the gateway to have to know the JSON schemas of all the services it is protecting. There are XML validators that perform a similar function.
ankuranand | 6 years ago | on: A high-performance, zero allocation, dynamic JSON Threat Protection in pure Go
JSON requests are susceptible to attacks characterized by unusual inflation of elements and nesting levels. Attackers use recursive techniques to consume memory by sending huge JSON payloads that overwhelm the parser and eventually crash the service.
JSON threat protection is a term describing how to minimize the risk from such attacks by defining limits on the JSON structure, such as length and depth validation, and it helps protect your applications from such intrusions.
ankuranand | 6 years ago | on: A high-performance, zero allocation, dynamic JSON Threat Protection in pure Go
JSON threat protection is a term describing how to minimize the risk from such attacks by defining limits on the JSON structure.
Yes, it also validates the JSON.
ankuranand | 6 years ago | on: Build Your Own Text Editor