Goroutines Are Cheap — Until They Aren’t
The concurrency lesson many Go engineers only learn after their service has been running for a few weeks.
syarif · 7 min read · Just now
The graphs looked healthy.
CPU usage was stable.
Latency hovered around 12 milliseconds.
Error rates were almost nonexistent.
At first glance, the service looked like a success.
The Go backend had been in production for about three weeks, processing thousands of requests per second. Benchmarks during development had looked excellent. The system scaled horizontally without trouble. Deployments were smooth.
Then someone noticed the memory graph.
It wasn’t spiking.
It was creeping.
Slowly.
Every few hours the memory usage climbed a little higher. After a few days, the service had doubled its footprint. A week later, Kubernetes started restarting pods.
The strange part was that nothing looked obviously wrong.
The code had no large caches.
No obvious memory leaks.
No runaway loops.