FastAPI vs GoFr: I Built the Same Microservice in Both. GoFr Won.

Every time a team kicks off a new microservice, someone asks the framework question. If you’re a Python shop, FastAPI has probably already won that argument before the meeting ends — 9 million monthly PyPI downloads, great docs, everyone knows it.
I’ve been spending time with GoFr, a Go framework built specifically for microservices, and I wanted to see what the comparison actually looked like in practice. Not in theory, not in benchmarks firing hello-world at localhost. In an actual service.
So I built the same order management API twice. POST an order, GET an order by ID, PostgreSQL, health checks, structured logs, metrics. Same features, both frameworks, start to finish. Some of what I found was expected. Some of it wasn’t.
What I Built
Create an order, fetch it by ID, connect to a database, expose health checks, emit structured logs, publish metrics. The kind of service every backend team has written at least three times. I picked it specifically because it’s boring — frameworks don’t distinguish themselves on exotic features, they distinguish themselves on the work you do constantly.
Getting Started
FastAPI installs in one command:
pip install "fastapi[standard]"
That gives you a web server. A production-ready microservice is a longer conversation. Once you’ve added structured logging, Prometheus metrics, distributed tracing, database migrations, and health checks — the things you actually need before go-live — your requirements.txt looks like this:
fastapi
uvicorn
sqlalchemy
alembic
prometheus-client
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-fastapi
opentelemetry-exporter-otlp
python-dotenv
Ten packages. Most of them still need to be wired together manually.
GoFr:
go get gofr.dev/pkg/gofr
Observability, health checks, structured logging, metrics, database migrations — inside that one import. Nothing else to install.
I’m not saying FastAPI is bad for needing more packages. The Python ecosystem is rich and those tools are all good. But there’s a real difference between “framework that builds web APIs” and “framework that ships production microservices” and the install step is where you first feel it.
Writing the Same Endpoint
A POST endpoint to create an order in FastAPI:
@app.post("/orders", response_model=schemas.Order)
async def create_order(order: schemas.OrderCreate, db: Session = Depends(database.get_db)):
    db_order = models.Order(**order.dict())
    db.add(db_order)
    db.commit()
    db.refresh(db_order)
    return db_order
That handler is clean. What’s not shown is everything it depends on: database.py for the SQLAlchemy session factory, models.py for the ORM model, schemas.py for the Pydantic model. Four files before this endpoint works.
The same endpoint in GoFr:
app.POST("/orders", func(ctx *gofr.Context) (any, error) {
    var order Order
    if err := ctx.Bind(&order); err != nil {
        return nil, err
    }

    _, err := ctx.SQL.ExecContext(ctx,
        "INSERT INTO orders (item, quantity) VALUES (?, ?)",
        order.Item, order.Quantity)

    return order, err
})
The database connection, request context, logging, and error handling come through gofr.Context. There’s no session factory to configure, no dependency graph to wire. You write the query and return.
FastAPI’s dependency injection is genuinely powerful — it earns its complexity in large applications. GoFr’s approach just gets out of the way faster.
Observability
Getting full observability in FastAPI — logs correlated with traces, Prometheus metrics, distributed tracing — means installing four OpenTelemetry packages, writing a custom Prometheus middleware, adding a /metrics route manually, configuring the tracer in main.py, and wiring the instrumentation to your app on startup. Realistically, 60–80 lines of boilerplate before a single line of business logic. Not hard. Just time.

In GoFr, full observability comes from:
app := gofr.New()
Prometheus metrics on port 2121 at /metrics, automatically. Structured JSON logs with correlation IDs on every request. Distributed tracing via Zipkin or Jaeger, configured through environment variables. Twenty-plus default metrics covering HTTP response times, SQL query durations, Redis latency, Pub/Sub counts, and circuit breaker state.
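For reference, that environment-variable configuration lives in a configs/.env file next to the service. The key names below are from memory of the GoFr docs and should be treated as a sketch — verify them against gofr.dev/docs for your version:

```
# configs/.env — key names approximate; check gofr.dev/docs
APP_NAME=order-service
HTTP_PORT=8000

# tracing exporter and endpoint (Zipkin shown; Jaeger works the same way)
TRACE_EXPORTER=zipkin
TRACER_HOST=localhost
TRACER_PORT=9411
```

No tracer setup in code — the framework reads these at startup and wires the exporter itself.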
FastAPI is a web framework. GoFr is a production service framework. That’s not a criticism of FastAPI — it’s a description of scope. Observability is just where that difference becomes most visible.
Database Migrations
FastAPI has no built-in migration story. The standard approach is Alembic — a good tool — but getting there means adding SQLAlchemy as an ORM layer, running alembic init, editing alembic.ini, configuring env.py to point at your models, and running alembic upgrade head as a separate deployment step. That's three tools and several config files for something that every production service needs.
GoFr has migrations built in. Write a Go file with your migration logic, register it with the app, and it runs on startup. One tool, one pattern.
If you’ve ever debugged a Kubernetes pod that started before migrations finished, you know exactly why “runs on startup, automatically” is the right design.
Performance
Throughput: Go services handle 50,000–100,000+ RPS on typical HTTP endpoints. FastAPI handles 25,000–35,000 RPS per single Uvicorn worker. To scale FastAPI horizontally you add more worker processes — and each Uvicorn worker consumes roughly 150–200 MB of RAM independently. A GoFr service baseline sits at 10–50 MB.
Docker image size: A GoFr service compiles to a 5–15 MB static binary. With a multi-stage build to a scratch container, the image size is roughly the binary size. An optimized FastAPI image using Alpine multi-stage build lands around 105 MB. Starting from the default build, you’re looking at closer to 1 GB.
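A typical multi-stage build that gets you to that roughly binary-sized image looks like the sketch below. The paths and Go version are placeholders, and note that a scratch image needs CA certificates copied in if the service makes outbound TLS calls:

```dockerfile
# build stage: compile a static binary (no cgo, stripped symbols)
FROM golang:1.24 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app .

# final stage: just the binary, nothing else
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```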
Cold start: Go binaries start near-instantly. FastAPI with Uvicorn takes 1–3 seconds in production.
Now the part most GoFr comparisons skip: in raw HTTP benchmarks — the kind that throw 100,000 requests at a bare endpoint — GoFr shows higher latency than Gin, Echo, and Fiber. A July 2025 benchmark on real hardware (Intel i5-12500H, Go 1.24.3) measured GoFr’s basic hello-world latency at 14.62ms versus Echo’s 2.53ms.
That number is real. It’s also measuring the wrong thing. GoFr runs its full observability stack on every request, even in a hello-world test — logging, emitting metrics, propagating trace context. Gin and Echo are bare routers. They have none of that. Once you add equivalent middleware to Gin to match what GoFr ships by default, the gap closes considerably. The benchmark is comparing a stripped track car to a car that already has its engine, seats, and dashboard installed.
Concurrency
Python has the GIL. One thread executes Python bytecode at a time inside a single process. FastAPI handles this well for I/O-bound work through async/await — most web APIs spend their time waiting on databases and external calls, so this is fine for most situations.
CPU-bound work or multi-core utilization requires multiple worker processes. Each one is a full Python process with its own memory. Four workers at 150 MB each is 600 MB before your application does anything.
Go uses goroutines — the runtime manages them, not the OS. A goroutine starts at about 2 KB of stack space. You can run tens of thousands concurrently on a single machine without the memory overhead that separate processes carry. There’s no GIL; Go’s memory model handles concurrent access differently at the language level.
Running 50 microservices where each needs 4 workers versus 50 services each needing one process is a real infrastructure cost difference. At small scale it doesn’t matter. At 50 services it starts to.
Where FastAPI Is the Better Pick
FastAPI is good software. If your team is Python-native, switching languages has a real productivity cost — that’s not nothing, especially on a team where Go experience is thin. FastAPI’s automatic OpenAPI documentation saves genuine time. And if your service touches ML inference, data processing, or anything in Python’s scientific ecosystem — NumPy, Pandas, PyTorch — there’s no real Go equivalent. That’s not a knock, it’s just true.
For rapid prototyping or internal tooling where the team constraint is Python expertise, FastAPI will get you to something working faster.
The comparison shifts specifically when you’re building services meant to run in Kubernetes, be maintained by a team across years, and be debugged at 2am by someone who wasn’t on the original project.
Side by Side
The numbers from above, FastAPI first in each row:
Install: ten packages wired by hand vs. one go get
Endpoint: handler plus three support files vs. one handler
Observability: 60–80 lines of setup vs. on by default
Migrations: Alembic plus config files vs. built in, runs on startup
Throughput: 25,000–35,000 RPS per worker vs. 50,000–100,000+ RPS
Memory: 150–200 MB per worker vs. 10–50 MB baseline
Image size: ~105 MB optimized vs. 5–15 MB static binary
Cold start: 1–3 seconds vs. near-instant
What I Took Away
The thing that stuck with me wasn’t the performance numbers or the package count. It was the observability section. I’ve set up Prometheus middleware in Python projects before. I know what that afternoon looks like — install four packages, write the middleware, wire it to the app, realize the trace IDs aren’t correlating with the logs, spend another hour figuring out why. It’s not hard, it’s just friction that accumulates.
GoFr’s approach is to have decided that stuff for you. The tradeoff is you don’t get to make different choices — you get Prometheus on 2121, you get structured JSON logs, you get Zipkin/Jaeger tracing. If you wanted something else, you’re working against the grain. That’s what opinionated means.
Whether that tradeoff is worth it depends on your situation. For teams building multiple microservices who don’t want to relitigate infrastructure decisions on every new service, it’s a reasonable deal. For teams with specific observability requirements or strong opinions about the stack, maybe not.
If you’re starting a new microservice and Go is an option, spend a few hours with GoFr before defaulting to the familiar stack. The install step alone will tell you something.
GoFr is open source at github.com/gofr-dev/gofr with documentation at gofr.dev/docs. It’s listed in the CNCF Landscape.
FastAPI vs GoFr: I Built the Same Microservice in Both. GoFr Won. was originally published in Level Up Coding on Medium.