How Two Engineers Run a Multi-Million-Dollar Digital Payment Platform (Without Microservices)

By Alex Fomin · Published March 20, 2026 · 7 min read · Source: Fintech Tag

Everyone told us to use microservices. We chose a monolith. Two engineers maintain 15 domains, over a thousand tests, and a system that processes millions monthly. Here’s why it works.

The conventional wisdom says: if you’re building a payment platform, you need microservices. Separate services for users, transactions, providers, wallets, webhooks. Each with its own database, its own deployment pipeline, its own on-call rotation.

The reality for a team of two: microservices give you 15 repositories, 15 deployment pipelines, distributed tracing to debug, eventual consistency to reason about, and network failures between every domain boundary. The operational overhead would consume more engineering time than the actual product work.

We went the other direction. One repository, one process, one database — but with strict internal boundaries that give us the modularity benefits of microservices without the operational cost.

The Architecture: Vertically-Sliced Domains

The application is split into 15 business domains. Each domain owns its models, services, repositories, and API endpoints. Domains communicate through a message broker — not direct imports.

The domains cover the full surface of a payment platform: order lifecycle, financial ledger, merchant management, exchange rates, fee calculation, payment method details, webhook delivery, dispute resolution, authentication, user management, notifications, device integration, compliance, file storage, and support.

The key constraint: no domain imports another domain’s internals. All inter-domain communication goes through the message broker. This gives us the same isolation guarantees as microservices without the network boundary.

Each domain follows the same internal structure: models and DTOs, a repository layer for database access, a service layer for business logic, and event definitions for what it publishes and subscribes to. Above the domains sits an orchestration layer for cross-domain operations and a routing layer that exposes HTTP endpoints.
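As a sketch of those layers (all names here are illustrative, not the actual codebase), a single domain might look like this — models, a repository for database access, a service for business logic, and the events it publishes:

```python
from dataclasses import dataclass
from decimal import Decimal
from typing import Callable

# models / DTOs
@dataclass(frozen=True)
class Payment:
    payment_id: str
    amount: Decimal
    status: str  # e.g. "pending", "confirmed"

# event definitions: what this domain publishes
@dataclass(frozen=True)
class PaymentConfirmed:
    payment_id: str
    amount: Decimal

# repository layer: database access only
class PaymentRepository:
    def __init__(self) -> None:
        self._rows: dict[str, Payment] = {}  # in-memory stand-in for a DB table

    def save(self, payment: Payment) -> None:
        self._rows[payment.payment_id] = payment

    def get(self, payment_id: str) -> Payment:
        return self._rows[payment_id]

# service layer: business logic — no SQL, no HTTP, no other domain's imports
class PaymentService:
    def __init__(self, repo: PaymentRepository,
                 publish: Callable[[PaymentConfirmed], None]) -> None:
        self._repo = repo
        self._publish = publish  # broker publisher is injected, never imported

    def confirm(self, payment_id: str) -> PaymentConfirmed:
        payment = self._repo.get(payment_id)
        self._repo.save(Payment(payment.payment_id, payment.amount, "confirmed"))
        event = PaymentConfirmed(payment.payment_id, payment.amount)
        self._publish(event)  # other domains react to this via the broker
        return event
```

The service never reaches into another domain; its only outward channel is the injected publisher.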

Key Terms

| Term | Definition |
|------|------------|
| Vertical slice | A domain that owns the full stack from API to database for its bounded context. No shared models across slices. |
| Bounded context | DDD concept: a boundary within which a particular model and language are consistent. Each domain is a bounded context. |
| Domain event | A message published when something significant happens in a domain (e.g., “payment confirmed”, “dispute opened”). |

Message-Driven Communication

Domains talk through RabbitMQ. When the payment domain confirms a transaction, it publishes an event. The webhook domain subscribes and delivers the merchant callback. The ledger domain subscribes and updates balances.
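In production this runs over RabbitMQ; as a minimal in-process analogue (an illustrative sketch, not the real broker code), the fan-out looks like this:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process stand-in for a RabbitMQ topic exchange."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, routing_key: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[routing_key].append(handler)

    def publish(self, routing_key: str, payload: dict) -> None:
        # every subscriber of this routing key gets its own copy of the event
        for handler in self._subscribers[routing_key]:
            handler(payload)

bus = EventBus()
delivered: list[str] = []
ledger: dict[str, int] = {}

# webhook domain: delivers the merchant callback
bus.subscribe("payment.confirmed", lambda e: delivered.append(e["payment_id"]))
# ledger domain: updates balances
bus.subscribe(
    "payment.confirmed",
    lambda e: ledger.update({e["merchant_id"]: ledger.get(e["merchant_id"], 0) + e["amount"]}),
)

# payment domain confirms a transaction and publishes one event
bus.publish("payment.confirmed", {"payment_id": "p1", "merchant_id": "m1", "amount": 100})
```

The publisher knows nothing about who consumes the event, which is exactly what keeps the domains decoupled.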

We run 13 message consumers, each as a separate process. They handle webhook delivery, payment and withdrawal automation, ledger operations, notification sending, rate conversion, compliance checks, file processing, and device matching.

Each consumer is its own process with its own health check and metrics endpoint. But they all share the same codebase, the same DI container, the same domain models. Adding a new consumer means adding a new runner entry — not a new repository.

The broker gives us message ordering within a queue, dead letter handling for failed messages, and prefetch control per consumer. The framework adds tracing correlation, metrics middleware, and structured exception handling on top.

Scheduled Workers

Beyond message consumers, we run time-based schedulers for operations that need periodic execution:

Examples: auto-declining payments that exceed TTL, auto-disputing contested transactions after a deadline, refreshing exchange rates from external APIs, assigning pending withdrawals to available operators, managing operator working schedules, cleaning expired reservations, monitoring device heartbeats.

Each scheduler is a Python process running an asyncio event loop. No HTTP server, no message broker dependency — they query the database directly on interval and execute domain logic. All inherit from a common base class with configurable intervals and batch sizes.
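A hedged sketch of that common base class (interval and batch-size names are assumptions, the real class will differ):

```python
import asyncio

class BaseScheduler:
    """Common base: process one batch of work every `interval` seconds."""

    def __init__(self, interval: float, batch_size: int) -> None:
        self.interval = interval
        self.batch_size = batch_size
        self._running = False

    async def process_batch(self, batch_size: int) -> None:
        raise NotImplementedError  # subclasses query the DB and run domain logic

    async def run(self) -> None:
        self._running = True
        while self._running:
            await self.process_batch(self.batch_size)
            await asyncio.sleep(self.interval)

    def stop(self) -> None:
        self._running = False

class AutoDeclineScheduler(BaseScheduler):
    """Illustrative subclass: decline payments whose TTL has expired."""

    def __init__(self, interval: float, batch_size: int, expired: list[str]) -> None:
        super().__init__(interval, batch_size)
        self.expired = expired            # stand-in for a DB query result
        self.declined: list[str] = []

    async def process_batch(self, batch_size: int) -> None:
        batch, self.expired = self.expired[:batch_size], self.expired[batch_size:]
        self.declined.extend(batch)       # in production: UPDATE ... SET status='declined'
        if not self.expired:
            self.stop()                   # demo only; a real scheduler runs forever
```

Because every scheduler shares this loop, tuning is just two constructor arguments per worker.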

Key Terms: Workers

| Term | Definition |
|------|------------|
| Message consumer | A broker subscriber that processes events from queues. Each runs as an independent process. |
| Scheduled worker | A time-based process that executes domain logic on an interval (e.g., auto-declining expired payments). |
| Dead letter queue | A queue for messages that failed processing after the maximum number of retries. Used for manual inspection. |
| Prefetch count | The number of messages a consumer fetches from the broker at once. Controls concurrency and memory usage. |

Dependency Injection

With 15 domains, 13 consumers, 13 schedulers, and an orchestration layer on top, the wiring gets complex fast. We use a declarative DI container that wires everything at startup.

The container is hierarchical: each domain wires its own repositories and services, and an application-level container composes the domains for the HTTP services, consumers, and schedulers.

The container verifies all dependencies at startup. If anything is missing, the application fails to start — not at runtime when that code path is first hit in production.

Every HTTP endpoint, every consumer handler, and every scheduled worker receives its dependencies via injection. No global imports, no service locators, no hidden coupling.
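To make the fail-at-startup idea concrete, here is a tiny hand-rolled container (a sketch of the pattern, not the actual library we use) that reads constructor annotations and verifies the whole graph before anything runs:

```python
import inspect

def _deps(cls: type) -> list[type]:
    """Constructor dependencies, read from type annotations (skipping self)."""
    params = list(inspect.signature(cls.__init__).parameters.values())[1:]
    return [p.annotation for p in params
            if p.annotation is not inspect.Parameter.empty]

class Container:
    def __init__(self) -> None:
        self._providers: set[type] = set()
        self._instances: dict[type, object] = {}

    def register(self, cls: type) -> "Container":
        self._providers.add(cls)
        return self

    def verify(self) -> None:
        """Fail at startup, not at runtime, if any dependency is unregistered."""
        for cls in self._providers:
            for dep in _deps(cls):
                if dep not in self._providers:
                    raise RuntimeError(
                        f"{cls.__name__} needs unregistered {dep.__name__}")

    def resolve(self, cls: type):
        """Build (and cache) an instance, resolving dependencies recursively."""
        if cls not in self._instances:
            self._instances[cls] = cls(*(self.resolve(d) for d in _deps(cls)))
        return self._instances[cls]
```

Calling `verify()` once at boot turns a would-be production `AttributeError` into an immediate, loud startup failure.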

The Test Suite

For a two-person team, the test suite is the safety net. We can’t do extensive manual QA — the tests must catch everything.

The setup: pytest with async support, parallel execution across workers, factory-based test data generation, branch coverage analysis, and mutation testing that modifies source code and verifies tests catch the change.

Test layers: unit tests for isolated domain logic, integration tests against a real database, HTTP endpoint tests, and load testing scenarios for stress testing.

Test fixtures are centralized — database cleanup, cache reset, DI container overrides, HTTP client stubs. Every test gets a clean state.
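The factory-based test data generation mentioned above can be sketched like this (an assumed shape, not our actual factories): sensible defaults, unique IDs, and tests override only the fields they care about.

```python
import itertools
from dataclasses import dataclass
from decimal import Decimal

_seq = itertools.count(1)  # guarantees unique IDs across a test run

@dataclass
class Order:
    order_id: str
    merchant_id: str
    amount: Decimal
    status: str

def make_order(**overrides) -> Order:
    """Factory: every field has a default; tests override only what matters."""
    defaults = dict(
        order_id=f"ord-{next(_seq)}",
        merchant_id="m-1",
        amount=Decimal("10.00"),
        status="pending",
    )
    defaults.update(overrides)
    return Order(**defaults)
```

A test that checks decline logic then reads `make_order(status="expired")` and nothing else, which keeps the intent of each test visible.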

Build and Deploy

The monolith has one Docker image but deploys as 30+ pods in Kubernetes via a Helm chart.

The image is built on a security-hardened base with multi-stage build and deterministic dependency resolution. A Helm chart defines every deployment as a separate Kubernetes resource.

| Component | Pods | Purpose |
|-----------|------|---------|
| HTTP services | 3 | API endpoints for different integration types |
| Message consumers | 13 | Event processing: webhooks, automation, ledger, notifications, etc. |
| Schedulers | 13 | Time-based jobs: auto-decline, rate refresh, cleanup, monitoring |
| Frontend | 2 | Static web panels |

Infrastructure is declared as Helm dependencies: message broker with replicas, cache in master-slave and cluster configurations. Prometheus monitors are enabled for every service.

Kubernetes features we use: pod anti-affinity so same-type pods land on different nodes, priority classes for HTTP over background workers, environment overlays with encrypted secrets, ingress with TLS, readiness and liveness probes, and per-pod resource limits.

The same Docker image runs everywhere — the entrypoint controls what starts. Scale any consumer independently by adjusting replicas in the Helm values.
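The entrypoint dispatch can be sketched as follows (role names and the `APP_ROLE` variable are illustrative assumptions): one registry maps a role to a runner, and the container argument picks the entry.

```python
import os

# illustrative registry: role name -> callable that starts that process type
RUNNERS = {
    "http": lambda: "started HTTP server",
    "consumer:webhooks": lambda: "started webhook consumer",
    "scheduler:auto-decline": lambda: "started auto-decline scheduler",
}

def main(argv: list[str]) -> str:
    """One image, many roles: the CLI argument (or APP_ROLE) picks what runs."""
    role = argv[1] if len(argv) > 1 else os.environ.get("APP_ROLE", "http")
    if role not in RUNNERS:
        raise SystemExit(f"unknown role: {role}")
    return RUNNERS[role]()
```

In the Helm values, each deployment then differs only in its command argument and replica count.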

CI/CD pipeline: build → test → lint chart → push to registry → deploy to target namespace.

Observability

The platform has been instrumented from day one: Prometheus metrics and a health-check endpoint on every process, tracing correlation across the message broker, and structured exception logging.

Why Not Microservices (For Us)

| Dimension | Our monolith | Microservices (alternative) |
|-----------|--------------|------------------------------|
| Deployment | 1 image, Helm profiles | 15 images, 15 pipelines |
| Debugging | Stack trace, one process | Distributed tracing across services |
| Data consistency | Single DB transaction | Saga pattern, eventual consistency |
| Refactoring | Move code between domains | Move code between repos + APIs |
| Operational overhead | 1 codebase to maintain | 15 codebases, service mesh |
| Team size required | 2 engineers | 4–6 minimum |
| Local development | `docker compose up` | 15 services + cluster |
| Latency | In-process function call | Network hop per service call |

We spend 80% of time on product features and 20% on infrastructure. With microservices, we estimate that ratio would flip.

What We’d Do Differently

Enforce domain boundaries with import linting. We use code review to catch cross-domain imports, but an automated linter that fails CI would be more reliable.
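Off-the-shelf tools exist for this, but even a hand-rolled AST check could fail CI on a cross-domain import — a sketch with assumed domain package names:

```python
import ast

DOMAINS = {"payments", "ledger", "webhooks"}  # illustrative domain package names

def cross_domain_imports(source: str, own_domain: str) -> list[str]:
    """Return imports in `source` that reach into another domain's internals."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([alias.name for alias in node.names]
                     if isinstance(node, ast.Import)
                     else [node.module or ""])
            for name in names:
                top = name.split(".")[0]
                if top in DOMAINS and top != own_domain:
                    violations.append(name)
    return violations
```

Run over every file in a domain's package, a non-empty result fails the build — no reviewer vigilance required.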

Add contract tests between domains. When one domain publishes an event and another subscribes, there’s an implicit contract on the event schema. We’ve had breakages when one side changed the schema without updating the other.

Version the event schemas. As the platform grows, event schemas evolve. A schema registry or at least versioned models for events would prevent silent compatibility breaks.
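Even without a full schema registry, embedding a version in each event and dispatching on it would turn silent breaks into loud ones — a minimal sketch with an assumed event shape:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentConfirmedV1:
    payment_id: str
    amount: str           # kept as a string to avoid float drift on the wire
    schema_version: int = 1

def parse_payment_confirmed(payload: dict) -> PaymentConfirmedV1:
    """Dispatch on the embedded version; fail loudly on unknown versions."""
    version = payload.get("schema_version", 1)
    if version == 1:
        return PaymentConfirmedV1(**payload)
    raise ValueError(f"unsupported PaymentConfirmed schema_version: {version}")
```

A consumer that receives a version it does not understand rejects the message to the dead letter queue instead of misreading it.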


This article is part of a series on infrastructure decisions in digital payment platforms. The architecture described here handles real production volumes.
