Building a Secure, Internet-Isolated Fintech Platform on AWS
TemaBit Fozzy Group
Authors: Artem Hrechanychenko, Stas Kolenkin
Designing a production-grade, zero-internet-access environment with Amazon EKS and AWS managed services
Financial institutions face a structural tension: they are expected to deliver modern, cloud-native applications at speed while operating under some of the industry’s most stringent security and compliance requirements.
For many fintech and banking platforms, this tension resolves into a clear mandate: production workloads must not have direct internet access. Eliminating internet connectivity significantly reduces the attack surface and helps meet regulatory expectations — but it also raises difficult questions.
- How do you support modern CI/CD pipelines?
- How do you enable fast release cycles?
- How do developers remain productive without direct access to infrastructure?
This article outlines how we designed and deployed a fully internet-isolated, production-grade fintech platform on AWS, running multiple microservices on Amazon EKS. The result is an environment that meets regulatory and security requirements without compromising operational efficiency or developer velocity.
The Challenge: Security Without Slowing Delivery
Our client operates mission-critical banking applications and requires a platform with strict constraints:
- No direct internet access for workloads
- No NAT Gateway or Internet Gateway in production VPCs
- Full isolation from corporate and banking networks
- Complete auditability of all changes and network flows
- Support for modern DevOps and GitOps workflows
At the same time, the platform needed to support external customer-facing services and integrate with internal banking systems, while maintaining tight perimeter controls. Meeting these requirements demanded more than tightening security controls — it required re-thinking network, access, and operational design from first principles.
Architecture Overview: A Private, Controlled Core
We designed the platform around a private, internet-isolated core. All ingress and egress traffic flows through explicit, controlled entry and exit points, with no implicit connectivity.
At a high level:
- External traffic enters through AWS WAF and an Application Load Balancer (ALB), protected by AWS Shield and TLS certificates managed by AWS Certificate Manager.
- Traffic is forwarded to a Network Load Balancer (NLB) provisioned for the Istio ingress gateway, which handles internal routing into the service mesh.
- Internal services communicate using Amazon Route 53 Private Hosted Zones, maintaining clear separation between public and private resolution.
- All outbound traffic exits through AWS Transit Gateway and dedicated hardware firewalls, providing a single inspection and logging point.
There are no NAT Gateways, no Internet Gateways, and no public EKS endpoints. This design ensures every network path is intentional, observable, and auditable.
Enforcing Isolation at the Pod Network Layer
In regulated environments, isolation cannot rely solely on policy. It must be enforced at the infrastructure level.
One of our most critical requirements was to ensure that pod IP addresses could never be routed into corporate or banking networks — even accidentally.
Using Non-Routable CGNAT Space
We configured the VPC CNI plugin to assign pod IP addresses from a non-routable CGNAT range:
Pod CIDR: 100.64.0.0/16
This range is intentionally not allocated through AWS IPAM and is not routable across the Transit Gateway. As a result:
- Pod traffic cannot leak into internal networks
- Internal WAN routes never overlap with pod addresses
- Routing boundaries create an additional isolation layer
- The EKS cluster gains ample pod IP capacity (a /16 provides roughly 65,000 addresses)
VPC CNI Custom Networking
To implement this model, we used VPC CNI custom networking with ENIConfig resources per Availability Zone and node group. Each worker node uses an ENIConfig that references dedicated subnets mapped to the 100.64.0.0/16 range. This configuration allows us to:
- Separate node and pod routing paths
- Maintain deterministic pod egress flow toward the Transit Gateway and firewalls
- Apply strict routing and firewall policies
- Simplify troubleshooting with clean IP separation
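As a sketch, an ENIConfig for one Availability Zone might look like the following. The subnet and security-group IDs are placeholders; when the CNI is configured with `ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone`, the resource name must match the AZ name so each node picks up the correct pod subnet:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: eu-central-1a            # must match the node's Availability Zone label
spec:
  subnet: subnet-0a1b2c3d4e5f6a7b8   # hypothetical pod subnet in the non-routable pod range
  securityGroups:
    - sg-0f9e8d7c6b5a43210           # hypothetical security group for pod ENIs
```

One such resource exists per AZ, keeping node and pod traffic on separate subnets and routing paths.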
ENI Prefix Delegation
For better scalability, we enabled ENI prefix delegation. Each ENI receives a /28 prefix, providing up to 16 pod IPs per prefix. The benefits:
- Far fewer Amazon EC2 API calls
- Faster pod scheduling and accelerated node scale-out
- Higher pod density per node
- Lower operational overhead
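The capacity impact of prefix delegation is easy to quantify. The sketch below mirrors the logic of the AWS max-pods calculator under simplified assumptions: each secondary IP slot on an ENI holds a /28 prefix (16 addresses), and AWS recommends capping max pods at 110 for instances under 30 vCPUs and 250 above:

```python
def max_pods(enis: int, ips_per_eni: int, vcpus: int, prefix_delegation: bool = True) -> int:
    """Approximate EKS max-pods per node, following the AWS max-pods-calculator heuristic."""
    # Each ENI reserves its primary IP for the node; remaining slots carry pods.
    ips_per_slot = 16 if prefix_delegation else 1  # a /28 prefix yields 16 pod IPs
    raw = enis * (ips_per_eni - 1) * ips_per_slot + 2
    if not prefix_delegation:
        return raw
    # With delegation, raw capacity far exceeds what kubelet should schedule,
    # so apply the AWS-recommended ceiling.
    cap = 110 if vcpus < 30 else 250
    return min(raw, cap)

# An m5.large (3 ENIs, 10 IPs each, 2 vCPUs):
print(max_pods(3, 10, 2, prefix_delegation=False))  # 29 without delegation
print(max_pods(3, 10, 2))                           # 110 with delegation
```

In other words, prefix delegation moves the bottleneck from ENI slots to the recommended kubelet ceiling, which is why pod density and scheduling speed both improve.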
Configuration Details
We run the VPC CNI plugin in custom network configuration mode:
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
AWS_VPC_K8S_CNI_EXTERNALSNAT=false
Setting AWS_VPC_K8S_CNI_EXTERNALSNAT=false ensures the node performs SNAT. This enables fully deterministic egress through the firewall while preserving pod identity visibility for the Security Operations Center (SOC) and audit systems.
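In a live cluster these settings land on the `aws-node` DaemonSet. A minimal sketch of applying them by hand is shown below (in our environment this change flows through the GitOps pipeline rather than an imperative command):

```shell
# Enable custom networking on the VPC CNI and tell it to select
# ENIConfig resources by the node's Availability Zone label.
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true \
  ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
```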
Routing Isolation as a Security Control
Because 100.64.0.0/16 is non-routable through the Transit Gateway:
- Pod subnets are unreachable from internal networks at the routing layer
- No accidental trust relationships can form
- Zero risk of IP overlap with banking systems
- Routing boundaries enforce least privilege by default
Application egress still works normally. Pod traffic is SNATed on the node, and egress leaves the VPC using the node’s primary VPC IP address. This address is routable through the Transit Gateway and inspected at the hardware firewall.
This design combines pod-level isolation with controlled egress and full auditability — a hardened network posture aligned with financial regulatory requirements.
DNS as a Security Control
DNS is often overlooked — but in isolated environments, it becomes a critical control plane.
Instead of using Amazon Route 53 Resolver for inbound and outbound queries, we deployed EC2-based DNS resolvers with static IPs. These resolvers:
- Host authoritative records for platform-specific internal domains
- Forward queries for broader internal namespaces
- Maintain predictable behavior for firewall rules and monitoring
This model gives us complete control over DNS behavior, change management, and inspection, while maintaining the benefits of AWS private networking.
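To make this concrete, a resolver built on Unbound might combine authoritative local records for the platform domain with conditional forwarding for broader corporate namespaces. The domain names and addresses below are illustrative placeholders, not our actual configuration:

```
server:
  interface: 0.0.0.0
  access-control: 10.10.0.0/16 allow        # hypothetical platform VPC CIDR

# Authoritative records for the platform's internal domain
local-zone: "platform.internal." static
local-data: "argocd.platform.internal. IN A 10.10.20.15"

# Forward broader internal namespaces to upstream corporate resolvers
forward-zone:
  name: "corp.internal."
  forward-addr: 10.200.0.2
  forward-addr: 10.200.0.3
```

Because the resolvers sit on static EC2 IPs, firewall rules and DNS monitoring remain stable and predictable.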
Access Model: Zero Direct Human Access
To maintain auditability and reduce risk, we adopted a zero-direct-access model for both production and test environments.
- Developers do not access Amazon EKS clusters directly
- All changes flow through GitOps pipelines using Argo CD
- Amazon EKS API endpoints are private and inaccessible from the internet
This model simplifies compliance, improves traceability, and removes an entire class of operational risk — while still enabling rapid iteration through automated pipelines.
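Under this model, a deployment is described declaratively and reconciled by Argo CD from an internal Git repository. A sketch of such an Application manifest follows; the service name, repository URL, and project are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service              # hypothetical microservice
  namespace: argocd
spec:
  project: production
  source:
    repoURL: https://git.internal.example/platform/deployments.git  # internal Git, no internet needed
    targetRevision: main
    path: payments-service
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert out-of-band drift
```

Every change is a Git commit, so the audit trail and the deployment history are the same artifact.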
Security Built Into the Platform
Security controls are layered and automated throughout the environment:
- AWS WAF and AWS Shield protect external entry points
- Service Control Policies (SCPs) restrict unnecessary AWS API usage
- AWS CloudTrail, AWS Config, and GuardDuty provide continuous auditing
- Encryption is enforced using AWS KMS customer-managed keys
- Automated compliance checks and remediation run continuously
- Scheduled security audits are executed via Prowler
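As one example of the SCP layer, a policy can deny the API calls that would ever create an internet path, enforcing the no-IGW/no-NAT invariant at the organization level regardless of IAM permissions. This is a simplified sketch, not our full policy set:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInternetPaths",
      "Effect": "Deny",
      "Action": [
        "ec2:CreateInternetGateway",
        "ec2:AttachInternetGateway",
        "ec2:CreateNatGateway"
      ],
      "Resource": "*"
    }
  ]
}
```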
The platform successfully passed the AWS Foundational Technical Review (FTR) and a dedicated Amazon EKS security audit, validating the architecture against AWS best practices.
Automation and CI/CD in a Private World
Internet isolation does not eliminate CI/CD — it changes how it’s designed.
All pipelines run on private infrastructure and interact exclusively through VPC endpoints and internal services. Key elements include:
- Golden AMIs built and signed using EC2 Image Builder
- Automated patching and configuration via AWS Systems Manager
- Image scanning and secret detection integrated into CI workflows
- Amazon EKS Pod Identity for secure, token-free IAM access
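With EKS Pod Identity, granting a workload AWS permissions reduces to associating an IAM role with its Kubernetes service account; no long-lived credentials are mounted into pods. A sketch of the association (cluster, namespace, and role names are placeholders):

```shell
# Bind a hypothetical IAM role to a service account via EKS Pod Identity
aws eks create-pod-identity-association \
  --cluster-name prod-eks \
  --namespace payments \
  --service-account payments-sa \
  --role-arn arn:aws:iam::111122223333:role/payments-pod-role
```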
The result is a delivery pipeline that is fully automated, fully auditable, and entirely private.
FinOps: Making Private Infrastructure Cost-Effective
A common misconception is that private environments are inherently expensive. In practice, cost discipline must be intentional.
We applied FinOps principles from the start:
- Karpenter dynamically right-sizes EKS compute capacity
- AWS Graviton instances reduce compute costs for compatible workloads
- Managed services (ECR, CodeArtifact) replace self-hosted tooling
- Eliminating NAT Gateways significantly reduces egress costs
- AWS Budgets and cost allocation tags provide visibility and control
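For instance, steering compatible workloads onto Graviton is a one-line scheduling constraint in a Karpenter NodePool. The sketch below assumes Karpenter's v1 API and a hypothetical default EC2NodeClass:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-arm64
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]           # prefer Graviton for compatible workloads
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                 # hypothetical EC2NodeClass
  limits:
    cpu: "200"                        # cap total provisioned capacity
```

Karpenter then right-sizes and consolidates nodes within these bounds automatically.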
Security and cost efficiency are not opposites — when designed together, they reinforce each other.
Lessons Learned
Several important themes emerged during the design and operation of this platform, each reinforcing the criticality of early architectural decisions in regulated environments.
- Network design comes first. Operating without Internet Gateways or NAT Gateways fundamentally shapes every downstream decision, from CI/CD pipelines to DNS resolution and service connectivity. Treating the network as an afterthought would have resulted in costly rework. By designing routing, ingress, egress, and VPC endpoints upfront, we were able to build security into the foundation rather than layering it on later.
- GitOps changes culture, not just tooling. Removing direct human access to clusters improves security and auditability, but it also requires a shift in how teams work. Developers must trust automation and pipelines rather than rely on ad hoc fixes. Clear communication, training, and strong feedback loops were essential to making this transition successful and ensuring teams remained productive.
- DNS is foundational infrastructure. In an internet-isolated environment, DNS becomes a critical control plane rather than a background service. Taking ownership of DNS behavior increased our operational responsibilities, particularly around availability and scaling, but it also provided predictability, stronger security boundaries, and better alignment with firewall and monitoring requirements.
- Security is a continuous process. Passing an audit is not the end state. Maintaining a strong security posture requires constant validation, automated controls, and regular testing. Tools like AWS Config, Prowler, and Service Control Policies helped ensure that security expectations were enforced consistently over time, even as the platform evolved.
- Private environments don’t have to be slow or expensive. With the right architectural choices, automation, and cost controls, a fully private platform can still support fast delivery and remain cost-efficient. Technologies such as Karpenter, AWS Graviton, and managed AWS services played a key role in balancing security, performance, and cost.
Recommendations for Regulated AWS Environments
Teams designing regulated platforms on AWS can reduce risk and complexity by adopting a few foundational practices early in their journey.
Start by designing network and access models before building applications. Decisions around routing, ingress, egress, DNS, and identity have far-reaching implications and are much harder to change once workloads are in production. A clear upfront design creates a stable foundation for both security and scalability.
Treat DNS, routing, and identity as first-class security controls, not just supporting services. In tightly controlled environments, these layers define trust boundaries and determine what can communicate with what. Investing in their design pays dividends in auditability and operational clarity.
Adopting GitOps as the primary operational model significantly reduces human risk while improving traceability. When all changes flow through version-controlled pipelines, audits become simpler, rollbacks become safer, and environments remain consistent.
Automation should be embedded from day one. Compliance checks, security audits, and tagging enforcement are far more effective when they run continuously rather than as periodic manual reviews. Automation helps teams maintain compliance as the platform evolves.
Finally, use AWS managed services wherever possible. Managed services reduce operational overhead, improve reliability, and enable teams to focus on security, compliance, and application delivery rather than on infrastructure maintenance.
Conclusion
Building a fintech platform on AWS requires balancing security, compliance, and developer velocity. By eliminating direct internet access, using AWS managed services, and automating security and cost controls, we created a resilient, auditable, and cost-efficient environment for highly regulated workloads.
If you’re working in a similar space, start with network and access design. Integrate security controls into your development process, not as an afterthought. Use services like VPC endpoints, AWS Shield, Amazon GuardDuty, AWS Config, and Karpenter to simplify operations. Adopt GitOps and automate everything you can.
This approach proves that tight security controls and fast development cycles aren’t mutually exclusive — they just need the right architecture.