Account takeover, a malicious RAT injection, and how to check whether your system is compromised right now.

On March 31, an attacker published malicious axios versions that install a cross-platform RAT (remote access trojan). For roughly three hours, a package with 100M+ weekly downloads was serving malware to everyone who pulled it. If you updated axios between 00:21 and 03:15 UTC, treat your system as compromised. Here’s your 60-minute response plan.
Table of Contents
- The 15-Minute Window: What You Need to Do Right Now
- How They Did It: The Account Takeover That Bypassed Everything
- The Technical Deception: Why Plain-Crypto-JS Evaded Detection
- Why Detection Failed: The Missing Provenance Metadata
- Remediation: Your Full 60-Minute Response Plan
- The Bigger Picture: Why Supply Chain Attacks Are the New Normal
- What Changes Now: Defense Strategies That Actually Work
- The Uncomfortable Truth: Can We Even Trust npm?
1. The 15-Minute Window: What You Need to Do Right Now
Stop reading for a moment and check your lockfile.
Search your package-lock.json (or yarn.lock / pnpm-lock.yaml) for these exact version strings:
- [email protected]
- [email protected]
- plain-crypto-js (any version; legitimate axios never depends on it)
If any of these appear, you installed the malicious versions between March 31, 00:21–03:15 UTC. That's the window you need to care about.
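A quick first pass, assuming your lockfiles use the default names: grep for the bad version strings and the rogue dependency. Expect some false positives (other packages can share a version number), so verify each match by hand.

```shell
# Rough lockfile triage: flag the compromised axios versions and any
# occurrence of the rogue plain-crypto-js dependency.
# False positives are possible -- inspect each match manually.
grep -nE 'plain-crypto-js|1\.14\.1|0\.30\.4' \
  package-lock.json yarn.lock pnpm-lock.yaml 2>/dev/null \
  || echo "no matches (or lockfiles not found)"
```

A non-empty match list means: open the lockfile entry and confirm which package the version string actually belongs to before moving to containment.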
Did you find them? Here's what that means:
If your lockfile contains either compromised axios version, any machine that ran npm install (or equivalent) after that package was published has likely executed the RAT dropper. The malware is designed to clean up after itself—it deletes its own traces from node_modules—but the presence of a plain-crypto-js folder is proof of execution.
The immediate actions (next 15 minutes):
- Check your build machines and CI/CD pipelines first. These are the highest-value targets. If your GitHub Actions, GitLab CI, or Jenkins instances pulled the malicious version, the attacker has credentials to your repositories, deployment keys, and cloud infrastructure tokens. Check the logs for any runs between 00:21–03:15 UTC on March 31.
- Isolate any machine that installed [email protected] or 0.30.4. Don't let it talk to the internet until you've assessed the damage. The RAT communicates with sfrclak.com:8000—if your network logs show traffic to that domain, assume full compromise.
- Check if plain-crypto-js exists in your node_modules. Run this command on affected machines:
find . -type d -name "plain-crypto-js" 2>/dev/null
If it exists anywhere, the dropper executed. The folder itself will be relatively empty (the malware cleans up), but its presence is the smoking gun.
Look for platform-specific artifacts. The RAT dropper leaves traces depending on your OS:
- macOS: Check /Library/Caches/com.apple.act.mond (the malware disguises itself as an Apple system process)
- Windows: Look for %PROGRAMDATA%\wt.exe (a copied PowerShell binary) and check for hidden PowerShell scripts
- Linux: Check /tmp/ld.py for a Python script; also check your cron jobs and systemd user services for suspicious entries
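A minimal sweep over the indicator paths above, written for a POSIX shell (the Windows path assumes a Unix-style mount; on Windows proper, check %PROGRAMDATA%\wt.exe from PowerShell instead):

```shell
# Check each known dropper artifact path; warn if any exists.
# Paths are the indicators listed above; adjust PROGRAMDATA to taste.
for p in \
  "/Library/Caches/com.apple.act.mond" \
  "${PROGRAMDATA:-/c/ProgramData}/wt.exe" \
  "/tmp/ld.py"
do
  if [ -e "$p" ]; then
    echo "WARNING: possible RAT artifact present: $p"
  fi
done
```

No output means none of the known artifacts are present on that machine; it does not prove the machine is clean.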
Document everything.
Take screenshots of your lockfiles, build logs, and any evidence of the malware. You'll need this for your incident response team and potentially for compliance/legal.
The hard truth:
If the malware executed on a development machine with SSH keys, cloud credentials in environment variables, or access to your private repositories, the attacker now has those credentials. The next 30 minutes aren't just about detection—they're about damage control.
This is why the 60-minute plan matters. You need to move fast.
2. How They Did It: The Account Takeover That Bypassed Everything
Here’s where the sophistication becomes clear. This wasn’t a vulnerability in npm’s code. This wasn’t a bug in the axios project. This was account takeover — and the attacker knew exactly how to exploit the trust infrastructure that makes open source work.
The theft: A long-lived npm token
The primary axios maintainer (jasonsaayman) had an npm access token stored, and the attacker obtained it: a long-lived, high-privilege token that granted publish rights to the axios package. That’s all they needed.
Why this matters: npm tokens are essentially passwords. If you have a valid token, npm doesn’t care where you’re publishing from. It doesn’t care if you’re on the maintainer’s laptop or a compromised server in Russia. The registry trusts the token.
The bypass: Direct npm publishing, no CI/CD
Here’s what makes this elegant from an attacker’s perspective. Legitimate axios releases go through this workflow:
- Developer commits code to GitHub
- GitHub Actions CI/CD pipeline runs tests, security checks, and build steps
- The pipeline publishes to npm with OIDC (OpenID Connect) credentials and SLSA build attestations
- npm records: “This package was built and signed by GitHub Actions run #12345”
This creates an auditable chain of custody. Security teams can verify that a published package actually came from the official build pipeline.
The attacker skipped all of that. They used the stolen token to publish directly to npm, in seconds, from anywhere. No GitHub Actions. No build pipeline. No attestations. No audit trail.
The red flag that should have triggered alarms
Legitimate axios releases include OIDC provenance metadata. When you install a real axios package, npm knows it came from a specific GitHub Actions workflow. The malicious versions? Completely bare. No provenance. No attestations. No way to trace them back to a build.
Security teams that were monitoring for this pattern should have caught it immediately. The question is: were they?
Why the attacker chose this moment
The attack happened on March 31, 2026, at 00:21 UTC — early morning in Europe, early evening on the US East Coast. This is classic operational security: maximum disruption, minimal live monitoring. The malicious versions were live for approximately 2–3 hours before npm security teams detected and removed them.
In that window, anyone running npm install axios (without a pinned version) got the trojan. Default behavior. No warnings. The npm registry tagged the malicious versions as "latest," which means developers installing axios for the first time pulled malware by default.
The sophistication level this reveals
This wasn’t a script kiddie with a leaked password. This was planned:
- They built up plausible publication history for plain-crypto-js (publishing a clean version first on March 30)
- They timed the attack for low-visibility hours
- They used multiple versions of axios (1.14.1 and 0.30.4) to catch different version pinning patterns
- They understood npm’s publish workflow well enough to know exactly what NOT to do to raise suspicions
This is the work of either a sophisticated threat actor or a team. It suggests patience, planning, and deep knowledge of how the JavaScript ecosystem actually works.
The lesson for your organization
Your CI/CD pipeline is only as secure as the credentials that feed it. A single leaked long-lived token can poison your entire dependency tree. And unlike a code vulnerability that can be patched, account compromise gives attackers persistent access — they can keep publishing malicious versions indefinitely until someone notices.
3. The Technical Deception: Why Plain-Crypto-JS Evaded Detection
The genius of this attack isn’t the RAT itself — remote access trojans are commodity malware. The genius is in how it was packaged and hidden in plain sight.
The fake dependency that shouldn’t exist
The real axios has exactly three dependencies:
- follow-redirects
- form-data
- proxy-from-env
All legitimate. All necessary. All well-known.
The malicious versions added a fourth: plain-crypto-js. It’s not imported anywhere in the axios source code. It’s not used by any function. It serves exactly one purpose: execute a postinstall script that drops the trojan.
Why plain-crypto-js specifically?
The package name is brilliant social engineering. It mimics crypto-js, a legitimate and popular cryptography library. The malicious package even copied crypto-js’s description, author name, and repository URL. To a developer casually scanning their dependency tree, plain-crypto-js looks like a typo or a legitimate cousin of crypto-js.
But it’s neither. It’s a trojan wrapper.
The evasion trick: The version that didn’t exist yet
Here’s where the attacker demonstrated deep understanding of how npm security scanning works.
On March 30 at 07:57 UTC, the attacker published a completely clean, legitimate version of plain-crypto-js. This established publication history. The package looked real.
Then, roughly 18 hours later, they published a new plain-crypto-js version carrying the malicious postinstall script: the version that the malicious axios releases pulled in as a dependency.
Why does this matter? Because many security scanners work by installing packages and checking their behavior. A scanner that analyzed plain-crypto-js before the attack either couldn’t find the infected version in the registry yet, or found only the clean one. The malicious behavior appeared only after the infected version was fully published and had propagated.
By the time security tools caught up, the damage was done.
The postinstall hook: Where the trojan lives
Here’s the actual attack vector. Inside the malicious plain-crypto-js’s package.json:
{
  "scripts": {
    "postinstall": "node setup.js"
  }
}

When npm installs a package, it automatically runs any postinstall scripts defined in package.json. Most packages use this for legitimate purposes — compiling native bindings, downloading data files, etc.
This one uses it to execute malware.
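To see the mechanism concretely, here is a harmless stand-in for the attack's install hook: a demo package whose postinstall runs an arbitrary script, exactly as the trojan's did. The demo-pkg name, setup.js contents, and log line are all hypothetical placeholders.

```shell
# Build a demo package whose postinstall hook runs a script on install.
mkdir -p demo-pkg
cat > demo-pkg/package.json <<'EOF'
{
  "name": "demo-pkg",
  "version": "1.0.0",
  "scripts": { "postinstall": "node setup.js" }
}
EOF
echo 'console.log("postinstall ran: this could be any code at all")' \
  > demo-pkg/setup.js
# npm install ./demo-pkg                    -> runs setup.js automatically
# npm install ./demo-pkg --ignore-scripts   -> does not
```

The only difference between this demo and the real attack is what setup.js contains.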
The setup.js dropper: Platform-specific payloads
The setup.js file is the actual dropper. It detects the operating system and executes platform-specific malware:
On macOS: The dropper stores a RAT binary at /Library/Caches/com.apple.act.mond. The path is deliberately crafted to look like a legitimate Apple system process (the reverse-DNS com.apple. prefix mimics Apple's own daemon naming). Most macOS users never browse /Library/Caches, and the name alone makes it blend in with system files.
On Windows: The dropper copies PowerShell to %PROGRAMDATA%\wt.exe (wt.exe is the Windows Terminal binary name). It then executes a hidden PowerShell script that establishes communication with the command-and-control server. Because PowerShell is a legitimate Windows tool, the execution might not raise alarms in basic EDR solutions.
On Linux: The dropper downloads a Python script to /tmp/ld.py and executes it. Linux systems often have Python pre-installed, making this a reliable attack vector. The script then establishes persistence, likely through cron jobs or systemd user services.
All three payloads communicate with the same command-and-control server at sfrclak.com:8000.
The cleanup: Hiding the evidence
Here’s where the attack becomes particularly sophisticated. After the dropper executes and delivers the second-stage payload, it cleans up:
- It removes itself (the setup.js file)
- It deletes the package.json containing the postinstall hook
- It replaces the package.json with a “clean” version
Anyone inspecting node_modules/plain-crypto-js after the fact will see a completely innocent-looking package. The malicious postinstall script is gone. The evidence is erased.
But the presence of the plain-crypto-js folder itself proves execution occurred — legitimate axios installations never have this dependency.
Why traditional detection failed
Most supply chain attack detection relies on one of three approaches:
- Dependency scanning: “Does this package have suspicious dependencies?” — The attack adds plain-crypto-js, which looks like a legitimate library by name.
- Behavior analysis: “Does this package do suspicious things?” — The malicious code only runs during postinstall. Most scanning tools don’t execute postinstall hooks in their analysis environments.
- Signature matching: “Have we seen this malware before?” — This is a new trojan, so signature databases won’t catch it initially.
The axios compromise exploited gaps in all three approaches simultaneously. It’s not a vulnerability in any single tool — it’s a carefully designed attack that understood the ecosystem’s blind spots.
The real lesson
This attack demonstrates that in a postinstall-hook world, you can’t just scan a package’s code. You have to assume that any package can execute arbitrary code the moment it’s installed. That’s why the next section — on why detection failed — is critical to understand. Because detection didn’t fail due to lack of tools. It failed due to missing infrastructure.
4. Why Detection Failed: The Missing Provenance Metadata
Legitimate axios releases include OIDC provenance metadata and SLSA build attestations — cryptographic proof that the package came from GitHub Actions, not a direct npm token. The malicious versions had neither.
The obvious red flag
When [email protected] appeared on March 31, it should have triggered automatic alerts:
- Previous version ([email protected]) had OIDC provenance
- New version had NO provenance
- Published directly via npm token, not GitHub Actions
- 4-day gap with zero GitHub commits
This isn’t subtle. One metadata check catches it.
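That metadata check can be sketched against registry data. Versions published with `npm publish --provenance` carry a `dist.attestations` field in their registry document; the fixture below mimics that shape (in practice you would fetch `https://registry.npmjs.org/<pkg>/<version>` or run `npm view <pkg>@<version> dist.attestations`; the exact predicateType value is an assumption here).

```shell
# Fixture mimicking the registry metadata of a version published with
# provenance; a token-published version would lack "attestations".
cat > version-meta.json <<'EOF'
{
  "dist": {
    "attestations": {
      "provenance": { "predicateType": "https://slsa.dev/provenance/v1" }
    }
  }
}
EOF
# The alert rule: previous version has attestations, new one does not.
if grep -q '"attestations"' version-meta.json; then
  echo "provenance present"
else
  echo "ALERT: no provenance attestation -- investigate before trusting"
fi
# -> provenance present
```

Run the same check against both the previous and the new version; a transition from "present" to "absent" is exactly the signal this attack emitted.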
Why it wasn’t caught
- Provenance checks are optional, not mandatory. npm supports them but doesn’t require them. The malicious versions just didn’t include them. npm accepted the publish anyway.
- Most organizations aren’t monitoring provenance gaps in real-time. They review vulnerabilities after the fact, not proactively. The attack happened at 00:21 UTC — off-hours for most Western teams.
- The window was too small. 2–3 hours of exposure before npm removed the packages. Millions of installations occurred in that window.
What’s broken in the ecosystem
- npm doesn’t enforce provenance for high-impact packages
- Security tools don’t universally alert on missing attestations
- CI/CD pipelines accept any version matching your version range
- Few organizations have automated supply chain integrity monitoring
The uncomfortable truth
The detection infrastructure existed. It just wasn’t mandatory or widely monitored. This attack reveals we built defenses but didn’t enforce their use. That changes now — but only because we needed a 100M-installation compromise to force it.
What you should ask your security team today
- Does your CI/CD pipeline require OIDC provenance for critical packages?
- Do you have alerts for “major package published without provenance when previous versions had it”?
- Can you automatically reject unsigned packages?
If the answer is “no” to any of these, you have the same gap that made this attack possible.
5. Remediation: Your Full 60-Minute Response Plan
Minutes 0–20: Identify & Contain
- Search your lockfiles for [email protected], [email protected], or any version of plain-crypto-js
- Check your CI/CD logs for any builds between March 31, 00:21–03:15 UTC that pulled these versions
- Look for the malware artifacts
- macOS: Check /Library/Caches/com.apple.act.mond
- Windows: Check %PROGRAMDATA%\wt.exe and search PowerShell history
- Linux: Check /tmp/ld.py and cron jobs
- Check network logs for outbound connections to sfrclak.com:8000
- Downgrade immediately:
npm install [email protected]   # or an earlier known-good version
rm -rf node_modules
npm install
- Block the C2 domain (sfrclak.com) at your firewall and DNS level
Minutes 20–40: Credential Rotation
If plain-crypto-js was installed on any machine with:
- SSH keys
- AWS/GCP/Azure credentials
- GitHub/GitLab tokens
- Database passwords
- API keys in environment variables
Assume those credentials are compromised and rotate them immediately:
- Rotate all GitHub/GitLab personal access tokens
- Rotate all AWS/GCP/Azure service account keys
- Rotate database credentials
- Rotate SSH keys
- Reset passwords for any account that could have been accessed
- Check cloud account activity logs for unauthorized access during the exposure window
- Enable MFA on all critical accounts if not already enabled
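For the environment-variable sweep, a quick and deliberately noisy triage helper; the name pattern is an assumption about common credential naming conventions, so extend it to match yours:

```shell
# List environment variable NAMES that look credential-like; anything
# listed here was readable by the dropper and should be rotated.
env | grep -iE 'token|secret|key|passw|credential' | cut -d= -f1 | sort
```

Printing only names (not values) keeps the triage output safe to paste into an incident ticket.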
Minutes 40–60: Audit & Long-Term Response
- Audit CI/CD access logs for any suspicious actions after March 31, 00:21 UTC:
- New deployments
- Repository changes
- Permission escalations
- Credential exports
- File an incident report with your legal/compliance team (GDPR, CCPA, HIPAA may require notification depending on your data)
- Re-image or restore any machine that installed the malicious versions from a verified clean backup taken before March 30, 2026
Update your dependency management policy:
- Pin axios to specific versions (never floating ranges like ^1.14.0)
- Require OIDC provenance checks in CI/CD
- Add automated alerts for missing provenance on critical packages
- Use pnpm with ignore-scripts=true for third-party dependencies (disables postinstall scripts by default)
Implement monitoring for future attacks:
- Alert on major packages published without provenance metadata
- Alert on new versions with unexpected new dependencies
- Alert on any package with postinstall scripts that make network calls
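The “unexpected new dependency” alert is the one that would have flagged this attack. Here is a sketch using fixtures that mimic `npm view <pkg>@<version> dependencies --json` output; a real pipeline would fetch both documents from the registry, and the version ranges below are illustrative.

```shell
# Old and new dependency maps (fixtures modeled on axios's real tree
# plus the rogue addition).
cat > deps-old.json <<'EOF'
{"follow-redirects":"^1.15.0","form-data":"^4.0.0","proxy-from-env":"^1.1.0"}
EOF
cat > deps-new.json <<'EOF'
{"follow-redirects":"^1.15.0","form-data":"^4.0.0","proxy-from-env":"^1.1.0","plain-crypto-js":"*"}
EOF
# Alert on any dependency name present in the new map but not the old.
for name in $(tr -d '"{}' < deps-new.json | tr ',' '\n' | cut -d: -f1); do
  grep -q "\"$name\"" deps-old.json || echo "ALERT: new dependency: $name"
done
# -> ALERT: new dependency: plain-crypto-js
```

In production you would use a real JSON parser rather than tr/cut, but the comparison logic is the whole alert.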
If you’re an enterprise security team
- Notify all teams using axios immediately
- Provide the remediation checklist above
- Require credential rotation for affected teams
- Conduct a supply chain audit of other high-risk dependencies
- Consider implementing a software bill of materials (SBOM) requirement
The hard truth about remediation
If the malware executed on a machine with access to your infrastructure, cleanup isn’t just deleting a folder. It’s credential rotation, log analysis, and potentially re-imaging systems. This is why the detection window matters so much — two hours is enough time for this to spread across your entire organization.
Move fast.
6. The Bigger Picture: Why Supply Chain Attacks Are the New Normal
This isn’t the first major npm compromise. It won’t be the last.
The pattern is clear:
- 2018: event-stream (8.8M weekly downloads) — Compromised to mine cryptocurrency
- 2021: ua-parser-js (7M weekly downloads) — Compromised for data exfiltration
- 2024: LiteLLM (Python, 300k+ installs) — Compromised via account takeover
- 2026 (today): Axios (100M weekly downloads) — RAT trojan via account takeover
Each attack gets bigger. Each attack gets more sophisticated. Each attack targets packages that are deeper in the dependency tree — packages that most developers don’t even know they’re using.
Why maintainer accounts are the new attack surface
Open source maintainers are targets because they’re:
- Under-resourced: Most popular packages are maintained by 1–3 people in their spare time
- Exhausted: Security isn’t their job; shipping features is
- Underpaid: Zero compensation, so sophisticated security practices are rare
- High-value: One compromised account = millions of installations
Securing a maintainer’s laptop is harder than securing a corporation. No IT team. No security monitoring. Maybe an old password used across multiple services.
The systemic problem
We’ve built an ecosystem where:
- Critical infrastructure depends on volunteer-maintained open source
- Those volunteers have minimal security resources
- Their account credentials are the only gate between users and malware
- We discovered this weakness only after a 100M-installation compromise
This isn’t a problem we can patch. It’s structural.
Why this keeps happening
Attackers know that detection is reactive. Even with the best tools, you can’t catch everything:
- Security researchers are few, and they sleep
- Automation has blind spots (postinstall scripts, provenance gaps)
- The window between publication and detection is measured in hours
- npm’s incentive structure doesn’t reward proactive security
So attackers keep trying. And they keep winning.
The uncomfortable economics
Securing npm at scale requires:
- Mandatory provenance checks (breaking backwards compatibility)
- Enforced lifecycle script restrictions (breaking existing workflows)
- 24/7 security monitoring (expensive)
- Investment in maintainer security infrastructure (philanthropic)
None of these generate revenue. None of these ship features. So they get deprioritized until a crisis forces action.
We’re now paying for remediation at a scale that dwarfs what prevention would have cost.
What this means for your organization
You can’t trust npm packages the way you used to. The model of “popular packages are probably safe because many eyes see them” is broken. Popular packages are actually higher-value targets.
Your supply chain is now your attack surface. Treat it that way.
7. What Changes Now: Defense Strategies That Actually Work
The axios attack reveals what works and what doesn’t. Here’s what to implement:
What doesn’t work
- Version ranges (^1.14.0, ~1.14.0) – They're convenient but they gave us the malicious version by default
- Lockfiles alone — They prevent surprises in development but don’t stop malicious updates from being installed
- Security scanning after the fact — By the time tools flag a compromise, millions of installations have occurred
- Trusting “popular = safe” — Axios is one of the most popular packages. It was still compromised
What actually works
1. Pinned versions, always
"axios": "1.14.0" // not "^1.14.0"
Pin exact versions for all production dependencies. Forces deliberate updates, not automatic ones.
2. Provenance checks in CI/CD
Reject packages without OIDC provenance metadata. npm’s built-in check verifies registry signatures and provenance attestations for the packages in your tree:
npm audit signatures
Make a passing check mandatory for critical packages.
3. Disable lifecycle scripts by default
Use pnpm’s security defaults:
pnpm install --ignore-scripts
Then explicitly allow scripts only for packages you trust. This single change would have prevented the axios RAT from executing.
4. Real-time provenance monitoring
Set up automated alerts for:
- Major packages published without OIDC provenance when previous versions had it
- Packages with unexpected new dependencies
- Packages with postinstall scripts that make network calls
5. Short-lived, scoped npm tokens
Replace long-lived tokens with automation-scoped, short-lived credentials. Rotate them regularly. A leaked token from 2020 should be useless in 2026.
6. Software Bill of Materials (SBOM)
Know what’s in your dependencies. Tools like cyclonedx or syft generate SBOMs automatically. Monitor them for unexpected changes.
7. Network segmentation
Isolate build machines. If a CI/CD system is compromised, it shouldn’t have direct access to production infrastructure.
For open source maintainers specifically
- Use GitHub’s OIDC integration for publishing (it’s built in, it’s free)
- Never use long-lived npm tokens
- Enable 2FA on your npm account (non-negotiable)
- Don’t store credentials in .npmrc files—use CI/CD pipelines
- Rotate credentials quarterly
The principle underneath all of this
Every defense here is based on the same idea: Make attacks slower and noisier.
The axios attack succeeded because:
- It was fast (2–3 hours)
- It was quiet (no obvious red flags until after installation)
- It bypassed detection infrastructure (no provenance, no alerts)
If you add friction at every step, attackers move on to easier targets. They always do.
Implementation priority
- This week: Pin all dependency versions
- This month: Enable provenance checks in CI/CD
- This quarter: Disable lifecycle scripts by default and implement provenance monitoring
- This year: Migrate to short-lived tokens and implement SBOM tracking
You can’t prevent all attacks. But you can prevent the easy ones.
8. The Uncomfortable Truth: Can We Even Trust npm?
After today, that’s the question developers are asking.
The technical answer is: Yes, with conditions.
npm isn’t broken. The infrastructure works. Provenance checks work. SLSA attestations work. The problem is that none of these are mandatory, and most people aren’t using them.
The practical answer is more complicated.
npm is:
- A business with shareholders and quarterly earnings pressure
- A platform with 2+ million packages (no one can police all of them)
- A service that values backwards compatibility over mandatory security
- Responsible for a critical piece of infrastructure it doesn’t control
That’s a hard position to be in. It also makes them a target.
What needs to change at npm
- Provenance should be mandatory for packages above a download threshold (top 1,000, top 10,000, whatever npm decides). No exceptions.
- Lifecycle scripts should be restricted by default. Postinstall hooks are powerful and dangerous. They should require explicit opt-in, not implicit permission.
- Account security should be mandatory for high-impact packages. 2FA minimum. Regular token rotation. Account activity monitoring.
- Detection should be automated. The gap between “suspicious package published” and “security team notified” should be minutes, not hours.
None of this is technically hard. It’s a policy decision.
The real problem: Incentives
npm makes money when developers use it. They lose money when they add friction. Mandatory provenance checks create friction. Restricting lifecycle scripts creates friction. These decisions are business decisions, not technical ones.
The axios compromise might finally change those incentives. Companies and organizations now have quantifiable losses — credential rotation, system remediation, incident response, regulatory notifications. When the cost of a supply chain attack exceeds the cost of prevention, change happens.
What enterprises should do
If you’re managing a large organization:
- Don’t trust npm defaults. Implement your own policies: require provenance, pin versions, disable lifecycle scripts, monitor for anomalies.
- Consider private registries. Nexus, Artifactory, GitHub Package Registry. Proxy npm traffic through them with your own governance layer.
- Implement SBOMs. Know what’s in your dependencies, track changes, alert on unexpected updates.
- Demand better from npm. Submit feedback. Vote with your infrastructure decisions. The only language open source platforms hear is adoption.
What open source needs
This attack isn’t npm’s fault in isolation. It’s a systemic problem:
- Open source maintainers need security budgets
- Package managers need to enforce security practices
- Enterprises need to invest in supply chain security
- The community needs to accept that free infrastructure has real costs
We got 20 years of “free security” because thousands of volunteers worked in obscurity. That era is ending. Supply chain attacks are the cost of that model.
The future
In 6 months:
- npm will enforce some form of provenance requirement
- Enterprise tools will add supply chain monitoring as default
- “axios-like” incidents will continue to happen (different packages, same pattern)
- More credentials will be rotated, more systems will be re-imaged
The axios attack is data point #N in a series. It won’t be the last.
The real takeaway
npm didn’t fail. Detection infrastructure didn’t fail. The ecosystem failed because we built critical infrastructure on a foundation of unpaid labor without mandatory security practices.
That’s not a technical problem. That’s a structural one. And it requires structural change.
What’s Next
I’m writing more about the engineering decisions behind efficient AI systems — from building production RAG pipelines that actually work, to the architecture patterns that separate “it runs” from “it scales.”
If this resonated with you — whether you’re shipping production systems, obsessed with the craft of building software that holds up at scale, or just determined not to be caught flat-footed by the next supply chain attack — follow me here on Medium. Find more about me at: https://tararoutray.com/
And if you want to talk shop, trade incident-response war stories, or just connect with someone who thinks deeply about hard problems — find me on LinkedIn.
The hardest problems don’t care how sophisticated your tooling is. They only care whether you stayed long enough to solve them.
Axios Just Poisoned 100M Installations: Here’s Your 60-Minute Response Plan was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.