
Axios Just Poisoned 100M Installations: Here’s Your 60-Minute Response Plan

By Tara Prasad Routray · Published April 1, 2026 · 21 min read · Source: Level Up Coding

Account takeover, malicious RAT injected, and how to check if your system is compromised right now.

On March 31, an attacker published malicious axios versions that install a cross-platform RAT trojan. For three hours, 100M+ developers unknowingly pulled malware. If you updated axios between 00:21–03:15 UTC, treat your system as compromised. Here’s your 60-minute response plan.

Table of Contents

  1. The 15-Minute Window: What You Need to Do Right Now
  2. How They Did It: The Account Takeover That Bypassed Everything
  3. The Technical Deception: Why Plain-Crypto-JS Evaded Detection
  4. Why Detection Failed: The Missing Provenance Metadata
  5. Remediation: Your Full 60-Minute Response Plan
  6. The Bigger Picture: Why Supply Chain Attacks Are the New Normal
  7. What Changes Now: Defense Strategies That Actually Work
  8. The Uncomfortable Truth: Can We Even Trust npm?

1. The 15-Minute Window: What You Need to Do Right Now

Stop reading for a moment and check your lockfile.

Search your package-lock.json (or yarn.lock / pnpm-lock.yaml) for these exact version strings:

  - [email protected]
  - [email protected]
  - [email protected]

If any of these appear, you installed the malicious versions between March 31, 00:21–03:15 UTC. That's the window you need to care about.

Did you find them? Here's what that means:

If your lockfile contains either compromised axios version, any machine that ran npm install (or equivalent) after that package was published has likely executed the RAT dropper. The malware is designed to clean up after itself—it deletes its own traces from node_modules—but the presence of a plain-crypto-js folder is proof of execution.

The immediate actions (next 15 minutes):

  1. Check your build machines and CI/CD pipelines first. These are the highest-value targets. If your GitHub Actions, GitLab CI, or Jenkins instances pulled the malicious version, the attacker has credentials to your repositories, deployment keys, and cloud infrastructure tokens. Check the logs for any runs between 00:21–03:15 UTC on March 31.
  2. Isolate any machine that installed [email protected] or 0.30.4. Don't let it talk to the internet until you've assessed the damage. The RAT communicates with sfrclak.com:8000—if your network logs show traffic to that domain, assume full compromise.
  3. Check if plain-crypto-js exists in your node_modules. Run this command on affected machines:
find . -type d -name "plain-crypto-js" 2>/dev/null

If it exists anywhere, the dropper executed. The folder itself will be relatively empty (the malware cleans up), but its presence is the smoking gun.

Look for platform-specific artifacts. The RAT dropper leaves traces depending on your OS:

  - macOS: /Library/Caches/com.apple.act.mond
  - Windows: %PROGRAMDATA%\wt.exe and hidden PowerShell activity
  - Linux: /tmp/ld.py and unfamiliar cron jobs or systemd user services

Document everything.

Take screenshots of your lockfiles, build logs, and any evidence of the malware. You'll need this for your incident response team and potentially for compliance/legal.

The hard truth:

If the malware executed on a development machine with SSH keys, cloud credentials in environment variables, or access to your private repositories, the attacker now has those credentials. The next 30 minutes aren't just about detection—they're about damage control.

This is why the 60-minute plan matters. You need to move fast.

2. How They Did It: The Account Takeover That Bypassed Everything

Here’s where the sophistication becomes clear. This wasn’t a vulnerability in npm’s code. This wasn’t a bug in the axios project. This was account takeover — and the attacker knew exactly how to exploit the trust infrastructure that makes open source work.

The theft: A long-lived npm token

The primary axios maintainer (jasonsaayman) had a long-lived, high-privilege npm token in circulation. The attacker obtained that token, which granted publish rights to the axios package. That's all they needed.

Why this matters: npm tokens are essentially passwords. If you have a valid token, npm doesn’t care where you’re publishing from. It doesn’t care if you’re on the maintainer’s laptop or a compromised server in Russia. The registry trusts the token.

The bypass: Direct npm publishing, no CI/CD

Here’s what makes this elegant from an attacker’s perspective. Legitimate axios releases go through this workflow:

  1. Developer commits code to GitHub
  2. GitHub Actions CI/CD pipeline runs tests, security checks, and build steps
  3. The pipeline publishes to npm with OIDC (OpenID Connect) credentials and SLSA build attestations
  4. npm records: “This package was built and signed by GitHub Actions run #12345”

This creates an auditable chain of custody. Security teams can verify that a published package actually came from the official build pipeline.

The attacker skipped all of that. They used the stolen token to publish directly to npm, in seconds, from anywhere. No GitHub Actions. No build pipeline. No attestations. No audit trail.

The red flag that should have triggered alarms

Legitimate axios releases include OIDC provenance metadata. When you install a real axios package, npm knows it came from a specific GitHub Actions workflow. The malicious versions? Completely bare. No provenance. No attestations. No way to trace them back to a build.

Security teams that were monitoring for this pattern should have caught it immediately. The question is: were they?

Why the attacker chose this moment

The attack happened on March 31, 2026, at 00:21 UTC — early morning in Europe, early evening on the US East Coast. This is classic operational security: maximum disruption, minimal live monitoring. The malicious versions were live for approximately 2–3 hours before npm security teams detected and removed them.

In that window, anyone running npm install axios (without a pinned version) got the trojan. Default behavior. No warnings. The npm registry tagged the malicious versions as "latest," which means developers installing axios for the first time pulled malware by default.

The sophistication level this reveals

This wasn’t a script kiddie with a leaked password. This was planned:

  - A clean decoy version of plain-crypto-js was published roughly 18 hours in advance to build publication history
  - The publish was timed for off-hours (00:21 UTC) to minimize live monitoring
  - The stolen token was used to publish directly, bypassing the CI/CD audit trail entirely
  - The dropper was built to delete its own traces after execution

This is the work of either a sophisticated threat actor or a team. It suggests patience, planning, and deep knowledge of how the JavaScript ecosystem actually works.

The lesson for your organization

Your CI/CD pipeline is only as secure as the credentials that feed it. A single leaked long-lived token can poison your entire dependency tree. And unlike a code vulnerability that can be patched, account compromise gives attackers persistent access — they can keep publishing malicious versions indefinitely until someone notices.

3. The Technical Deception: Why Plain-Crypto-JS Evaded Detection

The genius of this attack isn’t the RAT itself — remote access trojans are commodity malware. The genius is in how it was packaged and hidden in plain sight.

The fake dependency that shouldn’t exist

The real axios has exactly three dependencies:

  - follow-redirects
  - form-data
  - proxy-from-env

All legitimate. All necessary. All well-known.

The malicious versions added a fourth: [email protected]. It’s not imported anywhere in the axios source code. It’s not used by any function. It serves exactly one purpose: execute a postinstall script that drops the trojan.

Why plain-crypto-js specifically?

The package name is brilliant social engineering. It mimics crypto-js, a legitimate and popular cryptography library. The malicious package even copied crypto-js’s description, author name, and repository URL. To a developer casually scanning their dependency tree, plain-crypto-js looks like a typo or a legitimate cousin of crypto-js.

But it’s neither. It’s a trojan wrapper.

The evasion trick: The version that didn’t exist yet

Here’s where the attacker demonstrated deep understanding of how npm security scanning works.

On March 30 at 07:57 UTC, the attacker published [email protected] — a completely clean, legitimate version. This established publication history. The package looked real.

Then, roughly 18 hours later, they published [email protected] with the malicious postinstall script. When the malicious axios versions resolved their dependencies, this infected version is what npm pulled in.

Why does this matter? Because many security scanners work by trying to install packages and check their behavior. When a scanner tried to install [email protected], it couldn’t find it yet in the registry — or it found a clean version. The malicious behavior only appeared after the package was fully published and propagated.

By the time security tools caught up, the damage was done.

The postinstall hook: Where the trojan lives

Here’s the actual attack vector. Inside [email protected]’s package.json:

{
"scripts": {
"postinstall": "node setup.js"
}
}

When npm installs a package, it automatically runs any postinstall scripts defined in package.json. Most packages use this for legitimate purposes — compiling native bindings, downloading data files, etc.

This one uses it to execute malware.

The setup.js dropper: Platform-specific payloads

The setup.js file is the actual dropper. It detects the operating system and executes platform-specific malware:

On macOS: The dropper stores a RAT binary at /Library/Caches/com.apple.act.mond. The path is deliberately crafted to blend in: the com.apple. prefix mimics Apple's own bundle identifiers, and most macOS users never browse /Library/Caches, so the name disappears among system files.

On Windows: The dropper copies PowerShell to %PROGRAMDATA%\wt.exe (wt.exe is the Windows Terminal binary name). It then executes a hidden PowerShell script that establishes communication with the command-and-control server. Because PowerShell is a legitimate Windows tool, the execution might not raise alarms in basic EDR solutions.

On Linux: The dropper downloads a Python script to /tmp/ld.py and executes it. Linux systems often have Python pre-installed, making this a reliable attack vector. The script then establishes persistence, likely through cron jobs or systemd user services.

All three payloads communicate with the same command-and-control server at sfrclak.com:8000.

The cleanup: Hiding the evidence

Here’s where the attack becomes particularly sophisticated. After the dropper executes and delivers the second-stage payload, it cleans up:

  1. It removes itself (the setup.js file)
  2. It deletes the package.json containing the postinstall hook
  3. It replaces the package.json with a “clean” version

Anyone inspecting node_modules/plain-crypto-js after the fact will see a completely innocent-looking package. The malicious postinstall script is gone. The evidence is erased.

But the presence of the plain-crypto-js folder itself proves execution occurred — legitimate axios installations never have this dependency.

Why traditional detection failed

Most supply chain attack detection relies on one of three approaches:

  1. Dependency scanning: “Does this package have suspicious dependencies?” — The attack adds plain-crypto-js, which looks like a legitimate library by name.
  2. Behavior analysis: “Does this package do suspicious things?” — The malicious code only runs during postinstall. Most scanning tools don’t execute postinstall hooks in their analysis environments.
  3. Signature matching: “Have we seen this malware before?” — This is a new trojan, so signature databases won’t catch it initially.

The axios compromise exploited gaps in all three approaches simultaneously. It’s not a vulnerability in any single tool — it’s a carefully designed attack that understood the ecosystem’s blind spots.

The real lesson

This attack demonstrates that in a postinstall-hook world, you can’t just scan a package’s code. You have to assume that any package can execute arbitrary code the moment it’s installed. That’s why the next section — on why detection failed — is critical to understand. Because detection didn’t fail due to lack of tools. It failed due to missing infrastructure.

4. Why Detection Failed: The Missing Provenance Metadata

Legitimate axios releases include OIDC provenance metadata and SLSA build attestations — cryptographic proof that the package came from GitHub Actions, not a direct npm token. The malicious versions had neither.

The obvious red flag

When [email protected] appeared on March 31, it should have triggered automatic alerts:

This isn’t subtle. One metadata check catches it.

Why it wasn’t caught

  1. Provenance checks are optional, not mandatory. npm supports them but doesn’t require them. The malicious versions just didn’t include them. npm accepted the publish anyway.
  2. Most organizations aren’t monitoring provenance gaps in real-time. They review vulnerabilities after the fact, not proactively. The attack happened at 00:21 UTC — off-hours for most Western teams.
  3. The window was too small. 2–3 hours of exposure before npm removed the packages. Millions of installations occurred in that window.

What’s broken in the ecosystem

  - Provenance and attestations are opt-in, so their absence is normal rather than alarming
  - Lifecycle scripts run arbitrary code by default on every install
  - Registry-side anomaly detection is reactive, measured in hours, not minutes

The uncomfortable truth

The detection infrastructure existed. It just wasn’t mandatory or widely monitored. This attack reveals we built defenses but didn’t enforce their use. That changes now — but only because we needed a 100M-installation compromise to force it.

What you should ask your security team today

  - Do we verify provenance metadata for critical dependencies before they reach production?
  - Do we get alerted when a core package publishes outside its normal build pipeline?
  - Do we block lifecycle scripts from untrusted packages by default?

If the answer is “no” to any of these, you have the same gap that made this attack possible.

5. Remediation: Your Full 60-Minute Response Plan

Minutes 0–20: Identify & Contain

  1. Search your lockfiles for [email protected], [email protected], or [email protected]
  2. Check your CI/CD logs for any builds between March 31, 00:21–03:15 UTC that pulled these versions
  3. Look for the malware artifacts
    - macOS:
    Check /Library/Caches/com.apple.act.mond
    - Windows: Check %PROGRAMDATA%\wt.exe and search PowerShell history
    - Linux: Check /tmp/ld.py and cron jobs
  4. Check network logs for outbound connections to sfrclak.com:8000
  5. Downgrade immediately:
    - rm -rf node_modules
    - npm install [email protected] # or an earlier known-clean version
  6. Block the C2 domain (sfrclak.com) at your firewall and DNS level

Minutes 20–40: Credential Rotation

If plain-crypto-js was installed on any machine with:

  - SSH keys
  - Cloud credentials in environment variables
  - Access to private repositories or deployment infrastructure

Assume those credentials are compromised and rotate them immediately:

  1. Rotate all GitHub/GitLab personal access tokens
  2. Rotate all AWS/GCP/Azure service account keys
  3. Rotate database credentials
  4. Rotate SSH keys
  5. Reset passwords for any account that could have been accessed
  6. Check cloud account activity logs for unauthorized access during the exposure window
  7. Enable MFA on all critical accounts if not already enabled

Minutes 40–60: Audit & Long-Term Response

  1. Audit CI/CD access logs for any suspicious actions after March 31, 00:21 UTC:
    - New deployments
    - Repository changes
    - Permission escalations
    - Credential exports
  2. File an incident report with your legal/compliance team (GDPR, CCPA, HIPAA may require notification depending on your data)
  3. Re-image or restore any machine that installed the malicious versions from a verified clean backup taken before March 30, 2026

Update your dependency management policy:

  - Pin exact versions for all production dependencies
  - Disable lifecycle scripts by default and allowlist only vetted packages
  - Require provenance metadata for critical dependencies

Implement monitoring for future attacks:

  - Alert on new versions of critical packages published without provenance
  - Alert on outbound traffic from build machines to unrecognized domains

If you’re an enterprise security team:

  - Push the IOCs (the version strings, file paths, and sfrclak.com) into your EDR and SIEM
  - Sweep the entire fleet for artifacts; don't rely on developers to self-report

The hard truth about remediation

If the malware executed on a machine with access to your infrastructure, cleanup isn’t just deleting a folder. It’s credential rotation, log analysis, and potentially re-imaging systems. This is why the detection window matters so much — two hours is enough time for this to spread across your entire organization.

Move fast.

6. The Bigger Picture: Why Supply Chain Attacks Are the New Normal

This isn’t the first major npm compromise. It won’t be the last.

The pattern is clear:

  - 2018: event-stream — a maintainer handoff abused to target cryptocurrency wallets
  - 2021: ua-parser-js — account takeover shipping cryptominers and credential stealers
  - 2022: node-ipc — a maintainer's own protestware destroying files
  - 2026: axios — account takeover shipping a cross-platform RAT

Each attack gets bigger. Each attack gets more sophisticated. Each attack targets packages that are deeper in the dependency tree — packages that most developers don’t even know they’re using.

Why maintainer accounts are the new attack surface

Open source maintainers are targets because they’re:

  1. Under-resourced: Most popular packages are maintained by 1–3 people in their spare time
  2. Exhausted: Security isn’t their job; shipping features is
  3. Underpaid: Zero compensation, so sophisticated security practices are rare
  4. High-value: One compromised account = millions of installations

Securing a maintainer’s laptop is harder than securing a corporation. No IT team. No security monitoring. Maybe an old password used across multiple services.

The systemic problem

We’ve built an ecosystem where:

  - A handful of volunteers control packages with hundreds of millions of installations
  - Any package can execute arbitrary code the moment it's installed
  - Trust is transitive: you inherit the security posture of every maintainer in your tree

This isn’t a problem we can patch. It’s structural.

Why this keeps happening

Attackers know that detection is reactive. Even with the best tools, you can’t catch everything:

  - Postinstall hooks run before most analysis environments ever execute them
  - Signature databases only recognize yesterday's malware
  - Even a two-to-three-hour window means millions of installations

So attackers keep trying. And they keep winning.

The uncomfortable economics

Securing npm at scale requires:

  - Mandatory provenance and attestation checks
  - Real-time, registry-side anomaly detection
  - Funding and security support for the maintainers everything depends on

None of these generate revenue. None of these ship features. So they get deprioritized until a crisis forces action.

We’re spending heavily on remediation because we refused to spend a fraction of that on prevention.

What this means for your organization

You can’t trust npm packages the way you used to. The model of “popular packages are probably safe because many eyes see them” is broken. Popular packages are actually higher-value targets.

Your supply chain is now your attack surface. Treat it that way.

7. What Changes Now: Defense Strategies That Actually Work

The axios attack reveals what works and what doesn’t. Here’s what to implement:

What doesn’t work

  - Reactive vulnerability scanning: it flags known CVEs, not fresh malicious publishes
  - Signature matching: a new trojan has no signature yet
  - Trusting popularity: popular packages are the highest-value targets, not the safest ones

What actually works

1. Pinned versions, always

"axios": "1.14.0"  // not "^1.14.0"

Pin exact versions for all production dependencies. Forces deliberate updates, not automatic ones.
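npm can enforce this automatically: with the `save-exact` setting enabled, every `npm install <pkg>` writes `1.14.0` instead of `^1.14.0` into package.json. Drop this into a project-level `.npmrc`:

```ini
save-exact=true
```

This removes the human step of remembering to strip the caret, which is exactly where range specifiers sneak back in.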

2. Provenance checks in CI/CD Reject packages without OIDC provenance metadata. npm can verify registry signatures and provenance attestations for every package in your tree:

npm audit signatures

Make it mandatory for critical packages.
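The same signal is visible in registry metadata: for versions published with provenance, the packument (the JSON served at https://registry.npmjs.org/&lt;name&gt;) carries a `dist.attestations` entry. A monitoring sketch can flag versions that lack it — the field name reflects the public registry's current responses, so treat it as an assumption to verify against your own registry:

```javascript
// Sketch: given a registry packument, list versions published without
// provenance attestations. Assumes the public registry's dist.attestations
// field, which provenance-enabled publishes populate.
function versionsWithoutProvenance(packument) {
  return Object.entries(packument.versions || {})
    .filter(([, meta]) => !(meta.dist && meta.dist.attestations))
    .map(([version]) => version);
}

// Usage against a trimmed packument:
const packument = {
  versions: {
    "1.14.0": { dist: { attestations: { url: "https://registry.npmjs.org/..." } } },
    "1.13.5": { dist: { tarball: "https://registry.npmjs.org/..." } }, // bare token publish
  },
};
console.log(versionsWithoutProvenance(packument)); // ["1.13.5"]
```

Wire this into a cron job that fetches packuments for your critical dependencies and you have a rough version of the real-time monitoring described below.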

3. Disable lifecycle scripts by default Use pnpm’s security defaults:

pnpm install --ignore-scripts

Then explicitly allow scripts only for packages you trust. This single change would have prevented the axios RAT from executing.
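pnpm pairs well with an explicit allowlist: under the `pnpm.onlyBuiltDependencies` field in package.json, only the named packages are permitted to run build/lifecycle scripts. The field name is as documented by pnpm; `esbuild` here is just a placeholder for a package you've actually vetted:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

With this in place, a surprise dependency like plain-crypto-js never gets the chance to run its postinstall hook.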

4. Real-time provenance monitoring Set up automated alerts for:

  - New versions of critical dependencies published without provenance metadata
  - Publishes that bypass a package's usual CI/CD pipeline
  - New dependencies appearing in patch or minor releases

5. Short-lived, scoped npm tokens Replace long-lived tokens with automation-scoped, short-lived credentials. Rotate them regularly. A leaked token from 2020 should be useless in 2026.

6. Software Bill of Materials (SBOM) Know what’s in your dependencies. Tools like cyclonedx or syft generate SBOMs automatically. Monitor them for unexpected changes.

7. Network segmentation Isolate build machines. If a CI/CD system is compromised, it shouldn’t have direct access to production infrastructure.

For open source maintainers specifically

  - Enable 2FA on npm and GitHub, with hardware keys if possible
  - Replace long-lived publish tokens with short-lived, scoped credentials
  - Publish only through CI/CD with provenance enabled, never directly from a laptop

The principle underneath all of this

Every defense here is based on the same idea: Make attacks slower and noisier.

The axios attack succeeded because:

  - A single long-lived token was enough to publish
  - A direct publish outside CI/CD raised no alarms
  - The postinstall hook executed silently on every install
  - Nothing required provenance, so its absence wasn't treated as a signal

If you add friction at every step, attackers move on to easier targets. They always do.

Implementation priority

  1. This week: Pin all dependency versions
  2. This month: Enable provenance checks in CI/CD
  3. This quarter: Disable lifecycle scripts by default and implement provenance monitoring
  4. This year: Migrate to short-lived tokens and implement SBOM tracking

You can’t prevent all attacks. But you can prevent the easy ones.

8. The Uncomfortable Truth: Can We Even Trust npm?

After today, that’s the question developers are asking.

The technical answer is: Yes, with conditions.

npm isn’t broken. The infrastructure works. Provenance checks work. SLSA attestations work. The problem is that none of these are mandatory, and most people aren’t using them.

The practical answer is more complicated.

npm is:

  - Critical infrastructure for millions of developers
  - A free service operated by a commercial company (GitHub, owned by Microsoft)
  - Expected to police millions of packages in real time

That’s a hard position to be in. It also makes them a target.

What needs to change at npm

  1. Provenance should be mandatory for packages above a download threshold (top 1,000, top 10,000, whatever npm decides). No exceptions.
  2. Lifecycle scripts should be restricted by default. Postinstall hooks are powerful and dangerous. They should require explicit opt-in, not implicit permission.
  3. Account security should be mandatory for high-impact packages. 2FA minimum. Regular token rotation. Account activity monitoring.
  4. Detection should be automated. The gap between “suspicious package published” and “security team notified” should be minutes, not hours.

None of this is technically hard. It’s a policy decision.

The real problem: Incentives

npm makes money when developers use it. They lose money when they add friction. Mandatory provenance checks create friction. Restricting lifecycle scripts creates friction. These decisions are business decisions, not technical ones.

The axios compromise might finally change those incentives. Companies and organizations now have quantifiable losses — credential rotation, system remediation, incident response, regulatory notifications. When the cost of a supply chain attack exceeds the cost of prevention, change happens.

What enterprises should do

If you’re managing a large organization:

  - Front the public registry with an internal proxy that quarantines newly published versions
  - Make provenance verification a blocking CI gate for critical dependencies
  - Treat dependency updates as code changes that require review

What open source needs

This attack isn’t npm’s fault in isolation. It’s a systemic problem:

  - Critical packages are maintained by unpaid volunteers
  - The security controls exist but remain optional
  - Detection reacts in hours while an attack only needs minutes

We got 20 years of “free security” because thousands of volunteers worked in obscurity. That era is ending. Supply chain attacks are the cost of that model.

The future

In 6 months, expect:

  - Provenance requirements for the most-downloaded packages
  - Registries and package managers restricting lifecycle scripts by default
  - More enterprises fronting npm with quarantining internal proxies

The axios attack is data point #N in a series. It won’t be the last.

The real takeaway

npm didn’t fail. Detection infrastructure didn’t fail. The ecosystem failed because we built critical infrastructure on a foundation of unpaid labor without mandatory security practices.

That’s not a technical problem. That’s a structural one. And it requires structural change.



Axios Just Poisoned 100M Installations: Here’s Your 60-Minute Response Plan was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.

