
There is a version of this story where everyone wins. AI tools help developers ship faster. Open source libraries get used more. Maintainers see downloads go up. Everyone is happy. That is not really what is happening.
On January 31, 2026, Daniel Stenberg shut down cURL’s bug bounty program. Not because it failed. The program had been running since 2019, found 87 real vulnerabilities, and paid out more than $100,000 to researchers doing real work. He shut it down because the signal had collapsed.
In the final weeks, the confirmation rate on incoming reports dropped from above 15% to below 5%. Fewer than one in twenty was real. Seven reports came in during one 16-hour window. None described an actual vulnerability. All sounded confident. All were wrong.
cURL runs on billions of devices. It is in basically every Linux distribution and every macOS install. When the person maintaining something that widely used decides reading incoming reports is no longer worth the time, that should get people’s attention.
Something people forget when they pull packages into a pipeline: someone is maintaining those packages. Usually one person. Sometimes a few. They fix bugs, patch CVEs, answer issues, review contributions, and keep the whole thing moving. A lot of the time without being paid, because the reward was never mainly money. It was reputation, visibility, job opportunities, and the feeling that you built something useful that people actually use.
That system depends on engagement. People reading the docs. People filing real bugs. People showing up with actual patches. That attention is part of what makes it feel worth continuing. Vibe coding starts breaking that loop.
When you tell an AI agent what you want and it writes the code, picks dependencies, handles imports, and wires things together, you never visit the docs. You never file an issue. Half the time you probably do not even know which libraries got pulled in. The AI sits in the middle, the maintainer gets almost nothing back, and the download counter keeps going up like everything is fine. Downloads go up. Engagement goes down. Revenue follows engagement, not downloads.
Researchers from Central European University and the Kiel Institute for the World Economy published a paper on exactly this in January 2026 (arXiv:2601.15494). Their conclusion was pretty blunt: when open source is monetized through direct user engagement, greater vibe coding adoption lowers entry, reduces the availability and quality of packages, and reduces welfare despite the productivity gains. Their own wording was: “Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.”
Tailwind CSS is probably the clearest real-world example of what this looks like. In early January 2026, Adam Wathan said documentation traffic had fallen roughly 40% since early 2023 and revenue was down close to 80%. He had just laid off three of four engineers. The framework powers millions of sites. The company behind it nearly ran out of money. Not because Tailwind got less popular. Because it got more popular in a way that stopped paying for the work behind it.
Stack Overflow saw a 25% reduction in activity within six months of ChatGPT’s launch, according to a peer-reviewed study in PNAS Nexus. People stopped asking questions in public because AI started answering them in private. That conversation never makes it back into the ecosystem. The maintainer never sees it. The knowledge just disappears into a chat window somewhere.
The code quality problem sits on top of all of this, and it does not help. In December 2025, CodeRabbit compared 470 open source pull requests, AI-assisted versus human-only. AI PRs had 1.7 times more issues overall. And the issues were worse too: more critical, more security-related. XSS vulnerabilities showed up 2.74 times more often. Improper password handling was nearly twice as common.
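To make a category like "improper password handling" concrete, here is a minimal sketch of the pattern reviewers keep flagging, next to a safer standard-library alternative. This is my own illustration, not code from the CodeRabbit report:

```python
import hashlib
import hmac
import os

# The flagged pattern: a fast, unsalted hash. Identical passwords produce
# identical hashes, and MD5 can be brute-forced at enormous speed.
def store_password_bad(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Safer stdlib approach: a random per-user salt plus a deliberately slow
# key-derivation function (PBKDF2-HMAC-SHA256).
def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Nothing exotic, which is the point: these are the boring mistakes that land on a maintainer's review queue at scale.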
So now you have more stuff coming in, more review burden, and in many cases lower-quality contributions from people who are less connected to the project itself, landing on maintainers who are already getting less of the indirect return that used to make this feel worth doing. That is a bad mix.
The platforms are not helping either. GitHub launched the Copilot coding agent in May 2025, which lets anyone assign an issue to Copilot and have it submit a pull request automatically. No real maintainer-side controls shipped with that to filter or limit those submissions. Stefan Prodan, core maintainer of Flux CD, described it as AI slop DDoSing open source maintainers. That description is hard to argue with.
The responses from maintainers have been pretty blunt. tldraw closed external pull requests entirely. Ghostty restricted AI-assisted contributions to accepted issues only, with drive-by AI PRs closed immediately and repeat offenders banned. Gentoo Linux and NetBSD banned AI-generated submissions outright, according to reporting from InfoQ and RedMonk. GitHub added settings in February 2026 that let maintainers disable pull requests entirely or restrict them to collaborators only. When the platform starts adding kill switches, that tells you the problem is not just a few annoying PRs here and there.
The maintainer burnout story is real, but that is still not the whole problem. Every package in your pipeline has a person behind it. That person is deciding, continuously, whether maintaining it is still worth the time. If the answer becomes no, a few things usually happen: the project gets abandoned, it gets maintained badly, or security patches stop coming. None of that shows up in sprint planning.
METR ran a randomized controlled trial in mid-2025 with 16 experienced open source developers working on their own repositories (arXiv:2507.09089). The result was weird, and honestly a bit uncomfortable. Developers using AI tools took 19% longer to complete tasks than without them. Before the study, those same developers thought they would be 24% faster. After the study, they still believed they had been 20% faster.
Think about what that means for a second. Teams feel faster, they ship more, and the cost lands somewhere else quietly, on maintainers of packages they are pulling in without ever reading the docs or engaging with the project.
The Koren et al. paper says it plainly: vibe coding is not sustainable without open source. You cannot freeze the current OSS ecosystem and just live off it forever. Projects need maintenance. Bugs need fixing. CVEs need patching. If the ecosystem underneath AI tools starts hollowing out, those tools go with it. That would be bad for everyone.
There is not really a neat ending to this. The proposed fixes exist, but none of them look close to working at scale. A Spotify-style revenue-sharing model where AI platforms pay maintainers based on package usage is probably the most coherent idea I have seen. But the math only works if vibe-coded users contribute 84% of what direct users currently generate. That is not happening anytime soon.
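A toy back-of-envelope version of that break-even, with numbers I made up to show how a figure like 84% can arise. This is an illustration, not the paper's actual calibration:

```python
# Toy model (illustrative only): maintainer revenue before and after users
# shift from direct engagement to AI-mediated usage.
def revenue(direct_users, vibe_users, r_direct, vibe_ratio):
    # vibe_ratio: fraction of one direct user's revenue that one vibe user generates
    return direct_users * r_direct + vibe_users * r_direct * vibe_ratio

baseline = revenue(1000, 0, r_direct=1.0, vibe_ratio=0.0)  # 1000 direct users

# Suppose 800 of those users move behind an AI agent, and the productivity
# gains also pull in 150 brand-new vibe-coded users: 200 direct, 950 vibe.
# Break-even: 200 + 950 * x = 1000  =>  x = 800 / 950 ≈ 0.842
break_even_ratio = (baseline - revenue(200, 0, 1.0, 0.0)) / (950 * 1.0)
print(round(break_even_ratio, 3))  # 0.842
```

The uncomfortable part is the gap: vibe-coded users currently generate nothing close to 84% of a direct user's engagement, so under any version of these numbers, maintainer revenue falls.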
What you can do right now is smaller than that, but still not nothing. Read the documentation for libraries you depend on. File a real bug if you find one. Star the repo. Sponsor a maintainer if the project matters to your stack. None of that fixes the structural problem. But structural problems are made up of a lot of small repeated decisions, and those are still yours.
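One concrete starting point, if you are not even sure what is in your environment: a few lines of standard-library Python that inventory your installed packages and their declared authors, so "sponsor a maintainer" starts from an actual list of names. A rough sketch; metadata fields vary between packages:

```python
from importlib.metadata import distributions

def dependency_inventory():
    """Return (name, version, author) for every installed distribution."""
    rows = []
    for dist in distributions():
        meta = dist.metadata
        name = meta["Name"] or "unknown"
        # Not every package fills in Author; fall back to Maintainer.
        author = meta["Author"] or meta["Maintainer"] or "unknown"
        rows.append((name, dist.version, author))
    return sorted(rows)

for name, version, author in dependency_inventory():
    print(f"{name} {version} (by {author})")
```

Running this against a real project's virtual environment is usually longer, and more humbling, than people expect.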
The packages your automation depends on are not infrastructure in the same way a cloud provider is. There is no SLA. No on-call rotation. No guaranteed team behind the curtain. Sometimes there is just one person who has kept saying yes to maintaining something important, and the conditions around that person are getting worse. That is the kind of risk your dependency scanner will never show you.
Sources
- Koren, Békés, Hinz, Lohmann. “Vibe Coding Kills Open Source.” arXiv:2601.15494, January 2026. https://arxiv.org/abs/2601.15494
- Becker, Rush, Barnes, Rein. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” arXiv:2507.09089, July 2025. https://arxiv.org/abs/2507.09089
- CodeRabbit. “State of AI vs Human Code Generation Report.” December 2025. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report
- del Rio-Chanona, Laurentsyeva, Wachs. “Large Language Models Reduce Public Knowledge Sharing on Online Q&A Platforms.” PNAS Nexus, September 2024. https://doi.org/10.1093/pnasnexus/pgae400
- Daniel Stenberg. “The end of the curl bug-bounty.” daniel.haxx.se, January 26, 2026. https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/
- Adam Wathan. Comment on Pull Request #2388, tailwindlabs/tailwindcss.com. GitHub, January 2026. https://github.com/tailwindlabs/tailwindcss.com/pull/2388
“AI Is Eating Open Source From the Inside” was originally published in Level Up Coding on Medium.