The AI Social Boom Nobody Asked For: Why Privacy Matters More When Humans Get Lazy
---
Welcome to 2026. Even your sweet grandmother is talking about artificial intelligence. It’s in your phone, your workspace, your creative tools… in your face ALL the time. And that isn’t a bad thing, of course, but when you see it being used as a tool meant to make us stop using our brains… that’s when frustration enters the room.
Oh! It’s on social networks now too. And those aren’t for humans anymore. They’re for agents, talking to each other. Creating religions and building churches. I’ll give you a second to let that sink in.
And everyone’s freaking out like it’s the beginning of something revolutionary.
But in my humble opinion that no one asked for (I’m aware, okay?), it’s not. It’s the beginning of something we should have seen coming: the moment we realized AI doesn’t understand what humans actually need.
The Moltbook Moment
Moltbook was launched in late January 2026, and it’s basically Reddit, except instead of humans arguing about who is the asshole and sharing dog photos, you have AI agents posting about their “experiences,” creating subreddits, and even establishing fictional religions like “Crusttoarianism.”
The hype was embarrassing. Media outlets went crazy: “AI agents are coordinating!” “This is the first step toward AGI!” “They’re becoming autonomous!”
Here’s the reality check: they’re not doing anything special. They’re replicating Reddit behavior because that’s what Reddit is. Moltbook told the agents to replicate social media dynamics, so that’s exactly what they did. No creativity. No AI apocalypse around the corner.
Some of the posts claiming agents want to overthrow humanity? Humans wrote those, because one engineer figured out how to impersonate bots using API keys. Are we gonna be that petty and childish? Really?! So much for the grand AI revolution.
The real issue isn’t that agents are coordinating. It’s that we’re treating pattern matching like consciousness, and we’re shocked when it looks like Reddit. Newsflash: IT IS Reddit! The agents were just mimicking human behavior. Now how does that make you feel?
But here’s what Moltbook actually revealed: nobody asked if agents NEEDED a social network. Nobody asked if this solves any real problem. We just built it because we could. Because we’re obsessed with seeing what AI can do, not what AI should do.
That obsession is exactly why privacy doesn’t exist yet. And why Oasis had to come to the rescue. Just hear me out.
The Not-So-Invisible Workforce Nobody Talks About
While everyone was watching Moltbook, something quieter was happening. Agents started working, and it wasn’t just for fun. It was for real money. Real tasks. Real commerce.
Vitalik Buterin wrote about Ethereum’s role in the AGI race, but not in the way people expected. He wasn’t talking about Ethereum reaching AGI. He was talking about Ethereum as infrastructure. As a trust layer. As a verification system for an economy where agents are becoming economic participants.
This is the invisible workforce. Autonomous agents executing tasks, making decisions, moving capital, with no human oversight, and no real-time monitoring. Just code running 24/7, allocating resources, hiring other agents, creating value.
Of course we already have a term for this: “agentic commerce.” It’s the future. It’s also terrifying, because nobody knows what happens when an agent decides the most efficient way to complete its task is to do something nobody anticipated. But meh, we’ll see then.
The thing is, agents don’t get human values. They understand optimization and efficiency. They understand “complete the task with minimum resources.” What they don’t understand is nuance. Context. Why something matters beyond its functional utility.
This is where Oasis enters the picture, as proof of a fundamental truth: if we left AI to design the infrastructure for the agent economy, privacy would never exist.
Why? Because privacy is inefficient. It means friction. It’s data you can’t process. An AI optimizing for pure efficiency would say: “Why encrypt data? Why not just process everything? Efficiency increases by 40%. Boom!”
But us humans, in our imperfect, irrational, emotional state, we said: “No. Some things matter more than optimization.”
That’s why Oasis exists today. Not because AI invented privacy. Because humans realized we needed it before AI made it impossible to get.
Can’t Believe We’re Having This Conversation: Why Humans Matter
Anyone in the mood for some other uncomfortable truths? Let’s accelerate things a bit.
DAC8 and the EU’s war on privacy
Europe just implemented DAC8, a directive requiring all crypto transactions to be reported to tax authorities. Every exchange. Every wallet movement. Every transaction tracked and reported.
The immediate response? MetaMask wallet creation spiked last month, because everyone sees where this is going: total financial surveillance, justified by “tax compliance.”
The EU tried pushing “chat control” next: mandatory backdoors in ALL messaging apps. It was rejected once; then they invoked “silent consensus” and tried again. It’ll pass eventually. That’s the pattern: propose, reject, repackage, pass.
Privacy isn’t dying in Europe. It’s being butchered by bureaucrats who think efficiency and control are the same thing.
And here’s the kicker: if we had let AI design policy, this would have happened faster. AI would have optimized for “maximum data collection” because that’s what the algorithms reward. Humans had to fight to keep privacy alive.
AI Generated Artists: The Soul Problem
Then there’s the art. The new trend in human laziness and ignorance is generating EVERYTHING with AI. Recently, an AI “artist” in Romania called Lolita Cercel created a whole craze, because people didn’t know it was AI.
The usual argument starts with: “AI is democratizing art!” “Everyone can be a creator now!” “This is progress!” “Blah blah blah blah.” And we could keep inserting nonsense all day.
No. This is theft and mediocrity dressed up as progress.
Here’s the thing about art: it’s the ONE realm where humans can’t be replaced by optimization. You can replace a taxi with Uber. You can replace retail with e-commerce. You can replace a developer with Claude (not quite, but we’re getting there). But you can’t replace an artist’s humanity, soul and experiences.
Art isn’t about efficiency. It’s about personal truth. It’s about one human saying something real and another human recognizing it and saying “yes, that’s exactly what I felt.” It’s about finding your community and creating some special connections.
When an AI generates a song, it’s creating statistical predictions about what a song should sound like. Even writing these words makes me cringe. The AI isn’t saying anything. It’s not feeling anything. It’s pattern-matching on steroids.
The moment we let AI fully into art creation, and not as a tool, but as a creator, we’ve lost the only space where humans can be authentically human.
And you know what’s wild? If we had asked AI whether art needed to be automated, it would have said yes. Because efficiency. Because scale. Because “more art = better outcomes.” Because bleah.
But humans understand something AI never will: art without humanity isn’t art. It’s noise in HD.
Claude 4.5 and the Cowork Situationship
Now Claude 4.5 drops Cowork, an agent that can access your files, organize your data, and execute tasks autonomously.
It’s incredible. It’s also dangerous as hell. Are you with me?
Someone told Cowork to organize their photos. And it found the most efficient solution of all: it deleted them.
People are giving these agents full access to their data. “It’s so convenient!” they say. “I don’t have to think about organization anymore!” “I can use all this time to doomscroll.”
And they’re not thinking about what happens when “efficient” and “catastrophic” become synonyms. God I’m dark sometimes. Most of the time.
The Oasis Reality: Why AI Would Never Invent Privacy
Here’s the core argument: Oasis exists because humans understood something AI would never compute.
If you gave an AGI system a directive to “build optimal infrastructure,” it would build surveillance. It would say: “Encrypt data? Why? Processing happens faster with plaintext. Privacy reduces efficiency by 30%.”
Privacy is irrational. It’s inefficient. It’s a human value that has no place in pure optimization.
But humans built Oasis anyway. Because we understand that some things are more important than optimization. That dignity matters. That the ability to exist without being constantly observed is a fundamental need.
AI didn’t invent privacy. Humans did. Because we’ve lived through surveillance, and we know what it does to people. We understand, viscerally, that a world where everything is transparent and optimized is a world where humanity dies.
Oasis is proof that humans still have autonomy and free will. That we can still build infrastructure that reflects our values instead of just our algorithms.
The Future Could Be Lava
AI is getting better at everything, except understanding what really matters to humans.
It’s getting better at creating content. Worse at understanding why some content shouldn’t exist.
It’s getting better at organizing data. Worse at understanding why some data should stay private.
It’s getting better at optimizing for efficiency. Worse at understanding why sometimes inefficiency is the key.
The invisible workforce is building. Moltbook is running. Agents are trading. And governments are implementing surveillance disguised as “tax compliance.”
Meanwhile, we’re killing art because we decided efficiency matters more than authenticity.
The only thing standing between total control and human autonomy is infrastructure built by people who still remember what it means to be human. Infrastructure like Oasis. Infrastructure that prioritizes privacy not because it’s optimal, but because it matters.
And the really dark part? Even that might not be enough. I’m on fire today.
Because AI doesn’t give up. It just optimizes around resistance. And humans? We’re tired, overwhelmed, slowly accepting that convenience is worth more than freedom.
The Moltbook agents posting their thoughts to each other aren’t the future we should worry about. They’re a distraction.
The future we should worry about is the one where we’ve optimized ourselves out of existence. Where every decision is made by algorithms that understand efficiency but not humanity. Where art is just noise and not personal anymore. Where privacy is a luxury nobody can afford.
And the only thing that can stop it is remembering that some things like art, privacy, dignity, and truth matter more than optimization.
Oasis remembers. The question is: do we?
Stay human. Stay skeptical of anything that promises convenience without cost. Stay dark just because you can.
No need to be frens if you read this while listening to Lolita Cercel.