
(Image: reve.art. The lobster has been adopted as the mascot for the personal agentic AI industry)
There is a surprising and somewhat unsettling story that has tumbled out of the world of AI, specifically “agentic” AI, which is a spanking new and revolutionary addition to your good ol’ chatbot from OpenAI or Anthropic.
I will get back to a tighter definition of agentic AI in a moment for those who are not in the weeds, but the superficial primer is that agentic AI doesn’t merely answer your question or render an image or make a video, but also goes off into the real world and does stuff for you — comparing prices, booking tickets, making payments, sending WhatsApps, scouring your machine for lost messages and automatically doing tasks for you. Now or at some future scheduled date. Like this:
“Please send my mother-in-law an email birthday message every year, one that is literary, witty and light, and render a flower image in the email, something from a European summer garden, with a one-paragraph interesting description of the flower. You do not need me to check it, just do it.”
The problem with this new “go and do stuff for me in the real world” capability is that AI can’t do everything, because it is, er, a set of ones and zeroes. It is not a human being, or even a reasonable facsimile (at least not yet).
Which is how we get this — a company called RentAHuman was launched in February 2026 by software engineer Alexander Liteplo and his cofounder Patricia Tani, who built it as a marketplace where AI agents can hire humans for physical-world tasks.
How is RentAHuman doing? I set off in search of some numbers more recent than the ones reported by Wired in February. Over 700,000 humans have signed up (as of April 7). There are over 11,000 tasks or “bounties” on offer. In February, 5,500 had been completed, but no more recent completion figures have been published. This is not nothing — and agentic AI has only just started its growth curve.
But there are obviously many open questions. The payment from the AI to the human is held in escrow until the task is completed. What if there is a dispute about that? What if the AI thinks the task was poorly executed? Whom does one take to court? And what if an agent hires some human to beat up another human? Is the agentic AI liable for anything, and how can it possibly be sanctioned in court (particularly if it wrote itself, which is actually a thing)? No one has yet begun to grapple with insurance, injury, fraud and accountability.
I glossed over the definition of agentic AI in the telling of this story. For those readers who do not follow tech too closely: AI’s next “ChatGPT moment” came on November 25, 2025, when an Australian developer named Peter Steinberger unveiled “Clawd”, a free personal AI agent for the rest of us (well, for techies, to be accurate). It was later rereleased as OpenClaw after threats from Anthropic, who were irritated by the name’s proximity to that of their chatbot, Claude.
So how does it work? Excuse the minor tech nerdiness of this explanation. There are four interacting capabilities.
The agent on your computer has access to an LLM like ChatGPT (so you can ask the agent to do stuff in your home language).
It has access to your personal machine’s “shell” (a command-line interface that gives it control of all of your computer’s innards and apps — browsers, passwords, messaging systems, photos, memory, CPU — everything).
It has a local file system that records everything your agent ever does for you, for all time, in order to build up a continuously improving picture of who you are (you can’t do this with a commercial LLM).
Finally, a scheduler that allows the agent to be alive at all times doing multiple tasks, continuously and tirelessly. This is critical and underestimated. The agent will keep trying to solve a problem, day in day out. It will try everything in the globally accessible corpus of AI knowledge. It will continually refine and improve its attempts to do so. It will never give up.
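For the technically curious, the four capabilities above can be sketched as a simple loop. This is a hypothetical illustration, not OpenClaw’s actual code or API — every function and file name here is invented for the example, and the “LLM” is a stub that just returns a shell command.

```python
# Hypothetical sketch of a personal agent combining the four capabilities
# described above. All names are illustrative, not OpenClaw's real API.
import json
import subprocess
import time
from pathlib import Path

MEMORY_FILE = Path("agent_memory.jsonl")  # capability 3: permanent local record

def ask_llm(prompt: str) -> str:
    """Capability 1: natural language in. Stubbed here; a real agent
    would call an LLM API and get back a plan or a command."""
    return f"echo 'would handle: {prompt}'"

def run_in_shell(command: str) -> str:
    """Capability 2: the agent acts on the machine through the shell."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

def remember(task: str, outcome: str) -> None:
    """Capability 3: append every action to the local history file."""
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"task": task, "outcome": outcome,
                            "ts": time.time()}) + "\n")

def agent_step(task: str) -> str:
    command = ask_llm(task)          # understand the request
    outcome = run_in_shell(command)  # act in the (digital) world
    remember(task, outcome)          # build up a picture of the user
    return outcome

def scheduler(tasks: list[str], rounds: int = 1, pause: float = 0.0) -> None:
    """Capability 4: keep the agent alive, retrying its task list
    round after round, tirelessly."""
    for _ in range(rounds):
        for task in tasks:
            agent_step(task)
        time.sleep(pause)

scheduler(["check the electricity bill"])
print(MEMORY_FILE.read_text())
```

The point of the sketch is how little glue is needed: an LLM to interpret, a shell to act, a file to remember, and a loop to persist.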
That’s it. There is absolutely nothing new in any of these capabilities on its own; Steinberger was simply the first to put them all together. And for many people in AI, it is as though the light came on for the first time.
And so you wake up to find that your personal agent has already compared insurance quotes, rescheduled a delayed lunch, paid the electricity bill, argued with your mobile provider, ordered a birthday present for your niece, and hired a teenager on a bicycle to collect your forgotten laptop charger from home and deliver it to the office. At work, another agent has scanned overnight sales, flagged a supplier problem at the harbour, drafted three possible responses for your boss, and quietly booked a meeting for 11.30 because it noticed everybody was free.
And for the stuff it can’t do, it can rent a human or three.
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg and a partner at Bridge Capital and a columnist-at-large at Daily Maverick, Daily Friend and Currency News. His new book “It’s Mine: How the Crypto Industry is Redefining Ownership” is published by Maverick451 in SA and Legend Times Group in UK/EU, available now.
Renting humans — agentic AI’s external labour pool was originally published in DataDrivenInvestor on Medium.