The $10 Billion Startup Aiming to Automate White-Collar Jobs with AI


Tasha Kozak is a social worker for Hillsborough County Public Schools in Tampa. She has spent years doing the kind of work that defies easy description: tracking down housing resources for families living in cars, calling mothers every three or four days to check in, watching children whose grades had been slipping begin to stabilise once their home life did. “I saw the mom got her glow back,” she said of one family she had helped find stable housing after months of sustained effort.

It is exactly the kind of professional knowledge — the ability to read a family’s emotional state, to know when to call and when to give space, to navigate bureaucratic systems while maintaining genuine human connection — that Mercor wants to capture, codify, and use to train artificial intelligence.

Mercor is a San Francisco-based AI company valued at $10 billion. It hires lawyers, doctors, bankers, journalists, engineers, social workers, and specialists from dozens of other professional fields through LinkedIn and its own platform — not to do their jobs, but to teach AI how to do those jobs instead. The company’s co-founders, Brendan Foody, Adarsh Hiremath, and Surya Midha, are 22 years old. They are the youngest self-made billionaires in the world. None of them, before starting Mercor, had ever held a conventional full-time job.

The Accidental Empire

The story of Mercor’s founding is told with the kind of clean narrative arc that venture capital mythology tends to produce, but the underlying mechanism is genuinely interesting. In 2023, Foody — then a sophomore at Georgetown — skipped his final exams to pursue a business idea he had encountered at a hackathon in São Paulo: match companies in the United States with skilled engineers abroad, handle the logistics, and take a cut of each deal. Within nine months, he and his co-founders had built a $1 million annual revenue run rate. Two years later, that figure has surpassed $500 million.

The pivot from recruiting to AI training happened almost accidentally. As Mercor built its database of expert professionals to match with companies, AI companies noticed the asset it had inadvertently created: a vetted network of domain specialists who could evaluate AI outputs and provide the high-quality human feedback that makes large language models meaningfully better at real-world tasks. OpenAI and Anthropic became customers. The business model transformed.

The process Mercor now operates at scale is called reinforcement learning from human feedback, or RLHF. When an AI model produces a response — a legal analysis, a medical assessment, a financial memo — human experts evaluate its quality using detailed rubrics. Those evaluations feed back into the model’s training. The model gets better at producing outputs that meet the standards its human evaluators have defined. What those evaluators are doing, in the most direct terms, is showing AI how professionals in their field think, reason, and decide.
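The evaluation step described above can be sketched in miniature. The snippet below is a hypothetical illustration, not Mercor's or any lab's actual pipeline: an expert rates a model's output against a weighted rubric, and the ratings are combined into a single reward score of the kind that RLHF training optimises toward. All names (`RubricItem`, `score_response`, the rubric criteria) are invented for this example.

```python
# Hypothetical sketch of rubric-based expert evaluation in an RLHF loop.
# Real pipelines are far more involved; this only shows the core idea:
# per-criterion human ratings collapse into one scalar reward signal.

from dataclasses import dataclass


@dataclass
class RubricItem:
    criterion: str
    weight: float  # relative importance of this criterion


def score_response(rubric: list[RubricItem],
                   ratings: dict[str, float]) -> float:
    """Combine per-criterion expert ratings (each 0.0-1.0) into one
    weighted score, normalised by the total rubric weight."""
    total_weight = sum(item.weight for item in rubric)
    weighted = sum(item.weight * ratings[item.criterion] for item in rubric)
    return weighted / total_weight


# Example: an expert grades a model-drafted legal memo.
rubric = [
    RubricItem("cites relevant precedent", 0.5),
    RubricItem("identifies key risks", 0.3),
    RubricItem("clear structure", 0.2),
]
ratings = {
    "cites relevant precedent": 0.8,
    "identifies key risks": 0.6,
    "clear structure": 1.0,
}

reward = score_response(rubric, ratings)  # 0.78
# In training, this scalar would steer the model toward outputs
# the human evaluators rate highly, raising the quality floor
# with each round of feedback.
```

The design choice worth noticing is the one Foody's comedy anecdote later exposes: a weighted rubric assumes the criteria, and the ratings against them, are stable across evaluators. For codifiable professional work they largely are; for subjective judgment they are not.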

The contractors who perform this work are paid between $10 and $150 per hour depending on their specialty, with finance experts at the top of the range and bilingual generalists at the bottom. Mercor’s website claims an average hourly rate of $86, with about $2 million paid out to experts daily. The company says it manages approximately 30,000 contractors, of whom 80% are US-based. As of October 2025, its customers include OpenAI and Anthropic — the two companies whose AI systems are most aggressively displacing the kinds of professional work Mercor’s contractors are paid to teach those systems to do.

“They Were Cracking All These Jokes”

What Mercor’s CEO says about the limits of what can be captured reveals as much as what he says about what can. Foody told CNN that not everything can be taught, and “the more subjective the task, the more difficult it is for AI to master.” His example is comedy. Mercor tried to train an AI model to be funnier by hiring comedians from the Harvard Lampoon to write jokes and develop rubrics for what makes content humorous. “The problem,” Foody noted, “is one that’s obvious to humans but not so much to machines: People have different opinions on what’s funny.”

It is a revealing anecdote — not because comedy is particularly important to enterprise AI, but because it illustrates the gap between what professional expertise looks like as a checklist and what it feels like as practice. The social worker knows when to call. The comedian knows when to pause. The doctor knows when a symptom combination is worth investigating despite normal readings. These are the things that resist rubricisation — the embodied, contextual, accumulated knowledge that professionals develop over careers of reading situations that never quite repeat themselves.

Foody’s view is that these harder-to-teach capacities will remain human domains, at least for the foreseeable future. What he expects AI to absorb is the more codifiable part of professional work — the drafting, the research, the initial analysis, the standardised reasoning that makes up, in his estimate, roughly two-thirds of knowledge work. “We’ll automate maybe two-thirds of knowledge work,” he has said. “And that’ll be incredible, because it lets us do things like cure cancer and go to Mars.”

The Ethics of the Arrangement

Mercor occupies an unusual position in the AI economy — and an uncomfortable one, depending on your perspective. Its contractors are paid to do work that, if Mercor’s mission succeeds, will eliminate the need for their paid expertise. The Wall Street Journal found workers who were candid about this irony: “I joked with my friends I’m training AI to take my job someday,” said one contractor named Katie Williams, who has a background in news and social-media marketing.

Some analysts and workers argue this represents a new form of economic extraction: companies profit by paying professionals gig rates to provide the domain knowledge that will be used to build systems that undercut those same professionals’ market value. Others point out that the jobs currently being created at Mercor are real and remunerative — doctors earning $170 an hour, finance professionals earning $150, with flexible hours and no commute. The pay at the top end compares favourably to many full-time positions in the same fields.

The harder critique is systemic. Mercor’s AI Productivity Index — a benchmarking system that grades AI models on their performance across medicine, management consulting, investment banking, and law — found that even the most advanced models, including GPT-5, still failed to meet what the company calls the “production bar” for fully autonomous professional work. That bar is moving. Each round of human evaluation raises the quality floor. The gap narrows.

Foody’s framing of this process is consistently optimistic: AI is not eliminating labour but reallocating it, moving human effort up the value chain as machines absorb the repetitive and formulaic. This is the most generous reading of what Mercor is building. The less generous reading is that it is building, at industrial scale, the infrastructure for professional displacement, and doing it by recruiting the professionals who will be displaced to be the architects of their own obsolescence.

The two readings are not mutually exclusive. Both may be true simultaneously, which is what makes Mercor one of the more genuinely consequential companies of the AI era — not because its founders intended to grapple with these questions, but because the business they built places those questions at the centre of everything it does.

The People Who Evaluate the Evaluators

The Bloomberg Businessweek profile that brought Mercor to wider attention opens with Tasha Kozak because her work — the work of a school social worker navigating family crisis, one phone call at a time — is in some ways the ultimate test case for what Mercor believes AI can ultimately learn. Not the paperwork or the case notes or the resource directories. Those are already being automated. The question is whether AI can learn the part of Kozak’s job that she herself would struggle to explain: the judgment call, at 7 pm on a Thursday, about which family needs a call tonight and which one can wait until morning.

Mercor is betting, at $10 billion in valuation, that the answer is eventually yes. The people doing the training — the lawyers, the doctors, the bankers, the journalists — have a stake in whether that bet pays off. So does everyone else.

Written by Shalin Soni, CMA specializing in financial analysis, global markets, and corporate strategy, with hands-on experience in financial planning and analytical decision-making.


Source: Based on Bloomberg and publicly available information.
