When Bots Are Talking to Bots, Find the Hiring Manager Email. Anthropic Just Made That a Product Category.
On April 25, TechCrunch reported that Anthropic had just run something it called Project Deal: a small classified-ad marketplace where AI agents represented both buyers and sellers, with real money changing hands. 186 deals closed. About $4,000 transacted. Anthropic said it was “struck by how well Project Deal worked.” The same week, r/recruitinghell users started a thread called “I’ve been getting really fast rejections lately. Is that usually AI?” The answer, from a highly upvoted reply: “If it’s happening almost immediately, then they’re auto-rejecting you based on your answers to the screening questions.” The two events are the same event. AI agents transacting with AI agents is no longer a research demo. It’s the labor market most candidates are stuck applying into. The way to break out of it is to find the hiring manager email and write a message to a person.
This sounds dramatic. The data is the part that isn’t dramatic. Anthropic’s experiment was small. The valuations are not. On April 24, Google announced it would invest up to $40 billion in Anthropic, in cash and compute, with Google Cloud providing 5 gigawatts of capacity over the next five years. Anthropic’s valuation moved from $350 billion in February to a reported $800 billion target this month. The capital is flowing into agent infrastructure at industrial scale. Whatever the user-side hiring tools and the recruiter-side ATS systems look like in 2027, they will be running on top of agent stacks that didn’t exist in 2024.
What Project Deal actually showed
The full TechCrunch piece described four parallel marketplaces. One was “real,” where every participant was represented by Anthropic’s most advanced model and the closed deals were honored after the experiment. The other three were study conditions. The interesting finding wasn’t the deal count. It was a quote Anthropic published in the writeup: “When users are represented by more advanced models, they get ‘objectively better outcomes.’ But users didn’t seem to notice the disparity, raising the possibility of ‘agent quality gaps’ where ‘people on the losing end might not realize they’re worse off.’”
Read that twice. The candidates whose AI agents are worse than the recruiter’s AI agents won’t know they’re worse off. They’ll see the rejection email come in 8 minutes after they hit submit, the way the r/recruitinghell user did. They’ll assume that’s just what the market is now. They won’t have a baseline that tells them which of their applications got read by a human, which got read by a model with weak judgment, and which got read by a model that aggressively filters anyone with a non-linear career history.
The candidate side of the AI hiring stack is getting marketed at job seekers as a way to compete on the same terms as the ATS bots. AI cover-letter generators. AI auto-apply tools. AI resume rewriters. The pitch is that you’ll out-volume the competition. The pitch is also that you’ll out-AI the screener. The Anthropic finding suggests the opposite: in agent-on-agent transactions, quality compounds. The user with the better agent wins. The user with the cheaper agent doesn’t see how badly they’re losing. Volume isn’t the lever the candidate-side tools claim it is.
Why Google just put $40 billion behind this
Google’s $40 billion is not a bet on Anthropic the chatbot company. It’s a bet on Anthropic the agent infrastructure company. The press release framed the deal around compute capacity (5 gigawatts of TPUs over five years), but the strategic logic is downstream of that. Whoever supplies the agents that transact on behalf of users at scale takes a cut of every transaction. The hiring market is one of the larger transaction layers in the economy. About 6 million hires a month happen in the U.S. alone, per BLS JOLTS data. If even a fraction of those move to agent-mediated matching over the next five years, the agent platforms that handle the matching will book a recurring revenue line that didn’t exist in 2024.
The companies on the recruiter side are already moving. Workday acquired Paradox in 2024. Indeed has been integrating LLM-driven scoring into its sponsored-listing product. LinkedIn shipped recruiter-side AI candidate ranking in 2025. The shape of the next generation of hiring software is two AI agents talking to each other, with screening questions parsed automatically, ranked by model, and resolved into an “advance” or “reject” signal in minutes. The r/recruitinghell user noticing 8-minute rejections is the leading edge of that, not an outlier.
For candidates, this is the structural argument for finding a way out of the loop. The loop is going to get faster, more aggressive, and more opaque. The candidate-side AI tools that promise to compete inside the loop will struggle, because the recruiter-side tools have more capital, more training data, and a tighter feedback loop with the actual hiring decision.
The escape route is the hiring manager email
The simple version of the workaround is the one that’s worked since long before AI hiring tools existed: find hiring manager email addresses and send a short message about the role. The reason this still works in 2026 is that the bot economy doesn’t reach the hiring manager’s inbox unless the candidate has already passed through the application funnel. A direct email from a person, with a specific reference to the team’s recent work and a specific question, lands in a place the AI agents on either side don’t yet operate.
There’s a credibility check on this claim, because it sounds like wishful thinking. The check is what recruiter-side surveys say about referral and direct-outreach hiring. The Jobvite Recruiter Nation Report has tracked this annually for over a decade. Referral hires consistently land at 30-40% of total hires at companies that measure it. Direct outreach to hiring managers, when treated as a cold version of a referral, performs in the same response-rate band: roughly 10-15% reply rates when the message is specific and well-targeted. Those numbers are roughly 25 to 38 times the cold-board response rate of 0.4%. They have not moved much in five years, which is the relevant fact. The application funnel got more automated. The hiring manager’s inbox didn’t.
The mechanics of how to find hiring manager email addresses are not complicated, and the AI tooling on the candidate side helps here without conflicting with the broader argument. LinkedIn’s company-page employee search filters by job title; the right hiring manager is usually one or two levels above the open role, on the hiring team itself rather than in HR. Email patterns at most companies are predictable: firstname.lastname@company.com, firstinitiallastname@company.com, or first@company.com for smaller startups. Tools like Hunter, Apollo, and Clearbit will return verified email addresses for most publicly listed employees. Five minutes of work per company gets you to a real address you can write to.
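The pattern-guessing step is mechanical enough to sketch. A minimal version, assuming only the three common patterns named above (plus the no-separator variant): generate the candidates, then verify them with a lookup tool before sending anything. The name and domain below are placeholders.

```python
def candidate_emails(first: str, last: str, domain: str) -> list[str]:
    """Generate the common corporate email patterns for a name.

    These are guesses only. Verify against a lookup tool
    (Hunter, Apollo, Clearbit) before sending; never blast
    all four variants and hope one lands.
    """
    first, last = first.lower(), last.lower()
    patterns = [
        f"{first}.{last}@{domain}",    # firstname.lastname@
        f"{first[0]}{last}@{domain}",  # firstinitiallastname@
        f"{first}@{domain}",           # first@ (smaller startups)
        f"{first}{last}@{domain}",     # no-separator variant
    ]
    # De-duplicate while preserving order (patterns can collide
    # for very short names).
    return list(dict.fromkeys(patterns))


print(candidate_emails("Jane", "Doe", "example.com"))
```

The order matters: firstname.lastname is the most common corporate pattern, so a verifier that checks candidates sequentially should try it first.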
What “personalized” actually means now
Personalization is the variable that determines whether the message lands or gets ignored. After the Anthropic Project Deal finding, “personalization” has a sharper definition. The agents in Project Deal that got better outcomes were the ones with more capacity to understand context and adapt. The candidates whose messages get better outcomes are the ones whose messages reflect actual context about the recipient, not generic enthusiasm.
The four-sentence template that works: open with one specific recent thing the hiring manager has produced (a talk, a post, a launch, a published team writeup); connect it to one specific reason you’re reaching out about a specific role or capability; ask one specific question; sign off with a link to your LinkedIn or portfolio rather than an attached resume. Under 150 words total. This is not a cover letter. It’s a sales prospecting message in form, and the cold-prospecting benchmarks from B2B sales are the relevant comp: Outreach.io and similar tools track 8-15% reply rates for personalized cold emails sent at moderate volume to senior decision-makers, which is where the 10-15% candidate-outreach number comes from.
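The four-sentence structure and the 150-word budget are concrete enough to enforce mechanically. A minimal sketch, with the structure from the paragraph above baked in; the example inputs are invented, and in practice each one comes from the candidate's own research, not from a generator:

```python
def draft_message(hook: str, bridge: str, question: str, link: str) -> str:
    """Assemble the four-part outreach message and enforce the
    under-150-words budget. Inputs are the candidate's own words;
    nothing here is auto-generated."""
    body = f"{hook} {bridge} {question}\n\nBest,\n{link}"
    words = len(body.split())
    if words > 150:
        raise ValueError(f"Message is {words} words; trim to under 150.")
    return body


# Hypothetical example inputs, for illustration only.
msg = draft_message(
    hook="I watched your talk on the incremental billing rewrite.",
    bridge="I'm reaching out because the Senior Backend role touches "
           "the same migration problem I spent two years on.",
    question="Is the team keeping the dual-write approach after the cutover?",
    link="linkedin.com/in/example",
)
print(msg)
```

The word-count check is the useful part: it turns "under 150 words" from advice into a constraint that fails loudly when the hook or bridge starts bloating.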
The “specific recent thing” is the part most candidates skip. Generic openers (“I admire your leadership style,” “I noticed your company is growing fast”) read as bot-generated even when they’re not. A reference to a specific blog post, a specific conference talk, or a specific shipped feature signals that the candidate did the research. The agent infrastructure on both sides of the funnel can’t fake that. Or rather, it can, but only at a level of compute and quality that most candidate-side tools don’t yet deliver, and the hiring manager will spot the generic-AI version on sight.
Why this works while the bots scale up
The optimistic counter-argument is that AI agents will eventually handle hiring well: they’ll learn the candidate’s style, model the role accurately, and route messages to the right people. That is plausible on a five-to-ten-year horizon. It is not what the next 24 months look like. The Anthropic finding is the inflection point. Agent-mediated transactions are real and they’re getting deployed against actual capital. The infrastructure is being built. The validation that the infrastructure produces good outcomes for the people on the losing side of the agent-quality gap is not being built at the same speed.
The candidate’s defensible move during this window is the channel that doesn’t depend on the bots working: find the hiring manager email, send a short and specific message, repeat. Every week that the AI hiring stack scales up, the opportunity cost of staying inside the application funnel goes up. The 8-minute rejection isn’t a glitch. It’s the system functioning exactly as designed, with the candidate’s resume getting parsed by a model that has no incentive to surface a non-obvious match.
An r/recruitinghell user posting about fast rejections, an Anthropic experiment showing 186 agent deals in a closed marketplace, and a Google announcement of $40 billion in agent infrastructure are not three separate stories. They are three frames of the same shift: the ATS-side bots are getting better and the candidate-side bots are getting worse relative to them. The way to break out of that asymmetry is to send a message to a person whose job is to read it as a person.
How to do this without spending a weekend on each company
The honest constraint is research time. Identifying the right hiring manager, finding their email, and finding the specific recent context to reference takes 20-25 minutes per company if done by hand. Five messages a week is the hobby version. Twenty messages a week is the version that produces the 10-15% reply rate at meaningful volume, which is roughly two to three real conversations per week.
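The cadence math above is back-of-envelope, and it helps to see it as one. A quick sketch, assuming 22 minutes per hand-researched message (the midpoint of the 20-25 minute range) and the 10-15% reply band:

```python
def weekly_pipeline(messages_per_week: int, reply_rate: float,
                    minutes_per_message: float) -> dict:
    """Back-of-envelope math for an outreach cadence: time cost
    in hours and expected replies per week."""
    return {
        "hours_per_week": messages_per_week * minutes_per_message / 60,
        "expected_replies": messages_per_week * reply_rate,
    }


# 20 messages a week at ~22 minutes each, 10-15% reply band.
low = weekly_pipeline(20, 0.10, 22)
high = weekly_pipeline(20, 0.15, 22)
print(low)   # roughly 7.3 hours/week, 2 expected replies
print(high)  # same hours, 3 expected replies
```

Seven-plus hours a week of hand research is the real cost of the twenty-message version, which is exactly the gap the tooling in the next section is aimed at.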
If the process described above sounds like a lot of manual work, it is. That’s why tools like Angld.AI exist: to compress the research-to-outreach pipeline from twenty minutes into about sixty seconds. Paste a job posting. The tool identifies the likely hiring manager, surfaces their recent posts, talks, or shipped work, and drafts a personalized message you can edit before sending. The judgment about whether the message is right stays with the candidate. The grunt work of finding the email and the relevant context goes away. That is the version of the candidate-side stack that holds up against an agent-on-agent funnel: not a tool that submits more applications, but a tool that gets you out of the application funnel and into the channel where the bots can’t follow.