AI Is Shipping Before It Works. Stop Applying to Job Boards While That’s True.

MIT Technology Review published a piece on April 24 with a title that gets to the point: “Health-care AI is here. We don’t know if it actually helps patients.” The argument is straightforward. Hospitals, insurers, and clinical software vendors are deploying AI systems at scale. The systems do generate outputs. Whether those outputs lead to better patient outcomes is, in most cases, not measured. That same pattern explains why this is the right week to stop applying to job boards.

Hiring software is at the same stage of the curve. Vendors are shipping. Buyers are buying. End users (the candidates and the hiring managers) are absorbing the cost. No rigorous public validation studies show that AI-driven applicant tracking or AI-assisted bulk-apply tools produce better hires or better matches than the previous generation of keyword filters. Deployment ran ahead of validation. By the time someone runs the rigorous study, the tools will be entrenched. In the meantime, the people in the system are paying for the gap.

The healthcare pattern is the hiring pattern

The MIT Tech Review piece on healthcare AI is worth reading because it names a pattern that recurs everywhere AI is deployed before its evidence base catches up. Tools get adopted because they look like progress. Not adopting feels like falling behind. The question of whether the tool actually does the job it was sold for stays open for years while procurement and product cycles entrench it. Hospitals were buying clinical decision-support AI in 2023 with vendor pitch decks that promised better outcomes. The studies showing whether those promises held up are starting to appear in 2026 and the picture is mixed.

In hiring, the same loop is two cycles further along. Applicant tracking systems with AI scoring layers were adopted at scale in 2023 and 2024. The studies on whether they actually surface better candidates are still mostly internal vendor case studies, which, like the healthcare vendor pitch decks, prove what vendor case studies always prove. The independent research that does exist has been pointed in the other direction. Earlier this month a piece in MIT News flagged that AI resume screeners reject candidates with non-linear career paths at roughly 1.8 times the rate of traditional keyword-match systems, which means the screeners are getting worse at the kinds of judgment calls that distinguish a good hire from a typical one.

The candidate-side tools are not better off. AI cover-letter generators, AI resume rewriters, and AI auto-apply services have proliferated. None of them publish data showing they improve a candidate’s odds of landing the job they actually want. They publish data on application volume. Volume was never the problem.

What the macro numbers say about why this matters now

The Bureau of Labor Statistics released its latest numbers, covering March 2026, on April 22: headline unemployment at 4.3%, payrolls up 178,000, average hourly earnings up nine cents. That is a placid surface. Underneath it, the labor market has hardened into a low-mobility equilibrium that economists started calling the “great stay” in 2024. Quits rates are at multi-year lows. Time to fill is at multi-year highs. The few good roles that exist do not stay open long, and they often never get posted publicly because the hiring manager already knows who they want.

Pour AI into that environment and what you get is a faster pipe carrying the same constipated flow. Candidates can fire off ten applications in the time it used to take to send one. ATS vendors respond by making their AI screeners more aggressive, which raises the false-rejection rate. Job seekers respond by writing more keyword-loaded resumes, which dilutes the signal further. Each cycle makes the next one worse, and none of the cycles produce the thing the candidate actually wants, which is a job at a place worth working.

The cost of this is paid by the candidate. Industry estimates have settled around 0.4-1% as the realistic response rate for cold applications submitted through public boards. That is, at best, roughly one in 100 applications produces a callback. AI has helped candidates submit more applications. It has not changed that ratio in their favor. If anything, the surge in application volume has pushed it down.

What this looks like from inside the system

A r/jobs thread that crossed 60 upvotes this week summarized it well. The post title: “Indeed is just the Tinder of Job hunting at this point.” The body: “Haven’t used it in years but wanted to take a look at a listing a friend told me about. The whole website seems to be almost entirely fake job listings, scams, or slop companies looking for warm bodies to exploit, just like Tinder. Is indeed even remotely usable in finding a job anymore?” The top reply: “job portals are dead now, tbh.” Several other commenters described their workaround, which was to use the boards only to gather company names and then go straight to the company’s careers page or to the hiring manager directly.

User sentiment is not data, but it is signal. When the people inside a system this large are calling it fake, scammy, or dead, it is usually because the system stopped working for them somewhere upstream. The upstream change is that AI accelerated both the application volume and the speed of automated rejection without changing the underlying matching problem.

This is the practical reason to stop applying to job boards as a primary strategy. The boards are not failing because the engineers building them are bad at their jobs. They are failing because they are now the venue where two AI systems argue about whether a human should be allowed to talk to another human, and neither of those AI systems is on the candidate’s side.

The OECD has the data on what you’re actually trying to find

The OECD Employment Outlook 2025 made a finding that started circulating again this week. For workers aged 62 to 71, moving from the worst job to the best job in terms of working conditions is valued as economically equivalent to a 75% wage increase. Schedule flexibility alone is worth about 15%. Work autonomy about 12%. The 75% number is age-specific, but the directional point holds across cohorts in the OECD’s broader job-quality work. The non-pay attributes of a job (the boring stuff like flexibility, autonomy, low physical strain, predictable hours) compound to amounts that swamp the headline salary number.

The relevance to job boards is direct. The roles with those attributes are not, as a rule, posted on Indeed. They get filled before they would have to be posted publicly, through referrals, internal moves, and direct candidate outreach to hiring managers about roles those managers haven’t even formally opened yet. None of that traffic touches a job board. The OECD data describes the prize. The board is the place that does not have the prize.

What “go around the system” actually means

A direct outreach job search bypasses the AI-vs-AI layer by sending the message to a person whose job is to read it as a person. Recruiter-side surveys (Jobvite has tracked this for a decade) consistently put referral hires at around 30-40% of total hires at companies that track source of hire. A direct, personalized message to a hiring manager is the cold version of a referral. Done well, it performs in the same territory, with reply rates in the 10-15% range.

The arithmetic is the part that makes this obvious. A candidate sending 100 board applications at a 0.5% reply rate can expect, on average, half a conversation; it takes 200 applications to expect one. The same candidate sending 20 well-targeted outreach messages at a 12% reply rate can expect between two and three. The outreach version takes more thought per message and less total time, because the volume is so much lower. It also produces conversations with hiring managers, not screening calls with recruiters working downstream of an AI-generated rejection threshold.
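
For readers who want to sanity-check that math themselves, here is a minimal expected-value sketch in Python. The reply rates are the illustrative figures cited above, not measured values, and the function name is ours.

# Expected conversations per channel, using the illustrative reply rates
# from this piece (0.4-1% for cold board applications, 10-15% for
# well-targeted direct outreach).

def expected_conversations(messages_sent: int, reply_rate: float) -> float:
    # Expected replies = volume x per-message reply rate.
    return messages_sent * reply_rate

board = expected_conversations(100, 0.005)    # 100 applications at 0.5% -> 0.5
outreach = expected_conversations(20, 0.12)   # 20 messages at 12% -> 2.4

print(f"100 board applications -> {board:.1f} expected conversations")
print(f"20 outreach messages   -> {outreach:.1f} expected conversations")

The point of the sketch is not the precise numbers; it is that a fivefold reduction in volume still produces several times the expected conversations when the per-message reply rate improves by an order of magnitude.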

There is a real cost to the outreach approach. It is uncomfortable to write to strangers. It feels like asking for something. The first ten messages are awkward. The work of identifying the right person and finding something specific to reference takes longer than tapping Easy Apply. Networking for job seekers in 2026 doesn’t mean coffee chats and conferences; it means short, specific messages to specific people about specific roles, sent in steady volume. That is harder than the alternative for the first two weeks. After that the math takes over.

Why this isn’t going to revert

The optimistic version of the AI-in-hiring story is that the systems will get better, the false-rejection rate will fall, and the boards will start working again. That story would require validated evidence that the AI deployments actually improve outcomes, plus market pressure on vendors to publish that evidence, plus customer demand for it. None of those conditions exist right now. ATS vendors compete on price and feature checklists, not on hire-quality outcome data. Boards compete on candidate volume, not match quality. The incentives are pointed toward more automation, not more validation.

The healthcare AI piece in MIT Tech Review noted that some of the deployed clinical AI systems will, in fact, turn out to work. Some will not. The validation studies are starting to be funded and the answers are coming, but slowly, over years. Hiring will likely follow the same trajectory. By 2030 there will be defensible data on which AI hiring tools work and which do not. Until then, candidates are operating in the validation gap. The defensible move is to use the channel that works without depending on the AI working.

A faster way to find the right person

Doing direct outreach at the volumes that produce results means making the per-message research fast. The bottleneck is figuring out who to write to and finding something specific to reference. Angld.AI was built for that step: paste a job posting and it identifies the likely hiring manager, surfaces specific things they have written or shipped recently, and drafts a personalized message you can edit before sending. The argument here is not that the tool is the answer. The argument is that the channel is the answer, and a tool that compresses 20 minutes of research into 60 seconds makes the channel viable at the volumes the data says it needs. The job board is getting worse. The math on outreach is getting better.