73% of hiring directors use AI to manage applications, 65% say AI rejected applicants before human review, and 52% use AI data to drive layoff decisions — with most workers unaware any of it is happening.
AI Now Screens Resumes, Flags Underperformers, and Recommends Layoffs — and Most Workers Don't Know
By Hector Herrera | May 15, 2026 | Work
Most workers applying for jobs or showing up to work today are being evaluated by AI systems they have never been told about. A new investigation finds 73% of hiring directors use AI to manage application volume, 65% report their AI automatically rejected applicants before any human saw the application, and 52% are using AI-generated productivity data to drive workforce restructuring and layoff decisions. The systems are expanding faster than any regulatory framework designed to govern them.
The findings land the same week LinkedIn announced it was cutting 900 employees while its AI hiring products crossed $450 million in annual recurring revenue — making the platform itself a case study in the dynamic the data describes.
How Widespread This Actually Is
The numbers in the investigation published by The Washington Times are not edge cases from early-adopter companies. They describe mainstream employer behavior in 2026:
- 73% of hiring directors use AI to manage incoming application volume
- 65% say their system automatically rejected applicants before a human reviewed the application
- 52% use AI to generate the productivity data that informs workforce restructuring and layoff recommendations
- Workers, in most cases, have no way of knowing an AI system made or influenced the decision about their employment
The tools doing this work are sold by a range of vendors — Workday, SAP SuccessFactors, HireVue, Eightfold, and dozens of smaller players — and are positioned as efficiency solutions that help HR teams handle volume that would otherwise require more headcount.
What the Tools Are Actually Doing
At many large employers, the application-to-screening funnel has been compressed to the point where an AI model makes a binary pass/fail decision before any recruiter opens the file. The criteria fed into these models vary by vendor and employer but typically include:
- Keyword matching against job requirements
- Tenure signals — how long a candidate stayed at previous employers
- Credential verification against internal benchmarks
- Behavioral signal analysis from recorded video interviews, scored by AI before any human review
The problem is well-documented: these systems encode the biases present in their training data. If the historical hires that trained the model skewed toward a particular demographic or credential type, the AI perpetuates that pattern. The EEOC has issued guidance on AI hiring tools and disparate impact liability, but enforcement actions have been limited.
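To make the pass/fail dynamic concrete, here is a minimal sketch of keyword-threshold screening. The scoring rule, threshold, and data are invented for illustration; real vendor systems are proprietary and considerably more complex.

```python
import re

def keyword_screen(resume_text: str, required: list[str], threshold: float = 0.6) -> bool:
    """Hypothetical binary screen: pass if the resume contains at least
    `threshold` fraction of the required keywords. Not any vendor's real rule."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    hits = sum(1 for kw in required if kw.lower() in words)
    return hits / len(required) >= threshold

resume = "Senior engineer: Python, Kubernetes, led migration to AWS."
required = ["python", "kubernetes", "aws", "terraform", "go"]
# 3 of 5 keywords match (0.6), so this candidate just clears the bar;
# drop one keyword from the resume and the file is rejected unseen.
print(keyword_screen(resume, required))
```

The fragility is the point: a strong candidate who phrases a skill differently ("EKS" instead of "Kubernetes") falls below the threshold and no recruiter ever opens the file.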
Performance: The Invisible Scorecard
The more significant shift is in workforce management: 52% of employers now use AI to generate the productivity metrics that feed into performance reviews, promotion decisions, and restructuring plans. For knowledge workers, these metrics typically include:
- Email and calendar activity patterns
- Ticket closure rates in project management systems
- Code commit frequency and volume (for engineering roles)
- Response time and meeting participation scores
None of these metrics were designed to capture the full picture of a knowledge worker's contribution. A senior employee who spends time mentoring colleagues, reviewing work, or managing ambiguous strategic projects may score poorly on every quantitative metric an AI system can track, while a junior employee optimizing for measurable output ranks highly.
When those scores feed into layoff recommendations — which 52% of employers now say they do in some form — the consequences are structural, not just individual.
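A toy composite score shows how easily purely quantitative metrics invert the real ranking. The weights, metric names, and figures below are entirely made up for illustration; they do not describe any actual vendor's model.

```python
# Hypothetical composite productivity score over trackable metrics only.
# Faster response times score better, so response minutes get a negative weight.
WEIGHTS = {"tickets_closed": 0.4, "commits": 0.3, "avg_response_min": -0.3}

def productivity_score(metrics: dict[str, float]) -> float:
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

junior = {"tickets_closed": 40, "commits": 120, "avg_response_min": 15}
# Hours spent mentoring, reviewing, and steering ambiguous projects
# generate no tickets, no commits, and slower replies — all invisible here.
senior = {"tickets_closed": 12, "commits": 30, "avg_response_min": 45}

print(productivity_score(junior))  # ≈ 47.5
print(productivity_score(senior))  # ≈ 0.3
```

The model is not "wrong" on its own terms; it is answering a narrower question than the one the layoff decision actually poses.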
The Regulatory Gap
The EU AI Act classifies AI systems used in hiring and employment as high-risk, requiring transparency, human oversight, and the right to explanation for affected individuals. Those provisions begin taking effect for covered systems in mid-2026.
In the United States, there is no equivalent federal framework. New York City's Local Law 144 requires bias audits for automated employment decision tools, but enforcement has been limited and the law applies only to NYC-based employers. Illinois, Maryland, and Washington have passed laws requiring disclosure of AI use in video interviews, but those laws do not extend to resume screening or performance management.
The practical result: most U.S. workers have no legal right to know whether AI influenced their hiring or termination, no right to see what data was used, and no right to appeal.
What Workers Can Do — For Now
Until disclosure requirements expand, workers have limited but real options:
- Ask directly. Some employers will disclose AI screening if asked; the question itself may trigger more human review of your application.
- Optimize for AI legibility. Resumes should use standard section headers, avoid graphics or tables that parsing systems misread, and include exact keywords from job postings.
- Request documentation. In jurisdictions with existing rights (EU, NYC), formally request the basis for any adverse employment decision.
- Organize collectively. Several unions have negotiated AI transparency clauses into contracts; more are in progress.
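The "optimize for AI legibility" advice above can be partially self-checked before submitting. Here is a hypothetical helper that reports which posting keywords a resume is missing; the substring matching is a simplification, not a model of any real parser.

```python
def missing_keywords(resume: str, posting_keywords: list[str]) -> list[str]:
    """Return posting keywords absent from the resume (case-insensitive
    substring check). A rough pre-submission self-check, nothing more."""
    text = resume.lower()
    return [kw for kw in posting_keywords if kw.lower() not in text]

resume = "Data analyst with SQL and Tableau experience."
posting = ["SQL", "Tableau", "Python", "A/B testing"]
print(missing_keywords(resume, posting))  # ['Python', 'A/B testing']
```

Running this against the exact wording of a job posting catches the most common legibility failure: describing a required skill in different words than the posting uses.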
What to Watch
Federal AI employment legislation has been introduced in multiple sessions but has not advanced. The most likely near-term regulation will come from the EEOC expanding existing disparate impact guidance to cover AI tools specifically, and from state legislatures — California, Colorado, and Illinois have active bills in 2026 sessions. Watch also for the first major class-action litigation against an employer for AI-driven discriminatory hiring outcomes at scale: the legal theory is established, but the right factual record has not yet produced a landmark case.
Source: The Washington Times