What happens when the algorithm controls your pay?

AI Is Moving Into Pay Decisions. That Changes the Risk Profile.

The Recruiting Life is brought to you by: ProvenBase 

The Recruiting Life Newsletter

Before we go any further, let me be clear.

I am not a lawyer. I do not play one on TV. And nothing in this piece is legal advice.

This is pattern recognition.

This is reading the signal before it becomes a headline.

If you plan to act on any of this, call your employment counsel first. Let them translate risk into policy. That is their lane.

My lane is different.

I look at where the ground is shifting.

I look at where enforcement posture is hardening.

I look at where plaintiffs’ attorneys will eventually probe.

I am not here to argue case law.

I am here to tell you where the cracks are forming.

Ignore it if you want.

But do not confuse early warning with legal instruction.

The lawyers close the deal.

I just point to the smoke. 👇

Most recruiting systems reward visibility.

ProvenBase rewards proof.

If your team keeps revisiting the same candidates while roles stay open, the issue isn’t talent availability.

It’s how the market is being searched.

ProvenBase uses Deep Search to surface candidates based on demonstrated skills, real-world context, and intent—not who markets themselves best or happens to sit in the usual databases.

The result: broader pipelines, less noise, and faster access to people who can actually do the work.

This isn’t another sourcing tool.
It’s a different view of the talent market.

👉 See how it works: provenbase.com

The HR Blotter

State Lawmakers Block Workplace Microchipping Before It Begins - Washington lawmakers moved to slam the door on employers implanting microchips in workers. The bill bans companies from requiring, requesting, or pressuring employees to accept subdermal chips for tracking, identification, or workplace control. Supporters say the line is simple: workers aren’t corporate hardware.

Fast-Food’s New AI Boss Might Be Listening to Everything - Fast-food chains are quietly wiring kitchens with AI assistants that guide, track, and increasingly judge workers. Burger King’s “Patty” can coach staff but also rate their politeness, triggering backlash over digital surveillance on the job. As McDonald’s, Yum Brands, and Starbucks roll out similar tools, the line between helping employees and monitoring them is getting blurry.

Stuck, Burned Out, and Waiting for the Layoff - Millennials are stuck in jobs they don’t trust and a job market they fear even more—so many are quietly rooting for layoffs. Debt, rising costs, and AI hollowing out career paths have turned the old promise of stability into a gamble. The result is a generation gripping the ladder while the rungs disappear beneath them.

The Job Market Isn’t Crashing—It’s Freezing - Layoffs cooled in February, but the hiring engine is stalling fast. Employers cut fewer jobs, yet hiring plans have plunged 56% this year as AI, restructuring, and economic uncertainty reshape payrolls. The result isn’t mass layoffs—it’s a job market slowly freezing over.

AI Floods the Internet—So Employers Want Better Humans - As AI floods the internet with cheap content, companies are suddenly desperate for people who can actually think and write. Demand for storytelling and communication skills is rising while interest in computer science cools. In the AI age, the most valuable skill might be the one machines struggle with: being human.

…

The Jim Stroud Podcast

Not subscribed to The Jim Stroud Podcast? Then you’ve been flying blind. Here’s a sneak peek at the latest episode debuting tomorrow.

…

What happens when the algorithm controls your pay?

AI is no longer just screening resumes.

It is influencing merit increases.
Promotion timing.
Bonus allocation.
Range placement.

Once it touches compensation, the stakes change.

Because pay is not a product feature.

It is a civil rights issue.

Under Title VII of the Civil Rights Act, employers can be liable not only for intentional discrimination, but also for neutral practices that create unlawful disparate impact. That framework lives in federal statute at 42 U.S.C. § 2000e-2.

The law does not care whether a manager made the decision or an algorithm generated the recommendation.

Impact controls.

The Equal Employment Opportunity Commission has already addressed this directly. In its guidance on software, algorithms, and artificial intelligence used in employment decisions, the EEOC makes one thing clear: employers remain responsible for discriminatory outcomes, even if the tool comes from a third-party vendor.

AI is not a liability shield.

It is a selection mechanism.

And selection mechanisms are regulated.

The technical backbone here is not new. The Uniform Guidelines on Employee Selection Procedures, codified at 29 C.F.R. Part 1607, still define how adverse impact is analyzed in practice.

That structure applies whether you are evaluating a written test, a structured interview, or a compensation algorithm.

If outcomes disproportionately disadvantage a protected group, the employer must demonstrate job-relatedness and business necessity. And even then, liability risk remains if a less discriminatory alternative exists.
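To make that analysis concrete: the Uniform Guidelines' rule of thumb is the "four-fifths rule" — if any group's rate of receiving a favorable outcome falls below 80% of the highest group's rate, that is generally regarded as evidence of adverse impact. Here is a minimal sketch of that check applied to a pay-adjustment outcome. The group names and counts are hypothetical, and a real audit would also run statistical significance tests; this only illustrates the ratio math.

```python
# Illustrative sketch of the "four-fifths rule" from the Uniform Guidelines
# (29 C.F.R. § 1607.4(D)), applied to a pay outcome such as receiving an
# above-target merit increase. Group labels and counts are hypothetical.

def selection_rate(favorable: int, total: int) -> float:
    """Share of a group that received the favorable outcome."""
    return favorable / total

def four_fifths_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's rate to the highest group's rate.
    A ratio below 0.80 is generally regarded as evidence of adverse impact."""
    rates = {g: selection_rate(f, t) for g, (f, t) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical review cycle: (count receiving above-target increase, group size)
ratios = four_fifths_ratios({
    "group_a": (120, 200),   # rate 0.60
    "group_b": (90, 200),    # rate 0.45
})
flagged = {g: r for g, r in ratios.items() if r < 0.80}
# group_b's ratio is 0.45 / 0.60 = 0.75, below the 0.80 threshold
print(flagged)
```

The same check runs before and after deployment: a model can pass on historical data and still drift once it starts shaping live pay decisions.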

Now apply that to pay.

Compensation is not one decision.

It is a chain reaction.

Initial range placement.
Mid-cycle adjustments.
Promotion triggers.
Discretionary bonuses.
Exception handling.

If AI influences even one stage, disparities can scale across thousands of employees in a single review cycle.

And compensation claims are not theoretical. The EEOC explicitly enforces pay discrimination under Title VII, the Equal Pay Act, the ADEA, and the ADA.

The legal pressure is already building around algorithmic systems more broadly.

In Mobley v. Workday, Inc., a federal court allowed discrimination claims tied to algorithmic screening software to proceed, rejecting arguments that the software provider was automatically insulated from Title VII scrutiny.

That case involves hiring.

But the signal matters.

Courts are willing to look inside the machine.

They are willing to ask who controls it.

They are willing to test vendor relationships under agency theory.

Now let’s address the phrase that triggers headlines.

Reverse discrimination.

There is no separate statute for that.

Claims are evaluated under the same disparate treatment and disparate impact standards regardless of which group alleges harm.

The Supreme Court’s decision in Ricci v. DeStefano is the modern reference point. The Court held that employers cannot take race-based action to avoid potential disparate impact liability unless there is a strong basis in evidence that they would otherwise be liable.

Different facts.

Same structural warning.

If an AI-driven pay equity system is explicitly configured to weight protected status in a way that disadvantages another group, that configuration can create exposure under the same civil rights framework.

The law regulates outcomes.

And it regulates how protected characteristics are used.

Right now, public enforcement emphasis remains focused on preventing under-compensation of historically disadvantaged groups and clarifying employer liability for algorithmic systems.

There is not yet documented evidence of a wave of AI-driven majority-group pay lawsuits.

But the pathway exists.

And once compensation algorithms are normalized, plaintiffs’ attorneys will test the edges.

So what should employers do?

Treat AI-enabled compensation systems like regulated infrastructure.

Test for adverse impact before deployment.

Test again after deployment.

Document business necessity for any model-driven pay adjustment logic.

Evaluate whether less discriminatory alternatives exist.

Keep human oversight real, not ceremonial.

Because in a courtroom, “the model recommended it” will not carry weight.

Responsibility flows back to the employer.

AI can tighten governance.

It can standardize decisions.

It can surface hidden disparities.

But it can also encode bias and scale it with precision.

Once AI touches pay, you are not experimenting with technology.

You are operating inside decades of civil rights doctrine.

And that doctrine has teeth.

…

The Comics Section

…

One more thing before I go…

In case you missed it, I launched a new newsletter last week - Career Intelligence Weekly. If you know someone open to new opportunities or looking to step up in their career, this will boost their strategic thinking exponentially. But I could be wrong; see for yourself.

…

And as always, hit reply and let me know how I’m doing. Or slide into my DMs, as the kids say. All good.

…

Gimme feedback! I can take it.
