#52 JooBee's newsletter
TL;DR
📌 Read this before measuring AI adoption in performance reviews
📌 Brilliant Jerk or Friendly Incompetent? Which are you rewarding?
🎧 Listen to the newsletter here
This newsletter edition is brought to you by Zelt 👋
HR leader, are you seen as strategic or stuck in admin? I've created the Strategic HR Readiness Quiz to help you find out where you stand and get a personalised report with clear actions to step up your influence and impact.

Question: "I want AI adoption to be included in Performance Reviews. We need to adapt to change; this is important to the business."
Planning to measure AI adoption in Performance Reviews? Read this first
Founders keep telling me the same thing:
"I want AI adoption included in performance reviews. We need to adapt to change."
HR leaders are saying the same thing:
"My founder wants this. We want to keep up with the times. People need to use AI more."
Okay, fine. But by now, you already know what I'm going to ask: Why is this important?
And that's when things start to wobble. Most responses are vague. Trend-driven. FOMO-fuelled.
"We need to keep up."
"Everyone should be using it."
"It's the future."
Yes, yes… but what do you actually want AI adoption to achieve?
[**Cue tumbleweeds 🌵🌬️**]
And here we are, back at the same old problem with performance reviews. We measure what people do… not whether it drives results.
Measure what matters: Impact, not busywork
When I press harder, the answers finally get clearer:
"CS can expand accounts by flagging churn and upsells."
"Sales can grow pipeline with better targeting and outreach."
"Tech can improve quality through smarter incident management."
"We can hit goals without extra headcount or cost."
Now that's impact and performance. AI isn't the point; business outcomes are.
Know your goals: Effort vs. outcome
So let's talk about goals. There are broadly two kinds:
💪 Effort goals (process): hours worked, tools adopted, skills developed, tasks/projects/activities completed, etc.
🎯 Outcome goals (results): revenue delivered, speed of execution, customers retained, risk mitigated, results shipped, etc.
Here's the critical difference:
Effort = lead indicator: what went in.
Outcome = lag indicator: what came out.
If you listen closely, you'll hear business leaders talk a lot about lead and lag indicators.
Let's illustrate this with an example.
Netflix
Let's say Netflix wants to increase customer renewals. Renewal is a lag indicator: we only know the outcome after it happens.
But Netflix tracks lead indicators like viewing hours, watch frequency and interaction levels, because if those drop, there is still time to act before renewals tank. That's the value of lead data.
However, if we are really honest with ourselves, business success, even survival, ultimately hinges on renewals, not hours streamed.
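To make the lead/lag distinction concrete, here is a minimal sketch in Python of watching lead indicators so you can act before the lag indicator moves. The metric names and thresholds are invented for illustration; they are not Netflix's real numbers.

```python
# Hypothetical sketch: flag at-risk accounts from lead indicators
# before the lag indicator (renewal) is known.
# Metric names and thresholds are illustrative assumptions.

LEAD_THRESHOLDS = {
    "viewing_hours_per_week": 5.0,   # below this, engagement is fading
    "sessions_per_week": 3,          # how often they come back
    "interactions_per_week": 2,      # ratings, list adds, searches
}

def weak_lead_indicators(account: dict) -> list[str]:
    """Return the lead indicators that have dropped below threshold."""
    return [
        metric
        for metric, floor in LEAD_THRESHOLDS.items()
        if account.get(metric, 0) < floor
    ]

account = {"viewing_hours_per_week": 1.5, "sessions_per_week": 1, "interactions_per_week": 4}
warnings = weak_lead_indicators(account)
if warnings:
    # Lead indicators buy you time to intervene before the lag
    # indicator (renewal) actually tanks.
    print(f"At-risk account, weak lead indicators: {warnings}")
```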
So why am I dragging you through all these distinctions? 😅
Because we're making the same mistake (again!) with AI in performance reviews. We over-index on effort:
Are people "using AI"?
"Did they attend the AI lunch & learn?"
"How many tools have they tried?"
Then comes performance review time and everyone's misaligned:
Founders expect results.
Managers push back, pointing to effort and activities.
HR gets stuck in the middle, again.
It's the performance equivalent of installing a treadmill in the office… and wondering why no one has run a marathon. 🏃
Donāt repeat the mistake
If you want to include AI in reviews, ask the right question:
"Are we measuring effort (adoption) or outcome (results)?"
Both have their value. Just don't confuse one for the other.
If you are going to reward for results, AI adoption is not the result. Impact is.

Brilliant jerk? Friendly incompetent? Who are you rewarding?
In the article above, I talked about the trap of measuring effort vs. outcome without clear separation. On paper, simple. In practice? Messy.
If your company uses a single performance rating, here's what happens:
One manager scores based on outcomes.
Another scores based on effort.
And… employees talk.
They compare. They speculate. One thinks they're being evaluated on delivery. The other thinks it's all about attitude. Inconsistency breeds confusion and frustration. Frustration turns into, "Why did they get a raise?"
And that leads to the worst thing in any performance system: perceived unfairness.
Effort vs. outcome: Is one better than the other?
Nope. And that's exactly the point.
As I always say, strategy is a choice: not a default, not borrowed "best practices."
The key is to be intentional. Decide what matters to your business, then back it with structure and consistency.
I'll share how I tackled the challenge. Not as a prescription, but for the rationale that shaped our choices.
The Brilliant Jerk vs. Friendly Incompetent
We've all seen it:
The brilliant jerk who delivers results but poisons the team.
The friendly incompetent everyone likes but who doesn't deliver.

Neither is good for the long run. One undermines culture. The other drags down performance.
But this is what happens when no one is aligned on what "good performance" actually looks like. And if you've only got one rating, managers default to their personal biases:
Results-first managers defend the jerk.
Relationship-first managers protect the nice-but-ineffective one.
Over time, both types erode team standards.
And when that happens, I've seen it play out the same way: The real talent (the ones who are both good and good to work with) leave… and the company is left with both ends of the spectrum.
"Put your money where your mouth is"
That's exactly what I told our exec team when I pitched the change.
If we truly believe performance is about what you deliver and how you deliver it, then we need to evaluate and reward both. Explicitly.
So thatās what we did.
Two performance ratings. Not one
We separated performance into two distinct ratings:
Outcome: how well they deliver against role expectations.
Effort: how well they demonstrate values and behaviours (the "how").
We used a 7-point scale (I've tested many scales and my preference is odd numbers, but that's another newsletter; email me if you want the rationale). A score of 4 means someone is meeting 100% of expectations; scores either side reflect performance above or below that bar.
We reward what is important to us
We made a deliberate call: we reward performance.
But in our business, performance = outcome + effort.
That means salary reviews are only triggered when both are met. No one coasts on charm. No one gets rewarded for results at the cost of culture.
Example:
Outcome (role expectations) = 6/7. Evidence: delivered quarterly revenue target with 10% uplift.
Effort (values/behaviours) = 3/7. Evidence: consistent complaints from peers about collaboration; ignored agreed team processes.
High results, low values = no salary review.
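If you wanted to encode that gate, a minimal sketch might look like this. The "meets expectations" bar of 4 is my reading of the 7-point scale above, so treat it as an assumption rather than the exact rule we used.

```python
# Minimal sketch of the two-rating gate described above.
# Scale: 1-7, where 4 = meeting 100% of expectations (assumed bar).
MEETS_BAR = 4

def salary_review_triggered(outcome: int, effort: int) -> bool:
    """Trigger a salary review only when BOTH ratings meet the bar.

    No coasting on charm (high effort, low outcome), and no rewarding
    results at the cost of culture (high outcome, low effort).
    """
    return outcome >= MEETS_BAR and effort >= MEETS_BAR

# The example above: the brilliant-jerk pattern.
print(salary_review_triggered(outcome=6, effort=3))  # False -> no salary review
```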
Of course, as a business, we expect managers to act. Reset expectations. Build a clear development plan. Support the individual to course-correct.
It's not about what others value. It's about what you value.
Performance reviews only work if they reflect what your business truly cares about.
I've been asked, "What if we value results more… like 75% outcomes, 25% behaviours?"
Great. Then make that clear. Align your process to it.
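To illustrate what "make that clear" could look like in practice, here is a tiny hypothetical sketch of that 75/25 weighting. The weights and the function are illustrative only; how you gate rewards on the blended score is still your strategic choice.

```python
# Hypothetical weighted variant: results count for 75%, behaviours 25%.
OUTCOME_WEIGHT, EFFORT_WEIGHT = 0.75, 0.25

def blended_score(outcome: int, effort: int) -> float:
    """Blend the two ratings into one number on the same 1-7 scale."""
    return OUTCOME_WEIGHT * outcome + EFFORT_WEIGHT * effort

print(blended_score(outcome=6, effort=3))  # 5.25
```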
When you separate the measures, you make expectations transparent. And that means fewer surprises, more consistency and a fairer system for everyone. #nosurprises
Here's the one lesson I've learned after designing countless performance reviews:
"People don't hate performance reviews.
They hate inconsistent decisions that feel unfair."
You can now "LISTEN" to the newsletter. I've turned my newsletter into audio, voiced by AI podcasters. It's in beta, so give it a listen and tell me what you think!
Scaling your start-up?
Let's make sure your leadership, people and org are ready. Here are 3 ways I can help:
🔗 LinkedIn | 📝 JooBee's blog | 🤝 Work with me