#63 JooBee's newsletter

TL;DR

🪞 AI reveals the people issues you can’t just say ‘We’ll fix that later’ about

💡 Lessons from Pleo's former CTO: Scaling tech teams in the age of AI

🎧 Listen to the newsletter here

This newsletter edition is brought to you by Zelt 💛

90% off STEP UP Bootcamp

This is not a typo 😉. If you’re serious about influencing at the exec table, driving business outcomes, and stepping up as a commercial HR leader, click below to find out how to unlock this offer.

Question: We’re moving into the building-to-scale stage and pushing ahead with AI. Everyone tells me this should make us faster, but instead it feels like the cracks are widening. Is AI exposing something we’ve missed?

Founder

AI reveals the people issues you can’t just say ‘We’ll fix that later’ about

When a non–AI-native business begins its AI transformation, especially at the building-to-scale stage, the first shock isn’t the technology. It’s the mirror the technology holds up 👀🪞.

You’ve probably heard this often enough: what got you here won’t get you there.

Once you’ve achieved product-market fit, you’re building the growth engine of the business by creating repeatability in 3 core areas:

  • Customer lifecycle: how your product reaches the customer, efficiently

  • Strategy-to-execution cycle: how direction becomes delivery, fast

  • Employee lifecycle: how you find, grow, keep (and exit) talent, with consistent experience

This is already a high-stakes transition for any start-up.

And while I’m SUPER excited about what AI makes possible at this stage, I’m equally SUPER curious about the mirror it holds up to people and organisations — exposing the organisational design needs we can no longer brush aside with, “we’ll fix that later.”

3 mirrors AI holds up (to our organisation)

🪞1. ORG CAPABILITY: Clarity becomes the constraint 

Early teams run on tacit knowledge. “Ask John” is a perfectly legitimate workflow when you’re 15 people.

But with AI in the mix…who or what is John? 🤔

Tools trained on messy inputs produce messy outputs, a.k.a. GIGO (Garbage In, Garbage Out…a lesson from my engineering past that AI has politely resurrected 😅).

If your organisation lacks clarity on who owns what, how decisions are made, and what “good” looks like — AI will amplify confusion, not solve it.

  • In strong systems, AI accelerates leverage.

  • In weak systems, AI amplifies the chaos.

💡The “we’ll fix that later” we can no longer ignore: Inconsistent roles, decision rights and standards that were “fine for now” are now the bottleneck.

🪞2. INDIVIDUAL CAPABILITY: Judgment becomes the scarce skill

AI can produce almost anything: code, slides, job ads, strategies, prototypes, process drafts. In seconds!

But it can’t tell you what’s useful; that’s still a human job. I heard someone say this recently: “When output is cheap, judgment becomes expensive.”

  • The question is no longer: “Can they produce more?”

  • It’s now: “Can they produce the right thing?”

The skill premium is shifting to those who can:

  • Interpret what AI produces and know what’s missing

  • Distinguish what’s needed from what’s merely available

  • Design systems that scale, not just prototypes that work today

💡The “we’ll fix that later” we can no longer ignore: Company goal setting and performance reviews that still focus on ‘activity’ instead of ‘outcomes’; missing intentional workforce planning for emerging skill needs; under-investment in systems thinking and judgment as core capabilities.

🪞3. CUSTOMER & COMMERCIAL IMPACT: Speed comes with a hidden cost

When org capability lacks clarity (Mirror 1) and individual capability lacks judgment (Mirror 2), AI doesn't just move faster — it moves without guardrails.

Yes, AI reduces the cost of creation. Speed sounds attractive until it creates:

  • Security risks → GDPR fines 😱

  • Messy architecture → Delayed launches 😱

  • Unstable platform → Customer churn 😱

This is where org design really shows up. We don’t design organisations in isolation — we design them to create impact, to serve customers and deliver commercially viable outcomes.

The ultimate test isn’t: “Did we ship faster?”

It’s: “Did this improve the customer experience, and can we sustain it?”

When speed outpaces judgment and system design, the hidden cost shows up in rework, risk, and recovery. We're not saving time. We're borrowing it — and paying interest later.

💡The “we’ll fix that later” you can no longer ignore: Shifting the mindset that speed equals progress. Build customer centricity and commercial acumen as core capabilities across your organisation, not just in commercial functions.

The mirror is a gift for HR

The mirror AI holds up reveals where clarity is missing, where judgment isn’t being developed and where speed is being mistaken for progress. These were always there; AI just removes the buffer of time and tolerance that allowed us to say, “we’ll fix that later.” 

For HR leaders, the rise of AI has turned the so-called ‘soft stuff’ into the strategy. People and organisation are the infrastructure for scale.

Lessons from Pleo's former CTO: Scaling tech teams in the age of AI

Meri Williams, former CTO at Pleo and Monzo, joined our STEP UP Boardroom community's Business Intel session to explore how these mirrors play out in real tech teams that are building to scale — where old scaling problems meet the new challenge of AI.

Here are 3 key takeaways from Meri.

🪞1. TECH ORG CAPABILITY: Clarity becomes the constraint

In the early stage, most start-ups run one tech team on one architecture, which Meri calls “a big ball of spaghetti.” Everyone knows everything. Institutional knowledge is everywhere.

But scaling requires evolution: multiple teams shipping in parallel. That means clear ownership, modular design, and well-documented pathways — not tribal knowledge.

You don't have time for everybody to know everything all the time anymore. More shit needs writing down.

Here's where AI makes it urgent: new engineers need to be independently productive within six weeks. Not shadowing for three months. Not figuring it out as they go.

With rapid scaling, the majority of your team will have been there less than three months. If onboarding isn't systematised, you're operating with a team that can't function independently.

Add AI to that chaos? You're training tools on unclear processes and undocumented decisions. GIGO multiplied.

If you've got a 3-month probationary period, you need good enough onboarding that you're enabling and evaluating people at full strength, at full speed.

The payoff when you get this right: Meri's teams at Pleo cut cycle time from 5 days to under 3 (40% faster) while doing 4x the work with the same-size team.

💡 Meri's takeaway: Onboarding is infrastructure. If your engineers can't be productive in 6 weeks, your system, not your people, is the problem.

🪞2. ENGINEERING CAPABILITY: Judgment becomes the scarce skill

Engineers used to spend 2-3 years building muscle memory before reviewing others' code. Now, entry-level engineers review AI-generated code from day one.

They need to:

  • Understand what "good" looks like, even when they didn't write it

  • Spot gaps, risks, and architectural problems (not just syntax errors)

  • Think in systems, not scripts

A lot of engineers are spending most of their time reviewing code that they didn't write rather than writing code. When AI can generate anything in seconds, the bottleneck isn't “Can you produce it?” It's “Do you know if it’s any good?”

The better your engineering team is, the more likely AI is to be useful. The worse your engineering team is, the more likely it is to just cause chaos and slop.

This makes user-centricity non-negotiable. Great teams were always user-focused. The proportion of engineers who need to show that capability is much higher now.

Because AI can build almost anything. But it can't tell you what's worth building. The skill premium is shifting away from syntax knowledge toward business understanding, architectural thinking, and user empathy.

💡 Meri's takeaway: If you're hiring for output over judgment, AI will make your team look productive…while they build the wrong things.

🪞3. COMMERCIAL & CUSTOMER IMPACT: Speed is not free

Meri painted a picture: imagine a stunning house. Beautiful facade. Impressive from the street. Now walk around the back. It's propped up with bits of wood. Not structurally sound. Not safe to live in.

That's what Meri calls "vibe coding" — when AI lets you bash out something that looks polished and sort of works, without any of the below-the-waterline work: security, maintainability, performance, scalability.

When speed outpaces structural integrity, commercial impact starts showing up:

  • Support tickets spike

  • Customer trust erodes

  • Customer service teams work weekends fixing preventable issues

This is where tech leaders must speak business:

  • “This shortcut could lead to 10% revenue lost under GDPR”

  • “This mess will slow every feature we ship for the next 6 months”

When you say there might be a security problem, people go, 'Yeah, whatever.' But say, 'What if we lose 10% of revenue to GDPR?' That's a very different conversation.

💡 Meri's takeaway: Speed without integrity costs real money. Make the cost visible in language the business understands.

The mirror is still a gift

AI doesn't create these problems in scaling tech teams. It removes the buffer of time that let us ignore them, making them too expensive to leave unfixed.

So, before you ask what AI can do for your organisation, ask what AI is revealing about your organisation.

The mirror is a gift. Use it 😉

Step up from HR Leader to Business Leader

Ready to influence strategically, drive business impact and make HR indispensable?
Here are 3 ways I can help:

You can now ‘LISTEN’ to the newsletter

I’ve turned my newsletter into audio, voiced by AI podcasters. It’s in beta, so give it a listen and tell me what you think!