Why “Engineering Taste” Is Becoming a Critical Skill for Engineering Teams

8 February 2026, by Nicolette

Everyone is “doing something with AI.” PRs are shipping faster. Demos look impressive. And quietly, a lot of teams aren’t sure if they’re actually getting better. 

In a world where speed is cheap, the engineers who stand out won’t be the fastest or the most prolific. They’ll be the ones with taste. 

Taste is what shows up before the code does.

It’s the judgement to:

  • Design logic instead of just generating output
  • Understand the user well enough to know what not to build
  • Look at an AI-generated solution and say, “This technically works – but it’s wrong.”

🎥 ▶️ In this on-demand event, Barbara Fourie and Jason Tame from OfferZen, alongside Stephen van der Heijden from Sendmarc, unpack what AI fluency actually looks like inside real engineering teams and how it’s redefining what “great work” means today.

TL;DR - Top insights on AI Fluency

  • AI shifts engineers from writing code to designing logic.
    The hard part isn’t producing code anymore, it’s clearly articulating intent, constraints, and system boundaries, and taking responsibility for what ships.
  • When building becomes cheap, judgement becomes the bottleneck.
    Product taste is choosing the right problems and creating real user impact. Speed without judgement leads to bloated products, wasted effort, and missed opportunities.
  • AI fluency doesn’t scale through individuals, it scales through teams.
    It’s not about one “AI wizard” who knows all the tools. What matters is shared standards, visible workflows, and collective judgement that prevent AI from turning into theatre.
  • AI amplifies fundamentals, it doesn’t replace them.
    Output is no longer a reliable signal of competence. Ownership, reasoning, and the ability to explain trade-offs matter more than ever.
  • Speed is table stakes. Taste is the differentiator.
    Everyone can ship faster with AI. The teams that pull ahead are the ones whose judgement compounds in systems, products, and decisions that hold up over time.

What AI fluency really means for engineering teams in 2026

As the conversation unfolded, a clear pattern emerged. Speed wasn’t the debate. Tools weren’t the debate. Taste was. Below are five takeaways, anchored in the taste frameworks redefining how AI is changing engineering work.

[Image] AI fluency in software engineering teams: a new framework presented by Barbara Fourie, Head of Product at OfferZen. Catch the session on demand.

1. AI shifts engineers from writing code to designing logic

What we heard: AI has pushed engineers, PMs, and designers closer together to create end-to-end experiences. While coding still matters, more of the value now comes from clearly articulating logic, intent, and constraints – and knowing how to guide AI to execute within them.

As Barbara put it: “If you can instruct AI to build clear, elegant systems and take full ownership of what ships and its safety – that’s great design taste.”

Why it matters: Teams that treat AI as a code generator hand over control and hope for the best. Teams that treat it as a system builder they actively supervise end up with software that’s easier to reason about, safer to change, and faster to evolve.

What to do: Invest in logic design skills: system thinking over syntax, architectural clarity over cleverness, and the ability to explain why something is built the way it is – not just how it works.
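One way to picture the shift: instead of asking AI for code and hoping, the engineer writes the contract first. Below is a minimal Python sketch – the RateLimiter example and its constraints are invented for illustration, not from the session. Intent, constraints, and boundaries live in the contract, and any implementation, AI-generated or not, is supervised against it.

```python
from typing import Protocol


class RateLimiter(Protocol):
    """Decide whether a request may proceed.

    Intent: protect downstream services without blocking the request path.
    Constraints: allow() must be O(1) and must never raise; failing open
    is acceptable, failing closed is not.
    Boundary: this component decides; it does not log, retry, or alert.
    """

    def allow(self, client_id: str) -> bool:
        ...


def handle_request(limiter: RateLimiter, client_id: str) -> str:
    # Callers depend on the contract above, not on any particular
    # implementation, AI-generated or otherwise.
    if not limiter.allow(client_id):
        return "429 Too Many Requests"
    return "200 OK"
```

The point isn’t the rate limiter; it’s that the judgement – what the system must and must not do – is written down before any code is generated.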

2. From output to intent, value, and impact

What we heard: AI makes it cheaper and easier to build almost anything. That lowers the cost of experimentation but raises the risk of building the wrong things faster. As Barbara explained, without a strong bias toward impact over output, increased speed just creates a larger backlog of features nobody uses.

“If you understand your users deeply enough to create experiences that genuinely delight them - that’s great product taste.”

Why it matters: When building becomes cheap, judgement becomes the bottleneck. Speed without judgement leads to bloated products, wasted effort, and missed opportunities. Opportunity cost doesn’t disappear just because code is easier to generate; it just becomes easier to ignore.

Teams without product taste optimise for feasibility. Teams with it optimise for meaning.

What to do: Double down on problem selection.

  • Make user understanding the gate for what gets built
  • Use AI to prototype and validate ideas early, not to justify shipping more
  • Treat business impact – not output volume – as the measure of success

3. From knowing tools to shared AI standards

What we heard: The biggest AI risk teams are running into isn’t bad output, it’s knowledge that doesn't travel. One person figures out how to use AI well. They move fast. They ship impressive things. And no one else really knows how it happened.

As Barbara put it: “If you can run quick experiments that help your team recognise good AI output from bad - that’s building shared taste.”

Why it matters: AI fluency doesn’t scale through individual heroics. When prompts, workflows, and decisions live in one person’s head, teams lose:

  • consistency
  • confidence
  • and the ability to reason about risk

That’s how AI turns into theatre: impressive demos, fragile systems, and no shared understanding of what “good” actually looks like.

What to do: Treat AI usage as a team capability, not a personal advantage. Fluency scales through shared mental models. That means:

  • Sharing workflows, not just outcomes
  • Making AI decisions visible in PRs, docs, and reviews (one lightweight check is sketched after this list)
  • Building lightweight team norms like: When did AI help here? When did it hurt?
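As one possible shape for such a norm, here is a lightweight Python sketch of a CI step that fails when a PR description carries no AI-usage note. The “AI usage” heading and the stdin wiring are assumptions for illustration, not a prescribed OfferZen workflow.

```python
import re
import sys

# The "AI usage" heading is an invented convention for this sketch;
# adapt it to whatever PR template your team already uses.
AI_SECTION = re.compile(r"^#+\s*AI usage", re.IGNORECASE | re.MULTILINE)


def check_pr_body(body: str) -> bool:
    """Return True if the PR description discloses how AI was used."""
    return bool(AI_SECTION.search(body))


if __name__ == "__main__":
    if not check_pr_body(sys.stdin.read()):
        print("Missing 'AI usage' section, e.g. 'AI drafted the migration; "
              "I rewrote the rollback logic by hand.'")
        sys.exit(1)
```

The check itself is trivial; the value is the habit it enforces – AI decisions become part of the reviewable record instead of living in one person’s head.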

4. AI amplifies fundamentals, it doesn’t replace them

What we heard: AI increases the surface area of what engineers can produce, but it doesn’t change who understands the system. Strong engineers use AI to move faster because they know what to ask for, what to reject, and what to take responsibility for.

As Barbara put it: “AI can make a good developer great, but it can’t make a bad developer good.”

Why it matters: When output becomes cheap, it stops being a reliable signal. A candidate can generate working code, but can they explain why it’s structured that way, where it might break, or what trade-offs were made? 

Teams that equate AI-assisted output with competence end up hiring for speed and paying for it later in brittle systems, security gaps, and slow, risky change. Responsibility doesn’t disappear just because the code arrived quickly.

What to do: Shift the bar from “Can you produce code?” to “Can you own a system?”

  • Ask engineers to read AI-generated code – like the snippet after this list – and explain its behaviour, risks, and alternatives
  • Evaluate how they reason about edge cases, failure modes, and long-term maintainability
  • Develop judgement: knowing when to trust AI, when to push back, and when to rewrite
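For illustration, consider a hypothetical snippet of the kind such a review exercise might use – invented here, not from the session. It runs, and a happy-path test would pass, yet it is wrong in ways a careful reader should be able to name.

```python
def merge_settings(overrides, defaults={}):  # mutable default argument:
    for key, value in overrides.items():     # the same dict is shared across
        defaults[key] = value                # calls, so settings leak between
    return defaults                          # unrelated callers


def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:   # swallows every failure, including a mistyped path;
        return ""       # callers cannot tell "empty config" from "broken read"
```

An engineer who can spot the shared mutable default and the silent exception swallowing – and propose alternatives – is demonstrating exactly the ownership this section describes.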

5. Speed is table stakes, taste is the new differentiator

What we heard: Almost every team is shipping faster with AI but not every team is moving forward. You can see it in how work feels day to day. Some teams move quickly and stay steady. Others ship fast, then slow down immediately after.

Barbara pointed to recent survey data as a signal of tension. While 55% of tech leaders describe AI’s capabilities as overhyped, her reading is that this scepticism isn’t about the tools themselves but about teams still being early in the learning curve, before the real gains start to show.

Why it matters: When everyone can ship faster, speed stops being a competitive advantage. If velocity is the only thing you optimise for, AI just helps you create more output, not better systems. The cost shows up later in brittle code, confused users, and teams that hesitate every time something needs to change.

What to do: Stop treating speed as a proxy for excellence. Start defining “great” by questions like: 

  • Did we solve the right problem?
  • Can the whole team explain why this system works the way it does?
  • Did this change make life easier for users or just add more surface area?

🦖 This shift is reflected in OfferZen’s new AI Fluency section in candidate profiles, giving developers space to demonstrate not just output, but judgement, workflows, and responsible AI use.

What an AI fluent engineer is (and what they’re not)

What this looks like, dimension by dimension – what an AI fluent engineer is (✅) versus is not (❌):

  • Core mindset: ✅ Product-minded, focused on solving the right problems and owning outcomes end-to-end. ❌ Output-driven, focused on shipping tasks or features faster.
  • Relationship with AI: ✅ Uses AI as a collaborator to design logic, systems, and workflows. ❌ Uses AI as an autocomplete tool or authority that replaces thinking.
  • Judgement and taste: ✅ Applies judgement to guide AI towards clean architecture, good UX, and clear trade-offs. ❌ Equates “it works” with “it’s good enough to ship”.
  • Ownership and responsibility: ✅ Takes full responsibility for the correctness, safety, and maintainability of AI-assisted output. ❌ Deflects responsibility by blaming “what the AI generated”.
  • Team impact: ✅ Builds shared AI standards, workflows, and learning loops across the team. ❌ Acts as a lone “AI power user” whose impact does not scale.

How should developers communicate AI fluency when job hunting in the AI era?

Developers should show AI fluency in interviews and project portfolios through the AI fluency taste test, not by listing tools or claiming productivity.

Barbara’s point was that AI fluency is hard to see from outputs alone, because AI makes it easy to generate polished results. Instead, developers need to show their taste: how they reason, decide, and take responsibility when working with AI.

She described the taste test as focusing on three things:

  • How you use AI
    Not prompt engineering or tool breadth, but whether you can explain your workflow. How do you instruct AI? How do you supervise it? Where do you lean on it, and where do you deliberately not?
  • What you’ve built – and why
    Can you show recent examples of things you’ve shipped and explain why they were worth building? What problem did you choose to solve? What impact did it have? Speed alone isn’t the signal – intent and impact are.
  • Responsible usage and ownership
    Can you explain how you thought about safety, risk, correctness, and maintainability? Using AI doesn’t remove responsibility. You still own the output.

Key stats about AI fluency highlighted in the session

  • 97% of teams are already using AI in some form: Adoption is effectively universal but usage quality varies widely.
  • 55% of tech leaders say AI’s current abilities are overhyped: A strong signal that many teams haven’t yet overcome the learning curve needed to unlock real productivity gains.
  • 37% of leaders say it’s harder to get headcount approval: AI is increasing pressure to do more with smaller teams, raising the bar for individual impact.
  • 70% of tech leaders say retention keeps them up at night: As “great engineering” becomes harder to define, keeping top talent has become a strategic risk.

Want more data? For more trends and leadership insights shaping engineering teams today, download the Engineering Leadership Report.

Audience questions

The session sparked a lot of thoughtful questions. Below, Jason, Stephen, and Barbara answer some of the most common questions engineers and leaders asked during the discussion.

1. In what ways does AI influence your decision making as a tech lead?

Jason: AI doesn't make decisions for me; I treat its outputs with scepticism and always evaluate them critically within the context of our systems and company.

That said, it's a useful tool for stress-testing my thinking, whether by surfacing blind spots or by simulating perspectives from different disciplines to find gaps in my reasoning.

2. How do you get the individual productivity gains to team level?

Stephen: Build enabling assets rather than relying on individuals – things like shared Cursor rules, style guidelines, and conventions that everyone contributes to and benefits from.

The goal is to design a system of building inside the organisation, where roles are clear and judgement is shared, so the gains don’t live in one person’s head.

3. Should engineers be held accountable for AI decisions in automated systems they design?

Jason: Absolutely. Engineers are accountable for every line checked into the codebase, regardless of whether they wrote it by hand or an AI agent generated it.

You can't hold an LLM responsible. We choose to use these tools for efficiency, but that doesn't shift the ownership of what we ship.

4. How do you mitigate bias or unintended consequences in AI systems?

Jason: It starts with awareness, understanding how these models predict text, how training data shapes outputs, and where bias can creep in.

From there, it's about building strong evaluation and observability systems that explicitly test for bias, especially when outputs reach real users. You can't just trust that your prompt is good enough - you need to verify it.

5. How do you help developers identify hallucinations in AI coding assistants, as well as avoid the infinite loop that using AI coding assistance can sometimes cause?

Jason: You need strong verification systems, e.g. linting, type checking, and tests at various levels. These are your automated safety net against hallucinations.

Verification systems alone aren't enough though – developers need to stay actively engaged, critically evaluating plans and outputs as they're generated rather than letting agents run entirely unsupervised. When something drifts off track, you step in and course-correct early.
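As a minimal sketch of that safety net, assuming pytest – the apply_discount function here is an invented stand-in for any AI-assisted code – behavioural tests pin down what must stay true, so a hallucinated or subtly wrong edit fails fast instead of reaching review unnoticed.

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Stand-in for AI-assisted code; imagine an agent maintains this."""
    if percent < 0:
        raise ValueError("percent must be non-negative")
    return max(price * (1 - percent / 100), 0.0)


def test_discount_never_goes_below_zero():
    # Whatever an AI rewrite does internally, this invariant must hold.
    assert apply_discount(price=10.0, percent=150) == 0.0


def test_rejects_negative_percent():
    with pytest.raises(ValueError):
        apply_discount(price=10.0, percent=-5)
```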

6. At OfferZen, how do you evaluate AI fluency when matching engineers to roles, and do you see it becoming a baseline expectation rather than a differentiator?

Jason: It's absolutely becoming a baseline expectation. We're redesigning our interview processes to present multi-dimensional problems that require different levels of thinking rather than isolated syntax questions an AI can solve in one prompt.

We want to see that candidates can effectively leverage AI at the right levels to solve real business problems and meet customer needs.

7. For junior developers, how do you recommend using GenAI to speed up development without sacrificing fundamentals, especially when working in complex codebases where AI itself can be wrong?

Jason: First, the codebase needs to be well set up for AI, e.g. strong testing, consistent patterns, and good guardrails. It's unfair to expect juniors to succeed with AI in a poorly structured environment.

From there, the key is to never let code pass that you don't understand; slow down, ask questions, and challenge the AI's output, because the real velocity gains only unlock once you've built genuine mastery.

Your team needs to support this too - you'll still be faster than writing everything manually, but the exponential gains come with time and experience.

8. How important is certification where AI is concerned going forward? Do I need that AWS AI certification or Google AI certification?

Jason: The certification itself likely won't carry much weight in hiring. It's not going to be treated as a reliable signal of AI fluency. That said, if structured learning helps you build real skills, it can still be valuable.

But your time is better spent developing demonstrable AI fluency through actual use rather than chasing the credential.

9. Whilst AI makes a ton of sense for logic, how do we get accurate execution of crafted UX that truly represents the objective rather than the 'trend' that AI has implemented?

Barbara: Practically speaking, having a solid design system and principles comes first – that should be your foundation. Beyond that, AI doesn’t invent UX intent – it reflects the intent you give it.

When AI produces generic or trend-driven UX, it’s usually because the objective wasn’t specified clearly enough. The job of an AI fluent team isn’t to let AI design your UX, it’s to articulate the UX goals, the non-negotiables, and the user outcome so precisely that AI can help explore and implement it alongside you.

Want the full picture?

If you’re navigating AI adoption and want a grounded, experience-led view of what great engineering looks like now, the full session goes deeper into the trade-offs, tensions, and real team examples behind these insights. 👉 Watch the online event
