True Expertise vs. Showmanship

Too often, people masquerade as experts when, at best, they are merely enthusiasts. The situation worsens when they market harmful advice, bearing no personal consequences for the damage it might cause.

Picture yourself at a conference. A speaker with an impressive title steps onto the stage and shares strategies that promise to revolutionize your business, domain, or product. You’re intrigued, but something feels off. It becomes clear that the person promoting these practices likely hasn’t done the work themselves, nor have they closely observed others doing it. This scenario is increasingly common, which makes it difficult to distinguish practical, proven advice from nonsense—especially when you aren’t an expert yourself.

As Nassim Taleb aptly says, “Don’t tell me what you think; tell me what’s in your portfolio.”

This principle applies universally, even to those with decades of experience in a particular industry. These individuals are particularly dangerous because they deceive both themselves and those around them into believing they are “the real experts” simply because of the years they’ve spent in a field or the formal accolades they’ve accumulated—often more a product of collective effort than personal prowess.

In reality, it’s not just about the quantity of time spent on a specific problem. The 10,000-hour rule is a total myth in unpredictable and complex domains. The quality of time invested is crucial, and it hinges on thoughtful exploration. Most so-called experts wander aimlessly due to a lack of self-awareness and a complete absence of productive reflection on their actions. In other words, these individuals lack the individual readiness and the feedback mechanisms necessary to gain true expertise.

In 2009, two of the leading figures in expertise research — Gary Klein (naturalistic decision making, or NDM) and Daniel Kahneman (heuristics and biases) — got together to hash it out in what Kahneman called an “adversarial collaboration” paper.

In the paper, Kahneman and Klein concluded that whether or not experience alone reliably predicted exceptional performance hinged on the characteristics of the domain in question. Essentially, they agreed that Robin Hogarth had gotten it right: in what Hogarth called “kind” learning environments, experience led to predictable improvement; in “wicked” learning environments, it did not.

You can think of kind learning environments as situations that are governed by stable rules and repetitive patterns; feedback is quick and accurate, and work next year will look like work last year. Think golf or chess: a ball or piece is moved according to rules and within defined boundaries; a consequence is quickly apparent; and similar challenges occur repeatedly.

In wicked learning environments, rules may change, if there are rules at all; patterns don’t just repeat; feedback could be absent, delayed, or inaccurate; all sorts of complicated human dynamics might be involved, and work next year may not look like work last year.

—David Epstein, “Kind” and “Wicked” Learning Environments

Real experts are those who have been deep in the trenches, with a track record of both successes and failures, offering advice that’s specific and highly contextual. They are the ones who’ve fought in the arena, experienced both wins and losses, and have managed to come back, learn, and ultimately succeed. They persevered and created feedback loops that helped them get better. While this doesn’t necessarily make them good teachers of their expertise, it undoubtedly makes them great experts.

A prime example of this is the recent insight from Hilary Gridley that I read in one of the latest editions of Lenny’s Newsletter. Today, everyone claims to be an AI expert, and when it comes to building AI products, they often force-feed generic advice to product managers or newly minted entrepreneurs.

Hilary’s commentary stands in stark contrast. It’s detailed, deep, and highly relevant for anyone building an AI product. You can instantly tell that it comes from someone who has been in the trenches, not a passive observer.

I spend a lot of time thinking about adoption curve segmentation—identifying who adopts a new product quickly and who does not, and what distinguishes these groups. Historically, I’d focus on understanding the value a new product provides to people with different functional needs.

AI changes this dynamic because the most meaningful segmentation often depends on attitudes toward the technology itself: AI embracers versus AI skeptics. Many have discussed the AI ‘phantom PMF’ phenomenon, where novelty-driven acquisition leads to a steep churn cliff, but the reverse is also true.

I frequently speak with customers who reject AI products that meet their needs simply because they don’t trust or want to embrace AI. With the right messaging and onboarding, these skeptics can become superusers! But they behave very differently from AI embracers. I sketched this out in the chart below.

As a result, I’ve had to rethink our user testing approach from the ground up. When testing AI products, I focus on the following:

Longitudinal validation: Are we testing for a long enough period to understand how engagement changes once the novelty wears off?

High-touch testing: Are we staying close enough to users to understand how their attitudes, which drive engagement patterns, change daily? We’re experimenting with user Slack groups instead of traditional surveys and one-on-one qualitative interviews.

Attitudinal segmentation: Are we including both AI embracers and AI skeptics in our early testing groups? Crucially, are we segmenting them carefully to avoid averaging out their engagement and creating ‘tepid tea’—a product that satisfies no one?

—Hilary Gridley, Head of Core App Product at WHOOP (via Lenny’s Newsletter)


In a world full of showmen, find those with battle scars. True expertise doesn’t need a flashy pitch—it’s proven in battle. Choose wisely, and trust those who’ve been there and done that.

