AI & Technology

Mitchell Wakefield Says AI Is Changing UX Research, But Not the Part That Matters Most

The UX researcher and product advisor believes AI can accelerate research work, but the companies that win will still be the ones that know how to interpret human behavior.

Mitchell Wakefield has no interest in pretending AI will not change UX research. It already has.

Transcription is faster. Tagging is easier. Teams can sort through interviews, feedback, survey results, and product data in a fraction of the time it used to take. For researchers who have spent years buried in recordings, notes, and synthesis boards, some of that shift is welcome.

Wakefield understands the relief. He has spent more than a decade inside products where user behavior was not just a design concern. It shaped access to healthcare, trust in gambling platforms, loyalty programs, and high-stakes consumer decisions.

“I am very bullish on AI taking away the mechanical work,” Wakefield says. “No researcher should romanticize transcription or manual tagging. The problem starts when teams confuse faster synthesis with better judgment.”

That distinction is becoming one of the most important questions in product work. AI can summarize what users say. It can surface patterns. It can make research look more efficient. But Wakefield believes the real value of UX research has never been the transcript. It is the interpretation.

A tool can tell a team that users struggled during onboarding. It cannot always tell them whether the problem is trust, fear, lack of confidence, accessibility, confusion, or a business model quietly working against the user. It cannot sit in the tension between what a company wants users to do and what those users actually need.

“Research is not just collecting evidence,” Wakefield says. “It is knowing what the evidence means, what it does not mean, and what decision the team should make next.”

That view comes from work in unusually demanding product environments. At NHS Digital, Wakefield led research on NHS Login, a product used by 28 million people to access health information. In that environment, accessibility was not a brand value or a design preference. It was a legal and moral requirement. His work contributed to a peer-reviewed ACM publication with Professor Helen Petrie, one of the leading accessibility researchers in the field.

NHS Login also shaped the way he thinks about AI and research today. A product used at population scale cannot be designed around the average user. It has to account for people with disabilities, limited confidence, different levels of digital literacy, and urgent health needs.

“Public health technology teaches you humility very quickly,” Wakefield says. “You are not designing for a neat persona on a slide. You are designing for millions of people, including the people most likely to be failed by careless decisions.”

That is where he believes AI-generated research can become dangerous. The output may sound clean. The themes may look plausible. The recommendation may appear confident. But confidence is not the same as validity.

Wakefield is not worried about AI helping researchers move faster. He is worried about product teams using AI to skip the work of seeing users clearly.

“Bad research has always been possible,” he says. “AI just makes it easier to produce bad research that looks polished.”

After NHS Digital, Wakefield joined CloudKitchens as the first UX research hire, reporting to Travis Kalanick. He built the function from zero, which required a different kind of research maturity. There was no large established research practice to inherit. He had to make research useful inside a fast-moving company where insight only mattered if it changed decisions.

That experience pushed him closer to product strategy. He did not want research to become a report that traveled around a company without changing anything. He wanted insight to shape roadmaps, growth choices, and what teams actually built.

“The researchers who survive this next phase are not the ones who only produce findings,” Wakefield says. “They are the ones who can translate human behavior into product decisions.”

His time at FanDuel added another layer. Wakefield ran loyalty, rewards, and competitive research across Casino and Sportsbook, areas where engagement is not an unqualified good. Gambling products sit in a behaviorally complex category. They involve money, risk, habit, trust, incentives, and regulation. Small design choices can shape what users do next.

That made Wakefield more alert to the difference between fixing friction and shaping behavior.

“Most people think UX research is about making things easy to use,” he says. “That is too shallow. The real question is what the product is encouraging people to do, and whether that is defensible.”

AI-native product design raises that same question in a new way. Products built around model outputs do not behave like traditional apps. They are probabilistic. They can be wrong. They require correction. They ask users to trust something that may be confident and inaccurate at the same time.

Wakefield sees this as a new discipline forming inside product design. Teams have to decide how uncertainty appears in the interface, how users correct the system, how the product earns trust, and how it avoids making users feel foolish when the model gets something wrong.

“The old playbook does not fully apply,” Wakefield says. “A traditional app usually gives you a fixed path. AI-native products give you an answer that may need negotiation. That changes the design problem completely.”

Today, Wakefield works as Growth Lead, Product and Research Advisor at Golden Egg Media, helping companies make decisions about what to build, who to build for, how to position products, and how to grow without leaning on manipulative patterns. His a16z Scout work keeps him close to founders building in AI, fintech, and consumer software.

That mix gives him a rare view of the field. He sees established operators trying to use AI to move faster. He also sees founders building products where AI is not an add-on, but the core experience.

His conclusion is direct. AI will make research faster. It may make product teams more efficient. It may remove the work nobody enjoyed doing manually. But it will not replace the judgment that turns evidence into responsible product strategy.

“The work is moving higher in the stack,” Wakefield says. “Researchers need to understand strategy, growth, ethics, and execution. AI can help with the raw material. It cannot decide what kind of product you should become.”

For Wakefield, the future of UX research is not smaller because of AI. It is more demanding. The tools are getting faster. The products are getting stranger. The consequences of design choices are becoming harder to ignore.

The companies that win will not be the ones that generate the quickest insight. They will be the ones that know which insight is true, which one matters, and what to build because of it.

For more information on Mitchell Wakefield, visit his LinkedIn.

Author

  • Tom Allen

    Founder of The AI Journal. I like to write about AI and emerging technologies to inform people how they are changing our world for the better.
