I have spent most of my career in consumer insights, and I cannot remember a more exciting time to be in this field.
Over the last eighteen months, something genuinely new has happened. Not the incremental kind of change where existing tools get a bit faster or a bit cheaper. The foundational kind, where entirely new research methodologies become possible that simply did not exist before. We are living through a moment where the things insights teams have always wished they could do (depth at scale, real honesty from participants, true global qual in days instead of months) are becoming real.
The challenge is clarity. "AI in research" now covers at least six distinct things. Insights leaders must identify which of these are ready for real work and which remain more promise than proof.
That is what this article is for. I want to lay out what exists, what each approach is genuinely good at, where each one still has room to grow, and how to evaluate whether a particular solution deserves a place in your research portfolio.
The Technology Shift That Made All of This Possible
Before we get into the specific approaches, it helps to understand what actually changed. Because none of what follows was possible even three years ago.
The shift has three layers.
The first is large language models that can sustain coherent, contextual conversation. The generation of language models that emerged from 2023 onward can maintain conversational context across an hour-long exchange, remember what was said twenty minutes earlier, and adapt their approach to the emotional and intellectual trajectory of the conversation. This is what separates a genuine AI-moderated interview from a chatbot with a question list.
The second is multimodal understanding. AI systems can now process video, audio, and text simultaneously. An AI can watch a participant's facial expressions while listening to their words, notice that the two are telling different stories, and probe accordingly. It can observe a participant's kitchen while they demonstrate their morning routine and ask about the things they did not mention: the workaround, the product shoved behind other products, and the moment of friction they have normalised. This is what makes AI ethnography and AI shop-alongs possible. Without multimodal capability, these would just be remote video calls with transcription.
The third is scale without degradation. These systems can conduct hundreds of simultaneous conversations without the quality of any individual conversation declining. Every participant gets the same depth of attention. Every interview probes as deeply as the first one did. Human moderators are extraordinary professionals, but they fatigue. Their twelfth interview of the day is not as sharp as their first. AI does not have that constraint.
These three shifts together created a new generation of research methodologies. Let me walk through each one, starting with the one I believe changes the most.
AI-Moderated Video In-Depth Interviews: The Category That Changes the Most
This is where the technology shift matters most, and where most insights leaders have the most questions.
What it is
An AI conducts a genuine, one-on-one, qualitative interview with a real human participant over video, in real time. Not a text chat. Not a survey with follow-ups. A face-to-face conversation that runs thirty to ninety minutes, where the AI asks questions, listens, watches, builds rapport, follows conversational threads, and probes based on what the participant says and shows.
What it does well
Three things that I think matter enormously.
- First, depth with scalability. Traditionally, depth and scale in qualitative research are inversely linked. You could do six in-depth interviews or six hundred surveys, but not six hundred in-depth interviews. AI moderation removes this limit. Every conversation receives equal attention, and all can happen at once.
- Second, honesty. This is the finding that has surprised me most, and the one that has been most consistently replicated. People disclose more to AI interviewers than to human researchers. The social science literature on computer-mediated disclosure goes back decades, but the scale of the effect in research settings is striking. Participants routinely share information they say they would never have revealed to a human moderator. In categories where social desirability bias is strongest (diet, health, spending, personal care), this is not a marginal improvement. It is access to a layer of truth that traditional methods structurally cannot reach.
- Third, linguistic scale. A single AI system can conduct interviews natively in dozens of languages simultaneously. Not translating on the fly, but actually conversing with cultural fluency in each participant's language. For any team running research across multiple markets, this eliminates the most painful operational bottleneck in global qual: coordinating moderators in every market, then waiting for sequential fieldwork and translation chains to complete.
Where the technology is still maturing
The very best human moderators, the ones who have spent twenty years reading people, still bring something distinctive to certain types of studies. Deeply sensitive topics that require therapeutic-level rapport. Highly ambiguous emotional territory where the "right" probe depends on a kind of intuition that is hard to codify.
But here is the reframe I would offer: the practical comparison is not "AI vs the world's best moderator on their best day." It is "AI vs what the team would actually do for this study in practice." In practice, most qualitative studies are moderated by people who are good, not world-class. In practice, most global studies compromise on moderator quality in at least some markets. When measured against a realistic alternative, AI moderation holds up very well and, in many scenarios, produces better data.
How to evaluate a solution in this category
This is where evaluation matters most, because the quality range across platforms is wide. Here is what I would look for.
- Try it as a participant. Every platform should let you experience its AI interviewer firsthand. Give it surface-level answers and see if it probes past them. Contradict yourself and see if it notices. Show emotion and see if it responds appropriately. The participant experience is the single best predictor of data quality.
- Ask about conversational reasoning. Is the AI following a rigid script with conditional branching, or is it reasoning about the full context of the conversation so far? The difference matters enormously for depth. A branching script cannot follow an unexpected thread. Genuine conversational reasoning can.
- Ask about multimodal capability. Does the system only process what the participant says, or does it also analyse facial expressions, vocal tone, and visual context? Multimodal analysis is what separates a video interview platform from an audio chatbot with a camera on.
- Ask about participant engagement metrics. How long do participants actually stay in conversation? What do participants say about the experience? A platform where the average interview length is fifteen minutes is doing something fundamentally different from one where participants routinely stay engaged for sixty to ninety minutes.
- Ask what it cannot do. Any company that claims its AI interviewer is the right solution for every research question is not being straightforward. The best platforms have clear boundaries, and the best companies are transparent about them.
AI Ethnography: Observing What People Cannot Articulate
This is the category I am personally most excited about, because it addresses the hardest problem in consumer research.
What it is
AI-moderated observational research where the participant shares their real environment (their kitchen, their bathroom, their car, the store aisle) via camera, and the AI does not just passively record what it sees. It actively observes. It notices product placement, usage routines, workarounds, and friction points. It asks follow-up questions based on what it sees, not just what the participant says.
Why this matters so much
The fundamental limitation of interview-based research has always been its reliance on self-report. People tell you what they remember, what they think is important, and what they are willing to share. But the richest insights often come from what people do not mention: the awkward grip on a package they have adapted to. The product they reach past to get to the one they actually use. The moment of hesitation at the shelf that reveals a decision is not as automatic as they claim.
Traditional ethnography captures these moments through trained human observers. It is also expensive, slow, geographically constrained, and very difficult to scale. A team of human ethnographers can observe perhaps twenty to thirty households in a study. AI ethnography can observe hundreds simultaneously, with consistent analytical attention across every session.
What it does well
It captures behaviour in context, at scale. The AI notices things the participant would not think to mention and asks about them. It detects nonverbal cues (microexpressions, hesitations, emotional shifts) and correlates them with the participant's actions and words at that moment. For innovation teams, packaging designers, and anyone trying to understand how people actually interact with products in their real lives, this opens up a category of insight that was never accessible at this scale before.
How to evaluate a solution here
- Request a demonstration using real observational data. Ask the provider to show you an example where the AI noticed something the participant did not mention and probed on it. This is the test of genuine observational capability versus a platform that just records video and analyses the transcript.
- Ask about the multimodal analysis specifically. Does the AI analyse what it sees in the environment, or only what the participant says? Can it identify products, read labels, and notice spatial organisation? The gap between "video interview with a camera on" and "AI-powered ethnographic observation" is defined by what the system does with the visual data.
AI Shop-Alongs: The Store Shelf, Without the Logistics
What it is
A specific application of AI ethnography focused on the purchase journey. The participant shops (either in a real store or using stimulus materials) while the AI observes their behaviour, asks questions in real time about their decision-making process, and probes on the moments that matter: what caught their attention, what they picked up and put back, and what they did not even notice.
Why this is different from asking someone about shopping after the fact
When you ask someone in an interview how they choose a laundry detergent, you get a post-rationalised narrative. They tell you a story that makes them sound like a rational decision-maker. When you watch them actually make the choice and ask about it in the moment, you get something much closer to the truth: the role of shelf position, the influence of a promotional tag they barely processed consciously, the brand they considered for half a second before defaulting to their usual.
What it does well
It captures the purchase journey as it happens rather than as it is remembered. The AI can probe in real time ("I noticed you picked that up and put it back. What happened there?"), producing data that post-purchase interviews simply cannot replicate.
How to evaluate a solution here
- Ask about stimulus flexibility. Can the platform work with live in-store shopping (participant in a real store), virtual shelf simulations, and product-at-home evaluations? The best platforms support all three, because different research questions require different contexts.
- Ask about real-time probing capability. Is the AI observing what the participant does during the shopping task and asking about specific behaviours, or is it just asking general questions about the shopping experience afterwards? The value of a shop-along is in the "along": the simultaneous observation and conversation.
Synthetic Personas: Useful Scaffolding, Not a Replacement for Humans
What it is
Large language models trained on consumer data to simulate how target audiences would respond to questions, concepts, or stimuli. No real humans are involved. The AI is the respondent.
This category has gained significant momentum. NIQ launched a synthetic panel product. Qualtrics announced synthetic consumer panels. Multiple startups are building here. The economics are compelling: results in minutes at a fraction of the cost of human research.
What it does well
Narrowing options before you invest in human research. If you have thirty concept ideas and need to get to five, synthetic screening can save weeks and significant budget. Generating directional hypotheses about how different segments might react. Running rapid "what if" scenarios during ideation workshops. The speed and cost advantages are real, and the use case is clear.
Where it has boundaries
Synthetic consumers do not have real experiences. They do not have real kitchens, real morning routines, or real relationships with brands. They cannot surprise you with a need you did not know existed, because they generate predictions based on patterns in historical data. They cannot tell you about the embarrassing workaround they use when no one is watching, because they do not have a life to be embarrassed about.
The validation question is the one that matters most, and it is the one where the industry has the least consensus. How closely do synthetic responses match what real humans would say? As of early 2026, the honest answer is: reasonably well for broad attitudinal questions, less reliably for specific behavioural details, emotional nuance, and the kind of unexpected revelations that make qualitative research valuable.
How to evaluate a solution here
- Ask for benchmarking data. How does the provider compare synthetic responses to real human responses, and how transparent are they about where the accuracy holds up and where it does not?
- Use it for the right job. Synthetic panels are excellent scaffolding. They are not the building. Use them to screen and prioritise. Rely on conversations with real humans for the decisions where the cost of being wrong is high.
AI-Enhanced Surveys: Incremental Progress, Not a New Paradigm
What it is
Survey platforms that replace traditional grid-and-scale formats with conversational interfaces. A respondent types or speaks their answer, and the AI asks a follow-up probe before moving to the next topic. The experience feels more like texting with someone than filling out a form.
What it does well
The data quality improvement over traditional online surveys is documented. Conversational formats tend to produce longer, more detailed open-ended responses. Completion rates are often higher. For studies that need quantitative reach with somewhat richer verbatim texture, this is a reasonable incremental upgrade.
Where it sits in the research toolkit
An AI-enhanced survey is still fundamentally a structured data collection instrument. The AI augments survey capabilities but does not conduct interviews. It cannot build rapport with participants over forty-five minutes or pursue unexpected threads into unanticipated territory. A smarter survey remains just a survey. The open-ended responses are richer, and that is genuinely useful, but the methodology does not fundamentally change what kind of insight you can extract. For teams that already run strong survey programmes, this is a worthwhile improvement. For teams looking to fundamentally shift the depth or quality of their consumer understanding, the transformation is happening elsewhere.
How to evaluate a solution here
Ask to see the probing logic. Is the AI asking one generic follow-up ("Can you tell me more?"), or is it conducting genuine multi-turn probing that adapts to the content of the response? Run a test yourself. Respond with a superficial answer and see whether the AI pushes you past it.
Chat With Your Data: Making Qualitative Knowledge Cumulative
What it is
An interface that sits on top of your qualitative data (across hundreds or thousands of interviews, across projects) and lets you ask questions in natural language. Instead of reading transcripts or relying on pre-coded themes, you ask: "What are the top barriers to switching brands in the laundry category among Gen Z respondents?" and the system retrieves relevant verbatims, identifies patterns, and synthesises an answer grounded in what participants actually said.
What it does well
It makes qualitative research cumulative rather than episodic. Product managers, brand teams, and executives who would never read two hundred transcripts can now interrogate the data directly. It also enables cross-project pattern recognition, connecting insights from a concept test with findings from an earlier usage study, surfacing patterns no single analyst would catch across that volume of data.
Where human skill still matters
A good chat-with-data interface can find patterns and retrieve evidence. Interpreting what those patterns mean for your business, connecting them to strategy, challenging assumptions, and building a narrative that moves a leadership team remain profoundly human skills. This tool makes good researchers faster. It does not replace the need for good researchers.
How to evaluate a solution here
- Ask about source attribution. When the system gives you an answer, can you trace it back to specific participants, specific moments in specific interviews, with the actual video clip? A good system shows its evidence. A poor one gives you a summary with no way to verify it.
- Ask about cross-project capability. Can you query across multiple studies, or only within a single project? The real value emerges when the system connects patterns across your entire qualitative data estate.
- Test it with a hard question. Do not just ask an easy topline question. Ask something nuanced and see whether the system returns genuinely relevant evidence or generic theme summaries.
Matching Method to Research Problem
If you take one thing from this article, take this: the right question is never "should we use AI for our research?" It is "for this specific study, which approach produces a better outcome than what we would otherwise do?"
- Need real qualitative depth at a scale or speed that traditional fieldwork cannot deliver? AI-moderated video interviews are the methodology that has changed most dramatically.
- Need to understand how people actually behave with products in their real environments? AI ethnography gives you observational richness at a scale that traditional ethnography never could.
- Need to understand the purchase decision as it happens? AI shop-alongs capture the journey in the moment rather than in retrospect.
- Screening and prioritising early ideas? Synthetic personas can save weeks.
- Quantitative study that needs somewhat richer open-ended responses? AI-enhanced surveys are a reasonable improvement over traditional online surveys.
- Need to make your existing qualitative data more accessible and more connected? Conversational data interfaces let your entire organisation learn from what your participants have already told you.
And when you need to understand group dynamics (how social influence shapes opinions, how people co-create ideas together), a well-run focus group with a skilled moderator is still the right tool. AI has not replaced that, and it is not trying to.
The most sophisticated research programmes in 2026 are not choosing one AI tool. They are building a stack. The question is which tools belong in yours.
The Evaluation Playbook: Ten Questions for Any AI Research Vendor
I want to close with the practical list I wish someone had given me when I first started evaluating these tools. These apply across categories.
1. Can I experience the tool as a participant before committing? The participant experience is the single best predictor of data quality.
2. Is the AI reasoning about the full context of the conversation, or following a script with conditional branching?
3. What does the system do with visual and vocal data, beyond the transcript?
4. How long do participants actually stay engaged, and what do they say about the experience afterwards?
5. Can the vendor show me an example where the AI noticed something the participant did not mention and probed on it?
6. What contexts and stimulus types does the platform support, and which research questions is it built for?
7. What benchmarking or validation data exists, and how transparent is the vendor about where accuracy holds up and where it does not?
8. Can every answer and theme be traced back to specific participants and specific moments, ideally with the video evidence?
9. Does the system work across multiple studies, or only within a single project?
10. What does the vendor say their tool cannot do? Any company claiming its solution fits every research question is not being straightforward.
Where This Leaves Us
This is, I think, the best time in a generation to be leading an insights function. The tools are finally catching up to the ambition. Depth at scale. Real honesty from participants. Global research in days. Observational richness without the logistics of sending ethnographers to thirty cities.
The insights leaders who will matter most in 2026 are those who understand what each tool actually does, match it to the right problem, and use the time AI saves them to do the work only humans can do: interpret, connect, challenge, and advise.
AI gives you better raw material to work with. It does not replace the judgment. And for those of us who have spent our careers believing that deep consumer understanding is the most undervalued asset in business, that is an incredibly exciting place to be.
I am building Echovane to be the platform that makes AI-moderated qualitative research trustworthy enough for the decisions that matter most. If you run an insights function and want to see what AI-moderated video interviews, AI ethnography, and AI shop-alongs look like in practice, I would welcome the conversation. Book a demo here.