Experiment
Nothing will ever beat talking to human beings. Not even AI. But lately I've been exploring new ways to bring user voices into the room, especially when the room is full of stakeholders. And not just as quotes on a slide. I mean really bringing users to life.
What I tried…
Give real users a synthetic voice using their actual verbatim
I used direct user verbatim to generate synthetic HeyGen and Hedra avatars. Each quote was associated with a persona (e.g., "Power Buyer with Trade-in" or "Experienced Dealer Agent").
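The workflow above is essentially a mapping from persona to script. Here is a minimal sketch of that step in Python; the persona names are illustrative, and `submit_to_avatar_service` is a hypothetical placeholder, not HeyGen's or Hedra's actual API:

```python
from collections import defaultdict
from typing import NamedTuple

class Verbatim(NamedTuple):
    persona: str  # e.g., "Power Buyer with Trade-in"
    quote: str    # the participant's exact words

def group_by_persona(verbatims: list[Verbatim]) -> dict[str, list[str]]:
    """Bucket raw quotes under their persona so each avatar reads one script."""
    scripts: dict[str, list[str]] = defaultdict(list)
    for v in verbatims:
        scripts[v.persona].append(v.quote)
    return dict(scripts)

# Hypothetical placeholder: swap in a real HeyGen or Hedra client call here.
def submit_to_avatar_service(persona: str, script: str) -> None:
    print(f"[{persona}] {len(script)} chars queued for avatar generation")

if __name__ == "__main__":
    quotes = [
        Verbatim("Power Buyer with Trade-in",
                 "I just want to know my trade-in value up front."),
        Verbatim("Experienced Dealer Agent",
                 "Half my day is re-keying the same customer info."),
    ]
    for persona, lines in group_by_persona(quotes).items():
        submit_to_avatar_service(persona, " ".join(lines))
```

Keeping the quotes verbatim in the data structure (rather than paraphrasing at this stage) makes it easy to audit later that the avatar said only what the user actually said.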

The upside: why I'll probably keep using it
Emotional resonance
Synthetic faces + voices transform cold research artifacts into something closer to a documentary. It's harder to ignore a frustrated "user" who looks you in the eye.
Speed and control
Unlike editing real session video (which is time-consuming, requires redaction, and is often restricted by NDA), HeyGen let me compose polished, on-brand user expressions quickly.
Accessibility and consistency
Everyone hears the same tone, pacing, and clarity. That helps stakeholders focus on what is said, not how a participant stumbled through saying it.
The pitfalls: why it still gives me pause
The uncanny valley is real
Avatars that are almost, but not quite, human can unsettle viewers. A slightly off gaze or stilted delivery pulls attention away from the insight and toward the artifice.
Risks of oversimplification
A 15-second avatar clip can flatten a complex insight into a sound bite. And if you're not careful, it starts to feel like you're scripting users, not representing them.
Ethical ambiguity
Even with user consent and paraphrased language, there's a weirdness to creating a "face" for someone who never appeared on camera. It's respectful… but also performative. It's a line I'm still defining.
User verbatim communicated using HeyGen
User verbatim communicated using Hedra