Experiment
Online vehicle trade-in flows are notorious for long, multi-step forms that feel tedious, fragile, and easy to mess up. This experiment explores whether a conversational digital avatar could act as an alternative interaction model by collecting the same information through natural, voice-driven conversation instead of traditional form fields.
This is not a production system and not a usability study with metrics. It’s an exploratory interaction experiment designed to probe feasibility, user comfort, and overall experience quality.
Curiosities
- Can a conversational avatar reasonably replace a long, structured trade-in form?
- Does voice-based, natural language input feel easier or more intuitive than step-by-step data entry?
- How natural does an AI persona feel during extended interaction?
- What new UX opportunities (and risks) emerge when forms disappear?
The setup
For this experiment, I integrated a digital avatar into a high-fidelity prototype for the Vehicle Passport concept. The goal was not completion speed or accuracy, but experiential realism: something believable enough to gather meaningful feedback. The prototype included:
- A custom AI persona named Sarah Mitchell
- A conversational interface using voice and natural language
- A simulated trade-in flow
- No backend automation, no scoring, no optimization—just interaction
Experiment goals
Integrate Anam.ai digital avatar into high-fidelity MVP Vehicle Passport prototype and gather user feedback and insights.
- Create “Sarah Mitchell,” an AI digital assistant using Anam’s JavaScript SDK
- Vibe-code the integration of the Sarah Mitchell digital avatar into the Vehicle Passport native iOS 26 experience as a simple simulation for rapid user feedback
- Leverage the native iOS high-fidelity MVP prototype to conduct unmoderated cognitive walkthroughs and complementary mixed-method user research, learning whether users prefer multi-step forms or natural-language, voice-driven data capture
Test drive
Meet Sarah Mitchell, a digital assistant designed to guide users through a conversational trade-in experience. This demo is intentionally simple. The goal is to evaluate how the interaction feels, not to complete a transaction.
Anam Lab
For thousands of years, people have connected through natural conversation. Anam.ai is carrying that tradition forward by building AI Personas that make technology feel alive—intuitive, adaptive, and uniquely tailored to you.

Implementation notes
Anam avatars can be integrated into digital experiences in two ways: a simple, straightforward <iframe> embed, or Anam’s more robust JavaScript SDK. A brief illustrative sketch follows each list below.
Simple embed
- Simple, straightforward implementation method
- Use Anam Lab to build your avatar
- Bring your own custom LLM
- Chat session transcripts downloadable as .txt files via Anam Lab
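To make the embed path concrete, here is a minimal TypeScript sketch that mounts a persona iframe into a page. The embed URL, element ID, and attributes are placeholders for illustration, not Anam’s actual embed format; the real snippet comes from Anam Lab when you publish a persona.

```typescript
// Hypothetical embed URL — in practice, copy the snippet Anam Lab
// generates for your published persona.
const AVATAR_EMBED_URL = "https://lab.anam.ai/embed/YOUR_PERSONA_ID";

function mountAvatarEmbed(container: HTMLElement): HTMLIFrameElement {
  const frame = document.createElement("iframe");
  frame.src = AVATAR_EMBED_URL;
  frame.allow = "camera; microphone; autoplay"; // voice chat needs mic access
  frame.width = "400";
  frame.height = "600";
  frame.style.border = "none";
  container.appendChild(frame);
  return frame;
}

// "avatar-slot" is an assumed container element in the host page.
mountAvatarEmbed(document.getElementById("avatar-slot")!);
```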
Using the Anam JavaScript SDK
- More complex implementation
- Build a persona in Anam Lab
- Bring your own custom LLM
- Retrieve and work with chat session transcripts via the API
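For comparison, here is a sketch of what the SDK path might look like. The method names (createClient, streamToVideoElement) and the server-side token route are assumptions based on the SDK’s general shape, not a verified integration; confirm signatures against Anam’s current SDK documentation.

```typescript
// Package name and method names assumed; check Anam's docs.
import { createClient } from "@anam-ai/js-sdk";

async function startSarahSession(): Promise<void> {
  // Session tokens should be minted server-side so the API key never
  // ships to the browser. /api/anam-session is our own hypothetical route.
  const res = await fetch("/api/anam-session");
  const { sessionToken } = await res.json();

  const anamClient = createClient(sessionToken);

  // Attach the persona's live video/audio stream to an element in the page.
  // "sarah-video" is an assumed <video> element ID in the prototype.
  await anamClient.streamToVideoElement("sarah-video");
}

startSarahSession().catch(console.error);
```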
Observations
These are qualitative observations, not findings:
- The avatar felt surprisingly natural during extended conversations
- Facial expressions and gestures avoided the “uncanny valley” more often than expected
- Voice interaction encouraged more relaxed, less performative responses
- Integration into a native prototype was straightforward
- Conversational input surfaced richer, less structured data than forms typically allow
What comes next
Future iterations could explore:
- Transcribing conversations and extracting structured data (a naive sketch follows this list)
- Mapping conversational inputs to downstream systems
- Training domain-specific language models
- Testing this pattern beyond automotive contexts
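To make the first item concrete, here is a minimal, hypothetical sketch of pulling structured fields out of a session transcript. The TradeInDetails shape is invented, and the regex extraction is a deliberately naive stand-in for the LLM-based extraction a real pipeline would use.

```typescript
// Invented target shape for a trade-in record.
interface TradeInDetails {
  year?: number;
  mileage?: number;
}

// Naive keyword extraction as a stand-in for an LLM-based extractor.
function extractTradeInDetails(transcript: string): TradeInDetails {
  const details: TradeInDetails = {};

  // A four-digit year starting with 19 or 20.
  const yearMatch = transcript.match(/\b(19|20)\d{2}\b/);
  if (yearMatch) details.year = Number(yearMatch[0]);

  // A number (possibly with thousands separators) followed by "miles".
  const mileageMatch = transcript.match(/([\d,]+)\s*(miles|mi)\b/i);
  if (mileageMatch) details.mileage = Number(mileageMatch[1].replace(/,/g, ""));

  return details;
}

const transcript = "It's a 2019 Subaru Outback with about 42,000 miles on it.";
console.log(extractTradeInDetails(transcript));
// → { year: 2019, mileage: 42000 }
```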
Why this matters
This experiment isn’t about replacing forms tomorrow. It’s about questioning whether forms are still the best default interaction model for complex data entry in a world where conversation is finally viable again. If nothing else, it asks a simple question:
- If users can talk naturally, why do we keep making them fill out forms?