Experiment
Storyboards are a reliable way to align teams around scenarios and flows—but they’re often slow to produce and treated as disposable artifacts. This experiment explores whether modern AI tools can reduce that friction by generating character-consistent storyboard panels quickly enough to be useful in a real UX workflow.
The goal isn’t to replace narrative thinking or design judgment, but to see where AI can meaningfully augment the process—and where it still falls short.

Image created using Weavy.ai + Google Nano Banana Pro.
Experiment goals
The goals of this experiment were to:
- Explore whether AI tools can meaningfully accelerate storyboard creation for UX scenarios
- Test approaches for maintaining character and environment consistency across panels
- Compare dedicated storyboarding platforms versus general-purpose AI models
- Identify workflows that reduce manual effort without sacrificing narrative clarity
Evaluation criteria (qualitative)
Each tool was evaluated using the same qualitative lens:
- Setup time – How quickly could I get to a usable first panel?
- Prompting effort – How much precision was required to avoid garbage output?
- Character consistency – Could the same character remain recognizable across panels?
- Scene continuity – Did environments stay coherent from frame to frame?
- Editability – How much control did I have over refining results?
- Reuse likelihood – Would I realistically use this again in a design workflow?
Storyboarder.ai
Storyboarder.ai is a dedicated platform aimed primarily at cinematic and narrative storyboard artists. It offers a robust, structured environment for building traditional storyboard layouts with strong export and editing capabilities. It is best suited to designers or storytellers producing polished, presentation-grade storyboards, especially when narrative fidelity matters more than speed.
What worked…
- Excellent control over layout and panel sequencing
- Strong visual fidelity, especially with sketch-style rendering
- Support for uploaded character assets
- Multiple export formats suitable for presentation and handoff
What didn’t…
- Higher setup and learning curve for casual or exploratory use
- Felt optimized for long-form storytelling rather than lightweight UX scenarios
- Character consistency required deliberate asset management

Gemini Pro + Nano Banana
Using Gemini Pro with the Nano Banana image model surfaced the limits of chat-based image generation for sequential storytelling. This approach felt better suited to ideation and mood exploration than structured storyboarding. Without strong guardrails, outputs became visually inconsistent and narratively noisy after only a few iterations.
What worked…
- Extremely fast to get initial imagery
- Low friction for exploratory prompts
What didn’t…
- Character consistency degraded rapidly across panels
- Scene details became increasingly vague or contradictory
- Required heavy prompt micromanagement to maintain continuity

Weavy.ai + Gemini + Nano Banana Pro
I was most skeptical of this approach—and it ended up being the most effective.
Weavy’s node-based workflow, particularly the Prompt Concatenator, transformed vague prompts into structured, repeatable instructions that significantly improved output consistency.
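Weavy exposes this as a node graph, but the underlying idea is easy to sketch in plain Python: hold the character, environment, and style blocks constant and concatenate them with a per-panel action, so the only thing that varies between panels is the action. The character name, descriptions, and panel actions below are hypothetical illustrations, not Weavy's API or the exact prompts used in the experiment.

```python
# A minimal sketch of the prompt-concatenation idea, assuming a fixed
# character/environment/style block reused verbatim for every panel.
# All names and descriptions here are invented for illustration.

CHARACTER = (
    "Maya, a product designer in her 30s, short dark hair, "
    "round glasses, olive cardigan"
)

ENVIRONMENT = (
    "bright open-plan office, whiteboard wall in the background, "
    "soft morning light"
)

STYLE = "clean storyboard sketch style, single character in focus, 16:9 panel"


def build_panel_prompt(action: str) -> str:
    """Concatenate the fixed blocks with a panel-specific action."""
    return ", ".join([CHARACTER, ENVIRONMENT, action, STYLE])


panels = [
    "reviewing user feedback on her laptop",
    "sketching a flow on the whiteboard",
    "presenting the storyboard to two teammates",
]

for i, action in enumerate(panels, start=1):
    print(f"Panel {i}: {build_panel_prompt(action)}")
```

Because the character, environment, and style strings never change, each generated prompt differs only in the action clause, which is what kept characters and scenes recognizable from panel to panel.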
What worked…
- Intuitive node-based interface
- Strong character and environment consistency (~80–90% after tuning)
- High-quality image output suitable for UX artifacts
- Easy experimentation with different models and prompt strategies
What didn’t…
- Still required human judgment to maintain narrative flow
- Occasional drift in fine details between panels