UX/UI
Experimental Design
Vibe Coding

1:1 Visual Conversation

OVERVIEW
Bringing Conversations to Life
1:1 is an interactive platform that brings conversations to life. Rather than presenting them as mere transcripts, it visualizes them as vibrant, evolving forms.
Each conversation is recorded and analyzed using AI, resulting in ASCII-inspired blobs that dynamically shift with tone, rhythm, and emotion, creating a more immersive experience.
VISIT THE WEBSITE
How
Python Back-end
Built a Python back-end that pairs acoustic analysis with ChatGPT-powered emotion analysis
Front-end Design
Designed an engaging front-end using HTML, CSS, and JavaScript, incorporating ASCII visualizations with p5.js
Three Exploration Modes
Created Timeline, People, and Emotion views for different ways to explore conversations
Exploration Modes
Timeline View
Allows users to navigate through chronological dialogue
People View
Organizes conversations by speaker, providing context
Emotion View
Highlights dominant emotional patterns within the dialogue
Tools
Cursor
Acted as an AI-assisted coding partner that helped me write, debug, and refactor code in Python (analysis) and JavaScript/HTML/CSS (interface). Cursor bridged the gap between my design background and the technical build, allowing me to develop a complete working system.
ChatGPT
Integrated into the back-end as the analysis engine. It classified conversation segments into 92 emotion categories and identified cues like hesitation, humor, or emphasis.
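As a rough illustration, a per-segment classification call with the OpenAI Python client might look like the sketch below. The model name, prompt wording, and abbreviated emotion list are placeholders, not the project's actual configuration or its 92-category taxonomy.

```python
# Sketch of a per-segment classification call (placeholders throughout:
# model name, prompt wording, and the abbreviated emotion list stand in
# for the project's real configuration and 92-category taxonomy).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMOTIONS = ["joy", "curiosity", "hesitation", "nostalgia"]  # stand-in for 92 categories

def classify_segment(transcript: str) -> str:
    """Label one conversation segment with an emotion and conversational cues."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": ("Label the segment with one emotion from: "
                         + ", ".join(EMOTIONS)
                         + ". Also note cues such as hesitation, humor, or emphasis.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(classify_segment("Well... I guess it was kind of funny, actually."))
```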
p5.js
Developed ASCII-based blob visualizations in p5.js to explore how tone, emotion, and rhythm of conversations could be expressed through dynamic, responsive forms.
Figma
Designed the overall user flows, layouts, and early motion prototypes to test how conversations could be explored across Timeline, People, and Emotion views.
Design Process
1
Front-end Architecture
The front-end was designed to let users explore a conversation from different angles while keeping the interface simple and approachable. It was built with HTML, CSS, and JavaScript for the structure and interactions, and p5.js for the ASCII-based visualizations.

Visualization Layer
The ASCII blobs translate data into dynamic forms that react to the analysis parameters (a simplified rendering sketch follows this list):
Blobiness
Controls how organic or rigid each blob appears
Grid Density
Reflects the rhythm and pace of speech through tighter or looser character layouts
Charset Size
Increases or decreases the visual weight of characters depending on emphasis
Color
Maps emotions from the 92 categories to retro-inspired color palettes
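To make the mapping concrete: the production visuals are rendered in p5.js, but the terminal sketch below is a simplified Python analogue of how blobiness, grid density, and charset size could shape an ASCII form. The rim formula and character sets are illustrative assumptions, not the project's actual code.

```python
# Terminal analogue of the ASCII blobs (the production renderer is p5.js).
# The rim formula and charsets are illustrative, not the project's code.
import math
import random

CHARSETS = {1: ".:-", 2: ".:-=+*", 3: ".:-=+*#%@"}  # larger set = heavier characters

def draw_blob(blobiness: float, grid_density: int, charset_size: int) -> None:
    chars = CHARSETS[charset_size]
    random.seed(7)  # fixed seed so the wobble is stable between runs
    wobble = [random.uniform(-blobiness, blobiness) for _ in range(360)]
    for y in range(grid_density):
        row = []
        for x in range(grid_density * 2):  # terminal cells are ~2x taller than wide
            dx, dy = x / 2 - grid_density / 2, y - grid_density / 2
            angle = int(math.degrees(math.atan2(dy, dx))) % 360
            rim = grid_density / 2 * (0.75 + wobble[angle])  # blobiness deforms the rim
            dist = math.hypot(dx, dy)
            if dist <= rim:
                # heavier characters toward the center of the blob
                idx = min(int((1 - dist / rim) * len(chars)), len(chars) - 1)
                row.append(chars[idx])
            else:
                row.append(" ")
        print("".join(row))

draw_blob(blobiness=0.2, grid_density=16, charset_size=3)
```

Higher blobiness makes the rim wobble more (organic), a larger grid_density tightens the character layout, and a larger charset_size pulls in heavier glyphs.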
Interaction Principles
Clarity first:
navigation stays clean with minimal controls
Exploration through views:
users can switch between Timeline, People, and Emotion perspectives
Seamless link with analysis:
every segment shown visually is directly tied to its transcript and audio
Design Process
2
Back-end Architecture
The back-end was designed to turn raw conversations into structured, explorable data. It was built in Python and powered by ChatGPT for text and emotion analysis. A sketch of the pipeline's overall shape follows the workflow steps below.
Workflow
Input
The system takes an MP3 recording of a conversation
Segmentation
Audio is split into smaller parts (sentence or utterance level)
Acoustic Analysis
Each segment is measured for volume, rhythm, pauses, and speed
ChatGPT Analysis
Segments are labeled with emotions (from 92 categories) and conversational cues like hesitation, humor, or emphasis
Data Export
All results are saved into a structured JSON file that feeds directly into the front-end visualizations
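The sketch below shows one way this pipeline could be wired together in Python. All function bodies are stubs (the real segmentation, acoustic measurement, and ChatGPT call are not published in this case study); only the overall flow from MP3 to JSON mirrors the workflow above.

```python
# Shape of the back-end pipeline (all function bodies are stubs; the
# real segmentation, acoustic measurement, and ChatGPT call are not
# published in this case study).
import json

def segment_conversation(mp3_path: str) -> list[dict]:
    """Split the recording into utterance-level segments (stub).
    In practice, transcription and speaker diarization would run here."""
    return [{"speaker": "A", "text": "Well... I guess it was kind of funny.",
             "start": 0.0, "end": 3.2}]

def acoustic_features(segment: dict) -> dict:
    """Measure volume, rhythm, pauses, and speed for one segment (stub)."""
    return {"volume": 0.6, "grid_density": 0.4, "blur": 0.7}

def emotion_labels(segment: dict) -> dict:
    """Label the segment via ChatGPT (stub; see the classification sketch above)."""
    return {"emotions": ["hesitation"], "humor": 0.3, "shine": 0.1}

def run_pipeline(mp3_path: str, out_path: str) -> None:
    results = []
    for seg in segment_conversation(mp3_path):
        results.append({**seg,
                        **acoustic_features(seg),
                        **emotion_labels(seg),
                        "word_count": len(seg["text"].split())})
    with open(out_path, "w") as f:
        json.dump(results, f, indent=2)  # the JSON the front-end reads

run_pipeline("conversation.mp3", "conversation.json")
```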
Impact & Results
Generated per Segment
Each conversation segment is analyzed to extract multiple parameters that drive the visual representation; a sketch of one possible record shape follows the list.
Humor
Signals of laughter and playfulness
Blur
Hesitation or uncertainty in speech
Shine
Emphasis or excitement
Volume
Mapped from loudness levels
Grid Density
Rhythm and pacing of speech
Word Count
Length of the segment
Emotions
Selected from 92 defined categories
Blobiness
Degree of organic vs. rigid form
Charset Size
Scale of ASCII characters
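One guess at the per-segment record, assuming the nine parameters above map one-to-one onto fields (field names, types, and defaults are assumptions based on the list, not the project's schema):

```python
# One guess at the per-segment record; field names, types, and defaults
# are assumptions based on the parameter list above, not the real schema.
from dataclasses import dataclass, field

@dataclass
class Segment:
    humor: float          # signals of laughter and playfulness
    blur: float           # hesitation or uncertainty in speech
    shine: float          # emphasis or excitement
    volume: float         # mapped from loudness levels
    grid_density: float   # rhythm and pacing of speech
    word_count: int       # length of the segment
    emotions: list[str] = field(default_factory=list)  # from the 92 categories
    blobiness: float = 0.5  # degree of organic vs. rigid form
    charset_size: int = 2   # scale of ASCII characters

seg = Segment(humor=0.3, blur=0.7, shine=0.1, volume=0.6,
              grid_density=0.4, word_count=9, emotions=["hesitation"])
```

Because new fields can be appended without touching existing ones, a record like this also supports the scalability principle below.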
Design Principles
Transparency
Every visual element is grounded in real acoustic or emotional data
Granularity
Breaking conversations down at the sentence level makes emotions easier to track
Scalability
The framework allows new parameters or emotions to be added without redesigning the system