Designing a Multisensory Font Exploration Experience

Redesigned the Bungee Font Tester to translate visual typographic attributes into accessible, screen-reader-friendly interactions

UX Research

Accessibility Design

WCAG 2.2 Compliance

MY ROLE

UX Designer, Accessibility Specialist

TEAM

Gloria Yang (Me), Lan-Ting Ko, Smridhi Gupta, Simran Kaur, Nandita Malhotra

CLIENT

Cooper Hewitt Museum

TIMELINE

Apr - May 2025

TOOLS

Figma, Adobe Audition, VoiceOver

OVERVIEW

The Bungee Font Tester, part of the Cooper Hewitt Museum’s digital collection, was designed as a highly visual playground for experimenting with layered color, stacking, and bold typographic forms. However, the experience relied almost entirely on sight, making it difficult or impossible to use for people who depend on screen readers and auditory feedback.

I conducted a UX audit and accessibility evaluation of the existing tester to identify structural and sensory barriers. Based on these findings, I redesigned the interaction model by simplifying the interface and introducing a guided tutorial flow that mirrors assistive technology navigation patterns.

The final solution reframed the tester as a multisensory experience, translating visual richness into structured interaction and sound-based feedback, expanding access while preserving Bungee’s expressive personality.

Sneak peek of our final solution

What is Bungee?

Bungee, created by David Jonathan Ross and collected by the Cooper Hewitt Museum, is a layered display typeface inspired by neon signage and urban energy. Its dimensional stacking, color combinations, and rhythmic forms make it a celebration of visual experimentation.

The original font tester was built to showcase this visual vibrancy. Users could layer colors, apply shapes, and manipulate orientation to explore the typeface’s personality. But the experience assumed vision as the primary mode of interaction.

For a museum committed to equitable digital access, this presented a challenge.

Gif of the Current Bungee Font Tester

What's the problem our client is facing?

The tester’s interaction model depended on visual discovery and mouse-based manipulation. Core controls were hidden in accordion panels, color selection relied on gradient pickers, and no onboarding or contextual guidance existed for first-time users.

For users navigating through screen readers or keyboard input:

  • Controls were difficult to locate

  • Color selection was inaccessible

  • The interface lacked semantic structure

  • The experience conveyed structure but not personality

The central question became:

How might we translate Bungee’s visual expressiveness into an experience that is inclusive, intuitive, and meaningful beyond sight?

Understanding the needs of users with visual impairments who seek meaningful art experiences

Amina Rao
Age: 32
Location: Chicago, IL
Occupation: Museum Educator
Vision Status: Legally blind
Tech Use: iPhone with VoiceOver, AirPods, screen reader on laptop

"Art shouldn't stop being art just because I can't see it. I want to feel it in other ways."

Behaviors

  • Passionate about the arts—especially textile and digital installations

  • Attends virtual gallery talks, uses audio guides frequently

  • Feels left out of most visual-only exhibitions and online art platforms

  • Loves podcasts and audio storytelling as alternative formats

Needs

  • A way to understand visual art through narrative and sound

  • Experiences that go beyond just alt text or surface-level descriptions

  • A way to interact with art without strain or confusion

Understanding the Accessibility Gap

To understand the accessibility gap, I examined:

  • Current WCAG alignment

  • Keyboard operability

  • Screen reader flow and semantic structure

  • Interaction friction in the control panel

  • Comparative multisensory design examples

We also grounded the work in broader context:

  • Over 51.9 million Americans experience vision loss

  • Over 307,000 are completely blind

  • Most digital creative tools fail to translate visual richness into non-visual forms

Key Insights from Research

  • Users who rely on screen readers often receive minimal information about display typography

  • Sound characteristics (pitch, timbre, effects) can effectively convey visual weight, color, and layering

  • Audio feedback must balance being informative against being overwhelming

  • Users prefer having control over audio intensity and feedback frequency

What happens when visual richness meets barriers?

Through structured evaluation, I identified four primary breakdowns:

FINDING 1

Hidden controls limited access

The accordion-style control panel required multiple clicks and was not fully keyboard operable. Important customization features were concealed, increasing cognitive load for all users and blocking access for non-mouse users.

FINDING 2

Color picker depended on vision

The gradient-based color picker lacked keyboard navigation and non-visual descriptions. Color selection was effectively inaccessible through screen readers.

FINDING 3

Visual-only experience

The tester relied exclusively on visual layering and stacking to communicate form and energy. Screen readers could describe text structure but not the expressive qualities that define Bungee.

FINDING 4

No onboarding for new users

There was no contextual guidance or tutorial. Users unfamiliar with typographic layering or assistive navigation had no clear entry point.

The experience was expressive visually but structurally fragile.

Translating insights into our final solutions

With a clearer understanding of user barriers, I redesigned the tester around multisensory accessibility and structural clarity.

  1. Guided interaction & simplified interface

I removed the accordion layout and made primary controls visible upfront, including font orientation, color themes, background shapes, and layer settings. This reduced interaction depth and improved keyboard flow.

To support first-time users, I designed a guided tutorial that mirrors screen reader navigation patterns. The tutorial introduces each control step by step, providing clarity and building confidence before experimentation begins.

This transformed the tester from a hidden control panel into a structured exploration tool.
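As a rough illustration of how a tutorial like this could mirror screen reader patterns, the sketch below formats each step as a "Step X of Y" announcement, the kind of string an `aria-live="polite"` region would surface to VoiceOver. The step wording and the `announceStep` helper are hypothetical, not taken from the actual project.

```typescript
// Hypothetical sketch: guided tutorial steps announced in a
// screen-reader-friendly "Step X of Y" format. Step text and
// function names are illustrative, not from the real tester.

const tutorialSteps: string[] = [
  "Orientation: choose horizontal or vertical text with the arrow keys.",
  "Color theme: pick a named theme instead of a gradient picker.",
  "Background shape: cycle through banner shapes with Enter.",
  "Layers: add or remove Bungee's stacked layers, then press Play.",
];

function announceStep(index: number): string {
  // In the browser, this string would be written into an
  // aria-live="polite" region so assistive technology reads it aloud.
  const step = tutorialSteps[index];
  return `Step ${index + 1} of ${tutorialSteps.length}. ${step}`;
}
```

Announcing position ("Step 1 of 4") before content is a common assistive-technology convention: it tells users how far along they are before they hear the instruction itself.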

  2. Translating sight into sound

To preserve Bungee’s personality beyond sight, we introduced an audio interaction layer.

When users adjust font characteristics and press Play:

  • Bold weights are expressed through deeper bass tones

  • Vertical stacking is conveyed through ascending notes

  • Layer density influences rhythmic intensity

This bridges visual structure and auditory expression, allowing users to feel the typography’s mood and rhythm rather than simply hear a literal description. Sound becomes expressive, not instructional.
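The three mappings above can be sketched as a pure function from font settings to sound parameters. This is a minimal illustration under assumed values (the `FontSettings` and `ToneSpec` shapes, the specific frequencies, and the perfect-fifth interval are my own placeholders, not the project's actual audio design); in the browser, the resulting spec would drive something like Web Audio `OscillatorNode`s.

```typescript
// Hypothetical sketch of the sight-to-sound mapping.
// All names and numeric choices here are illustrative assumptions.

interface FontSettings {
  weight: "regular" | "bold"; // bolder weight -> deeper bass tone
  stackLevels: number;        // vertical stacking -> ascending notes
  layerCount: number;         // layer density -> rhythmic intensity
}

interface ToneSpec {
  baseFrequencyHz: number;     // lower for bold weights
  noteFrequenciesHz: number[]; // one ascending note per stack level
  beatsPerSecond: number;      // faster pulse for denser layering
}

function toneFor(settings: FontSettings): ToneSpec {
  // Bold weights are expressed through deeper bass tones (A2 vs A3).
  const baseFrequencyHz = settings.weight === "bold" ? 110 : 220;

  // Vertical stacking is conveyed through ascending notes:
  // each stack level raises the pitch by a perfect fifth (x1.5).
  const noteFrequenciesHz: number[] = [];
  for (let i = 0; i < settings.stackLevels; i++) {
    noteFrequenciesHz.push(baseFrequencyHz * Math.pow(1.5, i));
  }

  // Layer density influences rhythmic intensity: more layers, faster beat.
  const beatsPerSecond = 1 + settings.layerCount;

  return { baseFrequencyHz, noteFrequenciesHz, beatsPerSecond };
}
```

Keeping the mapping as a pure function separates the expressive decisions (which qualities map to which sounds) from audio playback, which makes the mapping itself easy to tune and test.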

Why does this approach matter?

The redesigned tester shifts from a visually dependent demo to an inclusive multisensory experience.

The solution:

  • Improves keyboard operability

  • Clarifies control discoverability

  • Reduces cognitive load

  • Introduces structured onboarding

  • Expands expressive access beyond sight

More importantly, it reframes accessibility from compliance to creative expansion. Rather than simplifying Bungee, the redesign preserves its bold personality while expanding who can meaningfully engage with it.

For the Cooper Hewitt Museum, this aligns digital experimentation with its broader mission of equitable design access.

let's get in touch!

© 2025 Designed by Gloria Yang
