FontContext · Shipped 2026

The Context-Aware
Font Editor for Figma

FontContext live preview

Role

Product Designer &
Frontend Engineer

Timeline

Jan 2026 (1 week)

Team

Self Initiated
(Waterloo Figma Campus Leader)

Skills

Product Design
Figma Plugin Dev
TypeScript
UX Research

What if you could preview your actual design content across 1,000+ fonts without ever leaving your text selection?

Most designers suffer through a "scroll and guess" workflow. You select a text box, scroll through a tiny dropdown for 20 minutes, and wait for Figma to piece together fonts one by one. I built FontContext to bridge the gap between discovery and application, allowing designers to see their actual canvas context rendered across the entire Google Fonts library instantly.

FontContext is a live type tester that turns an administrative hurdle into a creative win.

1. Selection Synchronization

The plugin acts as a listener, not just a viewer. When you select a text layer on your canvas, FontContext instantly pulls that exact raw text string into the interface. This eliminates the copy-paste tax. Your design context follows you.
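The listener pattern above can be sketched in TypeScript. The `figma.on("selectionchange")` event and `figma.currentPage.selection` are Figma's real plugin API; the message shape (`{ type, text }`) is my own naming, not necessarily what FontContext ships. The text extraction is factored into a pure function so the wiring stays thin:

```typescript
// Minimal shape of a selected node, enough for text extraction.
interface NodeLike {
  type: string;
  characters?: string;
}

// Return the raw text of the first selected text layer, or null if none.
function extractSelectedText(selection: NodeLike[]): string | null {
  const textNode = selection.find((n) => n.type === "TEXT");
  return textNode?.characters ?? null;
}

// Plugin-side wiring (runs only inside Figma's sandbox, so commented out):
//
// figma.on("selectionchange", () => {
//   const text = extractSelectedText([...figma.currentPage.selection]);
//   if (text !== null) {
//     figma.ui.postMessage({ type: "sync-text", text });
//   }
// });
```

The UI iframe then listens for `sync-text` messages and swaps its preview string, which is what makes the context "follow you" without a copy-paste step.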

Before/After comparison: Search Mode showing font names vs Preview Mode showing text rendered in each font

2. The Live Preview Engine

I replaced the standard search behavior with a dual-mode engine. Toggle the "Eye" icon and the list transforms from a standard directory into a live creative stage, where every font in the library renders your synced text.
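Rendering the whole library means loading font faces on demand. A plausible sketch, assuming fonts are fetched in batches from Google Fonts: the `css2` endpoint, `family`, and `display` parameters are Google's real CSS API, but the batching strategy is my assumption, not FontContext's confirmed implementation.

```typescript
// Build a Google Fonts css2 stylesheet URL for a batch of family names.
// Spaces in family names become "+" per the css2 URL format.
function googleFontsUrl(families: string[]): string {
  const params = families
    .map((f) => "family=" + f.trim().replace(/ /g, "+"))
    .join("&");
  return `https://fonts.googleapis.com/css2?${params}&display=swap`;
}

// In the plugin's UI iframe, each batch becomes a <link> tag:
//
// const link = document.createElement("link");
// link.rel = "stylesheet";
// link.href = googleFontsUrl(["Playfair Display", "Inter"]);
// document.head.appendChild(link);
```

`display=swap` matters here: the preview list stays readable in a fallback face while each web font streams in, instead of flashing invisible text.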

3. Standardized Favoriting

Once FontContext is launched, you can scroll through the library and curate your own type foundry. Hover over any font and click the heart icon to save it, instantly adding it to your personal collection. Your saved fonts are accessible via the heart tab, which displays a live count badge showing exactly how many fonts you've curated.
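The favoriting logic reduces to a pure toggle plus a count. A sketch, assuming persistence goes through `figma.clientStorage` (Figma's real per-user storage API); the `"favorites"` key name is hypothetical:

```typescript
// Add the family if absent, remove it if present.
function toggleFavorite(favorites: string[], family: string): string[] {
  return favorites.includes(family)
    ? favorites.filter((f) => f !== family)
    : [...favorites, family];
}

// Plugin side (commented out; needs the Figma sandbox):
//
// await figma.clientStorage.setAsync("favorites", favorites);
// const saved: string[] =
//   (await figma.clientStorage.getAsync("favorites")) ?? [];

const favs = toggleFavorite(toggleFavorite([], "Inter"), "Lora");
// favs.length is what drives the live count badge on the heart tab
```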

4. Adaptive Workspace Layouts

Designers work in different environments. FontContext includes a responsive layout engine that toggles between Vertical (List) and Horizontal (Grid) views, ensuring long-form headlines are never clipped.

The Pain Was Always There

3+ years running design workshops, judging hackathons, mentoring at Figma events. Hundreds of first-time Figma users.

Every session, the same scene: someone opens the font dropdown. A thousand names scroll by. They pick one, apply it, squint, undo. Pick another. Undo. 10-30 minutes gone.

After hundreds of these sessions, one frustration came up more than any other:

"I'm trying to envision my brand in this font. But I can't see it. I'm just guessing."

Design students. PMs making decks. Founders building their first landing page. The same struggle.

Workshop photo 1 Waterloo Velocity MVP Hacks
Workshop photo 2 Waterloo Velocity MVP Hacks
Workshop photo 3 Waterloo Velocity MVP Hacks

Validating the Observation

Mapping what designers need versus what the tool provides.

Recall vs Recognition

The native picker forces recall. You guess what Playfair Display looks like, apply it, then see if you're wrong. What people actually need is recognition: the ability to see their text in a font before committing to it.

I'd watched this struggle hundreds of times. But I wasn't just observing. Every time I picked a font for my own projects, I felt the same friction.

How might we create a type testing experience that is both contextual and performant, without leaving Figma?

I Built First

I'm a designer, not an engineer. But I didn't wait for one.

I had a hunch that live preview was the answer. But a hunch means nothing until you build it and feel it. The problem was, I didn't know if I could even render the fonts. So I opened VS Code that night and started small: a bare-bones, light-mode MVP built to answer one question: can I render a user's text live across multiple fonts inside a Figma plugin?

That was the technical doubt. If that didn't work, nothing else mattered.

The MVP worked. Ugly UI. Slow. But functional: Google Fonts loaded, and typing characters into the input bar previewed them live. It worked after the first three attempts.

Early MVP

Now I had something real to react to, not a wireframe, not a mockup, but an actual interface I could use and feel.

I built the first MVP of the interface myself, with the limited frontend knowledge I had from past experience. Where I got stuck, and leaned on Claude Code and an engineer friend for tips, was the deeper stuff: injecting Google Fonts into Figma's iframe sandbox, wiring selection sync, and connecting cleanly to the plugin API.

The integration logic (selection changes, mode switching, UI in sync) was mine; AI filled in the low-level patterns I didn't know.

Before shipping, I had an engineer friend do a mentor-style code review at several stages: edge cases, a sanity check on patterns, and a quick pass so I wasn't shipping something that only worked on my machine. AI compressed my learning loop. It didn't replace the need to understand what I was building, or to treat the codebase as something that should hold up over time.

What Already Exists (And Why It's Broken)

Looking around while building.

The Two-Bar Problem

A real UX challenge with three failed attempts.

The core UX challenge: search and preview are different intents, but both involve typing into a field.

Font Preview fragmented this completely. I watched five people use it for the first time. All five clicked the wrong input field first. Three of them said "wait, which one do I type in?" One person tried typing in both fields to see what would happen. Another person just gave up and closed the plugin.

That's not a design opinion. That's a usability failure.

Hello World preview and search in one input

I tried fixing this three ways.

Two labeled inputs

"Search fonts" on top, "Preview text" below. Same affordance, same action.

Users asked which one to type in. Ambiguous affordances (Don Norman). Good UI makes the next action obvious; this didn't. Unnecessary cognitive load.

Two side-by-side inputs

Search on the left, preview dropdown on the right. User still sees two text inputs side by side.

Same confusion as stacking them. Two inputs, same affordance. Still taxing mental load.

One input, eye toggle

One field. Type a font name to search; type a phrase to preview. The system infers intent.

Eye icon overrides when the guess is wrong. One input, two modes, zero extra choices.

Of the three options, the third made the most sense: one field that adapts whether you type a font name or a phrase to preview.
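One plausible version of the intent heuristic (the rule FontContext actually ships may differ): if the query prefix-matches any family name, treat it as a search; otherwise treat it as preview text. The eye toggle overrides the guess either way.

```typescript
type Mode = "search" | "preview";

// Infer whether the user is searching for a font or typing preview text.
// eyeOn is the explicit "Eye" toggle, which always wins.
function inferMode(query: string, families: string[], eyeOn: boolean): Mode {
  if (eyeOn) return "preview";
  const q = query.trim().toLowerCase();
  if (q === "") return "search"; // empty input: show the directory
  const matchesAFont = families.some((f) => f.toLowerCase().startsWith(q));
  return matchesAFont ? "search" : "preview";
}
```

The key property is that the user never chooses a mode up front; the system guesses, and the single toggle is the escape hatch when the guess is wrong.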

For the icon in the input I debated between a pencil and an eye. Pencil would mean "edit," but I wasn't building an editor; I was building a previewer. Eye means "visibility," "show/hide." That's exactly what preview mode does. It's the same icon Figma uses in the layers panel, the same icon Photoshop uses for layer visibility. Users already know what it means. Pattern matching.

Eye won.

I tested the final version with four people. Nobody asked how it worked. They just typed and it did what they expected.

One input. Two modes. Zero choice overload.

Who This Is Actually For

Not just designers. Anyone who touches text in Figma.

I initially thought I was building for workshop students, beginners who didn't know font names yet, people early in their design journey.

Then I tested with twelve people across different skill levels.

Junior Designers (4 people)

Didn't know what fonts existed. They needed to browse, discover, see options rendered. Recognition over recall was everything: they couldn't remember what fonts looked like because they'd never used them before. One junior told me: "I'm trying to envision my brand in this font, but I can't see it. So I'm just guessing."

Mid-Level Designers (4 people)

Already had favorites. They wanted to search fast, curate a personal library, build their own type foundry over time. One of them, Sarah, a product designer at a startup, said: "I usually know which fonts I want. But I still have to scroll through Figma's list to find them. And sometimes I'm just exploring, and I don't want to commit by typing a name into the search bar. This feels... less risky. I can just browse."

Engineers (2 people)

Were updating UI copy. They didn't care about typography theory. One engineer told me: "I don't care about typography. I just need something that doesn't look terrible so I can move on." They just needed to pick something decent and ship.

PMs and Founders (2 people)

Surprised me most. One PM said: "I have thirty minutes. I just need a font that doesn't look terrible. I don't want to become a typography expert."

That reframed everything.

This Isn't a "Junior Designer Problem"

Some might say "font selection is a junior designer problem." That misses the point.

High frequency, real friction = not a junior problem.

Every designer, whether junior, senior, or staff, picks fonts. Multiple times per project. For years. The friction compounds. A junior designer wastes 30 minutes because they don't know font names yet. A senior designer wastes 10 minutes because they're exploring outside their usual favorites. An engineer wastes 5 minutes because they just need something decent to ship.

The problem scales with everyone. The solution needed to work for everyone.

Scoping as Taste

Font picker, not typography editor.

Coming Soon

Micro Interactions

Every micro interaction came from using the tool or watching someone hesitate.

Coming Soon

The Moment It Clicked

When the tool disappears into the workflow.

There was a specific moment I knew the design was right.

Sarah, a medtech startup founder new to design, helped me A/B test iterations throughout the process. A few weeks after launch, I ran into her working on a brand book for her startup. FontContext open, scrolling through serifs, hearting options, building her personal library. I asked if she remembered when picking fonts was hard. She looked confused. "What do you mean? You just preview and pick." The tool disappeared into her workflow. That's the point.

That's the taste moment. When the interface stops being designed and starts being inevitable. When you can't imagine moving anything without breaking the whole thing. You can't plan for this. You can't force it. You just stay in the work long enough until it happens.

Final UI: Dark Mode

FontContext dark mode interface

Final UI: Light Mode

FontContext light mode interface

Try FontContext

Live on Figma Community

What V1 Doesn't Have

Version one ships lean, but that means trade-offs.

What I Took Away

The Best Design Is Invisible

When Sarah couldn't remember font picking being hard, I knew the design worked. Good design doesn't announce itself. It disappears into the workflow. The tool becomes transparent. The task becomes effortless.

AI Lowered the Barrier, But Taste Still Matters

Claude taught me code patterns I didn't know. Virtual scrolling. CDN injection. Rate limit handling. But AI can't tell me where to put the heart icon or when 120ms feels right. That's taste. Anyone can generate code. Not everyone knows what to build.
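The virtual-scrolling pattern mentioned above, reduced to its core: only the rows inside (and just beyond) the viewport get rendered, so a 1,000-font list stays fast. Row height and overscan values here are illustrative, not FontContext's actual numbers.

```typescript
// Compute which rows of a fixed-height list should be rendered,
// given the current scroll position. "overscan" pre-renders a few
// rows above and below the viewport to hide pop-in while scrolling.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  total: number,
  overscan = 3
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    total,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { start, end };
}
```

Everything outside `[start, end)` is left as empty space of the right height, so the scrollbar still behaves as if all rows existed.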

Think in Systems

FontContext is a font picker, not a typography editor. Knowing what not to build is harder than knowing what to build. Every decision, from the eye toggle to the heart placement, was tested against one question: would this scale across a design system?