FontContext · Shipped 2026

The Context-Aware
Font Editor for Figma

Role

Product Designer &
Frontend Engineer

Timeline

Jan 2026 (1 week)

Team

Self-Initiated
(Waterloo Figma Campus Leader)

Skills

Product Design
Figma Plugin Dev
TypeScript
UX Research

What if you could preview your actual design content across 1,000+ fonts without ever leaving your text selection?

Most designers suffer through a "scroll and guess" workflow. You select a text box, scroll through a tiny dropdown for 20 minutes, and apply fonts one by one just to see how each looks. I built FontContext to bridge the gap between discovery and application, allowing designers to see their actual canvas context rendered across the entire Google Fonts library instantly.

FontContext is a live type tester that turns an administrative hurdle into a creative win.

1. Selection Synchronization

The plugin acts as a listener, not just a viewer. When you select a text layer on your canvas, FontContext instantly pulls that exact raw text string into the interface. This eliminates the "copy-paste" tax. Your design context follows you.
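
Under the hood, this leans on the Figma plugin API's selection listener. A minimal sketch of the main-thread side, assuming @figma/plugin-typings; the "selection-text" message name and window size are illustrative, not FontContext's actual protocol:

// code.ts (plugin main thread): minimal selection-sync sketch.
figma.showUI(__html__, { width: 360, height: 480 });

function syncSelection(): void {
  // Find the first text layer in the current selection, if any.
  const textNode = figma.currentPage.selection.find(
    (node): node is TextNode => node.type === 'TEXT'
  );

  // Forward the raw text string (or null) to the plugin's UI iframe.
  figma.ui.postMessage({
    type: 'selection-text',
    text: textNode ? textNode.characters : null,
  });
}

figma.on('selectionchange', syncSelection);
syncSelection(); // also sync whatever is already selected when the plugin opens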

[Before/After comparison: Search Mode showing font names vs. Preview Mode showing text rendered in each font]

2. The Live Preview Engine

I replaced the standard search behavior with a dual-mode engine. Toggle the "Eye" icon and the list transforms from a standard directory into a live creative stage where every font in the library renders your synced text.
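
Conceptually, the toggle just changes how each row renders. A rough sketch of the UI-side renderer; the class name and fallback stack are assumptions:

// ui.ts (plugin iframe): dual-mode row renderer, illustrative only.
type Mode = 'search' | 'preview';

function renderFontRow(family: string, mode: Mode, syncedText: string): HTMLDivElement {
  const row = document.createElement('div');
  row.className = 'font-row';

  if (mode === 'search') {
    // Directory view: the font's name, set in the UI's default typeface.
    row.textContent = family;
  } else {
    // Preview view: the user's synced canvas text, set in this font.
    row.textContent = syncedText || family;
    row.style.fontFamily = `'${family}', sans-serif`;
  }
  return row;
}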

3. Standardized Favoriting

Once FontContext is launched, you can scroll through the library and curate your own type foundry. Hover over any font and click the heart icon to save it, instantly adding it to your personal collection. Your saved fonts are accessible via the heart tab, which displays a live count badge showing exactly how many fonts you've curated.
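
Favorites survive between sessions via figma.clientStorage. A sketch of that main-thread helper, assuming a "favorite-fonts" key and an illustrative message shape:

// code.ts: toggle a favorite and persist it across sessions.
const FAVORITES_KEY = 'favorite-fonts'; // assumed key name

async function toggleFavorite(family: string): Promise<string[]> {
  const saved: string[] = (await figma.clientStorage.getAsync(FAVORITES_KEY)) ?? [];
  const next = saved.includes(family)
    ? saved.filter((f) => f !== family) // un-heart
    : [...saved, family];               // heart

  await figma.clientStorage.setAsync(FAVORITES_KEY, next);

  // next.length drives the live count badge on the heart tab.
  figma.ui.postMessage({ type: 'favorites-updated', favorites: next });
  return next;
}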

4. Adaptive Workspace Layouts

Designers work in different environments. FontContext includes a responsive layout engine that toggles between Vertical (List) and Horizontal (Grid) views, ensuring longform headlines are never clipped.
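
The toggle itself is lightweight. One possible shape of it, where the element id, class names, sizes, and resize message are all assumptions:

// ui.ts: switch the list between vertical and horizontal layouts.
const list = document.getElementById('font-list') as HTMLDivElement;

function setLayout(layout: 'vertical' | 'horizontal'): void {
  list.classList.toggle('grid-view', layout === 'horizontal');
  list.classList.toggle('list-view', layout === 'vertical');

  // Ask the main thread (which calls figma.ui.resize) for a wider window in grid view.
  parent.postMessage(
    { pluginMessage: { type: 'resize', width: layout === 'horizontal' ? 640 : 360, height: 480 } },
    '*'
  );
}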

The Pain Was Always There

3+ years running design workshops, judging hackathons, mentoring at Figma events. Hundreds of first-time Figma users.

Every session, the same scene: someone opens the font dropdown. A thousand names scroll by. They pick one, apply it, squint, undo. Pick another. Undo. 10-30 minutes gone.

After hundreds of these sessions, one frustration came up more than any other:

"I'm trying to envision my brand in this font. But I can't see it. I'm just guessing."

Design students. PMs making decks. Founders building their first landing page. The same struggle.

[Workshop photos 1-3: Waterloo Velocity MVP Hacks]

Validating the Observation

Mapping what designers need versus what the tool provides.

Recall vs Recognition

The native picker forces recall. You guess what Playfair Display looks like, apply it, then see if you're wrong. What people actually need is recognition: the ability to see their text in a font before committing to it.

I'd watched this struggle hundreds of times. But I wasn't just observing. Every time I picked a font for my own projects, I felt the same friction.

How might we create a type testing experience that is both contextual and performant, without leaving Figma?

I Built First

I'm a designer, not an engineer. But I didn't wait for one.

I had a hunch that live preview was the answer. But a hunch means nothing until you build it and feel it. So I opened VS Code that night and started a light-mode MVP, bare bones, just to answer one question: can I render a user's text across multiple fonts inside a Figma plugin?

That was the technical risk. If that didn't work, nothing else mattered.

Problem was, I didn't know how to do it.

Early MVP

[Minimal white interface with unstyled font list, basic search bar, simple text input showing core functionality before design polish]

I'm a designer who's learned some coding, enough to build small tools and read documentation, but not to ship production-grade plugins entirely on my own.

For FontContext, I built the first section of the interface myself with the front-end skills I already had. Where I got stuck was everything deeper: injecting Google Fonts into Figma's iframe sandbox, wiring up selection sync, and connecting cleanly to the plugin API.
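
For reference, the first of those pieces, getting Google Fonts to render inside the plugin's UI iframe, comes down to injecting a stylesheet link, assuming the plugin manifest allows network access to fonts.googleapis.com. The helper name and batch below are illustrative:

// ui.ts: inject a Google Fonts stylesheet so previews render in real typefaces.
function injectGoogleFonts(families: string[]): void {
  const params = families
    .map((family) => `family=${encodeURIComponent(family).replace(/%20/g, '+')}`)
    .join('&');

  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = `https://fonts.googleapis.com/css2?${params}&display=swap`;
  document.head.appendChild(link);
}

// Example: load a small batch on demand rather than all 1,000+ families at once.
injectGoogleFonts(['Roboto', 'Playfair Display', 'Montserrat']);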

I treated AI as a coding assistant, not a teacher. I used Claude Code in my terminal to sanity-check approaches, generate example patterns, and explain concepts I didn't know yet.

I would ask targeted questions ("How do I inject fonts into an iframe?", "How do I virtualize this scrolling list without jank?"), integrate the suggestions, break them, then refine with follow-up questions until the behavior matched what I wanted.
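
The virtualization question, for instance, resolved into the usual windowing pattern: only render the rows currently in view. A sketch under assumed element ids and a fixed row height (the spacer element needs position: relative in CSS):

// ui.ts: windowed rendering of the font list.
const ROW_HEIGHT = 48;
const viewport = document.getElementById('font-scroll') as HTMLDivElement; // scrollable container
const spacer = document.getElementById('font-spacer') as HTMLDivElement;   // full-height child

function renderWindow(families: string[], renderRow: (family: string) => HTMLElement): void {
  const start = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  const count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 2; // small overscan
  const end = Math.min(start + count, families.length);

  spacer.style.height = `${families.length * ROW_HEIGHT}px`; // keep the scrollbar honest
  spacer.innerHTML = '';

  for (let i = start; i < end; i++) {
    const row = renderRow(families[i]);
    row.style.position = 'absolute';
    row.style.top = `${i * ROW_HEIGHT}px`;
    spacer.appendChild(row);
  }
}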

The integration logic (how the plugin listens to selection changes, switches modes, and keeps the UI in sync) was written by me, with AI filling in the low-level patterns I didn't know yet.
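
On the UI side, that integration is mostly a small message bridge. A sketch that mirrors the earlier main-thread snippet; message names and defaults are still illustrative:

// ui.ts: receive selection updates from the main thread and keep the list in sync.
let syncedText = 'Hello World'; // fallback preview text when nothing is selected
let previewMode = false;        // flipped by the eye toggle or auto-detection

onmessage = (event: MessageEvent) => {
  const msg = event.data.pluginMessage;
  if (!msg) return;

  if (msg.type === 'selection-text' && typeof msg.text === 'string') {
    syncedText = msg.text; // follow the canvas selection
    if (previewMode) refreshList();
  }
};

function refreshList(): void {
  // Re-render the visible rows with the current mode and synced text
  // (see the windowed-rendering sketch above).
}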

AI compressed my learning loop here, but it didn't replace an engineer. I still had to understand what I was building, make the design decisions, and test and refine until it felt inevitable inside Figma. Before shipping, I also had a friend who’s an engineer do a quick mentor-style code review to catch edge cases and sanity-check the patterns.

The MVP worked. Ugly. Slow. But functional.

Now I had something real to react to, not a wireframe, not a mockup, but an actual interface I could use and feel.

What Already Exists (And Why It's Broken)

Looking around while building.

The Two Bar Problem

A real UX challenge with three failed attempts.

The core UX challenge: search and preview are different intents, but both involve typing into a field.

Font Preview, an existing plugin, fragmented this completely by splitting search and preview into two separate input fields. I watched five people use it for the first time. All five clicked the wrong input field first. Three of them said "wait, which one do I type in?" One person tried typing in both fields to see what would happen. Another person just gave up and closed the plugin.

That's not a design opinion. That's a usability failure.

I tried fixing this three ways.

Attempt 1: Two Labeled Inputs

Two inputs, clearly labeled. "Search fonts" on top, "Preview text" below. I built it, used it, felt the friction immediately. Too much cognitive load. Eyes bouncing between fields. Which one do I type in? What's the difference? Do I need both?

Attempt 2: Tab Toggle

"Search mode" and "Preview mode" as separate views. Cleaner conceptually, but constant clicking between them. I tested this with three people. All three kept clicking the tab toggle trying to figure out what it did. Every click is friction. Friction adds up.

Attempt 3: One Input, Eye Toggle

One input field with an eye icon toggle.

Type a font name ("Roboto," "Playfair") and it searches the library. Type a phrase that isn't a font name ("Hello World," "Welcome to Our Wedding") and the system detects that it isn't a font and switches to preview mode automatically. Every font renders your text. The eye icon lets you override manually if the detection gets it wrong.
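
The detection itself can be a simple heuristic: if the input matches any family name, treat it as a search; otherwise treat it as preview text. A sketch of that logic; the exact rules FontContext uses may differ:

// Decide whether typed input looks like a font query or preview text.
function looksLikeFontQuery(input: string, families: string[]): boolean {
  const query = input.trim().toLowerCase();
  if (!query) return true; // empty input stays in search mode

  return families.some((family) => family.toLowerCase().includes(query));
}

// Example with an assumed slice of the library:
const families = ['Roboto', 'Playfair Display', 'Montserrat'];
looksLikeFontQuery('playfair', families);               // true  -> search mode
looksLikeFontQuery('Welcome to Our Wedding', families); // false -> preview mode (unless overridden)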

I jammed with Claude on the icon choice. Should it be a pencil or an eye?

Pencil means "edit." But I wasn't building an editor; I was building a previewer. Eye means "visibility," "show/hide." That's exactly what preview mode does. It's the same icon Figma uses in the layers panel, the same icon Photoshop uses for layer visibility. Users already know what it means. Pattern matching.

Eye won.

I tested the final version with four people. Nobody asked how it worked. They just typed and it did what they expected.

One input. Two modes. Zero choice overload.

Who This Is Actually For

Not just designers. Anyone who touches text in Figma.

I initially thought I was building for workshop students, beginners who didn't know font names yet, people early in their design journey.

Then I tested with twelve people across different skill levels.

Junior Designers (4 people)

Didn't know what fonts existed. They needed to browse, discover, see options rendered. Recognition over recall was everything; they couldn't remember what fonts looked like because they'd never used them before. One junior told me: "I'm trying to envision my brand in this font, but I can't see it. So I'm just guessing."

Mid-level Designers (4 people)

Already had favorites. They wanted to search fast, curate a personal library, build their own type foundry over time. One mid-level designer, Sarah, a product designer at a startup, said: "I usually know which fonts I want. But I still have to scroll through Figma's list to find them. And sometimes I'm just exploring, and I don't want to commit by typing a name into the search bar. This feels... less risky. I can just browse."

Engineers (2 people)

Were updating UI copy. They didn't care about typography theory. One engineer told me: "I don't care about typography. I just need something that doesn't look terrible so I can move on." They just needed to pick something decent and ship.

PMs and Founders (2 people)

Surprised me most. One PM said: "I have thirty minutes. I just need a font that doesn't look terrible. I don't want to become a typography expert."

That reframed everything.

This Isn't a "Junior Designer Problem"

Some might say "font selection is a junior designer problem." That misses the point.

High frequency, real friction = not a junior problem.

Every designer picks fonts, whether junior, senior, or staff. Multiple times per project. For years. The friction compounds. A junior designer wastes 30 minutes because they don't know font names yet. A senior designer wastes 10 minutes because they're exploring outside their usual favorites. An engineer wastes 5 minutes because they just need something decent to ship.

The problem scales with everyone. The solution needed to work for everyone.

Scoping as Taste

Font picker, not typography editor.

Micro-Interactions

Every micro-interaction came from using the tool or watching someone hesitate.

The Moment It Clicked

When the tool disappears into the workflow.

There was a specific moment I knew the design was right.

Sarah, a medtech startup founder new to design, helped me A/B test iterations throughout the process. A few weeks after launch, I ran into her working on a brand book for her startup. FontContext open, scrolling through serifs, hearting options, building her personal library. I asked if she remembered when picking fonts was hard. She looked confused. "What do you mean? You just preview and pick." The tool disappeared into her workflow. That's the point.

That's the taste moment. When the interface stops being designed and starts being inevitable. When you can't imagine moving anything without breaking the whole thing. You can't plan for this. You can't force it. You just stay in the work long enough until it happens.

Final UI: Dark Mode

[Complete dark mode interface: Search bar with eye icon, "Hello World" previewed across multiple fonts (Playfair Display, Roboto, Montserrat), heart icons, three-dot menu, layout toggles]

Final UI: Light Mode

[Same interface in light mode: Clean white background, black text, subtle borders, all controls visible and accessible]

Try FontContext

Live on Figma Community

What V1 Doesn't Have

Version one ships lean, but that means trade-offs.

What I Took Away

The Best Design Is Invisible

When Sarah couldn't remember font picking being hard, I knew the design worked. Good design doesn't announce itself. It disappears into the workflow. The tool becomes transparent. The task becomes effortless.

AI Lowered the Barrier, But Taste Still Matters

Claude taught me code patterns I didn't know. Virtual scrolling. CDN injection. Rate limit handling. But AI can't tell me where to put the heart icon or when 120ms feels right. That's taste. Anyone can generate code. Not everyone knows what to build.

Think in Systems

FontContext is a font picker, not a typography editor. Knowing what not to build is harder than knowing what to build. Every decision, from the eye toggle to the heart placement, was tested against one question: would this scale across a design system?