Houston, We Have a Problem: Building Trust in the Age of AI

Arthur C. Clarke famously wrote that any sufficiently advanced technology is indistinguishable from magic. At a recent panel at Patterns Houston, Knapsack's VP of Product, Robin Cannon, flipped that observation on its head: magic, by definition, cannot be trusted with anything important. We don't build hospitals on magic. We don't run financial systems on it. The moment something matters, we demand to see the mechanism.

That's exactly where we are with AI. The technology is extraordinary. The trust is not. And until we close that gap, the promise of AI-powered product delivery will stay just out of reach.

The AI Trust Deficit

The trust deficit shows up across industries. At Patterns Houston, practitioners from very different contexts described versions of the same problem.

Amber Koelzer, Sr. Product Design Manager at GoDaddy, shared that an AI agent built to help customers launch new businesses was giving good recommendations, but adoption was lagging. Distrust surfaced in support tickets, in user behavior, and in the same patterns being manually recreated by people who didn't trust what the system had already built for them.

Larry Pelty shared a similar story: AI-generated sales materials and revenue forecasts that teams simply rejected. Not because the outputs were wrong, but because the models couldn't explain themselves. There was no reasoning to inspect, no source to verify.

Stefany Alarcon, a Sr. Product Designer working on AI-assisted patient journeys at Baylor Scott & White, shared the feedback she was hearing from colleagues: "What's the difference between this internal agent and just using ChatGPT? Where is this information coming from? How certain are you?"

These aren't fringe cases. A 2025 Gartner survey found that 53% of consumers don't trust AI search results — and that number is only likely to grow as AI becomes more embedded in products and workflows. The perception of AI as a magic solution is quickly fading. Any AI-powered experience that can't answer the question "how do you know that?" is likely to struggle with adoption.

But the trust problem isn't so much a technology problem as it is a data and systems problem.

Control Issues: Why AI Needs a Source of Truth

To be fair, skepticism toward AI isn't irrational. Inconsistent outputs, hallucinations, a tendency to tell users what they want to hear rather than what's true — these are real limitations that teams are running into every day. But it's also worth remembering that we're still in the early stages of enterprise AI adoption. These models are improving rapidly, and many of the rough edges we experience today are characteristic of any powerful technology finding its footing.

The deeper issue is that AI has largely been deployed without a control plane. Without a source of truth to draw from, constrain against, and be held accountable to, even a capable model is unreliable. The inconsistency isn't just a model problem — it's an infrastructure problem. AI left to its own devices will drift. It needs something solid to anchor to.

The good news is that this is a solvable problem. The solution looks less like a better model and more like better systems supporting the model — specifically, a design system.

Giving AI Something to Believe In

When most product people hear "design system," they picture exactly the kind of fragmentation that makes AI untrustworthy — a Figma library here, a code repository there, documentation that's perpetually out of date, brand guidelines living in a PDF somewhere. Each speaks a different language, lives in a different tool, and updates on a different schedule. For humans, this fragmentation is frustrating. For AI, it's unworkable.

Not all design systems are created equal. The kind of design system that can actually support AI isn't a collection of files and components — it's a universal source of truth. One that takes everything your team is already working with, translates it into a single unified language, and makes it legible to the AI tools working alongside them.

This is what Knapsack does. Knapsack takes your design tokens, components, documentation, and brand standards — whatever tools and formats your team is already using — and translates them into a structured, AI-readable language that models can draw from, reason about, and be held accountable to. When your AI tools are working from Knapsack, they're not guessing. They're working from the truth.
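
To make that more concrete, here's a minimal sketch of what a design token and component spec might look like once they're expressed as structured, machine-readable data. The shapes and field names below are illustrative assumptions, not Knapsack's actual schema:

```typescript
// Illustrative only: field names are assumptions, not Knapsack's schema.

interface DesignToken {
  name: string;        // e.g. "color.action.primary"
  value: string;       // resolved value, e.g. "#0052CC"
  description: string; // the intent behind the token, readable by humans and models
  source: string;      // provenance: where this value is defined and maintained
}

interface ComponentSpec {
  name: string;
  status: "stable" | "experimental" | "deprecated";
  tokens: string[];                        // tokens this component may use
  usage: { do: string[]; dont: string[] }; // explicit, opinionated guidance
}

// A model drawing from this structure can stay inside the allowed tokens,
// follow the usage guidance, and point back to a source when asked
// "how do you know that?"
const primaryButton: ComponentSpec = {
  name: "Button/Primary",
  status: "stable",
  tokens: ["color.action.primary", "radius.medium"],
  usage: {
    do: ["Use for the single most important action on a screen"],
    dont: ["Never place two primary buttons in the same view"],
  },
};
```

The point isn't the particular format; it's that intent, provenance, and constraints travel with the asset, so an AI tool can cite where an answer came from instead of guessing.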

Design Systems as Decision Frameworks

What sets Knapsack apart is that it's not just a structured system — it's an opinionated one. And that distinction matters more than most teams realize. A design system can be structured without being opinionated. But it can't function as a source of truth unless it is both. Structure organizes your assets. Opinion is what gives them authority.

An opinionated design system doesn't just tell teams what components exist — it tells them what's right and what's wrong. It embeds weighted considerations, defined standards, and clear guidance on where to flex and where to hold firm. It treats composability as a principle: giving designers the right building blocks and the context to use them well, rather than infinite flexibility that leads to inconsistency and drift.

This is where design systems become decision frameworks — and where they become truly useful to AI. When your AI tools are drawing from a system with real opinions and real guardrails, the outputs reflect that structure. Constrained by something good, AI produces something good.
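
To sketch what "real guardrails" could mean in practice, here's a hypothetical check that compares an AI-proposed composition against component specs like the ones above. The types and validation logic are assumptions for illustration, not Knapsack's API:

```typescript
// Illustrative only: a guardrail check, not Knapsack's actual API.
// ComponentSpec is trimmed from the earlier sketch.
type ComponentSpec = {
  name: string;
  status: "stable" | "experimental" | "deprecated";
  tokens: string[];
};

interface ProposedElement {
  component: string;    // what the AI wants to render
  tokensUsed: string[]; // the tokens it chose
}

// Compare an AI-proposed composition against the system's opinions and
// return every violation, so the output can be corrected before it ships.
function validateProposal(
  proposal: ProposedElement[],
  specs: Map<string, ComponentSpec>,
): string[] {
  const violations: string[] = [];
  for (const element of proposal) {
    const spec = specs.get(element.component);
    if (!spec) {
      violations.push(`${element.component}: not part of the design system`);
      continue;
    }
    if (spec.status === "deprecated") {
      violations.push(`${element.component}: deprecated, suggest a replacement`);
    }
    for (const token of element.tokensUsed) {
      if (!spec.tokens.includes(token)) {
        violations.push(`${element.component}: off-system token "${token}"`);
      }
    }
  }
  return violations;
}
```

An empty list means the proposal stays inside the guardrails; anything else is a concrete, inspectable reason to revise, which is exactly the kind of explainability the skeptics in Houston were asking for.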

Why People Struggle to Trust the Process

The technology and infrastructure challenges are real, but trust is also a people problem. Users resist change — sometimes because they genuinely dislike it, more often because they simply aren't used to it yet. And for some users, including many neurodivergent people, inconsistency is a genuine barrier to adoption. Stability isn't just a nice-to-have. It's a requirement for building trust, creating a consistent experience, and driving adoption.

Solving the human side of the trust problem means giving people a genuine sense of understanding and ownership in the system. That doesn't happen through onboarding decks or mandates — it happens when people can see how the system works, contribute to it, and watch their input shape the outcomes. The most powerful thing you can do to build lasting trust is empower the feedback loop. When teams can experiment, critique, and feed their experience back into the system, trust becomes self-reinforcing. Because adoption follows trust, trust follows ownership, and ownership follows participation.

A Paired Approach to Building Trust

Building trust in your AI outcomes requires two things working in parallel: the right foundation, and the right processes to support it.

The foundation is infrastructure. A mature design system — one that unifies your design decisions, code, documentation, and brand into a single AI-readable source of truth — is what everything else is built on. Without it, AI has nothing solid to anchor to. Knapsack is launching a Design System Maturity Assessment to give teams a clear-eyed view of where they stand across infrastructure, governance, adoption, and AI-readiness. If you'd like to be among the first to access it, reach out and we'll be in touch.

The processes are human. Knapsack isn't just a platform — it's a partner in building the feedback loops, workflows, and collaborative practices that turn skeptical teams into invested ones. The technical foundation and the human foundation have to be built together.

AI doesn't earn trust on its own. It earns trust when it's grounded in a system people believe in, maintained by processes people own. The teams getting this right aren't the ones with the most sophisticated models. They're the ones who built the foundation first — and brought their people with them.

Interested in learning more about the Design System Maturity Assessment? Get in touch.
