Digital products no longer exist as standalone apps. They live inside complex ecosystems of interfaces, AI systems, legacy infrastructure, and workflows that all have to work together. In this episode of Patterns, Chris Strahl talks with product design leader Andi Rusu about what it takes to design reliable digital experiences in environments where multiple systems—and increasingly AI—are shaping how products behave.
Drawing on experience at Disney, Sonos, Axon, and Microsoft, Andi explains why trust is becoming the central design challenge in modern product development. As AI becomes embedded in digital products, the job of design expands beyond crafting interfaces to shaping how complex systems behave, how decisions are made, and how users understand what’s happening behind the scenes. The conversation explores how designers can balance abstraction and transparency, when friction actually improves the experience, and why human judgment still plays a critical role in building trustworthy AI-powered products.
We’ll explore:
- Why modern digital products behave more like ecosystems than individual apps, and how fragmentation across systems creates new design challenges for product teams
- How AI is becoming a new layer inside product development, influencing how workflows, decisions, and automation shape the user experience
- Why trust becomes harder to maintain in AI-driven products, especially when systems make decisions users cannot see or easily understand
- Why human judgment still matters in AI-powered design, and how designers balance abstraction, transparency, and intentional friction to create reliable user experiences
Guest
Andi Rusu is a product design and research leader focused on creating user-centered experiences across complex product ecosystems. He has led design teams and initiatives at Disney, Axon, Sonos, Microsoft, and Deloitte, helping organizations deliver impactful digital products at scale. He has also taught experience design at Cornish College of the Arts, the University of Washington, and the School of Visual Concepts.
Transcript
Robin Cannon [00:00:00]:
This podcast is brought to you by Knapsack, the intelligent product engine helping teams design, build, and deliver digital products at the pace of ideas. Knapsack creates a system of record built for both humans and AI, giving product teams the data structure and alignment they need to deliver with speed, scale, and confidence. Learn more at knapsack.cloud.
Chris Strahl [00:00:22]:
Hey everyone, welcome to the Patterns Podcast. Each episode, we sit down with the leaders and builders defining how modern digital products come to life. From systems and tools to culture and decision-making. We dig into what's driving real impact today and shaping the future of how teams build.
Hey everybody, welcome to the Patterns Podcast. I'm your host, Chris Strahl. Today we're talking about product ecosystems. Bit of a change up from some of our more narrow topics.
Chris Strahl [00:00:49]:
We're going really broad here. We're talking about how digital products don't just include apps. They're interconnected systems of hardware, software, AI, there's a bunch of legacy infrastructure, and all these things get stitched together and then they get sent to potentially millions or even billions of users. The real challenge isn't building new features for these applications, it's making that entire ecosystem work efficiently and consistently so people actually trust it and wanna use it.
So our guest today is Andi Rusu. He's been designing inside of that level of complexity for a really long time. And so we're going to unpack what it takes to design across an ecosystem without losing trust and control. Andi, why don't you give us a quick backdrop or understanding of what you've been building in?
Andi Rusu [00:01:34]:
Well, thanks, Chris. My name is Andi Rusu. I'm a senior product design manager at Disney in the Workforce Technologies group currently. I lead the effort to design a unified experience for all employee segments across the Disney ecosystem, which is about 250,000 employees, right? So prior to this, I was the product design and research director for connected devices and services at Axon, dealing with first responder and law enforcement users and workflows. Before that, I led the Sonos device setup, configuration, and onboarding effort for all the Sonos products in the market. Lastly, at Microsoft, I was the design director for the location and identity related experiences for Windows platforms, as well as the HoloLens.
Chris Strahl [00:02:20]:
So you spent a ton of time at big companies doing lots of things with tech that touches lots and lots of different systems. And oftentimes in environments that are, I would guess, legally challenging or regulatorily challenging areas of interest where there's a lot more at stake in terms of trust than just does the app work or does it not.
Andi Rusu [00:02:45]:
That is true. The thing about trust as a currency, let's call it, is that user trust in an operating interface, like any trust for that matter, is hard to gain and easy to lose. It's a common rule. And the work of building quality experiences nowadays is more difficult than ever. The contact and content surfaces are more varied. Consistency of operation is more difficult than ever. And the failure points for trust loss are more distributed than ever. What we do as designers has only expanded in complexity because we not only have to aim forward in designing the new, but we have to integrate it and mindfully interface with the old.
Andi Rusu [00:03:25]:
This really is a forgotten sort of design art about how to blend two very different legacy systems with new systems and so on. It's a big challenge.
Chris Strahl [00:03:35]:
So when you think about those ecosystems, and you think about those failure states or those places where they break, what are the fracture lines and the fault lines in those systems?
Andi Rusu [00:03:45]:
That's a good question because it's a contextual problem depending on what the company does, what the users do, what the workflows are, and so on. But to take one, for example, if we're thinking about authenticating across many different credential systems, that's, let's say, one seam. And this is necessary for security, but when you deal with a unified experience that touches many different areas of a system, this impacts overall engagement and trust because it's perceived as a fault rather than a necessity when you need to hop from one system to another to access one document or another. And this particular thing is almost in direct conflict with the promise of easy and intuitive operation that we are constantly tasked with as designers, and we struggle to deliver. But some of these things are out of our control, and then we need to sort of work gracefully with them, to communicate what is happening and why it's happening, and to make sure that the user doesn't lose their context just because they're required to do a certain thing that the system demands for various reasons, like security, for example.
Chris Strahl [00:04:51]:
Yeah, so you don't want a user to feel bounced around, but oftentimes that bouncing is somewhat of a necessity. And so you have all these different points of contact that you're touching in that experience. You have your device, the thing that a user is actually touching. Like a cell phone or a body camera or something like that. You have the interface, like what are you presenting in that application or that interface to that user? And then you have like some sort of communication with the cloud. And in those three sort of different interface layers or different failure points, how do you really think about design across that stack?
Andi Rusu [00:05:32]:
It works like a story. It has a beginning, a middle, and an end. A good story works because you move seamlessly and elegantly between the stages of the storytelling, and you reach a conclusion of whatever you're trying to communicate or whatever's being communicated to you. Design works in the same way. Picture the beginning happening on a device, the middle happening in the cloud, and then the end happening on the device again. This cycle, this circuit, gets completed in a way that doesn't make you think about the device you're operating or the process taking place. Instead, it just happens. You reach that point of ignoring the surface and the device.
Andi Rusu [00:06:14]:
You move seamlessly through the experience and through the workflow. It's key. When I was designing devices for first responders at Axon, that was a big push: we wanted the first responder to not think about the device they're operating. We want them focused on the outcome and what they're trying to do. Lives are at stake. So a key part of our mission was how to reduce the complexity of understanding the operations and instead focus on the outcome of the experience itself.
Chris Strahl [00:06:42]:
Yeah. So what did the challenges or the failure points look like in that? I mean, I can imagine some, right? I have wearable device technology on me right now. I have a ring. I have a watch. And those things are largely invisible to me on a day-to-day basis. The watch has an interface that is like an extension of my smartphone. My ring doesn't have any interface at all that I necessarily touch unless I actually go into an app. When I'm thinking about that relative to something that is like a wearable in a first responder or law enforcement scenario, that seems to bring up an entire other level of potential points of failure and trust that seem really important.
Chris Strahl [00:07:17]:
And I'd be curious, as you were exploring those things, what came up as things you ran into or things that were difficult design problems?
Andi Rusu [00:07:26]:
We design under a certain set of assumptions that we then check with the users, making sure we have their priorities in focus and that we understand exactly how they operate these things. We used to go on ride-alongs with officers to understand how they operate the equipment, to understand exactly which pressures of the environment dictate operating in a specific way. The problem is that you don't foresee all the possible permutations of complexity, given all the potential interactions you may have.
Chris Strahl [00:07:56]:
And you're trying to abstract a bunch of these interactions, right?
Andi Rusu [00:07:58]:
Absolutely.
Chris Strahl [00:07:58]:
Because you don't want users to know everything that's happening on a device. It's too much of how the sausage is made.
Andi Rusu [00:08:04]:
Yeah, and that's where you get into a situation where, with the best of intentions, you design for all the possible permutations you can see and anticipate. However, certain things happen, and certain combinations of things happen, that cause you to reevaluate. To give you a concrete example, the police body camera has one big button on the front. The officer starts the recording manually by double-tapping that big button. The recording can also be started by other means, like a holster sensor: when the gun gets pulled out of the holster, the camera starts recording. But the manual way to start it is to double-tap the big button. To stop the recording, you used to have a long press, a 3-second long press on the button itself, and that stopped the recording.
Andi Rusu [00:08:52]:
Those two are vastly different muscle memories, and we wanted to keep it that way for this very specific reason. So that was our initial design and design intent for these operations. There was an incident in Cedar Falls, I believe, where a police officer and a suspect got into a physical scuffle. The officer started the recording. The recording was operating nominally, as designed. However, in the scuffle there was a bear-hug situation where they were in really close proximity, and the suspect, in effect, actuated a long press on the camera itself. In that process, the suspect reached for the officer's gun. They were struggling.
Andi Rusu [00:09:33]:
The officer managed to get control of the weapon and shot the suspect. The advantage was that a squad car was located in the right spot, with a camera aimed in the right direction, to capture this interaction. So the officer's retelling of the story was confirmed by the squad car camera. But that caused us to rethink that on-device UX pattern. It couldn't be the long press anymore. We had to introduce a confirmation press. We had to introduce friction, to operate outside of the easy, intuitive, muscle-memory-based interaction: double tap to start, then long press, and then confirm by pressing a different button that you indeed want to stop the recording.
Andi Rusu [00:10:15]:
Otherwise, it continues recording. Just to avoid this type of situation. This gives you an idea about how design is impacted by external forces and how this can fail, again, not by design, but by operation.
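The revised stop-recording pattern Andi describes can be sketched as a small state machine. This is a hypothetical model for illustration, not Axon's actual firmware; the state names and methods here are invented:

```python
from enum import Enum, auto

class CameraState(Enum):
    IDLE = auto()
    RECORDING = auto()
    STOP_PENDING = auto()  # long press registered; waiting for confirmation

class BodyCamera:
    """Toy model: a long press alone can no longer end a recording.
    An explicit confirmation press is required; otherwise recording
    continues (guarding against an accidental press in a scuffle)."""

    def __init__(self):
        self.state = CameraState.IDLE

    def double_tap(self):
        # Manual start: double-tap the big front button.
        if self.state is CameraState.IDLE:
            self.state = CameraState.RECORDING

    def holster_sensor(self):
        # Automatic start, e.g. weapon drawn from holster.
        if self.state is CameraState.IDLE:
            self.state = CameraState.RECORDING

    def long_press(self):
        # A 3-second press no longer stops outright; it only arms a stop.
        if self.state is CameraState.RECORDING:
            self.state = CameraState.STOP_PENDING

    def confirm_stop(self):
        # A press on a different button confirms the armed stop.
        if self.state is CameraState.STOP_PENDING:
            self.state = CameraState.IDLE

    def other_input(self):
        # Any other input cancels the armed stop and keeps recording.
        if self.state is CameraState.STOP_PENDING:
            self.state = CameraState.RECORDING
```

The deliberate friction lives in `confirm_stop`: without that second, distinct action, the camera falls back to recording rather than stopping.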
Chris Strahl [00:10:28]:
Yeah, and it also gives us a sense of how fragile a lot of these systems are. When you think about, in the case you're talking about, how do I control the devices I'm working with versus what's the status of those devices, there are a lot of trade-offs you're making, and those trade-offs are oftentimes counterintuitive. Like you said, introducing friction into what should be a rather rote, simplistic part of operating a physical device ultimately is necessary because of outside implications. And that's very difficult to consider. And again, back to that abstraction standpoint: abstracting away complexity oftentimes just pushes a lot of it underneath the surface.
Andi Rusu [00:11:11]:
Yeah, and that's where it becomes difficult, because designing for muscle memory is critical, especially in these critical systems I was talking about, like driving or first responder interactions. While our instinct as designers is to constantly rethink, reinvent, and reimagine interaction patterns, that can be an extreme liability given the potential for a combination of factors leading you to a completely unforeseen outcome. This translates into cost, especially when you think about the change management at scale that's required to retrain somebody on a device that operates differently. Say, a Tesla driving interface that's on a panel versus the physical buttons you're used to for controlling the entertainment system in the car. You have to retrain yourself to use that thing on a flat screen versus, as you're driving, looking at the road and being able to reach for a button you know exactly where to find and turn down the music. So there's cost, change management, retraining, and also the cost of mistakes while adapting to and adopting the new. There is a cost that gets reflected in the mistakes people make, and those can sometimes be very impactful.
Chris Strahl [00:12:19]:
Yeah, it's interesting, right? The idea of software making things worse. On the surface, there's the idea of governing my entire automobile via a very large touch panel versus the tactile response you get from a physical button. And now there's the hybridization of that. For example, I just got a brand new EV. My buttons have these little mini displays, maybe it's e-ink, I don't know exactly what it is, but the buttons will actually dynamically change based on the interface. I get tactile feedback with them, but the button is still different depending on the context of what I'm looking at in the car at any given moment. And there is this sense of confusion about that. And there's also the whole problem of negative patterns that very often exist inside of applications to either keep users trapped in an ecosystem or make it more difficult to accomplish a task like a cancellation.
Chris Strahl [00:13:12]:
And you still end up in this situation where, across those systems of device, interface, and cloud, abstraction is how we ultimately make these things make sense to people. But it also introduces risk: either those negative patterns show up, or those abstractions lead to behavior that runs counter to the user's intention or the intention of the product.
Andi Rusu [00:13:37]:
There's also an aspect where, through software interaction and software use, you can mask certain things that do happen on the device, that need to happen on the device, but give the user the impression that something else is going on in order to keep them in the moment. At Sonos, when we were dealing with configuring a new speaker that you bought, that's a really complex operation. That speaker could have been on the shelf for several months. You get it, and then it needs to update its firmware before it can play music. There's an extensive chain of detection, connection, authentication, firmware download, firmware install, then configuration, and then playback. That chain needs to happen. In the previous app, it took about 14 taps to get through this chain.
Andi Rusu [00:14:28]:
Well, those taps were not necessary. The user would just confirm that a step took place before moving to the next step in the app. You don't need that confirmation. You just move the user right along, and you only ask for user input if things go wrong. So we were able to reduce the interactions the user has with the app from 14 taps to 4. That gave the user a level of comfort with the device and the understanding that things are happening and moving along. It created the impression that the software setup, the configuration, downloading the firmware, and installing the firmware were all a lot faster. That was the perception.
Andi Rusu [00:15:08]:
However, the actual process is exactly the same, because you're dealing with the same internet, the same pipelines, the same variance in the internet weather, which may cause it to download faster or slower, the same process of inflating the package, installing it, testing it, making sure it runs, and then confirming it. But hiding all that, telling the user something's happening, now we're moving to the next step seamlessly, created the impression that something was happening a lot faster when in fact it was the same. So with careful use of design and messaging, you can give the user a sense of comfort that is not possible with just a straight-up hardware implementation.
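The 14-taps-to-4 change Andi describes boils down to narrating progress instead of requesting confirmation, and asking for input only on failure. A minimal sketch of that policy, with invented step names and a simple retry-once rule, not the real Sonos implementation:

```python
def run_setup(steps, notify, ask_retry):
    """Run a setup chain (detect, connect, authenticate, download,
    install, configure), telling the user what's happening instead of
    asking them to confirm each step. Input is requested only when a
    step fails; declining the retry aborts the setup."""
    for name, step in steps:
        notify(f"{name}...")  # perceived progress: tell, don't ask
        try:
            step()
        except Exception as err:
            if not ask_retry(name, err):  # the only tap a user may need
                return False
            step()  # one retry after the user intervenes
    return True

# Illustrative happy path: no prompts, just progress messages.
steps = [
    ("Detecting speaker", lambda: None),
    ("Downloading firmware", lambda: None),
    ("Installing firmware", lambda: None),
]
```

On the happy path the user sees a stream of status messages and zero prompts; the confirmation taps only reappear when something actually breaks.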
Chris Strahl [00:15:50]:
That's so funny. There's the old designer joke: you want to improve performance, add a spinner. That idea of perceived performance, that something is happening, is what gives the user that trust and that comfort. I do want to move over to that idea of trust and comfort for a second. We've talked a lot about these ideas of ecosystems, these big complex things with all these different moving parts. If we're thinking about this in terms of device, interface, and cloud, and then we're adding AI onto that, we have this new thing coming on where systems are oftentimes built to govern other systems. In the pre-meet before the show, I brought up the idea of compound engineering, and the structural assumption that there's a bunch of AI systems being built specifically to govern a large, complex set of tasks that are themselves another system.
Chris Strahl [00:16:41]:
This level of automation is akin to something we've never seen before. And whether you fall on the hype side or the skepticism side of the spectrum of AI, I think the intention is, of course, to say that AI is more than, hey, we can have a conversation with a robot. AI is a lot about how we can actually create impactful systems that are somewhat autonomous or self-governing, and that can then have effects on other systems. And there is a lot here to unpack, because the ideas of abstraction and trust are just as true for AI as they are for anything else. But I think AI is also not particularly well understood right now. There is a lot of mistrust, and there's a lot of potential and a lot of skepticism around where all this goes. And a lot of times people talk about this in terms of how we keep a human in the loop. Where are the human-based moments where we interact with all these different AI systems? And where do we think about the defined inputs and outputs, and how humans control those things? I'm curious: when you think about the introduction of AI, and designing for AI inside of these big ecosystem plays, where does your head go? And how does that stay relevant to the conversation we were just having, about how the devices and ecosystems we're creating are all about giving users comfort and abstracting away the things they maybe don't need to know?
Andi Rusu [00:18:11]:
The interesting thing is that, as a rule, there are soft requirements and hard requirements when you're dealing with this stuff. As a soft requirement, humans inherently, instinctively know what other humans want and respond to. And as a hard requirement, the human still makes better decisions on things that require human-driven nuance and unpacking human contextual ambiguity. Ambiguity is generated by a lot of external factors and contexts, and the combination of these things can be dauntingly large. And while the machine is very good at understanding all the possible permutations of this, the machine is, right now at least, not very good at understanding how the human reacts to those things. The idea of the human in the loop is specifically designed to account for this: the human recognizing another human and recognizing the need of the other human. It's not so much about checking the machine and double-checking the output, though it is about that at the end of the day. It's about understanding what you as a human need to get from the machine, and then making the right call given the ambiguity that is sometimes human-driven, and given how that response will be perceived.
Andi Rusu [00:19:26]:
You need to be there to accommodate the other human that experiences your output or the product's output. So that's how I view this: inherently, all products are designed for and by other humans. If a machine designs for another machine, totally fine. The machine will take care of itself. It's all good. But when a human uses a machine to design for another human, they need to check the output to make sure that the other human can consume what they've designed.
Chris Strahl [00:19:54]:
Hey everyone, I'm taking a quick break to tell you about Knapsack's Pattern Summits. If you've never been, these gatherings are for senior product design and engineering leaders navigating the complexity of modern digital production work. We bring folks together for thoughtful, discussion-based sessions where you can share what's top of mind, learn from peers, and leave feeling renewed. Pattern Summits are invitation only and intentionally small so the conversations stay meaningful. If you'd like to join us, visit knapsack.cloud/events to request an invitation.
Chris Strahl [00:20:45]:
I think we're seeing these increasingly complex ecosystems with that too, right? You look at OpenClaw, which is taking the world by storm. I was reading an article by a guy who wrote a really simple connection between Signal and Claude. And it was all about how he and his partner shopped for their weekly groceries on Instacart.
And the idea was, because it was in a Signal text message, he and his partner could connect over Signal, use that as essentially a text messaging service that would then utilize OpenClaw to interact with Instacart and automatically fill out a shopping list based on the things they were asking for. Like, I want steak tacos. And so it needs to know to add tortillas, cheese, peppers, onions, and steak. And the points where human interaction was chosen as essential in that system were really interesting to me. There were some that were really clear, like, I don't want an LLM in charge of using my credit card to click the buy now button and send the order to my house. I want to make sure that I review every shopping cart before I actually click buy. But there were other ones that were really subtle, like, hey, initially my partner was uncomfortable with the idea of an LLM listening to our conversations and doing things in the background that we were unaware of. And so there had to be a response written in the app that said, I'm going to take this and transform it into a shopping list for you.
Chris Strahl [00:21:50]:
And then there were additional decision points, like, okay, which steak, which tortillas? And those become interesting: how do you drill down and get a little deeper into this? I think these are the new crop of design challenges we're going to face, because Signal is not in charge of anything associated with that LLM or with Instacart. Instacart is completely unaware that an LLM is actually the thing doing all the browsing of the website. Or maybe they're not; maybe they have some way to detect that. But regardless, they are not affiliated with the thing that's actually navigating and clicking the buy links. And all of that is written into basically a middleware application that is the bridging technology between all these different ecosystems.
Chris Strahl [00:22:29]:
And then there's like Anthropic, who has the LLM that is powering the middleware technology. That to me is a really salient example of the direction of where all this stuff is headed. And I'd be curious, like, your take on this. How do people design for these systems appropriately? Where are those failure states that you see? And like, are there really any dangers in the automation of a lot of this stuff?
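The human-in-the-loop checkpoints in that grocery story can be made concrete. Here's a sketch of a purchase gate; the function names are invented, and only the cart review stands between the agent and spending money:

```python
def checkout(cart, ask_human, place_order, announce=print):
    """Agent-filled cart, human-gated purchase. The agent announces its
    intent up front (addressing the 'what is it doing in the background?'
    discomfort), then waits for explicit approval before buying."""
    announce("I'm going to take this and turn it into a shopping list for you.")
    if ask_human(cart):           # human reviews the full cart first
        return place_order(cart)  # irreversible action only after approval
    return None                   # declined: nothing is bought
```

The subtler checkpoint from the story, announcing intent before acting, is the `announce` call; the obvious one, never letting the LLM click "buy now" unattended, is the `ask_human` gate.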
Andi Rusu [00:22:55]:
Yeah, I mean, you pretty much nailed it. The fact is that we as humans are concerned only with, in Amazon terms, the last-mile problem, right? We as humans are very tuned to, I want steak tacos. That is the last mile, the last decision. But to get to that, you have a myriad of other decisions. What tortilla? What peppers? What flavor of this, what flavor of that? And in order to get there, you have the intermediate state, which you described perfectly: well, I'm going to add a middleware to deal with that middle layer of decision-making that gets me to the last mile of getting my steak taco. And what's the predictive system we put in place to account for this? Based on your pattern of eating steak tacos, we're going to infer that this is the right tortilla, this is the right pepper, this is the right cheese. And therefore, when you say, I want steak tacos, all these middleware, quote, decisions are taken care of, and you're getting the steak taco you want, rather than, I didn't want Colby Jack, I want Pepper Jack.
Andi Rusu [00:23:57]:
What is this? This is garbage now. And that creates the failure point of, well, the AI didn't understand that I want steak tacos with this particular flavoring. That right there is the clearest example of where the failure points exist in a system like this. Today, in my Disney work, we're implementing a lot of large language models to deal with human resources problems. And these are very specific problems that deal with this last-mile thing, where there's a myriad of intermediate things that need to happen to get to that point. There are simple questions, like, how many days of PTO do I have left? Well, there's a simple calculation that takes place, and the answer is very clear. There are other things that are not as simple. The quote middleware that needs to be put in place has to infer the right context: the right location of the person, the right function of the person, the right level of the person, the right workflow of the person dealing with these things. There are so many variables in place that it creates its own sort of ecosystem of failure, for lack of a better word. That would make a great t-shirt if printed properly, the ecosystem of failure, because we all kind of live in it. But it's based on the complexity of the task we have to accomplish.
Andi Rusu [00:25:11]:
And we are the ultimate judges of what success is for that.
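The "simple calculation" case Andi mentions is worth contrasting with the hard one. In the sketch below, where all names and policy numbers are hypothetical, the arithmetic is trivial; the real middleware work is inferring the right context key, location, level, and so on, before the lookup:

```python
def answer_pto(employee, policies):
    """Deterministic last mile: once the middleware has inferred the
    right context (location, level), the answer is a simple lookup
    and subtraction. Getting that context right is the hard part."""
    policy = policies[(employee["location"], employee["level"])]
    remaining = policy["annual_days"] - employee["days_taken"]
    return f"You have {remaining} days of PTO left."
```

If the inferred context is wrong, the wrong location or the wrong level, the calculation still runs cleanly and returns a confident, wrong answer, which is exactly the trust failure mode being described.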
Chris Strahl [00:25:14]:
So how do we push back against that ecosystem of failure? To me, there are a couple of parts to this, right? There's the idea of trust in the system. How do you deal with people who are resistant to that? Or maybe not resistant, maybe uninformed. Then there's the consistency and non-determinism problem: if we ask the same LLM, buy me a shopping cart full of steak tacos, on any given day you might get different types of steak or different seasonings in your taco. It'll generally approximate what a steak taco is, but it's maybe not the same every time. And then, how do you think about the actual measurement of that success? So maybe we should break it down that way. Does that sound like a decent framework?
Andi Rusu [00:25:55]:
It does. And that's where small, incremental progress is important. The work right now is driven by how we effectively and measurably show people, and set the expectation, that things are getting better, that things are improving. There's that Will Smith eating spaghetti test that you constantly see: look at how far we've come in several short months.
Chris Strahl [00:26:17]:
It's like the most bizarre internet meme that showed up as the barometer or benchmark for AI. Such a strange thing.
Andi Rusu [00:26:24]:
Yeah, but it is very effective at showing you a very clear picture of the improvements that have been made. And that's where I think a lot of the gradual success comes into play. But it's also important to tell people, to give them an idea and set expectations that things are getting better, and then clearly communicate what those improvements are and when they can expect to see them. Because if you don't do that, some users are conditioned to always see this as a failure, because they experienced a failure with an LLM several months ago. It's a bad-impressions-last-forever type thing, right? They will always think of the worst possible interaction they had with it, and they keep that as a benchmark for how good the experience is. So you have to constantly find a way to communicate to users that things are getting better, that things are improving, and here's how, and here's why: Will Smith is eating pasta at an ever-increasing rate of accuracy. It becomes obvious to people that things are improving, and in fact, they can now use it with a different degree of trust than they did several months ago.
Chris Strahl [00:27:28]:
Yeah, I mean, are you talking about then using measurement as a trust engine? So we're basically saying— [Andi: Absolutely.] —hey, our ability to set new benchmarks is the way that we really establish the trust.
Andi Rusu [00:27:40]:
Understanding where the user is in terms of expectations will let you know how to connect to that user in terms of the current outcome that you have. Are you off, are you on track, or are you indeed exceeding it? Then you communicate that and say, hey, you, the user, have this level of expectation because of your past experiences with this. Let us tell you where we are now, and let us put in front of you where we are now, to give you that extra level of comfort. Give it another try. Tell us what you think. Help us learn. Help us help you. And help us put you in a better spot.
Chris Strahl [00:28:12]:
You're effectively talking about evals, in a large sense, as being the thing that ultimately engenders user trust. So it sounds like design, and how we think about design in the future, is going to need to treat evals as intrinsic to determining: are we below bar, at bar, or above bar for any of the things we're actually building with AI?
Andi Rusu [00:28:33]:
Yeah, and that's my point behind it: as a human being, in times of confusion, when faced with a lack of clarity about where things are and where they're going, you need principles to guide your progress. What guides your progress is your moral compass and the principles you set for yourself. That's how you're able to deal with uncertainty and ambiguity in any type of situation, right? And using AI to design software is not any different. You need to set up principles. And you need a set of rules that guide your development, immovable baseline things that don't change a lot. Principles, metrics, user testing, the design foundations are the key that unlocks this next level of value that we designers put forward. There's a silver lining to this AI onslaught we're all faced with: what we know, the rules and the things we were trained with to evaluate human reactions to design, is not only valuable but essential for how design in this age needs to be properly contextualized and positioned for value to be returned. Human reactions don't change.
Andi Rusu [00:29:42]:
They haven't changed for tens of thousands of years, right? It's a similar situation to the evolution of the internet and how internet development has gone along the way. First there was HTML. Then came Flash, and asynchronous JavaScript, and DHTML, VRML, if you're old enough to remember that. But all these things were delivered through the pipeline of the internet. They all rely on packets being delivered with headers, all the things that get assembled on your screen into an experience. Those things don't change. You still need tags. You still need the old-school delivery method of the internet to carry all the current extraordinary developments that we're seeing.
Andi Rusu [00:30:25]:
So there are some things that don't change, and that's what, as designers, we have to train ourselves to understand and realize: there's a bedrock that things are built on. What is that, and how can you use it to give the new the flavor it needs to have?
Chris Strahl [00:30:39]:
Yeah, taking that bedrock metaphor one stage further: all the world's buildings are built on some form of bedrock, from the tallest towers in the world to the cruddiest little hovel. My intention there is to paint this picture: adoption is going to be uneven. There are going to be gaps. There are going to be organizations that get it and organizations that struggle. Given that, how do we address the fact that a bunch of companies, people, and ecosystems are potentially going to really struggle in a world where AI adoption is what gives you superpowers? And hey, maybe that doesn't happen, and maybe the skeptics are right. But given the way the world appears to be trending, if you aren't able to embrace AI in at least some part of your workflow, there's a serious problem of being left behind or being left out.
Chris Strahl [00:31:36]:
Potentially that's worse in a world where AI governs a lot of it.
Andi Rusu [00:31:41]:
There's this idea that we've had for a while, I think it was William Gibson who said it, that the future is already here, it's just unevenly distributed. That's really the reality of the world we live in today. How we navigate this is honestly going to be one of the big challenges we face moving forward, because a lot of the conversation is driven by a very small percentage of the population that can experience these things at their fullest value. Meanwhile, 99.9% of the population is in a very uneven state when it comes to receiving, experiencing, and deriving meaning and value out of this stuff. Yet the 0.1% of the population is pushing it forward as a great engine of innovation and value. So what we talked about earlier, setting expectations, being able to communicate this clearly and eloquently to the rest of the population, and making sure that we bring them along, is literally going to be, I think, the next frontier in terms of design, communication, expectation setting, and the value derived by the companies that are moving it forward.
Chris Strahl [00:32:48]:
Well, Andi, thank you so much for being on the show. This has been a pleasure, and it's great to get your take, your philosophy, your ideas. Thank you so much for offering your opinion and spending time with us.
Andi Rusu [00:32:58]:
I appreciate you having the conversation. It's valuable to me to bring it all together in my head. Having these conversations is important to me, so I can then contextualize it for different conversations later.
Chris Strahl [00:33:09]:
If you're interested in more conversations like these, we don't just have the Patterns podcast. We also have Patterns events. They happen all across the country, one a month. Our next one is going to be in Houston, Texas on March 19th. If you're interested in joining, you can sign up online at knapsack.cloud/patterns. Check us out over there.
Chris Strahl [00:33:25]:
Otherwise, this has been the Patterns podcast. I'm your host, Chris Strahl. Have a great day, everyone.
Hey everyone, thanks for listening to the Patterns Podcast. You can reach us on LinkedIn using the link in the show notes. The Patterns Podcast is brought to you by Knapsack, the intelligent product engine helping teams design, build, and deliver digital products at the pace of ideas. Learn more at knapsack.cloud.