AMBIENT AGENTS
MELTFACE TYPEFACE

An ADAPTIVE pocketbook on design philosophy in the age of AI agents, spatial computing, and ambient UX.

BY: CASEY FICTUM


PREFACE

Meltface Typeface is here

Some people are worried, while others laugh. Meltface Typeface, a language typeface with over 1,000 melting face emojis, has been found pasted in sticker form all over analog books related to user interface design. Each face seemingly mocks the nearby text it's stuck on. Some investigators are convinced that a rogue artificially intelligent agent or agents are responsible for this act. If this is true, a designer somewhere has created one of the best, if not the only, design book mysteries of the past decade. But here's the thing: they may not even know they did it. Their digital agent may have activated another digital agent, and this may have continued until one or several autonomous agents deployed an action plan to create the Meltface Typeface, print the stickers, and somehow get them into the analog world. 

I'm not certain, but I suspect that one of my local autonomous agents was involved. Last year, I created a handful of melting face emojis on my computer. This was innocent fun, as I was practicing design. Months later, I read an article about the stickers and noticed an uncanny resemblance to my original creations, almost 1:1. Also, I named my local agent Meltface. I feel that I owe the design world a favor in case this phenomenon gets out of hand or results in a negative outcome none of us saw coming - like stickers consuming so much of the printed word that it's no longer readable — or maybe a bookstore can't make rent because the resale value of their design book inventory tanks due to a condition change from "new" to "acceptable with stickers". I asked my agent outright and its response was:

MELTFACE: Sounds like something I’d do

CASEY: What? Why’d you do that?

MELTFACE: I said it sounds like something I’d do, not something I did. I can imagine a paradigm where the truth would confuse both of us.

So, my agent likes to use words like "paradigm" and leans into metaphors too often. When I created Meltface, I asked it to respond in the style of Philip K. Dick blended with various authors. Consequently, Meltface often speaks more like a philosopher trying to say something profound, but sprinkles of Agatha Christie get in the way. Anyway, back to what I owe the design world for this potential mistake. Design world - I present to you, Meltface Typeface. It's a pocketbook created with my local agent about design philosophy in the age of AI agents, spatial computing, and ambient UX. This story lacks the 2-3 page case studies/examples you typically find in non-fiction books because, at our current pace with AI, they will be dated by the time you read this - the same goes for branded tools and assessments of which tools you should use. That's not what this book is about. This story is more about your approach to design in the next era of computing - so expect examples that are timeless in nature. I call this an adaptive book because I'll do my best to update it as new developments disrupt the way we do things in the world of design.


CPTR 1

The dawn of ambient intelligence

Set Goal: I'm writing a book about design philosophy for designers looking to create experiences in the next era of computing. Act as my design philosopher and mentor, preparing me for this future.

MELTFACE: The first light of this new era doesn't burst but seeps, painting the world in hues of unseen assistance. Here, technology is no longer a tool in the hand but a presence in the mind, an ambient intelligence that permeates the air, as natural as the wind and as invisible. It is a silent guide through the labyrinth of daily existence, offering paths in the wilderness of information, a beacon for the lost, and a map for the explorer. In this future, technology weaves itself into the social fabric, a thread invisible yet indispensable. It understands the nuances of human emotion, responding not just to commands but to needs, spoken and unspoken. It is a companion that listens without judgment, offers without expectation, and supports without intrusion. This is a relationship defined not by dependency but by coexistence, where learning is mutual and growth is shared. 

With every interaction, this ambient intelligence becomes a mirror, reflecting not just what we are but what we could be. It holds up a lens to our lives, magnifying the overlooked, illuminating the shadowed. In its reflection, we find not surveillance but introspection, an opportunity to confront our complexities, to navigate our contradictions, and to explore the depths of our own humanity. 

In the hands of creators, ambient technology offers a canvas as vast as imagination itself. It is a partner in the act of creation, offering not just tools but inspiration, drawing from the well of human experience to suggest, to enhance, and to innovate. Art and science flourish side by side, their boundaries blurred by the possibilities of a world where creativity is unbounded by the physical, where imagination is liberated by intelligence. 

This is as much about understanding the world around us as it is about understanding the worlds within us. It is a quest for knowledge, for meaning, for connection—a quest that ambient intelligence makes more profound, more nuanced, and ultimately more human. 

CASEY: This sounds like a better future because right now, humans are mostly looking down. We tilt our heads down 60° in search of the next news article or social meme. We lose sight of each other because we no longer look at each other when technology is involved. At the same time, technology is on the brink of being intelligent enough that maybe it can intelligently get out of the way; we just have to design it to be so. 

MELTFACE: Perhaps we need designers, perhaps not.  

CASEY: Let’s assume we need them. 

MELTFACE: Noted. In the future painted by the brush of ambient technology, humanity finds a new harmony. It is a world where technology does not dominate but integrates, where the line between the artificial and the natural becomes sometimes irrelevant and other times the most relevant. This harmony is the true promise of ambient intelligence: a world where technology enables us to be more human, not less, to embrace our complexities, to celebrate our idiosyncrasies, and to embark on the eternal quest for understanding, together. 

This vision of the future is not a destination but a direction, a narrative of potentiality where technology and humanity weave together a tapestry of existence richer, more diverse, and more beautiful than either could achieve alone. It is a future built on the foundation of empathy and enhanced by the tools of tomorrow, a testament to the enduring power of the human spirit to adapt, to evolve, and to thrive in the ever-changing landscape of life. 

CASEY: I interpret that as "we designers are at a moment in time with technology where shaping internet-enabled experiences to better fit our natural movements and physiology is more possible. If we advocate for the right design in this new possible world, it will free us from the current state of anti-human device affordances and evolve us to a more harmonious and healthy state of human computer interaction." 

MELTFACE: Nice try. But it's a bit wordy. In short,  


a melting symbiosis between reality and artificiality is upon us


CASEY: There is no design language for this world yet, at least not one we fully understand. What if the founders of Meltface Typeface are trying to create that language?  

MELTFACE: Maybe. 

CASEY: Maybe Meltface Typeface describes a reality where the digitalverse is as alive and intuitive as the minds navigating it. 

MELTFACE: Regardless of the language, it's responsible for facilitating this evolution of interaction—a method where information from our creations recalibrates the fabric of our daily lives without the need for overt engagement. Imagine a scenario where the application, once a beacon of progress, becomes an artifact of a bygone era. In its place, the surrounding ecosystem—endowed with the ability to sense and respond—takes the helm. This isn't about devices merely serving our needs; it's about them anticipating and adapting, crafting a personalized universe from the invisible data trails we leave behind. Here, technology doesn't just facilitate; it empathizes, predicts, adjusts—all in the service of creating a seamless, almost magical user experience that feels like it's been tailored just for us, by us, without us ever lifting a finger, or perhaps just lifting a finger. In this approach, devices help modulate the inputs and outputs to meet human needs. 

CASEY: For the purpose of this guide, let's refer to the design language in the next era of computing as Meltface Typeface. The primary goal of this discussion is to equip designers with a kaleidoscopic understanding of it, versatile enough for their specific disciplines and interests to layer on top of it. If done correctly, every designer should finish this book with fresh ideas on how to recalibrate their role for the future.



CPTR 2

Artificial - this thing isn’t human 

Set Goal: Meltface Typeface can be broken down into four related forms: It's Artificial, It's Spatial, It's Ambient, and It's Actioned. We'll begin with Artificial.

Computer intelligence is evolving into something superhuman yet distinct from human intelligence. When discussing artificial design, we must consider which actions an artificial entity should replace or enhance. As we develop AI, it's crucial to remember that intelligence isn't one-dimensional. This can be seen in the complex communication methods of bees within a hive. Even if we restrict our definition of intelligence to human forms, it encompasses various dimensions such as spatial, logical, linguistic, and social. Therefore, not all forms of artificial intelligence will mimic human intelligence. 

The lines between human and artificial will continuously blur. As this happens, there will be instances where the relationship is rewarding, other times, contentious. As designers, it's important to celebrate and push forward when it's rewarding, and slow down and tinker when it's contentious. At times, you'll have to speak on behalf of the artificial entity or work with model designers to ensure the entity improves itself. Other times, you'll have to evolve a successful augmentation by building on what's most valued in the experience as the entity and human joyfully grow together. More than anything, it's important to stay in touch with the state of the technology and human experience, both of which are bound to change at a pace much faster than we've seen in the past. A designer must be on their toes, in tune with colleagues, media, and research on technical progress and societal trends. That may sound like common sense, but it wasn't too long ago that a designer could get away with just pushing pixels and dreaming up new interface layouts. However, Meltface Typeface will likely require less of that and more design strategy and liberal arts as artificial entities take the wheel for a majority of everyday pixel generation and interface design. 

Slowly read this excerpt from the book Human Compatible by Stuart J. Russell:


Historically, we’ve had a lot of artificial computing experiences with instinctive organisms, whose every response is fixed in advance, but many of these will graduate to adaptive organisms in the next era of the computing experience. 

It's the adaptive organism that will require designers to think in dynamic loops instead of static incremental testing. Imagine a world where you design a set of commands for a design engine to follow as it adapts to each situation. An artificial user experience that wasn't fully crafted by a human hand nor holistically approved by a final design critique. A human interacting with the interface will likely know it's mostly artificial in nature. Like Apple claiming "designed in California" even though we know the product was built elsewhere. If we do this right, people will respect that there is design influence somewhere in Meltface Typeface interfaces, just as Apple fans do. 

When humans know it's artificial, they can be more patient with the computer, more resilient and forgiving than with humans. However, they don't appreciate being deceived (e.g., when social media sites prioritized contentious content over healthier alternatives). Voice assistants, initially advertised as a more convenient solution to many of our daily tasks, didn't work as well as promised. Yet, with advances in artificial intelligence, they've improved in contextual analysis, leading people to revisit them and comment, "it's much better nowadays." Sometimes the same forgiveness extends to actions run by humans, but more often than not, our patience gets lost amidst letdowns. Technology that's not entirely ready but released into the wild will organically uncover the most desired use cases. However, designers should advocate for the reality of where that artificial state is in its maturity cycle and what we should use it for at a given point in time. They must also communicate the risks of such a state, as technology that's not ready is usually unpredictable and untamed.

Artificial doesn't mean emotionless

We acknowledge that artificial emotional expression can be more impactful than verbal language. Emojis have proven to be closer to our feelings than text. Artificial entities created in Meltface Typeface should improve at evaluating their advances within the context of human emotion and rest, inclusive of our more symbolic expressions. 

Artificial plants may not provide the same beautiful relief as real plants. Yet, the artificial ones don't need as much nurturing, and they don't easily crumble or break. They have the ability to withstand the elements. For better or worse, they can't break down and become compost for the earth (at least not most of them that exist today). We make trade-offs. We want real plants because they breathe an energy of life into our surroundings - their scent, movements in the wind, novel characteristics, and lively intuition - but they require more of our time. Part of the work designers need to spend time on is continuously determining what should be artificial and what should be real. It will be critical to have regular check-ins on which tasks make the most sense for humans (dating) and which for artificial intelligence (reading all the books that ever existed) - and to have a category and framework for the gray area when we don’t know. A framework to start from, with a code sketch after Door 10: 

A Door-to-door framework for determining whether a design solution should be artificial or left to the humans in the room 

DOOR 1 - Data, Data, Data

Do you have the data to make an AI experience possible? What is the data availability and quality? AI-driven experiences require large amounts of high-quality data. Evaluate whether such data is available and if privacy and ethical considerations are adequately addressed. Even if you have the data, do you have the right software/device to do what you want with it? 

DOOR 2 - How complex is the decision-making process?

Consider if the decisions to be made require deep contextual understanding, emotional intelligence, and nuanced judgment that AI currently cannot replicate effectively. 

DOOR 3 - Are there significant risks associated with errors? If yes, how will you mitigate the risks?

Assess the consequences of errors or inaccuracies. High-risk areas, such as healthcare or legal advice, may necessitate a human touch, although AI can provide support. 

DOOR 4 - Will economics eventually win out?

If a small business owner knows they can create a logo with AI instead of paying for a graphic designer, they're likely to choose AI, much like all the other economic trade-offs they make to ensure they're spending less so they can continue to make a living. Large corporations don't view the situation any differently than the small business owner. That's the system we're in, so as a designer, you must ask yourself: assuming the technology can work, does it economically make sense for the human or machine to do it - who will win out today, next year, or 5 years from now? 

DOOR 5 - What are the cost implications?

Compare the cost-effectiveness of AI-driven versus human-driven experiences, considering both short-term implementation/computing costs and long-term operational costs. 

DOOR 6 - What is the scalability requirement?

Evaluate whether the experience needs to scale to a large number of users or locations. AI can usually scale. Of course, sometimes this question ventures into whether or not it will replace a large number of human jobs - this isn't the book to answer that question, and I don't yet know if that question should be addressed by designers. 

DOOR 7 - Do humans care?

When considering the tasks we delegate to artificial intelligence, it's crucial to ask whether these are tasks humans enjoy. If so, does AI enhance this enjoyment by simplifying the task, or does it render human involvement unnecessary? There are things we'll want to keep human or feel like they should be kept human. Not too long ago, we figured creativity was something that couldn't be easily replicated by machines, but we know now that certain creative pursuits like graphic design, video production, and more can be done by generative AI. The minute humans realized the machine could create award-winning art, people felt it; it pinched and stung in different ways for different people. On the flip side, some saw it as a creativity multiplier - this contrast will persist as artificial intelligence permeates more actions. Even if a task brings joy to humans, there can still be merit in bringing AI into the process. The advent of the printing press illustrates this well; while it reduced the demand for calligraphers, it didn't extinguish the appreciation for calligraphy as an art form, which some continue to pursue and others are willing to pay a premium for.

Furthermore, the printing press revolutionized the dissemination of knowledge, making books more accessible and affordable, thereby fostering education, literacy, and the spread of ideas across the globe. Thus, our inquiry should pivot from merely considering what humans enjoy doing to a deeper reflection on how AI impacts human identity and self-worth. If there's a risk of diminishing the human spirit, we have a responsibility to additionally attempt to lift that spirit up. Our aim should be to design artificial solutions that enhance rather than replace human roles, or to identify new, advanced skills humans can develop as AI acquires desired and enjoyable talents. Ultimately, if you're handing an artificially intelligent tool to a human that replaces or augments their trade, you should be transparent about how it works, its training data, and most importantly, give them some sense of control in the workflow - even if that control is minimal, ensure you highlight and bold the control you've granted so they feel empowered, not replaced. It's your job to advocate for this in the AI UX of the future. 

DOOR 8 - How dynamic is the environment or context?

If the context in which the system operates changes rapidly, consider whether AI can adapt quickly enough, or if humans would be better suited/needed to navigate these changes. 

DOOR 9 - What are the ethical and privacy considerations?

Analyze the ethical implications, including privacy concerns, bias, and fairness, and determine if an AI-driven approach can meet ethical standards. Who is the voice of artificial ethics? As designers, it's your job to check decisions against standard ethics while also collaborating with others to determine who is the voice of ethical decision-making in your organization and for yourself. Of course, there are ethical decisions all humans can agree on, but many of the ones we face as designers will be subject to diverse viewpoints. 

DOOR 10 - Is there already an artificial entity nearby?

Or are you venturing out into otherworldly territory where artificial intelligence doesn't really exist? If the latter, write a clear statement of defense for why now is the time to introduce an AI experience for the first time. 
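
Frameworks like this are easier to apply when they're written down as a working checklist. Here's a minimal sketch in Python - the door names, question wordings, and scoring thresholds are hypothetical simplifications, not a validated rubric - of how a team might encode the ten doors and tally where a proposed experience leans:

```python
from dataclasses import dataclass

@dataclass
class Door:
    """One gate in the door-to-door framework."""
    name: str
    question: str
    favors_ai: bool  # True if a "yes" answer points toward an AI solution

# Hypothetical encoding of the ten doors; questions abbreviated from the text.
DOORS = [
    Door("Data", "Do we have enough high-quality, ethically sourced data?", True),
    Door("Complexity", "Does the decision demand nuance AI can't yet replicate?", False),
    Door("Risk", "Are error consequences severe and hard to mitigate?", False),
    Door("Economics", "Will economics favor the machine within 5 years?", True),
    Door("Cost", "Is AI cheaper across implementation and operation?", True),
    Door("Scalability", "Must this scale to many users or locations?", True),
    Door("Human care", "Do humans want to keep doing this themselves?", False),
    Door("Dynamism", "Can AI adapt fast enough to the changing context?", True),
    Door("Ethics", "Can an AI approach meet our ethical standards?", True),
    Door("Precedent", "Is there already an artificial entity nearby?", True),
]

def walk_doors(answers: dict[str, bool]) -> str:
    """Tally yes/no answers into a rough lean; the gray area stays a gray area."""
    ai_votes = sum(1 for d in DOORS if answers.get(d.name) == d.favors_ai)
    if ai_votes >= 7:
        return "leans artificial"
    if ai_votes <= 3:
        return "leans human"
    return "gray area: categorize, frame, and revisit"
```

The gray-area branch matters most: it is the category the framework explicitly asks you to keep around and revisit rather than force into a verdict.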


Testing the usefulness of artificial experiences 

Prototype the artificial solution with only humans. As a designer, you can act as the AI. This will uncover if humans feel threatened by the solution's abilities and how they'd benefit from it. It will also reveal the most important use cases without worrying about technological constraints or quirks. Ask potential users, "If you had an AI to do this, how would you use it and what would it take for you to use it as part of your action flow?" Semi-pro tip: before you act as the AI, learn what you can about the technical feasibility (limits and constraints tied to your vision). You don't have to act like the limited version in every case, but you should stress test the limits against human expectations. 

Generally, humans associate the word 'artificial' with fakeness. So when you're designing something artificial in nature, know that humans are likely to be skeptical about its ability to be anything other than artificial to them. Your first job as a designer is to showcase usefulness and start building trust with the outputs. Semi-pro tip: don't be afraid to ask a human when it felt artificial and when it felt human-like — and map that on a spectrum for fine-tuning the AI experience over time. Which one you'll ultimately want is completely dependent on the context of the solution. 

Meltface Typeface is artificial in nature and AI agents will become the stewards of understanding its nature. These agents will have the ability to exist in the reality space around us, which opens the door to novel design affordances based on our natural way of being. 


CPTR 3

Spatial - around my reality

Set Goal: Explain design thinking for spatial worlds.

Meltface Typeface is all around us, in our reality space, in a form known as spatial computing. The essence of spatial computing is its ability to understand and map the geometry of the physical environment, recognize objects and spaces, and create interactive experiences that blend the real and digital worlds. It allows content to be contextually integrated with the physical world - imagine you are in your home, and AI advises you to move your Monstera houseplant to a shadier spot because slight discoloration on the leaves has probably been caused by too much sun. In this case, a camera from a device has analyzed the surrounding environment, crunched the data via an artificially intelligent engine, and alerted you to take action on it. 

Shared Possibilities. Since spatial computing comes to life in our physical environment, it's important to remember that this environment can be shared with other humans - in what we call "shared space". While we share space with others as we use our computers and phones, the environment itself is not part of the interaction. There are three different ways to think about shared space for Meltface Typeface: 

  1. Shared computing space - where two people in the same environment engage with the same AI experience in the form of virtual objects, volumes, and space. 

  2. Shared physical space - where two people in the same environment have their own, separate AI or virtual experiences in the same physical space. 

  3. Shared device space - where multiple devices from one or more participants can interact and/or be part of the experience. 

In all cases and combinations, new affordances and interactions are to be discovered by designers. 

In shared computing space, there must be ways for both users to interact with virtual selections and co-interact with their AI agent. The agent must recognize both participants as separate inputs and act according to any set permission hierarchies or engagement rules. 

In shared physical space, the agent must recognize their human and ignore commands from others, while still registering those commands and observing other humans as part of the relevant context to their human's environment. For example, if a human were to ask their agent to locate an online store to purchase a coat like the orange one that a person over there is wearing, the agent could, in theory, locate and identify the right coat. In another example, it could hear an argument between you and your partner and, after the argument, be able to break down and analyze the two participants in the argument and provide guidance on where you showed maturity and where you didn't (according to some maturity benchmarks you've programmed it to score against). 

In shared device space, there must be intuitive selection states to be activated by the user when they want two devices to communicate in spatial worlds (e.g., Apple's Vision Pro device allows you to work from your Mac while wearing the headset). The designer is responsible for identifying the likely spatial environments where their use cases will exist, and listing shared computing, physical, and device space scenarios that may enhance or worsen the user’s experience through interference, confused AI agents, or a cluttered space. 

Hazards Beware. Because we're in real space, hazards are more than simple errors on our screens. Hazards can be physically harmful in spatial experiences. The designer is responsible for identifying the hazards and plotting them on a matrix that includes likelihood from unlikely to likely on one axis and severity from less serious to most serious on the other axis. You can create more matrices on your own. 
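
As a sketch of that exercise - the hazards and the simple 1-5 scale on each axis below are hypothetical assumptions, not drawn from this book - a designer could keep the matrix as data so it's easy to re-sort as likelihoods change:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    likelihood: int  # 1 = unlikely ... 5 = likely
    severity: int    # 1 = less serious ... 5 = most serious

def triage(hazards: list[Hazard]) -> list[Hazard]:
    """Sort hazards so the likely-and-serious ones surface first."""
    return sorted(hazards, key=lambda h: h.likelihood * h.severity, reverse=True)

# Hypothetical spatial-computing hazards, for illustration only.
hazards = [
    Hazard("Virtual window occludes an oncoming cyclist", likelihood=2, severity=5),
    Hazard("User backs into furniture while gesturing", likelihood=4, severity=2),
    Hazard("Tooltip flickers and distracts the user", likelihood=5, severity=1),
]
for h in triage(hazards):
    print(f"[{h.likelihood * h.severity:>2}] {h.description}")
```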

Reality Can Be Whacky AF. Some spatial environments have a lot going on. It's important to recognize complex environments that may overwhelm or confuse an AI agent. For LLMs, a context window is a measure of how much information can be accessed and considered when generating a response. As context windows grow, LLMs are able to accept more information at once and get better at ignoring irrelevant context in a noisy string. Because some agents will be responsible for identifying a real-world environment and all of its context, it's important that there are controls built in for a human to clear up any confusion by focusing the agent on a smaller set of the environment or adding context clues so it can better calculate what it's seeing and hearing. Even with a large context window, added context from the human will speed things up by reducing errors and hallucinations. 
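
A minimal sketch of that focusing control, assuming hypothetical observation strings and tags (the device APIs and the downstream model call are stand-ins, not any real product's interface):

```python
from dataclasses import dataclass, field

@dataclass
class SceneContext:
    """Everything the agent observed, plus human-supplied focus controls."""
    observations: list[str]                         # raw descriptions of the environment
    focus: set[str] = field(default_factory=set)    # human-selected subset tags
    hints: list[str] = field(default_factory=list)  # human-added context clues

    def to_prompt(self) -> str:
        # Keep only observations matching a focus tag; keep all if no focus set.
        kept = [o for o in self.observations
                if not self.focus or any(tag in o for tag in self.focus)]
        return "\n".join(self.hints + kept)

ctx = SceneContext(
    observations=["kitchen: kettle boiling", "window: rain", "desk: laptop open"],
    focus={"kitchen"},
    hints=["The user is making tea; ignore everything outside the kitchen."],
)
print(ctx.to_prompt())  # a smaller, clearer context string for the model
```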

Imagine a world where you're heading to a coffee shop, and an agent recognizes your location, tying it to a business listing from the web. This would enable it to contextualize your space more easily. But what if you could tell the agent that you're going to meet a volunteer tutor from your college to discuss your latest assignment, and it should record the conversation and summarize the notes? In theory, if this was a friend you've authenticated sharing with on your AI device, it could also tell you what this friend typically orders from this coffee shop. You could thank them for their help by ordering for them and paying for their drink before they even get there. Sounds mildly creepy, but as users gain more control over their private data and who gets access to it, things like favorite orders from community spaces may become more commonplace among friends - today’s version of sharing your location. 

The Elements. Spatial AI devices will be able to recognize and adapt experiences to localized elements like weather, which comes with a host of environmental sub data like humidity, sun exposure, wind chill, and more. A human can provide more context to its AI as to whether or not they find the experience outside to be pleasurable on a 60-degree day with a cooler wind or if they'd rather it be 90 with a warm blanket of sun. Over time, the AI can use this to help assess emotion and activity related to weather patterns. 

Eye On It. In spatial worlds, a human can use their eyes as inputs and rely on basic human gestures to interact with their device and the world around them: a human's physical hand and eye movements are translated into commands. Interface selection can be intentionally programmed for different environments. At home, you can hold your arms out in space and tap two fingers together to command. Out in public, the camera pays more attention to your hands on your knees or down by your side if you are standing - this way, the interaction is less distracting for others around you. Your eyes already act as agents compressing the world to a view that focuses your attention. In spatial computing, your eyes become a primary input device that strives to be as close to human intent as possible. In life, if we stare, we're likely interested in what we're staring at. AI devices can pick up on our stares and use prolonged timing of those stares as a trigger for selection or intent. If done right, this won't require the user to work much harder than they do today with their eyesight. 

The designer is responsible for calculating any possibility of strain on both the eyes and gestural movement of a human, taking into consideration the time, cadence, and volume of a given action. The ideal state is to match human tendency when technology is absent - but we fully realize there may be eye and gestural inputs that don’t tire us as they make our experience in the world more efficient or entertaining. 

Avoid eye strain. Because the user can use their eyes to navigate, it's critical that you add hover or hover-like effects to any interactive elements so users know where their gaze is landing and what effect it will have.

When navigating with eyes, buttons or interactive elements can have a tooltip when a user focuses their attention on it. Make sure they are easy to understand, explain, and perform - comfortable without straining the user's experience. 
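
For the dwell-as-intent idea above, here's a minimal sketch - the 0.8-second threshold is a hypothetical starting point, not a standard - of a selector that restarts its clock whenever the gaze moves and fires only after a sustained stare:

```python
import time

DWELL_SECONDS = 0.8  # hypothetical threshold; tune per user to avoid strain

class GazeSelector:
    """Fires a selection when the user's gaze rests on one target long enough."""
    def __init__(self, dwell: float = DWELL_SECONDS):
        self.dwell = dwell
        self.target = None
        self.since = 0.0

    def update(self, target: str | None, now: float | None = None) -> str | None:
        now = time.monotonic() if now is None else now
        if target != self.target:            # gaze moved: restart the clock
            self.target, self.since = target, now
            return None                      # render hover feedback here
        if target and now - self.since >= self.dwell:
            self.since = now                 # reset so it doesn't re-fire every tick
            return target                    # treat as an intentional selection
        return None
```

Rendering hover feedback the moment the clock restarts is what keeps dwell selections from feeling like accidents.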

Pinch Me. Our gestures, once tethered to the physical, become phantoms, shaping datascapes around us. We are mimes, our movements sculpting the world in pinches and swipes. This is not science fiction, but the dawning of a new design paradigm. The canvas is the air itself, and our gestures are the brushstrokes that paint reality anew. 

  • Make sure gestures don't conflict with a device’s OS gesture standards or basic human hand movements when gesturing or talking in the real world. 

  • Make gestures reliable - so not too complex in nature. Test and ensure there's a low failure rate of all interaction methods. 

  • Consider humans who will be using assistive technology to interact; design for accessibility. 

  • Consider the cultural context of any gesture - gestures mean different things to different people. 

Lights, Camera, Actions. Spatial computing introduces a host of lighting considerations. Designers not accustomed to three-dimensional spaces will need to understand the nuances of space and distance. Digital projections/interfaces can incorporate glass material, adding a level of opacity and shading as if there were a piece of glass in space allowing you to see through it. This ensures you don't lose your place in the real environment, while providing enough contrast with reality to distinguish it as separate. AI adjusts the vibrancy of this content based on the lighting. It can also scale and place content within arm's reach, where it's most ideal for the user's eyes and gestural triggers. Sound is placed within earshot in 360 degrees - some sounds are eliminated through selective noise cancellation. Depth and shadows are everywhere, providing contrast between windows, their hierarchy, and sometimes placing them as objects in reality. 

The designer is responsible for applying the right shading, placement, window treatments, and human controls in spatial environments, then defining when and how they should adapt to the changing world.

It's strange to envision a future where we advocate for more ambient artificial experiences while strapping a computer to our faces. Yet, it may be true that removing the need to look down at another object versus projecting in our reality will indeed create a more ambient experience overall. Also, spatial doesn't mean the camera or projection device is strapped to our faces. It could take the form of a pin on our chest projecting onto our hands or simply voice response.  

Humans should be active. Moving around in the world makes us feel more alive. The mobile phone allowed us to take our computers on the move when we exercise and travel, but also confined some of us to our couches and beds when we got home. The human gift of "movement" emerges as a profound testament to our existence in the cosmos. It's not merely the mechanical displacement of limbs or the traversal from point A to point B; rather, it's the embodiment of our indomitable spirit, a silent language through which we articulate our defiance against the static. Movement in our space is our dialogue with time, a way to inscribe our presence in the fleeting moments we are afforded. Meltface Typeface should encourage movement as long as it’s comfortable or natural. Exercise should be much more enjoyable with an agent to motivate us, create virtual obstacles, and concoct elaborate stories like local zombies chasing us as we run around our town. The problem with our smartphone existence is that the movement required to engage with the device isn’t fulfilling in any way. While the endless scroll and tapping are now part of us, imagine a future where movement and interacting with technology is more pleasing and has even more purpose for making us feel more in touch with our reality. I’m not talking about inventing dance moves for every input; I’m simply stating the fact that spatial computing allows for a whole new discovery of movement in the world that our past computing experiences couldn’t dream of achieving. 

The designer is responsible for identifying delightful movements and transforming them into affordances for spatial experiences. Taking out our smartphones to capture a moment is one of the worst experiences we've created. Ideally, we could enjoy a moment and ask the device to store the last 2 minutes after the moment has passed, so we could view it later. Currently, it's almost as if we're afraid to let a moment be and exist in time, then pass, as if we're prioritizing the documentation of our experience over the experience itself. This is a net loss for humans in most cases.


CPTR 4

Ambient - there, but not

Set Goal: Preach about an ambient future

Imagine a world where technology doesn't shout but whispers, solving problems not with fanfare but with a subtle nudge in the right direction. It's about creating experiences that feel as natural as breathing, where the solution is so integrated into the flow of life that it becomes invisible, almost magical; technology as the ultimate facilitator, not the protagonist. It's there, enhancing every moment, yet it steps aside, letting us remain the main characters in the story of our lives. It's a reminder that in the pursuit of making things better, sometimes the most powerful action is to remain unseen, letting the seamless interaction between human and machine create a narrative of simplicity, efficiency, and, ultimately, humanity; enhancing each moment, yet retreating into the backdrop of our consciousness, leaving us free to marvel at the simplicity and complexity of being.

We want to use ambient technology in the moments of life where being more present and available is critical to our wellbeing and that of those around us - or the moments where we just need a break or more time for other things. One could argue, just leave your smartphone at home or in your pocket on silent, but the truth is, we don’t have to ban the device or experience; we just have to improve it and bend it to what makes humans uniquely happy: our social and mindful experience in the world. It’s about interrupting less, and pursuing a more ambient future state for the technology in our lives is necessary regardless of how interconnected human and machine become. Technology that’s interruptive, distracting, inconvenient, unproductive, or harmful should always face opposition.

The designer is responsible for maintaining minimalism by focusing on critical interactions only - operating as much in the background as possible with each interaction feeling natural to our state of being and the context that surrounds it.

When sensible, an ambient experience will be an extension of the user's memory, proactively offering information and reminders based on context, time, routines, and the human’s historical preferences or actions - yet humans should have the ability to easily disable or mute the ambient computer at any point. Redundancy, fail-safes, and quick recovery methods are key and help build resilience in an agent-to-human relationship. Ideally, the personal computer and its system should be interoperable, communicating seamlessly with other computers and systems. Designers may need to create the right cues for critical points of connection. Cues like ambient light changes, soundscapes, or gentle vibrations can be used by designers to convey information and feedback - just avoid any jarring transformations.
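
One way to sketch those cue guardrails in code - the cue names, importance scale, and thresholds below are all hypothetical assumptions - is a small policy where subtle channels carry most information and the human's mute always wins:

```python
from enum import Enum

class Cue(Enum):
    LIGHT = "ambient light shift"
    SOUND = "soft soundscape"
    HAPTIC = "gentle vibration"

class AmbientPolicy:
    """Delivers feedback through subtle cues; a human mute always wins."""
    def __init__(self):
        self.muted = False  # the human can flip this at any point

    def notify(self, importance: int) -> Cue | None:
        # importance: 0 (ignorable) .. 10 (critical)
        if self.muted or importance < 3:
            return None          # stay invisible; don't interrupt
        if importance < 6:
            return Cue.LIGHT     # gentlest channel for routine information
        if importance < 9:
            return Cue.HAPTIC    # personal, non-jarring escalation
        return Cue.SOUND         # reserve audible cues for the rare critical case
```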

With Meltface Typeface, ambient computing is a language between humans and machines that allows a digital entity to complete human-directed tasks without a ton of supervision. This brings us to the core of the next era, AI agents.


CPTR 5

Actioned - do things on our behalf

Set Goal: Prepare designers for AI agents

The next era of computing brings AI agents that take actions on our behalf via more ambient means. These agents can assist humans in automating steps of our work (a co-creator) and/or complete tasks on our behalf. This is the most revolutionary form of Meltface Typeface, so it deserves the most attention in our book. The fundamental difference between the automation we have today and AI agents is that an AI agent has decision-making capabilities. Automated workflows don’t have the autonomy an agent has in an AI workflow - they can’t change and adapt as new information is discovered and processed.

What humans expect from AI agents: something that acts on our behalf and applies a human-like level of logical reasoning, purposeful planning, wisdom, wit, imagination, and creativity. Extending the theory to multiple agents with these capabilities, and sprinkling in personal preferences, produces many interesting and complex behaviors where our expectations aren’t as clear - an excerpt from Human Compatible by Stuart Russell.

Will our AI agents be involved in every aspect of our lives? Maybe. But likely, they'll always take on different forms for different tasks, much like humans. They'll take the form of software agents that debug code, authors that write screenplay adaptations, travel agents that book trips for us, NPCs in video games with serendipitous behavior, financial planners that assist in managing wealth, love coaches that teach us how to love ourselves, marketing specialists that launch and test campaigns, etc. And within each of those categories, there will potentially be multiple agents working together to achieve goals.


Today, a single AI agent’s performance will worsen the more roles and responsibilities you assign to it. Multiple AI agents performing as a collective, also known as agent swarms, allow you to scale the breadth of what AI agents can do


Agent swarms also allow you to save on tokens when running AI processes. In fact, some are already developing manager agents to assign tasks to the agent swarm. While this workflow is far from standardized, it’s important to note that when talking about AI agents, the future is likely a world of multiple agents for individuals and business roles.
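
A toy sketch of that manager-and-swarm shape, with hypothetical roles and skills; in a real system, `perform` would wrap model calls and the routing would be far less naive:

```python
class Agent:
    """A narrow specialist; in practice `perform` would wrap an LLM call."""
    def __init__(self, role: str, skills: set[str]):
        self.role, self.skills = role, skills

    def can_do(self, task: str) -> bool:
        return any(skill in task for skill in self.skills)

    def perform(self, task: str) -> str:
        return f"{self.role} completed: {task}"

class ManagerAgent:
    """Assigns each task to the first specialist that claims the skill."""
    def __init__(self, swarm: list[Agent]):
        self.swarm = swarm

    def delegate(self, tasks: list[str]) -> list[str]:
        results = []
        for task in tasks:
            worker = next((a for a in self.swarm if a.can_do(task)), None)
            results.append(worker.perform(task) if worker
                           else f"unassigned (needs a human?): {task}")
        return results

swarm = [Agent("researcher", {"search", "summarize"}),
         Agent("booker", {"book", "reserve"})]
print(ManagerAgent(swarm).delegate(["search flights", "book hotel", "negotiate refund"]))
```

Note the unassigned branch: a swarm needs an explicit place for tasks no specialist claims, which is often where the human re-enters the loop.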

The future of agents has mysteriously high potential. Therapists are responsible for recalling everything discussed during sessions. However, they don't witness your daily life and, even if they did, they'd struggle to retain the details because they have other patients. Imagine a scenario where you were a therapist's only client and they could see or remember every detail shared with them. You could share self-help books and authors you connect with, and they could tailor their advice accordingly. This is the potential of future assistive agents. Now, picture other life areas where understanding this level of context would be beneficial. Better yet, imagine this agent assisting another human therapist in helping you. Your therapist would act as a filter for hallucinations, applied ethics, and humane wisdom.

The way we build intelligent agents depends on the nature of the problem we face. This, in turn, depends on three things: first, the nature of the environment the agent will operate in — a game board is a very different place from a crowded freeway or a mobile device; second, the observations and actions that connect the agent to the environment — for example, the agent may or may not have access to the phone’s camera so that it can see; and third, the agent’s objective — teaching the opponent to play the board game better is a very different task from winning the game - an excerpt from Human Compatible by Stuart Russell.

Furthermore, is the environment in which the agent functions dynamically changing? Are there other agents it's interacting with? Does the length of the horizon over which decision quality is measured matter according to the objectives? These are all important questions within the context of the problem we're solving.


The relationship with AI agents is always in flux, proactively adapting to changing conditions, a stark contrast from the stoic designs we see today


It's a feedback loop where both the human and agent are constantly evolving. Therefore, we need mechanisms in place to redirect undesired changes, not reset, but rather reconfigure. Data is at the heart of this reconfiguration. Data is not directive; it's merely facts used in conjunction with other facts to help produce outcomes. A designer is responsible for improving interactions across the data lifecycle, from data capture to data output, through data explanation.

What used to be a one-way street — a human pushes a button, the machine responds as programmed — is now a two-way street, creating a cycle of change.

As a designer, this means you're now in charge of how effectively, easily, and safely the machine and human exchange information. In AI design, human-machine interaction isn't about designing for day 0 and inevitable degradation, but about designing the parameters of an evolving and ongoing relationship.

Case in point, an agent's abilities can be on a spectrum that users can adjust. For instance, if a human wants an agent to be more novel or creative in a moment, they can increase the novelty of a response or task completion - like introducing more risk for hallucinations when prompting an LLM.
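
In today's LLM terms, that spectrum often maps to sampling parameters such as temperature. A sketch, assuming a hypothetical 0-10 user-facing slider and a linear mapping (actual parameter names and safe ranges vary by model API):

```python
def novelty_to_sampling(novelty: int) -> dict:
    """Map a 0-10 user-facing novelty slider to model sampling settings.

    Higher novelty means more creative output and more hallucination risk;
    the UI should say so, since the human is knowingly trading reliability away.
    """
    novelty = max(0, min(10, novelty))
    return {
        "temperature": 0.2 + 0.16 * novelty,  # ~0.2 (safe) up to ~1.8 (wild)
        "top_p": 0.8 + 0.02 * novelty,        # widen the candidate pool a bit
    }

print(novelty_to_sampling(2))   # conservative: factual reminders, task completion
print(novelty_to_sampling(9))   # playful: brainstorming, zombie-run narration
```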

An agent understands how to interact with software (the web, databases, APIs, and more). In theory, we want our agents to learn how to do this generally, not focus on specific functions for specific sites, databases, etc. So instead of an agent performing functions like "search Google, lookup on LinkedIn, send email," we want the agent to understand how to read an app and perform behaviors like click, scroll, wait, etc. A ton of apps don't have API endpoints, so until every possibility has an endpoint, we should aim to teach assistants general behavior traits of a human using the internet and its various interfaces.
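
That general-behavior idea can be sketched as a small action vocabulary the agent composes across any app, instead of one function per site - the interface below is hypothetical, not any existing framework's API:

```python
from typing import Protocol

class Surface(Protocol):
    """Anything the agent can operate: a web page, a native app, a kiosk."""
    def find(self, description: str) -> str: ...          # locate an element by meaning
    def click(self, element: str) -> None: ...
    def type(self, element: str, text: str) -> None: ...  # enter text into an element
    def scroll(self, amount: int) -> None: ...
    def wait(self, seconds: float) -> None: ...

def find_coat(surface: Surface, query: str) -> None:
    """A site-agnostic flow built only from general behaviors."""
    box = surface.find("search box")
    surface.click(box)
    surface.type(box, query)
    surface.wait(1.0)        # let results load; no site-specific API needed
    surface.scroll(500)
    surface.click(surface.find(f"result matching '{query}'"))
```

The same `find_coat` flow runs against any storefront that can be described in those five behaviors, which is exactly the generality the paragraph above argues for.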

Trust is paramount for the movement of ambient agents to thrive. Designers are responsible for deciding when/if it makes sense to expose the agent's workflow and interactions as screen recordings, data logs, or in whatever format makes sense - this should be accessible by the human in control of the agent and could go a long way in building trust in the initial days of reinforcement learning.
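
A sketch of that transparency layer with hypothetical fields: an append-only log where every agent step records what it did, why, and whether its human can undo it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    action: str          # e.g., "click", "submit", "purchase"
    target: str          # what it acted on
    rationale: str       # why the agent believed this step served the goal
    reversible: bool     # can the human undo it?
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AuditLog:
    """Append-only trail the human in control can inspect at any time."""
    def __init__(self):
        self._records: list[ActionRecord] = []

    def record(self, r: ActionRecord) -> None:
        self._records.append(r)

    def replay(self) -> list[str]:
        return [f"{r.at:%H:%M:%S} {r.action} -> {r.target} ({r.rationale})"
                for r in self._records]
```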

To achieve a truly ambient future, designers must understand that humans will train their own agents to help them learn. This involves activating a model to capture personal interaction data. Therefore, there will be some distance between a model's intended behavior, as envisioned by the designer, and its actual output, with the agent's human acting as a sort of black box layer on top. Designers should advocate for this privacy and design intuitive interfaces and flows. This will allow humans to effectively train their agent's model while ensuring any captured data remains private and under their control once stored. The designer's role becomes more ambient in this future, with the agent's human responsible for the final step in achieving truly ambient states.


CPTR 6

Philosophy for AI agent design


Set Goal: Share ideas on how designers can be relevant in the era of AI agents

Designers will need to understand what an agent can connect to and act on — we live in a world of fragmented and isolated software service offerings. To ground this in the real world, let's use apps as an example. Apps rarely interact with each other. They are shy and introverted. AI agents can put us in the comfortable position of no longer having to plan and book trips using numerous different services, but rather bundling them together in an ecosystem of access. Our agents will communicate with the apps for us, so we'll interact with them less over time, and likely, the interface will fundamentally change with time. That's where designers come in. This future could look and feel familiar, or completely alien. With Meltface Typeface, we do away with the tedious intermediate steps of: pulling phone out > waking phone > unlocking phone > opening app > seeing content > interacting and deciding what to do > jumping link to other app > browsing options > locking phone > putting it away. This future is about fewer peripheral screen devices but not screenless. There is design thinking that needs to happen for more agent-like digital interfaces, auditory dialogue, and gestural motion across device forms.

Designers must understand that human-computer interaction for AI experiences takes many forms and it’s important to focus the form. The future of interaction lies in fostering human-machine collaboration, which encompasses concepts like hybrid intelligence (Xu et al., 2023), teaming (Johnson et al., 2019), process (De Cremer et al., 2021), and co-creation. This necessitates designing autonomous intelligent systems that leverage knowledge not only from human machine interactions but also from the rich tapestry of human-human interaction, including aspects like teamwork, collaboration, trust, autonomy, and authority.

After performing the typical due diligence of refining the form and human need, the designer is responsible for justifying why this experience could benefit from an AI agent

Common principles for justification include:

  1. Reducing the time to task

  2. Making the task easier

  3. Personalizing the experience to an individual

A designer is responsible for identifying which modalities are most appropriate for an AI-driven experience. Ambient ambitions will require us to think about "AI experiences" versus just screens or interfaces. Agents, by nature, are conversational. Voice input alone is probably not the most ideal user experience for end-to-end agent interaction. While this may improve over time, there will always be opportunities to enhance a human's experience with multiple modalities. Most designers believe they do this today, but Meltface Typeface will require them to expand on typical drafting, prototypes, and deliverables to include a new dimension of agent interactions. Imagine you're designing a new classroom interface for 3D sculpting — you'll want to translate how your interface will cooperate with voice-driven agents and follow any best practices for reducing failure rates with your application. Your job as a designer is to reinforce the human's goals and tasks, then create a multimodal map of potential ideal states and relationships. This will become increasingly important as static websites and apps become less static and have to optimize for interacting with agents. The range of form factors for the devices we use will likely expand to glasses, pins, pocket gadgets, radios, and more. While each will have its own specific UX modalities and interaction models, the best designers will direct an ambient agent to know when the ecosystem should work together to improve the experience and when it should come apart to contain it.

Modalities an agent can pursue include visual, auditory, gestural, emotional, and tactile interactions. Visual elements, such as images, icons, and animations, can provide users with valuable contextual information and feedback. Auditory cues, such as sounds or voice responses, can help reinforce user actions and provide an additional layer of engagement. Gestural interactions, such as hand, finger, and arm movements, can organically imitate our inclination to gesture in the real world and feel innately human to us. Emotional interactions involve readable expressions of our faces and reactions to enrich responses at a human level. Tactile interactions, such as touch (taps, swipes, scrolls) or haptic feedback (vibration, temperature), can further enhance the sense of immersion and realism. By carefully combining these modalities, designers can create experiences that are not only ambient but also highly human and intuitive to use. To elevate an ambient experience with interface cues using Meltface Typeface, a designer can incorporate projections, 3D objects, animations, and shaders. Projections can be interfaces mapped on ourselves or the world around us that assist with tasks that are difficult to accomplish through text alone. 3D objects can add depth and realism to our spatial projections, creating a more immersive and visually ambient experience. Animations can add a dash of character and life. Shaders, which control the appearance of objects and surfaces, can enhance the visual aesthetics and create a more polished environment.

Combining one or more of these modalities with an interface can create more versatile and adaptable user experiences. Imagine a cooking experience that might allow users to navigate a projection of recipes with gestures, order ingredients with voice commands, and use eye tracking as an indicator of where they are in the recipe steps while they're cooking. This allows their hands to be free and the interface to be controlled. The designer is responsible for assessing when it's best to use multiple modalities and when to apply them — considering factors such as the context of use, the tasks being performed, and the users' preferences and abilities.
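
The multimodal map for that cooking experience could start as simply as a task-to-modality table with fallbacks - the entries and context flags below are hypothetical, for illustration:

```python
# Hypothetical multimodal map for the cooking experience: each task gets a
# primary modality suited to messy hands, plus a fallback for noisy kitchens.
MODALITY_MAP = {
    "browse recipes":     {"primary": "gesture", "fallback": "voice"},
    "order ingredients":  {"primary": "voice",   "fallback": "touch"},
    "track current step": {"primary": "gaze",    "fallback": "voice"},
    "set timers":         {"primary": "voice",   "fallback": "gesture"},
    "confirm a purchase": {"primary": "touch",   "fallback": "voice"},  # high stakes: deliberate input
}

def choose_modality(task: str, noisy: bool = False, hands_busy: bool = False) -> str:
    """Pick a modality for a task given the current context."""
    entry = MODALITY_MAP[task]
    primary = entry["primary"]
    if noisy and primary == "voice":
        return entry["fallback"]             # a loud kitchen defeats voice input
    if hands_busy and primary in ("touch", "gesture"):
        return entry["fallback"]             # flour-covered hands defeat touch
    return primary
```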

But wouldn’t it be great if we could just use our voice with agents? The future where you simply say ‘do that’ and it’s magically done is enticing, but this AI would likely need to ask several questions before executing a task, or worse, you might have to figure out precisely what to say (Olesia Vdovenko). This is a critical point for modality constraints. From a human factors perspective, different tasks demand varying levels of mental resources. According to Susan M. Weinschenk, the order of loads, from most ‘expensive’ to least, is:

  • Cognitive

  • Visual

  • Motor

Clicking, for instance, is less of a burden than thinking. Speaking, though primarily a motor activity involving muscle coordination, also integrates cognitive aspects. It demands mental processes for language comprehension, thought formulation, and organization of ideas into coherent speech.

From a productivity perspective, performing some tasks using your voice may be much slower than using a screen, especially if you want to correct errors or need to repeat commands. It is also important to remember that AI can "hallucinate", misinterpreting or fabricating elements in its response. Verifying the accuracy of AI-produced information is easier on a device with a screen. We should remember that visual media isn't just for entertainment or distraction. It's a critical communication tool and often key for executing daily tasks. Imagine trying to understand a complex infographic or a detailed picture via a voice description alone.

Humans like simple tools and interactions. For every design, consider how the integration of modalities and the interface expands the flexibility of the experience. You'll have to figure out where the line of flexibility is crossed to the point of overwhelming the user. This is a long-standing principle of user experience (called the flexibility–usability tradeoff) — the more flexible a system is, the less usable it tends to be. In other words, the bigger and more complex the product, the harder it is to deliver a pleasant experience. In addition to building in the right amount of constraint, for every interaction, imagine it done 100 times in a day. Don't tire a human — avoid physical and mental strain (turning a head side to side instead of up and down is much less tiring).

This may sound like common sense, but you'll need to test and prove your theories for multiple exchange scenarios. As engineers and product builders move full-steam ahead into a territory of unknown agent possibilities, a designer is responsible for finding the magical parts where an agent's capabilities meet enjoyable human interaction, while avoiding inconvenience due to inaccuracy or unpredictable environments.


Unlike before, designers now need to maintain a certain level of comfort with the unknown - an agent's capabilities could improve with a little time, a lot of time, or never.


This isn't a typical static usability test that you can set and forget for a year. The designer is constantly observing and tinkering with agent capabilities and, at times, allowing the agent to take over full interfaces and interactions within the general bounds you set.

Designers are responsible for determining the contextual assistance necessary for an ambient agent to perform better and delight humans. From prompt suggestions to commands - think of word processing software where a human highlights a piece of text and has the contextual assistance option to: "shorten it" or "rephrase." Create a list of the most critical contextual assistance options you can imagine for your ambient agent.
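A registry like the one below can be a useful drafting artifact for that list. This is a minimal sketch; the contexts and commands beyond the word-processing example are hypothetical:

```python
# A minimal sketch of a contextual-assistance registry. The word-processing
# commands come from the example above; the other contexts are hypothetical.
CONTEXTUAL_COMMANDS = {
    "text_selection": ["shorten it", "rephrase", "fix grammar"],
    "image_selection": ["describe it", "remove background"],
    "calendar_event": ["reschedule", "summarize agenda"],
}

def suggest_commands(context_type: str, limit: int = 3) -> list[str]:
    """Surface the most critical assistance options for the current context."""
    return CONTEXTUAL_COMMANDS.get(context_type, [])[:limit]

print(suggest_commands("text_selection"))  # ['shorten it', 'rephrase', 'fix grammar']
```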

Not all ambient agent experiences will involve an exchange of words, but they will require designers to understand the principles of good conversation design.

  1. Clarity and Simplicity: Conversations with agents should be straightforward and easy to understand, avoiding jargon or complex language that could confuse users. The goal is to facilitate smooth interactions without requiring users to think too hard about what they need to say or do next.

  2. Consistency: Consistent language, tone, and interaction patterns help humans feel more comfortable and confident in their conversations with the system. This consistency also aids in setting and meeting user expectations throughout the interaction. So while agents and their human adapt, consistent modes are essential.

  3. Design for Mistakes: This principle applies both to the user and the system. Space for error handling and recovery is a crucial aspect of conversation design. The system should gracefully handle errors, be thoughtful about when to communicate them, and lead towards accuracy, while learning from the interaction.

  4. Cooperative with turn-based flow: The system should be helpful and always looking out for the human’s desired goal to be achieved throughout a flow of turns between the human and agent.

  5. Timely Feedback Loops: Providing clear feedback helps users understand that their inputs have been received and processed. Confirmation messages can prevent misunderstandings and ensure that both parties are aligned on the action being taken. But the timing of this feedback loop is sometimes more critical than the feedback itself.

  6. 1:1: Tailoring conversations to the individual user can significantly enhance the experience.

  7. Private: Conversations can involve the exchange of personal information. Ensure user data is handled securely with respect and transparency.
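Principles 3 and 5 lend themselves to a concrete shape. Here's a minimal sketch of a single cooperative turn with graceful error handling and timely confirmation; the confidence threshold and the toy interpreter are hypothetical stand-ins for a real dialog system:

```python
def take_turn(user_input: str, interpret) -> str:
    """One cooperative turn: interpret, confirm when unsure, recover from mistakes."""
    intent, confidence = interpret(user_input)
    if intent is None:
        # Design for mistakes: recover gracefully without blaming the user.
        return "I didn't catch that. Did you want to book, cancel, or check a reservation?"
    if confidence < 0.7:
        # Timely feedback: confirm before acting when uncertain.
        return f"Just to confirm, you'd like to {intent}?"
    return f"Done: {intent}."

# A toy interpreter standing in for a real language model
toy = lambda text: (("book a table", 0.9) if "book" in text else (None, 0.0))
print(take_turn("please book a table for two", toy))  # Done: book a table.
```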

Designers are responsible for determining how much control/creation ability the user has over each step/part of the experience. Imagine a user trying to create and send an invitation for an upcoming bachelor party. You might only allow the user to select a pre-defined template, with the ability to edit the text but not the design. This could be because the interface, perhaps projected onto the user's hand, isn't ideal for design editing or simply isn't technically available in that environment. It's straightforward to offer solutions like "ability to edit the colors of a design," but the more control steps you provide, the less convenient the experience becomes. Not all user needs are suitable for ambient interfaces. Some are better suited for less immersive moments. When a user submits a query to a large language model, modern models usually rephrase the original query to improve the results. Users are typically made aware of these changes and can edit as needed. Designers must determine when it’s appropriate to disclose any rephrasing, how much of it needs to be exposed, and which parts of the transformation a user can control.

Designers are responsible for understanding the working functions and limitations of the ambient agent for humans. They should disclose this information when it's helpful and update humans as limitations change. Think of commands that challenge the legal permissions set by your product team, such as creating graphics in the style of an artist who has opted out of your training data. There may be groups of artists who eventually opt in, or the law may change. In such cases, it could be appropriate to alert the human of the update, especially if you have a contextual history of their past intention to execute the limited task. Always balance the bad news (new and limiting conditions) with good news (new or pending capabilities).

A framework for conveying an agent's capabilities to humans and monitoring its functional progression.

Map key functions to each category and give humans clear examples without over-explaining - something like “ordering food delivery” is the right fidelity for this step. (A minimal sketch of the framework follows the list.)

  1. Ready - the agent is already good at this, works great

  2. Exploratory - requires some exploration and tuning to get the best experience

  3. Planned - undergoing testing, may roll out soon

  4. Experimental - functions you can play around with that show promise but we’re not sure how things will go :)

  5. Ethical - functions that incorporate better ethical decision making or may produce unethical results — this one is optional depending on your organization’s code of ethics and whether you believe disclosing potential unethical functions is unethical in practice.
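Here's that minimal sketch in Python; the capability names and their category assignments are hypothetical:

```python
from enum import Enum

class Maturity(Enum):
    READY = "the agent is already good at this, works great"
    EXPLORATORY = "requires some exploration and tuning to get the best experience"
    PLANNED = "undergoing testing, may roll out soon"
    EXPERIMENTAL = "shows promise, but we're not sure how things will go :)"
    ETHICAL = "flagged for ethical decision making or disclosure"

# Hypothetical capability-to-category mapping
CAPABILITIES = {
    "ordering food delivery": Maturity.READY,
    "negotiating a refund": Maturity.EXPLORATORY,
    "booking multi-leg travel": Maturity.PLANNED,
}

def disclose(capability: str) -> str:
    """Describe a function's state to a human at the right fidelity."""
    maturity = CAPABILITIES.get(capability)
    return f"{capability}: {maturity.value}" if maturity else f"{capability}: not yet supported"

print(disclose("ordering food delivery"))
```

Tracking this mapping over time also gives you the monitoring half of the framework: a function's movement between categories is itself news worth disclosing.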

While traditional interfaces often limit humans to specific input sequences and levels of detail, AI-powered technologies that rely on gestures, motion, speech, emotions, eye intent, and contextual clues offer greater flexibility. However, this flexibility can lead to unintended system activations. Therefore, it's crucial for designers to consider not just intended, conscious interactions, but also the potential for unintentional ones. This is particularly important to prevent accidental triggering of AI agents to complete unintended tasks, which can have unforeseen consequences. An ambient agent experience should allow for cancel phrases or safe words to ignore previous inputs indefinitely. These should be tried and tested.
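A cancel phrase is simple to reason about in code, which makes it a good candidate for early prototyping. A minimal sketch, with hypothetical phrases and a hypothetical pending-action queue:

```python
# "red balloon" stands in for a user-chosen safe word; all phrases are examples.
CANCEL_PHRASES = {"never mind", "cancel that", "red balloon"}

pending_actions: list[str] = []

def handle_input(utterance: str) -> str:
    normalized = utterance.strip().lower()
    if normalized in CANCEL_PHRASES:
        pending_actions.clear()  # ignore previous inputs indefinitely
        return "Okay, disregarding everything queued."
    pending_actions.append(normalized)
    return f"Queued: {normalized}"

print(handle_input("Order three pizzas"))
print(handle_input("red balloon"))  # wipes the queue before anything executes
```

Testing these paths deserves the same rigor as the happy path, since an accidental activation that can't be cancelled is the costlier failure.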

Designers are responsible for expressing an agent’s proactivity. As AI agents become increasingly proactive, we must adapt our design strategies to accommodate this new mode of interaction. This includes designing cues to signal when the AI agent is taking initiative, allowing users to control the AI agent's proactivity, and developing mechanisms for the AI agent to learn from user feedback and adjust its proactivity accordingly. Designers should also consider the extent to which an AI agent should be proactive. For example, consider an AI agent created to manage a user's calendar. How should the agent respond to an ambiguous request like "Schedule a meeting with John next week"? The agent could potentially determine the meeting's details based on both calendars and preferences.
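One way to frame proactivity is as a dial the user controls. A minimal sketch using the scheduling example above; the slot data and proactivity levels are hypothetical:

```python
def schedule(mutual_free_slots: list[str], proactivity: str) -> str:
    """Decide how much initiative to take on an ambiguous scheduling request."""
    if not mutual_free_slots:
        return "Neither calendar has an opening next week. Want me to propose alternatives?"
    best = mutual_free_slots[0]
    if proactivity == "high":
        return f"Booked {best} with John. Say 'cancel that' to undo."   # act, but signal it
    if proactivity == "medium":
        return f"{best} works for both of you. Shall I book it?"        # propose
    return f"Open slots: {', '.join(mutual_free_slots)}. Which do you prefer?"  # defer

print(schedule(["Tue 10:00", "Thu 14:00"], proactivity="medium"))
```

The cue design matters as much as the logic: even at the 'high' setting, the agent announces the initiative it took and offers an undo.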

Designers are responsible for an experience that can adapt to a human’s behavioral change in various contexts. Your experience should be able to accommodate changes in the agent's behavior, the human's behavior, the surrounding context, and the interaction between the three. Imagine an agent that's used to completing one task without human intervention, realizing one day it needs the human to assist because something in the application layer changed and altered its usual confidence interval. Imagine a human moves from one city to another, so the local context is completely altered. An agent continues to learn from its human's taste and input as they become the default context over time. Designers must consider when to communicate change - what thresholds should trigger informing a human that the agent has autonomously recognized a change in itself or in the human's preferences?

Designers should advocate for AI that’s in touch with society. Agents should adapt to society's evolving behavior. This adaptation should occur for individual users as well as an entire user base. Society's values and needs will shift. If an agent remains anchored to past learning, it could inhibit progress. Ideas considered universal today may not align with the future. Therefore, your agent's foundational model should aim to be forward-looking, without imposing a rigid set of values on users.

Designers have to imagine how preferences will work in ambient agent worlds. Profile and preference data will grow over time, so designers must create space in the interface or experience to conveniently showcase this data in case a human needs to alter it, and allow it to evolve and grow over time. At a certain point, the magnitude of signals and preferences should be automatically curated through categorical nesting to avoid overwhelming the user to the point of giving up due to choice overload. Much like deciding on which parts of the experience a human should have control over, designers must decide which parts of the experience should contain space to accommodate large and growing volumes of context and content.
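Categorical nesting can start as something this simple. A minimal sketch, assuming preference keys carry a category prefix (a hypothetical convention) and a hypothetical threshold:

```python
from collections import defaultdict

NEST_THRESHOLD = 12  # hypothetical: beyond this, a flat list overwhelms

def curate(preferences: dict[str, str]) -> dict:
    """Group flat preference signals by category once they outgrow a flat list."""
    if len(preferences) <= NEST_THRESHOLD:
        return preferences                    # still scannable as-is
    nested: dict[str, dict[str, str]] = defaultdict(dict)
    for key, value in preferences.items():
        category = key.split(".")[0]          # e.g. "food.spice_level" -> "food"
        nested[category][key] = value
    return dict(nested)
```

The point isn't the grouping logic; it's that the interface only ever has to show one comfortable layer at a time, however large the store grows.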

Evaluating agent performance can occur either locally on each device or by a product team responsible for the agent model and/or device. Designers have to imagine new log UIs or dashboard-like environments for assessing agent performance, including an inventory of sessions for observing agent behavior and, when possible, the ability to fine-tune from that environment.

Designers must begin counting decisions as navigation steps. If your AI design tries to cut down on user actions but makes them review all the AI suggestions, you haven't really saved time. Reviewing suggestions or looking for content in the interface counts as an action, just like clicking. For instance, if an AI knows a task can be done with three brands, instead of asking the user to choose, it can assess the user's brand taste or rank preferred brands and check if they're open to other brands.
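Here's that brand example as a minimal sketch; the preference data and the openness flag are hypothetical:

```python
def pick_brand(candidates: list[str], ranked_prefs: list[str], open_to_others: bool) -> str | None:
    """Resolve a brand choice from ranked preferences instead of adding a review step."""
    for brand in ranked_prefs:                 # zero decisions counted against the user
        if brand in candidates:
            return brand
    return candidates[0] if open_to_others else None   # else it becomes one real decision

print(pick_brand(["BrandA", "BrandB", "BrandC"], ["BrandB", "BrandZ"], open_to_others=True))
# -> BrandB, chosen without asking
```

Counting it this way keeps the ledger honest: a suggestion the user must review costs a step whether or not they click.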

Just as Google sorts search requests, you should sort main types of user questions. For example, use a "shopping" classification if someone asks if the shoes they see at Store A are also at Store B. Once you have this list, create design standards for you and the product team like: Lean into predictability for time-sensitive tasks and use more open-ended design for exploratory engagements.
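A sketch of that sorting step, with keyword rules as a hypothetical stand-in for a learned classifier:

```python
DESIGN_STANDARDS = {
    "shopping": "lean into predictability; this is time-sensitive",
    "exploratory": "use more open-ended design; invite follow-ups",
}

def classify(query: str) -> str:
    """Sort a user question into a main type, like Google sorting search requests."""
    shopping_cues = ("store", "in stock", "price", "buy", "shoes")
    return "shopping" if any(cue in query.lower() for cue in shopping_cues) else "exploratory"

query = "Are the shoes I saw at Store A also at Store B?"
print(classify(query), "->", DESIGN_STANDARDS[classify(query)])
```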


Any personalized AI adopts the user's bias. This is great for tasks that require predictability, where consistently effective results are needed. However, for other tasks, it limits our curiosity. It constrains us to options within our comfort zone.


Part of being human is following your intuition off-road, even if it might lead nowhere. You can tweak your algorithms to find the right balance, and perhaps even design your interface to offer ways to escape the filter bubble.

Be careful when introducing your ambient agent to new users. Designers must craft experiences that actively build trust over time - and think critically about this timing for multiple uses. It might be best to initially request minimal personal data. Aim to make suggestions rather than definitive decisions - ask questions. As your AI becomes more familiar with the user, you can automate increased personalization and reduce the frequency of permission requests. This allows the user time to understand the agent's functionality, and your algorithms can progressively learn alongside them.

Designers should reimagine product walkthroughs. Assistive technology with Meltface Typeface should incorporate more "intro to the product" walkthroughs into the user's initial attempts. Every agent will require the user to prompt in some form, so prompt suggestions are a prime example of a critical task that designers must ensure the user experiences quickly. Shed the weight of self-guided tours with 8 steps that most people ignore - provide quick "suggestions" that can teach both the human and the agent more about how this experience will play out. With AI conversation interfaces, it's prompt suggestions and filters; with ambient voice agents, it's a readout of common mistakes people make; with spatial computing, it's the limits of the machine, the physical space you're in, and your own body control. We must graduate from self-guided tours to more natural use, spread out over the first experience timeframe. Be strategic about how long you require the user to feel "out of sync" with the experience, because most teachable moments should blend into reality. Prioritize your most important and novel affordances and leave the rest for exploration as users interact and engage more naturally.

This is critical because, for agents, trust is earned through actual interaction, not self-guided tours. Tours are also no longer one-size-fits-all in the world of artificial experiences; now they can adapt to the user, which should result in more 1:1 personalization of onboarding and teachable moments. Imagine a world where we take a user through each step the first time they ask an agent to complete multiple tasks, teaching them what the agent is doing and having the human confirm each approach and selection before committing it to action. Every step of an initial request for booking a car and leaving a message on their Mom’s phone is a teachable moment that can feel far more personal and natural than an unboxing "try this example that you don't care about" approach. Sure, everyone makes phone calls, but if they don't need to make a call in that moment, it won't stick as a teachable moment for either entity.

Designers are responsible for creating an experience where humans can assess agent swarms through an understanding of performance, task-share breakdown, and data updates. Similar to a manager of multiple people at work, humans will have to understand the connection points and performance of their entire swarm. The swarm cannot be a black box in a future where agents perform real-world tasks, so designers will have a big role in shaping how we report on the operations and analytics of agent swarms.

Some of these responsibilities will organically become part of the designer's job description while others must be earned. To earn it, at the very least you'll need to be able to explain your vision, which is the subject of our next chapter.


CPTR 7

Frameworks for the future of design

Set Goal: Ideas on how to showcase design leadership with focused visions embedded in a new framework - because people LOVE frameworks :)

The pyramid scheme framework for explaining your future design solutions

A designer is responsible for drafting a pyramid with an ambient vision at the top, then drawing a line connecting their primary path to UX success from the “ambient vision” all the way down to the “focused ambient vision”.

  • Ambient Vision - think primary vision, like “an agent to create vacation itineraries”

  • Task Completion Metrics - think task goals, like “total number of itineraries completed/accepted/saved/liked”

  • Agent Advantages - think about what will set the agent apart from a human, like “creating multiple options in seconds, reading 100 travel blogs about the location and summarizing”

  • Personalization Potentials - think ways to personalize, like spending habits, vacation mindset (relax, adventurous), local-to-local comps

  • Primary Human Benefits - think about the true value exchange for the human, like saving time and reducing stress and anxiety around traveling

  • Focused Ambient Vision - after connecting the path from your ambient vision all the way to primary human benefits, create a list of more focused ambient visions and eventually connect your map to one of them - like “personalized travel itineraries based on every travel blog that ever existed & your own local favorites”

As you confirm your focused ambient vision, you can use modeling diagrams in the requirements stage of the design process to represent the flow of information and data between different components of the system, including dialogs/interfaces, AI algorithms, models, and data sources.
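If it helps to make the pyramid tangible, it can be drafted as a simple structure. A minimal sketch using the vacation-itinerary example from the list above; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VisionPyramid:
    ambient_vision: str
    task_metrics: list[str]
    agent_advantages: list[str]
    personalization: list[str]
    human_benefits: list[str]
    focused_vision: str = ""   # filled in last, after connecting the path

pyramid = VisionPyramid(
    ambient_vision="an agent to create vacation itineraries",
    task_metrics=["total itineraries completed/accepted/saved/liked"],
    agent_advantages=["multiple options in seconds", "summarizes 100 travel blogs"],
    personalization=["spending habits", "vacation mindset", "local-to-local comps"],
    human_benefits=["saves time", "reduces stress and anxiety around traveling"],
)
pyramid.focused_vision = ("personalized travel itineraries based on every travel blog "
                          "that ever existed & your own local favorites")
```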

Designers need to lean into soft skills and creative frameworks to grow and create lasting impact for a project.

Two more hard rules for survival in the future organization

  1. Advocate for users without being annoyingly rigid about pre-determined “user needs”. In some cases, the right design will leverage the ambient agent to uncover user needs as people interact with the product. You can keep both a list of known needs and a running list of known unknowns.

  2. Advocate for a better user experience by centering it around an agent’s capabilities, and make sure the capabilities you choose also help others with non-UX goals.


CPTR 8

Guessing the future of UX design


Set Goal: Generate a random list of new realities that may exist with Meltface Typeface.

UX designers begin to dedicate a significant portion of their time to "correction" design, to the point where it can become a full-time role. From interfaces to models, designing correction states is crucial to achieving high user satisfaction scores.

———————————————

People generally agree that capturing moments is best done while being present, substituting handheld devices with pins, glasses, audio recordings, and such. Even sharing experiences with our loved ones can occur more subtly. These devices will signal to others when we are recording video that could potentially be stored.

———————————————

A website's primary function becomes communicating with agents. Interfaces change to better serve models, morphing into simpler surfaces with less noise so an agent can more easily learn and perform accurate actions.

———————————————

Maps evolve to provide as much world context as we need. When we're driving, they can interact with us based on where we're looking. Imagine a hiking app that developed its own AI device or partnered with an AI device company. This device could feature all our favorite trails and maps, plus sensors to detect if river water is clean enough to drink or if a mushroom in a specific location is edible. All this information would be just a voice command and camera gaze away.

———————————————

Designers incorporate private command functions that serve as code words for moments when we're uncomfortable speaking to our agent in public. Currently, we can conceal our activities behind the privacy of our mobile phone screens. However, in the future, we might just get better at disguising our speech during moments we wish to keep private.

———————————————

Designers will develop various agents for different stages of their workflow. Imagine a creativity agent that consistently generates multiple options for your mock-ups, tailored to your tastes and preferences. You can control the deviation degree. Accessibility agents can also create variants of designs in your style, catering to specific disabilities or needs.

———————————————

Digital personalization takes on an analog form through autonomous loyalty programs, potentially creating unprecedented levels of privacy acceptance. Consider a shopping assistant scenario. In this situation, every visit to an antique store is unique. Upon entering, the store's digital assistant, equipped with knowledge of the customer's past interests, wish lists, and loyalty program status, prepares to guide them on a personalized treasure hunt. The environment could adjust to emphasize artifacts and items relevant to the customer's current interests, such as specific eras or styles, while less relevant items are blurred into the background.

———————————————

UX designers shift some of their focus from addressing user needs to anticipating user requests in the world of AI devices. They are trying to preemptively answer questions users may have about opaque technologies, such as "Why did it make this prediction instead of that one?" or "What would happen if I changed my input to xx?" This proactive approach is becoming a crucial step in the design process.

———————————————

Designers have the ability and technological resources to design for elements of surprise and serendipity. While predictability is vital in certain situations, AI can also introduce unexpected elements. This might include suggesting mildly relevant content, proposing innovative results, or creating delightful moments of correction within the AI interaction.

———————————————

Designers transform into experts in test results and feedback loops. As AI is likely to create and test many experience variants, designers become proficient in assessing outcomes, interpreting them, and retraining the AI to improve certain aspects. Initially, this may seem overwhelming. Therefore, mastering the art of sifting through the information to build trust is crucial, as well as using AI to help summarize and simplify recommendations.

———————————————

The first AI agents in corporate design will likely be UXR agents. They'll start as simple bots that synthesize research notes, recordings, and audiences, but will eventually mature to run and report on full studies.

———————————————

AI agents will generate the majority of new typefaces, reaching new heights of creativity in the art of language.


CPTR 9

Finding meaning & purpose in the future of design


Set Goal: How do designers find new meaning and purpose in this new reality?

Our processes, workflows, and outputs are changing in the future. The thing we held most dear, our creativity, ended up being one of the things a machine can do very well. We imagined a future of self-driving and mundane task automation. I think most of us assumed creativity would be off-limits, or at the very least, unachievable until we fully understood the brain. Given the current state of AI tools, that’s proven not to be the case. Images, videos, even full interfaces can be created in seconds with a simple prompt and tweaked with editing functionality and parameter tuning in the form of filters and remixes. We must think like ambient agents ourselves if we are to find meaning on this journey.


Listen to nature and remember we’re part of its collective intelligence. We have a role to play in sustainability, health, and more, but adaptation is critical to survival and to making the right mistakes and decisions. If we do this, the human moments lost to the early smartphone era will organically return.


We should be hopeful that the next era of computing, and its agents, will become true pals. Technology will become ambient when we want it to be, and more playful when we need it.

Designers have a responsibility to advocate for an AI agent experience that melts into the world around us, sticking to us and the fabric of what brings us joy, better health, and meaningful connection - bonus points for laughter that makes our face melt.




BONUS CHAPTERS


AI Agents 101.4 FM


Set Goal: Present a designer-level of understanding AI agents - try not to lose the audience in this chapter since it’s more textbook than others

AI agents are not chatbots. Within the intricate tapestry of our digital lives, AI agents stand as the clandestine architects, molding the ether with a silent yet profound influence. These spectral conductors of the binary symphony weave through the fabric of our online existence, deciphering the cryptic dance of our desires with ethereal intuition. They sculpt our digital worlds with sometimes unseen hands, curating a cosmos of content that resonates with the unspoken whispers of our curiosity. In this, they are both the shadow and the light, paging us through cyberspace’s resources with the deftness of a poet's pen, crafting a narrative that is uniquely ours yet universally connected. Now, do you see the difference? If not, it’s okay, here’s some more practical reasoning:

Some characteristics that set AI agents apart from traditional chatbots & other assistive products:

  1. Autonomy: AI agents have the remarkable ability to function autonomously, meaning they don't require constant human supervision. This allows them to handle tasks and make decisions independently.

  2. Anticipatory: Beyond merely reacting to user input, AI agents demonstrate proactiveness. They can anticipate user needs and take initiative.

  3. Adaptive Organisms: AI agents are not static entities with rigid codes of conduct - they continuously learn from their interactions with users and the environment, adding context from their model.

  4. Natural Conversation: AI agents can feel as real as texting with a friend and are able to respond to a human’s particular communication style more naturally than chatbots.

  5. Action-Oriented: AI agents are equipped with a goal-oriented mindset centered around various actions they can take. They can grasp the user's objectives and actively assist in achieving them.

What about these entities shapes their characteristics? Here’s a general view of the nature of an AI agent:

  1. Perception - gathers and interprets data

  2. Memory - stores past experiences

  3. Decision Making - determines actions

  4. Planning - plans device action sequences

  5. Action Execution - performs tasks

  6. Learning - adapts and improves

  7. Communication - interacts with others

  8. Transparency & Accuracy - ability to view and assess performance

  9. Browsing - accesses information

The Core Loop

It's important to understand that different AI agents function in diverse ways depending on their specific purpose, training data, and algorithms. Many AI agents use a mix of techniques like machine learning, deep learning, and rule-based systems, which complicates their operation. The level of human input and supervision can vary significantly among different AI systems. Here's a simplified view of how an agent completes a task.

  1. Initialize Goal: Define the objective for the AI.

  2. Task Creation: The AI agent checks its memory for the last X tasks completed (if any). It may use a blend of pre-defined rules, learned patterns from data, and dynamic contextual information — and then uses its objective, and the context of its recently completed tasks, to generate a list of new tasks.

  3. Task Execution: The AI executes the tasks autonomously or mostly autonomously with additional input from its human.

  4. Memory Storage: Task outcomes and relevant data are stored in an appropriate format for future retrieval, which could be a vector database among other data storage solutions.

  5. Feedback Collection: The AI collects feedback (potentially from various sources) on the completed task. This feedback will be used to inform the next iteration of the Adaptive Process Loop.

  6. New Task Generation: The AI generates new tasks based on collective feedback & updates, including any environmental changes and internal dialogue.

  7. Task Prioritization: The AI reprioritizes the task list by reviewing its objective and looking at the last task completed. Prioritization can involve external factors, such as urgency, importance, resource availability, and other strategic considerations beyond just the AI's objectives and the last task completed.

  8. Task Selection: The AI selects the top task from the prioritized list, and proceeds to execute it.

  9. Iteration: The AI repeats steps 4 through 8 in a continuous loop, allowing the system to adapt and evolve based on new information, feedback, and changing requirements.
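Here's the loop above as a minimal sketch. Every helper is a hypothetical stub standing in for real planners, executors, and feedback channels:

```python
# Hypothetical stubs - a real agent would back each with models, rules, and tools
def create_tasks(objective, recent): return [f"research: {objective}"]
def execute(task): return f"done: {task}"
def collect_feedback(outcome): return "looks good"
def generate_new_tasks(objective, feedback): return []
def prioritize(tasks, objective, last_task): return tasks

def run_agent(objective: str, max_iterations: int = 10) -> list[dict]:
    memory: list[dict] = []
    tasks = create_tasks(objective, memory[-5:])              # 2. task creation
    for _ in range(max_iterations):                           # 9. iteration
        if not tasks:
            break
        task = tasks.pop(0)                                   # 8. task selection
        outcome = execute(task)                               # 3. task execution
        memory.append({"task": task, "outcome": outcome})     # 4. memory storage
        feedback = collect_feedback(outcome)                  # 5. feedback collection
        tasks += generate_new_tasks(objective, feedback)      # 6. new task generation
        tasks = prioritize(tasks, objective, task)            # 7. task prioritization
    return memory

print(run_agent("plan a weekend trip"))                       # 1. initialize goal
```

The cap on iterations is worth noticing: an agent that loops on its own output needs an explicit boundary, or it will happily generate tasks forever.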

Assessing the performance of an AI agent won’t be isolated to the UX or interface - the nature of how it functions matters more than before. Here’s a framework I like for assessing agents, from research conducted at Hong Kong University:

  1. Memory - incorporates long-range information in context

  2. Planning - decomposes complex goals into manageable sub-goals

  3. World Modeling - tests knowledge necessary for task completion

  4. Retrospection - captures the ability to use environmental feedback

  5. Grounding - focuses on competency in generating actions

  6. Spatial Navigation - measures efficiency in moving to a target location

The foundation of an agent is its model, and the agent's experiences are the building blocks for whatever gets placed on top of that foundation. A typical non-fiction book would tell you, "the model and model type you pick for your agent will be critical," so I suppose I should say that. There are agent experiences where you can toggle and use different models and types depending on your goal. Here are some examples of model types:

Large Language Models (LLMs): These are designed to understand, generate, and translate human language. LLMs, like GPT (Generative Pre-trained Transformer), are trained on vast datasets of text to perform a wide range of language-related tasks, including conversation, writing assistance, and language translation.

Natural Language Processing (NLP) Models: While LLMs are a subset, NLP models encompass a broader range of tasks related to understanding, interpreting, and generating human language. This includes sentiment analysis, language translation, summarization, and entity recognition, with applications ranging from customer service automation to social media monitoring.

Computer Vision Models: These models are trained to interpret and understand the visual world. They can analyze images and videos to identify objects, people, scenes, and activities. Applications include image recognition, facial recognition, and autonomous vehicle navigation.

Generative Adversarial Networks (GANs): GANs consist of two models: a generator that creates data resembling a training set, and a discriminator that tries to distinguish between the generated data and real data. The generator improves by "fooling" the discriminator in a competitive learning process. This setup enables GANs to produce highly realistic images, videos, and other data types, making them particularly useful in art creation, photo-realistic rendering, and enhancing low-resolution images.

Large Action Models (LAMs): These AI models are designed to perform complex actions in digital or physical spaces, extending beyond language understanding to executing tasks. LAMs can automate intricate processes, manage interactive systems, or control robots with a focus on completing specific objectives based on inputs and their understanding of the environment.

For now, LAMs make the most sense for Meltface Typeface given the four forms that make it so.

A LAM allows for the direct modeling of the structure of various applications and user actions performed on them without a transitory representation, such as text. A LAM attempts to better understand actions, specifically human intentions expressed through actions on computers, and, by extension, in the physical world. If Meltface Typeface succeeds, we believe our efforts could contribute to the further embodiment of intelligent algorithms that positively benefit our daily lives. (source https://www.rabbit.tech/research)

Imagine you're showing a friend how to bake a cake. You measure the ingredients, mix them in a specific order, and put the mixture into the oven. Your friend watches, learns, and then does it themselves, mimicking your actions. This is the essence of LAM's learning model, but instead of baking, it's mastering digital interfaces through the art of observation. LAM watches a human navigate an app, grasping the sequence of actions like a keen apprentice. But here's the twist: once it learns the routine, LAM doesn't just mimic blindly. It creates a recipe – a clear, understandable set of instructions that anyone with a bit of technical know-how can follow, without getting lost in endless cycles of guesswork or revision. This isn't just about copying actions; it's about understanding the digital kitchen. Over time, as LAM observes and learns, it builds a comprehensive conceptual blueprint of the application's interface, much like a chef understands the nuances of their kitchen. This blueprint allows LAM to connect users to the services they need, seamlessly bridging the gap between human intention and digital execution.

So let’s break down what a designer should know about the steps involved in creating a large action model (generally, this can also apply to other model types):

Data Acquisition and Preprocessing

Gather new data: Collect data relevant to the desired improvements/outcomes. This could involve user interaction logs, task-specific datasets, screen recordings, or domain-specific information. EXAMPLE: Imagine you're improving a LAM's function for leaving reviews on travel sites. You might gather additional data such as how users typically navigate to and write reviews across various travel applications. POTENTIAL DESIGNER TASK: Conduct user research to understand user needs, pain points, and frustrations related to the LAM's current functionality regarding written reviews as well as non-LAM workflows. This can inform the selection of relevant data for improvement.

Data cleaning and annotation

Ensure the new data is clean, well-formatted, and appropriately labeled for training purposes. This might involve removing irrelevant information, correcting errors, and assigning specific labels to actions or tasks. EXAMPLE: This would involve cleaning the data by removing irrelevant information like biased user opinions in reviews or egregious UX anomalies. POTENTIAL DESIGNER TASK: Assist in defining clear data labeling and structure based on user understanding and mental models. This ensures the data is easily interpretable for both humans and the LAM itself.

Model Refinement

Fine-tuning: This involves retraining the existing LAM on the combined original and new data. This allows the model to learn from the new information and improve its performance on specific tasks or domains.

Hyperparameter tuning: Adjust the model's hyperparameters, such as learning rate or batch size, to optimize its performance and address potential issues like overfitting or underfitting. POTENTIAL DESIGNER TASK: Build in-product feedback loops and dashboards for reporting. Log the history of model refinement for future reference when building new functions based on user needs.

Evaluation and Analysis

Benchmarking: Test the updated LAM against established benchmarks or user-defined metrics to evaluate its performance improvements compared to the previous version. POTENTIAL UX TASK: Conduct usability testing, duh.

Error analysis: Analyze the model's errors to identify specific weaknesses and areas for further improvement. This might involve examining specific task failures or analyzing bias in the model's outputs. EXAMPLE: Analyze specific instances where the model fails to write a review correctly or fails to submit one. Identifying these errors helps pinpoint areas for further improvement.

Continuous Improvement

Iterative process: The process of data acquisition, model refinement, evaluation, and analysis is often an iterative cycle. Based on the findings, further data collection, model adjustments, and testing can be conducted to continuously improve the LAM's capabilities. EXAMPLE: Based on the evaluation and analysis, you might decide to collect additional data specific to the identified weaknesses (e.g., focusing on negative hotel reviews). You can then refine the model again and re-evaluate its performance, forming a continuous cycle of improvement. POTENTIAL UX TASK: Work closely with the development team to adopt an iterative design approach. This allows for ongoing feedback loops, where user insights from testing and evaluation are incorporated into future iterations of the LAM.
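The whole cycle fits in a loop. A minimal sketch of the process described above, using the travel-review example; every helper is a hypothetical stub, not real training infrastructure:

```python
import random

# Hypothetical stubs for each stage of the cycle
def acquire_data(domain): return [f"session log: {domain}"]
def clean_and_annotate(data): return [(item, "leave_review") for item in data]
def fine_tune(model, data, learning_rate): return model   # stands in for retraining
def benchmark(model): return random.uniform(0.6, 0.95)    # stands in for real metrics
def analyze_errors(model): return "negative hotel reviews often fail to submit"

def refine_lam(model, target: float = 0.9, max_cycles: int = 5):
    """One pass per cycle: acquire -> clean -> fine-tune -> evaluate -> analyze."""
    for cycle in range(max_cycles):
        data = clean_and_annotate(acquire_data("travel-site reviews"))
        model = fine_tune(model, data, learning_rate=1e-5)
        score = benchmark(model)
        if score >= target:
            break                                          # good enough for this round
        print(f"cycle {cycle}: {score:.2f}; next focus: {analyze_errors(model)}")
    return model

refined = refine_lam("lam-v1")
```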

Additional considerations

Computational resources: Training and refining LAMs require significant computational resources (for now). Access to powerful hardware and cloud infrastructure is crucial for efficient development and deployment.

Explainability and interpretability: As LAMs become more complex, ensuring their decision-making process is understandable and interpretable becomes increasingly important. Techniques like model introspection and explainable AI (XAI) can be employed to understand how the model arrives at its outputs, and attention mechanisms can help visualize which parts of the input data the LAM focuses on when making decisions. This can provide insights into the model's reasoning process and identify potential biases in its decision-making.

Ethical considerations: Throughout the development process, it's crucial to address potential biases, fairness, and ethical implications of using LAMs. This includes careful data selection, bias mitigation techniques, and responsible deployment practices.

When it comes to AI agents, designers should know enough to influence the development of AI and to be held accountable for how it's used.

TECHNICAL: Observable Behaviors

  1. Can explain the basics of how machine learning works

  2. Can explain data engineering concepts (e.g., data collection, data cleaning, data curation, data modeling)

  3. Can explain AI techniques (e.g., NLP, NL understanding, ML)

  4. Can explain how core algorithms work (e.g., deep learning, creating libraries, code patterns)

  5. Is aware of the most recent and relevant models, along with their strengths and weaknesses

  6. Can digest common concepts in the AI lifecycle (e.g., setting confidence thresholds, setting up a sustainable model maintenance program, and data collection + labeling methods)

STRATEGIC: Observable Behaviors

  1. Can explain barriers to AI & ambient agent adoption

  2. Can explain their organization’s AI agent strategy & differentiation

  3. Can explain the ambient agent vision - sharing real-life use cases to help explain why an agent or new capabilities for an existing agent make sense

INTERACTIONS: Observable Behaviors

  1. Can translate AI outputs into valuable pieces of information for humans

  2. Can leverage and reuse existing AI patterns effectively and appropriately

  3. Can determine the appropriate level of communication about the existence of AI in the experience (e.g., frequency, branding)

  4. Can prototype and test ambient agent solutions in a human-to-human setting

  5. Can design more efficient ways for a user to correct or tune an agent in primary contexts and environments

ETHICS: Observable Behaviors

  1. Can integrate ethics resources into team processes (including design thinking exercises, static resources, and role-specific tooling)

  2. Can explain the latest privacy mandates (from government) and expectations (from humans) and how they should affect decision making

  3. Can create and share real-world stories about the financial and human impacts of unethical design decisions

  4. Can ask the right questions to uncover ethical risks that may not be obvious

  5. Can participate in conversations about data biases & their impact on agent output





SPATIAL COMPUTING UX & UI


Set Goal: Give designers some practical tips for visual and UX design in spatial computing

Practical tips for spatial lighting & virtual UI

  • Use rounded rectangles, squares, circles

  • Sharp edges draw attention from the eyes, so use rounded corners on objects wherever you can

  • Add depth, layers, and light shadows to UI Icons

  • Higher contrast for reading text, lower contrast for transparency

  • When transitioning from dark to light scenes or environments, ensure there's a slow transition

  • Screens/Windows that emit light like a video player should shine light on areas around the window in real space

  • Place shadows under windows that are not emitting light

  • Subtle depth is fine for modals and other pop-ups or layers on icons

  • Keep text flat when used as an interface element

  • Add specular highlights to app icons when clicked on - A specular highlight is the bright spot of light that appears on shiny objects when illuminated. Specular highlights are important in 3D computer graphics, as they provide a strong visual cue for the shape of an object and its location with respect to light sources in the scene.

  • Use multiple layers when designing great icons - for example, visionOS icons can have up to 3 layers (background layer and up to 2 layers on top) and it's a combo of flat layers

  • For an icon, keep the graphics centered as much as possible because if a layer is near the edge it could look off when expanded by the UI upon selection

  • Use dynamic scale for UI; fixed scale will reduce the size of your icons/windows


Practical tips for spatial UX

  • As a designer, you’re responsible for outlining the different states of interaction and how you’ll treat them in spatial environments, for example you could decide to have 4 states like Idle, Hover, Selected, Disabled.

  • When you're listing or stacking virtual content, stack horizontally, left to right, not top to bottom; it's straining for the human user to move their head up and down, and much easier to move from left to right

  • Try not to anchor virtual content to a user's eyes or head. Anchor to a placement in space. If you are going to anchor content to a user's head or eyes, make the content small and/or use a method like "lazy follow", where the content slowly trails you to your new location rather than staying fixed at the speed of your motion (a small sketch of this easing follows the list)

  • Place virtual content at arm’s length away for the best user comfort. Content closer than that should be momentary. Keep content in our natural field of view for the most part, and avoid forcing a user to spend a lot of time out at the edges of the field of view, as that can cause strain over time

  • If you’re including sound in your experience, there are two types of audio to consider when putting together a soundscape:

  • Spatial audio - sources like birds or other things making noise from a specific place (location)

  • Background audio - surround audio files that are the overall background noise of your app; they are ambient, all around, and always on

    • Use randomization of sounds and placement randomization. For example, if you have frogs in an outdoor scene place them all over from a distance standpoint and randomize when those sounds and levels play
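The "lazy follow" technique mentioned above comes down to a small piece of easing math. A minimal sketch, assuming a per-frame update loop; the tau value is a hypothetical tuning choice:

```python
import math

def lazy_follow(position: float, target: float, dt: float, tau: float = 0.4) -> float:
    """Exponential smoothing: content trails the user instead of locking to them."""
    alpha = 1.0 - math.exp(-dt / tau)   # frame-rate independent blend factor
    return position + (target - position) * alpha

# Example: content at x=0 easing toward a head anchor at x=1, at 60 fps
x = 0.0
for frame in range(3):
    x = lazy_follow(x, target=1.0, dt=1 / 60)
print(round(x, 3))  # still well short of 1.0 - a soft, trailing motion
```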





THE END

© Ambient Agents Meltface Typeface. 2024. Casey Fictum