Do you consider all five senses when designing for UX? What about how those senses and our physical and cognitive abilities combine, otherwise known as multimodal design?
John Alderman and Christine Park address just that–as well as how to create a cohesive user experience, and even the secret to good UX–in their new book, Designing Across Senses: A Multimodal Approach to Product Design. The authors sat down with Content Science to answer a few burning questions about the new title.
Our senses work together in ways that are fundamental to our physical and informational experiences. Our sense of balance stabilizes what we see and how we move. Our hearing and vision work together to recognize sounds. And our most memorable experiences usually combine several senses at once.
Multimodal design is a way to think about and design for experiences that integrate our sensory, physical, and cognitive abilities. For too long, we have treated UX and content design as exclusively visual, touch-based, or sound-based (even specializing designers and creatives along those lines) when in fact people use all of these abilities together.
It’s an important approach to user experience because the technical limitations that once confined interactions to screens and desktops have fallen away. Designing for the new opportunities and challenges this creates means understanding how we receive, process, and make use of diverse types of information in different physical and mental contexts.
Multimodal design is an approach that recognizes that the whole human sensory apparatus operates differently in different situations. What we sense, the information we need to understand and operate smoothly, and what we expect to do, all change depending on whether we’re having a conversation at a party, jogging down a path, driving a car, or reading a text alert. The patterns of how these bundled expectations and abilities come together are called modalities.
Understanding these combinations is essential to designing the next generation of user experiences well, whether VR, voice, or sensor-based IoT. But it can also improve the way we design experiences for all the different kinds of products and media being created now, and it can address some of the ways in which digital experiences have become detrimental to our real-world lives.
It’s a bit like asking why it’s important that things make sense. Mostly, it’s because making sense means that someone (in this case, a user) understands what’s going on, feels able to act appropriately, and can respond in a way that supports what they’re trying to do.
Engaging several senses is important because that’s how we’re accustomed to reality working. If something is experienced through several senses, we judge it to be more real, we’re more certain about it, and the experience or information is more memorable.
Second, having multiple sensory options matters because one sense might not be working for someone, either temporarily or permanently. Substitute sensory channels are necessary in such cases, and the more critical or relied-upon the activity or information, the more important it is to have options.
Empathy, empathy, empathy.
Mistaking what is measurable for what is memorable.
Designing experiences that chase after many measures leads to being interruptive rather than supportive. We say that focus is the new engagement: it’s much better to understand and prioritize what a user is trying to do, and to support that focus, than to push them to engage with your product or interface. Often that means playing a supportive role. That’s important for devices, but it’s also important for content. It’s what it takes to build a brand rather than unleash a nuisance.
Also, there can be a bias toward looking at content as stuff that needs to be created, curated, managed, and warehoused. That’s not wrong, but it’s only one aspect. Thinking in terms of designing communication or conversation, rather than content as stuff, can help you focus on what the audience needs or understands, as well as what they might have to say. Buckminster Fuller famously said, “I am a verb,” meaning not something static but something with dynamic needs, conditions, and activities. Looking at the communication you create as a dynamic actor that may need to be a silent, observant listener as much as a speaker can be a very useful lens.
Designers, marketers, and other creators spend a lot of time thinking about the process of communicating ideas and information through screens and interfaces. We hope that our readers will spend a bit more time thinking about how that information makes its way through our senses to our heads–and hearts.
Follow Christine Park on Twitter at @raygunfactory and follow John Alderman at @mrhungry.