Podcast Episode Transcript: Making things simple in user experience isn’t always simple

With L. Gordon Moore, MD

Dr. L. Gordon Moore: Welcome to 3M’s Inside Angle Podcast. This is your host, Dr. Gordon Moore. With me today is Anna Abovyan. She is a user experience expert, and I am fascinated by the meaning of that and why it’s important and would like to learn more. Welcome.

Anna Abovyan: Hello.

Dr. Moore: Tell me a little bit about your role and your work and your title.

Anna: Sure. So I lead the user experience design department at 3M M*Modal, which is our clinical solutions division within the healthcare information sciences department. What that essentially means is that I lead a group of user experience designers and researchers. The primary goal of my team, as I see it, is to make sure that the products we put out there, which are primarily software, are intuitive, understandable, easy to use and facilitate the end users' goals.

We take a look at things like workflows and flows through the systems and overall context in which people are using our products, and those are primarily physicians, nursing staff or other care providers. We make sure that our products don’t interfere with the work that those people are already doing, but gently blend into those workflows.

Dr. Moore: And that sounds really simple on its face, but I have a suspicion that there’s actually a lot more depth to this. So for instance, one of the concepts I’ve run across in conversation with you as we were setting up this discussion was the idea of cognitive overload. I wonder if you could help me understand that.

Anna: Yeah, of course. Cognitive overload, and more generally cognitive load, is the central idea behind making things simple. Just like you said, making things simple sounds simple, but really, what does it mean? It essentially means reducing the amount of mental processing that is required to perform a task. So in cognitive psychology, cognitive load essentially refers to the amount of working memory resources that are allocated to that task.

The concept was initially developed as part of cognitive load theory, which grew out of instructional design and learning: people were looking at what it takes to learn new information, and this is where the idea of cognitive load came through. The basic idea is that the less cognitive load a person experiences, and the better it is distributed, the better they acquire new information, which all makes sense. But those were the early days of cognitive load theory; the concept has since expanded, and people now talk about it in areas beyond learning, in the area of simply performing tasks.

Every single task that you perform, especially a task that involves new information, means you are essentially learning. Whether you're searching, looking up information or trying to add information, all of that includes various elements that require your brain to both process the information in front of you and cross-correlate it with other information you already know, kind of match apples to apples and make sense of it before you take any kind of action.

Dr. Moore: And how does this concept intersect with the work you do in terms of usability?

Anna: That's a great question. A lot of times, when people hear design, user experience design or interaction design, they think of colors and fonts and the pretty stuff. But the way designers, especially user experience designers, like to describe the work is, "How can we make the product or the task or whatever it is that we're designing fit the mental model, and how can we make it run smoothly?" which, if you think about it, is essentially how do we reduce certain aspects of cognitive load.

I want to clarify here that not all cognitive load is necessarily bad. Cognitive load, as people typically describe it, comes in several flavors. Some of the cognitive load is the task itself. If you are working on something challenging, let's say a math problem, that in itself has intrinsic cognitive load, which is not bad. If we're looking for some kind of information on a website, that load is good: we're looking for new information. This is the part we want to preserve.

What's unfortunate is that there is also this concept of extraneous cognitive load, and that's essentially all the mental capacity spent on things that are not directly related to the task itself. So let's pick an example. Let's say I'm trying to find some information on a website, but the website has, say, a lot of font variation. Some text is in a small font, some text is green, some text is blue, and there's maybe an animation in a corner. All of those things distract me from what I'm actually trying to learn.

But I can't just ignore it. We have evolved to notice things like animations, to notice differences. So my brain is constantly thinking, oh, that text is a little smaller, or, that image is blinking, that must mean something. And it takes quite a bit of my mental processing to first register, hey, what was this? and then decide whether or not it was important to the task at hand. If it wasn't, that's extraneous cognitive load.

Really, the role of design and usability is to minimize, ideally bring to zero, the amount of extraneous cognitive load, so that we can take those same resources and essentially free them up for the intrinsic cognitive load of the task itself.

Dr. Moore: It’s interesting as you described this in websites. Immediately I’m thinking, there seem to be a lot of websites that have incredible extraneous cognitive load, but then I think, wait, maybe it’s not extraneous. Maybe that distraction is on purpose.

Anna: Exactly. If you're thinking about, for example, that ad that's sitting there, and maybe the box where the ad is slightly growing and catching your attention, or maybe it's brighter than everything else, or say you're watching TV and the ad comes on and it's louder, that's intentional. In design we call these dark patterns: they utilize, basically hijack, the things your brain does automatically in favor of whoever is pushing that information.

So that's obviously not something we want to do, definitely not in our work. Our goal is to help physicians get through their day with as little distraction as possible and get through their tasks, but it is something that's definitely closely related to design.

Dr. Moore: Yeah, it’s a little frightening sometimes when I pull back the curtain and I realize that there’s some incredible science and experts, and sometimes they’re bent on malevolent intent. But what you’re talking about in healthcare is not. So give me an example within healthcare of the distraction, the extraneous load and how you think about working there.

Anna: Oh, gosh, it comes in so many forms. So let's take my favorite example, the electronic health record, because that's the world I work in. The physicians that we observe and study and make products for, I see them work with enormous amounts of information. There was actually a recent study by Mayo Clinic where they looked at the number of data points that a physician, an ICU physician specifically, needed to look at in order to assess the patient.

One thing that they found is that there are about 50,000 data points on the average ICU patient within the electronic health record. The truth is that all the physician needs, on average, within that ICU visit is about 60. Sixty out of 50,000 is roughly 0.1 percent, a vanishingly small fraction. So essentially, what we're talking about is finding a needle in a haystack. And it's not just the sheer volume; these systems are often designed to present you everything just in case, because they don't necessarily know which 0.1 percent you're going to be looking for.

So that results in systems that are extremely visually noisy. Everything is on the screen, every little thing that you could possibly want, as opposed to trying to tailor those screens and interactions to the particular task of that ICU physician in that moment. Doing that would let you employ techniques like, and I'm going to throw design terms around here, progressive disclosure. That's the idea of understanding what task the physician, the end user really, is likely to be doing at the moment, and only showing the information that's pertinent to the immediate next decision.

Think about your favorite mobile apps: one screen of your favorite mobile app is probably just one task. That's the idea. You disclose that, and then as the user takes the next step and says, "Oh, I'm interested in which medication this patient is taking," you expose all the other medications, as opposed to littering the screens with tables and buttons for every case in the world, for every particular workflow that they may or may not ever perform.
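A minimal sketch of that progressive-disclosure idea, in TypeScript. The chart shape, field names and render functions here are illustrative assumptions, not a real EHR schema: the initial view shows only what the immediate decision needs, and the detail is rendered only when the user explicitly asks for it.

```typescript
// Hypothetical chart shape; illustrative only, not a real EHR schema.
interface Medication { name: string; dose: string; }
interface PatientChart {
  name: string;
  activeProblems: string[];
  medications: Medication[]; // full list, hidden until asked for
}

// Initial, low-noise view: only what the immediate decision needs.
function renderSummary(chart: PatientChart): string {
  return `${chart.name} - active problems: ${chart.activeProblems.join(", ")}`;
}

// Detail view, rendered only after an explicit user action
// ("which medications is this patient taking?").
function renderMedications(chart: PatientChart): string {
  return chart.medications.map(m => `${m.name} ${m.dose}`).join("\n");
}

const chart: PatientChart = {
  name: "Example Patient",
  activeProblems: ["Type 2 diabetes"],
  medications: [{ name: "metformin", dose: "500 mg" }],
};

console.log(renderSummary(chart));     // shown first
console.log(renderMedications(chart)); // disclosed on demand
```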

Dr. Moore: That's so appealing and fascinating, but also worrisome to me. When I think about some of the algorithms built into online engines that understand I'm more likely to be interested in certain things, they bring me down a rabbit hole where I'm inside a subset of information that lessens my understanding of the whole world. So how is it that you can look at these tens of thousands of data elements in an ICU and say with confidence that these are the things that are important to that clinician at that moment?

Anna: Yeah, that's an excellent question, because I think that's the common criticism. You may have heard these design principles that people sometimes talk about, like every bit of information has to be accessible within, let's say, three clicks, some arbitrary number of clicks, as if it's a magic pill.

The trick here is not necessarily to tuck everything in, hide it away and make it uniform for every physician in the world, because we know there are personal workflow differences, and there are definitely specialty differences. The trick is to truly understand the target user, the person who is actually going to be using this. If we just sit in our ivory tower and try to decide, you know what, this is what the physician really needs at the moment a trauma patient arrives, we're probably going to be wrong.

So this is why I talked in the beginning about how my team has UX researchers, and even the people who are called designers consider research to be a huge part of their work. The idea is that you never design anything for a human unless you have truly seen how they work and have walked in their shoes. So we do this through a variety of techniques. We shadow people, we interview people, we do diary studies, we do so-called card sorts, for example. That's a great technique.

Oftentimes we take, not 50 million data points, but let's say a hundred data points that we don't know how to organize. Instead of bringing our own biases to this, we put them in front of target users and say, "How do you think about these? How do you organize them?" It's a simple exercise where they essentially shuffle Post-it notes, organize them into groups and name those groups in a way that matches their mental models.
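One common way to make sense of an open card sort like that is to count how often participants place two items in the same group; pairs that co-occur frequently are pairs users think of as belonging together. A rough TypeScript sketch, with invented item and group names, might look like this:

```typescript
// Count how often two items land in the same group across participants.
// Item and group names are invented for the example.
type CardSort = Record<string, string[]>; // group name -> items placed in it

const sorts: CardSort[] = [
  { "Diabetes care": ["HbA1c", "metformin"], "Vitals": ["blood pressure"] },
  { "Labs": ["HbA1c", "blood pressure"], "Meds": ["metformin"] },
];

function coOccurrence(allSorts: CardSort[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const sort of allSorts) {
    for (const items of Object.values(sort)) {
      for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
          const key = [items[i], items[j]].sort().join(" + ");
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
    }
  }
  return counts;
}

console.log(coOccurrence(sorts)); // e.g. "HbA1c + metformin" -> 1
```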

The point of the story here is that the goal of the designer is not necessarily to prescribe an interface, or prescribe that progressive disclosure, but rather to understand and reflect the user's mental models.

Dr. Moore: So give me an example, just a functional example of how this kind of thing would work in an electronic medical record. Because I know that there are huge EMRs out there and I would figure that they have this nailed, or what’s your experience with that?

Anna: They don't. There's been improvement, and some of them have been focusing on narrow workflows that they've been ironing out. Observing physicians using the same EMR over the years, I have definitely noticed improvement in some areas. But one example that's maybe easy to explain is problem-based charting. As a physician, I'm sure you know, you're taught to think about the problems that patients have.

The patient presents, and they are diabetic, or maybe they have some kind of injury. You're thinking about that injury, you're thinking about managing that diabetes, and everything else revolves around that, so you prescribe certain medications and certain labs, and all those things relate to the treatment of that diabetes.

So your mental model is, I have a diabetic patient who maybe also has some complications. The way the electronic health record system often thinks about it is more in terms of a database structure, a heavily chunked-up system that segregates the medication module from the diagnosis module, and both from the narrative documentation about that diagnosis. So you basically have to hop through three, four, ten different screens in order to document one case of diabetes.

What some EHRs have started doing is introducing these problem-based documentation ideas into the workflows to say, you know what, let's just deal with this diabetes, describe everything around it and bring the information pertinent to that diabetes close to the moment when you're describing it or looking it up.
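As a rough illustration of the difference, here is a small TypeScript sketch (the types, codes and sample values are made up for the example) that takes data stored module by module, orders in one place and notes in another, and assembles it into a single problem-oriented view, so everything about one diagnosis sits in one place:

```typescript
// Data stored module by module (a database-shaped model)...
interface Problem   { code: string; label: string; }
interface Order     { kind: "medication" | "lab"; name: string; problemCode: string; }
interface NoteEntry { text: string; problemCode: string; }

// ...assembled into one problem-oriented view for documentation.
interface ProblemView { problem: Problem; orders: Order[]; notes: NoteEntry[]; }

function buildProblemView(problem: Problem, orders: Order[], notes: NoteEntry[]): ProblemView {
  return {
    problem,
    orders: orders.filter(o => o.problemCode === problem.code),
    notes: notes.filter(n => n.problemCode === problem.code),
  };
}

const diabetes: Problem = { code: "E11", label: "Type 2 diabetes" };
const orders: Order[] = [
  { kind: "medication", name: "metformin", problemCode: "E11" },
  { kind: "lab", name: "HbA1c", problemCode: "E11" },
];
const notes: NoteEntry[] = [{ text: "Diet counseling provided.", problemCode: "E11" }];

// One place to see and document everything about this one problem.
console.log(buildProblemView(diabetes, orders, notes));
```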

Dr. Moore: That’s interesting. The way you described that, the EMR database model, all of a sudden, that makes so much sense when I talk to colleagues and hear from them about their frustration in clicking through different things in EMRs to find stuff and how challenging that is. I’ve heard a lot of frustration about that from colleagues where they tend to focus a lot of their anger at what they see as functional gaps. I don’t know that that’s the source of all their pain. It just may be where they’re focusing. Is that the cognitive load you were talking about earlier?

Anna: Yes, absolutely. There was actually a survey done recently, I believe in 2019 by HIMSS, that found that 77 percent of physicians identified documenting and charting in the EHR as their source of cognitive overload. And that's a huge problem, because what we haven't talked about is, okay, so cognitive load, what's so bad about it? We're humans, we have these great brains, we're built to think and solve problems. The problem is not really the cognitive load itself, it's the cognitive overload. It's what happens to your body, to your brain, to your thinking patterns when you reach the point where you can't process information efficiently.

I mentioned in the beginning how cognitive load specifically relates to the limitations of short-term working memory. It's limited both in time, it's short, as the name says, and in capacity. For example, there is a well-known study by George Miller back in the '50s, before cognitive load theory was even developed. What George Miller found is the so-called Miller's Law. You may have heard of it; it's referred to as the Magic Number Seven, Plus or Minus Two.

Sounds kind of funky, but essentially, what it says is that an average human being with average capacity can only hold seven, plus or minus two, units of information in their short-term memory, so five to nine. What kind of implications does that have? Let's say you're a practicing physician and you need to remember things: what did your patient just tell you, what did the nurse just tell you, what was the name of that drug, what was that ICD-10 code, and maybe a couple more things, and then you need to remember some information from the previous screen in order to enter it, let's say, for billing purposes.

Now we are reaching the point where we're getting beyond my seven plus or minus two, and you have to remember that if you are under stress, your limit is probably going to be seven minus two, so five units. So you're going to start forgetting things, which means you're going to start taking shortcuts. And shortcuts obviously lead to mistakes, to psychological stress, even to stereotyping, like what we were talking about before we started recording here.

Dr. Moore: Yeah, that’s interesting. You talked about the shortcuts. It reminds me of Thinking, Fast and Slow by Kahneman and talking about his work with Tversky on the thinking fast or the mental heuristics or shortcuts we take that are just sort of automated responses to things, which serve us well and have for millennia, but not always. And there are times when we will go through that automated thinking, which does not necessarily take into account nuanced and other information we might know about something, and so we come to a rapid but sometimes erroneous conclusion.

Anna: Exactly. And design definitely has a role to play here. We can capitalize on those shortcuts. So for example, what a good design would do is introduce things like good defaults and predefined steps. There are many techniques like this, like opt-in versus opt-out modalities for information entry, and checklists, obviously, which are used in medicine for very similar reasons.

But basically the idea here is that you can capitalize on this ability of the brain to shortcut things. As long as you put down little stepping stones, easy shortcuts that lead to the desired behavior at the end of the day, you are actually going to be able to direct that mental capacity towards the task people should be thinking about. They shouldn't be thinking about which item to select from a dropdown if that's not the point of the task, right? The point of the task is to do something else; the dropdown is sort of in the way.

So if you select a good default for it and all I have to do is visually confirm it and say, "Yup, that looks right," assuming people don't abuse that system and use it for its intended purposes, that's a very powerful design trick.
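A tiny TypeScript sketch of that good-defaults idea; the medication, route and frequency values are invented for illustration, and in a real system the defaults would come from observed prescribing patterns rather than a hard-coded rule:

```typescript
// Pre-fill the likely values so the user only has to confirm or override them.
// The medication, route and frequency values are invented for illustration.
interface OrderForm { route: string; frequency: string; }

function defaultOrderForm(medication: string): OrderForm {
  // In a real system this would come from observed prescribing patterns,
  // not a hard-coded rule.
  if (medication === "metformin") {
    return { route: "oral", frequency: "twice daily" };
  }
  return { route: "oral", frequency: "once daily" };
}

// The clinician sees the pre-filled form, visually confirms it,
// and only edits the fields that differ from the default.
const form = defaultOrderForm("metformin");
console.log(form); // { route: "oral", frequency: "twice daily" }
```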

Dr. Moore: So the challenge I think about with the clinician in the ICU, going back to that example, or the number of distractions that are inherent in the day-to-day work, and when I think about, for instance, the need of, “I’m working on this patient and I’m working with this nurse and this consultant,” and maybe a family member, we’re going to try to go through some complicated things, and then I get an alert from somebody else, somebody in the next room or down the hall, for instance. It’s like I have to pull myself out of that context, spin up this other context and then get into work.

I’m wondering, is context shifting one of those aspects of cognitive load and overload?

Anna: Yeah, you touched on a lot of elements there, yes, absolutely. So context shifting is huge, but you also touched on alert fatigue, which is near and dear to my heart. We've been working on that problem for close to a decade now. That happens. And unfortunately, some of this comes with the profession, and that is something that we expect from our doctors. It's something that we actually want them to think about, to be able to care for multiple people at the same time.

So this sort of goes back to what we are going to treat as a problem that we can solve, what we are going to treat as extraneous cognitive load versus what we are going to say is intrinsic to the task itself, and then we try to remove the other barriers.

That said, in your particular example here, could we do better? Absolutely. We could, for example, whenever you come back to that same patient that you had to rapidly leave, maybe to look at a different patient's chart, give you a good summary of where you were. Could we drop you in the same place where you left off instead of on the "homepage" of that original patient you navigated away from? That does not happen in the systems that we see. Everything is sort of stateless.

If you think about how humans interact, we never interact that way. If, instead of working in the EHR, you were working with another human who was helping you document, and you had to context switch, you would very clearly tell that human, "Hey, let's go back to the previous patient." And this other human who's helping you would probably say, "Oh yeah, let's see, what did we do? We already placed the medications. Oh, we were talking about labs."

This is a great interaction model that I’m hoping we will get to and we can model off of for our digital systems.
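A minimal sketch of what that stateful, pick-up-where-we-left-off interaction could look like in TypeScript; the patient IDs and the shape of the saved location are assumptions made up for the example:

```typescript
// Remember where the user was in each patient's chart so that coming back
// restores the task in progress instead of the "homepage". Shapes are invented.
interface ChartLocation { screen: string; step: string; }

const lastLocation = new Map<string, ChartLocation>(); // patientId -> where they left off

function leavePatient(patientId: string, location: ChartLocation): void {
  lastLocation.set(patientId, location);
}

function resumePatient(patientId: string): ChartLocation {
  // Fall back to a neutral starting screen only if there is nothing to restore.
  return lastLocation.get(patientId) ?? { screen: "summary", step: "start" };
}

leavePatient("patient-A", { screen: "orders", step: "reviewing labs" });
// ...an interruption pulls the clinician to another patient, then they return:
console.log(resumePatient("patient-A")); // { screen: "orders", step: "reviewing labs" }
```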

Dr. Moore: Is that a pie in the sky or is there any chance of that happening?

Anna: No, I think this might be closer than we think, actually. The interesting thing is that we have all the data. It's all digitized, and it was digitized for the purpose of providing better workflows. We are technically able to track every step and know what you clicked on and what you didn't, and there is no reason we can't orchestrate workflows like these, other than that there aren't a lot of incentives to do it.

Dr. Moore: So you’re giving me a ray of hope here, which is great. I hear about cognitive load and cognitive overload and context switching and distractibility, and now I’m hearing that there may be a future where things are better. I certainly am getting a better understanding that there are experts like you and teams that are versed in science and research to understand things better. Do you have any examples of the kinds of things that you’ve been talking about and the impact they’ve had?

Anna: Yes, of course. I'm struggling with the scope here. I feel like we solve problems like these every day, and some of them are so small that we sometimes fail to even notice that that's what we're doing. But it's important to realize that every single time you make a decision about a design, that's exactly what you're doing: you're affecting somebody's workflow. It might seem completely mundane for us to design a screen and put a button on it, and make it blue or make it grey, put it on the right side or the left side, or make it a radio button instead, or make it a link.

Those are completely mundane design decisions that can sometimes be treated as artistic or aesthetic choices, but they aren't. With every single one of those choices, you have the power and the freedom to make a better decision and facilitate that larger workflow. I'm not sure if I'm answering your question here, though.

Dr. Moore: You are, actually, but I'm thinking, the way you're describing this, you're describing some very discrete, small changes in the context of, are you making the world a better place? And I'm hearing that link, and I'm fascinated by the idea that a radio button versus another type of control would actually have that level of impact. Is that what I'm hearing?

Anna: You are, yeah, absolutely. Because there's never going to be just one radio button. It's going to be a sea of them. If you look at certain systems that physicians, and especially nurses, need to deal with, nurses handle crazy, just crazy amounts of discrete data entry. It's never one radio button, it's hundreds of them that all look absolutely identical, or that look different for no good reason, which also takes up extra mental processing power.

So you have to orchestrate the path of the user through this, especially if they are doing the task repetitively. I often hear designers talking about software needing to be intuitive, to have a lot of white space, to be all airy and easy to use so that a child could pick it up. That's great, maybe, for a consumer-level application, but we're talking about enterprise software here.

Essentially what that means is that that particular nurse would probably interact with that particular workflow 15 times a day, maybe more, day after day, every day of their life. So making a mistake is costly, but making a mistake is also easy because we are building the systems in such a repetitive fashion.

I guess what I’m saying is that yes, every radio button matters, and of course, you can take that same kind of thinking and scale it to a larger systems level and say, well, are we even solving the right thing? Do we even need to be asking people to fill out dropdowns and radio buttons? Is this even the right way to collect this information? Do we need to be collecting that information in the first place?

Dr. Moore: So when you look at aggregating the impact of this over time, I would expect that it would actually be noticeable to the end user. They would say it's actually easier. And I'm trying to think of examples in my life when I've run across things that were just so nice and straightforward and easy that they really made the task flow, and others that were grindingly difficult and frustrating, and I'm guessing from our conversation that a lot of that may come down to very thoughtful research around design.

Anna: Absolutely, yeah. Both what we call generative research, understanding, to begin with, those mental models, how a person thinks, how they think about the task and its flow, that is all very important. But equally important is the evaluative research. So we do things like usability studies, where we essentially put our software and our ideas to the test.

We say, okay, we researched the problem, we think we're solving the right thing, we think we have an okay solution. But is it really working? Because oftentimes, it seems like we have followed every step correctly, we may have already implemented the system, we're ready to deploy, and there is this temptation to just let it be, let it go into the world. But it's an extremely useful step to actually put it in front of people and watch them use it.

It turns out people just never use your software the way you designed it, for a good reason. Essentially that typically reflects the fact that we can never understand people well enough in that early stage. People will always come up with workarounds or another edge case we didn’t think about, or they might be in a specific mode where they’re dealing with, for example, context switching, that maybe we didn’t see when we were researching ahead of time.

So essentially, we sit people down and say, "Go ahead, use it and talk me through it. Tell me why you clicked on this thing and why you didn't, and tell me what you were looking for." We do things like measure task efficiency, how long it takes people to get from one point to another. We ask them to think out loud, sort of like, "Oh, this is what I'm looking for. I'm not finding it. I guess I'll try this one." And every one of those cues is a key for us that says, maybe we used the wrong jargon, maybe we're not exactly matching the mental model, or maybe, again, we're solving the wrong problem to begin with. Maybe they're just trying to use this workflow to facilitate a whole different task that we didn't dig deep enough to uncover.
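For the task-efficiency part of that, even a very simple timer around each scripted task is enough to compare designs; a rough TypeScript sketch, with an invented task name, might be:

```typescript
// Time each scripted task from start to completion; the task name is invented.
interface TaskTiming { task: string; startMs: number; endMs?: number; }

const timings: TaskTiming[] = [];

function startTask(task: string): void {
  timings.push({ task, startMs: Date.now() });
}

function endTask(task: string): void {
  const open = timings.find(t => t.task === task && t.endMs === undefined);
  if (open) open.endMs = Date.now();
}

function report(): void {
  for (const t of timings) {
    if (t.endMs !== undefined) {
      console.log(`${t.task}: ${((t.endMs - t.startMs) / 1000).toFixed(1)}s`);
    }
  }
}

startTask("find the latest HbA1c result");
// ...the participant works through the task, thinking aloud...
endTask("find the latest HbA1c result");
report();
```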

Dr. Moore: That is interesting. My background is in a lot of process improvement, walking around and looking at analog, real-life patient flow and how people work together. And in the early days of the electronic medical record, the promise was, "Oh, this is going to digitize everything. We don't need paper anymore." Then you'd walk in and see all this paper, all these notes people were writing and all these Post-its people were sticking all over the place. And I'm thinking, wait a second, I thought this was the be-all and end-all, that we didn't need paper.

Anna: Yeah, exactly, but guess what, it's way easier to find something on the yellow Post-it stuck to your monitor than to dig through some kind of search interface.

Dr. Moore: Well, I have to say, this has given me hope. When I peel back the curtain and I see that there are experts like you and your team and others who are this thoughtful and this research-driven about what you're doing, I think the world is heading in the right direction, at least in this context. Thank you so much for doing this work.

Anna: Sure, absolutely.

Dr. Moore: Thank you very much for your time today.

Anna: Great talking to you.
