HCI

Adopting The Right UX Perspective

UX is about just that: user experience. Not software design, but the experience the user feels. While watching a promotional video for a new calendar app, Peek, I realized my perspective on UX was too software-centric.

About 38 seconds into this video, the girl taps on an item (the hour picker) to go into a submenu (choosing which hour of the day). Instead of two separate taps on two screens, Peek implemented a tap and hold, followed by a slide across the screen to the desired location, and a release.

This UX model of tap, hold, slide, and release (THSR) won't work in many places. In this case, the THSR model works because the submenu has a few unique characteristics:

1) It's a simple menu with a fixed number of easily understood choices
2) It's frequently used (by a heavy calendar user)
3) It's a "one and done" list

As I've thought about software designs that I interact with, I've usually thought in terms of taps and swipes. Peek demonstrates that my frame of reference didn't capture the subtleties of how the user actually interacts with the app. I should have been thinking in terms of touches and removals. A tap is a touch and a removal.

Had I been responsible for designing the UX in Peek, I would likely have implemented the following:

tap
remove
move finger to new screen location
tap
remove

Peek implemented:

tap
move finger to new screen location
remove

Genius. Peek removed two steps from a five-step process.
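
To make the difference concrete, here is a minimal sketch of the THSR model written in Kotlin against Android's touch events, purely for illustration; the hour-slot lookup and the commit callback are hypothetical helpers, not Peek's actual code. A touch opens the submenu, the slide tracks the finger, and the removal commits the selection.

    import android.view.MotionEvent
    import android.view.View

    // Sketch of tap-hold-slide-release (THSR). hourSlotAt and commitHour are
    // hypothetical helpers, not Peek's code.
    class ThsrHourPicker(
        private val hourSlotAt: (x: Float, y: Float) -> Int?,
        private val commitHour: (hour: Int) -> Unit
    ) : View.OnTouchListener {

        private var highlightedHour: Int? = null

        override fun onTouch(v: View, event: MotionEvent): Boolean {
            when (event.actionMasked) {
                MotionEvent.ACTION_DOWN,          // touch: the submenu opens in place
                MotionEvent.ACTION_MOVE -> {      // slide: highlight the hour under the finger
                    highlightedHour = hourSlotAt(event.x, event.y)
                }
                MotionEvent.ACTION_UP -> {        // removal: releasing the finger is the selection
                    highlightedHour?.let(commitHour)
                }
            }
            return true
        }
    }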

Moral of the story: in UX, break everything down into the most granular steps possible. Rather than thinking about the design of the software, think about each physical and mental process the user must walk through in order to derive value from the software.

Why Will Medical Professionals Use Laptops?

This post was originally featured on EMRandHIPAA.

Steve Jobs famously said that “laptops are like trucks. They’re going to be used by fewer and fewer people. This transition is going to make people uneasy.”

Are medical professionals truck drivers or bike riders?

We have witnessed truck drivers turn into bike riders in almost every computing context:

Big businesses used to buy mainframes. Then they replaced mainframes with minicomputers. Then they replaced minicomputers with desktops and servers. Small businesses began adopting technology in meaningful ways once they could deploy a local server and clients at reasonable cost inside their businesses. As web technologies exploded and mobile devices became increasingly prevalent, large numbers of mobile professionals began traveling with laptops, tablets, and smartphones. Over the past few years, many have even stopped traveling with laptops; now they travel with just a tablet and smartphone.

Consumers have been just as fickle, if not more so. They adopted build-it-yourself computers, then Apple IIs, then mid-tower desktops, then laptops, then ultra-light laptops, and now smartphones and tablets.

Mobile is the most under-hyped trend in technology. Mobile devices – smartphones, tablets, and soon, wearables – are occupying an increasingly larger percentage of total computing time. Although mobile devices tend to have smaller screens and less robust input methods than traditional PCs (see why the keyboard and mouse are the most efficient input methods), mobile devices are often preferred because users value ease of use, mobility, and access more than raw efficiency.

The EMR is still widely conceived of as a desktop app with a mobile add-on. A few EMR companies, such as Dr Chrono, are mobile-first. But even in 2014, the vast majority of EMR companies are not mobile-first. The legacy holdouts cite battery, screen size, and lack of a keyboard as reasons why mobile won't eat healthcare. Let's consider each of the primary constraints and the innovations happening along each front:

Battery – Batteries are the only computing component that isn't doubling in performance every 2-5 years; battery density continues to improve at a measly 1-2% per year. The battery challenge will be overcome through a few means: huge breakthroughs in battery density, and increasing efficiency in the most battery-hungry components, screens and CPUs. We are on the verge of the transition to OLED screens, which will drive an enormous improvement in screen energy efficiency. Mobile CPUs are also about to undergo a shift as OEMs' values change: mobile CPUs have become good enough that the majority of future CPU improvements will emphasize battery performance rather than increased compute performance.

Lack of a keyboard – Virtual keyboards will never offer the speed of physical keyboards. But the laggards miss the point: providers won't have to type as much. NLP is finally allowing people to speak freely. The problem with keyboards isn't the characteristics of the keyboard, but rather the existential presence of the keyboard itself. Through a combination of voice, natural language processing, and scribes, doctors will type less and yet document more than ever before. I'm friends with the CEOs of at least half a dozen companies attempting to solve this problem across a number of dimensions. Given how challenging and fragmented the technology problem is, I suspect we won't see a single winner, but a variety of solutions, each with unique compromises.

Screen size – We are on the verge of foldable, bendable, and curved screens. These traits will help resolve the screen size problem on touch-based devices. As eyeware devices blossom, screen size will become increasingly trivial because eyeware devices have such an enormous canvas to work with. Devices such as the MetaPro and Atheer One will face the opposite problem: data overload. These new user interfaces can present extremely large volumes of robust data across three dimensions. They will mandate a complete re-thinking of presentation and user interaction with information at the point of care.

I find it nearly impossible to believe that laptops have more than a decade of life left in clinical environments. They simply do not accommodate the ergonomics of care delivery. As mobile devices catch up to PCs in terms of efficiency and perceived screen size, medical professionals will abandon laptops in droves.

This raises the question: what is the right form factor for medical professionals at the point of care?

To tackle this question in 2014 – while we’re still in the nascent years of wearables and eyeware computing – I will address the question “what software experiences should the ideal form factor enable?”

The ideal hardware* form factor of the future is:

Transparent: The hardware should melt away and the seams between hardware and software should blur. Modern tablets are quite svelte and light. There isn't much more value to be had by improving the portability of modern tablets; users simply can't perceive the difference between 0.7 lb and 0.8 lb tablets. However, there is enormous opportunity for improvements in portability and accessibility when devices go handsfree.

Omni-present, yet invisible: There is way too much friction separating medical professionals from the computers that they’re interacting with all day long: physical distance (even the pocket is too far) and passwords. The ideal device of the future is friction free. It’s always there and always authenticated. In order to always be there, it must appear as if it’s not there. It must be transparent. Although Glass isn’t there just yet, Google describes the desired paradox eloquently when describing Glass: “It’s there when you need it, and out of sight when you don’t.” Eyeware devices will trend this way.

Interactive: Despite their efficiency, PC interfaces are remarkably un-interactive. Almost all interaction boils down to a click on a pixel location or a keyboard command. Interacting with healthcare information in the future will be diverse and rich: natural physical movements, subtle winks, voice, and vision will all play significant roles. Although these interactions will require some learning (and un-learning of bad behaviors) for existing staff, new staff will pick them up and never look back.

Robust: Mobile devices of the future must be able to keep up with medical professionals. The devices must have shift-long battery life and be able to display large volumes of complex information at a glance.

Secure: This is a given. But I'll emphasize it as physical security becomes increasingly important in light of the number of unencrypted hospital laptops being stolen or lost.

Support 3rd party communications: As medicine becomes increasingly complex, specialized, and team-based, medical professionals will share even more information with one another, patients, and their families. Medical professionals will need a device that supports sharing what they’re seeing and interacting with.

I’m fairly convinced (and to be fair, highly biased as CEO of a Glass-centric company) that eyeware devices will define the future of computer interaction at the point of care. Eyeware devices have the potential to exceed tablets, smartphones, watches, jewelry, and laptops across every dimension above, except perhaps 3rd party communication. Eyeware devices are intrinsically personal, and don’t accommodate others’ prying eyes. If this turns out to be a major detriment, I suspect the problem will be solved through software to share what you’re seeing.

What do you think? What is the ideal form factor at the point of care?

*Software tends to dominate most health IT discussions; however, this blog post is focused on ergonomics of hardware form factors. As such, this list avoids software-centric traits such as context, intelligence, intuition, etc.

Overcoming the Challenge of Checklists: Access

In The Checklist Manifesto, Dr. Atul Gawande outlines some of the challenges associated with implementing checklists in clinical environments.

Checklists cannot take longer than 90-120 seconds to complete
Checklists have to assume a basic level of competency; they cannot be too basic or menial
Checklists must be contextual in light of a variety of clinical scenarios and workflows
Checklists must be either READ-DO or DO-CONFIRM. A given checklist cannot mix and match READ-DO and DO-CONFIRM items.

Every medical professional we've interacted with - both clinical and administrative - understands the value of checklists. We have yet to encounter anyone who doesn't understand or believe in the value that checklists create.

Checklists can be used in any context in which there's a repeatable set of steps and the cost of forgetting a step can be substantial. There are hundreds of workflows in hospitals in which forgetting a step can be detrimental to patient outcomes.

Despite this, adoption of checklists has been remarkably slow. Checklists are still only used in a narrow set of clinical environments. Why? Why aren't checklists being adopted in pharmacies, labs, drug administration, or the ER?

People don't like doing more stuff. Medical professionals (MPs) are already overburdened with clinical documentation, meaningful use, defensive practices, etc. Although checklists can materially improve outcomes in many settings, they also introduce friction into existing workflows. As such, providers have only been adopting checklists in settings in which the cost of being wrong is extraordinarily high. Surgery is the highest-acuity and riskiest avenue of care, but it's not the only one that can materially benefit from checklists.

How can we reduce the friction that checklists introduce? Let's consider the steps involved in completing a checklist:

First, the MP must recognize that a checklist should be used; second, the MP must physically access the checklist, which may be on paper, a wall, or a computer; third, the MP must complete each item of the checklist and document that each step was completed.

Pristine isn't tackling the first point of friction, yet. But we are dramatically reducing the friction required to complete items #2 and #3. By reducing friction, we are driving improved compliance, and ultimately improved outcomes and reduced costs. How do we reduce friction?

While wearing Pristine Glass, MPs just have to gently rock their heads back and say:

"Ok Glass, start central line checklist"
"Ok Glass, start IV checklist"
"Ok Glass, start intubation checklist"

With Pristine CheckLists, MPs can access checklists without thinking, without going anywhere, and without using their hands. Pristine CheckLists dramatically reduce the friction between MPs and checklists.

Once the checklist has been initiated, MPs can navigate checklists with contextual voice commands such as:

"Washed hands"
"Prepped site with aseptic technique"
"Wore sterile gloves"

With Pristine CheckLists, MPs can access and complete checklists without interrupting their workflow. MPs can interact with and complete checklists while providing care. Pristine CheckLists represent an enormous leap forward in access and ease of use that will drive adoption of checklists in many places where they simply weren't practical or possible before.
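
As a rough illustration of how contextual voice commands like these might map onto a checklist, here is a small Kotlin sketch. It is not Pristine's actual code; the checklist model, the speech-result hook, and the speak() callback are all assumptions made for the example.

    // Not Pristine's actual code: a sketch of checking off items from recognized speech.
    data class ChecklistItem(val phrase: String, var done: Boolean = false)

    class VoiceChecklist(
        private val items: List<ChecklistItem>,
        private val speak: (String) -> Unit
    ) {
        // Called with each recognized utterance, e.g. "washed hands".
        fun onSpeechResult(utterance: String) {
            val item = items.firstOrNull { !it.done && utterance.contains(it.phrase, ignoreCase = true) }
                ?: return
            item.done = true                              // document the completed step
            val next = items.firstOrNull { !it.done }
            speak(next?.phrase ?: "Checklist complete")   // read back the next step, hands-free
        }
    }

Fed the items above, a phrase like "prepped site with aseptic technique" would check off the matching step and prompt the next one aloud, without the MP touching anything.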

Communication is a Means to an End

Healthcare delivery is perhaps the most fragmented service on Earth. Medicine continues to fragment and specialize further every year. In The Checklist Manifesto, Dr. Atul Gawande joked that surgeons are specializing in left ear and right ear surgery. Healthcare delivery is fragmented across medical disciplines, job classes, job functions, geographies, and even within and among buildings on a medical campus.

Pristine envisions a future in which medical professionals communicate seamlessly with one another without thinking. Eyeware computers such as Google Glass will be the enabling technology.

Let's examine a few use cases:

For a general consult: "OK Glass, start an EyeSight call with Dr. Smith."

For a derm consult: "OK Glass, start an EyeSight call with a dermatologist."

With a CRNA wearing Glass in the OR: "OK Glass, start an EyeSight call with an anesthesiologist."

For a concerned nurse: "OK Glass, text Sally 'the patient in room 3 is doing fine.'"

For a physician in clinic: "OK Glass, text Dr. Johnson 'we discharged the patient in room 5.'"

For an EMT in the field: "OK Glass, start an EyeSight call with a trauma specialist, stat."

For a wound care nurse: "OK Glass, start an EyeSight call with a wound care specialist."

For an intensivist resident: "OK Glass, start an EyeSight call with my attending."
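
Requests like these could plausibly be routed either to a named colleague or to any available clinician in a role. Here is a hedged Kotlin sketch of that routing; the directory model and the call-placement callback are hypothetical, not Pristine's implementation.

    // Hypothetical directory model and call-placement callback, for illustration only.
    data class Clinician(val name: String, val role: String, val available: Boolean)

    fun routeEyeSightCall(
        request: String,                    // e.g. "start an EyeSight call with a dermatologist"
        directory: List<Clinician>,
        startCall: (Clinician) -> Unit
    ) {
        // A specific name ("Dr. Smith") wins; otherwise fall back to any available clinician in the role.
        val target = directory.firstOrNull { request.contains(it.name, ignoreCase = true) }
            ?: directory.firstOrNull { it.available && request.contains(it.role, ignoreCase = true) }
        target?.let(startCall)
    }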

Glass presents the foundation to support the ultimate Pristine communication platform. Communication platforms have traditionally imposed a significant cost on medical professionals: they require hands. But in many circumstances, medical professionals can't and shouldn't use their hands, even when they need to communicate with others. Pristine's handsfree communication platform will open new communication channels.

Communication is a means to an end, not an end in and of itself. The most important result of seamless communication in medicine is that patients will have more access to better, more cost effective care. Communication lies at the crux of the triple aim: cost, quality, and access.

Speak Freely

I began blogging in January 2013 as a New Year's resolution. During that first month of blogging, I wrote brief essays on the power of the keyboard and mouse in human-computer interaction models:

Optimizing the keyboard

Optimizing the mouse

The resurgence of the command line UI

At the mHealth Summit, I recently had a chance to play with Nuance's and VoiceFirst's (by Honeywell) latest voice solutions. I was thoroughly impressed. The key function that caught my attention was that the Nuance app parsed a single spoken phrase into multiple functions. An example:

From the patient selection screen I said "order 500mg Levaquin for John Smith."

Nuance first recognized that John Smith was admitted and in the current patient list. Next, it opened John Smith's chart. Then it displayed a new screen with the details of the pending order on the top half of the screen and a listing of existing orders and allergies on the bottom half. Lastly, it prompted the provider to fill in the rest of the required fields - route, frequency, etc.
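
Here is my rough guess, as a Kotlin sketch, at what that parsing step might look like. It is not Nuance's implementation; the DraftOrder structure and its field names are assumptions made for illustration.

    // A guess at the parsing step, not Nuance's implementation.
    data class DraftOrder(
        val drug: String,
        val dose: String,
        val patient: String,
        val missingFields: List<String>
    )

    fun parseSpokenOrder(utterance: String): DraftOrder? {
        // e.g. "order 500mg Levaquin for John Smith"
        val match = Regex("""order\s+(\d+\s*mg)\s+(\w+)\s+for\s+(.+)""", RegexOption.IGNORE_CASE)
            .find(utterance) ?: return null
        val (dose, drug, patient) = match.destructured
        // Route and frequency weren't spoken, so the provider is prompted for them next.
        return DraftOrder(drug, dose, patient.trim(), missingFields = listOf("route", "frequency"))
    }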

In a separate demo, the Nuance rep showed me voice-print based authentication, aka logging in with your voice. If you combine the two above, doctors wouldn't have to sign orders so long as they spoke them. The EMR would know who placed the order based on voice, and the order would be authenticated against one's voice. Awesome.

The point of this post isn't to praise Nuance. It's to postulate on the future of voice based interfaces in medicine.

Voice is an interesting beast. Designing UXs that heavily incorporate voice can significantly alter UI design. For example, voice can help solve the 'there's too many buttons on the screen' problem. Just get rid of the buttons. If you don't want tabs for labs, vitals, allergies, meds, immunizations, etc, then get rid of all of them and make them accessible via voice.

That's a powerful concept. With voice, UIs don't have to be exclusively bound by the pixels on the screen. There are still some pixel-based limits, but voice can virtually extend the pixels. So long as the voice command is contextual and intuitive for the user, buttons can be removed.
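
A minimal Kotlin sketch of the idea, assuming a hypothetical set of chart sections and a showSection callback: the tabs disappear from the screen, but each section remains one spoken word away.

    // Hypothetical chart sections and navigation callback.
    enum class ChartSection { LABS, VITALS, ALLERGIES, MEDS, IMMUNIZATIONS }

    class VoiceNavigator(private val showSection: (ChartSection) -> Unit) {
        fun onSpeechResult(utterance: String) {
            ChartSection.values()
                .firstOrNull { utterance.contains(it.name, ignoreCase = true) }
                ?.let(showSection)          // the spoken word is the button
        }
    }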

Voice also opens incredible new opportunities for the patient to enter data him- or herself without even knowing it. So long as the provider guides the patient and asks the right questions, the patient could fill out the EMR as they speak. See the example below.

[Image: EMR screenshot]

In this example, the provider could prompt the patient: "So tell me a bit more about your [complaint]. How long have you had it? Where does it hurt? Any associated symptoms?" The NLP could pick up that this is obviously referring to the HPI, and then dictate the patient's response into the HPI field. The CC and HPI are supposed to be patient-reported anyway.

Looking at the two examples above, it's clear that voice will be a highly contextual UI concept. Although this is intrinsically true in visual UIs (keyboard / mouse and touchscreen), it's worth repeating in a voice-driven UX because voice doesn't appear to be bounded by what the user can see. For developers, this means a few things:

Don't try to design voice commands to be generic across the entire application. And don't try to show available voice commands on the screen. Assume users learn the voice cues over time. Train them and provide subtle visual cues to encourage them to use voice. Help them explore.

Try to really understand context. Context was previously bounded by screen real estate; it no longer is. For example, while looking at a given patient's labs, the user may want to jump to meds. Don't force the user back up the tree to the dashboard before they can navigate to meds. (Note: most EMRs aren't architected to support this. Many are intrinsically tree-based. In these instances, they'd need to simulate the two steps programmatically, as in the sketch below.)
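
A tiny Kotlin sketch of that workaround; both navigation callbacks are hypothetical, and the point is simply replaying the steps the user would otherwise perform by hand.

    // Both callbacks are hypothetical; the voice jump replays the tree navigation.
    fun jumpFromLabsToMeds(navigateUpToDashboard: () -> Unit, openMeds: () -> Unit) {
        navigateUpToDashboard()   // labs -> patient dashboard
        openMeds()                // dashboard -> meds
    }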

Use voice consistently. Although voice can remove the need for certain buttons / tabs, it should still be available for the navigational elements that remain on screen. If one navigation item can be selected via voice, all comparable items should be.

All voice commands and triggers should be one or two words. Given that voice recognition is 90-95% accurate per word, if 1 in 10 voice commands fails, that's probably OK. Users won't mind repeating one or two words (but they will be frustrated repeating two sentences).

If your company is doing awesome stuff with voice, please let me know. I want to learn about it.