Slower and More Slurred…
Hi, I’m Sophia, co-creator of the Problem Collective with Ojen, and this is our very first post. Let’s see how it goes!
I’m supposed to be studying, but naturally I’m daydreaming, or just thinking about something else. And what am I thinking about, specifically? Speech-recognition technology.
I am a 33-year-old woman with disabilities. I have a condition called Ataxia from a bleed on my brain stem. Ataxia is a neuro-movement disorder that causes me to have a lot of shaky and jerky movements. It causes “a lack of coordination in the muscles.” Ataxia is not the only disability I have. I also have an eye condition called Nystagmus, which causes involuntary eye movements. Nystagmus is sometimes, and romantically, referred to as “dancing eyes.” I think “pain-in-the-ass eyes” is more honest, but who wants honesty these days?
I wrote all of that for a reason: my issues with speech-recognition technology. I have the Apple iPhone X, by the way. Aside from the obvious, I’m a fairly regular person, and I have all the guilt and shame of a regular person too. I would love to text someone in public without shouting private things at the top of my lungs. Better voice recognition would have the added benefit of helping me write texts faster. I don’t write texts fast enough, which may seem trite, but when the pressure’s on, I freeze. I know I sound kind of spoiled, but this is not some spoiled girl’s rant. This is an issue for many people with disabilities. And for many people in general. I just hope that someone who can help sees this.
Sending a text to my Uber driver in a desired time frame is something I wish I could do.
About two years ago, ya know, when things actually happened, I was at school when I ordered an Uber ride. I had to go to the disability office to speak to a counselor quickly, and I did the really intelligent thing and ordered the Uber before I was out. For once, and at the very wrong time, it was only two minutes away, but I hadn’t even seen the counselor yet! I had to let that Uber go, and I waited until I had gotten to the designated Uber pick-up spot to order another one. This time it was 11 minutes away. So, I waited. And waited. I saw that the sun was starting to go down. I checked my phone a couple of times. The last time I checked it, I saw that the Uber had arrived. But it was nowhere in my line of vision. After looking around for a minute or two, I spotted it like some kind of explorer, like Magellan, or like Neil Armstrong or something. The car was about 100 feet away. I tried calling the driver twice to tell him to wait for me. He didn’t answer the phone. Then I tried texting him using voice commands, since I can’t physically text fast anyway, especially when the pressure is on.
Between the thought of losing my ride and my Ataxia kicking in, my speech patterns became slower and more slurred.
While I was talking to Siri, I knew she was supposed to give me a sound or on-screen indication that she could hear me, but I couldn’t tell if anything was happening. Maybe I missed the sign, maybe she stopped working, but whatever it was, I felt like Siri left me in the dark. I tried to text by voice command multiple times, knowing my Uber ride was about to leave me. To make this story short, my Uber ride did indeed leave without me.
Ojen, co-creator of the Problem Collective and an Interaction Designer, has a few interesting ideas about how developers might address inclusivity in voice technology. First, onboarding could be designed with empathy and diversity in mind. This kind of onboarding would let a person indicate whether they have a speech impairment or any other type of disability. After a user indicates a speech impairment, the onboarding could walk them through a series of speech-recognition calibration exercises, which would in turn tune the recognition algorithms to that person’s speech and deliver the outcomes they actually want.
Additionally, the speech-impairment option or setting could house more refined, tailored choices and tweaks to accommodate different types of speech impairments. Speech impairments come in many varieties: slow or fast, slurred, paused, stuttered, and so on. Developers could offer gauges, levels, toggles, and other kinds of controls for adjusting these settings, delivering inclusivity across a diverse range of challenges by letting users tune the experience to their own accommodations and their own liking.
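To make the idea concrete, here is a minimal, platform-agnostic sketch of what such an onboarding profile and its effect on a recognizer might look like. Everything here is hypothetical: the `SpeechProfile` fields, the `recognizer_config` function, and the parameter names are illustrative assumptions, not any real speech API.

```python
from dataclasses import dataclass

@dataclass
class SpeechProfile:
    """Hypothetical per-user accommodation settings captured at onboarding."""
    impaired_speech: bool = False    # user indicated a speech impairment
    slur_tolerance: float = 0.0      # 0.0-1.0: how forgiving matching should be
    pause_timeout_s: float = 1.5     # how long to wait through pauses
    stutter_filtering: bool = False  # collapse repeated syllables before matching

def recognizer_config(profile: SpeechProfile) -> dict:
    """Translate the profile into illustrative recognizer parameters."""
    if not profile.impaired_speech:
        # Today's defaults: short pause cutoff, narrow search.
        return {"pause_timeout_s": 1.5, "beam_width": 8}
    return {
        # Wait longer through pauses so capture isn't cut off mid-sentence.
        "pause_timeout_s": max(profile.pause_timeout_s, 3.0),
        # Widen the search beam when slurred speech makes matches less certain.
        "beam_width": 8 + int(profile.slur_tolerance * 16),
        "collapse_repeats": profile.stutter_filtering,
    }
```

The point of the sketch is simply that one onboarding answer can cascade into many recognizer adjustments, rather than forcing the user to fight defaults tuned for typical speech.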
Lastly, voice-command feedback could be adapted with more obvious indications when the device’s onboarding settings are set to specific needs. Obvious feedback means users don’t have to doubt their intelligent devices just because a chime or pop-up was missed. Feedback is usually standard (a sound, a vibration, or a pop-up), but in this case it should be more prominent, an unmistakable indication that assists users with their needs. Ultimately, reducing doubt and boosting confidence through unmistakable feedback gives users with needs peace of mind, and gives productivity more of a chance.
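A tiny sketch of that feedback idea, again purely illustrative: the function name, the event strings, and the "prominent mode" flag are assumptions, not a real API. The logic is just that when the accessibility setting is on, the device stacks multiple unmistakable signals, and it gives explicit negative feedback instead of silence.

```python
def feedback_events(prominent_mode: bool, heard: bool) -> list:
    """Choose which feedback signals to fire after a voice-command attempt.

    Standard mode keeps today's subtle cues; prominent mode (enabled via the
    accessibility onboarding) stacks audio, haptic, and persistent visual
    signals so a missed chime never leaves the user in the dark.
    """
    if not prominent_mode:
        # Today's behavior: a single soft chime, or nothing at all on failure.
        return ["chime"] if heard else []
    if heard:
        return ["loud_chime", "strong_vibration", "persistent_banner:Heard you"]
    # Explicit negative feedback instead of silence.
    return ["error_tone", "strong_vibration", "persistent_banner:I didn't catch that"]
```

Note that the standard failure case returns nothing, which is exactly the "left in the dark" experience described above; prominent mode replaces that silence with an explicit error signal.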
What we are trying to say is: let Siri, Google, Amazon, Bixby, or whatever the next big thing is adapt more to each user’s voice commands, and, as we mentioned earlier, please adapt to all challenges and disabilities. Why not even have certified third-party apps adopt a more inclusive approach?