While the computer mouse and keyboard are useful devices that allow us to interact with our computers, the fine motor movement needed to use them creates a barrier for many users with motor disabilities. Automatic Speech Recognition (ASR) technology makes it possible for voice commands to replace or supplement keyboard and mouse input. However, existing ASR systems require clear speech and/or long training periods during which the system “learns to understand” a user’s particular speech by building models from their input speech samples. With the CanSpeak speech interface research project, we set out to develop an alternative solution.
CanSpeak is a customizable speech interface that associates a very small number (i.e., 1–7) of keywords with computer commands, such as mouse clicks or key presses. The keywords are specifically selected for each user so that they can say them easily. Because CanSpeak uses a small vocabulary chosen to be easy for a given user to pronounce, it works well for users whose speech is affected by a physical or medical condition. CanSpeak is open-source and written in the Java programming language. It sits quietly in the background of the Windows operating system and listens for the keywords the user has previously selected.
When the user says a keyword, CanSpeak springs into action and executes the associated command, such as performing a mouse right-click or pressing the space bar. If the user does not want to use CanSpeak, they can put it in “pause” mode, in which it does not react to keywords until “unpaused”. I started developing CanSpeak several years ago at the CanAssist Research Group at the University of Victoria, BC. There, under the supervision of Professor Nigel Livingston and Leo Spalteholz, I developed the first version of CanSpeak, which was used to support navigating the World Wide Web. From the beginning, our design process had an important participatory component, and we worked with real-world users to get early feedback on the system. Later, I continued the project under the supervision of Professor Melanie Baljko at the GaMaY Lab at York University, Toronto.
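To make the keyword-and-command model concrete, here is a minimal sketch of that dispatch logic in Java, the language CanSpeak is written in. This is an illustrative reconstruction, not CanSpeak’s actual source; the class and method names (`KeywordDispatcher`, `onKeyword`) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of CanSpeak-style dispatch: each user-selected keyword
 * maps to a command (modelled here as a Runnable). While paused, spoken
 * keywords are ignored. In the real system the commands would synthesize
 * input events such as key presses and mouse clicks.
 */
public class KeywordDispatcher {
    private final Map<String, Runnable> commands = new HashMap<>();
    private boolean paused = false;

    /** Associate a keyword (chosen to be easy for the user to say) with a command. */
    public void bind(String keyword, Runnable command) {
        commands.put(keyword.toLowerCase(), command);
    }

    /** "Pause" mode: while paused, keywords have no effect until unpaused. */
    public void setPaused(boolean paused) {
        this.paused = paused;
    }

    /** Called by the speech recognizer when it hears a keyword; returns true if a command ran. */
    public boolean onKeyword(String keyword) {
        if (paused) return false;                           // ignore input while paused
        Runnable command = commands.get(keyword.toLowerCase());
        if (command == null) return false;                  // not one of the user's keywords
        command.run();                                      // e.g. simulate a right-click
        return true;
    }
}
```

A recognizer would simply call `onKeyword` with each recognized word; everything else (the small per-user vocabulary, the pause toggle) lives in this one table.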
When looking for participants to further test the system, I met Connie Economopoulos through the Friedreich’s Ataxia Made Easier (F.A.M.E) network. Connie had a motor disability that made it tiring for her to use the computer keyboard and especially the mouse. She had tried some existing ASR systems before but found them unsatisfactory. She was interested in contributing to assistive technology research and agreed to help us test and develop the system further. Working with Connie, I soon realized that her contributions went beyond user feedback: Connie’s ideas provided deep insights into the underlying design of CanSpeak.
We started to co-design the system, making changes so that it could be used with applications beyond web browsers (e.g., mail and calendar clients). Other changes followed. Prior to our collaboration, CanSpeak could only simulate keyboard key presses. Connie suggested that mouse clicks and double clicks could also be simulated, a suggestion that I gladly incorporated into the system.
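In Java, the kind of mouse-click simulation Connie suggested can be done with the standard `java.awt.Robot` class, which synthesizes native input events. The sketch below shows one plausible shape for it; the helper names (`MouseSimulator`, `buttonMaskFor`) are hypothetical and the actual CanSpeak code may differ.

```java
import java.awt.Robot;
import java.awt.event.InputEvent;

/** Illustrative helper for simulating mouse clicks with java.awt.Robot. */
public class MouseSimulator {

    /** Translate a spoken button name into the corresponding Robot button mask. */
    static int buttonMaskFor(String name) {
        switch (name) {
            case "left":   return InputEvent.BUTTON1_DOWN_MASK;
            case "middle": return InputEvent.BUTTON2_DOWN_MASK;
            case "right":  return InputEvent.BUTTON3_DOWN_MASK;
            default: throw new IllegalArgumentException("unknown button: " + name);
        }
    }

    /** Click the given button at the current pointer position.
     *  (Constructing a Robot requires a graphical environment.) */
    static void click(Robot robot, String name) {
        int mask = buttonMaskFor(name);
        robot.mousePress(mask);
        robot.mouseRelease(mask);
    }

    /** A double click is simply two clicks in quick succession. */
    static void doubleClick(Robot robot, String name) {
        click(robot, name);
        click(robot, name);
    }
}
```

Because `Robot` also offers `keyPress`/`keyRelease`, the same mechanism covers the keyboard simulation CanSpeak started with.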
We described the CanSpeak system and our approach to developing it in detail in a research paper entitled “Co-designing a Speech Interface for People with Dysarthria of Speech”, published in the Journal of Assistive Technologies (volume 9, number 3, 2015).
For me, this project highlights that participatory and co-design approaches are essential to the design, development, and deployment of assistive technology. These approaches allow designers to be informed by the invaluable life experiences of the people who will eventually use their designs.