A free, open-source online screen-reading program that lets visually impaired students surf the web from any internet-connected device, and a system that enables students with severe physical disabilities to control computers or wheelchairs with their tongues alone, are among the latest developments in assistive technology (AT) aimed at narrowing, if not closing, the gap between students with and without disabilities.
"We are seeing exciting trends that open the door to increased access with greater simplicity for less cost. The emergence of open-source tools and hardware that is easy to use will enable more people with special needs to have access to technology that will improve their quality of life," said Tracy Gray, director of the National Center for Technology Innovation, which advances learning opportunities for persons with disabilities.
New AT developments are giving disabled students anytime, anywhere access to tools that can help them learn from wherever they are, freeing them from having to sit at a particular computer workstation.
Earlier this year, eSchool News reported on software from Kurzweil Technologies and the National Federation of the Blind that turns a multifunctional cell phone into a portable reading machine. (See "Cell phones tackle reading, language barriers.") AbleNet Inc., a company that delivers a wide spectrum of AT solutions, also plans to harness handheld technologies–such as Apple’s iPhone–to create anytime, anywhere AT devices.
"We’re examining the possibility of integrating multiple functions into a single device–much like today’s cell phones are also portable media players and cameras," says Mary Kay Walch, marketing associate for AbleNet.
But for users who can’t afford expensive software or a phone upgrade, a new online service is opening the internet to the visually impaired anytime, anywhere–and it’s free.
Called WebAnywhere, this new web-based tool is "self-voicing," a term indicating that an audio file begins to play on a web browser automatically, letting someone who is blind or visually impaired surf the web from any computer with speakers or headphone connections. Taking advantage of the phenomenon known as "cloud computing," the software processes the text of a web page on an external server (currently housed at the University of Washington [UW] campus) and then sends the audio file to play in the user’s web browser.
WebAnywhere requires only minimal permissions on the client computer, and it starts up quickly without requiring a large download before becoming functional. WebAnywhere can run on many mobile devices as well, regardless of the underlying platform.
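The server-side step described above, turning a page's markup into the plain text a speech synthesizer would read aloud, can be sketched in miniature. This is an illustrative sketch only, not WebAnywhere's actual code: it uses Python's standard `html.parser` for text extraction, and the audio-synthesis step that would follow is assumed, not shown.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect the readable text of a page, skipping script and style blocks."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0  # >0 while inside a script/style element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def page_to_speech_text(html: str) -> str:
    """Return the text a server-side TTS engine would narrate for this page."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

In a WebAnywhere-like service, the string returned here would be fed to a text-to-speech engine on the server, and the resulting audio streamed back to play in the user's browser, so the client needs nothing beyond a browser and speakers.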
Developed by Jeffrey Bigham, a UW doctoral student in computer science and engineering, and funded by the National Science Foundation, the system could serve as a convenient, low-cost solution for visually impaired users on the go, for users unable to afford a full screen reader, and for web developers targeting accessible design.
Bigham says commercial screen readers cost nearly a thousand dollars per installation, putting them out of reach for many of the more than 10 million people in the United States who are blind or visually impaired.
"WebAnywhere demonstrates the potential of software solutions to provide an accessible interface from existing hardware and with no installation," says Bigham. "I think this will be a growing trend."
To use the system, a person simply browses to the WebAnywhere page. The tool narrates both its own interface and the contents of whatever page is loaded; as the user navigates to other pages, those are narrated in turn.
The system was designed in close consultation with blind users and is still in its alpha release–meaning improved features will be added.
"We’re also interested in extending the system to support other groups that could benefit from voice feedback, such as people with low vision and those with certain learning disabilities," says Bigham.
He says he’s been contacted by many organizations and individuals expressing interest, including educators who find it easier to use one free site than obtaining permission to install screen-reading software on a school computer and then ensuring that visually impaired students have access to that single machine.
Bigham says WebAnywhere has been released as open-source software, which means that anyone can contribute to its development, add new features, and run a personalized installation of it.
In May, Bigham was named winner of the Accessible Technology Award for Interface Design for the Imagine Cup, a student programming contest sponsored by Microsoft Corp.
During the summer, Bigham worked independently for Benetech, a not-for-profit company that owns Bookshare.org–which has a collection of more than 36,000 books accessible to the visually disabled.
Bookshare.org recently partnered with Don Johnston Inc., a provider of supplemental instructional tools, to give print-disabled students a free text reader to access electronic books from the Bookshare.org library. (See "Free text reader to help print-disabled students.") But that solution requires software to be installed on a user’s machine.
Another new AT development with important implications for education is a system created by engineers at Georgia Tech that allows persons with disabilities to operate computers and interact with their environments simply by moving their tongues.
"Revolutionary" is what observers at the 2008 conference of the Rehabilitation Engineering and Assistive Technology Society of North America called the Tongue Drive System.
Developed with funding from the National Science Foundation and the Christopher and Dana Reeve Foundation, the system works by attaching a small magnet–the size of a grain of rice–to a person’s tongue by implantation, piercing, or tissue adhesive. This allows tongue motion to direct the movement of a cursor across a computer screen or power a wheelchair across a room.
Movement of the magnetic tracer attached to the tongue is detected by an array of magnetic field sensors mounted on a headset outside the mouth or on an orthodontic brace inside the mouth. The sensor output signals are wirelessly transmitted to a portable computer, which can be carried on the user’s clothing or wheelchair. Because the sensors react in real time, the cursor or wheelchair responds without perceptible delay.
The system can capture a large number of tongue movements, each of which can represent a different user command. A unique set of specific tongue movements can be tailored for each individual based on the user’s abilities, oral anatomy, personal preferences, and lifestyle.
The research team has begun to develop software that can connect the system to a variety of readily available communication tools, such as text generators, speech synthesizers, and readers. Researchers also plan to add control commands, such as switching the system into standby mode to permit the user to eat, sleep, or engage in a conversation while extending battery life.
"This device could revolutionize the field of assistive technologies by helping individuals with severe disabilities, such as those with high-level spinal cord injuries, return to rich, active, independent, and productive lives," said Maysam Ghovanloo, an assistant professor in Georgia Tech’s School of Electrical and Computer Engineering. Ghovanloo developed the system with graduate student Xueliang Huo.
"Tongue movements are also fast, accurate, and do not require much thinking, concentration, or effort."