Anywhere, Anytime (+Anyone) Access
to the Next-Generation WWW
Gregg C. Vanderheiden Ph.D.
Trace R&D Center
College of Engineering, University of Wisconsin - Madison
Madison, Wisconsin 53719
With increasing power, miniaturization, and thin-client/NetPC structures, people will soon be able to access the full network environment wherever they are. Information access points/appliances will be built into the walls, incorporated into our working environments, carried and even worn by us, and used as an integral part of most of our daily activities.
At the same time, as the Internet and information technologies are being woven into the fabric of education, business, and daily life, greater attention is being focused on whether the ordinary person, including those with disabilities, will be able to access and use these systems.
It is interesting that these two seemingly different objectives have similar solutions. If we design systems which are truly ubiquitous and nomadic; that we can use whether we are walking down the hall, driving the car, sitting at our workstation, or sitting in a meeting; that we can use when we're under stress or distracted; and that make it easy for us to locate and use new services -- we will have created systems which are accessible to almost anyone with a physical or sensory disability. We will also have gone a long way toward creating systems that are usable by the large percentage of the population who currently find systems aversive or difficult to learn. In addition, strategies and ideas developed for people with disabilities can provide valuable techniques and insights into creating devices for all nomadic computer users.
1. The Role of Environment, User and Tasks in Defining Next-Generation NII Interfaces
1.1. Range of Environments and Nomadicity
The devices of tomorrow, which might be referred to as TeleTransInfoCom (telecommunications / transaction / information / communication) devices, will be operating in a wide range of environments. Miniaturization, advances in wireless communication, and thin-client architectures are quickly freeing us from the need to be tied to a workstation, or to carry a large device with us, in order to have access to our computing, communication, and information services and functions.
As a result, we will need interfaces that we can use while we're driving a car, sitting in an easy chair, sitting in a library, participating in a meeting, walking down the street, sitting on the beach, walking through a noisy shopping mall, taking a shower, or relaxing in a bathtub, as well as when we're sitting at a desk. The interfaces will also need to be usable in hostile environments - when camping or hiking, in factories, or in shopping malls at Christmas time.
In addition, many of us will need to access our information appliance (or appliances) in very different environments within the same day - and perhaps within the same communication or interaction activity. These different environments will put constraints on the type of physical and sensory input and output techniques that will work (e.g., it is difficult to use a keyboard while walking; it is difficult and dangerous to use visual displays while driving a car; and speech input and output, which work well in a car, may not be usable in a shared environment, in a noisy mall, in the midst of a meeting, or in a library). Systems designed to work across these environments will therefore need flexible input options. The techniques, however, must operate essentially the same conceptually, even though they may be quite different (visual versus aural). Users will not want to master three or four completely different interface paradigms in order to operate their devices in different environments (perhaps even on the same task). There will need to be continuity in the metaphor(s) and the "look and feel" even though a device may be operating entirely visually at one point (for example, in a meeting), and entirely aurally at another (e.g., while driving a car). As noted above, many users will also want to be able to transition from one environment to another, from one device to another (e.g., workstation to hand-held), and from one mode to another (e.g., visual to voice), in the midst of a task.
1.2. Range of Users
If both government and industry are going to build the infrastructure needed for the NII of the future, we will need to have systems that are usable by a much greater percentage of the population than we have today. This is necessary for both economic and political reasons.
The systems will need to be usable and understandable by individuals who today avoid technologies or use them only when they have to. They will have to be operable by people who have difficulty figuring out household appliances. They will also need to address the issues of individuals with literacy problems, as well as individuals with physical, sensory, and cognitive disabilities. These latter groups account for between 15% and 20% of the population, and close to 50% of those who are elderly.
At the same time, however, these interfaces need to be both operable by and efficient for experienced and power users. The same argument that says it is not economically efficient to create special interfaces, or to count on special devices being developed, for the bottom quartile of the population with regard to interface skills is just as valid in arguing that the mass-market interfaces for next-generation NII products need to be usable by the top quartile of the population. Interestingly, many individuals who have disabilities (such as blindness) turn out to be some of the best power users as well, as long as the interfaces stay within their sensory capabilities.
Does Disability Access Happen By Default?
It is also interesting to note that almost all of the issues involved in providing access to people with disabilities will be addressed if we simply address the issues raised by the "range of environments" discussion above. For example:
- When we create interfaces that will work well in noisy environments such as prop-airplanes, construction sites, shopping malls at Christmas, etc., or for people who have to be listening to something else while they use their device, we will have created interfaces that work well for people who cannot hear well or at all.
- When we create interfaces that will work well for people who are driving a car or doing something else where it is not safe to look at the device that they are operating, we will have created interfaces which can be used by people who cannot see.
- As we develop very small pocket and wearable devices where it is hard to use a full-sized keyboard or even a large number of keys, we will have developed techniques that can be used by individuals with some types of physical disabilities.
- When we create interfaces that can be used by someone who is doing something that occupies their hands, we will have systems that can be used by people who can't use their hands.
- When we create interfaces for individuals who are very tired, under a lot of stress, under the influence of drugs (legal or illegal), or simply in the midst of a traumatic event or emergency (and who may have little ability to concentrate or deal with complexity), we will have interfaces which can be used by people who naturally have reduced abilities to concentrate or deal with complexity.
Thus, although there may be residual disability access specifics which need to be covered, the bulk of the disability access issues are addressed automatically through the process of developing environment/situation-independent (modality-independent) interfaces.
1.3. Range of Tasks - No Single Interface
The range of activities that will need to be carried out by these new devices on the next-generation Internet is growing rapidly and will vary widely. As interface devices become smaller and more intelligent, and the Internet itself becomes more highly utilized and intelligent, it is hard to imagine any activity which would not conceivably involve these technologies in some role. Communication and information technologies will begin to resemble electricity in that they will be incorporated into almost every device, every environment, and every activity. Activities will include writing, talking, shopping, virtual travel, learning, authoring, disseminating, selling, voting, working, playing, collaborating, etc. These technologies will also give us new tools for doing things we cannot now do, including visualizing concepts which are not inherently visible; listening to data or information which is not auditory; defining laws of physics in order to better explore real or constructed environments; enhancing our sensory, physical, and cognitive skills; and tackling tasks which we would not otherwise attempt due to the sheer amount of work that would be required.
We will also undoubtedly be seeing these new technologies spawn more and different applications - applications we have not thought of yet because they are not possible without these technologies. We will also probably become as dependent upon these technologies as we are on electricity today.
This great diversity cannot be handled with a single interface or interface approach. We are going to need a variety of interfaces, many of which will be tuned to specific tasks or types of tasks.
2. What Will Be Needed - Requirements of Next-Generation Nomadic Systems
Taking the above requirements together, then, it would appear that in the near future we need to develop a family of interface techniques and strategies which will allow us to build interfaces which are:
- Widely varying
- Modality independent
- Flexible/adaptable
- Straightforward and easy to learn
Widely varying -- To meet the diversity of tasks that will be addressed. Some interfaces will only need to deal with text capture, transmission, and display. Others will need to handle the display, editing, and manipulation of audiovisual materials. Some may involve VR but be basically shop-and-select strategies underneath. Others, such as data visualization and telepresence, may require full immersion.
Modality independent -- The interfaces will need to allow the user to choose the sensory modalities which are appropriate to the environment, situation, or user. Text-based systems will need to allow users to display information visually at some times and auditorially at others - on high-resolution displays when they are available and on smaller, low-resolution displays when that is all that is handy.
Flexible/adaptable -- We will need interfaces which can take advantage of fine motor movements and three-dimensional gestures when a user's situation and/or abilities allow, but which can also be operated using speech, keyboard, or other input techniques when that is all that is practical given the environment the user is in, the activities they're engaged in, or any motor constraints.
Straightforward and easy to learn -- So that as much of the population as possible is able to use them, and so that all users can easily master new functions and capabilities as they evolve and are introduced.
3. Is It Possible to Create "Everyone" Interfaces on Future Appliances?
3.1. No Single Interface Approach Will Work
Trying to create an "everyone" interface sounds wonderful but unobtainable. Trying to design to a least common denominator clearly does not work. If we only use those abilities or input techniques which everyone has or which we could use in any environment, we would have to rule out all visual and auditory displays and probably tactile displays as well. Even thinking about limiting interfaces to only those that we could use while driving a car or in a noisy environment seems to eliminate many of the multimedia techniques and approaches.
3.2. In Addition, Even the Most Flexible Systems Will Be Inaccessible to Some
No matter how flexible an interface you create, there will always be someone with a combination of two or three severe disabilities that together render the interface unusable. There are also applications such as telepresence (for example, a cultural tour of the museums and orchestras of Europe) which cannot be made fully accessible to people who are blind or deaf. Some aspects can be made accessible, and all of it can be made partially accessible to both of these groups, but neither group would be able to have full access to all of the information presented because of its nature.
3.3. But Creating Systems Which Have the Flexibility Needed for Nomadic Use Will Also Create Systems Which Are Very Accessible
A tremendous degree of access to general information and transaction systems, however, can be provided in a fairly straightforward fashion -- much more than is usually assumed. For example, it is possible to allow individuals who are blind to access and use a 3-D virtual shopping center whether it is rendered in VRML or as a high-resolution total-immersion simulation. At the same time, these techniques allow an individual who is driving a car to access and use the same shopping simulation, and allow the simulated shopping center to be more easily accessed and used by artificial intelligence agents as well.
A couple of examples of systems which provide modality-independent, accessible interfaces are helpful here. These are not currently on nomadic systems, but they do demonstrate how a single system can be made to work (at different times) in hands-free, vision-free, or hearing-free fashion.
Example 1 -- A Touchscreen Kiosk
The first example is a touchscreen kiosk interface which has just been unveiled at the Mall of America in Minneapolis and is being incorporated into other multimedia kiosks across the country. This touchscreen kiosk interface includes a set of features developed at the University of Wisconsin called the EZ Access package. The EZ Access features add flexibility to the user interface for those who would ordinarily have difficulty using, or be unable to use, a touchscreen kiosk. They add this flexibility without changing the way that the kiosk looks or behaves for users without disabilities. With the EZ Access features in place, the kiosk can now be used by individuals:
- who have difficulty reading;
- who cannot read at all;
- who have low vision;
- who are completely blind;
- who are hard of hearing;
- who are deaf;
- who cannot speak;
- who have physical disabilities;
- who are completely paralyzed; and
- who are deaf-blind.
Moreover, the techniques can be implemented on a modern multimedia kiosk by adding only a single switch (which appears to the kiosk's computer as the right mouse button) and incorporating the EZ Access features into the standard interface software for the computer. Once the EZ Access features are built into the standard user interface software a company uses to create its kiosks, implementing the techniques on subsequent kiosk designs is simple and straightforward. The kiosk demonstrates that very flexible interfaces are feasible and can be implemented on a public commercial information system. (For discussion of the strategies used, see below.)
These techniques are now being adapted and extended for touchscreen kiosks which browse the web.
[Photo: Curtis Chong, President of the Computer Science Division of the National Federation of the Blind, using Knight-Ridder Newspapers' Jobs kiosk at the Mall of America, as Professor Gregg Vanderheiden looks on.]
Example 2 -- Modality-Independent QuickTime Movies
QuickTime movies on the web are being captioned and described in order to make them accessible to and viewable by people who can't listen to them (because they are deaf, because they cannot turn up the volume in the environment they're in, or because the environment they're in is too noisy) as well as people who can't see them (because they are blind or because their vision is otherwise occupied). These movies take advantage of QuickTime's ability to have multiple audio and time-synched text tracks. What would be thought of as closed captions on a television show are stored in a text track as part of the QuickTime data structure. Users who cannot hear or listen to the sound track can turn on the text track and have the "captions" of the audio track displayed immediately below the QuickTime movie as it plays. Similarly, an alternate audio track can be pulled up which adds a verbal description of what is happening visually on screen, so that someone who cannot see the image can "view" the QuickTime movie.
Click here for an example of captioned and described QuickTime movies prepared by the CPB/WGBH National Center on Accessible Media in Boston.
It is also possible for a user to use the search command built right into QuickTime to search for any occurrence of a particular word in the movie and jump to that instant in the movie. These movies can also be searched by intelligent agent software which can pull a movie, or clips out of a movie, in response to a user's requests.
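The mechanism can be sketched with a hypothetical caption track: a list of (start-time, text) cues that doubles as a searchable seek index. The cue data and function names here are illustrative, not an actual QuickTime API.

```python
# Sketch of searching a time-synced text (caption) track, assuming a
# simple list of (start_seconds, caption_text) cues.

captions = [
    (0.0, "Welcome to the virtual museum tour."),
    (4.5, "This gallery holds the impressionist collection."),
    (9.0, "Monet painted this series at Giverny."),
]

def find_word(track, word):
    """Return the start times of every cue containing the word."""
    word = word.lower()
    return [start for start, text in track if word in text.lower()]

# A player (or an agent) could jump straight to the first match:
hits = find_word(captions, "monet")
jump_to = hits[0] if hits else None
```

Because the search operates on the text track rather than the audio or video, the same index serves a sighted viewer, a blind listener, and a software agent equally well.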
When full-length movies and other programming are prepared in this way, they will be accessible in audio-visual mode (the standard viewing format) as well as viewable in audio-only or video-only format. This will allow them to be 'viewed' in a wide variety of fashions. A person viewing a movie in standard format could (if they have to get up) switch to audio-only mode while they go give Jimmy a drink of water or go pick up milk at the store. They can also switch to video-only mode with the sound turned off if their spouse decides to go to sleep while they want to finish the end of the movie, the in-laws call in the middle of the game, or the vacuum cleaner wipes out the audio.
4. Strategies for Achieving AA+A Interfaces
To achieve these flexible mobile interfaces, users are going to need new interface strategies and new interface architectures that allow them to switch between modalities in a seamless, coherent, and intuitive fashion. They will need to be able to choose from different compatible input/control techniques depending on their situation - and to choose display formats compatible with their environments.
Although research into AA+A interfaces has just begun, a few basic principles and strategies have been defined from disability research and development which have been used to provide modality-independent, user-independent, and hardware-independent interfaces.
These principles/strategies include:
4.1. Modality-Independent or Modality-Redundant Materials
All of the basic information should be stored and available in either modality-independent or modality-redundant form.
Modality-independent refers to information which is stored in a form which is not tied to any particular form of presentation.
For example, ASCII text is not inherently visual, auditory, or tactile. It can be easily presented visually on a visual display or printer. It can just as easily be presented auditorially through a voice synthesizer, or tactually through a dynamic braille display or braille printer.
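A minimal sketch of this idea: the same stored text is handed to whichever presentation routine suits the moment. The renderer names below are hypothetical stand-ins for a display driver, a speech synthesizer, and a braille device.

```python
# One modality-independent message, three presentation routines.
# Each renderer here just returns a tagged string standing in for
# real display, speech-synthesis, or braille-device output.

message = "Gate change: flight 212 now departs from gate B4."

def render_visual(text):
    return f"[SCREEN] {text}"

def render_speech(text):
    return f"[SPEAK] {text}"

def render_braille(text):
    return f"[BRAILLE] {text}"

RENDERERS = {
    "visual": render_visual,
    "speech": render_speech,
    "braille": render_braille,
}

def present(text, modality):
    # The stored text never changes; only the output channel does.
    return RENDERERS[modality](text)
```

The point of the design is that nothing about `message` commits it to any one sense; the commitment happens only at presentation time.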
Modality-redundant refers to information which is stored in multiple modalities.
An example would be a movie which includes a text equivalent of the audio track (e.g., captions) and a description of the video track in audio and electronic text form, so that all (or essentially all) of the information can be presented visually, auditorially, or tactually at the user's request, based upon their needs, preferences, or environmental situation.
4.2. Cross-Modality Presentation Option
The system should have viewers which support the selective modality presentation of the information. That is, it should provide a mechanism for displaying captions, playing alternate audio tracks, etc.
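As a sketch, such a viewer can treat each movie as a bundle of named tracks and choose which to present for the user's current situation; the track names and selection rules below are illustrative.

```python
# A hypothetical movie as a bundle of named tracks. The viewer picks
# which tracks to present based on the user's current situation.

movie_tracks = {
    "video":       "main picture",
    "audio":       "main soundtrack",
    "captions":    "time-synced text of the audio",
    "description": "spoken description of the picture",
}

def select_tracks(can_see=True, can_hear=True):
    """Return the track names to present for a given situation."""
    chosen = []
    if can_see:
        chosen.append("video")
    if can_hear:
        chosen.append("audio")
    if not can_hear:
        chosen.append("captions")      # text stands in for the audio
    if not can_see and can_hear:
        chosen.append("description")   # audio stands in for the picture
    return chosen
```

So `select_tracks(can_see=False)` - say, while driving - yields the soundtrack plus the spoken description, while `select_tracks(can_hear=False)` - say, in a library - yields the picture plus captions.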
4.3. Using a Uni-List Based Architecture Under or as Part of the Interface
By maintaining an updated listing of all of the information currently available to the user, as well as all of the actions or commands available or displayed for the user, it becomes relatively easy to provide great flexibility in the techniques that can be used to operate the device or system.
For example, in a 3-D virtual shopping mall, a database is used to generate the image seen by the user and to react to user movements or choices of objects in the view. If properly constructed, this database would be able to provide a listing of all of the objects in view, as well as information about any actionable objects presented to the user at any point in time. By including verbal (e.g., text) information about the various objects and items, it is possible for individuals to navigate and use this 3-D virtual shopping system in a wide variety of ways, including purely verbally.
- Individuals who are unable to see the screen (because they are driving their car, because their eyes are otherwise occupied, or because they are blind) can have the information and choices presented verbally (or via braille). They can then select items from the list in order to act on them, in much the same way that an individual might reach down and pick up or "click on" an object in the environment.
- Individuals with movement disabilities can have a highlight or sprite step around to the objects, or they could indicate the approximate location and have the items in that location highlighted individually (or other methods for disambiguating could be used) to select the desired item.
- Individuals who are unable to read can touch or select any printed text presented and have it read aloud to them.
- Individuals with low vision (or who left their glasses upstairs) can use the system in the same way as a fully sighted individual. When they are unable to see well enough to identify the objects, they can switch into a mode that lets them touch the objects (without activating them) and have them named or described.
- Individuals who are deaf-blind could use the device in the same fashion as an individual who is blind. Instead of having the information spoken, however, it could be sent to the individual's dynamic braille display.
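A minimal sketch of such a uni-list: every object currently in view carries a text label and an action, so the same scene can be operated visually (by clicking) or verbally (by picking from the list). The scene contents and function names are illustrative.

```python
# A uni-list: one up-to-date registry of everything in view, with a
# text label and an action for each item. The scene itself could be a
# 3-D rendering; the list is what makes it operable non-visually.

scene = [
    {"label": "bakery storefront", "actionable": True,  "action": "enter"},
    {"label": "park bench",        "actionable": False, "action": None},
    {"label": "exit door",         "actionable": True,  "action": "leave mall"},
]

def list_items(actionable_only=False):
    """Text listing of the scene, e.g. for speech or braille output."""
    return [obj["label"] for obj in scene
            if obj["actionable"] or not actionable_only]

def activate(label):
    """Act on an item chosen by name rather than by pointing at it."""
    for obj in scene:
        if obj["label"] == label and obj["actionable"]:
            return obj["action"]
    return None
```

Every access technique above - speech, braille, stepping highlight, touch-and-describe - is just a different front end to the same two operations: enumerate the list, then select from it.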
4.4. Use of a Simple Set of Alternate Selection Techniques
The use of a simple set of alternate selection techniques, which can accommodate the varying physical and sensory abilities that an individual may have due to their environment/situation (e.g., walking, wearing heavy gloves, etc.), can provide coverage for a very wide range of environmental situations and/or personal abilities.
A suggested selection of operating modes might be:
- Standard mode -- the way the device should most effectively behave for individuals who have no restrictions on their abilities (due to task, environment or disability)
- A list mode -- where the user can call up a list of all of the information and action items, and use the list to select items for presentation or action. The mode should not require vision to operate. It may be operated using an analog transducer to allow the individual to move up and down within the list, or a keyboard or arrow keys combined with a confirm button could be used. This mode can be used by individuals who are unable to see or look at a device.
- External list mode -- that makes the list available externally through a hardware or software port and accepts selections through the same port. This mode can be used by individuals who are unable to see and hear the display and therefore must access it from an external auxiliary interface. It can also be used by artificial intelligence agents, which are unable to process visual or auditory information that is not also available in text form.
- Select and confirm mode -- that allows individuals to obtain information about items without activating them (a separate confirm action is used to activate items after they are selected). This mode can be used by individuals with reading difficulties, low vision, or physical movement problems, as well as by individuals in unstable environments or whose movements are awkward due to heavy clothing or other factors.
- Auto-step scanning mode -- that presents the individual items in groups or sequentially for the user to select. (This mode can be used by individuals with severe movement limitations, or movement and visual constraints [e.g., driving a car], when direct selection [e.g., speech input] techniques are not usable.)
- Direct text control techniques -- these include keyboard or speech input.
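One of these modes, auto-step scanning, can be sketched as stepping through the uni-list until the user hits a single switch; the item names are illustrative, and the "switch press" is simulated here by naming the target item.

```python
# Sketch of auto-step scanning: items are highlighted one at a time
# and the user presses one switch when the wanted item is highlighted.

items = ["News", "Directory", "Jobs", "Help"]

def scan_until_pressed(target, max_cycles=3):
    """Step through items in order; return (steps taken, item chosen)."""
    steps = 0
    for _ in range(max_cycles):
        for item in items:
            steps += 1
            if item == target:       # user presses the switch here
                return steps, item
    return steps, None               # user never pressed the switch

steps, chosen = scan_until_pressed("Jobs")
```

A real implementation would advance the highlight on a timer (and speak or enlarge each item as it is highlighted), but the control logic - one switch, sequential presentation - is the whole of what the mode requires from the user.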
4.5. Provision of a Text-Based Auxiliary Interface Port
This text-based auxiliary interface port can take the form of either a software connection point or a hardware connection point such as an infrared port. The purpose of the port is to allow external hardware or software to query the system (to receive the list of information and action objects available) and to make selections from among the available actions. This port would be used in conjunction with the 'external list' mode described above.
The port might, for example, be used to connect an external dynamic braille display for viewing and controlling the device (kiosk, PDA, or tele/trans/info/com appliance). As mentioned above, this port can also allow intelligent agents or devices to have (text-based) access to the information and functions in the device/system.
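The port's protocol could be as simple as two text commands - sketched here as a function rather than real hardware: LIST returns the current action items, SELECT n activates one. The two-verb protocol and the action names are entirely hypothetical.

```python
# A text-only auxiliary interface: external hardware (a braille
# terminal) or agent software sends plain-text commands and gets
# plain-text replies. The minimal LIST/SELECT protocol is illustrative.

actions = ["Show map", "Find store", "Print coupon"]

def port(command):
    """Handle one line of the text protocol; return one line of reply."""
    verb, _, arg = command.partition(" ")
    if verb == "LIST":
        return "; ".join(f"{i}:{a}" for i, a in enumerate(actions))
    if verb == "SELECT" and arg.isdigit() and int(arg) < len(actions):
        return f"OK {actions[int(arg)]}"
    return "ERR unknown command"
```

Because both sides of the exchange are plain text, the same port serves a deaf-blind user's braille terminal and an intelligent agent without either needing to interpret the device's screen or sounds.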
5. Advantages of this Approach on a System Level
Using the modality-independent data storage and serving discussed above has a number of advantages beyond supporting the nomadic and disability access already described. Because it allows access via a number of modalities, it also allows information to be made available today through a number of channels. The same information or service can be accessed via a graphic web browser or via telephone. Displays of different resolutions can be easily supported; even very small, low-resolution displays can be used (in fact, the problems small low-resolution displays pose resemble low-vision issues). Low-bandwidth systems can also take advantage of the text-only access that would be available from such a system, while those with higher bandwidth would not be limited to this format but could take advantage of the full graphic interface their displays and bandwidth allow. As a result, information/service providers could use a common information or service server to handle inquiries from a wide variety of people (and agents) using devices with a wide range of speeds and display technologies. And as technologies evolve, the same serving structure could be used across them.
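A sketch of the single-server idea: the server stores one modality-independent record and formats it per channel on request. The channel names and output formats below are illustrative.

```python
# One stored record, served differently per channel: full markup for a
# graphic browser, bare text for a phone line or low-bandwidth client.

record = {"title": "Store hours", "body": "Open 9am to 9pm daily."}

def serve(channel):
    """Format the same record for the requesting channel."""
    if channel == "graphic":          # high-bandwidth web browser
        return f"<h1>{record['title']}</h1><p>{record['body']}</p>"
    if channel in ("phone", "text"):  # voice line or text-only client
        return f"{record['title']}. {record['body']}"
    raise ValueError(f"unknown channel: {channel}")
```

The provider maintains one record; adding a new channel (or a new display technology) means adding a formatter, not a second copy of the information.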
Through the incorporation of presentation-independent data structures, an available information/command menu, and several easy-to-program selection options, it is possible to create interfaces which begin to approximate the anytime/anywhere/anyone (AAA) interface goal. These types of interfaces have been constructed and are now being used in public information kiosks to provide access to individuals with a wide range of abilities. The same strategies can be incorporated into next-generation tele/trans/info/com devices to provide users with the nomadicity they will be seeking and requiring in their next-generation Internet appliances.
It won't be long before individuals will be looking for systems which will allow them to begin preparing an important communication at their desk, stand up and continue it as they walk to their car, and finish it while driving to the next appointment. Similarly, users will want to be able to move freely between high- and low-bandwidth systems to meet their needs and circumstances. They will want to access their information databases using visual displays, and perhaps advanced data visualization and navigation strategies, while at their desk, but will want to access many of the same databases using auditory-only systems as they walk to their next appointment. They may even wish to access their personal rolodexes or people-databases while engaged in conversations at a social gathering (by using a keypad in their pocket and an earphone in their ear - "What is Mary Jones's husband's name?").
The approaches discussed will also allow these systems to address the equity issues of providing access to those with disabilities and those with lower technology and lower bandwidth devices - and provide support for intelligent (or not-so intelligent) agent software.
The AAA strategies presented here do not provide full cross-environment access to all types of interface or information systems. In particular, as noted above, fully immersive systems which present inherently graphic content (e.g., paintings) or inherently auditory content (e.g., symphonies) will not be fully accessible without use of the primary senses for which the information was prepared (text descriptions are insufficient). However, the majority of today's information and almost all services can be made available through these approaches, and extensions may provide access to even more.
Finally, it is important to note not only that Environment/Situation-Independent interfaces and Disability-Accessible interfaces appear to be closely related, but also that one of the best ways to explore Environment/Situation-Independent nomadic interface strategies may be to explore past and developing strategies for providing cross-disability access to computer and information systems.
For more information on these and related topics, see Designing a More Usable World, a cooperative web site on universal design hosted by the Trace R&D Center at the University of Wisconsin-Madison.
URL's used in this paper
Click here for an example of captioned and described QuickTime movies prepared by the CPB/WGBH National Center on Accessible Media in Boston. = http://www.boston.com:80/wgbh/pages/ncam/captionedmovies.html
Designing a More Usable World = http://trace.wisc.edu/world/