Developed as a cooperative industry/consumer/researcher effort coordinated by the Trace R & D Center, Dept. of Industrial Engineering, University of Wisconsin - Madison, with funding from the Information Technology Foundation (ITF) (formerly ADAPSO)
Gregg C. Vanderheiden Ph.D.
This is the initial DRAFT RELEASE of this document. It is released specifically to facilitate input and comment from consumers, researchers, and industry. Comments, corrections, input, ideas, and issues are solicited. Address comments to: Gregg C. Vanderheiden, Ph.D.
Copyright 1994 Board of Regents University of Wisconsin System
NOTE: To facilitate this document's review and use, you are free to duplicate and disseminate it freely. You may also excerpt ideas and materials from it freely. Acknowledgement is appreciated but not required.
However, some of the charts and concepts in this document are taken from other authors and publications. These are so marked, and separate permission must be sought directly from those authors or publications before use (apart from copying this whole document).
Support for this work has been provided by the Information Technology Foundation (formerly ADAPSO) and by the National Institute for Disability and Rehabilitation Research (NIDRR) of the US Department of Education under Grant #G00850036.
The opinions expressed in this document are those of the author and do not necessarily reflect the opinions of the Information Technology Foundation, the General Services Administration (GSA) or the National Institute for Disability and Rehabilitation Research (NIDRR).
This document would not be possible without the input of a large number of people, programs, and agencies. Listed below are some of the people who have contributed to this document, either through comment or by having their ideas or comments included. The list is meant to acknowledge their contribution to efforts in this area and any of their ideas that are captured in this compilation. Inclusion in this list does not in any way indicate an endorsement of this document or any recommendation(s) in it. Since so many people have contributed, it is easy to have missed logging someone here. If your name is inadvertently missing from the list, please drop us a line so that it may be added to the next release.
[This list is temporarily left out while we try to locate all of the names that contributed - and to be sure everyone would like to (or is allowed to) have their name listed. It will be in the formally distributed version.]
There are many people who need to be able to use standard software programs in their jobs, schools, or homes but are unable to because of the design of the programs or their interfaces. These people, because of accident, illness, congenital condition, or aging, have reduced visual, hearing, physical, or cognitive/language abilities. Current estimates put the number of people with disabilities at over 40 million -- a sizable portion of our population.
Making computers and software accessible to people with disabilities is not just the responsibility of application software developers. In fact, only so much of the problem can be addressed at the application software level. System software manufacturers and third-party disability access software and hardware developers also need to play an important role.
The purpose of these guidelines is to document what application developers can do (or need to do) in order to make their software accessible and usable by people who have disabilities or reduced abilities due to aging.
The guidelines document does this by providing information on the problems faced by people with disabilities in using current software and by documenting ways in which application software can be made more accessible and usable by them. The document also describes the roles of the other players (system software developers, third-party developers, etc.) in this process, along with the types of access modifications available from them.
As you read the guidelines, you will also see that the recommendations can increase the usability of the software to people without disabilities as well. In some cases the recommendations make the programs easier to use. In other cases they increase the ability of the software to work with other software, with scripting utilities, or with alternate input (e.g., voice input) or output (e.g., speech output) technologies.
Basically, making application software more accessible consists of three complementary components:
1. Designing your software so that it is as usable as possible to the greatest number of people - without requiring them to use special adaptive software or hardware. (This is referred to as Direct Accessibility).
2. Designing your software in such a way that it will work with special access features built into the operating system or attached to it by users who require them. (i.e., Compatibility with operating system or third-party access features / software / devices for those people who will not be able to use your software directly.)
3. Making sure that your documentation, training, and customer support systems are accessible.
A brief summary of the guidelines by disability area follows.
People with physical disabilities can have a wide range of abilities and limitations. Some people may have complete paralysis below the waist but may have no disability at all with their upper body. Others may have weakness overall. Some may have very limited range of motion, but may have very fine movement control within that range. Others may have little control of any of their limbs, or may have uncontrolled, sporadic movements which accompany their purposeful movements. Some with arthritis may find that hand and other joint movement is both physically limited and limited by pain.
A physical disability, by itself, does not usually affect a person's ability to perceive information displayed on the computer screen. Access is generally dependent on being able to manipulate the interface.
Therefore, you can increase the accessibility of your software (both directly and via access features/software and hardware):
Many users with hearing impairments need to have some method for adjusting the volume or for coupling the sounds more directly to their hearing aids. Both of these are hardware considerations and can be met with systems having volume controls and headphone or audio jacks. Users who have more severe hearing impairments may also use a combination of these techniques, as well as techniques for people who are deaf. Such techniques generally involve the visual display of auditory information.
Therefore, you can increase the accessibility of your software to users with hearing impairments:
In addition, you should make sure that product support people are reachable via Text Telephones (also called TDD's or Telecommunications Devices for the Deaf).
You can increase the compatibility of your software with access features/software:
People with low vision may have any one of a number of problems with their vision ranging from poor acuity (blurred or fogged vision) to loss of all central vision (only see with edges of their eyes) to tunnel vision (like looking through a tube or soda straw) to loss of vision in different parts of their visual field, as well as other problems (glare, night blindness, etc.).
For people with low vision, a common way to access the information on the screen is to enlarge or otherwise enhance the current area of focus. Given this, you can increase the direct accessibility of your software:
In addition, you can increase the compatibility of your software with low vision access features/software:
Many people who are legally blind have some residual vision. This may vary from just an ability to perceive light to an ability to view things that are magnified. The best design for this group is therefore one that doesn't assume any vision but allows a person to make use of whatever residual vision they may have.
Access by people who are blind is usually accomplished using special screen reading software to access and read the contents of the screen, which is then sent to a voice synthesizer or dynamic braille display.
On computers which use a graphical user interface this is a bit tricky, but there are a number of things that application software developers can do to make it possible for people using screen readers to detect and figure out what is on the screen. These include:
You can also increase the compatibility of your software with screen readers using the following considerations:
Since screen readers can only read text (or give names to separately identifiable icons or tools) it is a good idea to:
Finally, you can make your documentation and training materials more accessible:
This is perhaps one of the most difficult areas to address. Part of the difficulty lies in the tremendous diversity that this category represents. It includes individuals with general processing difficulties (mental retardation, brain injury, etc.), people with very specific types of deficits (short term memory, inability to remember proper names, etc.), learning disabilities, language delays, and more. In addition, the range of impairment within each of the categories can (like all disabilities) vary from minimal to severe, with all points in between. In general, software that is designed to be very user friendly can facilitate access to people with language or cognitive impairments.
Somewhat more specifically, you can increase the accessibility of your software:
In addition, because print disabilities are more common among people with language and cognitive impairments, you can increase the accessibility of your software by ensuring that it is compatible with screen reading software. (See the section titled "For People Who Are Blind," above.)
Finally, you can increase the overall accessibility of your software:
The purpose of these guidelines is to lay out what application developers must do in order to carry out their part in the overall process of making standard computers and software as accessible as possible to people with disabilities or those with reduced abilities due to aging.
This document is divided into two sections plus appendices.
To facilitate ongoing use, the design guidelines are provided up front. In order to understand the rationale behind the recommendations, however, it is strongly recommended that Part II be read in its entirety at least once.
Part II of this document is a Background and Reference Section which provides:
Since much of the role of application software developers is in making sure their software works well with other access strategies and features of the operating system, this background information can be of great value in understanding the rationale behind and properly implementing the recommendations and ideas in Part I.
Most of the work of making computers more accessible is addressed by the computer and operating system manufacturers, or by third party access product manufacturers (see Part II). There are certain things, however, that require the cooperation or compatibility of the application program. Without this cooperation or compatibility, it is often very difficult or impossible for people with disabilities to gain access or use a particular piece of software.
For example, people who are blind can use special software programs to read the text on the screen aloud using a voice synthesizer. Using this capability, many people who are blind use computers every day as programmers, scientists, teachers, secretaries, etc. This screen reading software, however, depends on its ability to detect the text and controls on the screen and to present them via speech or braille to the user. If the application program does not use the standard system tools to draw the text to the screen or to create the controls for the program, it may be impossible for the user who is blind to access or use that program. If use of that program is or becomes a required part of their job, they may no longer be able to do their job.
The overall design guidelines are broken into four sections.
First is general design guidelines. These are basic principles or themes that underlie the detailed guidelines found in the second section. They also provide an understanding of the rationale behind some of the specific guidelines, and help in making implementation decisions.
The second section provides specific design guidelines by topic. These guidelines are grouped around topics such as menus, keyboard, controls, etc., rather than by disability. However, a summary of these both by topic and by disability is provided in tables at the end of Part II.
The third section provides guidelines relating to the documentation, training, and support materials, for users.
The fourth section deals with recommendations for testing and support for disability access developers.
There are a few general themes that you'll notice occurring repeatedly in the specific guidelines in the next section. They are worth noting since they provide the rationale for many of the specific guidelines and can be used to help make decisions when options exist for a given design.
Many software based access programs provide their alternate input and display capabilities by tapping into the system software. These access systems depend on the application program using the system tools provided for input and output. Application programs which do not use the system tools may not be accessible to people using special access software or features in the operating system.
For example, alternate input software may take Morse code in and convert it into alternate or "counterfeit" keystrokes which it then puts into the input queue or buffer just as if they came from the keyboard. Application software that takes its keystrokes from the input buffer will find these alternate keystrokes and treat them just like regular keystrokes. If your application program bypasses the input buffer and takes its keystrokes directly from the input hardware, then the alternate keystrokes will not be seen and the person will not be able to use the program.
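The input-queue mechanism described above can be sketched as a toy model. The function names here are hypothetical; real systems use platform-specific event queues and interrupt handlers.

```python
from collections import deque

input_queue = deque()  # stand-in for the system keyboard input queue


def keyboard_interrupt(char):
    """The physical keyboard posts keystrokes to the system queue."""
    input_queue.append(char)


def morse_input_utility(char):
    """Alternate-input software posts 'counterfeit' keystrokes the same way."""
    input_queue.append(char)


def application_read():
    """An application that reads from the system queue sees keystrokes
    from both sources and cannot tell them apart."""
    return input_queue.popleft() if input_queue else None


keyboard_interrupt('a')    # typed on the physical keyboard
morse_input_utility('b')   # entered via Morse code
assert application_read() == 'a'
assert application_read() == 'b'  # indistinguishable from a real keystroke
```

An application that polls the keyboard hardware directly would never see `'b'`, which is exactly the failure mode described above.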
Similarly, screen reading software for people who are blind works by watching the activity of the text drawing routines in the operating system. By watching commands sent to the operating system telling it to draw text on the screen, the screen reading software can keep track of everything that is written to the screen. If application software writes text directly to the screen, then the screen reading software will not know that it is there.
Alternate mouse or pointer routines likewise depend on the ability to make system and application software think that a person is moving the mouse when in fact they are operating a mouse simulation program.
Wherever possible, follow system standards and style guides. For people with cognitive disabilities, standard behavior makes it easier to predict and understand how things operate and what they mean. For people who are blind and use screen readers to find out what is on the screen, predictable layouts and controls are easier to figure out. Also, adaptive software manufacturers can build techniques into their software to handle standard objects and appearances, but not unique or one-of-a-kind implementations. If you do something different, be sure it is accessible (see "Product Testing and Developer Support" at the end of Guidelines - Part I).
Application programs which provide the ability to access all of the menus by using the keyboard greatly facilitate access by individuals who cannot use the standard mouse. Keyboard access also makes use easier (or possible) for people with poor eye-hand coordination or those who are blind. This access may be provided either by use of the arrow keys to move around through the menu structure, or through use of keyboard equivalents for ALL menu items.
The best way to view people who have disabilities is to think of them simply as individuals with reduced abilities rather than as people without an ability. The reduction in their abilities may vary from slight to severe. The more you can reduce the sensory, physical, or cognitive skills necessary to operate the program, the more people will be able to directly use the program. It also makes it easier for everyone else to use the program. Some examples: using a slightly larger or clearer type, using menus which can be scanned rather than commands which must be memorized, keeping menus short and dialog boxes uncluttered, reducing or eliminating the need for fine motor control.
It is also helpful to provide multiple ways of accomplishing functions in order to adapt to different needs or weaknesses. For example, pull-down menus reduce the cognitive load and make computers easier to operate, while hot keys reduce the motor load and make computers easier and faster to use for individuals with physical disabilities; providing both addresses the needs of both groups and gives all users more options to match their preferences. A second example would be the ability to use either the scroll bar or the keyboard to select a position within a document.
The third general strategy is to provide layering to reduce visual and cognitive complexity. One example of this is programs which provide both short and long forms of their menus. The use of option buttons in dialog boxes or other techniques for nesting complexity would be a second example.
As mentioned above, the most important and easiest mechanism for ensuring greater compatibility with access software is to use the tools and conventions which have been established for the operating system. Most access software works through modifications to the system tools, or bases its operation on assumptions that the standard conventions for the system will be followed. As long as application software programs use the system tools and conventions, there is generally little problem.
When commands are all executed through the menus, access software has very little trouble in both accessing listings of the available commands and activating the commands. Program commands which are issued in other fashions - such as tool bars, special palettes, etc. - present problems. It is difficult to get a listing of all of the commands (for example, to present to somebody who is blind). It is also difficult to directly activate the various commands (for example, by an alternate access routine for someone with a severe physical disability). Where all of the palette and tool bar commands are available via the standard menus, this is not a problem. When these commands, however, are not otherwise available, it is important that access somehow be achieved.
Access to commands in a program consists of four parts. Fortunately, the movement toward inter-application control is making the commands in a program more accessible electronically. Features like balloon help are also useful for providing descriptions of the commands and buttons on the screen. Eventually, it would be nice to be able to:
When these capabilities are all available in a standardized format, it will make the process of developing access programs much simpler and more complete. In the meantime, programs which have most of their commands available for inter-program control may consider making the rest of the program commands available as well.
Providing access to people who have disabilities is in many ways just a natural extension of the open systems approach to software design. Support of the open systems through GOSIP, POSIX, and the applications portability profile facilitates compatibility with special access software and hardware within these environments. With the rapid advance of technologies and operating systems, software that is based upon open systems concepts and which retains a stable or similar interface format across platforms greatly facilitates the efforts of third-party accessibility developers in keeping up and adapting their products.
These design guidelines are grouped by topic as listed below.
Wherever possible, applications should use the standard text-drawing tools included in the system. Most screen access software programs for computers with graphic user interfaces figure out what is on the screen by watching the use of these tools. Even when the tools are used to draw characters in other (nonscreen) locations of memory and then copy the information to the screen, it is still possible for access software to track its use. In this fashion, the access software can keep track of which characters with which attributes appear in each location on the screen without having to attempt to do optical character recognition directly on the bit-mapped fonts on the screen. (Direct OCR of the pixel image of the characters on the screen has been proposed, but is currently not practical. When small-point italic characters are used, they are generally so distorted as to be unrecognizable. In addition, underlining, shading, outlining, and other attributes of the text can make it difficult to recognize. As a result, tracking the use of the text-drawing tools is the only currently available technique.)
Occasionally, applications will draw the text characters in a different portion of memory, and then copy the block of text onto the screen. As mentioned above, as long as the text-drawing routines are used, this does not pose a problem. However, when the applications are done with this text and they want to re-use the area, they will often directly zero the space in memory where they were drawing the characters rather than using the text-drawing tools to erase this area. This makes it more difficult for the screen reading software to keep track of which characters are or are not still drawn in that portion of memory.
Occasionally, applications will use text which has been predrawn and stored in the program as a bit image. Such painted text cannot be read by any current screen reading routines. When this text is purely decorative, as on a start-up screen, it does not pose a problem. If it contains important information or information necessary to use or understand the program, it should be created in real time using the text-drawing tools in order to be accessible by screen reading programs.
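As a rough sketch of why these three guidelines matter, the toy model below separates text drawn and erased through a system tool (which an observing screen reader can track) from raw bit images copied to the screen (which it cannot). All class and method names are hypothetical; real screen readers patch the operating system's actual drawing routines.

```python
class Screen:
    """Toy display whose text-drawing tool notifies observers of every
    draw and erase, mimicking system-level text routines."""

    def __init__(self):
        self.observers = []
        self.pixels = {}   # raw bitmap data, opaque to observers
        self.text_at = {}  # what the drawing tool knows is on screen

    def draw_text(self, pos, text):
        """The system text-drawing tool: observers are notified."""
        self.text_at[pos] = text
        for obs in self.observers:
            obs.on_text(pos, text)

    def erase_text(self, pos):
        """Erasing via the same tool keeps observers in sync."""
        self.text_at.pop(pos, None)
        for obs in self.observers:
            obs.on_erase(pos)

    def blit(self, pos, bitmap):
        """Direct pixel copy (e.g., predrawn 'painted' text): observers
        are never told, so the text is invisible to them."""
        self.pixels[pos] = bitmap


class ScreenReader:
    """Maintains its own model of on-screen text from draw/erase events."""

    def __init__(self):
        self.model = {}

    def on_text(self, pos, text):
        self.model[pos] = text

    def on_erase(self, pos):
        self.model.pop(pos, None)


screen, reader = Screen(), ScreenReader()
screen.observers.append(reader)
screen.draw_text((0, 0), "Save changes?")
screen.blit((1, 0), b"\x00\xff")  # painted text: the reader never sees it
assert reader.model == {(0, 0): "Save changes?"}
```

Zeroing memory directly instead of calling `erase_text` would leave the reader's model stale in the same way the second guideline describes.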
The problems surrounding cursors and pointers generally fall into two categories:
Eventually, some standard mechanism for allowing electronic cursor/pointer location may be devised. In the meantime, the following strategies may be used.
Whether using text-based or graphics-based screens, using the system cursors and pointers wherever possible facilitates their location. Again, most screen reading programs can easily locate the system cursor and pointer. However, if the application software creates its own cursor (by highlighting text, by creating a box, etc.), there is no way for the access software to easily tell where the cursor is.
If the application software does use some special nonsystem cursor, one strategy is to drag the system cursor along with the special cursor. The system cursor can be invisible. It will still be "seen" and tracked by most screen reading (or enlarging) software even though it is not visible on screen to a sighted user. In this fashion, the access software can follow the custom cursor which would otherwise be invisible to it. Even when the focus is indicated by other means (e.g., a heavy black square around a cell on a spreadsheet), the system cursor can be dragged along with the focus. In some systems the cursor rectangle can be defined to be the same size as the cell on the spreadsheet, allowing the screen reader to determine more easily which characters on screen are within the focus area. If there is more than one highlighted area on screen, the system cursor should be taken to whichever would be the primary focus at the present time, given the user's activity.
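The invisible-cursor strategy can be sketched as follows (hypothetical class names; a real implementation would call the platform's cursor-positioning routines):

```python
class SystemCursor:
    """Stand-in for the OS cursor that access software can query."""

    def __init__(self):
        self.position = (0, 0)
        self.visible = True


class SpreadsheetApp:
    """App with a custom focus highlight; it drags the (hidden) system
    cursor along so screen readers can still locate the focused cell."""

    def __init__(self, cursor):
        self.cursor = cursor
        self.cursor.visible = False  # hidden on screen, still trackable
        self.focus_cell = (0, 0)

    def move_focus(self, row, col):
        self.focus_cell = (row, col)
        self.cursor.position = (row, col)  # keep system cursor in sync


cursor = SystemCursor()
app = SpreadsheetApp(cursor)
app.move_focus(3, 5)
# A screen reader querying the system cursor now finds the custom focus:
assert cursor.position == app.focus_cell
```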
Some individuals with low vision are able to use computers without screen enlargement software, either by using the standard font or a slightly larger font. The text cursor (and some mouse cursors), however, sometimes consists of a single thin line which easily disappears from the user's view. As the user enlarges the fonts, the cursor line usually gets taller, but it does not necessarily get any thicker or easier to see. If an application is using a standard system cursor, then the problem should be handled at the system level (since the system should already support an alternate system cursor which would be heavier and easier for individuals to see). If the application software is providing its own cursors, however, then provision of an alternate cursor with a heavier line width should be considered. Alternately, a special control which would make the cursor stand out in some fashion, to make it easy to locate, could be provided. Some strategies for making the cursor easy to locate include:
For individuals who are color blind, the ability to select the colors used for all aspects of the screen is helpful. In general, most displays use light characters on a dark background or dark characters on a light background. As a result, they are generally visible no matter what their color is, simply because of the difference in their intensity. However, the ability to adjust colors to increase contrast is helpful for some individuals.
When using color to encode information, using colors having much different intensities makes the colors easier to differentiate. A light yellow and a dark green, for example, could be distinguished even if the screen were displayed in gray-scale mode because of the difference in their intensity.
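One common way to estimate the intensity of a color is the NTSC gray-scale weighting of its red, green, and blue components. The sketch below uses it to check that an example color pair stays distinguishable without hue information; the specific RGB values are illustrative, not taken from this document.

```python
def intensity(rgb):
    """Approximate perceived brightness using NTSC gray-scale weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b


light_yellow = (255, 255, 128)  # illustrative values
dark_green = (0, 96, 0)

# The large intensity gap keeps the pair distinguishable even in
# gray-scale mode or to a user who cannot perceive the hues.
assert intensity(light_yellow) - intensity(dark_green) > 150
```

A pair like medium red and medium green, by contrast, can have nearly identical intensities and collapse into the same gray.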
If there is a method to set the colors of standard elements from the system control panel, then use those colors for the corresponding elements in the application.
If there are no standard elements, then either provide a way to set the colors within the application or make sure that color blindness will not affect the readability or interpretability of the information displayed in color, by using color redundantly and by making sure that high contrast is maintained.
One mechanism to circumvent problems with color is simply to provide a monochrome or gray-scale option for the program. Individuals having difficulty with colors can then use the program in the monochrome or gray-scale mode.
However, care should be taken to make sure that there is sufficient contrast between text and background. It is fashionable to render some buttons with black text on a dark gray background. This low-contrast combination makes the text more difficult to read, especially for people with low vision.
Some systems plan to have a "High Contrast" mode. In this case low contrast controls and information can be used more freely to dress up the application, as long as high contrast modes are available and used within the application when the "High Contrast" flag is set in the operating system's control panel.
For individuals who have low vision, consistency of screen layout is important. As discussed earlier, individuals with low vision often use screen enlargement software to access the screen. As a result, they are only able to view a small portion of the screen, similar to looking down a paper tube. Similarly, individuals who are blind must use screen reading software to locate items on the screen, searching one letter or word at a time. Thus, programs that have a consistent location for menus, feedback messages, etc., are much easier to use. Where operating systems specify standard procedures and locations for things, it is very helpful for application programs to follow these standards.
Alert messages that pop-up and disappear quickly may be missed by some individuals, depending on their screen access tools. To avoid this problem, alert messages should remain on screen until dismissed by the user.
Some applications have text which appears when the mouse cursor touches some point on the screen. If the mouse cursor moves off of that point, the text disappears. This poses a particular problem for screen access software that moves the mouse pointer along as it reads the text.
A typical scenario of this problem would occur as follows. The user moves the cursor to a point on the screen, causing the text to pop up. The user then tries to read the text, but as the screen reader begins to read, it moves the mouse cursor along with the reading. As soon as the cursor moves to the first word, it has left the original trigger point on the screen, and the text that the user is trying to read disappears.
Individuals with learning disabilities may experience similar problems. For example, there is now a special utility program on the market which allows people with learning disabilities to get reading assistance: the user points the mouse cursor at a word, and the program reads the word aloud. Such a program would be unable to read words in pop-up messages such as those described above. As soon as the user moved the cursor to tell the special utility which word to read, the message would disappear.
At the present time, the balloon help on the Macintosh suffers from such a problem. A mechanism which would allow triggered text to be locked on, so that the individual can move the cursor over the text to read it, would be helpful.
Text-based screen readers default to reading left to right. Text which is positioned in columns within a window or object on screen is often read as if it were continuous text; that is, the text in the first column is read, and then the screen reader jumps to the next column and continues reading. Many screen readers can be programmed to deal with text in columns. Where possible, however, continuous text is easier to deal with -- especially in help files.
If objects on the screen have a definition table, it is important to attach a label to the object. Even if the label does not appear on the screen, this information is available to screen readers. Wherever possible, labeling controls visibly on the screen makes their function clearer and also facilitates access via screen readers.
Icons which are embedded in text and convey meaning (rather than being merely decorative) can be missed by screen readers, resulting in misunderstanding or incomplete comprehension of the information by the people using them.
Some application programs provide their own on-screen indication as to whether the CapsLock, ScrollLock, and NumLock keys have been depressed. In some cases, this feedback is independent of (and therefore sometimes contradictory to) the flags in the system or the status of the lights on the keyboard. This can cause inconsistent feedback to people who are using access programs which check the status of these indicators. Application programs should either use the status flags in the system and keyboard or update them to agree with the program.
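A minimal sketch of the recommended approach, assuming the application can read the system's lock-key flags: derive the on-screen indicator from those flags on every redraw, rather than keeping a private copy that can drift out of agreement. All names here are hypothetical.

```python
# Hypothetical system state; a real program would query the operating
# system's keyboard flags instead of a module-level dict.
system_flags = {"CapsLock": False, "NumLock": True, "ScrollLock": False}


def toggle(key):
    """The system (not the application) owns the lock-key state."""
    system_flags[key] = not system_flags[key]


def status_line():
    """The application's indicator is rebuilt from the system flags on
    every redraw, so it can never contradict the keyboard lights or
    access software reading the same flags."""
    return " ".join(k for k, v in sorted(system_flags.items()) if v)


toggle("CapsLock")
assert status_line() == "CapsLock NumLock"
```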
Making all aspects of the program, including menus, dialogs, palettes, etc., accessible from the keyboard significantly increases accessibility for many users. Although a MouseKeys feature (which allows the user to use the keypad to drive the mouse around the screen) could be used to provide access to toolbars, for example, this is a very slow and ineffective mechanism. Even if the individual is using MouseKeys for drawing, rapid access to the tools via the keyboard can greatly facilitate the use of the application software by individuals with disabilities (and other users as well). Access by allowing users to "walk" the menus using the arrow keys as well as by keystroke equivalents can greatly increase the efficiency and ease of use for many users both with and without disabilities.
Again, use common conventions, system standards, and style guidelines wherever possible when designing keyboard access to all aspects of the program.
One problem faced by individuals with disabilities is the inability to hold down two keys simultaneously. "StickyKey" programs which provide electronic latching for the Shift, Control, Alternate, Option, and Command keys on the different computer platforms already exist, and are being made available by operating system manufacturers. As a result, it is not necessary to build this type of feature into your application program. In fact, this is an example of an accessibility feature which is best handled at the system level. Moreover, implementing it in an application can cause a conflict with, and therefore interfere with, the feature in the system software. See Part II for a complete listing and description of common keyboard access features in new operating systems.
Screen reading software for people who are blind uses the control names and types to provide information about the control to the user (who cannot see the shape of the control).
Screen readers used by people who are blind can easily detect and identify these types of controls on the screen.
If you want a custom look, and the operating system has an "owner-draw" style (such as MS Windows), use it instead of a custom control. This type of control will appear to the blind user's screen reader as a standard control. Be sure to fill in the text label for the control (even if you don't use it to label the control on screen). The screen readers use this name to describe the control to the blind user.
If you define a CUSTOM control which behaves similarly to a standard control, use the name of the standard control as part of the name of your custom control. The screen reader can use the name of the control to pass information on to the user which will help the user understand the general type of the custom control. For example you might name a custom button "SpecialButton".
"Fake" buttons or "hot spots" on pictures make it difficult or impossible for a screen reader to tell that there is a button there. Strategies and approaches for dealing with this are being developed - but have not yet been standardized.
As discussed earlier, most access software works by attaching itself to the operating system. When application software uses standard system menu tools, access software is able to read the list of available commands and can provide the individual with the ability to directly maneuver through and activate the commands.
Menu items that are not text-based and are not accompanied by text are difficult for screen reading programs to access.
Application programs which provide the ability to access all of the menus by using the keyboard greatly facilitate access by individuals who cannot use the standard pointing device. This access may be provided either by use of the arrow keys to move around through the menu structure, or through use of keyboard equivalents for the menu items.
Application programs which provide multiple mechanisms for accessing commands better accommodate the differing needs of users. Access via menus and layered dialogs provide easier access for individuals with lower cognitive abilities. Direct access with key combinations provides better access for individuals with physical impairments and for individuals who are blind.
As with menus, application programs which provide direct access to palettes and toolbars greatly facilitate access by individuals with different disabilities. If the toolbar is only a shortcut method for accessing items in the menu, and the menu is accessible, then direct access to the toolbar is not necessary. When the toolbar commands are not available in the menu, however, direct access might be provided, or the items might be offered redundantly in an optional menu.
Screen access software for individuals who are blind works by monitoring the operating system's screen drawing routines. When individual icons are drawn separately, they can be individually identified, named, and accessed. If a toolbar or palette is drawn as a single bit image, the individual tools within that palette are not individually identifiable or accessible using standard techniques.
Helpful for both individuals with physical disabilities and with visual impairments.
Again, when naming buttons and controls within a dialog box (whether the name appears on the button/control on screen or not), be sure to use clear, logical, descriptive names which match the words printed on the screen near them. Screen reading software uses these names to help the person who is blind decipher the information within the dialog box.
In some operating systems, buttons within a dialog box are not normally accessible directly from the keyboard. Access utilities exist which allow individuals to tab through the buttons until they reach the desired button, after which they can select it from the keyboard. The order in which the tab moves through the buttons is dependent upon the order in which the buttons are defined in the dialog definition tables. If the button definitions are not in logical order, the tabbing key will jump the highlight in what appears to be a random pattern around the dialog, highlighting the buttons in their definition order. Although this does not prevent access, it is disorienting.
If the caption is not a part of the button itself, use some standardized spatial relationship so that the location of a label for a button (or a button for a label) is predictable for individuals using screen readers to explore/use a dialog box.
Again, the best solution is to provide direct keyboard access to all aspects of the dialog, including buttons, scroll windows, text entry fields, and pop-up menus.
Many users with low vision can use an application without a screen enlargement program, provided the application allows users to adjust the font size. In fact, most users will appreciate being able to adjust the font size as a way to reduce eyestrain.
The font size in the on-line help should change in concert with the adjustments to the font size made by the user in the application.
As discussed in "Cursors, Pointers, Highlighting and Other Focus Techniques" above, allowing for the substitution of larger or heavier lined cursors and pointers makes it easier for many users to track cursor and pointer movements, and maintain their attention on the application's current focus.
Lines are often drawn using a default width of a single pixel. Lines of this size can be hard to see in a variety of viewing environments and on different display hardware. Additionally, users with low vision may be unable to see single-pixel-width lines under any circumstances. Therefore, make sure that you use the system's tools for determining monitor resolution parameters, and be aware that future operating systems may allow users to adjust line thickness to suit their needs. (For example, in Windows you can call GetSystemMetrics with the SM_CXBORDER and SM_CYBORDER constants to determine the proper line width for the user's monitor and resolution - and later, their preference.)
There are many uses for sound in an application. Some of them are:
In Uses 1 and 2, a person who cannot hear the sounds is not at a disadvantage. In Use 3, and particularly in Use 4, however, visual presentation of the information should be provided as an option for people who cannot hear - or are in a noisy environment where the sound would be lost or not intelligible - or in environments where the sound may be turned off (e.g., library or long row of workstations)
A general solution which solves the access problems for both individuals who are hard of hearing and individuals who are deaf is the provision of all auditory information in a visual form as well. Auditory warning beeps can be accompanied by a visual indicator. Beeps and other sounds would be described in text, both to differentiate the sounds and to allow access by individuals who are deaf-blind (and would be using a braille screen reading program to access all of the information from the computer). Speech output (in cases where it is important for understanding and using the program) can be accompanied by text on the screen (either as a normal part of the program, or in a caption box). This presentation of information visually can be programmed to happen at all times, or can be invoked if a special operating system flag is set indicating that the user would like all auditory information presented visually. If the system software provides a "ShowSounds" switch, the setting of this switch could then trigger the visual display feature.
For beeps or other sounds which are not normally accompanied by a visual indication, application software should check for a system "ShowSounds" switch. At the present time, the "ShowSounds" switch is not a standard feature. In the future, however, it should be appearing as a standard system switch which can be accessed by software. Users who are in noisy environments or who cannot hear well would then be able to set the "ShowSounds" switch. Application programs could then check that switch and provide a visual indication to accompany any auditory sounds.
NOTE: What kind of visual indication accompanies the sound is entirely up to the application. In some cases where the sound carries a rather urgent cue or warning, you might want the whole screen to flash. In other cases the window or its title bar might flash. Also, see "Ensure that Visual Cues Are Noticeable" below.
NOTE: In addition to providing a "ShowSounds" switch as a part of the operating system, manufacturers of operating systems are also being encouraged to build captioning tools directly into the operating system to facilitate the implementation of closed captioning by application programs.
When providing a visual cue to what would otherwise be an auditory alert, it is important to ensure that the cue is sufficient to attract the user's attention when viewed out of the corner of the eye. An individual who is looking at the keyboard and typing, for example, is not going to notice a small icon that appears and disappears momentarily in the corner of the display. A flashing menu bar or area at the bottom of the screen will stand a better chance of attracting attention (flashing should be 2 hertz or below).
As programs incorporate the use of synthetic or recorded speech, closed captioning should be considered. Again, in those cases where the information being presented via speech is already presented in text on the screen, there is no need to present the information visually in any other fashion. In those cases where information is being presented via speech which is not otherwise displayed on the screen, application programs might check for the "ShowSounds" switch. If the switch is set, a small box containing the text being spoken could be displayed on screen. Music or other sounds being provided for adornment would not have to be presented in caption form, if they are not important to the operation of the program. Where the tune or sound is important to the operation of the program, then some description to that effect could appear in the caption box.
For some users, simply increasing the volume of the sounds is enough to provide access to all auditory information presented by the application. Auditory output should not have a fixed volume but should be adjustable using the control panel or other user-settable sound features in the operating system.
In other instances and in other environments, users may want to eliminate any sound output at all. For instance, while working in a library, auditory output can be irritating to the other patrons.
Although the use of sound can be a problem for people with hearing impairments (if a visual counterpart is not available), the use of sound in programs can be very helpful for users who are blind and in some applications for people with cognitive disabilities as well.
Programs requiring time-dependent responses in less than 5-10 seconds should have provision for the user to adjust the time over a wide range, or have a non-time-dependent alternative method.
These should remain on screen until the user consciously acknowledges or dismisses them.
Flickering screens can trigger seizures in people with photosensitive epilepsy. The worst frequency is around 20 hertz; at frequencies above 60 hertz and below 2 hertz, sensitivity is greatly reduced. Sensitivity increases with the brightness and the area of the screen that is flickering.
In order to facilitate access to programs by individuals using their access software, it is useful to have all user-settable parameters both readable and settable via external software. This might be accomplished in a number of fashions, including providing an optional menu which could be enabled (since the access software would already have access to the menus). This technique would allow the software both to easily get a list of the externally available commands and to execute them. Commands can be provided for reading and for setting parameters, either directly or via dialogs.
Although this is true in any environment, it is especially true in character-based programs. Dual-column text, pop-up menus, etc., can be problematic and require custom programming of the interface for each application program. Even then, the results are mixed. The screen reader tends to read from left to right across the page, mixing columns and drop-down menus as if it were all running text.
Where possible, use extended ASCII character graphics rather than standard ASCII characters (such as "***") for drawing lines, making boxes, etc. When screen readers hit such text, they may read it as "asterisk, asterisk, asterisk," unnecessarily slowing down the process. A particular nuisance is text buried in a string of asterisks. In order to read the text, the individual must sit while the screen reader reads off the punctuation or other characters. Screen reading programs can be programmed to skip nonalphabetic characters; however, this can cause the individual to miss important information on the screen.
A similar problem appears when alphabetic characters are used to draw boxes. Using 1's (the digit one) or l's (lower case L) to draw a vertical line is obvious to somebody looking at the overall screen. When reading a single line of text using a screen reader, however, these do not look like a vertical line but are read aloud as the characters "One" or "L."
Software that presents information in a color graphics mode often uses different strategies to highlight or select text. Providing an optional monochrome mode in your software greatly facilitates the operation of access software, particularly cursor finding.
A common strategy for selecting items from a list is to use the arrow keys to move a highlighted bar up and down the list. A highlighted bar is much harder for screen reading software to detect than is a character. If a small character is also moved up and down the list (along with the highlight), or if the characters on the selected line are changed in some other way, access by screen reading programs is greatly facilitated. An example is shown below.
> Item 2
An important component to the accessibility of any software is the ability of the user to access the documentation. Documentation can be made available in a number of formats, including standard print, large print, braille, audio tape, and electronic form. The most universal of these is the electronic format. In order to be really accessible for people who are blind, the information should be available as an ASCII text file. This would involve converting photographs and diagrams into descriptions, and identifying other techniques for providing emphasis to particular words other than the use of different fonts and highlights. Once a file is available in a pure ASCII form, it can be easily accessed using screen readers as well as translated and printed out as braille or recorded in audio tape format.
Although individuals who are blind will find an ASCII text file to be the most useful form, individuals who have severe physical disabilities may find that an electronic copy of your manual which also provides pictures and diagrams is the most useful form. The electronic form of the manual allows people with physical disabilities access that they would not normally have, because of the difficulty of manipulating books. Having a full graphic version of the manual would provide them with the maximum amount of information.
Someday, when "electronic paper" is common, having the manual in both ASCII and "electronic paper" would be optimal. In the meantime, the ASCII version is the most universally accessible format.
Even the design of standard print manuals can be done to better facilitate their direct use by individuals with visual and other impairments. Some things which can be done to improve the accessibility of standard print documents are:
One form of electronic documentation which is becoming increasingly more prevalent is on-line help. As long as the help is presented using standard screen-writing routines, access should be no problem. If pictures are used within the on-line help, then text should accompany the picture and provide enough information that the picture or diagram is a redundant visual aid.
Translating documentation from its standard print form into an ASCII text file which is effectively formatted can take some effort. However, there are programs set up in the United States which can provide technical assistance in the translation process.
Some packaging techniques make it difficult or impossible for people with manipulation problems to open the package. Where products are sealed for warranty or virus protection, some means for easily opening the package should be provided.
In addition to the printed and on-line documentation, many programs have videotapes or other multi-media training materials available for them. In addition, some companies provide training courses, either in the direct use of their product or for programmers or other professionals wishing to use or extend their product.
Having access to the training materials for a program can be as important as, or more important than, access to the basic documentation. As software becomes more and more complicated, the ability to access and use the training materials becomes essential. Videotapes with closed (or open) captions, provision of equivalent training materials which do not require the ability to see, and the use of descriptive video (where the actions taking place on the screen are described as a narrative on a separate audio track) are examples of some strategies which can be used here. Providing more accessible training does not mean that videotapes cannot be used simply because some users are blind, however. It could mean that the same information provided in the videotapes is also available in a form that does not require sight.
In addition to the training materials themselves, it is also important that training sessions be as accessible as possible. Some strategies for doing this include holding the training sessions in facilities which meet ADA accessibility standards, and may include the provision of interpreting or other services to meet the needs of specific attendees.
Another key to having software which is more accessible is the provision of specialized customer support. Often, an application program will seem to be incompatible with various adaptive hardware or software products, when in fact it will work with them if certain parameters are properly set. In other cases, it may be incompatible with one particular adaptation, but be easily accessed using others. Such information is important to users who have disabilities, and generally cannot be obtained by calling the standard customer support lines. In fact, a number of companies have built-in accessibility features in their products which are unknown to their own customer support teams.
While it would be nice to have all of the customer support personnel fully aware of all types of disabilities, adaptations, and compatibility issues, this is unrealistic. There is simply too much specialized information. Even with a specialized hot line, application companies may find that they identify different individuals with expertise on how to use or adapt their software for users with different disabilities.
A two-tiered approach to support for users with disabilities is therefore suggested. First is the inclusion of basic disability access issues and information across all of the customer support personnel. This would include both a TDD (telecommunication device for the deaf) line and a voice line. It would also include an awareness of the efforts by the company to make their products more accessible, and the existence of the specialized customer support line. All customers, including those with disabilities, could then use the standard support lines to handle standard product use questions. When specialized questions arose, such as compatibility of the product with special disability access utilities, the calls could be forwarded to a disability/technical support team.
The second tier would be the creation of a customer support line specifically for individuals who have disabilities. If your company provides an electronic customer assistance mechanism, a special forum or section for disability access should also be provided. The purpose of these mechanisms would be to provide specialized and in-depth information and support regarding disability access and compatibility issues or fixes for different access utilities.
For some small companies, it may be difficult to develop a depth of expertise in each of the disability areas. In that case, rather than trying to hire someone with expertise in the different disability areas as well as expertise in the technical support aspects, the company might contract with an outside agency that does have this expertise and provide it with training on the company's software and technical support information.
The existence of the special customer support, as well as the phone numbers, should be prominently listed in the documentation. Specific services and disability access features of products should also be plainly documented in manuals.
It is difficult to ensure that new application software will not cause problems for any of the many different types of special access and adaptive hardware and software. Often, the only way to tell whether a product or new features in a product will cause problems is to actually try it out with the different access products. As a first pass, companies may have people with disabilities on site who can test new programs for general usability. However, there are literally hundreds of different adaptive aids. As a result, it is difficult for each application software manufacturer to have all of the adaptations on-site to try with their new software or new features. Two alternate strategies are therefore suggested.
The first strategy is to include individuals from the various adaptive hardware manufacturers and software developers as a part of the early beta testing of a product. This will take a concerted effort on the part of application software developers, since these adaptive product manufacturers themselves do not represent a large enough market to normally qualify for early beta release of application software programs.
A second strategy would be to contract with a third-party testing lab that is familiar with a) the different types of hardware and software adaptations available and b) the problems usually encountered by these access products with application software. This would involve a financial investment on the part of the application software developer. On the other hand, it may provide for a better mechanism to get a relatively high confidence evaluation of the compatibility of the application software. It would also allow testing with a range of different hardware and software adaptations without requiring the application manufacturers to release their software to a large number of different manufacturers. The early testing of software (pre-beta) is important, since problems with accessibility are likely to occur at a level that is difficult to address at the beta stages of an application. A major difficulty with this approach is that there are no known testing labs with the broad cross-sectional base of information that would be needed to carry out such testing at the present time.
The best approach at this time therefore appears to be involving the developers of the adaptive hardware and software as early as possible in the testing of a product or update.
Another key area in ensuring the accessibility of application software is support for companies developing disability access software. Again, these companies are usually small enough that they do not qualify for the types of support generally provided to other, larger developers and operating system manufacturers. As a result, it is often difficult or impossible for them to qualify for access to technical support in the same manner as other larger third-party manufacturers. In addition, the types of problems they have sometimes differ. It is often therefore helpful to have individuals within the technical support team who specialize in these issues, and who can work with developers to both a) identify strategies for those developers to effectively access your application, and b) identify ways in which your application or future editions of it can be made more user-friendly.
This latter point is essential in the development of new versions of application programs. As mentioned above, discovering an incompatibility with access software at the beta testing stage is too late. Typically, the types of inconsistencies that occur with access software occur at a rather fundamental architectural or structural level in the application. Thus, it is usually too late by the time the beta test occurs to do anything about accessibility problems. On the other hand, software is usually not available for testing until it is substantially completed. Ensuring the future accessibility of software products is therefore highly dependent upon interchange and communication between the software development team at the application manufacturer and the third-party access product developers. Through this interaction, as well as through documents such as this, application software developers can begin to identify the kinds of things that do or might cause accessibility problems. They can then get in contact with the third-party assistive device manufacturers and explore ways to circumvent these problems.
There are many reasons for a company to consider making their applications more accessible. They include:
There are between thirty and forty million people in the United States who have disabilities which affect their ability to use computers and application software. At the same time, computers are becoming integral parts of our living, educational and working environments. As a result, there is a growing concern that if computers, operating systems and application software are not accessible to this fairly large portion of our population, they will be unable to participate effectively in these environments.
The population is steadily growing older. As we age, most of us lose some of our physical, sensory, or mental abilities. By age 55, 25% of us will experience functional limitations (see Figure 1). By age 65, this percentage will rise to 50%. For the growing number of us who will live to be 70 years old or older, 75% will experience functional impairments. In fifty years, it is estimated that more than a third of the population will be over age 55 and a sixth will be over 70 (based on US Congress Office of Technology Assessment OTA-BA-264).
Curbcuts were put into sidewalks at street corners for people in wheelchairs, but for every one person in a wheelchair who uses these curbcuts, there are ten individuals with bicycles, carts, baby strollers, etc., who use them. Similarly, the adaptations to software for people with disabilities that make the software easier to see on the screen, operate from the keyboard, understand, etc., also make the software easier to use quickly, efficiently, and without errors for individuals who do not have disabilities. One example is MouseKeys, a feature that was added to operating systems to allow people who cannot use a mouse to move the mouse cursor from the keyboard. This feature is also commonly used by people doing graphics layout to make fine adjustments in graphic positioning, because it allows precise, pixel-by-pixel movement from the keyboard which is not possible using the standard mouse.
Some of the principal strategies for making application software more compatible with disability access software include:
These also make the program more compatible with other nondisability-related system extensions and inter-application macro and scripting utilities.
Among the legislative efforts is Section 508 of the Rehabilitation Act. This mandates the General Services Administration of the U.S. Government to work with the National Institute on Disability and Rehabilitation Research to develop guidelines for the purchase of computers and other electronic office equipment in order to ensure that the equipment purchased by the Government is accessible to its employees with disabilities. The text of Section 508 is provided in Figure 2. A copy of the 508-related regulations and guidelines is included in Appendix D. At the present time, the GSA Guidelines describe features that would be desirable in computers and operating systems. Discussions are underway, however, regarding an extension of the GSA Guidelines to include application software, to make sure that applications cooperate with access features being built into the operating systems as well as lending themselves to access and use by people with disabilities. This White Paper reflects these discussions, and provides industry with a mechanism for participating in the exploration and discussion of these topics as well. Review, comment, and feedback on this White Paper and subsequent cooperative Industry Design Guidelines can help provide guidance to others in industry interested in this area. Also, because interested people within the government receive and review this document, it can act as a means of communication and input to government processes and deliberations on this topic as well.
The recently enacted Americans with Disabilities Act requires that companies make their work environments more accessible to individuals with disabilities. As a result, not only the Federal government but the public sector and private companies will be increasingly interested in software application programs which are more accessible and work well with existing and future special access features and accessories.
Sect. 508. Electronic Equipment Accessibility
(a) (1) The Secretary, through the National Institute on Disability and Rehabilitation Research and the Administrator of General Services, in consultation with the electronics industry, shall develop and establish guidelines for electronic office equipment accessibility designed to insure that handicapped individuals may use electronic office equipment with or without special peripherals.
(2) The guidelines established pursuant to paragraph (1) shall be applicable with respect to electronic equipment, whether purchased or leased.
(3) The initial guidelines shall be established not later than October 1, 1987, and shall be periodically revised as technologies advance or change.
(b) Beginning after September 30, 1988, the Administrator of General Services shall adopt guidelines for electronic equipment accessibility established under subsection (a) for Federal procurement of electronic equipment. Each agency shall comply with the guidelines adopted under this subsection.
(c) For the purpose of this section, the term special peripherals means a special needs aid that provides access to electronic equipment that is otherwise inaccessible to a handicapped individual.
The bulk of all accessibility design features cost little or nothing once they are included in the basic design of the product. For software products the difference in manufacturing costs is often zero. In exchange, the products are usually easier for everyone to use and the products are applicable to a wider market.
The ability of people with disabilities to work, receive an education, or even access information and other services from their homes, is rapidly becoming dependent upon their ability to access and use computers. If computers and application programs are not accessible, then individuals with disabilities will not be able to participate in education, employment, or daily living. It isn't appropriate to design software that cuts off that many people from such an important area when more accessible software costs no more to manufacture and is generally faster, easier, less fatiguing, and less error-prone to use for everyone.
If properly done, making software more accessible:
First, it is important to understand that there are many different types and severities of impairment which lead to disabilities. Some types of impairment are:
Within each of these major types, there are many variations and degrees of impairment. Each of these may present different barriers and need to be addressed with different strategies.
The following pages provide a brief overview of the major types of impairments, along with a brief discussion of the implications of these impairments on computer use.
PLEASE NOTE: It is not up to the application software developer/manufacturer to directly meet all of these needs. The next section will discuss the role of application program manufacturers versus the role of others in providing accessibility. It is important, however, for everyone to understand the basic problems faced by people with different types or degrees of impairment and their resulting disabilities.
Visual impairment represents a continuum, from very poor vision, to people who can see light but no shapes, to people who have no perception of light at all. However, for general discussion it is useful to think of this population as representing two broad groups: those with low vision and those who are legally blind. The National Society for the Prevention of Blindness estimates that there are 11 million people in the U.S. who have visual impairments. This includes both people with low vision and those who are blind.
Low vision is defined as vision between 20/40 and 20/200 after correction. (20/200 means that a person must be at 20 feet to see what someone with normal 20/20 vision can see from 200 feet.) There are 9-10 million people with low vision. Some can read print if it is large and held close (or viewed through a magnifier). Others can only use their sight to detect large shapes, colors, or contrasts. There are approximately 1.2 million people with severe visual impairments who are not legally blind.
A person is termed legally blind when their visual acuity (sharpness of vision) is 20/200 or worse after correction, or when their field of vision is less than 20 degrees. There are approximately half a million people in the U.S. who are legally blind.
Blindness can be present at birth, acquired through illness or accident, or associated with aging (glaucoma, cataracts, macular degeneration, optic nerve atrophy, diabetic retinopathy). According to the American Foundation for the Blind, almost 1 person in every 1,000 under age 45 has a visual impairment of some type, while 1 in every 13 individuals older than 65 has a visual impairment which cannot be corrected with glasses. With current demographic trends toward a larger proportion of elderly, the prevalence of visual impairments will certainly increase.
Functional limitations of people with visual impairments include increased sensitivity to glare, viewing the world as through a yellowed lens, no central vision, no peripheral vision, loss of visual acuity or focus, poor night vision, reduced color distinction ability or a general hazing of all vision. Those who are legally blind may still retain some perception of shape and contrast or of light vs. dark (the ability to locate a light source), or they may be totally blind (having no awareness of environmental light).
As would be expected, people with visual impairments have the greatest problem with information displayed on the screen. However, mandatory use of a mouse or other pointing device requiring eye-hand coordination is also a problem. Special programs exist to provide individuals with the ability to magnify the screen image. There are also programs which allow the individual to have the content of the screen read aloud. However, application programs sometimes do things in ways that make it difficult or impossible for these special programs to work well or at all. Individuals with low vision may also miss messages which pop-up at different points on the screen, since their attention is usually focused on only a small area of the screen at any time.
Written operating instructions and other documentation may also be inaccessible if they are not provided in electronic or alternate form (e.g., audio tape or braille) and even then people may have difficulty accessing graphic or pictorial information included in documentation. Because many people with visual impairments still have some visual capability, many of them can read with the assistance of magnifiers, bright lighting (for printed text), and glare reducers. Many are helped immensely by use of larger lettering, sans-serif typefaces, and high contrast coloring.
Key coping strategies for those who are blind or have severe visual impairments include the use of braille, large raised lettering, raised line drawings, and audio tape. Note, however, that braille is preferred by only about 10% of people who are blind (normally those blind from early in life). Those who use braille, however, usually have strong preferences for it, especially for shorter documents. Raised lettering must be large and is therefore better for providing simple labels on raised line drawings than for extensive text.
Hearing impairments are among the most prevalent chronic disabilities in the U.S. More than 15 million people have some form of hearing impairment. Almost two million are deaf.
Hearing impairments are classified by degree, based on the average hearing level (in decibels) required to hear sounds at various frequencies (pitches), and by the ability to understand speech. The loudness of normal conversation is usually 40-60 decibels. A person is considered deaf when sound must reach at least 90 decibels (5-10 times louder than normal speech) to be heard, and when even amplified speech cannot be understood with a hearing aid.
Hearing impairments can be found in all age groups, but loss of hearing acuity is part of the natural aging process. Of those aged 65 to 74, 23% have hearing impairments, while almost 40% of those over age 75 do. The number of individuals with hearing impairments will increase with the aging of the population and with increasing noise exposure.
Hearing impairment may be sensorineural or conductive. Sensorineural hearing loss involves damage to the nerves used in hearing (i.e., the problem is in transfer from ear to brain). Causes include aging, exposure to noise, trauma, infection, tumors and other disease. Conductive hearing loss is caused by damage to the ear canal and the mechanical parts of the middle ear. Causes include birth defects, trauma, foreign bodies or tumors.
The functional limitations faced by people with hearing impairment fall into four categories.
First, individuals may not be able to hear auditory information if it is not presented loudly enough as compared to the background noise. The ability to control volume or to plug headphones or other devices into a headphone jack are the primary strategies for dealing with this problem.
Second, individuals who are deaf or who have more severe hearing impairments will not receive any information which is presented only in auditory form. Beeps which are accompanied by an on-screen visual indication prevent this problem. They also avoid the problem of the sound output being too quiet, since the auditory information is also provided visually. With newer systems which include voice output, presentation of the text on-screen or the ability to turn on captions may be necessary.
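The pairing of auditory and visual output described above can be sketched as a simple notification routine. This is an illustrative sketch only, not from the original document; the function and event names are hypothetical.

```python
# Sketch: every audible alert is paired with a visual equivalent, so a
# user who cannot hear the beep (or whose sound is off) still receives
# the notification. The "BEEP"/"FLASH" events are hypothetical stand-ins
# for a system tone and an on-screen cue (flashed title bar, message, etc.).

def alert(message, beep=True, visual=True, log=None):
    """Deliver a notification through sound and/or a visible cue."""
    events = []
    if beep:
        events.append("BEEP")                 # e.g., system speaker tone
    if visual:
        events.append(f"FLASH: {message}")    # e.g., flash and show text
    if log is not None:
        log.extend(events)
    return events

# A deaf user (or one in a noisy room) still sees the message:
alert("Mail has arrived", beep=False, visual=True)
```

The key design point is that the visual channel is never optional when the auditory one is used; the beep is a redundant cue rather than the sole carrier of the information.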
Third, as voice input becomes more prevalent, it too will present a problem for many deaf individuals. While many have some residual speech, which they work to maintain, those who are deaf from birth or a very early age often are unable to learn to speak or have very poor speech. Thus, alternatives to voice input will be necessary for these individuals to access products which require voice input.
Fourth, many individuals who are deaf communicate primarily through ASL (American Sign Language). It should be noted, however, that this is a completely different language from English. Thus, deaf people who primarily use ASL may understand English only as a second language (and may therefore not be as proficient with English as native speakers).
Because individuals who are deaf cannot hear and sometimes cannot speak, they have difficulty using telephone support services. Special telecommunication devices for the deaf (TDDs) have been developed, however, which allow individuals to communicate over the phone using text and a modem. In order for these users to access phone-in support services, software companies would need to have TDD-equipped support personnel. Individuals who are deaf are also unable to take advantage of support systems that use touch-tone input and recorded voice output.
Physical impairments vary greatly. They include paralysis (complete or partial), severe weakness, interference with control, missing limbs, and speech impairment. Causes include cerebral palsy, spinal cord injury, traumatic head injury (includes stroke), injuries or diseases resulting in amputation, or various diseases such as arthritis, ALS (Lou Gehrig's Disease), multiple sclerosis or muscular dystrophy.
Cerebral Palsy (CP). CP is defined as damage to the motor areas of the brain prior to brain maturity (in most cases, this occurs before, during or shortly after birth). There are 400,000-700,000 individuals in the U.S. with CP. The most common types are spastic, where the muscles are tense and contracted and voluntary movement is very difficult, and athetoid, where there is constant, uncontrolled motion. Most cases are combinations of the two types.
Spinal Cord Injury. Spinal cord injury can result in paralysis or paresis (weakening). The extent of paralysis/paresis and the parts of the body affected are determined by how high or low on the spine the damage occurs and the type of damage to the cord. Quadriplegia involves all four limbs and is caused by injury to the cervical (upper) region of the spine; paraplegia involves only the lower extremities. There are 150,000 to 175,000 people with spinal cord injuries in the U.S.
Head Injury and Stroke. The term "head injury" is used to describe a huge array of injuries, including concussion, brain stem injury, closed head injury, cerebral hemorrhage, depressed skull fracture, foreign object (e.g., bullet), anoxia, and post-operative infections. Like spinal cord injuries, head injuries and strokes often result in paralysis and paresis, but there can be a variety of other effects as well. Currently about 1,000,000 Americans (1 in 250) suffer from effects of head injuries, and over 2,000,000 people in the U.S. have suffered strokes. However, many of these do not have permanent or severe disabilities.
Arthritis. Arthritis is defined as pain in joints, usually reducing range of motion and causing weakness. Rheumatoid arthritis is a chronic syndrome. Osteoarthritis is a degenerative joint disease. About 1% of the U.S. population (or 2.4 million people) are affected by arthritis.
ALS (Lou Gehrig's Disease). ALS is a fatal degenerative disease of the central nervous system characterized by slowly progressive paralysis of the voluntary muscles. The major symptom is progressive muscle weakness involving the limbs, trunk, breathing muscles, throat and tongue, leading to partial paralysis and severe speech difficulties. This is not a rare disease. About 2 out of 125,000 people will develop ALS each year. It strikes mostly those between age 40 and 70, and men twice as often as women.
Multiple Sclerosis (MS). MS is defined as a progressive disease of the central nervous system characterized by the destruction of the insulating material covering nerve fibers. The problems these individuals experience include poor muscle control, weakness and fatigue, difficulty walking, talking, seeing, sensing or grasping objects. It is estimated that about 300,000 in the U.S. suffer from this disease.
Muscular Dystrophy (MD). MD is a hereditary, progressive condition resulting in muscular weakness and loss of control, contractions and difficulty in walking and breathing. About 10,000 new cases are reported per year.
Problems faced by individuals with physical impairments include poor muscle control, weakness and fatigue, difficulty talking, seeing, sensing or grasping (due to pain or weakness), difficulty reaching things, and difficulty doing complex or compound manipulations (push and turn). Individuals with spinal cord injuries may be unable to use their limbs and may use "mouthsticks" for most manipulations.
Individuals with movement impairments may have difficulty with programs which require a response in a specified period of time, especially if it is short. Individuals with impaired movement or who must use a mouthstick or headstick have difficulty in using pointing devices. Programs which require the use of a mouse or pointing devices and have no option for keyboard control of the program present problems. Individuals who can use only one hand or who use a headstick or mouthstick to operate the keyboard have difficulty pressing two keys at the same time.
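The two-keys-at-once problem noted above is commonly solved by a "latching modifier" feature (often called sticky keys), implemented at the operating system level. The sketch below is a minimal illustration of the idea under assumed names; it is not drawn from the document or from any particular operating system.

```python
# Minimal "sticky keys" sketch: a modifier key pressed on its own is
# latched and applied to the next non-modifier keystroke, so Shift+S can
# be typed as two sequential single-key presses (e.g., with a mouthstick).

MODIFIERS = {"Shift", "Ctrl", "Alt"}

class StickyKeys:
    def __init__(self):
        self.latched = set()

    def press(self, key):
        """Return the effective keystroke, or None while latching."""
        if key in MODIFIERS:
            self.latched.add(key)      # latch the modifier for the next key
            return None
        combo = "+".join(sorted(self.latched) + [key])
        self.latched.clear()           # latched modifiers release after use
        return combo

sk = StickyKeys()
sk.press("Shift")        # latches the modifier; nothing is emitted yet
print(sk.press("s"))     # prints: Shift+s
```

Because the latch clears after one use, ordinary one-handed typing is unaffected once the modified keystroke is delivered.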
This category contains a wide range of impairments including impairments of thinking, memory, language, learning and perception. Causes include birth defects, head injuries, stroke, diseases and aging-related conditions. Some commonly known types and causes of cognitive/language impairment are:
Mental Retardation. A person is considered mentally retarded if they have an IQ below 70 (average IQ is 100) and if they have difficulty functioning independently. An estimated 1% of Americans (2.4 million) are mentally retarded. For most, the cause is unknown, although infections, Down's Syndrome, premature birth, birth trauma, or lack of oxygen may all cause retardation. Those considered mildly retarded (80-85%) have an IQ between 55 and 69 and achieve 4th to 7th grade levels. They usually function well in the community and can hold down semi-skilled and unskilled jobs.
Language and Learning Disabilities. This is a general term referring to a wide range of disorders manifested by significant difficulties in listening, speaking, reading, writing, reasoning, and calculating/integrating perceptual/cognitive information. These disorders are presumed to be due to central nervous system dysfunction. It is estimated that over 43% of children in special education programs in the U.S. (1.9 million) have some type of language and learning disability.
Head Injury and Stroke. This group includes individuals with closed and open head injuries as well as those suffering strokes. These injuries usually result in physical impairments, cognitive impairments or both. There are approximately 400,000 to 600,000 people with head injuries and approximately 2 million people who have suffered a stroke.
Alzheimer's Disease. This is a degenerative disease that leads to progressive intellectual decline, confusion and disorientation. 5% of Americans over 65 have Alzheimer's; 20% of those above 80 have it.
Dementia. This is a brain disease that results in the progressive loss of mental functions, often beginning with memory, learning, attention and judgment deficits. The underlying cause is obstruction of blood flow to the brain. Some kinds of dementia are curable, while others are not. 5% of the population over 65 has severe dementia, with 10% having mild or moderate impairment. 30% of those over 85 are affected.
Cognitive impairments are varied, but may be categorized as memory, perception, problem-solving, and conceptualizing disabilities. Memory problems include difficulty getting information from short-term storage (20-40 seconds, 5-10 items), long term and remote memory. This includes difficulty recognizing and retrieving information. Perception problems include difficulty taking in, attending to, and discriminating sensory information. Difficulties in problem solving include recognizing the problem, identifying, choosing and implementing solutions, and evaluation of outcome. Conceptual difficulties can include problems in sequencing, generalizing previously learned information, categorizing, cause and effect, abstract concepts, comprehension and skill development. Language impairments can cause difficulty in comprehension and/or expression of written and/or spoken language. Problems can occur both in the use of software and in understanding manuals written at too high a technical/comprehension level.
Approximately 1 million U.S. workers (age 18-69) report impaired abilities to read, reason and/or understand spoken or written information as a result of a chronic disabling condition.
There are few assistive devices for people with cognitive impairments. Simple cuing aids or memory aids are sometimes used. As a rule, these individuals benefit from use of simple displays, low language loading, use of patterns, simple, obvious sequences and cued sequences.
A number of injuries or conditions can result in seizure disorders. Seizures can vary from momentary loss of attention to grand mal seizures which result in the severe loss of motor control and awareness.
Seizures can be triggered in people with photosensitive epilepsy by rapidly flashing light, particularly in the 10-25 Hz range. This can be caused by screen refresh or by rapidly flashing different images on the screen. The brighter the flash, and the larger the portion of the screen involved, the more significant the visual stimulation. Somewhere between 1 in 25,000 and 1 in 10,000 people in the US have seizure disorders.
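A developer could screen animation or flashing content against the risk band mentioned above. The following is an illustrative sketch, assuming flash events are available as timestamps; the band limits come from the 10-25 Hz figure in the text, and the function names are hypothetical.

```python
# Sketch: flag flash sequences whose estimated rate falls in the
# photosensitive-seizure risk band (roughly 10-25 Hz, per the text).

def flash_rate_hz(timestamps):
    """Estimate flash frequency from a sorted list of flash times (seconds)."""
    if len(timestamps) < 2:
        return 0.0
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span if span > 0 else float("inf")

def in_risk_band(timestamps, low=10.0, high=25.0):
    """True if the flash rate lies inside the risk band."""
    return low <= flash_rate_hz(timestamps) <= high

# 16 flashes over one second = 15 intervals/second = 15 Hz: in the band.
times = [i / 15 for i in range(16)]
print(in_risk_band(times))   # prints: True
```

A real check would also weigh brightness and the flashing area, as the paragraph notes; this sketch covers only the rate.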
It is all too common to find that whatever caused a single type of impairment also caused others. This is particularly true where disease or trauma is severe, or in the case of impairments caused by aging.
Diabetes, which can cause blindness, also often causes loss of sensation in the fingers. Unfortunately, this makes braille or raised lettering impossible to read. Cerebral palsy is accompanied by visual impairments in 40% of cases, by hearing and language disorders in 20% of cases, and by cognitive impairments in 60% of cases. Individuals who have hearing impairments caused by aging also often have visual impairments.
As discussed in Part II, making computers and software more accessible is not the sole responsibility of application software vendors. Many aspects of computer access are best addressed by others, such as hardware vendors, operating system manufacturers, or third-party access product manufacturers. However, there are some components of accessibility that can only be addressed at the application software level.
To understand the role of application software manufacturers, it is important to examine the roles of all parties involved in making computers accessible.
Each party has its own unique role, and all must work together to achieve computer accessibility:
As much as possible, the computer platform itself should be made directly accessible by people with disabilities. The computer "platform" is defined here as:
The hardware and operating system components may be produced by a single vendor or by separate companies. These components work together, however, to give the computer its basic operating characteristics and requirements. There are some accessibility features that can only be implemented at this level; implementing them there benefits all application software manufacturers by removing the need to build these features over again in each application program. It also benefits users by providing a standard user interface and standard operating characteristics across programs.
In many cases, particularly for individuals with mild or moderate disabilities, slight changes in the hardware or operating systems can make the computers directly and completely accessible without any further modification. Once these modifications are incorporated into the design of the hardware or software there is little or no additional manufacturing cost. This type of accessibility is called "direct accessibility," since it allows individuals with disabilities to use the computers directly as they come "from the box." This is the most cost-effective type of accessibility, and the most desirable, since it allows individuals who have disabilities to access and use the computers in the same fashion as anyone else. It also allows them to access and use the computers as they find them in educational, employment, or public environments without having to bring along and install special access software or hardware in order to use them (which is often difficult or impossible in public and some other environments).
A second role for standard hardware and operating system manufacturers is to design the computer platform to facilitate the connection and use of special access tools (software and hardware) for individuals with more severe impairments where direct access is not possible (see next section).
Although direct accessibility of computers is by far the best situation, the type or severity of some impairments precludes the ability to use computers "off the shelf" (even if the computers have been designed to include as many direct access features as practical). In these cases, special interfaces, software programs, or other accessories are required in order to allow the individuals to access and use the computers. The role of third-party or "special access" manufacturers is to develop the special hardware and software tools, and to make them available to people who require them. As noted above, standard hardware and operating system manufacturers can greatly facilitate this process by designing their hardware and operating system platforms to be compatible with the connection and use of such special access tools.
While the use of special access products to access a computer is not as desirable as being able to directly access and use the computers, there is a need for and advantages to using third-party access products for some people, especially those with more severe disabilities. On one hand, individuals who have to rely on third-party access devices do not have the ability to just approach and use computers in libraries, laboratories, or employment situations. They must carry their special interfaces with them and be able to connect them to or load them onto these computers before they can use the computers. On the other hand, third-party products which are targeted toward a particular disability can sometimes provide more powerful and efficient interfaces than could be efficiently built into a standard hardware/operating system. It is also sometimes necessary to incorporate additional hardware into the interface (e.g., a dynamic braille display) which would be impractical to incorporate into a standard computer's design. Third-party access products are therefore important components in system accessibility, and the only practical approach for some individuals with severe or multiple impairments.
Thus, both direct accessibility (wherever possible) and third-party access products (where built-in accessibility is not possible or is not efficient enough) are necessary to meet the broad range of needs of people with mild to severe disabilities.
The first two parties discussed (the standard platform manufacturers and the third-party special access manufacturers) can work together to overcome most of the access problems faced by people with disabilities. However, access to the computer and its operating system does not guarantee full access to application software, and running application programs is the only use of a computer for most people. Some aspects of the computer's behavior are completely in the control of the application software. Therefore, effective access to computers includes cooperation by the developers of application software. There are three general ways that manufacturers of application software can improve access to and usability of their programs.
Not all information needed to operate the program is available at the system level. Cooperation by the application program is therefore necessary in order for standard or special access features to be effective.
For example, most programs running on a graphical operating system use the system tools to display their menus. Access features can thus be designed which attach themselves to the system tools and provide access to all of these menus. Occasionally, however, an application will create a custom menu or palette without using the standard system menu tools, or by using them for only part of the menu function. In this case, the special access features attached to the operating system would be unable to determine what the items in the special palette were in order to present them to the individual with the disability (e.g., if they were blind) and to allow that individual to choose from among them.
In some cases, the standard access features built into the operating system may allow the person with a disability to use a program, but only in some round-about or inefficient manner. A slight change or option in the application program could substantially increase the efficiency with which individuals with disabilities could operate the program. Since people with disabilities must compete with their able-bodied colleagues, the ability to operate the program efficiently can be important to maintaining productivity comparable to that of those colleagues.
For example, dialog boxes and many interactive programs may have numerous buttons in them. An individual who can tab between the various buttons and fields would have access to the dialog box. However, this type of operation would be much slower than that of other users, who could simply click on the desired buttons to access them rather than having to tab around. Having the ability to type a command key to activate any button directly would greatly increase the speed with which a person with a disability (and anyone else whose hands were on the keyboard) could access and use these programs.
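The dialog-box example above can be sketched in a few lines: each button carries an accelerator key, so one keystroke activates it directly instead of tabbing through every control. This is an illustrative sketch with hypothetical names, not the API of any particular toolkit.

```python
# Sketch: direct keyboard access to dialog buttons. A key press that
# matches a button's accelerator activates the button immediately;
# unmatched keys fall through to the usual tabbing behavior.

class Dialog:
    def __init__(self):
        self.buttons = {}          # accelerator key -> (label, action)

    def add_button(self, label, key, action):
        self.buttons[key.lower()] = (label, action)

    def key_pressed(self, key):
        """Activate the matching button directly, if any."""
        entry = self.buttons.get(key.lower())
        if entry:
            _label, action = entry
            return action()
        return None                # no accelerator: fall back to tabbing

dlg = Dialog()
dlg.add_button("Save", "s", lambda: "saved")
dlg.add_button("Cancel", "c", lambda: "cancelled")
print(dlg.key_pressed("S"))   # prints: saved  (one keystroke, no tabbing)
```

Note that this benefits keyboard-oriented able-bodied users as much as users with disabilities, which is typical of direct-access features.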
Application programs can unknowingly include features which cause standard or third-party access features to break, or simply not work with that program or with particular functions of it. Understanding what accessibility features exist and how they function can help to prevent this problem. It also makes the program generally more robust and compatible with other nondisability-related third-party add-on programs.
For example, using nonstandard techniques to read the keyboard, write to the screen, or show a cursor may be done for performance or other reasons, but could circumvent or break access software. Several major application programs now do this.
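The screen-writing case can be made concrete with a small sketch. A screen reader typically hooks the system's text-output call; text routed through that call is captured and can be spoken, while text painted directly as pixels bypasses the hook entirely. The API names below are hypothetical stand-ins, not any real system's calls.

```python
# Sketch: why nonstandard screen writes defeat screen readers.
# "captured" plays the role of the screen reader's intercepted text.

captured = []

def draw_text(s):
    """Standard system text call - access software can hook this."""
    captured.append(s)            # the screen reader sees (and speaks) it

def blit_pixels(bitmap):
    """Direct pixel write - bypasses the text hook entirely."""
    pass                          # the screen reader sees nothing

draw_text("File saved.")          # reaches the blind user
blit_pixels("rendered_text.bmp")  # silent: this text is lost to the user
print(captured)   # prints: ['File saved.']
```

The lesson is not that direct drawing is forbidden, but that text which carries information should also pass through the standard text path where access software can observe it.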
In many cases, the best means for providing access to persons with disabilities is through the use of third-party access devices or software. However, changes or improvements to a program's design can cause incompatibility problems for these third-party access products, leaving a person who depends on them without access to the computer or to your software. Testing your software for compatibility with access software and hardware can prevent this problem. Providing advance copies of the software to third-party manufacturers for testing can also help, if it is done early enough in the design cycle to allow design changes that overcome incompatibilities.
For example, screen reading software programs used by people who are blind can be made partially or completely ineffective depending on how new features, menubars, toolbars, etc., are implemented.
In addition to the three major players, there is sometimes a fourth player, the systems integrator, particularly in federal acquisitions. Since systems integrators do not usually create software or hardware, their role has not been well explored. However, for federal acquisitions, system integrators are often the individuals who select the hardware and software offered, and the individuals who provide the follow-up support. Their role in overall accessibility for offerings to the federal government is therefore substantial. Key areas where systems integrators can have a major effect are:
As previously discussed, the role of the systems integrator is not well understood, and the points discussed here are therefore preliminary in nature. However, it is clear that systems integrators will play a key role in determining the actual access that federal employees with disabilities will have to their computers and information processing environments. It is also clear that systems integrators have a major impact on which software packages are offered to the federal government for most of their packaged buys. Finally, it is clear that systems integrators cannot themselves make the hardware and software in their packages more accessible or more compatible with special access products from third-party vendors. They will have to rely upon selecting those hardware, operating system, and application software products which are most accessible and compatible with third-party access systems.
NOTE: These guidelines are directed toward the accessibility issues as they relate to application software developers. There is a separate document, titled Considerations in the Design of Computers and Operating Systems to Increase Their Accessibility to Persons with Disabilities, which has been developed by and for hardware and operating system manufacturers. At present, there is no document tailored to the needs of systems integrators. Because of their key role in federal acquisitions, and the fact that they face different problems and questions in making the systems they offer more accessible, a separate tailored document should be developed to address their needs.
In the previous section, the roles of standard platform manufacturers and third-party special access manufacturers were described. The purpose of this section is to provide an overview of the access work of these two groups and how application software manufacturers can take advantage of this work to solve most of the access issues for their programs. A thorough understanding of this section is necessary in order for application software manufacturers to avoid duplicating effort or solving problems which are best solved at these other levels. It is also important for application software manufacturers to understand these strategies in order to be compatible with them and to understand the aspects of accessibility that are and are not covered by them.
For the purposes of this discussion, the solution strategies which are provided by the standard platform manufacturers and by third-party manufacturers are grouped together and presented by impairment area.
The access strategies used by people with visual impairments fall into two major categories: enlargement of the image on the screen, and presentation of visual information in some other form (e.g., speech or braille). People with low vision generally use both strategies, while people who are completely blind must rely on the second approach.
(Please note: The strategies described below and on the following pages in this section are already provided (or will be) by computer manufacturers, operating systems, or third-party assistive device manufacturers. They are not features that application software designers need to add to their software; only things that they need to be aware of and to facilitate rather than obstruct.)
For individuals with mild to moderate visual impairments, the ability to enlarge only the fonts used on the screen may be all that is necessary. Within text-only documents, using "large type" is very straightforward, since most graphics-based programs allow the individual to select the font size used on screen. Utilities also exist which allow one to use a slightly larger font in the system menus. This concept could be expanded to include larger cursors, scroll bars, etc.
Simply enlarging the font used on the screen, however, only works for individuals needing moderate character enlargement. For individuals with low vision, the image on the screen must often be magnified 4-16 times. Also, the entire image on the screen needs to be enlarged, not just the alphanumeric characters. To do this, some type of overall screen enlargement utility or program is required. These utilities or programs create a virtual image which is much bigger than the actual monitor screen. The monitor screen itself then becomes a "viewport" which can be moved about over the virtual screen. Using this technique, the individual can only see a small portion of the overall screen at a time. (As a result, the effect is similar to a normally sighted person trying to use a computer while looking down a cardboard tube such as that found in a roll of paper towels.) Such screen enlargement utilities allow the individual to enlarge the text as much as they like (up to one character filling the entire screen). They usually also have a mechanism built in to allow the "viewport" to automatically follow the movement of the mouse or cursors as the individual types.
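The viewport-following behavior just described reduces to a small piece of geometry: center the visible window on the point of interest, then clamp it to the screen edges. The sketch below illustrates this under assumed coordinates; it is not taken from any actual enlargement utility.

```python
# Sketch: positioning the magnified "viewport" over the virtual screen.
# At zoom Z, only a (width/Z) x (height/Z) region is visible; the
# viewport tracks a focus point (mouse or text cursor) and is clamped
# so it never slides past the edges of the screen image.

def viewport_origin(focus_x, focus_y, screen_w, screen_h, zoom):
    """Top-left corner of the viewport, centered on the focus point."""
    view_w, view_h = screen_w / zoom, screen_h / zoom   # visible region
    x = min(max(focus_x - view_w / 2, 0), screen_w - view_w)
    y = min(max(focus_y - view_h / 2, 0), screen_h - view_h)
    return x, y

# A 640x480 screen at 4x zoom shows a 160x120 window; following the
# cursor at (600, 400) pins the viewport against the right edge.
print(viewport_origin(600, 400, 640, 480, 4))   # prints: (480.0, 340.0)
```

The same calculation is what lets the viewport "jump" to a message that appears elsewhere on the screen, provided the access software can learn where the event occurred.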
Application developers should note that it is important for screen reading or enlargement access software to be able to identify events which occur in different areas of the screen. This is necessary so that the access software can automatically move the "viewport" to that point on the screen, so the user does not miss important events occurring outside the viewport. It is also important to maintain a consistent screen layout. The user will then know where to find things such as prompts, status indicators, menus, etc.
For individuals who cannot read the image on the screen even when enlarged, some mechanism for presenting the information in nonvisual form is necessary. The two most common forms for doing this are speech and braille.
Screen reading programs allow the individual to move about on the screen and have any text read aloud to them. In graphical environments with multiple windows, screen readers must also be able to allow the individual to navigate around between windows and among the different elements of a window (scroll bars, zoom boxes, window sizing controls, etc.). They must also provide the individual with a means to deal with icons and other graphic information. For stereotypic images which always appear the same, such as scroll bars and icons, names or labels can be given to each object or icon. When the icons are encountered, their names or labels can be read aloud.
Application programs can facilitate or inhibit screen reading programs' ability to do this, however. For example, a tool bar which is drawn as a single graphic element cannot be easily deciphered by an access program. A tool bar where each tool is drawn using a separate draw command can be easily dissected, and the individual tool images extracted and named.
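The contrast just drawn can be made concrete with a small sketch. The classes below are illustrative, not any real toolkit's API: the point is that a tool bar built from separately drawn, named elements gives access software something to enumerate and speak, where a single pre-rendered bitmap does not.

```python
# Each tool is a separate object with its own draw command and a name
# a screen reader can announce; a tool bar drawn as one opaque graphic
# would offer no such structure.

class Tool:
    def __init__(self, name, icon_id):
        self.name = name        # label a screen reader can speak
        self.icon_id = icon_id  # handle for the individual draw command

class ToolBar:
    def __init__(self, tools):
        self.tools = tools

    def draw(self, draw_icon):
        # One draw command per tool: access software hooking these
        # calls can associate each image with its name.
        for tool in self.tools:
            draw_icon(tool.icon_id)

    def spoken_labels(self):
        """What a screen reader could announce for this tool bar."""
        return [tool.name for tool in self.tools]

bar = ToolBar([Tool("Open", 1), Tool("Save", 2), Tool("Print", 3)])
```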
In addition to speech output, braille can also be used. Since braille is essentially a tactile alphabet, it can be used instead of speech to present the information to the user. Special displays of 20 or 40 braille cells with electromechanical moving pins can provide refreshable or dynamic braille displays that can be continually changed by the computer. As a result, anything that is printed in alphanumeric characters or which can be described in speech can be presented on a dynamic braille display. This is an effective and preferred means for accessing text by some people who are blind. For individuals who are deaf-blind, and can neither read the text on the screen nor hear spoken output, braille is essential for access. It should be noted, however, that the majority of individuals who are legally blind do not know braille (especially those who became blind later in life). Thus, it is a powerful technique, but cannot be used as the only way to provide access for people who are blind.
In addition to problems in accessing the screen, individuals who are blind also have difficulty in using input devices which require vision. For example, some keyboards have electronically locking keys, such as the Num Lock, Scroll Lock, and Caps Lock keys on an IBM PC or compatible. Small lights are provided on the keyboard to allow people who can see to determine whether these keys are in their locked or unlocked mode. Individuals who are blind are unable to determine the status of these keys unless there is some visual indication provided on the screen where their screenreaders can access it. Some application programs provide this. In addition, some software utilities and most screen reading software provide some auditory cues to allow the individual who is blind to know whether these particular keys are in locked or unlocked mode. It is important for application software to use the status flags in the system and ensure that these flags and lights are set to agree with the program's use of these keys.
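The recommendation above, that flags, lights, and any auditory cues stay in agreement, can be sketched as follows. The interfaces here (`LockKey`, `toggle`, the rising/falling tone convention) are hypothetical stand-ins for whatever the real operating system and access utilities provide.

```python
# Illustrative sketch: toggling a lock key updates the system status
# flag, the keyboard LED, and an auditory cue together, so a user who
# is blind can track the key's state by sound while sighted users
# track it by the light.

class LockKey:
    def __init__(self, name):
        self.name = name
        self.locked = False    # the system status flag
        self.light_on = False  # the keyboard LED
        self.cues = []         # auditory cues emitted so far

    def toggle(self):
        self.locked = not self.locked
        self.light_on = self.locked   # keep the light in sync with the flag
        # Rising tone for "locked", falling tone for "unlocked".
        self.cues.append("rising" if self.locked else "falling")
        return self.locked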
A more serious problem for individuals who are blind is applications which require use of the mouse. The mouse by its very nature requires some type of eye-hand coordination. For individuals who are blind, this type of eye-hand coordination is impossible. Some blind access software packages provide mechanisms which automatically move the mouse cursor about the screen as they read or move between window elements. Another strategy which can provide some access to mouse-like operations is the use of the tactile mouse discussed below. Also, strategies for using a touchscreen are being explored. For these access techniques to work within the application windows themselves, however, they may require some cooperation from the application program.
Screen reading programs (available for Macintosh, Microsoft Windows and OS/2) are capable of providing full access to the basic operating system constructs (windows, menu bars, dialog boxes, etc.) as well as providing access to text within application program documents (as long as the text drawing tools of the operating system are used to create the text image). In order to access information which is drawing or picture-based (line drawings, charts and diagrams, floor plans, etc.), several advanced strategies are being explored.
One approach involves the use of a virtual tactile tablet with a tactile puck/mouse. A vibrating tactile array of 100 pins is mounted on a special puck/mouse. As the mouse is moved about on the tablet, the tactile representation of the information on the screen is provided to the individual's fingertip. In this fashion, the individual can actually feel the information on the screen. Coupled with voice output screen reading features, this system allows the individual to feel the image on the screen and to have any words on the screen read aloud.
Other experimental techniques being examined are routines which would automatically recognize and verbally describe stereotypic information presentation formats (bar charts, pie charts, etc.) and routines which would provide special image enhancement (edge detection/enhancement, etc.) to make complex graphics simpler to explore tactually.
Individuals with hearing impairments currently have little difficulty in using computers. Some computers, such as the Macintosh computers and the IBM PS/1, have volume controls and headphone jacks which allow the connection of headphones or amplifiers/speakers to facilitate their use by individuals who have mild hearing impairments. For individuals who cannot hear, on-screen indication of beeps and other sounds can be provided. Currently, the Macintosh has a feature where the menu bar will flash whenever a sound is emitted if the volume control is turned to zero. Many of IBM's newer laptop computers have a small LCD display which flashes a symbol of a speaker whenever a tone is emitted from the computer, thus providing a visual indication of the auditory sound. The AccessDOS package distributed by IBM also includes a feature called "ShowSounds" which provides a screen flash whenever the speaker on the computer is used. There are also other third-party products, such as SeeBeep, which provide visual indications on the screen when a sound is emitted from a PC speaker.
In addition, a system-wide "ShowSounds" switch is currently being advocated for all operating systems. By implementing the "ShowSounds" switch at the system level, the switch could be used by all application programs to determine whether the user would like a visual indication of any sounds made by the application programs. If an individual were in a noisy environment (such as an airplane or a factory) or had difficulty hearing, they could set the ShowSounds switch. The operating system and all applications which emitted sounds could then check that switch. If it were turned on, they would accompany any auditory sounds with some type of visual indication. Some applications already provide some type of visual indication to accompany many (but not all) sounds. If the ShowSounds switch were set, however, it would be an indication that all sound output should be accompanied by some type of visual indication.
Implementation of the ShowSounds switch would also allow application programs to have closed captioning. That is, newer programs which include speech output could check for the ShowSounds switch and, if it were set, pop-up a small window with the same text that was being spoken. Because this caption would only appear when the ShowSounds switch was set, it would be called a "closed caption." Similarly, if other auditory information were presented which was necessary for the operation of the program, a small indicator or caption describing the sound could be presented (if the ShowSounds switch were set). This descriptor of the sound should preferably be text rather than an icon, in order to facilitate access by individuals who are deaf-blind and using a screen reading program (using braille) to present the information to them.
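The pattern described above can be sketched in a few lines. The settings dictionary and function names here are hypothetical; a real system would expose the ShowSounds switch through its own API.

```python
# Sketch: an application checks a system-wide ShowSounds switch and,
# when it is set, accompanies each sound with a visual equivalent.
# Captions are text rather than icons, so that deaf-blind users
# reading the screen in braille can access them too.

system_settings = {"ShowSounds": False}

def emit_sound(description, events, spoken_text=None):
    """Play a sound; add a visual equivalent if ShowSounds is set."""
    events.append(("audio", description))
    if system_settings["ShowSounds"]:
        # "Closed caption": appears only when the user asks for it.
        events.append(("caption", spoken_text or description))

log = []
emit_sound("error beep", log)                       # switch off: sound only
system_settings["ShowSounds"] = True
emit_sound("error beep", log)                       # now captioned
emit_sound("speech", log, spoken_text="File saved.")
```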
As software packages move toward more multi-media presentations, the ability of application software to provide closed-captioning will increase in importance.
Problems faced by individuals with physical impairments vary widely. Some individuals are very weak, and have limited range of motion. Other individuals, such as those with cerebral palsy, have erratic motor control. Some individuals have missing or paralyzed limbs, while others, such as those with arthritis, have limited manipulative and grasping ability. People with physical impairments can have difficulty manipulating media, carrying out quick actions, operating input devices requiring fine motor control, and pressing multiple keys or buttons at the same time.
Access strategies can be broken down into roughly three categories: modifications to the standard keyboard, alternatives to the standard pointing device, and alternate "special" keyboards or input devices.
Some individuals are unable to use the standard keyboard, but could use it if it behaved a little differently. A number of standard modifications are now available which allow the user to modify the way a standard keyboard works in order for it to function better for people with disabilities. Four examples of keyboard modifications are StickyKeys, SlowKeys, BounceKeys, and RepeatKeys. Many of these features (and others) are now distributed by the major computer companies as standard parts of, or extensions to, their standard operating systems.
[Figure 3: availability of keyboard access features by platform. Legend: n/a = not applicable; ud = under development]
Figure 3 shows the availability of StickyKeys, RepeatKeys, SlowKeys, BounceKeys, MouseKeys, ToggleKeys, SerialKeys, and ShowSounds on Macintosh and IBM computers. The Macintosh has all but BounceKeys and SerialKeys built directly into the operating system. IBM and Microsoft distribute, free of charge, a package called AccessDOS which contains all of these features. The Access Utility for Windows 3.x also contains all of these features, and is distributed as a part of the third-party drivers package available from Microsoft, as well as being available on several bulletin boards, gophers, and on-line information systems such as Compuserve.
StickyKeys is a feature which eliminates the need to press several keys simultaneously. For individuals who type with only one hand, finger, or a head- or mouthstick, it is difficult or impossible to press a modifier key (such as Shift, Control, or Alt) and another key at the same time. When invoked, StickyKeys allows the individual to type modifier keys in sequence with other keys: for example, they can press the Control key and then the H key to get a Control-H.
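The latching behavior of StickyKeys can be sketched as a small state machine. A real implementation lives in the keyboard driver; this toy version, with illustrative names, only shows the sequential-modifier logic.

```python
# Sketch of StickyKeys: modifier keys pressed in sequence are latched
# until the next ordinary key arrives, then combined with it and
# released, so "Control" then "H" yields Control-H.

MODIFIERS = {"Shift", "Control", "Alt"}

class StickyKeys:
    def __init__(self):
        self.latched = set()

    def press(self, key):
        """Return the completed keystroke, or None while collecting modifiers."""
        if key in MODIFIERS:
            self.latched.add(key)   # hold the modifier for the next key
            return None
        combo = "-".join(sorted(self.latched) + [key])
        self.latched.clear()        # modifiers release after one use
        return combo
```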
RepeatKeys is a feature which allows the repeat rate on the keyboard to be adjusted. Some individuals get unwanted multiple characters because the key repeat rate is faster than their reaction time. RepeatKeys allows them to change the speed of the repeat function and/or to turn it off.
SlowKeys is a feature which facilitates use of the keyboard by individuals who have poor motor control which causes them to accidentally bump keys as they move around between desired keys on the keyboard. The SlowKeys feature allows the user to add a delay to the keyboard so that a key must be held down for a period of time before it is accepted. In this fashion, the keyboard only accepts keys which are pressed deliberately and held for this period, and ignores keys which are bumped.
BounceKeys is a feature to facilitate keyboard use by individuals with tremor or other conditions which cause them to accidentally double- or triple-press a key when attempting to press or release it. BounceKeys does not slow down the operation of the keyboard, but does prevent the keyboard from accepting very quick consecutive presses of the same key. Thus, with BounceKeys on, individuals who "bounce" when either pressing or releasing a key would only get a single character. To type double characters, the user would simply have to pause a moment between typing the same key two successive times.
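The BounceKeys filter just described amounts to a debounce on repeated presses of the same key. The sketch below is illustrative; the 200 ms interval is an assumed default, not a value taken from any particular product.

```python
# Sketch of BounceKeys: a press of the same key within the debounce
# interval is rejected as a "bounce"; presses of a different key, or
# of the same key after a pause, pass through unchanged.

class BounceKeys:
    def __init__(self, debounce_ms=200):
        self.debounce_ms = debounce_ms
        self.last_key = None
        self.last_time = None

    def accept(self, key, time_ms):
        """Return True to accept the press, False if it is a bounce."""
        bounce = (key == self.last_key
                  and self.last_time is not None
                  and time_ms - self.last_time < self.debounce_ms)
        self.last_key, self.last_time = key, time_ms
        return not bounce
```

Note that BounceKeys, unlike SlowKeys, adds no delay for ordinary typing: every press is decided immediately.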
In addition to these software modifications to the keyboard, the use of a keyguard is also common. A keyguard is a flat plate which fits over the top of a keyboard and has holes corresponding to each key. The individual can then rest their hand on the keyguard and poke a finger down through the hole to type. The keyguard both helps prevent the typing of unwanted characters and provides a stable platform which the individual can use to brace their hand for additional control in typing.
Many individuals with physical impairments are unable to control the standard pointing device. In some cases, mouse alternatives such as trackballs can be used. In other cases, individuals are unable to operate any analog pointing device. One software approach which allows the mouse to be controlled from the keyboard is called MouseKeys. When MouseKeys is invoked, the number keypad on the computer switches into a mouse-control mode. The keys can then be used to move the mouse cursor around on the screen. Keys on the keypad also allow the mouse button to be "clicked" or to be locked and released to facilitate dragging. The MouseKeys feature works at the same time as a standard mouse or trackball; it is therefore possible to use these other pointing devices to move about on the screen, and then switch to the keypad for fine movement of the mouse. Single-pixel movement of the mouse is very easy using MouseKeys. In fact, it is often used by nondisabled graphic software users for precise pixel movements which are difficult or impossible with the standard mouse. For individuals who have good head control, there are also head-operated mice which allow the individual to point with their head and then puff on a straw in order to activate the mouse button.
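The keypad-to-cursor mapping at the heart of MouseKeys can be sketched briefly. The layout follows the conventional numeric keypad (8 = up, 2 = down, diagonals on the corners, 5 = click), but the code itself is an illustration, not a driver.

```python
# Sketch of MouseKeys: each keypad digit moves the cursor one pixel in
# the corresponding direction, giving the single-pixel precision noted
# above; keypad 5 acts as the mouse button.

KEYPAD_DELTAS = {
    "7": (-1, -1), "8": (0, -1), "9": (1, -1),
    "4": (-1,  0),               "6": (1,  0),
    "1": (-1,  1), "2": (0,  1), "3": (1,  1),
}

class MouseKeys:
    def __init__(self):
        self.x, self.y = 0, 0
        self.clicks = 0

    def press(self, key):
        if key == "5":
            self.clicks += 1           # keypad 5 = mouse button
        elif key in KEYPAD_DELTAS:
            dx, dy = KEYPAD_DELTAS[key]
            self.x += dx               # one-pixel step per press
            self.y += dy
```

A fuller implementation would also accelerate movement while a key is held, and provide lock/release keys for dragging.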
While modification to the standard keyboard allows input by some individuals, alternate "special" keyboards or input devices work better for others. These alternate keyboards take many different forms, including expanded keyboards, miniature keyboards, headpointing keyboards, eyegaze-operated keyboards, Morse code input, scanning keyboards which require operation of only a single switch (operated by hand, head, or eyeblink), and voice operated keyboards. Some of these keyboards connect to the computer in place of or along with the standard computer keyboard. Other alternate keyboards connect to the serial or parallel port on the computer, and use special software to cause their input to be injected into the operating system and treated as keystrokes from the standard keyboard. In still other cases, the "keyboard" may appear on-screen in a special window. The individual then selects keys on that video keyboard using a headpointer, a single switch scanning technique, Morse code, or other special input technique. The keys selected on the video keyboards are then fed through the operating system so that they appear to application programs as if they had come from the standard keyboard.
For programs which provide mouse support, these alternate input devices can also create simulated mouse activity, allowing the user to access drawing, dragging, and other mouse-based functions of the application programs.
This document is hosted on the Trace R&D Center Web site. Please visit our home page for the latest information about Designing a More Usable World - for All.