These design guidelines are grouped by topic as listed below.
Wherever possible, applications should use the standard text-drawing tools included in the system. Most screen access software programs for computers with graphic user interfaces figure out what is on the screen by watching the use of these tools. Even when the tools are used to draw characters in other (nonscreen) locations of memory, and the information is then copied to the screen, it is still possible for access software to track their use. In this fashion, the access software can keep track of which characters, with which attributes, appear in each location on the screen without having to attempt optical character recognition directly on the bit-mapped fonts on the screen. (Direct OCR of the pixel image of the characters on the screen has been proposed, but is currently not practical. When small-point italic characters are used, they are generally so distorted as to be unrecognizable. In addition, underlining, shading, outlining, and other attributes applied to the text can make it difficult to recognize. As a result, tracking the use of the text-drawing tools is the only currently available technique.)
Occasionally, applications will draw the text characters in a different portion of memory, and then copy the block of text onto the screen. As mentioned above, as long as the text-drawing routines are used, this does not pose a problem. However, when the applications are done with this text and they want to re-use the area, they will often directly zero the space in memory where they were drawing the characters rather than using the text-drawing tools to erase this area. This makes it more difficult for the screen reading software to keep track of which characters are or are not still drawn in that portion of memory.
Occasionally, applications will use text which has been predrawn and stored in the program as a bit image. Such painted text cannot be read by any current screen reading routines. When this text is purely decorative, as on a start-up screen, it does not pose a problem. If it contains important information or information necessary to use or understand the program, it should be created in real time using the text-drawing tools in order to be accessible by screen reading programs.
The problems surrounding cursors and pointers generally fall into two categories:
Eventually, some standard mechanism for allowing electronic cursor/pointer location may be devised. In the meantime, the following strategies may be used.
Whether using text-based or graphics-based screens, using the system cursors and pointers wherever possible facilitates their location. Again, most screen reading programs can easily locate the system cursor and pointer. However, if the application software creates its own cursor (by highlighting text, by creating a box, etc.), there is no way for the access software to easily tell where the cursor is.
If the application software does use some special nonsystem cursor, one strategy is to drag the system cursor along with the special cursor. The system cursor can be invisible. It will still be "seen" and tracked by most screen reading (or enlarging) software even though it is not visible on screen to a sighted user. In this fashion, the access software can follow the custom cursor which would otherwise be invisible to it. Even when the focus is indicated by other means (e.g., a heavy black square around a cell on a spreadsheet), the system cursor can be dragged along with the focus. In some systems, the cursor rectangle can be defined to be the same size as the cell on the spreadsheet, allowing the screen reader to determine more easily which characters on screen are within the focus area. If there is more than one highlighted area on screen, the system cursor should be taken to whichever would be the primary focus at the present time given the user's activity.
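The cursor-dragging strategy might look roughly like the sketch below. The FocusTracker class and its system_caret structure are illustrative stand-ins for the real system caret calls (which vary by platform), not an actual API.

```python
# Sketch: keep an (invisible) system cursor in sync with a custom
# spreadsheet-style focus highlight, so screen readers can follow it.
# The FocusTracker class and the system_caret dictionary are
# illustrative assumptions, not a real toolkit API.

class FocusTracker:
    def __init__(self, cell_width, cell_height):
        self.cell_width = cell_width
        self.cell_height = cell_height
        # The "system caret": position and size the OS would report
        # to screen readers, even while drawn invisibly.
        self.system_caret = {"x": 0, "y": 0, "w": cell_width, "h": cell_height}

    def move_focus(self, row, col):
        """Draw the application's own heavy-border highlight, then drag
        the system caret along so access software can locate the focus."""
        # ... application draws its custom highlight here ...
        self.system_caret["x"] = col * self.cell_width
        self.system_caret["y"] = row * self.cell_height
        # Sizing the caret to the whole cell lets the screen reader
        # tell which characters fall inside the focus area.
        self.system_caret["w"] = self.cell_width
        self.system_caret["h"] = self.cell_height

tracker = FocusTracker(cell_width=64, cell_height=16)
tracker.move_focus(row=3, col=2)
print(tracker.system_caret)  # {'x': 128, 'y': 48, 'w': 64, 'h': 16}
```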
Some individuals with low vision are able to use computers without screen enlargement software, either by using the standard font or a slightly larger font. The text cursor (and some mouse cursors), however, sometimes consists of a single thin line which easily disappears from the user's view. As the user enlarges the fonts, the cursor line usually gets taller, but it does not necessarily get any thicker or easier to see. If an application is using a standard system cursor, then the problem should be handled at the system level (since the system should already support an alternate system cursor which would be heavier and easier for individuals to see). If the application software is providing its own cursors, however, then provision of an alternate cursor with a heavier line width should be considered. Alternately, a special control which would make the cursor stand out in some fashion, to make it easy to locate, could be provided. Some strategies for making the cursor easy to locate include:
For individuals who are color blind, the ability to select the colors used for all aspects of the screen is helpful. In general, most displays use light characters on a dark background or dark characters on a light background. As a result, they are generally visible no matter what their color is, simply because of the difference in their intensity. However, the ability to adjust colors to increase contrast is helpful for some individuals.
When using color to encode information, using colors having much different intensities makes the colors easier to differentiate. A light yellow and a dark green, for example, could be distinguished even if the screen were displayed in gray-scale mode because of the difference in their intensity.
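One way to check this while choosing colors is to compare their approximate gray-scale intensities. The weighting below is the common luminance approximation; the threshold of 80 (out of 255) is an illustrative assumption, not a standard.

```python
def intensity(rgb):
    """Approximate perceived intensity (gray-scale value) of an RGB
    color, using the common 0.299/0.587/0.114 channel weighting."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def distinguishable_in_grayscale(c1, c2, min_difference=80):
    """True if two colors would still differ noticeably on a
    gray-scale display. The threshold is illustrative."""
    return abs(intensity(c1) - intensity(c2)) >= min_difference

light_yellow = (255, 255, 160)
dark_green = (0, 96, 0)
# Light yellow vs. dark green differ strongly in intensity...
print(distinguishable_in_grayscale(light_yellow, dark_green))  # True
# ...while a medium red vs. a medium green of equal brightness do not.
print(distinguishable_in_grayscale((200, 0, 0), (0, 200, 0)))  # False
```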
If there is a method to set the colors of standard elements from the system control panel, then use those colors for the corresponding elements in the application.
If there are no standard elements, then either provide a way to set the colors within the application, or make sure that color blindness will not affect the readability or interpretability of the information displayed in color, by using color redundantly and by making sure that high contrast is maintained.
One mechanism to circumvent problems with color is simply to provide a monochrome or gray-scale option for the program. Individuals having difficulty with colors can then use the program in the monochrome or gray-scale mode.
However, care should be taken to make sure that there is sufficient contrast between text and background. It is fashionable to make some buttons using black text on a dark gray button. This low contrast combination makes it more difficult for people to read, especially those with low vision.
Some systems plan to have a "High Contrast" mode. In this case low contrast controls and information can be used more freely to dress up the application, as long as high contrast modes are available and used within the application when the "High Contrast" flag is set in the operating system's control panel.
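Contrast between two colors can be estimated numerically. The sketch below uses the relative-luminance and contrast-ratio formulas later standardized by WCAG, applied here purely as an illustration of why black text on a dark gray button is a poor combination.

```python
def relative_luminance(rgb):
    """sRGB relative luminance, per the WCAG definition."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

black = (0, 0, 0)
dark_gray = (96, 96, 96)
white = (255, 255, 255)
# Black on dark gray falls well below common readability guidelines...
print(round(contrast_ratio(black, dark_gray), 1))
# ...while black on white gives the maximum possible ratio, 21:1.
print(round(contrast_ratio(black, white), 1))
```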
For individuals who have low vision, consistency of screen layout is important. As discussed earlier, individuals with low vision often use screen enlargement software to access the screen. As a result, they are only able to view a small portion of the screen, similar to looking down a paper tube. Similarly, individuals who are blind must use screen reading software to locate items on the screen, searching one letter or word at a time. Thus, programs that have a consistent location for menus, feedback messages, etc., are much easier to use. Where operating systems specify standard procedures and locations for things, it is very helpful for application programs to follow these standards.
Alert messages that pop-up and disappear quickly may be missed by some individuals, depending on their screen access tools. To avoid this problem, alert messages should remain on screen until dismissed by the user.
Some other applications have text which appears when the mouse cursor touches some point on the screen. If the mouse cursor moves off of that point, the text disappears. This poses a particular problem for screen access software that moves the mouse pointer along as it reads the text.
A typical scenario of this problem would occur as follows. The user moves the cursor to a point on the screen, causing the text to pop-up. The user then tries to read the text, but as the screen reader begins to read the text, it moves the mouse cursor to move along with the reading. As soon as the cursor moves to the first word, it has left the original trigger point on the screen, and the text that the user is trying to read disappears.
Individuals with learning disabilities may experience similar problems. For example, there is now a special utility program on the market which allows people with learning disabilities to get reading assistance: the user points the mouse cursor at a word, and the program reads the word aloud. Such a program would be unable to read words in pop-up messages such as those described above. As soon as the user moved the cursor to tell the special utility which word to read, the message would disappear.
At the present time, the balloon help on the Macintosh suffers from such a problem. A mechanism which would allow triggered text to be locked on, so that the individual can move the cursor over the text to read it, would be helpful.
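Such a locking mechanism might be sketched as follows; the TriggeredText class and its lock toggle are hypothetical, not part of any existing help system.

```python
# Sketch of "lockable" triggered help text: once a pop-up appears, the
# user can pin it (say, with a hypothetical lock key), so that moving
# the pointer onto the text - as a screen reader or reading utility
# does - no longer dismisses it. All names here are illustrative.

class TriggeredText:
    def __init__(self, trigger_region):
        self.trigger_region = trigger_region  # (x, y, width, height)
        self.visible = False
        self.locked = False

    def _inside(self, x, y):
        rx, ry, rw, rh = self.trigger_region
        return rx <= x < rx + rw and ry <= y < ry + rh

    def pointer_moved(self, x, y):
        if self._inside(x, y):
            self.visible = True
        elif not self.locked:
            self.visible = False  # unlocked pop-ups vanish as usual

    def toggle_lock(self):
        if self.visible:
            self.locked = not self.locked

tip = TriggeredText(trigger_region=(100, 100, 20, 20))
tip.pointer_moved(105, 105)   # pop-up appears
tip.toggle_lock()             # user pins it
tip.pointer_moved(300, 100)   # pointer follows the reading voice...
print(tip.visible)            # ...but the text stays on screen: True
```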
Text-based screen readers default to reading left to right. Text which is positioned in columns within a window or object on screen is often read as if it were continuous text; that is, the text in the first column is read, and then the screen reader jumps to the next column and continues reading. Many screen readers can be programmed to deal with text in columns. Where possible, however, continuous text is easier to deal with -- especially in help files.
If objects on the screen have a definition table, it is important to attach a label to the object. Even if the label does not appear on the screen, this information is available to screen readers. Wherever possible, labeling controls visibly on the screen makes their function clearer and also facilitates access via screen readers.
Icons which are embedded in text and convey meaning (rather than being merely decorative) can be missed by screen readers, resulting in misunderstanding or incomplete comprehension of the information by people who are using screen readers.
Some application programs provide their own on-screen indication as to whether the CapsLock, ScrollLock, and NumLock keys have been depressed. In some cases, this feedback is independent of (and therefore sometimes contradictory to) the flags in the system or the status of the lights on the keyboard. This can cause inconsistent feedback to people who are using access programs which check the status of these indicators. Application programs should either use the status flags in the system and keyboard or update them to agree with the program.
Making all aspects of the program, including menus, dialogs, palettes, etc., accessible from the keyboard significantly increases accessibility for many users. Although a MouseKeys feature (which allows the user to use the keypad to drive the mouse around the screen) could be used to provide access to toolbars, for example, this is a very slow and ineffective mechanism. Even if the individual is using MouseKeys for drawing, rapid access to the tools via the keyboard can greatly facilitate the use of the application software by individuals with disabilities (and other users as well). Access by allowing users to "walk" the menus using the arrow keys as well as by keystroke equivalents can greatly increase the efficiency and ease of use for many users both with and without disabilities.
Again, use common conventions, system standards, and style guidelines wherever possible when designing keyboard access to all aspects of the program.
One problem faced by individuals with disabilities is the inability to hold down two keys simultaneously. "StickyKey" programs which provide electronic latching for the Shift, Control, Alternate, Option, and Command keys on the different computer platforms already exist, and are being made available by operating system manufacturers. As a result, it is not necessary to build this type of feature into your application program. In fact, this is an example of an accessibility feature which is best handled at the system level. Moreover, implementing it in an application can cause a conflict with, and therefore interfere with, the feature in the system software. See Part II for a complete listing and description of common keyboard access features in new operating systems.
Screen reading software for people who are blind uses the control names and types to provide information about the control to the user (who cannot see the shape of the control).
Screen readers used by people who are blind can easily detect and identify these types of controls on the screen.
If you want a custom look, and the operating system has an "owner-draw" style (such as MS Windows), use it instead of a custom control. This type of control will appear to the blind user's screen reader as a standard control. Be sure to fill in the text label for the control (even if you don't use it to label the control on screen). The screen readers use this name to describe the control to the blind user.
If you define a CUSTOM control which behaves similarly to a standard control, use the name of the standard control as part of the name of your custom control. The screen reader can use the name of the control to pass information on to the user which will help the user understand the general type of the custom control. For example you might name a custom button "SpecialButton".
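A minimal sketch of this naming convention follows; the Control base class and the accessible_name() helper are illustrative assumptions, not a real toolkit API.

```python
# Sketch: embedding the standard control type in a custom control's
# name so a screen reader can announce something recognizable.
# The classes and accessible_name() are illustrative, not a real API.

class Control:
    def __init__(self, label):
        self.label = label

    def accessible_name(self):
        # Screen readers read this name even if it never appears on screen.
        return f"{self.label} {type(self).__name__}"

class Button(Control):
    pass

# A custom control that behaves like a button keeps "Button" in its
# class name, so "SpecialButton" still tells the user it is a button.
class SpecialButton(Button):
    pass

ok = SpecialButton("OK")
print(ok.accessible_name())  # OK SpecialButton
```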
"Fake" buttons or "hot spots" on pictures make it difficult or impossible for a screen reader to tell that there is a button there. Strategies and approaches for dealing with this are being developed - but have not yet been standardized.
As discussed earlier, most access software works by attaching itself to the operating system. When application software uses standard system menu tools, access software is able to read the list of available commands and can provide the individual with the ability to directly maneuver through and activate the commands.
Menu items that are not text-based and are not accompanied by text are difficult for screen reading programs to access.
Application programs which provide the ability to access all of the menus by using the keyboard greatly facilitate access by individuals who cannot use the standard pointing device. This access may be provided either by use of the arrow keys to move around through the menu structure, or through use of keyboard equivalents for the menu items.
Application programs which provide multiple mechanisms for accessing commands better accommodate the differing needs of users. Access via menus and layered dialogs provides easier access for individuals with lower cognitive abilities. Direct access with key combinations provides better access for individuals with physical impairments and for individuals who are blind.
As with menus, application programs which provide direct access to palettes and toolbars greatly facilitate access by individuals with different disabilities. If the toolbar is only a shortcut method to accessing items in the menu, and the menu is accessible, then access to the toolbar would not be necessary. When the toolbar commands are not available in the menu, however, direct access might be provided, or the items might be provided redundantly as an optional menu.
Screen access software for individuals who are blind works by monitoring the operating system's screen drawing routines. When individual icons are drawn separately, they can be individually identified, named, and accessed. If a toolbar or palette is drawn as a single bit image, the individual tools within that palette are not individually identifiable or accessible using standard techniques.
Helpful for both individuals with physical disabilities and with visual impairments.
Again, when naming buttons and controls within a dialog box (whether the name appears on the button/control on screen or not), be sure to use clear, logical, descriptive names which match the words printed on the screen near them. Screen reading software accesses these names to help the person who is blind decipher the information within the dialog box.
In some operating systems, buttons within a dialog box are not normally accessible directly from the keyboard. Access utilities exist which allow individuals to tab through the buttons until they reach the desired button, after which they can select it from the keyboard. The order in which the tab moves through the buttons is dependent upon the order in which the buttons are defined in the dialog definition tables. If the button definitions are not in logical order, the tabbing key will jump the highlight in what appears to be a random pattern around the dialog, highlighting the buttons in their definition order. Although this does not prevent access, it is disorienting.
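One way to keep the definition order logical is to sort the button definitions into reading order before building the dialog definition table. The (name, x, y) tuples below are illustrative, not any system's actual definition format.

```python
# Sketch: dialog button definitions sorted into logical (top-to-bottom,
# left-to-right) order before being written into the dialog definition
# table, so keyboard tabbing moves predictably instead of jumping
# around the dialog in definition order.

buttons = [
    ("Help",    300, 200),   # defined first, but visually last
    ("OK",      100,  50),
    ("Cancel",  200,  50),
    ("Options", 100, 200),
]

# Tab order should follow reading order: row by row, then left to right.
tab_order = sorted(buttons, key=lambda b: (b[2], b[1]))
print([name for name, x, y in tab_order])
# ['OK', 'Cancel', 'Options', 'Help']
```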
If the caption is not a part of the button itself, use some standardized spatial relationship so that the location of a label for a button (or a button for a label) is predictable for individuals using screen readers to explore/use a dialog box.
Again, the best solution is to provide direct keyboard access to all aspects of the dialog, including buttons, scroll windows, text entry fields, and pop-up menus.
Many users with low vision can use an application without a screen enlargement program, provided the application allows users to adjust the font size. In fact, most users will appreciate being able to adjust the font size as a way to reduce eyestrain.
The font size in the on-line help should change in concert with the adjustments to the font size made by the user in the application.
As discussed in "Cursors, Pointers, Highlighting and Other Focus Techniques" above, allowing for the substitution of larger or heavier lined cursors and pointers makes it easier for many users to track cursor and pointer movements, and maintain their attention on the application's current focus.
Lines are often drawn using a default width of a single pixel. Lines of this width can be hard to see in a variety of viewing environments and on different display hardware. Additionally, users with low vision may be unable to see single-pixel-width lines under any circumstances. Therefore, make sure that you use the system's tools for determining monitor resolution parameters, and be aware that future operating systems may allow users to adjust line thickness to suit their needs. (For example, in Windows you can call GetSystemMetrics with the SM_CXBORDER and SM_CYBORDER constants to determine the proper line width for the user's monitor and resolution - and later, their preference.)
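As a rough sketch of the idea (in Python rather than a Win32 call), the rule below scales line width with display resolution and a user preference. The one-pixel-per-96-dpi scaling and the multiplier are illustrative assumptions, not any system's actual behavior.

```python
# Sketch of resolution-aware line width, in the spirit of querying
# GetSystemMetrics(SM_CXBORDER) on Windows. The scaling rule and the
# user_thickness_factor parameter are illustrative assumptions.

def line_width(dpi, user_thickness_factor=1.0):
    """Pick a line width that stays visible as resolution increases."""
    base = max(1, round(dpi / 96))        # thicker on denser displays
    return max(1, round(base * user_thickness_factor))

print(line_width(96))          # 1 on a classic 96-dpi display
print(line_width(192))         # 2 on a high-resolution display
print(line_width(96, 3.0))     # 3 for a user who needs heavier lines
```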
There are many uses for sound in an application. Some of them are:
In Uses 1 and 2, a person who cannot hear the sounds is not at a disadvantage. In Use 3, and particularly in Use 4, however, visual presentation of the information should be provided as an option for people who cannot hear, for people in a noisy environment where the sound would be lost or not intelligible, and for environments where the sound may be turned off (e.g., a library or a long row of workstations).
A general solution which solves the access problems for both individuals who are hard of hearing and individuals who are deaf is the provision of all auditory information in a visual form as well. Auditory warning beeps can be accompanied by a visual indicator. Beeps and other sounds would be described in text, both to differentiate the sounds and to allow access by individuals who are deaf-blind (and would be using a braille screen reading program to access all of the information from the computer). Speech output (in cases where it is important for understanding and using the program) can be accompanied by text on the screen (either as a normal part of the program, or in a caption box). This presentation of information visually can be programmed to happen at all times, or can be invoked if a special operating system flag is set indicating that the user would like all auditory information presented visually. If the system software provides a "ShowSounds" switch, the setting of this switch could then trigger the visual display feature.
For beeps or other sounds which are not normally accompanied by a visual indication, application software should check for a system "ShowSounds" switch. At the present time, the "ShowSounds" switch is not a standard feature. In the future, however, it should be appearing as a standard system switch which can be accessed by software. Users who are in noisy environments or who cannot hear well would then be able to set the "ShowSounds" switch. Application programs could then check that switch and provide a visual indication to accompany any auditory sounds.
NOTE: What kind of visual indication accompanies the sound is entirely up to the application. In some cases where the sound carries a rather urgent cue or warning, you might want the whole screen to flash. In other cases the window or its title bar might flash. Also, see "Ensure that Visual Cues Are Noticeable" below.
NOTE: In addition to providing a "ShowSounds" switch as a part of the operating system, manufacturers of operating systems are also being encouraged to build captioning tools directly into the operating system to facilitate the implementation of closed captioning by application programs.
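The ShowSounds check described above might be sketched as follows; get_show_sounds_flag() is a stand-in for whatever call the operating system eventually provides, and the event list simply records what the application would do.

```python
# Sketch: routing every beep through one helper that honors a
# "ShowSounds"-style switch. get_show_sounds_flag() and the settings
# dictionary are illustrative stand-ins for a real system call.

def get_show_sounds_flag(settings):
    return settings.get("ShowSounds", False)

def alert(settings, events):
    """Emit the normal beep, and add a visual cue when requested."""
    events.append("beep")
    if get_show_sounds_flag(settings):
        # e.g., flash the window title bar or the whole screen
        events.append("flash")

log = []
alert({"ShowSounds": True}, log)
print(log)  # ['beep', 'flash']
```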
When providing a visual cue to what would otherwise be an auditory alert, it is important to ensure that the cue is sufficient to attract the user's attention when viewed out of the corner of the eye. An individual who is looking at the keyboard and typing, for example, is not going to notice a small icon that appears and disappears momentarily in the corner of the display. A flashing menu bar or area at the bottom of the screen will stand a better chance of attracting attention (flashing should be 2 hertz or below).
As programs incorporate the use of synthetic or recorded speech, closed captioning should be considered. Again, in those cases where the information being presented via speech is already presented in text on the screen, there is no need to present the information visually in any other fashion. In those cases where information is being presented via speech which is not otherwise displayed on the screen, application programs might check for the "ShowSounds" switch. If the switch is set, a small box containing the text being spoken could be displayed on screen. Music or other sounds being provided for adornment would not have to be presented in caption form, if they are not important to the operation of the program. Where the tune or sound is important to the operation of the program, then some description to that effect could appear in the caption box.
For some users, simply increasing the volume of the sounds is enough to provide access to all auditory information presented by the application. Auditory output should not have a fixed volume but should be adjustable using the control panel or other user settable sound features in the operating system.
In other instances and in other environments, users may want to eliminate any sound output at all. For instance, while working in a library, auditory output can be irritating to the other patrons.
Although the use of sound can be a problem for people with hearing impairments (if a visual counterpart is not available), the use of sound in programs can be very helpful for users who are blind and in some applications for people with cognitive disabilities as well.
Programs requiring time-dependent responses in less than 5-10 seconds should have provision for the user to adjust the time over a wide range, or have a non-time-dependent alternative method.
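A minimal sketch of such an adjustable timing provision, with illustrative names and defaults:

```python
# Sketch: a user-adjustable response timeout. The default and the
# multiplier are illustrative; the point is that no response window
# shorter than the user can handle is hard-coded.

def response_timeout(base_seconds, user_multiplier=1.0, no_time_limit=False):
    """Return the allowed response time, or None for untimed operation."""
    if no_time_limit:
        return None   # the non-time-dependent alternative: wait indefinitely
    return base_seconds * user_multiplier

print(response_timeout(5))                      # 5.0 seconds by default
print(response_timeout(5, user_multiplier=4))   # 20 for a slower user
print(response_timeout(5, no_time_limit=True))  # None: no time limit
```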
These should remain on screen until the user consciously acknowledges or dismisses them.
Flickering screens can trigger seizures in people with photosensitive epilepsy. The worst frequency is around 20 hertz; at frequencies above 60 hertz and below 2 hertz, sensitivity is greatly reduced. Sensitivity increases with the brightness of the display and with the area of the screen that is flickering.
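These frequency bounds can be captured in a simple screening check; treating them as a hard pass/fail rule is a simplification of the guidance above, since brightness and flickering area also matter.

```python
# Sketch: screening a flash rate against the photosensitive-epilepsy
# danger band described above (worst near 20 Hz, much safer below
# 2 Hz or above 60 Hz). The boundaries come from the text; reducing
# them to a boolean check is a simplification.

def flash_rate_is_safer(hz):
    return hz <= 2 or hz >= 60

print(flash_rate_is_safer(2))    # True: at or below 2 Hz
print(flash_rate_is_safer(20))   # False: the worst frequency
print(flash_rate_is_safer(70))   # True: above 60 Hz
```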
In order to facilitate access to programs by individuals using their access software, it is useful to have all user-settable parameters both readable and settable via external software. This might be accomplished in a number of fashions, including providing an optional menu which could be enabled (since the access software would already have access to the menus). This technique would allow the software both to easily get a list of the externally available commands and to execute them. Commands can be provided for reading and for setting parameters, either directly or via dialogs.
Although this is true in any environment, it is especially true in character-based programs. Dual-column text, pop-up menus, etc., can be problematic and require custom programming of the interface for each application program. Even then, the results are mixed. The screen reader tends to read from left to right across the page, mixing columns and drop-down menus as if it were all running text.
Where possible, use extended ASCII character graphics rather than standard ASCII characters (such as "***") for drawing lines, making boxes, etc. When screen readers hit such text, they may read it as "asterisk, asterisk, asterisk," unnecessarily slowing down the process. A particular nuisance is text buried in a string of asterisks. In order to read the text, the individual must sit while the screen reader reads off the punctuation or other characters. Screen reading programs can be programmed to skip nonalphabetic characters; however, this can cause the individual to miss important information on the screen.
A similar problem appears when alphabetic characters are used to draw boxes. Using 1's (the digit one) or l's (lower case L) to draw a vertical line is obvious to somebody looking at the overall screen. When reading a single line of text using a screen reader, however, these do not look like a vertical line but are read aloud as the characters "One" or "L."
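As a sketch of the alternative, the helper below draws a box with dedicated line-drawing characters (shown here as Unicode box-drawing characters, which serve the same purpose as the extended ASCII set) instead of asterisks or letters, so a screen reader configured to skip graphics characters never reads the border aloud. The boxed() helper is illustrative.

```python
# Sketch: drawing a box with line-drawing characters instead of
# asterisks, ones, or letter l's. A screen reader can be set to skip
# these graphics characters, whereas "***" would be read aloud
# character by character.

def boxed(text):
    width = len(text) + 2
    top = "┌" + "─" * width + "┐"
    mid = "│ " + text + " │"
    bottom = "└" + "─" * width + "┘"
    return "\n".join((top, mid, bottom))

print(boxed("Save changes?"))
```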
Software that presents information in a color graphics mode often uses different strategies to highlight or select text. Providing an optional monochrome mode in your software greatly facilitates the operation of access software, particularly for locating the cursor.
A common strategy for selecting items from a list is to use the arrow keys to move a highlighted bar up and down the list. A highlighted bar is much harder for screen reading software to detect than a character is. Moving a small character up and down the list along with the highlight (or otherwise changing the characters on the selected line) greatly facilitates access by screen reading programs. An example is shown below.
> Item 2
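The selection-marker idea above can be sketched as follows; the render_list() helper is an illustrative stand-in for the application's own list-drawing code.

```python
# Sketch: rendering a selection list with a visible marker character
# on the selected line, in addition to the highlight bar, so a
# screen reader can find the selection by its text alone.

def render_list(items, selected):
    lines = []
    for i, item in enumerate(items):
        marker = ">" if i == selected else " "
        lines.append(f"{marker} {item}")
    return "\n".join(lines)

print(render_list(["Item 1", "Item 2", "Item 3"], selected=1))
#   Item 1
# > Item 2
#   Item 3
```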
An important component to the accessibility of any software is the ability of the user to access the documentation. Documentation can be made available in a number of formats, including standard print, large print, braille, audio tape, and electronic form. The most universal of these is the electronic format. In order to be really accessible for people who are blind, the information should be available as an ASCII text file. This would involve converting photographs and diagrams into descriptions, and identifying other techniques for providing emphasis to particular words other than the use of different fonts and highlights. Once a file is available in a pure ASCII form, it can be easily accessed using screen readers as well as translated and printed out as braille or recorded in audio tape format.
Although individuals who are blind will find an ASCII text file to be the most useful form, individuals who have severe physical disabilities may find that an electronic copy of your manual which also provided pictures and diagrams will be the most useful form. The electronic form of the manual would allow people with physical disabilities to have access that they would not normally have, because of the difficulty in manipulating books. Having a full graphic version of the manual would provide them with the maximum amount of information.
Someday, when "electronic paper" is common, having the manual in both ASCII and "electronic paper" would be optimal. In the meantime, the ASCII version is the most universally accessible format.
Even the design of standard print manuals can be done to better facilitate their direct use by individuals with visual and other impairments. Some things which can be done to improve the accessibility of standard print documents are:
One form of electronic documentation which is becoming increasingly more prevalent is on-line help. As long as the help is presented using standard screen-writing routines, access should be no problem. If pictures are used within the on-line help, then text should accompany the picture and provide enough information that the picture or diagram is a redundant visual aid.
Translating documentation from its standard print form into an ASCII text file which is effectively formatted can take some effort. However, there are programs set up in the United States which can provide technical assistance in the translation process.
Some packaging techniques make it difficult or impossible for people with manipulation problems to open the package. Where products are sealed for warranty or virus protection, some means for easily opening the package should be provided.
In addition to the printed and on-line documentation, many programs have videotapes or other multi-media training materials available for them. In addition, some companies provide training courses, either in the direct use of their product or for programmers or other professionals wishing to use or extend their product.
Having access to the training materials for a program can be as important as, or more important than, access to the basic documentation. As software becomes more and more complicated, the ability to access and use the training materials becomes essential. Videotapes with closed (or open) captions, provision of equivalent training materials which do not require the ability to see, and the use of descriptive video (where the actions taking place on the screen are described as a narrative on a separate audio track) are examples of some strategies which can be used here. Providing more accessible training does not mean that videotapes cannot be used simply because some users are blind, however. It could mean that the same information provided in the videotapes is also available in a form that does not require sight.
In addition to the training materials themselves, it is also important that training sessions be as accessible as possible. Some strategies for doing this include holding the training sessions in facilities which meet ADA accessibility standards, and may include the provision of interpreting or other services to meet the needs of specific attendees.
Another key to having software which is more accessible is the provision of specialized customer support. Often, an application program will seem to be incompatible with various adaptive hardware or software products, when in fact it will work with them if certain parameters are properly set. In other cases, it may be incompatible with one particular adaptation, but be easily accessed using others. Such information is important to users who have disabilities, and generally cannot be obtained by calling the standard customer support lines. In fact, a number of companies have built-in accessibility features in their products which are unknown to their own customer support teams.
While it would be nice to have all of the customer support personnel fully aware of all types of disabilities, adaptations, and compatibility issues, this is unrealistic. There is simply too much specialized information. Even with a specialized hot line, application companies may find that they identify different individuals with expertise on how to use or adapt their software for users with different disabilities.
A two-tiered approach to support for users with disabilities is therefore suggested. First is the inclusion of basic disability access issues and information across all of the customer support personnel. This would include both a TDD (telecommunication device for the deaf) line and a voice line. It would also include an awareness of the efforts by the company to make their products more accessible, and the existence of the specialized customer support line. All customers, including those with disabilities, could then use the standard support lines to handle standard product use questions. When specialized questions arose, such as compatibility of the product with special disability access utilities, the calls could be forwarded to a disability/technical support team.
The second tier would be the creation of a customer support line specifically for individuals who have disabilities. If your company provides an electronic customer assistance mechanism, a special forum or section for disability access should also be provided. The purpose of these mechanisms would be to provide specialized and in-depth information and support regarding disability access and compatibility issues or fixes for different access utilities.
For some small companies, it may be difficult to develop a depth of expertise in each of the disability areas. In that case, rather than trying to hire someone with expertise in the different disability areas as well as in technical support, the company might contract with an outside agency that has this expertise and train it on the company's software and technical support information.
The existence of the special customer support, as well as the phone numbers, should be prominently listed in the documentation. Specific services and disability access features of products should also be plainly documented in manuals.
It is difficult to ensure that new application software will not cause problems for any of the many different types of special access and adaptive hardware and software. Often, the only way to tell whether a product or new features in a product will cause problems is to actually try it out with the different access products. As a first pass, companies may have people with disabilities on site who can test new programs for general usability. However, there are literally hundreds of different adaptive aids. As a result, it is difficult for each application software manufacturer to have all of the adaptations on-site to try with their new software or new features. Two alternate strategies are therefore suggested.
The first strategy is to include individuals from the various adaptive hardware manufacturers and software developers as a part of the early beta testing of a product. This will take a concerted effort on the part of application software developers, since these adaptive product manufacturers themselves do not represent a large enough market to normally qualify for early beta release of application software programs.
A second strategy would be to contract with a third-party testing lab that is familiar with a) the different types of hardware and software adaptations available and b) the problems usually encountered by these access products with application software. This would involve a financial investment on the part of the application software developer. On the other hand, it may provide a better mechanism for obtaining a relatively high-confidence evaluation of the compatibility of the application software. It would also allow testing with a range of different hardware and software adaptations without requiring the application manufacturer to release its software to a large number of different manufacturers. Early (pre-beta) testing of software is important, since accessibility problems are likely to occur at a level that is difficult to address by the beta stage of an application. A major difficulty with this approach is that, at present, there are no known testing labs with the broad cross-sectional base of information needed to carry out such testing.
The best approach at this time therefore appears to be involving the developers of the adaptive hardware and software as early as possible in the testing of a product or update.
Another key area in ensuring the accessibility of application software is support for companies developing disability access software. Again, these companies are usually small enough that they do not qualify for the types of support generally provided to other, larger developers and operating system manufacturers. As a result, it is often difficult or impossible for them to obtain technical support in the same manner as other, larger third-party manufacturers. In addition, the types of problems they have sometimes differ. It is therefore often helpful to have individuals within the technical support team who specialize in these issues, and who can work with developers to both a) identify strategies for those developers to effectively access your application, and b) identify ways in which your application or future editions of it can be made more accessible.
This latter point is essential in the development of new versions of application programs. As mentioned above, discovering an incompatibility with access software at the beta testing stage is too late. The incompatibilities that arise with access software typically occur at a fundamental architectural or structural level in the application, so by the time beta testing occurs it is usually too late to do anything about accessibility problems. On the other hand, software is usually not available for testing until it is substantially complete. Ensuring the future accessibility of software products therefore depends heavily on interchange and communication between the software development team at the application manufacturer and the third-party access product developers. Through this interaction, as well as through documents such as this one, application software developers can begin to identify the kinds of things that do or might cause accessibility problems. They can then contact the third-party assistive device manufacturers and explore ways to circumvent these problems.
This document is hosted on the Trace R&D Center Web site. Please visit our home page for the latest information about Designing a More Usable World - for All.