Java Accessibility
Preliminary Examination
WORKING DRAFT

Version 2.0
March 14, 1997

Report commissioned by
Sun Microsystems, Incorporated

Prepared by
Trace R&D Center
University of Wisconsin-Madison
http://trace.wisc.edu/

This report is available on-line at http://trace.wisc.edu/docs/java_access_rpt/report.htm
It is also available in electronic form (HTML and ASCII) on request (608-262-6966).



Trace Center Java Accessibility Team:

Wendy Chisholm
Chuck Illingworth
Mark Novak
Gregg Vanderheiden

Acknowledgments

We would like to express appreciation to the following people for their contributions.

Jon Gunderson
Earl Johnson
Peter Korn
Chuck Oppermann
Will Walker



Contents

Introduction

Part I: Disability Access Information

Part II: Java-Specific Recommendations

Appendix A: A Sample of Common Interaction Problems between Screen Readers and Applets

Appendix B: References



Introduction

PURPOSE

This report outlines the disability access issues surrounding the use of Java, and suggests modifications to the AWT to increase the accessibility of applications and applets written with Java.

Although the focus of the project is the AWT, a complete solution involves virtual machines, development environments and applet/application developers. Therefore, these aspects have also been covered, albeit in lesser detail.


DOCUMENT STRUCTURE

Information on specific topics is covered in individual sections to make it easy to reference. Although we tried to keep redundancy to a minimum, a certain amount was required to make individual sections stand on their own.

The sections of this report are:

Part I: Disability Access Information

Part II: Java-Specific Recommendations

Appendices



Part I
Disability Access Information

A) What does accessibility mean? (in brief)

Two levels of accessibility (operable and usable)

Basically, saying that a system is accessible to people with disabilities means that people with disabilities can operate the system; full accessibility means that they can also use it effectively.


Accessibility is not an absolute

In reality, it is difficult to design a system which an individual with severe limitations can use as efficiently as someone with full sensory, physical, and cognitive resources. Furthermore, we don't know how to make some functions or capabilities accessible to people with certain severe disabilities or combinations of disabilities. However, the first mistake made by most designers is not realizing that most functions and programs can be operated very efficiently and effectively by individuals with a wide range of disabilities if they are properly designed. In fact, many individuals who are blind (when given good access techniques) are able to access and use their computers, and even graphical user interfaces, faster and more efficiently than many of their "fully able-bodied" colleagues.

The second mistake is believing that making systems accessible is hard to do. Although it seems difficult at first because a person needs to familiarize themselves with new information and techniques, once the ideas are mastered it is not any more complicated than creating multi-modal (or alt-modal -- see "nomadic" below) interfaces for fully able-bodied users. In fact, if you create truly multi-modal interfaces for software, you will largely have accessible applications, and vice versa.


Alt-Modal and Alt-Input

Alt-Modal

Alt-modal refers to the ability of a system or device to operate in alternate sensory modalities. For example, a system which can be operated entirely with vision (e.g., without requiring any hearing) but can also be operated entirely auditorally (without requiring any vision) would be an example of an alt-modal system. The term is coined to differentiate this type of behavior from multi-modal behavior, which is sometimes defined in the same fashion as alt-modal but more often is defined as meaning the use of multiple modalities simultaneously (that is, a system which requires both hearing and vision in order to operate it). A fully alt-modal device/system could be operated using vision alone, hearing alone, or touch alone.

In order to do this, the system would have to present all information redundantly through all three senses, or have separate modes of operation, each requiring only one of the three principal sensory channels.

Alt-Input

A parallel concept to that of alt-modal systems would be alt-input.

Alt-input refers to the ability of a device/system to be controlled through either simple verbal or direct manipulation interfaces. "Simple verbal" refers to an interface that can be controlled entirely via unformatted ASCII text. "Direct manipulation" refers to an interface that does not require the user to input any text or numbers.

An alt-modal, alt-input device provides for a very wide range of input and output flexibility. It supports a complete speech input and output interface, but can also be used in a completely silent format.


Accessibility = alt-modal = nomadic applications

Making systems accessible to people with disabilities involves creating systems that can be used without vision, without hearing or without full range of physical ability. These strategies directly parallel nomadic uses of computers. For example, if you're using your computer to access a database while driving a car, you need to be able to do it without vision. Trying to use a system while you are in a quiet environment (a library or in a meeting), or in a noisy one (in a factory, shopping mall, or airplane) means that you need to operate it without being able to hear it. Operating a system while walking may preclude the use of a fine pointing device, and perhaps even a keyboard. Creating programs which are alt-modal (you can choose between different alternate modalities) and alt-input (you can choose between different input techniques) is an important first step in creating systems which are nomadic.


Intelligent agent accessibility

Similarly, creating programs and materials which are more compatible with assistive technologies makes them more usable by intelligent agents, since both rely on being able to access and operate a program externally.


High priority that development tools support alt-modal, alt-input structures

An important priority is therefore the creation of operating systems and tools which allow the development of alt-modal, alt-input programs. Not all programs will be used in a nomadic setting but it is critical that the underlying structures and tools for Java support alt-modal, alt-input applications so that Java can support nomadic applications as well as accessibility.


Not all applications can be made accessible to all people

Not all applications will be nomadic, and not all applications will be accessible to all individuals. For example, it is difficult to see how a program that simulates water color painting could be made accessible to people who are blind, or someone driving a car.

However, for the majority of information systems, alt-modal and alt-input strategies can be applied. Wherever these strategies cannot be applied, individuals with disabilities will have difficulty participating in educational, employment, or daily living environments that require the use of such systems.


Direct access versus access through assistive technologies

It is important to distinguish between "direct" or "built-in" accessibility and "compatible" or "accessible via assistive technology."

Examples of built-in accessibility would include most of the features in the Easy Access control panel on a Macintosh or the Accessibility control panel in Windows 95. These features are built into the operating system to allow people with disabilities to use it directly by making its input interface more flexible.

An example of compatibility would be the SerialKeys feature in the Accessibility control panel on Windows 95. This feature allows people with disabilities to connect external assistive technologies which they can use in place of the keyboard and mouse. Another example of compatibility or access via assistive technology would be Active Accessibility being built into Windows 95 applications. Active Accessibility does not make the program more directly accessible, but makes a program more easily interpretable by assistive technology software such as screen readers by providing semantic information and the ability to operate the program from other software.

NOTE: It is important to note that the Telecommunications Act of 1996 clearly makes a distinction between these two types of accessibility. In fact, it defines the first type as being accessibility, and the second (compatibility) as being an alternate strategy when direct accessibility is not achievable. Basically, the legislation says that companies need to make telecommunication products (directly) accessible whenever it is readily achievable. When it is not readily achievable, companies are to make the products compatible with devices commonly used by people with disabilities to access telecommunication products (when readily achievable). This is noted here because many Java applications will be telecommunicative in nature and will fall under the Telecommunications Act provisions.

In this report, both strategies for making products directly accessible and strategies for making them compatible with assistive technologies (screen readers, alternate keyboards, etc.) will be discussed.


Type I, Type II and Type III applications/programs

In looking at application programs, we have sometimes found it useful to divide them into three categories.

Type I - DIRECTLY ACCESSIBLE - are programs which can be directly used by people with disabilities. They have disability access features or flexibility built in to the standard program and can be used without needing assistive technologies.

Type II - ACCESS FRIENDLY - are programs which are not directly accessible but have been designed so that they work well with (or are "compatible" with) assistive technologies (software or hardware).

Type III - UNFRIENDLY - are programs where the programmers were either unaware or did not choose to make their programs directly accessible or compatible. The only way for people with disabilities to use these programs is if their assistive technologies are clever enough (and the program straightforward enough) that the assistive technologies can allow the user to "hook into" or "trick" the program into working with the assistive technologies.

Most programs today are Type III. Most programs that are designed to be accessible to people with all disabilities will turn out to be a combination of Type I and II. They may be directly usable by people with some disabilities (or maybe even most disabilities) but will probably have to rely on assistive technologies for people with some types of disabilities - especially severe or multiple disabilities like deaf-blindness, where an expensive dynamic braille display may be required.

In developing the tool kits (e.g., Java AWT) and development environments, the goal should be to make it as easy as possible for developers to create programs that are Type I or Type II rather than Type III.



B) What makes an application or applet accessible? (in brief)

Low vision: Allowing the user to utilize residual vision by 1) providing the ability to enlarge text and images, 2) using larger, clearer fonts, 3) maintaining good figure/background contrast, and 4) avoiding low-contrast or complex background patterns beneath text or important graphics. Also important is the ability to locate the point of focus and to follow it if it moves when the screen is zoomed in on. Avoiding the presentation of information in color alone is important for those with color blindness. (See also Blindness.)

Blindness: Access consists of providing all information that is presented visually in alternate forms that make use of the auditory and tactile modalities. For built-in access, synthesized speech is the form usable by the greatest number of individuals who are blind. Making the information available in electronic text allows translation to synthesized speech or Braille. For complete access, all information that is presented visually must be available including text, descriptions of graphics, spatial or semantic relationships between objects, etc. The individual must also be able to operate input and control systems without vision - meaning that mouse clicks or movements are not required. (Is the product usable by a blindfolded individual unfamiliar with the device and program?)

Deaf-blindness: It is rarely possible to build in access for individuals who are deaf-blind. Provision of information externally in electronic text form allows the use of dynamic braille.

Hard of Hearing: The goal here is to maximize the ability to use residual hearing. Residual hearing can be maximized by 1) allowing the volume to be adjusted, 2) reducing or eliminating background noise when important information is presented auditorally in the foreground, 3) allowing the connection of external audio amplifiers via a speaker or headphone jack (largely a hardware consideration) and 4) presenting information visually as well. (See also Deafness.)

Deafness: Primarily this involves providing a visual representation of any information presented auditorally, including captions for any speech.

Physical Disability: Physical abilities vary widely. The general approach here is to avoid requiring fine motor control, strength and reach. The best general strategy is to provide keyboard access to all actions and functions. (Drop the mouse behind the desk and try to use the program.)

Cognitive and Language: Use simple, straightforward layouts and provide the ability to have words read aloud on command. Layer commands and options.


Built-In Cross Disability Accessibility (Type 1)

A system that has built-in cross disability access would have the minimum following characteristics:

  1. an audio output only mode,
  2. a visual output only mode,
  3. fully operable using keyboard only (for both modes 1 and 2),
  4. straightforward and obvious in operation, and
  5. all actions are reversible or require confirmation.


Minimum Requirements for Compatibility with Assistive Technologies (Type II)

(For a more complete discussion of access specifications for different products, see Designing an Accessible World.)


Implications for Java

To support accessibility, the Java structure and development tools must support and promote the above. Strategies for doing this include (but are not limited to):



Part II
Java-Specific Recommendations

A) Recommendations for changes to the Java AWT

B) Other and future issues to be investigated

C) Issues for Tools and Development Environments

D) Applet and Application Developer Guidelines



A) Recommendations for changes to the Java AWT to increase the accessibility of applets and applications:

Object Attributes (OA)


Orientation and Focus (OF)


Keyboard Enhancements (KE)


B) Other and future issues to be investigated:

Java Virtual Machine Implementations


JavaOS (JOS)


Audio Package


Layout Managers (LM)


Areas for Further Research (FR)


C) Issues for Tools and Development Environments


D) Applet and Application Developer Guidelines



A) Recommendations for changes to the Java AWT

Object Attributes (OA)

This first group of requirements centers around object attributes. Most of these attributes are required for compatibility with screen readers and other hardware and software assistive technologies (AT).

None of them are required for providing built-in accessibility, since the person writing the application knows its function and therefore can build accessibility in using any number of software strategies. However, most of the information exposed by these requirements would be needed for built-in accessibility. Furthermore, attaching the information to the objects in the fashion described here can greatly facilitate building accessibility in, as well as allowing for compatibility with hardware and software assistive technologies.

The ten object attribute recommendations are:



OA1: Expose the textual content of all objects

Issue

There are several instances where exposing the textual content of an object is needed. One instance is a custom control that has been created by drawing to a canvas, or a dialog window where the text is not always visible to an AT. Part of this problem may stem from Virtual Machines not generating window-creation events or not using system drawing tools, and may disappear. The other instance is the use of images containing bitmap text, where the text is part of the graphic and an "alt-text" of sorts is therefore needed.

What is currently provided in the AWT?

What is needed?



OA2: Expose attributes of text (font style and size, graphical symbols, color, orientation and direction)

Issue

If semantic information is not explicitly defined and/or exposed in an application, an AT may need to infer it. For example, text displayed as blue and underlined is typically a hypertext anchor. In cases where a screen needs to be resized, manipulating the text attributes properly could keep the presentation readable.

What text attributes are provided in the AWT?

What is needed?



OA3: Expose the current state or value of an object as well as its possible other states (if analog, provide digital)

Issue

When using an AT, it is difficult enough to discover what objects exist, much less their states or values. In the case of an on-screen thermometer applet, the user needs to know where the mercury indication is in relation to the lines marking degrees. The state of a checkbox (checked or not) is available, but scrollbars are usually identified as "graphics," and thus no state is attached to them. The interface java.awt.Adjustable (JDK 1.1 beta3) seems to have the necessary information for objects that have adjustable numeric values (which change on either a horizontal or vertical plane) but not for objects whose values are qualitative (red, green, blue).
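
One way a custom control could expose its value today is to implement the JDK 1.1 java.awt.Adjustable interface itself. The sketch below is an illustration under that assumption - the Thermometer class, its default range, and its rendering are all invented for the example:

    import java.awt.*;
    import java.awt.event.*;

    // Hypothetical custom control: a thermometer drawn on a Canvas that
    // exposes its reading through java.awt.Adjustable instead of as
    // pixels only. Class name, range, and rendering are invented.
    public class Thermometer extends Canvas implements Adjustable {
        private int value = 20, min = 0, max = 100;
        private int unit = 1, block = 10;
        private AdjustmentListener listeners;   // multicast chain

        public int getOrientation()         { return Adjustable.VERTICAL; }
        public int getMinimum()             { return min; }
        public void setMinimum(int m)       { min = m; }
        public int getMaximum()             { return max; }
        public void setMaximum(int m)       { max = m; }
        public int getUnitIncrement()       { return unit; }
        public void setUnitIncrement(int u) { unit = u; }
        public int getBlockIncrement()      { return block; }
        public void setBlockIncrement(int b){ block = b; }
        public int getVisibleAmount()       { return 0; }
        public void setVisibleAmount(int v) { }
        public int getValue()               { return value; }

        public void setValue(int v) {
            value = Math.max(min, Math.min(max, v));
            repaint();
            if (listeners != null)  // announce the change as a semantic event
                listeners.adjustmentValueChanged(new AdjustmentEvent(
                    this, AdjustmentEvent.ADJUSTMENT_VALUE_CHANGED,
                    AdjustmentEvent.TRACK, value));
        }

        public void addAdjustmentListener(AdjustmentListener l) {
            listeners = AWTEventMulticaster.add(listeners, l);
        }
        public void removeAdjustmentListener(AdjustmentListener l) {
            listeners = AWTEventMulticaster.remove(listeners, l);
        }

        public void paint(Graphics g) {   // purely visual rendering
            int h = getSize().height;
            int range = Math.max(1, max - min);
            int mercury = (value - min) * h / range;
            g.fillRect(0, h - mercury, getSize().width, mercury);
        }
    }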

What states are currently provided in the AWT?

What states are needed?



OA4: Expose the type (or role) of an object

Issue

In the visual display, buttons, checkboxes, and scrollbars have very distinct appearances. These appearances help the user to grasp the type of object and its role in the display. As we access the same components, or types of components, in multiple modalities, we need to standardize how the functionality is identified (see Kramer 1994 and ICAD 1996 for more information on auditory displays). Exposing the role and type of an object is a first step. Currently, screen readers seem to be able to recognize that all of the basic components exist, but may not recognize them correctly (for example, some scrollbars are identified as graphics).

What types (or roles) are currently provided in the AWT?

Note: These components exist but their roles may not be exposed:

What types are needed?

Custom components and containers need to identify their type and role. If a custom component created with a Canvas is acting as a Button, Button should be exposed as the type/role of the object.

Therefore, these are classes to be added to the AWT. Roles need to be exposed by VMs, and developers need to subclass or implement classes that carry these roles/types.
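
As a hypothetical illustration of what such a role attribute might look like (the AWT defines no such interface; the names here are invented):

    // Hypothetical interface -- not part of the AWT -- through which a
    // component could declare the role it plays in the interface.
    public interface AccessibleRole {
        String getRole();   // e.g. "button", "scrollbar", "checkbox"
    }

    // A custom control drawn on a Canvas but acting as a button would
    // report "button" rather than being seen by an AT as a plain graphic.
    class CanvasButton extends java.awt.Canvas implements AccessibleRole {
        public String getRole() { return "button"; }
    }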



OA5: Expose additional semantic and contextual information for objects

Issue

Currently, most Java applets and applications convey information about the meaning or use of an object via its visual appearance. For individuals using assistive technologies, this information is not necessarily available. Exposing the role (OA4) is a start, but additional information is often needed.

In addition to enhancing accessibility, making the semantic information available in text form will also facilitate:

What semantic information is currently provided?

What needs to be added to the AWT?

Descriptions of:

These semantic information attributes for objects will then need to be exposed by the VM. Developers will need to fill this information in, but tools could be developed with libraries of descriptions and best guesses to help ease the process as well as give some standardization.

Examples:
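
As a hypothetical illustration (the attribute names are invented; the AWT defines no such API), a description attribute for the balloon applet discussed under OA8 might look like:

    // Hypothetical description attributes -- invented names, not AWT API.
    public interface SemanticDescription {
        String getName();          // short label, e.g. "Balloon"
        String getDescription();   // what the object is for and how it behaves
    }

    class BalloonCanvas extends java.awt.Canvas implements SemanticDescription {
        public String getName() { return "Balloon"; }
        public String getDescription() {
            return "A balloon containing bouncing atoms. Raising the "
                 + "thermometer speeds the atoms; too much heat pops it.";
        }
    }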



OA6: Provide a grouping component that will semantically group multiple objects to create a single semantic unit

Issue

Often, multiple objects will be perceived visually as a single semantic idea. For example, several lines, a block of color, and some text would be perceived as a thermometer or thermostat. Another example is a label and its corresponding datafield.

"Grouping" objects which are currently provided in the AWT

What is needed?

A mechanism to semantically group a number of objects into a single semantic unit so that they are presented and can be described as such.

It is not clear that Panel and Canvas could be modified to handle all of the cases that might arise, even if all of the previously mentioned object attributes are added. If they cannot, a separate semantic group object would be required.

This topic is tied to the concept of a Semantic Manager which would keep track of how many semantic entities are present on the display both to reduce cognitive load and to provide a simplified display of the applet/application.
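
A sketch of one possible direction, reusing the hypothetical SemanticDescription interface from OA5: a Panel subclass whose children are presented as a single named, described unit. This is an illustration, not a proposed final design:

    import java.awt.*;

    // Hypothetical semantic group: a Panel that presents its children
    // as a single unit with one name and one description.
    public class SemanticGroup extends Panel implements SemanticDescription {
        private String name, description;

        public SemanticGroup(String name, String description) {
            this.name = name;
            this.description = description;
        }
        public String getName() { return name; }
        public String getDescription() { return description; }
    }

    // Usage: lines, color, and text that read as one thermometer.
    //   SemanticGroup therm = new SemanticGroup("Thermometer",
    //       "Shows the current temperature from 0 to 100 degrees.");
    //   therm.add(new Label("100 --"));
    //   therm.add(mercuryCanvas);   // assumed custom Canvas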



OA7: Provide the ability to break up multi-semantic objects

Issue

Some objects may fill multiple semantic roles, making it difficult to describe them with a single name or "role": for example, a control strip which functions like an image map with hot spots or a "navigation ball" which allows you to manipulate an image six different ways depending upon how you click on or drag around on the ball. In both of these cases, there is a single object with different functions that need to be described and activated individually.

What is provided?

What is still required?



OA8: Provide a semantic manager capability

Issue

A list of what is meaningful on a screen is not always the same as a list of the objects that are displayed on the screen, nor is a text transcript the same as the information provided by the nuances of speech in an audio clip. A Semantic Manager would differ from a Layout Manager in that it would provide a way to track, navigate and activate objects on a semantic rather than visual layout basis. This capability is not only important for disability access, but also for the creation of nomadic systems, which will require that systems be operable via different sensory modalities (e.g., via speech when one is driving the car, via visual display and keyboard when participating in a meeting, etc.).

What changes to the AWT are required?

Role of the Semantic Manager

A Semantic Manager would contain the semantics of the information of the application. We envision one manager per virtual machine that would keep track of the following information:

Current semantic information that is part of the Event Delegation model:

Thus, it seems that the basic framework is in place for definitions of semantic events to be added and exposed. What information could be associated with these events? It might be necessary to include that object A's state has changed from X to Y. Alternatively, the event messages already provided could become more complete. For example, "do a command" could become more specific: look up a URL, contact the host, download an applet, etc.

Why create it?

Currently most of the interpretation of visual information is handled by individual screen reader off-screen models. To facilitate direct accessibility, a semantic manager could be an integral part of the program providing the semantics up front so that Assistive Technologies do not have to guess. If the semantic attributes are added to objects, the Semantic Manager could keep track of these. As an application/applet moves between platforms, the modalities the objects are presented in might change, thus the manager could take care of ensuring proper changes in display.

A Semantic Manager could keep track of how many resources are present, and present a simplified view of the applet/application to reduce cognitive load. A similar idea is Metawidgets in Polymestra (Glinert and Wise, 1996), which calculate the user's available cognitive resources and take into account user display preferences before selecting the modality in which to present themselves.

Balloon Applet Example

A balloon sits on the left side of the screen. On the right side of the screen is a thermometer. Using the mouse, you can raise or lower the temperature of the thermometer. As you raise and lower the temperature, the atoms bouncing around inside the balloon change. As the temperature gets very hot, some of the atoms actually leak out through the balloon. If you raise the balloon temperature too high, the balloon pops.

The Balloon Applet Example formed the basis of a discussion about grouping complex objects into single semantic units. If objects are grouped into semantic units, each object will be able to expose its function and other semantic information, which is needed for objects whose semantic information changes, as in the balloon applet. The thermometer will change states, causing updates in the displays of the balloon and graph. These objects could be queried again and again for their changes - but what if they don't always change? Instead of asking each object what has changed, the user could query the semantic manager for what has changed, as well as for the semantics of those changes, or ask to be notified of changes.
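
A sketch of the query model described above, with all names invented for illustration - objects report their own semantic changes, and a user or AT drains the pending changes instead of polling every object:

    import java.util.*;

    // Hypothetical Semantic Manager sketch: tracks which semantic units
    // have changed so an AT can ask "what changed?" instead of polling
    // every object. All names here are invented for illustration.
    public class SemanticManager {
        private Vector changed = new Vector();  // changes since last query

        // Objects report their own semantic changes here.
        public synchronized void reportChange(Object unit, String meaning) {
            changed.addElement(unit + ": " + meaning);
        }

        // An AT (or the application itself) drains the pending changes.
        public synchronized Enumeration changesSinceLastQuery() {
            Vector v = changed;
            changed = new Vector();
            return v.elements();
        }
    }

    // e.g. manager.reportChange(balloon, "atoms leaking: temperature high");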



OA9: Expose semantic and contextual information for events (changes in object and applet/application attributes)

Issue

Sighted users can often easily infer what data has changed on the screen and why. For users who cannot see the display, the semantic meaning of changes in visual characteristics needs to be explicitly exposed. Likewise, this information is needed in visual form for auditory events and changes in auditory objects.

What is provided?

Developers must specifically create methods that notify users of changes in display, other than for changes in:

What event messages are needed?

Event messages that indicate changes in display:



OA10: Provide support to allow users to make changes to visual and auditory display attributes (text font, size and color, ShowSounds on/off, timing duration, etc.) or honor user preferences maintained by the system/browser/vm

Issue

For individuals who are color blind, the ability to select the colors used for all aspects of the screen is helpful. If a "high contrast mode" exists on the platform, the user should be able to select it. Likewise, users who have lost the ability to detect certain ranges of frequencies will want to shift the auditory output to frequency ranges and volumes that are audible.

Many users with low vision can use an application without a screen enlargement program, provided the application allows users to adjust the font size and size of objects. Most users will appreciate being able to adjust the font size as a way to reduce eyestrain.

Users with hearing loss or deafness will require that system sounds be displayed visually (SoundSentry) and that applications display any auditory information visually, such as captions for speech (ShowSounds).

Programs requiring time-dependent responses should have provision for the user to adjust the time over a wide range, or have a non-time-dependent alternative method.

How are preferences currently supported in Java?

What is needed?



Orientation and Focus

This second group of requirements identifies issues with managing changes in focus. Again, most of this information needs to be exposed for compatibility reasons.



OF1: Expose what object has current focus

Issue

Changes in focus identify what object(s) the user is currently manipulating. In most GUI applications the focus follows the cursor or caret. In applications that do not have visual displays, or that are translating the visual to an auditory display, focus attributes need to be made available to allow proper translation. This includes component focus, window focus and text insertion point focus, in both mouse and mouseless navigation modes.

What is provided?

Both of these are new to JDK 1.1 and need further investigation.
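
One relevant JDK 1.1 mechanism is the FocusListener interface. A minimal sketch of tracking the focused component (the container traversal and logging are illustrative):

    import java.awt.*;
    import java.awt.event.*;

    // Track which component currently has the focus (JDK 1.1 listener model).
    public class FocusTracker implements FocusListener {
        private Component current;   // the component with focus, if any

        public void focusGained(FocusEvent e) {
            current = e.getComponent();
            System.out.println("Focus: " + current.getName());
        }
        public void focusLost(FocusEvent e) {
            if (current == e.getComponent()) current = null;
        }

        // Attach the tracker to every component in a container.
        public void watch(Container c) {
            Component[] kids = c.getComponents();
            for (int i = 0; i < kids.length; i++) {
                kids[i].addFocusListener(this);
                if (kids[i] instanceof Container) watch((Container) kids[i]);
            }
        }
    }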

What is needed?



OF2: Identify and control location of pointer cursor

Issue

When an application uses its own method of indicating the visual focus within its window, such as highlighting a cell in a spreadsheet, the system is most likely not aware of what has focus. It is therefore not able to move the system cursor to the location of focus.

What is provided?

What is needed?



OF3: Expose the window and object hierarchy

Issue

Expose all window creations, destructions, moves, reparentings, visibility changes, etc. This includes tracking all off-screen bitmaps so that an AT can properly navigate through the hierarchy if needed. For example, the parent or child of the current object may need to be identified (especially if the current object is a window) to determine which object has focus or which can take the focus if some object is not currently focusable.

What is provided?

What is needed?



OF4: Identify which cursor is in use and which are available in an active window

Issue

The type of cursor in use provides semantic information about the system state or an object's role. A ticking-clock cursor often indicates that the user should wait while the system processes something, and the pointer changing to a hand over a web page indicates that the text under the cursor is a link.

What is provided?

What is needed?

Possible solutions

Since this is a visual notification of a semantic change, the SemanticManager could update the semantic model in the proper modality for the current display.



OF5: Allow screen updating to be suspended for user analysis and response

What is needed?

As a user of an AT works their way down a page, a constantly updated text stream may no longer match material read earlier on the page. For example, with circulating billboards of sports scores, the Bulls may end up leading the Bears 100-7. Allowing the user to suspend screen updates would let them control how often the page is updated. A similar example is the pause function in games and voice messaging systems, which allows users to run to the bathroom or to finish writing down a phone number.

Possible solutions



OF6: Provide the ability to focus within (and manipulate) multi-semantic objects

Issue

We use the term multi-semantic objects to mean objects that contain "hot spots" or which otherwise behave differently depending on the user input (e.g., a spherical control which provides different output if it is clicked, stroked, rubbed, etc.). The hot spots can be considered virtual objects. Just as some objects need to be grouped into a single semantic object to take the focus once, others need to have multiple focus points. Imagemaps are an example.

What multi-focus objects exist in the AWT?

An ImageMap class is not part of the AWT, but is easily derivable. Flanagan (1996) has an excellent example. The image is able to gain focus as a whole, but the sub-objects cannot be tabbed to.

What is needed?

A mechanism to allow multiple focus points or objects to be defined so that multiple actions can be taken from a single object. Semantic information, such as name, type, role and description, should be provided for each virtual object; see OA7 (Provide the ability to break up multi-semantic objects) for the changes required in the AWT to attach semantics to virtual objects.
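
A sketch of how focus within hot spots might behave; the AWT provides no such mechanism, so this hypothetical Canvas keeps its own list of named rectangles, cycles an internal focus point among them with the arrow keys, and activates the current one with Enter:

    import java.awt.*;
    import java.awt.event.*;
    import java.util.Vector;

    // Hypothetical image map whose hot spots can be reached without a
    // mouse. The right-arrow key (while the canvas has focus) cycles
    // through the spots; Enter activates the current one.
    public class KeyboardImageMap extends Canvas implements KeyListener {
        private Vector spots = new Vector();   // Rectangles
        private Vector names = new Vector();   // matching String names
        private int focusIndex = -1;

        public KeyboardImageMap() { addKeyListener(this); }

        public void addSpot(String name, Rectangle area) {
            names.addElement(name);
            spots.addElement(area);
        }

        public boolean isFocusTraversable() { return true; }

        public void keyPressed(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_RIGHT && !spots.isEmpty()) {
                focusIndex = (focusIndex + 1) % spots.size();
                System.out.println("Hot spot: " + names.elementAt(focusIndex));
                repaint();
            } else if (e.getKeyCode() == KeyEvent.VK_ENTER && focusIndex >= 0) {
                activate((String) names.elementAt(focusIndex));
            }
        }
        public void keyReleased(KeyEvent e) { }
        public void keyTyped(KeyEvent e) { }

        void activate(String name) { /* application-specific action */ }

        public void paint(Graphics g) { // draw a focus ring on the active spot
            if (focusIndex >= 0) {
                Rectangle r = (Rectangle) spots.elementAt(focusIndex);
                g.drawRect(r.x - 2, r.y - 2, r.width + 4, r.height + 4);
            }
        }
    }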



Keyboard Enhancements (KE)

Most of these issues provide mechanisms to support mouseless navigation. These primarily support compatibility but, if integrated into JavaOS in the future, can also help provide direct accessibility.



KE1: Provide support for keyboard access to all objects, menus, and windows

Issue

Making all aspects of the program, including menus, dialogs, palettes, etc., operable from the keyboard significantly increases accessibility for many users. As Java applications appear on multiple platforms, some of which have various types of input devices (keyboard, voice, etc. -- i.e., no mouse), this becomes beneficial to everyone.

What support is provided by the AWT?

What is needed?



KE2: Provide keyboard equivalents for mouse operations

Issue

Since we cannot assume that all users or applications will be using a mouse, alternative provisions for mouse operations need to be implemented.
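
A sketch of one such provision on a custom control, using the JDK 1.1 listener model: the same action fires from either a mouse press or the Enter/space keys. The action itself is illustrative:

    import java.awt.*;
    import java.awt.event.*;

    // Custom Canvas-based button that can be operated by mouse OR keyboard.
    public class ClickOrKeyButton extends Canvas
            implements MouseListener, KeyListener {
        public ClickOrKeyButton() {
            addMouseListener(this);
            addKeyListener(this);
        }
        public boolean isFocusTraversable() { return true; } // reachable by Tab

        public void mousePressed(MouseEvent e) {
            requestFocus();
            doAction();
        }
        public void keyPressed(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_ENTER
                    || e.getKeyCode() == KeyEvent.VK_SPACE)
                doAction();
        }
        void doAction() { System.out.println("activated"); } // illustrative

        // Unused listener methods required by the interfaces.
        public void mouseClicked(MouseEvent e) { }
        public void mouseReleased(MouseEvent e) { }
        public void mouseEntered(MouseEvent e) { }
        public void mouseExited(MouseEvent e) { }
        public void keyReleased(KeyEvent e) { }
        public void keyTyped(KeyEvent e) { }
    }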

What is provided?

What is needed?



KE3: Do not override built-in access

Issue

Applications/applets and VM implementations should not override built-in access utilities (StickyKeys, MouseKeys, the keyboard response group [SlowKeys, BounceKeys, RepeatKeys, etc.], and ToggleKeys). Instead, they should respond like normal applications and not provide their own key-repeat (as WordPerfect does, for example). If they do, they need to allow deactivation. (See also JavaOS, in Other and Future Issues.)

What is provided?

We have not performed extensive tests, but so far we have not run into a problem with this.

What is needed?

If an application does override built-in access utilities, it needs to allow deactivation.



B) Other and future issues to be investigated to increase the accessibility of applets and applications

Java Virtual Machine Implementations [JVM]

Java OS - Initial Notes [JOS]

Audio Package - Initial Notes [AP]

Layout Managers [LM]

Areas for Further Research [FR]



Java Virtual Machine Implementations (JVM)

Several of the items discussed in the previous section (recommended changes to Java to increase the accessibility of applets/applications) assume that changes are made to the AWT to support semantic information, that developers provide this information, and that virtual machine implementations expose it to an AT or use system resources (which then expose the information to an AT). With custom controls, a developer will need to properly implement the semantic information attributes. An improper implementation, or no implementation, does not necessarily make an application inaccessible, but it does make it less usable. The basic issues that should be handled by Virtual Machines are:



Java OS - Initial Notes



AUDIO PACKAGE (AP) - Initial Notes



AP1: Provide compatibility with standards for decoding closed captioning signals from QuickTime, AVI and MPEG video encoding techniques



AP2: Provide support for CD quality audio

Auditory displays created with Java are limited to basic playback of prerecorded/rendered audio clips (unless one knows how to create Pulse Code Modulated data on the fly - creating an integer array that is copied to the AudioDevice and played - see Meijer's The_vOICe applet). As applets/applications are run on platforms without rich visual display capabilities, audio will become more necessary. As this need increases, developers will require more flexible audio possibilities, such as the ability to manipulate attributes of audio like frequency, spatial location, timbre, etc. (See Kramer, 1994 for several articles on auditory display.)
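
The on-the-fly technique mentioned above relied on undocumented sun.* classes whose availability and exact signatures vary by VM, so the following sketch is an assumption-laden illustration: it generates one second of a 440 Hz tone as 8 kHz mu-law bytes and hands them to the sun.audio player:

    // Assumption-laden sketch: sun.audio is undocumented and VM-specific.
    import sun.audio.*;

    public class ToneDemo {
        // Standard linear-to-mu-law conversion (8-bit samples, as expected
        // by the 8000 Hz audio device of the era).
        static byte linearToUlaw(int sample) {
            final int BIAS = 0x84, CLIP = 32635;
            int sign = (sample < 0) ? 0x80 : 0;
            if (sign != 0) sample = -sample;
            if (sample > CLIP) sample = CLIP;
            sample += BIAS;
            int exponent = 7;
            for (int mask = 0x4000;
                    (sample & mask) == 0 && exponent > 0; mask >>= 1)
                exponent--;
            int mantissa = (sample >> (exponent + 3)) & 0x0F;
            return (byte) ~(sign | (exponent << 4) | mantissa);
        }

        public static void main(String[] args) {
            byte[] buf = new byte[8000];            // one second at 8000 Hz
            for (int i = 0; i < buf.length; i++) {  // 440 Hz sine wave
                int s = (int) (Math.sin(2 * Math.PI * 440 * i / 8000.0) * 16000);
                buf[i] = linearToUlaw(s);
            }
            AudioPlayer.player.start(new AudioDataStream(new AudioData(buf)));
        }
    }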

What does Java currently support?

What does Java need to support?



AP3: Provide support to manipulate sound files or create sounds "on the fly"

What methods are provided in JDK1.0?

What is needed?



Layout Managers (LM)



LM1: Provide support for flexible screen layouts (for magnification or translation between languages and cultures)

Magnification: If the user increases font size, the layout may become unreadable, or less readable. Perhaps layout managers or a new layout manager could handle magnification issues such as resizing components to maintain readable, logical layouts.

Translation between languages and cultures: Other cultures will be reading the screen right to left or upward, instead of left to right and downward. Labels on components may become longer or shorter depending on the translation of the text label. For example, "No" buttons might need to be a bit larger if translated to German ("Nein").

These changes will inevitably affect the layout of the display. Although these are two very different problems, it seems they could both be handled by a LayoutManager of some sort. However, this would require the development of heuristics covering a large number of combinations of languages.
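
One small piece of the magnification problem can be handled with the current AWT: after a font change, the container must be asked to lay itself out again so components grow to their new preferred sizes. A sketch (names and font choice are illustrative):

    import java.awt.*;

    // After enlarging the font, invalidate and re-lay-out the container
    // so that components resize to their new preferred sizes.
    public class Magnify {
        public static void setFontSize(Container c, int points) {
            Font f = new Font("SansSerif", Font.PLAIN, points);
            setFontDeep(c, f);
            c.invalidate();
            c.validate();   // the layout manager recomputes the layout
        }
        private static void setFontDeep(Component comp, Font f) {
            comp.setFont(f);
            if (comp instanceof Container) {
                Component[] kids = ((Container) comp).getComponents();
                for (int i = 0; i < kids.length; i++) setFontDeep(kids[i], f);
            }
        }
    }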



LM2: Provide support for automatic generation of basic descriptions of layout (if possible)

CardLayout Example:

The following description could be generated: "start button in north, west has text field, center has thermometer and graph." All of this information could be assembled from simple patterns of "label," "object type," and "alignment" - for example, label + object type + alignment ("start button in north"), alignment + object type ("west has text field"), and alignment + label + label ("center has thermometer and graph").

This would not increase the demands on the developer, but would be a function of the layout manager. In the (common) case that the developer does not use a layout manager, this information would either not be available or the developer would have to provide it. Since it is common for developers to use null layout managers, it seems necessary to create managers that are more flexible.
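
A sketch of one way a layout description could be recorded without new AWT classes: a helper that adds components to a BorderLayout container while accumulating the kind of summary shown above. The helper and its names are invented for illustration:

    import java.awt.*;

    // Hypothetical helper: adds components to a BorderLayout container
    // while recording a textual summary of the layout.
    public class DescribedContainer {
        private Container target;
        private StringBuffer summary = new StringBuffer();

        public DescribedContainer(Container target) {
            this.target = target;
            target.setLayout(new BorderLayout());
        }
        public void add(String alignment, String label, Component comp) {
            target.add(alignment, comp);
            if (summary.length() > 0) summary.append(", ");
            summary.append(label + " in " + alignment);
        }
        public String describe() { return summary.toString(); }
    }

    // Usage:
    //   DescribedContainer dc = new DescribedContainer(frame);
    //   dc.add("North", "start button", new Button("start"));
    //   dc.add("West", "text field", new TextField());
    //   System.out.println(dc.describe());
    //   // "start button in North, text field in West"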



AREAS FOR FURTHER RESEARCH (FR)



FR1: State, description and mouseless interaction with animations

How can we convey the semantic information of animations and then allow interaction with them?



FR2: Network Computers

Will we download AT software like other software or will access capabilities be built in?



FR3: System preferences for input devices and keyboard layout

As people use speech and handwriting input more often, users will want to set up their devices for their own personal needs. Systems will need to maintain user preferences for input devices such as speech recognition and handwriting recognition, as well as customized key assignments. How will these be implemented? What support is required in the AWT? What support is required in Java OS?



FR4: Delegation Event Model

Possible problems:

"The requirement to subclass a component in order to make any real use of its functionality is cumbersome to developers; subclassing should be reserved for circumstances where components are being extended in some functional or visual way (JDK 1.1 Documentation.)." If this is the case, many of the methods and object attributes defined in this report will disappear as objects are created and not subclassed. How then do we deal with this problem? An interface? But doesn't this become cumbersome also?

Possible opportunities:

Two types of events exist in this model: low-level and semantic. Semantic events represent the semantics of a user interface component's model. The semantic event classes defined by the AWT are: ActionEvent ("do a command"), AdjustmentEvent ("value was adjusted"), and ItemEvent ("item state has changed"). Thus, the basic framework exists to create definitions of semantic events. It would then be possible to include events that represent "item state has changed from x to y," and "do a command" could be extended to be more specific: look up a URL, contact the host, download an applet, etc.
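
The three semantic event types can already be observed through the JDK 1.1 listener interfaces; a minimal demonstration:

    import java.awt.*;
    import java.awt.event.*;

    // Minimal demonstration of the three AWT semantic event types.
    public class SemanticEvents {
        public static void main(String[] args) {
            Frame f = new Frame("Semantic events");
            f.setLayout(new FlowLayout());

            Button b = new Button("Do it");
            b.addActionListener(new ActionListener() {      // "do a command"
                public void actionPerformed(ActionEvent e) {
                    System.out.println("ActionEvent: " + e.getActionCommand());
                }
            });

            Checkbox c = new Checkbox("Option");
            c.addItemListener(new ItemListener() {     // "item state changed"
                public void itemStateChanged(ItemEvent e) {
                    System.out.println("ItemEvent: " + e.getStateChange());
                }
            });

            Scrollbar s = new Scrollbar(Scrollbar.HORIZONTAL);
            s.addAdjustmentListener(new AdjustmentListener() { // "adjusted"
                public void adjustmentValueChanged(AdjustmentEvent e) {
                    System.out.println("AdjustmentEvent: " + e.getValue());
                }
            });

            f.add(b); f.add(c); f.add(s);
            f.pack();
            f.show();
        }
    }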



FR5: Need to better understand implications of (if there are any):



FR6: See also OA6, OA7, OA8, OF5, OF6, JOS2, JOS4, AP3, LM1, LM2



C) Issues for Tools and Development Environments



D) Applet and Application Developer Guidelines (for version 1.0)

Incorporating the following design considerations can make Java applications/applets more accessible. These design considerations are based on version 1.0 of the Java AWT with some discussion of features new to 1.1. For a complete list of guidelines for software applications see Vanderheiden, 1994 (not all will be applicable due to Java's interpretation by VMs).



DG1: Provide information in more than one modality.

Images

To provide descriptions of images there are two possible methods:

To provide access to the functionality of an Image acting as a Button see DG5 (Providing mouseless navigation) and DG7 (Providing menu access to commands).

Sound

Provide text transcripts or descriptions of audio files that can be viewed without playing the audio file.

Scrollbars and custom components

The Scrollbar component and most custom components are identified by assistive technologies as graphics, the catch-all term for screen items they cannot identify. The role of these components should be made accessible. Values could be presented in a TextField or dialog box when the component is queried. Values could be adjusted from the TextField or by keyboard methods (e.g., arrow keys, page up/down), as in the sketch below.
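
A sketch of such a redundant presentation using the JDK 1.1 listener model - the scrollbar's value is mirrored in a TextField, and typing a number into the field (and pressing Enter) moves the scrollbar:

    import java.awt.*;
    import java.awt.event.*;

    // Redundant presentation: the scrollbar's value is mirrored in a
    // text field, and typing a number into the field moves the scrollbar.
    public class ScrollbarWithText extends Panel {
        Scrollbar bar = new Scrollbar(Scrollbar.HORIZONTAL, 50, 1, 0, 100);
        TextField field = new TextField("50", 4);

        public ScrollbarWithText() {
            add(bar);
            add(field);
            bar.addAdjustmentListener(new AdjustmentListener() {
                public void adjustmentValueChanged(AdjustmentEvent e) {
                    field.setText(String.valueOf(e.getValue()));
                }
            });
            field.addActionListener(new ActionListener() { // Enter pressed
                public void actionPerformed(ActionEvent e) {
                    try {
                        bar.setValue(Integer.parseInt(field.getText().trim()));
                    } catch (NumberFormatException ex) {
                        field.setText(String.valueOf(bar.getValue()));
                    }
                }
            });
        }
    }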



DG2: Trigger events by active user input rather than passive user actions (or provide a way to freeze or avoid events that are triggered passively).

Avoid using mouseEnter(), mouseExit(), gotFocus() or lostFocus() to trigger events. Merely moving the mouse into a region should not, by itself, trigger an event; an active user action, such as keyDown(), keyUp(), mouseDown(), mouseDrag(), or mouseUp(), should be required. The nature of the event will determine whether passive or active triggering is appropriate. If passive behaviors are unavoidable, then the user should be provided with a way to circumvent the action or to freeze the display so that they can perceive it.
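
Using the JDK 1.0 event methods named above, a sketch in which the action fires only on the active mouseDown(), never on the passive mouseEnter() (the status-line message is illustrative):

    import java.applet.Applet;
    import java.awt.Event;

    // JDK 1.0 event model: act on an explicit click, not on pointer entry.
    public class ActiveTrigger extends Applet {
        public boolean mouseEnter(Event e, int x, int y) {
            return true;   // deliberately no action on passive entry
        }
        public boolean mouseDown(Event e, int x, int y) {
            showStatus("Region activated at " + x + "," + y);
            return true;   // the user explicitly asked for the action
        }
    }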



DG3: Allow user modification of application/applet appearance and presentation modality.

Text Size

To make text more readable, the user should be able to adjust text size. The text of components [Label, Checkbox, Button, Dialog, Menu, Choice, TextArea or TextField] and text drawn with the graphics drawString() method have methods for altering the font and for obtaining the measurements needed to make the altered font fit the format [setFont(), the FontMetrics class, getMaxAscent(), getMaxDescent(), stringWidth()].

Increasing the text size may create formatting problems. To prevent this, the developer needs to force the layout. A good implementation of this was written by David Geary and discussed in "Answering frequently asked AWT questions," Java Report, February 1997, pp. 26-38.
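
A sketch of adjusting the size of text drawn with drawString(), using FontMetrics to keep the enlarged string positioned within the component (the canvas and its text are illustrative):

    import java.awt.*;

    // Enlarge drawn text on request; FontMetrics supplies the measurements
    // needed to keep the string inside the canvas.
    public class BigTextCanvas extends Canvas {
        private int points = 12;

        public void largerText() {      // e.g. wired to a menu item or key
            points += 4;
            setFont(new Font("SansSerif", Font.PLAIN, points));
            repaint();
        }
        public void paint(Graphics g) {
            FontMetrics fm = g.getFontMetrics();
            String s = "Temperature: 72";
            int x = Math.max(0, (getSize().width - fm.stringWidth(s)) / 2);
            int y = fm.getMaxAscent(); // baseline so the text is not clipped
            g.drawString(s, x, y);
        }
    }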

Color Scheme

The user should be able to adjust the color scheme to reduce display complexity or increase contrast. Again, these preferences are common among systems and browsers. java.awt.SystemColor makes use of current desktop colors, set by the user. The system colors that are defined are:
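
Among them are window, windowText, control, and controlText. A brief sketch of applying java.awt.SystemColor constants so that an applet follows the colors (and any high-contrast scheme) the user chose at the desktop level; the helper class is illustrative:

    import java.awt.*;

    // Honor the user's desktop color scheme instead of hard-coding colors.
    public class DesktopColors {
        public static void apply(Component c) {
            c.setBackground(SystemColor.window);
            c.setForeground(SystemColor.windowText);
        }
        public static void applyToControl(Component c) {
            c.setBackground(SystemColor.control);
            c.setForeground(SystemColor.controlText);
        }
    }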



DG4: Provide semantic information about the application/applet, its objects and their actions

Provide a summary of the application/applet and explanations of the relations between and functions/actions of the objects in the application/applet. This might be most easily accomplished through a series of Help files available through the menu system.

Basic semantic information is available by knowing the type of component. For example, knowing which objects are Buttons directs the user to the items that may be selected to cause actions. Giving logical and unique names to buttons provides additional semantics for the events that might be generated upon selecting a button. If a custom control is necessary, provide an explanation of how to identify and access the component. For example, in the room reservations applet, users were told that custom components were labeled with the room number. Once able to find the text label, they were able to position the mouse on the component. This action caused a dialog box to open which took the focus and allowed the user to find out more about a room reservation.



DG5: Provide mouseless navigation (JDK 1.1)



DG6: Notify the user of important changes in the semantics of the display.

If the font, font size, or color of text changes to communicate information, that information should be accessible to the user by alternate means. The change in these qualities could be indicated by a pop-up alert or other text indicator such as a tooltip, or by a sound (as long as there is also a visual and textual indication). The modality should be user selectable.



DG7: Provide menu access to commands (JDK 1.1).

New to JDK 1.1 is the AWT menu shortcut API, which provides the following new methods (see the sketch below for an example of use):
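
A sketch of attaching JDK 1.1 MenuShortcuts so that menu commands can be invoked from the keyboard (the menu contents are illustrative):

    import java.awt.*;
    import java.awt.event.*;

    // JDK 1.1 menu shortcuts: each command gets a keyboard accelerator.
    public class ShortcutDemo {
        public static void main(String[] args) {
            Frame f = new Frame("Shortcuts");
            MenuBar mb = new MenuBar();
            Menu file = new Menu("File");

            // Ctrl+O on most platforms (the VM maps the modifier key).
            MenuItem open = new MenuItem("Open",
                    new MenuShortcut(KeyEvent.VK_O));
            MenuItem quit = new MenuItem("Quit",
                    new MenuShortcut(KeyEvent.VK_Q));
            quit.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) { System.exit(0); }
            });

            file.add(open);
            file.add(quit);
            mb.add(file);
            f.setMenuBar(mb);
            f.setSize(200, 100);
            f.show();
        }
    }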



Appendix A:
A Sample of Common Interaction Problems between Screen Readers and Applets

Problems experienced by screen readers when interacting with applets (not an exhaustive list):

The software used for this evaluation:

screen readers: OutSPOKEN and JAWS
browser: Netscape 3.0
OS: Windows 95



Appendix B: References

Ball, Thomas (1996). Win32 AWT Rewrite. Available: http://www.javasoft.com/people/tball/Win32-AWT.html

Flanagan, David (1996). Java in a Nutshell. Sebastopol, CA: O'Reilly and Associates, Inc.

Glinert, Ephraim and Wise, G. Bowden (1996). Adaptive multimodal interfaces in Polymestra. In Paul M. Sharkey (Ed.), Proceedings of the First European Conference on Disability, Virtual Reality and Associated Technologies (pp. 141-150). Maidenhead, UK.

Kramer, G. (Ed.) (1994) Auditory Display. Santa Fe, NM: Addison-Wesley.

JavaSoft (1996). JDK 1.1 New Feature Documentation. Available: http://www.javasoft.com/products/JDK/1.1/docs/relnotes/features.html

Meijer, Peter. (1996) The_vOICe. [Computer software]. Available http://ourworld.compuserve.com/homepages/Peter_Meijer/voice.htm

Microsoft Accessibility and Disabilities Group (1996). Microsoft Active Accessibility: Programmer's Guide and Reference (Beta 2 Version). September 1996.

Microsoft Accessibility and Disabilities Group (1995). The MS Windows Guidelines for Accessible Software Design. Available: http://www.microsoft.com/enable/dev/guidelines/software.htm

Sun Microsystems (1996). JavaBeans 1.0 API Specification. Available: http://java.sun.com/beans.

Vanderheiden, Gregg C. (1994) Application Software Design Guidelines: Increasing the Accessibility of Application Software to People with Disabilities and Older Users. Available: http://trace.wisc.edu/docs/software_guidelines/software.htm

