Many of the new elements in HTML5 help you describe your content more accurately, which matters when other programs interpret your code. For example, some people use software called screen readers to translate the graphical contents of the screen into text that is read aloud. Screen readers work by interpreting the text on the screen and the corresponding markup to identify links, images, and other elements. Screen readers have made amazing advances, but they still lag behind current trends. Live regions, where polling or Ajax requests alter content on the page, are hard for them to detect, and more complex pages can be difficult to navigate because the screen reader must read so much of the content aloud.
Parts of the WAI-ARIA (Accessible Rich Internet Applications) specification have been rolled into HTML5, while others remain separate and can complement the HTML5 specification. Many screen readers are already using features of the WAI-ARIA specification, including JAWS, Window-Eyes, and even Apple's built-in VoiceOver feature. WAI-ARIA also introduces additional markup that assistive technology can use as hints for discovering regions that update.
In this chapter, we’ll see how HTML5 can improve the experience of your visitors who use these assistive devices. Most importantly, the techniques in this chapter require no fallback support, because many screen readers are already able to take advantage of these techniques right now.
These techniques include:
The role attribute [<div role="document">]
Identifies the responsibility of an element to screen readers. [C3, F3.6, S4, IE8, O9.6]
aria-live [<div aria-live="polite">]
Identifies a region that updates automatically, possibly by Ajax.
[F3.6 (Windows), S4, IE8]
aria-atomic [<div aria-live="polite" aria-atomic="true">]
Identifies whether the entire content of a live region should be read aloud or only the elements that changed. [F3.6 (Windows), S4, IE8]
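Putting these attributes together, a status area that a script updates via Ajax might be marked up as follows. This is an illustrative sketch, not an example from this book; the ids and content are invented.

```html
<!-- role="document" tells assistive technology that this region
     holds document content to be read, rather than application UI. -->
<div id="content" role="document">
  <h1>Latest Scores</h1>

  <!-- aria-live="polite" asks the screen reader to announce changes
       when the user is idle; aria-atomic="true" asks it to reread the
       entire region rather than only the node that changed. -->
  <div id="scores" aria-live="polite" aria-atomic="true">
    Home 3, Away 2
  </div>
</div>
```

When a script replaces the contents of the scores element, a screen reader that supports live regions will announce the new text without the user having to hunt for the change.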