What is media access?

AUTHOR’S NOTE – You’re reading the HTML version of a chapter from the book Building Accessible Websites (ISBN 0-7357-1150-X). Copyright © Joe Clark, 2002. All rights reserved.

The Web is merely the latest medium requiring accessibility. Work has gone on for generations on improving the accessibility of other media of communication and of the physical world.

Some Web-access techniques stand on the shoulders of ancestors in “old media” like books, film, and television. Conversely, some access techniques from so-called old media are directly applicable online (typically only for multimedia). Just as you need to know more about disability than might seem immediately relevant to the Web, you need to expand your understanding of media access. The techniques and technologies behind media access are hard to understand, obscure, and poorly documented. It’s difficult, for example, to learn about captioning, or audio description, or dubbing, or subtitling.

Here is a quickie introduction to the various techniques and technologies in use to make media accessible to people with disabilities and others. We’ll start out with a basic definition.

What is accessibility?

Accessibility involves making allowances for characteristics a person cannot readily change.

It’s a simple, sweeping definition; practical examples of its application to Websites appear throughout this book.

Old-media accessibility

Let’s learn a little about the means of making old media like film and television accessible.

Access for the blind and visually-impaired

The relevant technique is audio description: Narration, read out loud by a human being (or, in the future, by voice synthesis), that succinctly explains visual details not apparent from the audio alone. Audio description takes a movie, for example, and talks you through it. A narrator tells you everything that’s happening onscreen that you can’t figure out just from the soundtrack.

That’s a rather dry definition. Audio description (A.D.) is actually an advanced literary form that traces its origins to live theatre. True enough, for centuries sighted people have sat alongside their blind friends at theatrical performances and filled in the blanks with spoken descriptions, a practice akin to reading print books aloud for the benefit of the blind. Experiments in providing descriptions for TV date back thirty years. But audio description as a practice, with its own norms and a name unto itself, began in 1981 with the work of Cody and Margaret Pfanstiehl in the suburbs of Washington, D.C. The Pfanstiehls introduced the first regularly-scheduled description of live theatre.

A year later, the Pfanstiehls began work with Barry Cronin and PBS in the United States. (I am obliged to mention that PBS is the Public Broadcasting Service, a television network.) Years of demonstration projects ensued, leading to the founding of the Descriptive Video Service (DVS) at WGBH-TV in Boston. Audio description has been a feature of American television – albeit in a very limited way – since 1988.

Theatrical audio description is available in many countries – Spain, England, New Zealand, and beyond – and A.D. on television can be found, albeit rarely, in Canada, Germany, the U.S., the U.K., and Australia. If you can watch North American–standard NTSC home videos, have a look at DVS’s line of Hollywood movies and PBS specials with always-audible descriptions. The Royal National Institute of the Blind (RNIB) in the U.K. offers a similar line of described home videos in PAL format (and in some cases, DVS and RNIB libraries have both described the same films). You can also watch a tiny handful of Region 1 and 2 DVDs with audio description, although an interface problem comes up: Since you need to manipulate an onscreen menu to turn audio description on, just how do you do that if you’re blind?

It’s also possible to describe first-run movies, and it is being done today, though even more rarely than on television; the predominant service provider is the Descriptive Video Service, whose DVS Theatrical system is an adjunct to Rear Window captioning explored in the next section.


It’s important to get the terminology right. Audio description is often misnamed:

  1. “Video description” (more than video can be described; the technique started in live theatre, remember)
  2. “Descriptive Video” (a registered service mark of DVS)
  3. “Auditory description” (an early favourite of the World Wide Web Consortium, the only entity anywhere that uses the term, which offers the disadvantages of pretense and an extra syllable, and no advantages at all)
  4. “Audio captioning” and a range of other abominations

It must be pointed out, however, that the Federal Communications Commission in the U.S. has more or less adopted the term “video description,” while the Canadian Radio-television and Telecommunications Commission, the broadcasting regulator in Canada, seems to think that “audio description” and “described video” are two different things, which they are not. These broadcasting regulators, with their government imprimatur, have unfortunately muddied the terminological waters. I suggest you act smarter than these bureaucrats and stick to the only generic term, “audio description.”

Access for the deaf and hard-of-hearing

The technique of record is captioning: Rendering of speech and other audible information in the written language of the audio. Usually closed: Captions are encoded or invisible and must be decoded or made visible. Some captions are open and can’t be turned off (and indeed, that’s how captions started out in the 1970s, when I first started watching them and before closed-captioning systems were invented).

Previously, to watch closed captioning you had to use a separate decoder connected to your television. (I still own one.) Caption decoders are built into televisions now. Nearly all TV sets sold in North America come so equipped (U.S. law requires it; split manufacturing runs for Canada are rare, so Canadians buy the same sets). So do a majority of sets in Europe, according to all indications, though there is no legal requirement. Built-in caption decoders are much less common a feature in Australia.

Television isn’t the only medium that can be and is captioned. Theatrical motion pictures can be open-captioned; it’s still being done, but prints and screenings are virtually impossible to find, and even after more than twenty years, I have watched but a single open-captioned film in a movie theatre. (And that one was Liar Liar with Jim Carrey!) In 2001, after a human-rights complaint alleging discrimination on the basis of disability, certain Australian cinema owners agreed to exhibit open-captioned films several times a week – a first in the English-speaking world.

A sexier technology is the Rear Window system, devised by a team of inventors centred at WGBH, which actually allows first-run movies to be closed-captioned. A large display on the back wall of the auditorium shows the captions in mirror image. You the viewer attach a semi-transparent Plexiglas panel on a long stalk to the arm of your chair; place the panel in a comfortable position, possibly overlapping the bottom edge of the movie screen; and watch the reflected right-reading captions and the movie together. Only a tiny handful of cinemas use Rear Window, a number that is unlikely to grow significantly given cost and resistance; closed captioning of first-run movies will remain rare.

Note that subtitling is not the same as captioning. Despite their seeming similarity, captioning and subtitling have very little in common.

  1. Captions are intended for deaf and hard-of-hearing audiences. The assumed audience for subtitling is hearing people who do not understand the language of dialogue.
  2. Captions move to denote who is speaking; subtitles are almost always set at bottom centre.
  3. Captions can explicitly state the speaker’s name, typically as a prefix to the line of dialogue (JOE: I’ll be right there).
  4. Captions notate sound effects and other dramatically significant audio. Subtitles assume you can hear the phone ringing, the footsteps outside the door, a thunderclap.
  5. Subtitles are typically open; in fact, subtitles were almost always open in all media for decades until DVDs, with their selectable subtitle tracks, came along. Captions are usually closed.
  6. Captions are in the same language as the audio. Subtitles are a translation.
  7. Subtitles also translate onscreen type in another language, e.g., a sign tacked to a door, a computer monitor display, a newspaper headline, opening and closing credits.
  8. Subtitles never mention the source language. A film with dialogue in multiple languages will feature continuous subtitles that never indicate that the source language has changed. (Or only dialogue in one language will be subtitled – for example, Life Is Beautiful, where only the Italian is subtitled, not the German.)
  9. Captions tend to render dialogue even in a foreign language, transliterate the dialogue, or state that the character is speaking a different language.
  10. Captions ideally render all utterances. Subtitles do not bother to duplicate some verbal forms, e.g., proper names uttered in isolation (“Jacques!”), words repeated (“Help! Help! Help!”), song lyrics, phrases or utterances in the target language, or phrases the worldly hearing audience is expected to know (“Danke schön”).
  11. Captions render tone and manner of voice where necessary, e.g., [whispering] or [sarcastically].
  12. A subtitled program can be captioned (subtitles first, captions later). Captioned programs aren’t subtitled after captioning.

Worldwide captioning

Captioning is available in dozens of nations worldwide. In television, two broad technical standards are in use:

  1. Line 21, used in North America and other NTSC countries, which encodes captions in line 21 of the television signal
  2. World System Teletext, used in the U.K., Australia, and other countries, which carries captions as teletext pages

The systems are incompatible (then again, so are the telecasts), though it’s possible to translate caption files between them. Only North American Line 21 captions are readily recorded on home videotapes, and for nearly 20 years closed-captioned home videos have been the norm with larger studios. You simply play the same tape everyone else plays with your decoder turned on, and captions appear. (The same applies to TV broadcasts you tape yourself.)

It is possible to record World System Teletext captions on a tape, but there is no standard, easy, foolproof way to do so; it doesn’t just happen automatically on a standard VHS VCR as it does with Line 21. Sometimes you need a special VCR that converts the closed captions to open. A variation of the Line 21 system (confusingly called Line 22) was introduced in Europe and Australia specifically for home-video captioning. In those countries, then, you need two different and incompatible decoders – one for broadcast captions, one for home video. It is rare to find a television or VCR that includes both, so if you want the full captioning experience you end up buying a set-top decoder.

A note on the U.K. lexicon: While Canadians, Australians, and Americans can keep captioning and subtitling straight, our dear British friends employ the worst possible terminology. Captioning, as far as they are concerned, is “subtitling,” while subtitling is also “subtitling.” (A caption, in British vernacular, is any kind of onscreen graphic, like the name of a city written out onscreen during a news report.) It then becomes possible to subtitle a subtitled program, or subtitle a captioned program. British correspondents tell me, in a manifestly false claim, that it is impossible to confuse subtitling and subtitling in their grand nation. Yet this is not a case of using different words for the same concept (as lift/elevator or boot/trunk); here we are using the same word for similar but readily-confusable concepts. Because the two techniques are not at all the same or interchangeable, I’ll call captioning “captioning” and subtitling “subtitling” in this book, and so should you.

Language accessibility

Two old-media techniques are in use in this domain:

  1. Dubbing: Replacing vocal tracks with vocal tracks in another language.
  2. Subtitling: Translating speech (and, in specific limited cases, onscreen type) into one or more written languages added to the image.

In online multimedia, you may be confronted with adapting a segment of video more than one way. Recall that dubbed programs can be and are captioned, as are subtitled programs; I’ve seen it myself. (Remember, subtitles fail to render a lot of sounds and don’t tell you who’s speaking, information a deaf viewer needs.) Both types of programs can be described. To describe a subtitled program, the subtitles are read out loud and typically enacted – using a delivery more akin to a dubbing actor’s than to the newsreader-like pseudo-objective delivery of normal descriptions.

Applicability to the Web

I like to refer to captioning, audio description, subtitling, and dubbing as the Big Four access techniques. Web accessibility, the subject of this entire book, is the fifth. This quintet clusters in the way the four fingers and single thumb of a hand do – all of them interrelated if not interchangeable.

Similarly, in the way that a thumb is a finger as well as a thumb, Web accessibility occupies two categories at once. Whenever you’re dealing with online video, and often when dealing with audio, the Big Four are relevant to Web access for the simple reason that online video is video, full stop. Now, audio is a more complicated matter, and those complications are explored in Chapter 14, “Multimedia.”
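The overlap is concrete: the closed-captioning model carries straight over to Web multimedia. As a sketch only (filenames hypothetical), a SMIL presentation can pair a video with a separate caption text stream that a player renders only for viewers who have captions switched on:

```xml
<!-- Hypothetical SMIL sketch. The video and a caption text stream
     play in parallel; the system-captions test attribute tells the
     player to render the captions only for users who have asked
     for them -- closed captioning, in other words. -->
<smil>
  <body>
    <par>
      <video src="movie.mpg"/>
      <textstream src="captions.rt" system-captions="on"/>
    </par>
  </body>
</smil>
```

Switch captions off in the player’s preferences and the same file plays uncaptioned – the Web analogue of a decoder switch.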

Additionally, long description of still images on the Web (via the longdesc attribute) is cognate with the practice of audio description of film and video.
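By way of a sketch (filenames and wording hypothetical), the alt attribute supplies the short text equivalent, while longdesc points to a separate page holding the full prose description:

```html
<!-- Hypothetical example: alt gives the short text equivalent;
     longdesc points to a separate page carrying a full prose
     description of the image -- the textual cousin of audio
     description. -->
<img src="sales-chart.gif"
     alt="Bar chart of quarterly sales"
     longdesc="sales-chart-description.html">
```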
