Joe Clark: Media access

Reading the silver screen

Originally published 1994 | Updated 2001.07.15

For a technology that serves a “marginal” group – deaf and hard-of-hearing people – captioning has blossomed into something of a technological success story. By law, decoders to translate closed-caption signals into words visible onscreen are now built into all U.S. TV sets with screens measuring at least 13″ diagonally. A decade’s worth of legwork and consciousness-raising by captioners and deaf and hard-of-hearing viewers has brought captioning to all genres of TV programming, from prime-time series to music videos to newscasts and beyond. But first-run movies in theaters remain inaccessible: If you have a hearing problem and want to watch the latest Hollywood “product” at your local cinema, you’re out of luck. Instead, you’re forced to wait for the home video to appear, and even then it might not be captioned.

Researchers at the National Center for Accessible Media (NCAM) at WGBH, the Boston PBS Überstation, are fine-tuning a trio of technologies to break the cinematic sound barrier for deaf and hard-of-hearing moviegoers. Director Larry Goldberg describes NCAM as a research office “dedicated to serving under-served populations” – that is, people who experience barriers in using information media. That includes not only deaf and hard-of-hearing people but the blind and visually impaired and people whose first language isn’t English. NCAM’s location at WGBH works to its advantage, as that station has a 20-year history of developing accessible media: It’s home to the Caption Center, the world’s oldest and most adept captioning body, and the newer Descriptive Video Service that provides “audio descriptions” of onscreen action for blind TV viewers.

With that wealth of in-house knowledge, in late 1992 NCAM launched its Motion Picture Access Project, better known as MoPix. The project came to life after countless complaints from deaf and hard-of-hearing people about uncaptioned first-run films. As Goldberg recalls, “The consumers would say, ‘I want to go to the movies with friends who can hear, so what are you going to do about that?’ ”

The easiest solution to the movie-access problem is open-captioning – adding captions that all viewers would see. But conventional wisdom in the captioning biz holds that hearing people resent captions. (That antipathy extends to any kind of titles beyond opening and closing credits – witness the reluctance of major studios to put subtitled foreign-language films in general release.) Avoiding the ire of hearing viewers prompted the development of a closed-captioning system for television in the first place; captioning for motion pictures, then, has to be as unobtrusive for hearing moviegoers as it is useful for deaf and hard-of-hearing viewers.

Armed with a small grant from the Department of Education, NCAM engineers spent most of 1993 developing a prototype movie-captioning system. But there was a host of practical constraints. If thousands of people are going to use it over the years, any real-world system has to be all but unbreakable and impervious to what the rigorously scientific minds at NCAM call “cooties.” It can’t require training and must be usable virtually anywhere in an auditorium “so you didn’t have to have a specialized ‘deaf section’ in the theater,” as Goldberg explains. Finally, it has to be readable, comfortable, and, above all, cheap enough that stingy theater owners will buy it and sticky-fingered moviegoers won’t steal it.

A few months of brainstorming resulted in three trial technologies for movie captioning:

  • A “seatback display”: This configuration consists of a vacuum-fluorescent display (VFD) attached to the back of the seat in front of the viewer. VFDs – familiar from many supermarket cash registers – produce a bright green dot-matrix character display by firing electrons from fine filament wires at selectively energized, phosphor-coated segments, which glow when struck. In the NCAM design, the VFD is built into the seatback like a head restraint; users can adjust the height of the device to keep it within peripheral vision and avoid doing hundreds of double-takes between movie screen and caption display.
  • A “rearview display”: A large LED screen located at the rear of the theater displays captions in mirror-image (i.e., with words flipped around a vertical axis). Users watch the display via a clear plexiglas panel mounted on an adjustable stalk. The plexiglas is reflective enough to show the captions in their correct orientation but transparent enough to watch the movie through it at the same time.
  • Virtual Vision glasses: Initially developed as a kind of video Walkman for portable TV-watching, the Virtual Vision system comprises an oversized pair of eyeglasses and a small backlit LCD that sits at the very top of the glasses and faces straight down. Through lenses and a mirror, TV pictures (or images from a computer display) on the LCD are reflected in the eyeglasses at about the position where a bifocal lens would be, allowing someone “wearing” Virtual Vision to walk, chew gum, and watch TV at the same time – or, in this case, look at a movie screen and read captions that seem to float in mid-air.

With the help of a 65-seat Boston movie house, NCAM ran a field test of these technologies using real movies – Sleepless in Seattle and In the Line of Fire – in October 1993. Audiences were made up not only of hard-of-hearing and deaf volunteers, as would be expected, but of hearing people, too, “because we wanted to see what hearing people would think if they were going to a theatre with these devices around them,” Goldberg says. Deaf and hard-of-hearing moviegoers each watched both films using two different devices.

All three systems succeeded at the basic task of making a movie comprehensible, but there was no clear winner. While the seatback and rearview displays were bright and readable, those very attributes distracted hearing viewers seated nearby. (It’s possible, Goldberg notes, to restrict the viewing angle of the seatback display with a special “microgroove” coating that limits the sideways “spill” of light from the VFD, but NCAM has not estimated the cost of that refinement or tested it.) Arriving at a comfortable viewing position for the rearview and seatback displays is difficult or impossible for viewers in the front rows, and those in the very front row have no seatback in front of them to mount a display on.

The rearview display’s plexiglas reflector was dirt cheap, easy to use, low-tech, and almost disposable, but it was difficult to aim the plexiglas at the LED display on the rear wall and keep it aimed amid the bumps and jostling that can happen in a crowded movie theater. Mass-producing LED displays for theater walls would be a headache, too, since theaters come in enough configurations that expensive custom installations (the model NCAM used cost over $12,000) might be all too common.

Virtual Vision glasses had the advantage of visual privacy, but since they’re expensive ($700 list) and high-tech, they’re very much worth stealing. The glasses need careful set-up, too, regardless of the use to which they’re put. Virtual Vision glasses project a “virtual image” – like a TV picture or captions – on only one side. For most people, the “dominant” eye – the one through which the brain gets most of its visual information – is the eye the LCD screen needs to be aligned with. To use the glasses in a movie house, you would have to know in advance which eye is dominant and request that version, meaning theater-owners would have to keep left- and right-eye stock on hand. (To add to the confusion, some people have no dominant eye, and for others the non-dominant eye works better with Virtual Vision.) People with poor 3-D vision, for which the brain usually has built-in compensations, often first discover that problem when they find they’re unable to make the virtual image meld with the real world that’s also visible through the eyeglasses.

The virtual image – in this case, captions – can be adjusted to appear to float anywhere from 30 cm away to infinity, but that disparity in focusing distance between captions and movie screen confused some people. And simple fatigue was a problem, as wearing the 140 g [5 oz.] glasses through a two-hour movie took its toll, particularly for people who already wear glasses. Still, deaf and hard-of-hearing users told Goldberg they were so desperate to understand the movies that they would put up with Virtual Vision’s flaws if it were the only way.

NCAM scheduled another field test for early April in an even trickier environment: The ultra-wide-screen Imax theater at the National Air and Space Museum. The steeply raked, almost clifflike seating in an Imax theater locates the seatback in front of you down at your feet, putting the rearview and seatback displays at something of a disadvantage. As NCAM’s Judith Navoy explains, “All the problems that we experienced in our regular theater – the need to refocus between device and film and the discrepancy between caption size and movie size, for example – were magnified during this test, primarily on account of the enormous screen size.”

NCAM already has a wish list for improvements to the tested technologies. High on the list for the seatback display is reducing its size, perhaps to the point of making it portable, and improving adjustability so the viewer has more options in placing the display relative to the movie screen. The rearview display needs a more rigid mounting arm and a brighter LED display on the theater wall; NCAM may also experiment with opaque and translucent plexiglas. NCAM wishes Virtual Vision glasses were lighter and didn’t require tethering to some data source, but adaptations to Virtual Vision are unlikely given that it is an existing retail product not specifically designed for captioning.

Goldberg has his fingers crossed for funding for larger-scale tests that he hopes could result in the release of a workable system within two years. Until then, deaf and hard-of-hearing viewers are stuck with the silent screen.


Update: WGBH has since settled on the rearview display. It’s now called the Rear Window® captioning system. Rear Window and DVS Theatrical have been deployed on a number of films, including The Jackal, Titanic, The Mask of Zorro, 8MM, Entrapment, Star Wars Episode I, and essentially every Imax film in the last five years.