axxlog


January to June 2003 archives

2003.06.28a – We in the accessibility demimonde have few credible sources of statistics on people with disabilities and their use of adaptive technology. Maybe we know how many disabled people there are, but how many of them are using, say, captioning, or screen readers? This we really do not know.

And it’s not getting better quickly. Statistical sources tend not to ask the right questions. Ages and ages ago, I wrote about the existing knowledge of decoder users in Canada and the U.S., for example:

The largest consistent estimate I have read of decoder sales in all of North America in the entirety of the first 13 years of captioning is 300,000. (I don’t have a citation.) Statistics Canada (in “Selected Characteristics of Persons with Disabilities Residing in Households,” 1994) states that there were 15,575 decoder users in Canada (and 10,240 who needed decoders but didn’t have them, a number not considered here).

Further statistics on the number of users of set-top decoders in the U.S. just after the decoder law kicked in are now available courtesy of Stephen Kaye of the Disability Statistics Center, University of California San Francisco. 15% of deaf people and 1.4% of hard-of-hearing people report using caption decoders. That amounts to 48,000 deaf people plus 127,000 hard-of-hearing, or 175,000 total. (“This is based on 1994–1995 data from the National Health Interview Survey on Disability,” Kaye says.)
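(A quick back-of-the-envelope, for those who like to check the arithmetic. The totals are Kaye’s; the implied base populations are my inference from the stated rates.)

    # Back-of-the-envelope check on Kaye's figures.
    deaf_users = 48_000    # deaf decoder users
    hoh_users = 127_000    # hard-of-hearing decoder users
    deaf_rate = 0.15       # 15% of deaf people report using decoders
    hoh_rate = 0.014       # 1.4% of hard-of-hearing people do

    print(deaf_users + hoh_users)         # 175,000 users in total
    print(round(deaf_users / deaf_rate))  # implies a base of 320,000 deaf people
    print(round(hoh_users / hoh_rate))    # implies some 9.1 million hard-of-hearing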

That’s significantly lower than the 300,000 estimate by decoder manufacturers (really Sears, Sanyo, and NCI) and the 30,000 guesstimate for Canada, given 1/10 the population. It’s pitiful, really. But of course those were external decoders you had to pay money for. Now, with decoders built in, there is no reason for anyone who wants captioning not to have it. Some hard-of-hearing people who are in denial about their hearing loss (the elderly especially) may not watch with captions, but at this point essentially every deaf, hard-of-hearing, or hearing person who needs captions, including people learning whatever language is shown in the captions, has a decoder-equipped TV.

But how many people is that? We don’t know that, either.

Will the upcoming Statistics Canada census report finally answer the question? Sometime in December, Statistics Canada will release a report from the Participation and Activity Limitation Survey (PALS). We thus don’t have the numbers yet, but we do have the questions (in a large PDF).

Deaf or hard-of-hearing respondents are asked if they need (or, in a separate question, need but do not have) the following:

  1. Computer to communicate (e.g., E-mail or chat service)
  2. Volume-control telephone
  3. TTY or TDD
  4. Message relay service
  5. Other phone-related devices (e.g., flashers)
  6. Closed caption TV or decoder
  7. Amplifiers (e.g., FM, acoustic, infrared)
  8. Visual or vibrating alarms
  9. Sign-language interpreter
  10. Hearing-ear dog
  11. Other – specify

From these results, we would thus be able to learn how many people are watching captioned TV.

The questions for blind and visually-impaired respondents are:

  1. Magnifiers
  2. Braille reading materials
  3. Large-print reading materials
  4. Talking books
  5. Recording equipment or portable note-takers
  6. Closed circuit devices, e.g., CCTVs
  7. A computer with Braille, large print or speech access
  8. A white cane
  9. A guide dog
  10. Another aid – specify

We might be able to infer how many people are using screen readers (or magnifiers – they really shouldn’t be grouped), but what we won’t know from this estimate is the number of people using audio description.

A StatsCan researcher writes:

I agree with you that it is unfortunate we do not have any data on audio description in the 2001 survey and I can understand your frustration. The great diversity of assistive technology for persons with all types of disabilities is expanding at a very fast pace. At the same time, respondent burden and operational constraints do restrict the number of questions and list items that can be included on the survey. Data quality also needs to be considered, i.e., will cell counts be too low and preclude us disseminating such data?

However, for 2001, we still have the write-in category of aid...; I will look at these write-ins to get an idea of the frequency of “audio description,” “descriptive narration” or similar write-in responses and evaluate data quality. I will get back to you with my findings.

As well, there are preliminary plans for a 2006 survey; we will definitely review the list of aids to blind persons, to make it more current and add any technology that is being used by a substantial number of persons.

What’s the punchline?

The punchline is that the original StatsCan data was released in 1993 based on the 1991 census. This data will be released in 2003 based on the 2001 census. Kind of a long wait for numbers, isn’t it?

2003.06.28b – By far the biggest news of late is the release of the report of the Standing Committee on Canadian Heritage, a House of Commons committee reconsidering the Broadcasting Act. I gave “evidence” before that august Committee (“Why are your answers not any shorter, Mr. Clark?”) in 2002. Apparently only Jim Roots from the Canadian Assn. of the Deaf also appeared to discuss accessibility, and we got pretty much everything we wanted. (You can skip the quoted text.)

Accessibility

Section 3 of the Broadcasting Act states that “programming accessible by disabled persons should be provided within the Canadian broadcasting system as resources become available for the purpose.” This particular section was added in response to a recommendation made by the Standing Committee on Communications and Culture in 1987 in its report on the Canadian broadcasting system.

For people who are hearing- or visually impaired, this access is accommodated through several different formats. Closed captioning assists the deaf or hearing-impaired. Audio description and descriptive video services (DVS), or described video programming as it is sometimes called, assist the blind or visually impaired, as do the National Broadcast Reading Service (VoicePrint) and La Magnétothèque.

The proliferation of channels and the presence of three distinct distribution mechanisms (conventional, cable, satellite), however, have raised important difficulties concerning delivery of services to persons with disabilities. This chapter reviews what the Committee heard concerning programming accessibility for the hearing and visually impaired. It also discusses another access issue, namely the expense of participation in broadcasting hearings.

Closed Captioning

Since at least 1995, the CRTC has made specific requirements for captioning as a condition of granting or renewing broadcast licences. These requirements differ according to the size of the broadcaster and the language of delivery.

English-language television stations are separated into three categories: large, medium and small. Large stations are defined as those earning more than $10 million in annual advertising revenues and network payments. This includes CBC, CTV and Global. These broadcasters have been required, since 1 September 1998, to caption at least 90% of all programming during the broadcast day, as well as all local news, including live segments. Medium-sized stations (those earning between $5 million and $10 million in annual advertising revenues and network payments) are expected to meet the same standards as large stations. Small stations (earning under $5 million in annual revenues and network payments) are encouraged to work towards large-station captioning standards as well.

As for French-language television stations, since 1999, the CRTC has expected broadcasters to move towards achieving the same levels of captioning as English-language broadcasters, and has been “exploring this with individual broadcasters at licence renewal time.” Moreover, in 2001, the CRTC held that the largest French-language private broadcaster, TVA, must, by September 2004, caption 100% of all news and, by 2007, 90% of all programming as a condition of its licence.

What is Closed Captioning?

Captioning is a method used to make television broadcasting available to persons who are deaf or hard of hearing. This is achieved by the use of subtitles that appear on the television screen as a written transcript of dialogue and other meaningful sound effects on a television program. Captioning can be either “open” or “closed”. Open captioning refers to that which is accessible to all viewers, and closed captioning refers to that which is accessible only to those viewers using a television equipped with a computer chip to decode the captioning signals embedded within the broadcast signal. Virtually all televisions manufactured within the past 10 years are equipped with such a decoding chip. Closed captioning may be toggled off or on, as the viewer wishes. It is estimated that about 15% of the Canadian population has some form of hearing loss.
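[A technical aside from me, not the Committee: the decoding that chip performs is simple at heart. Line 21 of the vertical blanking interval carries two bytes per video field, each holding seven data bits plus an odd-parity bit. A minimal sketch of the byte handling in Python – nothing like a full EIA-608 decoder, just the flavour of it – might look like this. – Ed.]

    # Sketch of Line 21 (EIA-608) byte-pair handling; a real decoder also
    # tracks caption channels, display modes, positioning, and a character
    # set that deviates from ASCII at a few code points (the accented é
    # among them).

    def parity_ok(byte: int) -> bool:
        """Each byte carries seven data bits plus an odd-parity bit."""
        return bin(byte).count("1") % 2 == 1

    def decode_pair(b1: int, b2: int) -> str:
        """Crude text extraction from one byte pair (sketch, not a decoder)."""
        if not (parity_ok(b1) and parity_ok(b2)):
            return ""              # drop pairs that fail the parity check
        b1 &= 0x7F                 # strip the parity bits
        b2 &= 0x7F
        if b1 < 0x20:              # null padding (0x00) or a control code
            return ""              # (positioning, colour, pop-on/roll-up...)
        return chr(b1) + (chr(b2) if b2 >= 0x20 else "")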

Pre-packaged captioning and real-time captioning

Captioning of television broadcasts may be prepared in advance of broadcast (“pre-packaged” captioning) or occur simultaneously with the broadcast images (“real-time” captioning).

The process for providing pre-packaged captioning is much like having a document translated from one language into another. A video copy of the material to be broadcast is provided to the captionist, who watches the tape, listens to the audio and inserts the captions. The tape may be stopped and replayed for clarity, and the positioning of the captions may be adjusted so as not to interfere with the images on the screen. The captions are then timed to correspond with the images by means of time signals, inserted into the broadcast signal, and saved together with the audio and video for later broadcast. Pre-packaged captioning provides the greatest accuracy in the finished product, but it is the most time-consuming and costly method, at approximately $1,000 for one hour of programming.
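[Another aside from me: to make those time signals concrete, each offline caption is essentially a block of text bound to in and out timecodes and a screen position. A hypothetical minimal representation – the field names are mine, not any captioning system’s – follows. – Ed.]

    from dataclasses import dataclass

    @dataclass
    class Caption:
        """One pre-packaged caption: text tied to timecodes and a position."""
        text: str
        tc_in: str    # SMPTE timecode at which the caption appears (HH:MM:SS:FF)
        tc_out: str   # timecode at which it clears
        row: int      # vertical position, chosen so as not to cover the action
        align: str    # "left", "centre", or "right"

    creak = Caption(
        text="[ Door creaks ]",
        tc_in="01:02:10:15",
        tc_out="01:02:13:04",
        row=1,        # parked at screen top, away from the speaker
        align="centre",
    )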

Real-time captioning, on the other hand, occurs as the images are broadcast live. Television news and live sporting event broadcasts are good examples of this; clearly, these are formats completely unsuitable for pre-packaged captioning.

In real-time captioning, a stenographer provides the captioning while the images unfold onscreen. This is a less expensive method than pre-packaged captioning, costing approximately $140 for one hour of programming. However, without the benefit of time and editing, it sometimes suffers in accuracy: spelling errors, missed dialogue, loss of content when the images on the screen outpace the captionist’s speed, and captions that interfere with the on-screen images, obscuring, for example, the speaker of the on-screen dialogue. Moreover, any technical problem with either the broadcast or the captioning equipment will compromise the ability to provide on-the-fly captioning, impeding viewers’ ability to read what they cannot hear.

The Technology

Captioning technology has evolved over time and differs between pre-packaged captioning and real-time captioning.

There are two systems that may be used for pre-packaged captioning, although one, Cheetah Systems captioning, is no longer in use in Canada. The other, the Rogers Canada system, is capable of captioning all pre-packaged content in Canada. [This business about the Rogers Canada system, whatever that is, being the only one in use is entirely false – Ed.]

Captionists providing real-time captioning use phonetic shorthand on a stenography keyboard, much like the machines used by court reporters. These machines are unique in that the keys correspond to sounds rather than individual letters.

French- and English-language captioning

Witnesses told the Committee that English-language captioning tends to be superior to French-language captioning, and that there are significant differences between pre-packaged and real-time captioning. Some of the reasons for this are market-driven, particularly the large presence the United States has in the captioning of programs and the sheer number of English-language programs produced annually. Moreover, the U.S. federal Department of Education has provided much funding for captioning itself.

Difficulty with French captioning lies in the technology used, which is based on an English-language model. In the case of real-time captioning, the hardware must be remapped with French phonology to accommodate accents and other characteristics of the French language not present in English. Currently, two different systems can accomplish this on English-language hardware. One system, co-developed by a Canadian at a captioning company, has every accent available in the caption decoder font. A competing system, developed by la Société Radio-Canada, is, according to one witness, “technically inferior” in that it can use only the lower-case accented character “é,” even though it is technically possible to use many more accents than that.

Linguistics presents another challenge for French real-time captioning. French requires gender and number agreement, which may lead to errors in the captioning of live broadcasts. Additionally, as one witness explained, because French generally uses more words per sentence than English, captionists can struggle to keep pace with live events as they unfold onscreen.

Thus viewers who rely on French language real-time captioning are sometimes frustrated with the quality of the captioning.

Training captionists

The delivery of captioning, and particularly real-time captioning in French, suffers from a lack of trained captioners in Canada. The Committee heard witness testimony directly on this point:

There’s a distinct shortage of trained real-time captioners in French around the world, because the system is new. It has been adapted to the English hardware, after all. There are some in France. There are some in Québec. There aren’t enough.

There are no legitimate training programs for machine-aided stenography in the French language, as there are in English. Canada has a very good source for English-language-court-reporting training, but there are no such schools for French-language court reporting.

This witness suggested that French language captioning would be improved by the creation and support of better training regimes for French real-time captioners.

The Reality

Although both the Broadcasting Act and CRTC policies contain clear language expressing the need and desirability for the captioning of television programs broadcast in Canada, witness testimony before the Committee suggested that the reality is rather different.

The Committee heard that the language used by the CRTC in some instances lacks force and clear direction. For example, it “persist[s] in merely encouraging — as distinct from requiring — broadcasters to provide minimal amounts of captioning.” This may be readily seen in the case of smaller English and French language broadcast stations, as noted above.

In some instances, captioning is treated as an afterthought and as a marginal and unnecessary element of the broadcast as a whole. The Committee heard one witness who had been looking forward to viewing a movie that had been advertised as having captioning. However, when the movie was run, it was without the captioning. When this witness pressed the broadcaster for an explanation, he was told that the movie had been received with the wrong captioning track and that it was broadcast as scheduled but without the advertised captioning. He told the Committee:

Now I ask you, if it had been the audio track that was the wrong programming, would they have gone ahead and broadcast the movie with the wrong audio track? Would they? No, they wouldn’t. But they didn’t care about broadcasting without a captioning track.

As mentioned above, the CRTC has made it a requirement of broadcast licences that a certain proportion of all programming during the broadcast day be captioned. The CRTC defines “broadcast day” as “the period of up to 18 consecutive hours, beginning each day not earlier than six o’clock in the morning and ending not later than one o’clock in the morning of the following day, as selected by the licensee.”

The Committee heard that this 18-hour limitation of a broadcast day further marginalizes captioning and, by extension, those who rely on it:

Are all deaf people supposed to go to bed at midnight? We’re not allowed to stay up and watch a late movie? Who made that decision? Who said deaf people only live 18-hour days? We live 24-hour days like everybody else. We want 24-hour regulation of captioning.

The Committee also heard that the CRTC does not adequately enforce its own policies and that broadcasters need not fear reprisal should they not adhere to the captioning requirements, expectations or encouragements of their broadcast licences. One witness told the committee that:

Nothing untoward will happen to you if you’re a broadcaster and do not meet requirements for captioning or description. There has never been a case in which any broadcaster has ever been meaningfully punished for failing to live up to captioning or description requirements. It simply doesn’t happen.

This was echoed by another witness, who said that:

It’s widely acknowledged that in its present incarnation, the CRTC is toothless and is unwilling to penalize licensees for their failure or refusal to meet licensing conditions. This kind of situation makes the CRTC bureaucratically pointless and ineffectual.

This gives the unfortunate appearance of indifference to the community which relies on captioning. This appearance is exacerbated by limitations of the definition of “broadcasting day” and policy language that “expects” or “encourages” captioning rather than requiring it. The remedy, according to one witness, is that the CRTC:

... be given the power and the political support to take aggressive action wherever necessary. That could include forcing broadcasters off the air, at least temporarily, if they violate the conditions of their licensing.

Absent such mandatory direction and punitive enforcement, there is evidence to suggest that some broadcasters will continue to resist full implementation of the captioning directives of their broadcast licence. In 2000, a complaint was heard before the Canadian Human Rights Tribunal alleging that the Canadian Broadcasting Corporation was failing to live up to the English-language captioning requirements under its broadcast licence. As a large broadcaster, CBC is required to caption at least 90% of all programming during the broadcast day as well as all local news, including live segments. It was alleged that the CBC’s continuing failure to meet these requirements constituted discrimination on the basis of disability, an action contrary to the Canadian Human Rights Act.

After hearing evidence on the matter, the Tribunal held that, due to lack of captioning, some CBC English-language network broadcasts, as well as those by CBC Newsworld, were not accessible to deaf or hard of hearing viewers, thus constituting a prima facie case of discrimination on the basis of disability. The Tribunal also made note of evidence that the technology currently exists to caption everything broadcast on television.

The CBC claimed that providing the captioning they were obliged to provide as a condition of their licence would constitute an undue hardship and offered financial evidence to sustain this position. The Tribunal dismissed this claim, concluding from the CBC’s own evidence that the estimated cost of full captioning would total less than 1% of the CBC’s annual budget — an amount insufficient to constitute an undue hardship on the [C]orporation.

Indeed, the Tribunal stated that:

... after considering all of the evidence adduced by the CBC in this case, I was left with the overwhelming impression that, although significant improvements to the level of captioning have been implemented in recent years, with a little corporate will and imagination, a good deal more could be done with respect to captioning without imposing an undue hardship on the CBC than has thus far taken place.

The Tribunal then ordered that the CBC English-language network and Newsworld “caption all of their television programming, including television shows, commercials, promos and unscheduled news flashes, from sign on until sign off. This must occur on the first reasonable occasion.” The Tribunal also “strongly encourage[d] the CBC to consult with representatives of the deaf and hard of hearing community on an ongoing basis with respect to the delivery of captioning services.”

This decision was warmly received by the deaf and hard of hearing communities as a long-awaited vindication of their rights. The CBC has appealed the decision of the Canadian Human Rights Tribunal and the case is still in progress at the time of writing.

Services for the Visually Impaired

Audio Description for Visually Impaired Viewers

Audio description for blind or visually impaired viewers is a form of basic voice-over that describes the textual or graphic information that is displayed onscreen. This sort of service, for example, has someone reading aloud weather reports or sports scores as they appear on the television screen.

The CRTC has set regulations for the availability of this service. In its 1999 Policy Framework for Canadian Television, the CRTC stated that:

Licensees are strongly encouraged to adapt their programming to include audio description wherever it is appropriate and to take the necessary steps to ensure that their customer service responds to the needs of the visually impaired.

Building on this policy statement, beginning in 2001-02, CTV and Global were expected to provide audio description as part of their licence renewal agreements. In its licence renewal decision for television stations owned by CTV, the CRTC stated:

CTV indicated that it is committed to its general practice of providing audio description of important graphic information. It conveys all emergency information, such as weather warnings, in audio form as well as in video form. The Commission notes this commitment, and expects CTV to ensure that it provides audio description where appropriate. It further expects the licensee to take the necessary steps to ensure that its service responds to the needs of visually impaired audiences.

Global’s licence contains a similar clause concerning the provision of descriptive audio and the CRTC’s requirement for such service is identical, though specific to each network:

Global confirmed that its policy is to reinforce a program’s textual and graphic elements, such as the presentation of regular weather forecasts, sports scores, addresses, and telephone numbers, with an oral description. The Commission notes this commitment, and expects Global to ensure that it provides audio description where appropriate. It further expects the licensee to take the necessary steps to ensure that its service responds to the needs of visually impaired audiences.

Thus, these networks are expected to provide audio description wherever it is appropriate. There are no specific requirements as to the number of broadcast hours expected to have the descriptive audio service available.

Descriptive Video Service for Visually Impaired Viewers

This service consists of a narrated description of key visual elements as they appear on screen. The purpose of this narrative is to give a visually impaired viewer a mental picture of what is happening on the screen. The description is timed so that it does not interfere with the on-screen dialogue. The descriptive video service is normally provided on the second audio program (SAP) channel. This second audio channel exists as an alternative to the standard audio that normally accompanies the video portion of the television program. Listeners can then choose to receive this second audio channel through either a special decoder or a television set or VCR equipped to receive SAP.

In a 1999 policy statement, the CRTC said that:

With respect to descriptive video services (DVS), the Commission concludes that it is premature to impose specific requirements on licensees at this time. The Commission encourages licensees and the National Broadcast Reading Service to continue to cooperate in order to effect the gradual implementation of DVS.

The Commission, at licence renewal, will explore with licensees the progress that has been made in meeting the needs of the visually impaired.

The Commission considered issues related to DVS during a proceeding concerning the addition of a third national television network (PN 1998-8). The Commission’s approach has been to support, in principle, the gradual implementation of DVS.

This gradual implementation process is clearly seen in the 2000 renewal of CBC’s English- and French-language licences, in which the statement that “it is premature to impose specific requirements on licensees” is repeated. The CRTC then stated in the terms and conditions of both English- and French-language licences that it “encourages the Corporation to continue to develop the use of DVS, and to cooperate with the National Broadcast Reading Service in order to effect the gradual implementation of DVS.”

Following on this, the CRTC has begun to “require” descriptive video services as part of broadcast licence renewal applications. In CTV’s 2001 licence renewal, the CRTC imposed specific conditions of licence:

... on each CTV station relating to the provision of described video. The condition requires CTV’s largest stations (in Toronto, Ottawa and Vancouver) to broadcast, between 7 p.m. and 11 p.m., an average of two hours per week of described video programming during the first two years of the licence term. All of CTV’s stations are required to provide three hours per week in year three, and four hours per week in year five. A minimum of 50% of the hours must be original broadcasts. This programming must be Canadian and be from categories 2(b) and 7. The licensee may, however, count toward fulfilment of this condition a maximum of one hour per week of described video programming that is directed to children and broadcast during an appropriate children’s viewing time.

The Commission further expects CTV, wherever possible, to acquire and exhibit described versions of the Canadian and non-Canadian programming that its stations broadcast. It notes that some American programs already include descriptions in order to fulfil requirements in this area that are in effect in the United States. Finally, the Commission commends the licensee for making concrete proposals with respect to the broadcast of programming that includes described video. The Commission considers that the presence of such programming in the Canadian broadcasting system is an important contribution.

As part of its 2001 licence renewal, the CRTC imposed a similar

... condition of licence on each Global station relating to the provision of described video. The condition requires Global’s largest stations (in Ontario, Vancouver and Quebec) to broadcast, between 7:00 p.m. and 11 p.m., an average of two hours per week of described video programming during the first two years of the licence term. All of Global’s stations are required to provide three hours per week in year three, and four hours per week in year five. This programming must be Canadian and be from categories 2(b) and 7. A minimum of 50% of the hours must be original broadcasts. The licensee may, however, count toward fulfilment of this condition a maximum of one hour per week of described video programming that is directed to children and broadcast at an appropriate children’s viewing time.

Global’s licence also contains a statement of expectation that wherever possible the station should acquire and exhibit described versions of Canadian and non-Canadian programming. Thus, both CTV and Global have specific requirements that must be met with respect to the provision of descriptive video services for visually impaired viewers.

TVA’s licence renewal is slightly different. Here the CRTC stated that:

The Commission expects the large station groups to demonstrate leadership in establishing descriptive video. With regard to CFTM-TV’s market, the Commission expects TVA to provide, during peak viewing hours, DVS in accordance with the following timetable: Years 1 and 2: 2 hours/week; Years 3 and 4: 3 hours/week; Year 5 and following years: 4 hours/week.

The Commission also “emphasizes that the number of hours allocated to DVS must not consist of more than 50% repeats.” [...]

Proposed Solutions

The Committee strongly believes that the present wording of section 3(p) of the Broadcasting Act, stating “programming accessible by disabled persons should be provided within the Canadian broadcasting system as resources become available for the purpose”, is discriminatory. The qualifying phrase “as resources become available for the purpose” detracts from the statement of accessibility and leaves the impression that broadcasting that is accessible to disabled persons is of marginal importance. This erodes Canada’s commitment to equality.

The Committee recognizes that closed captioning and descriptive video services are significant issues that must be addressed to give meaningful effect to the accessibility statement in section 3 of the Broadcasting Act. It is also aware that the Canadian Human Rights Tribunal, in the case of Vlug v. CBC, stated that:

[Broadcasters] shall caption all of their television programming, including television shows, commercials, promos and unscheduled news flashes, from sign on until sign off. As required by Section 53(2)(b) of the Canadian Human Rights Act, this must occur on the first reasonable occasion.

As such, the Committee strongly supports television broadcasting that is accessible to the hearing and visually impaired and encourages the broadcasting industry to work towards the better provision of this access. In addition, the Committee urges broadcasters to fully comply with their broadcasting licence requirements as stipulated by the CRTC.

The Committee notes that it is a condition of some broadcast licences that accessibility be provided. The Committee is aware of the substantial costs of providing programming that is accessible to all. With respect to closed captioning, some costs are covered when Canadian broadcasters purchase pre-captioned programming. In addition, sponsorship agreements, in which advertisers assist with the cost of captioning a program in exchange for certain advertising rights, also help ease the cost of providing this service.

Despite this, many programs are not pre-captioned and thus must have captions added prior to broadcast. Moreover, as an increasing number of new television channels become available, there is a corresponding rise in the need for broadcast material for those channels, and a need for that material to be captioned where necessary.

With respect to French-language programs, this situation is exacerbated by a shortage of trained captionists in Canada, particularly French-language captionists. Nevertheless, the Committee is not persuaded that the cost of complying with broadcast licencing requirements for captioning as stipulated by the CRTC is overly burdensome or constitutes an undue hardship for broadcast licence holders. Accordingly, the Committee makes the following recommendations:

Recommendation 15.1:

The Committee recommends that section 3(p) of the Broadcasting Act be amended to read “programming accessible by disabled persons should be provided within the Canadian broadcasting system.” This amendment would remove the qualifying phrase “as resources become available for the purpose.”

Recommendation 15.2:

The Committee recommends that a training program for closed captioning and descriptive video services be developed and funded by the federal government.

Recommendation 15.3:

The Committee recommends that the federal government develop a program to assist broadcasters in providing closed-captioning and descriptive video services.

Recommendation 15.4:

The Committee recommends that, once the appropriate training and assistance programs are in place, escalating conditions for the amount of captioning and descriptive video provided by broadcasters be phased in with a view to reaching a target of 100% for captioning and descriptive video services.

This assistance should include training programs, in both official languages. Absent a program to train an adequate number of individuals in both official languages, support for broadcasters in providing captioning and descriptive video services is illusory, and thus the overall objective of more comprehensive accessibility will be even more difficult to reach.

The Committee also recognizes that oversight will be key if targets are to be met. For this reason:

Recommendation 15.5:

The Committee recommends that the Broadcasting Act explicitly instruct the CRTC to set rigorous requirements and enforcement mechanisms to eliminate discriminatory practices by broadcasters. These instructions must explicitly include the requirement that captioning and descriptive video services be phased in for all television programming with a view to reaching a target of 100% captioning and descriptive video services.

2003.06.28c – I never met David Kuhn, but he wrote a paper on viewer reactions to described science programs. Was big at WGBH. Is unlikely to be forgotten now that there’s a scholarship in his honour.

WGBH is pleased to award the 2003 David Kuhn Scholarship to two Brighton high school seniors who are pursuing careers in journalism. Sharniece Benders, a member of the National Honor Society, will be attending UMass Dartmouth this fall. Niccole Lambert, a member of the National Honor Society and a participant of Boston College Collaborative’s College Bound program, will be attending Clark University.

During his 23-year career at WGBH, Kuhn lent his skills to a wide range of WGBH projects, from The Advocates and The Ten O’Clock News to Descriptive Video Service and WGBH radio. Mentored himself by broadcast great Fred Friendly, Kuhn was interested in helping young people carve a path in journalism, the career to which he brought so much zest.

Kuhn died in 1993 at age 50. The David Kuhn Scholarship fund gives his colleagues and friends a way to keep alive his name and the spirit of integrity he personified. The outpouring of individual contributions has been matched dollar for dollar by the WGBH Educational Foundation as a sign of gratitude for Kuhn’s work and commitment to his ideals.

2003.06.28d – I can’t believe this. Captioners are complaining to a program originator about the content of the program they are captioning. We desperately need a code of ethics, it seems:

Jeremy Clarkson’s “misogynist” attitude on BBC2’s Top Gear has sparked a revolt by the programme’s female [captioning] team.

In a damning letter to the BBC house magazine Ariel, the women accuse the show and its star presenter of sexism.

“Why, oh, why didn’t the BBC release Jeremy Clarkson to Channel 5 at the same time as the other, rather less offensive Top Gear presenters?” they wrote, adding: “Having just [captioned] Top Gear for the second week running, our patience is wearing thin... Comment after comment about blokeishness, wives and women were finally crowned in one recent edition with a misogynistic explanation ‘in plain English’ which saw three bikini-clad women used to demonstrate the differences between Porsche models.”

The show’s producers have defended the presenter, saying: “Audiences are at the heart of what we do and we know that millions of them would disagree.”

The [captioners] should beware the next time they meet Clarkson. In his column for the Sunday Times, he once warned: “If you call me sexist, I’ll grab you by the epiglottis and bash the back of your head repeatedly into the pavement.”

All right, the guy’s a twat, so to speak, but your job is to sit there and caption the show, not agitate to make a better show for you to caption, no matter how seemingly valid and noble the reason.

Appalling.

2003.06.22a – I am now officially sick and tired of Hollywood studios’ releasing multiple variations of movies without included description tracks when those same films had already been described for first run.

It’s a known problem and has been forever. Now we have two worst-case scenarios:

  1. T2: First DVD had description, somewhat incredibly. Second did not (a confirmed bit-budget issue – not enough space on the disc when everything else they insisted upon including was accounted for). Current “Extreme Edition,” with separate standard- and high-definition discs, carries no description. This is perhaps a mitigated case. The DVD author wrote to me (and the DVD-List) to explain:

    If you’ve ever watched any of the DVDs I’ve produced, you’ll notice that there is always a bit-budget issue, because I like to give DVD viewers their money’s worth by filling the DVDs to near bursting; the problem really is that every single consumer wants a different set of features for their money, depending upon their individual tastes. And since I’m basically trying to stuff twenty pounds of stuff in a five-pound bag, there are choices to be made, sometimes by me and sometimes by the studio marketeers. Believe me, I would love to be able to include absolutely everything on one release... but there you have it.

    Then again, since we had three separate five-pound bags over the years, I think we did manage to get most of that twenty pounds in across the three releases of the title on DVD... if you really want DVS, you can always use the original DVD release. I’m sure it’s readily available on eBay, and I’m making the bold assumption that folks who want to hear the DVS track aren’t really worried as much about not seeing the better picture quality of the later versions of the disc... ;-)

    That was Van Ling (interview), who strikes me as sincere and well-informed on accessibility.

  2. Black Hawk Down: This one is outrageous. I am one of the few people on the planet, perhaps the only one, to have seen this picture with captions and descriptions at the theatre. I had more information than anyone else in the room, possibly anyone else ever, and I still couldn’t figure out what the hell was going on. (The claim that you always know where you are and who’s doing what is a flat-out lie.) So the actual efficacy of the description track in making the movie, as opposed to the visuals, understandable is in question. Nonetheless, it was available, and in the past year and a half, no fewer than two DVD releases comprising four discs have been issued with no description track. (First; second.)

My advice to studios is: Don’t trash your assets. You spent all that money on description tracks; use them everywhere you can. And stop pissing us off: Only a tiny sliver of viewers who would enjoy description ever watch the theatrical description, meaning your investment of tens of thousands of dollars is amortized over a handful of viewers. Do you want to be paying a thousand dollars a head (for described movies only a few people see in the cinemas) or pennies a head (for descriptions available to any DVD viewer)?
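For the sceptics, here is that amortization argument in numbers. (The figures are hypothetical – description budgets vary – but the shape of the arithmetic is not.)

    # Hypothetical figures for the amortization argument (not actual budgets).
    description_cost = 30_000   # assumed cost of one feature's description track

    theatrical_viewers = 30     # the handful who ever hear it in cinemas
    dvd_viewers = 500_000       # anyone with the disc could select the track

    print(description_cost / theatrical_viewers)  # $1,000.00 a head
    print(description_cost / dvd_viewers)         # $0.06 a head -- pennies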

Later, we can have a nice talk about inaccessible audiovisual menu systems on DVDs.


Well, we’re back to something vaguely resembling irregular updates.

2003.06.01a – “SideKick, other devices benefit the deaf”:

The key, Saliga said, is often to take technology such as television’s closed captioning, which was developed for the disabled, and bring it to the mass market.

“My children actually grew up believing that closed captioning was designed for gyms,” Saliga said [revealing his own ignorance and inability to explain the most widespread adaptive technology there is – Ed.]. “Whatever technology we develop, if we market them and sell to the mainstream, we give those individuals that have impairment more opportunity than they ever had before. The price drops, the technology is more prevalent, and the user who happens to have maybe a permanent disability gets the benefit of a lower price.”

It’s always nice to get a discount for being crippled.

A vice-president at a major Canadian broadcasting alliance – not, in fact, the manageress of captioning at that same alliance – told me once that some CRTC petit fonctionnaire or other had described captioning as useful for health clubs.

And these are the people “regulating” television broadcasting in Canada.

It’s just marvy that you can use captioning in places where no soundtrack is available, but that is not its original function. Next people will be telling us that automatic doors and level entrances were intended for nannies pushing strollers (because moms these days are just too busy to look after their own kids).

2003.06.01b – “Students go deaf for a school day”:

Meyer was one of 55 students participating in the American Sign Language club’s Deaf-A-Thon.... Promptly at 7:15 A.M., participating students inserted foamy green plugs into their ears, then placed cushioned headphones on top. They also took a vow of silence, swearing to communicate only through sign language [suddenly everybody could sign? – Ed.] or pen and pencil....

It made government class difficult, said senior Betsy Holley, as she struggled to follow the movie Thirteen Days. Even though her teacher turned on the closed-captioning service for Holley, the words and action were too quick.

“I would get confused and try to start reading the captions again, but then they’d move on to something else,” she said. “It was really hard.” [...]

However, a portion of the student body knows some American Sign Language, which is taught as a foreign language at Princeton. [Aha. – Ed.]

Captioning generally is too fast for people new to it, but Studies Have Shown that even less than an hour of viewing captions increases your reading speed. After two weeks, you should have stopped complaining. Also, it’s nearly a sure thing that this student was too far away from the screen to read comfortably, another issue Studies Have Shown to matter.

Interesting about the sign-language courses in that school, huh?

And extra-fun fact about Thirteen Days? The DVD’s closed captions are all set at screen top. For some reason, I liked them there. They seemed to defy gravity.

2003.06.01c – And their job is so difficult, unlike, say, that of captioners, who just sit there transcribing all day. Why, a nation of 25-year-old female poli-sci students could do that for $12 an hour.

Right. Back to our story. “Subtitling films: Foreign voices translate into sweat”:

A former journalist and advertising copywriter, [Tim] Sexton, 42, is part of the hidden world of subtitling, a challenging, arcane milieu.... Poor or hard-to-read titles (most notably, white subtitles against a white background) are a thorn in the side of filmmakers and distributors, dooming even the best of material. Brilliant subtitles, on the other hand, can win plaudits for foreign filmmakers and attract American audiences to even difficult material. [...]

“Pedro [Almodóvar] films his own scripts and his words are immensely important to him... But he likes to have the minimum of subtitles so they don’t detract from the visual. It’s a nightmare when everyone is talking at once or the camera cuts fast between scenes. ‘If I only had another second,’ I tell myself, ‘I could make this so much clearer.’ There’s a richness in the dialogue and you have to sacrifice so much.” [...]

Translators are given “spotting lists,” which break down the dialogue into frames or fractions of seconds to determine how long each title can last, she explains. Not only must they capture the meaning but, even more demanding, everything must fit.

“It’s kind of like doing a crossword puzzle, in that you have a set number of characters you can use,” says Schoch. “Sometimes you have only 25 letters to convey very complex things. And because you can’t translate everything, 40 percent of the content is lost.” [...]

Even quality subtitles, however, don’t bring in the crowds. “American audiences generally don’t want to go to the movies to read,” said Paul Dergarabedian, president of Exhibitor Relations Co., a company that monitors box office performance. “They’d rather the experience flow over them, be spoon-fed rather than interactive. Reading dialogue takes them out of the movie, they say, shattering the illusion.”

But Sony Pictures Classics’ Barker maintains that younger audiences are far less resistant to subtitles.

“Run Lola Run and Crouching Tiger, Hidden Dragon became huge because the younger generation, who are used to reading instant messaging on their home computers and CNN crawls at the bottom of the screen, are much more open to subtitles than people in their 40s and 50s,” he said.

Exactly. The multitasking generation can handle your spindly little titles. Bring it!
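Incidentally, Schoch’s crossword-puzzle constraint reduces to simple arithmetic. A sketch, with assumed norms (line lengths and reading speeds vary by distributor; these numbers are merely illustrative):

    # The subtitler's character budget, under assumed norms: two lines of
    # about 35 characters, read at roughly 15 characters per second.
    MAX_LINE = 35
    MAX_LINES = 2
    READING_SPEED = 15.0   # characters per second

    def char_budget(duration_s: float) -> int:
        """Most characters a subtitle displayed for duration_s can carry."""
        return min(int(duration_s * READING_SPEED), MAX_LINE * MAX_LINES)

    print(char_budget(1.7))   # a quick cut leaves you about 25 characters
    print(char_budget(6.0))   # a long take still caps out at 70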

2003.06.01d – On the other hand, subtitling in Brazil ain’t gonna pay the bills. If I were a 25-year-old female with a poli-sci degree, I’d stick to my rewarding future of quality captioning. “Why I Couldn’t Take Brazil” (could anyone?):

After heavy word of mouth, I was able to get some translation work for several companies, in addition to doing work for a colleague at HBO of Brazil. I got into the subtitling/dubbing sideline through her, and even urged my wife to get involved in it, as well. She took the HBO course, which led nowhere because of the recession.

In point of fact, the work was scattershot at best. Sometimes, I would get two or three films to work on, other times I would get nothing for weeks. When I did get work, I would spend many days, nights and weekends at the computer terminal, away from my family, friends and relatives, while I was involved in the transcribing process. The pay was decent enough, but I still needed to teach to pay the bills, plus I really wanted a less ephemeral and time-consuming occupation.

Anything related to translation is ephemeral if you don’t have the sinecure of a government job, and it’s always time-consuming.

2003.06.01e – Anthony Minghella is the new chair of the British Film Institute. One can only hope things improve.

“The Pedro Almodóvar film [Talk to Her] is an extraordinary achievement, but I don’t know how many people in Britain will see it. It’s urgent and funny, but neglected because it’s not in English. We’ve got an audience which has grown a resistance to subtitling. But that’s very easily overcome. You forget them after five minutes.”

2003.06.01f – Crotchety, squeaky-voiced film critic John Harkness is only now discovering the power of the Audio and Subtitles buttons on his DVD remote. Joining us late, John?

These three discs present the Japanese versions of the films intact, with parallel English-language versions if you don’t want to make your kids read subtitles. The English versions are quite good, and the discs provide an interesting subtitle option: you can watch the English version while running the subtitles for the Japanese version and see how they differ.

And someday they won’t differ. It’s a problem I’m gonna solve.

2003.06.01g – Or a preference setting. William Gibson (otaku):

Remember the debate around the ethics of colourizing films shot in black-and-white? Colourization, up the line, is a preference setting. Probably the default setting, as shipped from the factory.... This spreading, melting, flowing together of what once were distinct and separate media, that’s where I imagine we’re headed. Any linear narrative film, for instance, can serve as the armature for what we would think of as a virtual reality, but which Johnny X, eight-year-old end-point consumer, up the line, thinks of as how he looks at stuff.

If he discovers, say, Steve McQueen in The Great Escape, he might idly pause to allow his avatar a freestyle Hong Kong kickfest with the German guards in the prison camp. Just because he can. Because he’s always been able to. He doesn’t think about these things. He probably doesn’t fully understand that that hasn’t always been possible. He doesn’t know that you weren’t always able to explore the sets virtually, see them from any angle, or that you couldn’t open doors and enter rooms that never actually appeared in the original film....

Somewhere in the countless preferences in Johnny’s system there’s one that puts high-rez, highly expressive dog heads on all of the characters.... You get complete breed-selection, too, with the dog-head setting, but that was all something he enjoyed more when he was still a little kid. But later in the afternoon he’s run across something called The Hours, and he’s not much into it at all, but then he wonders how these women would look if he put the dog heads on them. And actually it’s pretty good, then, with the dog heads on, so then he opts for the freestyle Hong Kong kickfest.

It may be that he’ll have to be taught to watch films, in the way that we watch them (or watched them, as I think DVDs are already changing that, not to mention changing the way you approach making them).... I see Johnny falling asleep now in his darkened bedroom, and atop the heirloom Ikea bureau, the one that belonged to his grandmother, which his mother has recently had restored, there is a freshly-extruded resin action figure, another instantaneous product of Johnny’s entertainment system.

It is a woman, posed balletically, as if in flight on John Woo wires.

It is Meryl Streep, as she appears in The Hours.

She has the head of a chihuahua.

And when she barks, English-language captions-cum-subtitles appear in Johnny’s preferred hideous Windows system font and appalling colour combinations.

Listen. Every kid under 30 to whom I’ve ever tried to explain anything regarding accessibility has grasped the concepts instantaneously. It’s happened over and over again at the movies, where the kids will literally glance at a large display on the back wall reading words in mirror-image and instantaneously know exactly what it’s for (and that I do indeed need to sit right in front of it and not off to the side). They sit there and dick around with caption/subtitle/audio settings on their fave discs just to kill time. They have never known television without “closecaption.”

It’s the old people who are the problem. Even the old people in the industry, actually.

2003.05.27 – I had yet another client conversation today in which the meme “You know, when we do things for disabled people, it often ends up benefiting everybody else!” was trotted out, like an inked software developer in Middle America. (An actual example, in fact! Richard Florida, The Rise of the Creative Class, p. 215: “This young man had spiked multicoloured hair, full-body tattoos.... As I would later learn, he was a gifted student who had just inked [!] the highest-paying deal... ‘That’s easy. We wanted him because he’s a rock star... when big East Coast companies trek down here to see who is working on their project, we’ll wheel him out.’ ”)

It’s not really true. Yes, level entrances and automatic doors assist nannies with strollers (moms are far too busy to raise their own kids), and captioning shore is handy in them gym-naysiums, but the true purpose of accessibility is... accessibility.

But here’s a counterexample.

Our dear lovelorn Turkish friend Tantek searches high and low for online Matrix Reloaded transcripts. There aren’t any. Now, if a project I’m working on comes to fruition, it will be relatively easy to unite:

  • the first-run captions
  • the script for the audio-description track

into a text-only narrative equivalent of an original film. I was told it was attempted once, in an experiment for deaf-blind students. I can’t find the reference now. (I did mention it on Slashdot.)
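The mechanics are easy enough to imagine. Assuming both sources exist as timecoded text, the merge is little more than a sort. A sketch (the data layout, the sample lines, and the “>>” convention for description are all my inventions):

    # Hypothetical merge of two timecoded sources -- the caption file
    # (dialogue) and the description script (narration) -- into one
    # readable narrative.

    def merge_transcript(captions, descriptions):
        """Interleave (seconds, text) events from both tracks by time."""
        events = [(t, 0, text) for t, text in captions]       # 0 = caption
        events += [(t, 1, text) for t, text in descriptions]  # 1 = description
        for t, kind, text in sorted(events):
            yield (">> " if kind else "") + text

    captions = [(12.0, "I know kung fu."), (13.5, "Show me.")]
    descriptions = [(11.0, "Neo opens his eyes."),
                    (14.0, "Morpheus rises from his chair.")]

    print("\n".join(merge_transcript(captions, descriptions)))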

In a case like this, Tantek’s dream (I refer to the dream of an accurate transcript, not that of a hooping girlfriend unit) could be realized. Then again, one expects that studios would treat such a text-only analogue as more valuable than gold. After all, Miramax Books earns a good penny publishing badly-typeset screenplays. (So do Faber & Faber and others.) A narrative of that sort has market value, hence it must be protected under lock and key. (You do realize that the original Batman tape was accompanied by an armed guard while NCI captioned it? People take this shit seriously. And that will only worsen.)

In the interim, can’t Tantek read my review and go see it a fourth time, with captions? It is playing in San Francisco.

Note (2003.05.27): Intervention posted (ages ago).

2003.04.11 – It appears that Australian deaf groups have reached a deal with broadcasters that will cause human-rights complaints lodged against certain broadcasters to be withdrawn. The complaints alleged a breach of the Disability Discrimination Act through failure to provide captioning on “free-to-air” television.

So what’s wrong with the deal?

The proposed deal does not guarantee full accessibility to Australian TV programming; exempts certain program categories altogether and favours others; reiterates half-truths; and is being rushed into place, as though the various powerful interests involved were treating it as a done deal.

A very long time ago, I intervened in a previous Australian captioning inquiry. I see not much has improved.

  • Human Rights and Equal Opportunity Commission explanation
  • Application in user-unfriendly Word format
  • Proposal (full documentation – the important file) in user-unfriendly Word format

Rushed deadline

Suspiciously, HREOC believes that, since certain parties it deems more important than the general public have already discussed the proposal in secret long before its announcement, a short four-week comment period is sufficient. It is not.

As things presently stand, you have until May 9, 2003 to submit comments, preferably to disabdis@hreoc.gov.au.

One can reasonably expect another filing from me.

Full documentation

As a public service, here are the application and proposal in readable form.

Application

Application for Exemption Under Section 55 of the Disability Discrimination Act

APRIL 2003

This is an Application for Exemption from the Disability Discrimination Act (DDA) in so far as it relates to the broadcasting of television programming by the ABC, SBS, Network Ten, Channel Nine and the Seven Network (“the Applicants”).

Because of the detailed process of consultation which has already occurred, this Application has been made in a very brief form. The Applicants understand that this is the preference of the Commission. If, however, more detail is required about the Application, the Applicants will do their best to provide it.

Preamble

The Applicants have been providing captioning services for deaf and hearing-impaired viewers since the 1980s. The commitment was formalised in regulations enacted under Schedule 4, section 38 of the Broadcasting Services Act which require broadcasters to provide closed captioning for all prime-time programming (6.00pm to 10.30pm) and all news and current affairs outside this period. The Applicants have complied with and exceeded the requirements under the Broadcasting Services Act.

Prior to and in 2001, HREOC received a number of complaints from deaf and hearing-impaired groups and individuals under the Disability Discrimination Act. These complaints alleged that the level of captioning being provided by the Applicants amounted to a breach of the DDA as it amounted to discrimination against the deaf and hearing-impaired community. This contention was disputed by the Applicants.

In March 2001, HREOC convened a forum attended by a number of interested parties including representatives of the Applicants, the Federation of Australian Commercial Television Stations, the Department of Communications, Information Technology and the Arts, Deafness Forum Australia, Australian Association of the Deaf and the Deaf Council of Western Australia. The purpose of the forum was to explore resolution of the issues raised by the complainants.

As a result of this forum, the Applicants commissioned research to determine the attitudes of members of the deaf and hearing-impaired community and their needs in relation to captioning, and to obtain feedback on what areas of programming should be given priority.

The Research

This research was carried out by Sherlock Research and indicated that the deaf and hearing-impaired community was generally satisfied with the quality of captioning provided. Overwhelmingly, the main priority identified for increased captioning was for children’s programming, especially educational programs and pre-school programs.

This was seen as essential in order to:

  • assist children to develop language skills
  • provide an opportunity for children to learn
  • allow deaf or hearing-impaired parents to participate in learning development

Based on this research, in August 2002 the Applicants put a Proposal to the deaf and hearing-impaired groups that addressed the priorities identified by the research and introduced increases to captioning levels to be phased in over time. The Proposal was the result of a long and detailed process, demonstrating the commitment of the Applicants to addressing the needs of the deaf and hearing-impaired community.

Following detailed discussions with representatives of Deafness Forum Australia, Australian Association of the Deaf and the Deaf Council of Western Australia, those organisations have now indicated that they accept the Applicants’ Proposal. [...]

Summary of Proposal

  • Caption all programs (other than sport) which commence in prime-time until their conclusion
  • Staged increase in hours to reach minimum goals – 55% by end 2005 and 70% by end 2007 (6am to midnight programming)
  • Priority is given to captioning of pre-school and children’s programming – by end 2007, over 1400 hours of pre-school, children’s and schools’ programs will be captioned each year.

The Proposal allows for increases to be phased-in over time. The Applicants see this as essential due to:

  • the limited availability of captioners
  • the desire to maintain high quality captioning services, necessitating highly trained, experienced captioners
  • the significantly increased financial commitment for each Applicant, resulting from the increases, due to the cost of captioning (e.g., 16 hours of labour are required to caption a one-hour pre-recorded program)

Scope of Exemption

The exemption sought under the DDA will apply to all broadcasting services provided by the Applicants.

Period of Exemption

In order to implement the proposed increases to current captioning levels, the Applicants need to have certainty in relation to their captioning requirements. Accordingly, the Applicants seek the exemption for the maximum period of 5 (five) years allowed by the DDA.

Proposal

CONFIDENTIAL, FOR DISCUSSION PURPOSES ONLY

FREE-TO-AIR TELEVISION CAPTIONING

Revised Proposal By Networks Seven, Nine, [and] Ten, ABC & SBS (“Broadcasters”)

5 FEBRUARY 2003

This document has been prepared to assist confidential discussions with the HREOC Working Group on Free-to-Air Television Captioning. The document is provided on a confidential and “Without Prejudice” basis. It should not in any way be interpreted as an admission of liability under the Disability Discrimination Act.

Broadcasters seek the agreement of Deafness Forum, AAD and Deafness Council of WA (each Group being a member of the Working Party) to the following Proposal which will form the basis of an application to HREOC by Free-to-Air television broadcasters for a temporary exemption under section 55 of the Disability Discrimination Act. The Free-to-Air television broadcasters intend to apply to HREOC for an exemption until the end of 2007 in relation to captioning of television transmissions to allow the Broadcasters to implement the Proposal (the “Temporary Exemption Application”). The Free-to-Air television broadcasters will lodge the Temporary Exemption Application on the basis of the Proposal once the Proposal is accepted.

The Proposal will be accepted when a representative of each of Deafness Forum, AAD and Deafness Council of WA has accepted the Proposal on behalf of the members of their Group and has agreed on behalf of the members of their Group to fully support the Temporary Exemption Application.

Implementation of the Proposal is conditional on:

  • the grant by HREOC of a Temporary Exemption in terms acceptable to the Broadcasters; and
  • HREOC notifying the Broadcasters that each of the current complaints lodged with HREOC in relation to captioning of television transmissions has been withdrawn.

(The first date on which both of these events have occurred is referred to in the Proposal as the “Effective Date.”)

To begin immediately upon the Effective Date

  1. Captioning until conclusion of programs

    • All broadcasters to caption programs (other than sport) that commence in prime time until their conclusion.
    • For example, a program commencing at 10:00pm (within prime time) and ending at 11:00pm (outside prime time) will be captioned in full.
    • Currently, legislation requires programs showing between 6:00pm and 10:30pm to be captioned.
  2. Equipment standards

    Broadcasters to support actions of Working Party seeking Government action to:

    • require captioning decoders in all new television equipment; and
    • improve availability of VHS and DVD players which allow recording of closed-captioned television programs.
  3. Staged increases in captioning

    Captioning of Seven, Nine and Ten pre-school (P) and children’s (C) programs

    • From 31 December 2004 Seven, Nine and Ten will caption all new P and C programs.
    • This means at least 80% of P and C programs will be captioned by 31 December 2007.

    Captioning of ABC schools and children’s programs

    • From the Effective Date the ABC will begin captioning all new schools programs, with an annual target of approximately 50 hours.
    • From 1 July 2003 the ABC will begin captioning:
      • all new Australian pre-school programs
      • all new Australian children’s programs

      with an annual target of approximately 50 hours.

    • From 1 July 2004 the ABC will begin captioning new overseas children’s programs with an annual target of approximately 50 hours.
    • As part of staged increase, by 31 December 2007, ABC will caption all schools programming and approximately 500 hours of pre-school and children’s programs broadcast annually.

    Together proposals 3 and 4 mean that by 31 December 2007 over 1400 program hours of pre-school, children’s and schools’ programs will be captioned.

  4. Overall captioning levels

    • Broadcasters to implement a staged increase in the hours of closed captioned programs broadcast. The broadcasters will use their best endeavours to achieve the following minimum levels of captioning by the following dates:
      • 55% by 31 December 2005
      • 70% by 31 December 2007*

      Percentages apply to programs:

      • broadcast on the broadcaster’s primary channel;
      • between 6:00 A.M. [and] midnight;
      • measured on an annual basis.

      Percentages include program repeats.

    • Programs do not include advertising, sponsorship or promotional material, or community service announcements.
    • Captioned programs include foreign-language subtitled programs.
    • Hours of programs broadcast exclude foreign language programs exempted by section 38(4B) of the BSA.
    • In addition, in relation to the ABC, the percentages apply to all nationally transmitted programs broadcast. In addition to these nationally transmitted percentage targets, the ABC will continue to caption all state and territory news and current affairs programs (approximately 1,500 hours of captioned programs annually). The ABC will also continue to increase captioning levels for other regionally produced and broadcast programs.
    • This proposal means that by end 2007 in excess of 20,000 program hours will be captioned each year.
    • Broadcasters will reassess captioning levels at end of 2007 in light of circumstances existing at that time. Broadcasters undertake to commence a review process in 2006.
    • SBS is committed to providing maximum possible captioning for deaf and hearing impaired audiences. However, due to its relatively small funding base, it is not possible, in the absence of further funding, for it to meet the 55% and 70% industry targets without significant detriment to its programming. The agreement of SBS to these captioning levels is therefore conditional on receiving allocated funding for captioning from the Federal Government as sought in its current Triennial Funding Submission. If such funding is not received, SBS agrees to captioning levels of 50% by 31 December 2005 and 60% by 31 December 2007.
    • Increased ABC captioning is subject to formal approval by the ABC Board (proposed for decision in early March) and compliance with other funding approval procedures.

2003.03.20

Actual testing with actual crips

A really quite solid study of BBC Interactive’s accessibility to various disability groups has been published by its contractor, System Concepts.

Actually, it came out in 2002. News travels slowly some days.

The study examined the BBCi Web site, which is merely BBC Online, against various published accessibility criteria. But more usefully, System Concepts actually tested the site with disabled users.

Let’s run through some highlights!

Comparison against other sites

This part is a tad wonky. The report surveyed competing Web sites and attempted to sort them into high, medium, and low compliance levels. The high-compliance sites were:

  • amazon.co.uk
  • google.co.uk
  • uk.geocities.yahoo.com
  • uk.groups.yahoo.com
  • upmystreet.com

I find this a bit rich given Amazon’s complete inability to do so much as add alt texts to its images, and its asinine presentation of amazon.com/access/ as a putative accessible site.

The next three sites on that list have almost no graphics (Google especially) and present at best an information-design problem. UpMyStreet does in fact meet a range of accessibility guidelines; I’ve even used it as an example in presentations.

In fact, UpMyStreet is interviewed later in the report:

Q. Do you test with users?

A. Always. In addition, we employ a consultant (who uses a screen reader) to review the sites that we build. Having a visually-impaired user review the site makes more difference than any amount of guideline-following. He sends us audiotapes of the screen reader output, and I play these to the developers!

“Medium-compliance” sites, as listed in this section, are more realistic. They’re actual sites with actual content and complexity, including:

  • bbc.co.uk [should the subject of the study be on this list?]
  • britannica.com
  • eBay.co.uk
  • ITN.co.uk
  • ManUtd.com [Manchester United – my personal fave]

I’m not sure that I or other experts would label all these as medium- and not low-compliance. The true state of commercial Web design involves tiny degrees of separation among appalling, abysmal, regrettable, poor, and passable accessibility, in my experience.

But BBC was, to its credit, willing to shell out considerable cash to assess its own degree of inaccessibility, and it can only fix its own sites, not the competition’s.

Generally deemed useful

Given a set of over 130 possible adjectives and adjective phrases to use, subjects tended toward favourable terms:

Four of the ten participants... selected the word “useful” to describe the site. This is a strong finding, as participants had a choice of over 130 cards, so the odds of this happening by chance alone are extremely remote.

Our discussion with participants revealed that it was the content of a range of BBCi Web sites that they found useful. By describing the content of sites as useful, participants are demonstrating that they were able to access and understand areas of the site....

In addition, three of the ten participants... selected the word “easy to use” to describe their experience of using the site. Participants successfully completed more tasks on the sites they labelled as easy to use, suggesting that these sites were more accessible to them....

These ratings showed that the accessibility of the site varied depending on the disability of participants. Participant success rates were highest for individuals with motor and hearing impairments, whereas individuals with learning difficulties had the lowest success scores. However, all participants were able to access some parts of the BBCi Web site....

Perhaps this level of satisfaction is due to the presence of genuinely useful information on the BBCi site:

In addition, the participant with visual impairments (uses screen reader) commented that she liked the recipe information on the Food site. She has the Delia Smith recipe books at home but is unable to read them without a magnifier and has not got round to getting the Braille version. The Web site is a useful accessibility tool as the screen reader reads the recipes out to her and she can also print them out in large font to access them at home.

Further, the researchers didn’t just go by quantitative measures:

On completion of the test, we elicited subjective opinions from participants. We collect subjective data because participants who appear to struggle may in fact find the site accessible. Conversely, just because a participant completes the task successfully does not mean that the participant found the task a simple one or the site accessible. Subjective opinions help us uncover these accessibility issues.

Visuals are important

I mean, I have been saying that for a while, but not enough people do.

Several users (both participants with visual & hearing impairments, learning difficulties) commented positively on the use of graphics on BBCi. With the exception of one of the visually-impaired participants (who uses a screen reader), all these participants wanted more pictures and icons on the site.

Three out of the ten participants (motor & hearing impairments, learning difficulties) used the word “professional” to describe the site. The hearing-impaired participant used the word to describe the pictures on the Food site which she said helped her to understand the content of the site. She said content would be easier to understand if there were more pictures and if these complemented the text, for example recipes could have pictures alongside them which demonstrated how to make the dish.

The other hearing impaired participant used the EastEnders episode update page as a good example of pictures supporting the text.... The participant with learning difficulties also chose the word “professional” to describe the “good colours and pictures” on the Sport site. He wanted more pictures and icons on the site as he found them helpful.

During the testing the visually-impaired participant said he preferred graphical sites because he can make use of them, but stressed that pictures must be “well-defined.” The other visually-impaired participant (who uses a screen reader) said she liked the fact that the BBCi Web site did not have too many graphics. She did not dislike graphics per se, as long as they were labelled properly, but commented that this was not true of all the graphics on the BBC Web site. Some graphics only provided a file name, but no useful description of what was presented.

Also, the damage done to graphic design by screen magnifiers was, at last, acknowledged:

Screen-magnification software helps the user enlarge small sections of the Web page. The diversity expert we interviewed used magnification software for reading Web page sections but relied on using the largest font size when referring to any aspect connected with page layout. He commented that many sites were difficult to navigate using a magnifier due to loss of context. Image quality can also be a problem; the sharpness of some images was severely affected when viewed through a magnifier.

It would be interesting to find a device that magnified the screen in the following ways (sketched in code after the list):

  1. All type enlarged from the underlying outline font file. If the default screen-font size is 12 pixels and you want 300% enlargement, the system would call for a 36-pixel font rather than tripling the size of the 12-pixel bitmap.
  2. Images selectably unmagnified. Upon request, only non-image objects on the page would be enlarged. You could make an exception for input and button elements, whose text you would probably want to read. When you opt for images to remain unmagnified, the system could still block off an enlarged section of the page corresponding to the area the image would occupy if you did enlarge it.
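
In code, the two behaviours might look like the following minimal TypeScript sketch. Everything here is invented for illustration – no real magnifier exposes such an API:

    interface PageObject {
      kind: "text" | "control" | "image";
      baseFontPx?: number;  // authored font size, for text objects
      widthPx: number;
      heightPx: number;
    }

    // 1. Enlarge type by re-requesting the outline font at the scaled size,
    //    never by scaling up the rendered bitmap.
    function fontSizeAtZoom(baseFontPx: number, zoom: number): number {
      return Math.round(baseFontPx * zoom);  // 12 px at 300% becomes 36 px
    }

    // 2. Optionally leave images unmagnified. Text in form controls is
    //    treated like ordinary text, since you probably want to read it.
    function renderScale(obj: PageObject, zoom: number, magnifyImages: boolean): number {
      return obj.kind === "image" && !magnifyImages ? 1 : zoom;
    }

    // Even an unmagnified image still reserves the space its enlarged self
    // would occupy, so the page layout keeps its shape.
    function reservedBox(obj: PageObject, zoom: number) {
      return { w: obj.widthPx * zoom, h: obj.heightPx * zoom };
    }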

Mobility impairment

The study provides some of the only available information on the true accessibility problems faced by people with mobility impairments.

A participant with motor impairments used the word “cluttered” to define the site. This participant had difficulties as a result of the amount and position of items on the page. He commented that there was too much information on the page and as a result the BBC had crammed information near the links and icons and made the link text smaller. This was a problem for accessibility as it made it harder for him to select links using the assistive technology’s grid system.

When using the grid it is easier if there is space around links, so that when the grid square goes over a link it does not cover more than one link and cause the user to accidentally select the wrong link. Links in a larger font are easier to click on as users do not have to take the grid square down to a minute level to cover the link.

We observed the participant with motor impairments attempting to click on a link in the left navigation bar. In order to do so he had to take the grid square down three levels at which stage it was difficult to read the numbers on the squares.

This reduction in grid size can cause a problem for individuals with poor eyesight. The user also commented that if you are using the mouse with shaky hands it is difficult to click on links if they are in a small area. He would prefer less-cluttered screens so there would be more space around each link and a larger surface area to click on.

By “grid system,” I believe the report is referring to a method of successive quadrants (invented by Jutta Treviranus, according to Jutta) or a brute-force method of dividing the screen into grids.

In the successive-quadrants method, the keyboard, for example, is divided into four parts. The assistive technology asks you, in sequence, which quadrant you want. You hit whatever switch you can hit to select it. Then that quadrant is itself subdivided into quadrants, down to the level of individual keys, which can be selected in sequence. The same principle could be applied to a computer screen, letting you select which part you want to manipulate.

In a brute-force grid system (that’s my term, by the way), the screen is divided into preset squares, which you must select, which then display their own preset squares, and so on.

Or so I understand. Adaptive technology of this sort is a field unto itself (with a lot of expertise here in Toronto, actually) and I really only have a conversational knowledge of it.
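
With that caveat, here is a toy TypeScript sketch of the successive-quadrants idea – invented names, no actual product’s behaviour:

    interface Rect { x: number; y: number; w: number; h: number; }

    // Each switch press picks one of four sub-rectangles (0 = top left,
    // 1 = top right, 2 = bottom left, 3 = bottom right).
    function quadrant(r: Rect, pick: 0 | 1 | 2 | 3): Rect {
      return {
        x: r.x + (pick % 2) * (r.w / 2),
        y: r.y + Math.floor(pick / 2) * (r.h / 2),
        w: r.w / 2,
        h: r.h / 2,
      };
    }

    // Three picks on a 1024×768 screen narrow the target to a 128×96
    // region; a few more reach something the size of a single link.
    let region: Rect = { x: 0, y: 0, w: 1024, h: 768 };
    for (const pick of [0, 3, 1] as const) {
      region = quadrant(region, pick);
    }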

In any event, since most people who are introduced to Web accessibility tend to equate accessibility with blindness, I welcome all the evidence we can put together explaining the barriers faced by people who have trouble moving the arms and/or hands.

Extracted lists of links

I think this problem, experienced by the screen-reader user, was poorly explained:

We observed this participant becoming confused during the testing when she did not know her location on a Web page and within a site. For example, whilst searching for an explanation of the ingredient “Vacherin” on the Food site, she was taken to the glossary page where she was read out a series of letters. As she had sorted her links alphabetically these letters were not read out one after another but where they appeared within the alphabetical list of links. The user was not sure if she was in the recipes page or the glossary and did not know what the letters were. The screen reader did not provide her with any useful information about the page for her to orientate herself.

Participants and selection bias

In my book, I wrote a whole section on recruiting users with disabilities for testing. It is not at all easy.

Consultancies will band together to develop a database of disabled subjects in various cities who can be called upon to do user testing. The population may be small, and after participating in one or more tests, standard protocols require those subjects be disqualified from further tests for a number of years. (Otherwise they become “professional testers.”) I argue that this nicety must be sacrificed if we’re ever going to get anywhere in testing by actual disabled users. It can further be argued that you want the most experienced users working for you because they have the greatest fluency in adaptive technology and Web use. It might be oxymoronic to imagine a novice disabled Web-surfer in the first place; you need such advanced software skills just to duplicate the basic functions of a nondisabled person that there may be no such thing as actual newbies.

The BBCi report describes the participants:

  • P1, Ismail, has mild learning difficulties.... He works for Mencap [another of those chillingly Orwellian British nonce words; it’s formerly the National Society for the Mentally Handicapped]
  • P2, Theresa, is visually impaired.... She works for ESDA [East Sussex Disability Association]
  • P3, Graham, is visually and hearing impaired.... He works for ESDA
  • P4, Alan, has a motor impairment.... He will shortly be working for ESDA
  • P5, Maureen, has motor impairments.... She works for ESDA from home
  • [P6, P7, P10 deleted]
  • P8, Cathie, has a hearing impairment.... She works for the British Deaf Association
  • P9, Sue, has dyslexia.... She works for the British Dyslexia Association

Notice a pattern here?

I would not have anticipated this trend of selecting disabled users from disability service organizations. I wonder if this cohort would be more apt to offer remarks influenced by political ideology. I get this impression from Cathie, who does a lot of complaining about the English language used on the site. Whereas a learning-disabled person has a disability that may prevent understanding of written English, a deaf or hard-of-hearing person without such a disability merely has to learn the language. The dividing line is not clear-cut, but we’re definitely talking about two separate groups.

Participant welcomes

An interesting detail in experimental procedure with the subjects:

We welcomed the participant to the laboratory or thanked them for allowing us to visit them at home/work and described the background to the test. We emphasised that the testing was being carried out by an independent research agency. This meant that the participant could be critical of the Web site without feeling that they were criticising the designer.

The administrator explained that the purpose of the testing was to obtain customer feedback on the accessibility of the BBCi Web site. We made it clear that it was the Web site, not the user, that was being tested....

We... asked the participants for some background information about their Internet usage, their abilities and any assistive technologies used. The purpose of this brief interview was to gain an understanding of the participant’s capabilities, the types of assistive technology they were using, and to help them relax.

Doesn’t this just conjure an image of British hospitality? Because they speak so well, they can give the impression of being really quite engaged with you. That’s the feeling I get from this passage.

Conclusions for participant testing

The report is, to my knowledge, the only free-of-charge document that gives basic facts about the differences in testing with disabled vs. nondisabled users:

Testing sites with disabled people takes time, for a number of reasons.

  • It takes longer than usual to recruit disabled participants [unless of course experimenters simply recruit staff from disability-service and -activist groups]
  • If the testing takes place in people’s homes or places of work then it may not be possible to test more than two participants per day (about one-third as efficient as lab-based assessments)
  • Access barriers mean that users with disabilities may work more slowly and may get tired more quickly
  • Disability is diverse: five participants may be enough for a usability test, but this number is insufficient for an accessibility test. For future tests of BBCi, we recommend using at least two participants to represent each disability, as in this study

Recommendation – The development schedule for BBCi sites should allocate sufficient time for usability testing with disabled participants. Compared with testing non-disabled participants, it takes about twice as long to recruit and three times as long to test. These timescales could be reduced by building a relationship with disabled organisations or by creating a representative user panel.

...which I believe is what I was saying about “a database of disabled subjects.”

Repetition and contradiction

The report mentions a lack of truly original advice on accessibility:

There is an increasing number of sources available on Web accessibility, but (with conference papers being the notable exception) these merely repeat the same material. Importantly, the WAI guidelines tend to be used by most organisations as their primary source of information when they consider accessibility, but the WAI does not give equal consideration to all types of disabilities.

The Web Content Accessibility Guidelines 1.0 are mostly concerned with blind people, and within that group, mostly concerned with screen-reader users, and within that group, mostly concerned with Jaws users.

Meanwhile, the contradictory nature of accessibility advice is acknowledged. For completely-blind people, images are useless; for learning-disabled people, they are beneficial. The report states:

Designing for users with disabilities is not simple. Our findings show clearly that disabled people need different, contradictory features to make Web sites accessible.

For example, people that use screen readers find that images reduce accessibility; other people with partial vision find that images improve accessibility by structuring the information on the page (even if they cannot make out what is in the image).

Some users with motor impairments prefer to scroll a single page of information than to load separate, screen-sized pages of information (since scrolling requires less detailed motor movements). Other users (such as those with dyslexia) prefer paging to scrolling since this reduces the amount of information on each page in view and reduces mental workload.

Observation during our usability test shows that these findings are not simply individual differences: users are much more efficient when they interact with the information in their chosen way. Rather than design a site for everyone, it may make more sense to ensure that each site meets a set of minimum standards.

In addition, sites aimed at specific disabled groups (such as BBCi’s See Hear) should tailor structure and content for the people who will be using the site. Some general sites (such as Food) will need content to be simplified to be truly accessible to people who are deaf or who have cognitive disabilities: if producers feel this results in a site that is too simplistic, one solution is to develop an alternative site.

Comparison with Nielsen studies

Obviously the BBCi/System Concepts report draws comparison with the Jakob Nielsen reports on Web usability for people with disabilities. I flipped through one of them in a colleague’s office. In all fairness, they’re probably worth the money, but so far I haven’t bought them.

Kara Pernice Coyne, the chief researcher for the accessibility studies, admitted to me that they had loads of trouble recruiting disabled subjects, but refused categorically to state how they ultimately located such subjects or even how many there were. (Quite possibly the ultimate reports state as much.)

Along with this, being told by Duke Nielsen himself that I had “achieved permanently-blacklisted status... and any E-mail we get from [me] gets deleted unread” has rather hardened my resolve against handing him any money whatsoever.

As for hyperlinks to those reports, I suppose that is what Google is for.

Congrats

But congrats to BBC and System Concepts. They’ve produced a readable, dense, informative, and rather valuable contribution to the corpus of original research on usability of the Web for disabled people.


2003.03.17

You can use captioning
or you can use Heath

Heath Row, whom I met at South by Southwest, almost real-time-captioned panel presentations at the conference.

I wanted to see how else this Immediate Journalism could be done.... I could either go shorter. Or longer. I chose longer. I type really, really fast, so I was able to capture almost verbatim transcripts of what went on. Oh, I didn’t catch everything, but I’m pretty sure I caught almost everything.

But why go so long? Why strive to be such a completist? Many conference organizers opt to audio record the event’s keynote speakers and breakout sessions. SXSW does not.... If people are going to publish a pamphlet every time Noam Chomsky spits up soup, why aren’t talks given by people such as Lawrence Lessig, Bruce Sterling, and others similarly captured, published, and distributed – online or offline? We’re losing an important part of our industry and culture’s conversation and history. (That’s not a slag on Chomsky, by the way.)

Similarly, I just wanted to see if I could do it. And making such an effort to transcribe everything, typing real-time transcripts as the speakers spoke, lightly editing them, and publishing them – most of the time – mere minutes after a session ended really changed my experience of the event. While I like to think I was present and engaged with my friends outside of sessions, inside the breakout rooms, it was just me, what I heard, my head, my hands, and my PowerBook.

It was kind of neat when I’d really get in the zone and almost fall away so I was typing automatically. I almost wasn’t paying attention to what was being said. I wasn’t ascribing any meaning to the sounds and words I was entering into the blank Word document. I wasn’t really there.

But it wasn’t easy. While I’m not a trained stenographer – lots of SXSW participants have asked – and while my hands didn’t really hurt at the ends of the days, my head did kind of hurt. When you’re trying not to be present or actively engaged in a given situation, when you’re only trying to document, project, and reflect what’s happening, you get this thin feeling. You’re fragile. Light like balsa wood. [...]

So. How’d it go? Great. I’ll do it again. Response on site was amazing, and word quickly spread throughout the conference that I was documenting the talks so thoroughly. Some people made decisions on what sessions to go to based on what session I was going to go to. If I was going to publish a transcript of a given panel, people felt free to go elsewhere. That brings up some interesting traffic flow and attendance questions. I hope I didn’t gut people’s headcounts because I was there in the room.

So what’s wrong with this picture?

Well, for starters, an important conference like South by Southwest shouldn’t permit its speakers’ words to float off into the ether. Presentations should not be ephemeral.

Audio quality in the conference rooms (in an ordinary, uncreative conference hall) is poor, and computer experts are rarely worth looking at for more than five seconds at a time; we are quasi-Aspergerian and have poor fashion sense, with unusual exceptions. In my book, those factors rule out audiotaping and videotaping, which are too tedious to wade through anyhow.

Yet conferences should not draft paying journalist-attendees into creating inadvertently incomplete précis-transcriptions of panel events, or force anyone to take on that role, or even leave that role unfilled, unintentionally begging for someone to fill it. It strikes me as shabby.

Obviously captioning is the way to go. Obviously. Or you wouldn’t be reading this Weblog, not that many people do.

How would one go about it?

I counted the following durations of conference panels at SXSW ’03:

  • Friday: 2 hours
  • Saturday: 12
  • Sunday: 18.5 plus 5.5 for Web Awards and Fray Café
  • Monday: 24.5, plus at least 4 hours for AIR for Austin and 20×2
  • Tuesday: 14.5

Trade-show panel discussions added about 5 more hours. Let’s say we had 77 hours of discussions.

I would judge real-time captioning rates for a live conference at US$120 an hour minimum, plus various travel and lodging expenses. (You could go cheaper, but I wouldn’t.) That adds up to at least $9,240 – let’s say $10,000, a not-inconsiderable sum. But it is a sum for which one could fundraise specifically: “Captioning of SXSW 2004 brought to you by [company X],” each and every transcript could state.
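
The arithmetic, in runnable form for anyone who wants to substitute their own rate (the figures are nothing more than my estimates above):

    const panelHours = 77;    // rough total of SXSW ’03 discussion hours
    const ratePerHour = 120;  // US$, a conservative minimum for live captioning
    const baseCost = panelHours * ratePerHour;  // 9,240
    console.log(`US$${baseCost} minimum, before travel and lodging`);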

The advantages? Immediacy. You Are There! Uncorrected caption text could be blogged one minute after the panel ends. Corrected text could appear later, and in fact the LazyWeb could correct it for you. Or, even smarter, you could blog the transcript at the halfway point, or every ten minutes. And you can send it out in real time using a Java application, for keeners (and for any deaf people in the audience).

Discussed, not enacted

Oh, right. Accessibility. South by Southwest 2003 had three accessibility panels (I presented one) and no accessibility provisions whatsoever. If you’ve got captioners in the room, and any kind of monitor handy, suddenly you make the speakers’ remarks accessible to anyone who can’t hear. (Now, how a deaf person would ask a question or give a presentation is a different story.)

A less-desirable Plan B would involve tape-recording the panels (which will always involve technical failure) and transcribing them later. How much later? Days or weeks. Where? India, probably. With how much accuracy? Not very much. For how much money? A couple of thousand perhaps.

A bit too twentieth-century, don’t you think?

Next year, I don’t want poor Heath Row banging away into “a blank Word document.” I want South by Southwest transcribed properly.


Japanese anime subtitles plead:
“Abuse me!”

Well, this certainly caught my eye.

American anime fans have taken matters into their own hands.

An expert on Japanese film will be on the Western Michigan University campus this month to discuss how stateside fans of the Japanese animation form known as anime have formed collectives to translate and subtitle their favorite works. [...]

“Thus, what a pleasant surprise that American anime fans have taken that apparatus into their own hands, forming collectives to undertake the translation and subtitling of vast numbers of their favorite anime. My talk will celebrate the work of these collectives by, ironically enough, looking closely at the translations to demonstrate how they are, shall we say, abusive.”

The fan translators, [Dr. Abé Mark] Nornes contends, have tapped into a major shift in viewers’ relationship to audio-visual material and provide a positive example for professional translators who want to keep up with the times.

So I looked up his paper, “For an Abusive Subtitling”:

The duration of the subtitles, for example, is very ideological. I think that if, in most translated films, the subtitles usually stay on as long as they technically can – often much longer than the time needed even for a slow reader – it’s because translation is conceived here as part of the operation of suture that defines the classical cinematic apparatus and the technological effort it deploys to naturalize a dominant, hierarchically unified worldview.... Therefore, the attempt is always to protect the unity of the subject; here to collapse, in subtitling, the activities of reading, hearing, and seeing into one single activity, as if they were all the same. What you read is what you hear, and what you hear is more often than not what you see. [...]

The translator must condense his translation in the physical space of the frame and the temporal length of the utterance. The reader cannot stop and dwell on an interesting line; as the reader scans the text, the machine instantly obliterates it.... The translator then determines how many letters or characters are legible in the second or two or three available to each title. It is often said that actors talk twice as fast as spectators can read, but this is hardly a useful starting point for the work of translation. [...]

The viewer can, in fact, stop and re-read. Ever heard of video? Hence video subtitles can be longer. I could look this up in my Subtitling book, if I could find it.

Once accomplished, the translation moves through the hands of countless technicians, some of whom think nothing of “adjusting” a subtitle here or there for their own capricious, technical reasons. As we will see, this can lead to the kind of embarrassing mistakes that make translators cringe. [...]

Technical reasons are not always capricious. I think we could have used some examples here – rather a lot of them, actually, if the extent of the claim were to be supported.

In fact, a number of these translators have achieved reputations among general audiences. Some subtitlers even have fans! The most famous – Shimizu Shunji, Okaeda Shinji, Kamishima Kimi, and Toda Natsuko – have published autobiographies, how-to books, and “English conversation via subtitles” textbooks. [...]

The article quotes two other authors:

[Herman Weinberg] Then I’d go into the theatre during a showing to watch the audiences’ faces, to see how they reacted to the titles. I’d wondered if they were going to drop their heads slightly to read the titles at the bottom of the screen and then raise them again after they read the titles (like watching a tennis match and moving your head from left to right and back again) but I needn’t have worried on this score; they didn’t drop their heads, they merely dropped their eyes, I noticed.

This emboldened me to insert more titles, when warranted, of course, and bit by bit more and more of the original dialogue got translated until at the end of my work in this field I was putting in anywhere from 100 to 150 titles a reel ... tho’, I must repeat, only when the dialogue was good enough to warrant it. [...]

[Tamura Yukihiko] First of all, the first problem we encountered was whether to use vertical or horizontal lines. For this, I performed various experiments. In the case of vertical lines, 3 ½ feet of film were required to read one line with 12 characters. However, we found that if we printed the same line horizontally it would be impossible to read without five or more feet. Besides the decision to print vertically, we had to decide to put the subtitle on the right or left side. It was impossible to settle on a position. We’d put them on the right to avoid covering something on the left and vice versa. So we watched previews and investigated the problem scene by scene. [...]

See? Subtitles can move. Should, really.

There’s also a discussion of expressive typography:

For example, in M there is a scene in which a boy hawks newspapers; as the camera nears the boy, his voice gets louder on the soundtrack. At the same time, the Japanese subtitles translating the boy’s voice grow correspondingly larger and larger, providing a graphic representation of the materiality of the speech. [...]

In the spring of 1993, Professor Laurel Rodd of the University of Colorado assigned her Japanese translation class the task of translating subtitles.... The class quickly learned to appreciate the difficulties facing the translator of films, but their intuitive solutions to confronting the practical issues had little to do with the corrupt rules of the second epoch’s subtitlers. They regretted their “inability” to experiment and put subtitles in different colors and in different parts of the frame. In fact, their exercise was hypothetical and nothing was preventing them from indulging in the most outrageous innovation.... The tools are in place, but the professionals, like the students above, check themselves, held back as they are by the inertia of convention and the ideology of corruption.

Actually, this has not restrained one group of translators from whom we may learn much. In fact, this article was inspired by their work. In the past few years, a massive fandom has developed around Japanese animation (anime) throughout the world.... In scenes with overlapping dialogue, they use different colored subtitles. Confronted with untranslatable words, they introduce the foreign word into the English language with a definition that sometimes fills the screen. Footnotes! Some tapes include small-type definitions and cultural explanations which are illegible on the fly (here we find a completely new viewing protocol made possible by video where the viewer halts the apparatus’s mindless march and reads subtitles at leisure). They use different fonts, sizes, and colors to correspond to material aspects of language, from voice to dialect to written text within the frame. And they freely insert their “subtitles” all over the screen. It is as if history folds back on itself and we find a resurgence of the subtitling practice of the talkie era, but the underlying differences put the two worlds apart.


2003.03.01

Wales? Huh?

I don’t get this at all.

A deaf group is taking a complaint against HTV, now known as ITV1, to the Commission for Racial Equality on the grounds that English people with hearing difficulties receive a better service than the Welsh.

Cedric Moon of the Wales Deaf Broadcasting Council claims that HTV Wales does not subtitle its local weekend news, unlike its sister channel HTV West.

He suggests that as the television channel has a statutory obligation to provide subtitling in both England and Wales, by having such a service at weekends only in England it is guilty of discrimination and in breach of race-relations laws.

I was not aware the Welsh were a race. At any rate, this seems to be an issue of captioning quotas, but it’s impossible to tell, given that the British maddeningly continue to use subtitle to mean both subtitle and caption.

Are our dear British friends, whose precision in terminology for these matters leaves everything to be desired, talking about:

  1. English-language programming with captions?
  2. Welsh-language programming with captions?
  3. English-language programming with Welsh subtitles?
  4. Welsh-language programming with English subtitles?

I think the first, but it’s not at all clear.

Why would it not be clear? Because both captions and subtitles are used in Wales:

BT is to sponsor one of the UK’s longest-running television soap operas, S4C’s Pobol y Cwm. [...] S4C provides English-language subtitles and subtitles in simplified Welsh on Pobol y Cwm’s five weekday broadcasts and on-screen English subtitles on the channel’s Sunday afternoon omnibus.

Trying to decode this feverish, almost Tourette’s-style use, reuse, and rereuse of the same word over and over again to mean five, ten, a hundred, or a thousand different things, I believe they’re attempting to discuss the provision, at various times, of:

  1. English-language closed subtitles
  2. Easy-reader Welsh closed captions
  3. English-language open subtitles

Richard Moremon, head of advertising sales and sponsorship at S4C International, said, “We’re delighted that BT is supporting the subtitling services on the channel’s premier Welsh-language series. The subtitles service on Pobol y Cwm reiterates that S4C is a channel for all viewers of both languages.”

So, I mean, you tell me.


Indian multiplexen

...will permit a range of subtitled and dubbed films you’d never otherwise see in India. Like, I dunno, Run Lola Run, improbably.

So in the coming months, your neighbourhood multiplex will be visited by Paris-based Gujarat-born filmmaker Pan Nalin’s documentary Ayurveda; Malayalam veteran Adoor Gopalakrishnan’s Tamil-Malayalam bilingual Nizhalkkuthu (Shadow Kill), with English subtitles; Shaolin Soccer, a Cantonese film dubbed in English that is being touted as a Chinese Lagaan minus the cricket; and acclaimed Iranian director Majid Majidi’s Baran, dubbed again, which is about the life of Afghan refugees in Iran. [...]

With Baran due for release in a couple of months at Chennai’s Studio 5 (a 148-seater), Swaroop Reddy, director of Sathyam Cinemas in Chennai, agrees that there is definitely a niche audience for such films. “We had earlier screened films like Taxi and Run Lola Run with English subtitles.” [...]

An English documentary in a mainstream Kanpur theatre... who would have thought it possible back in 2000? That was the year director Rajiv Menon released his Tamil take on Jane Austen’s Sense and Sensibility, Kandukondain Kandukondain, with English subtitles.

The film boasted of stars like Tabu and Aishwarya Rai, but Menon says he didn’t dub it to cash in on their pan-Indian appeal because some things are untranslatable. [...]

Sometimes, of course, dubbing simply does not make financial sense.

Consider this: Gopalakrishnan’s Nizhalkkuthu was made at a cost of about Rs 1 crore – not small money, but certainly peanuts in comparison with a Lagaan’s Rs 25 crore.

Now consider that dubbing it would cost between Rs 4–6 lakh, while subtitling would add up to just about Rs 1 lakh. Since the film, in its multiplex release outside Kerala, is aimed primarily at Malayali expats and die-hard cine buffs, a limited audience by any measure, dubbing at that cost is impractical. [...]

“Small films, like the ones we show, will never match a Spider-man’s earnings, but there is a growing market for them which cannot be ignored,” says Saba Ali, India rep of Rossellini and Associates, which brought the English-dubbed version of the Oscar-winning Italian film Life is Beautiful to India.

Those who fail to learn from history, etc.


How much money is SDI raking in?

Previously, we explored just who the hell SDI is. They do subtitling and a wee bit of captioning and dubbing. SDI’s PR chick was unable to give me the title of any North American release they had closed-captioned. Perhaps they don’t really do any closed captioning.

Anyway, they made shitloads of money in 2002 (PDF):

SDI Media operates in 19 countries around the world and is the global market leader in the field of translating, subtitling and dubbing for TV, video, film and DVD. SDI has a more than 60% share of the worldwide DVD feature subtitling market, and contracts with all of the major Hollywood Studios as well as international TV channels such as the Discovery Channel. SDI’s operating margin increased to 15% (14%) in the fourth quarter, as DVD subtitling represented 48% (45%) of sales.

SDI subtitled the DVD releases of blockbuster movies including Spiderman, Minority Report, and Men In Black II into [presumably a total of] 32 languages in 2002.

SDI reported net sales of... SEK 379 (397) million for the full year, and operating income of... SEK 54 (42) million for the full year.

In Canadian dollars:

                    2002          2001
  Sales             $66,500,000   $69,600,000
  Operating income  $9,400,000    $7,300,000
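
For anyone checking my math, the conversion rate is back-calculated from the figures themselves – a sketch, not an authoritative 2002 exchange rate:

    // Implied by $66.5 million CAD against SEK 379 million: about 0.175 CAD/SEK.
    const CAD_PER_SEK = 66_500_000 / 379_000_000;

    function sekMillionsToCad(sekMillions: number): number {
      return Math.round(sekMillions * CAD_PER_SEK * 1_000_000);
    }

    sekMillionsToCad(54);  // ≈ $9,475,000 – the $9.4 million above, allowing for rounding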

Did your company clear $9 million ($6.3 million U.S.) last year?


Case not exactly closed

Mildly inaccurate but nonetheless edifying discussion of HDTV closed-captioning requirements.

Besides raw text characters, line 21 also contains primitive instructions on how to display the characters. For example, there are options to control the foreground and background colors [actually, only foreground colour has any real meaning – Ed.] and the screen position where the text is displayed. In addition, it offers three methods to manipulate how the characters are drawn: roll-up, pop-on, and paint-on.

Roll-up captions add words to the end of a line; when the line is full, it scrolls upward so a new line can be displayed. These captions are popular for sporting events and other live programs. By contrast, pop-on captions are immediately drawn onto the screen and erase whatever caption text was on the display. Paint-on captions are similar to pop-on captions except that they don’t clear the captions that were previously drawn. [No, paint-on captions are pop-on captions where every character is displayed immediately upon reception, quickly building up the lines, character-by-character, into a fixed-position pop-on caption – Ed.]

While line 21 captions were a significant advance for the deaf and hearing- impaired, the technology is stale in the age of digital video. For instance, one of the most significant problems with the line 21 approach is inconsistent presentation. Each television owns the look and feel for caption display (i.e., they are responsible for choosing fonts and colors, etc.), so the text presentation will vary wildly between manufacturers and even between different models for the same manufacturer. Further complicating the situation is uncertainty over text color controls. Since the FCC only recommends, and doesn’t require, that televisions support a variety of text colors, few televisions offer the feature. [I’ve never found a TV that didn’t display colour, or italics, for that matter – Ed.]

Another problem with line 21 is the paucity of its text and window attribute features. For instance, there is no control over the point size of the font. In addition, other than the roll-up, pop-on, and paint-on options, window display options are limited (i.e., it’s impossible to perform a wipe, fade, or other window control primitives). While these may not initially seem like onerous restrictions, line 21 captioning is targeted at individuals with poor eyesight [no, it is not – Ed.] and other physical challenges. Consequently, it is essential that they be able to alter font size and display so they can enjoy the presentation. [Yes, no kidding. I was telling people that in 1989 – Ed.] [...]

All of these features are based on DTVCC’s flexible aspect ratio control. When line 21 was created, all televisions were 4:3, so captions were assumed to have a 4:3 aspect ratio. By contrast, DTVCC permits content creators to choose between 4:3 or 16:9 aspect ratios. If your content is 4:3, your captions will have a potential screen resolution of 160×75 “pels” (see below) and a maximum of 32 columns by 4 rows. By contrast, 16:9 captions offer a potential resolution of 209×75 and a maximum of 40 columns by 4 rows.

One of the most confusing aspects of DTVCC is its notion of pels. Most people assume that a pel is equivalent to a pixel. However, in DTVCC terminology, a pel is a logical entity that represents one or more pixels (the actual number of pixels per pel varies depending on the current font). Consequently, the window resolution (i.e., 160×75 or 209×75) dictates where the window is located (anchored) onscreen, whereas the font size governs the window size. [...]

A window’s anchor point informs the system of the direction where the window should be expanded if an attribute such as the size of the window’s caption text is changed. Anchor points let viewers tweak caption settings to match their preferences while maintaining the content’s original window layout. [...]

The most unusual pen attribute is text tags. A text tag is inserted when you author the caption text and it describes the nature of the content. Potential tags include dialog, voiceover, song lyrics, sound effects, musical score, and expletive. When a set-top box unearths a text tag in the MPEG stream, it has the option of enhancing the text display to emphasize the text content. For example, if the text has the expletive tag and the player is configured to prevent harsh language, it could render the text in an alternate font that displays special symbols (i.e., #@%!) rather than the potentially offensive words. [...]

If you conclude that DTVCC support is an immediate necessity, the next step is to find a caption service provider that offers 708B services. Unfortunately, until the DCC becomes mandatory in 2006, this may be difficult. For example, Rick Leet at Closed Captioning Services Inc. believes that while smaller caption providers are interested in offering DCC services, the majority of their customers aren’t ready to deploy this technology. Consequently, they intend to offer 708B services when the market matures and customers become interested in 708B functionality.

Larger vendors are also hesitant to plunge into an immature market. For example, although VITAC was part of the standards committee that issued the EIA 708B standard, Timothy Taylor, VITAC’s chief engineer, indicates that they won’t offer full 708 functionality until consumer and client demand for HDTV equipment significantly increases. Fortunately, while the market is maturing, VITAC offers 608 transcoding services to its customers. [...]

Regardless of whether you must add DTVCC support now or intend to delay its rollout for a few years, it is critical that you familiarize yourself with the technology and formulate a roadmap for support. Failure to plan properly may result in a mad, and potentially expensive, scramble when the deadline arrives in 2006.
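
To restate the quoted geometry in code – the types are invented, but the figures come straight from the article:

    type Aspect = "4:3" | "16:9";

    // Caption-plane dimensions per the article: pels are logical units,
    // not pixels; the font in use decides how many pixels each pel covers.
    function captionPlane(aspect: Aspect) {
      return aspect === "4:3"
        ? { pelCols: 160, pelRows: 75, charCols: 32, charRows: 4 }
        : { pelCols: 209, pelRows: 75, charCols: 40, charRows: 4 };
    }

    // A window anchored at pel (x, y) lands at a proportional screen
    // position; its rendered size then follows the viewer’s chosen font.
    function anchorToPixels(xPel: number, yPel: number, screenW: number, screenH: number, aspect: Aspect) {
      const plane = captionPlane(aspect);
      return { x: (xPel / plane.pelCols) * screenW, y: (yPel / plane.pelRows) * screenH };
    }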


Further subtitle markup languages

Trust me, you’re gonna be hearing very much more about markup languages like these (not that you haven’t heard about them before), but here are a few other trufflësque nuggets I unearthed.

  • JACOsub [sic], “Amiga video titling software for professionals”: Amiga? Amiga? “It was originally written for the Japanese Animation Club of Orlando (JACO) [!] to lay English subtitles over Japanese-language films and television shows. The program has spread and grown in popularity due to its extremely flexible script format, clean multi-buffered title transitions, and other features.”
  • Got Linux? Get lost! BakaSub: “[T]his site hasn’t been updated in over two years now and there is very little chance of BakaSub being resurrected at this point as a project.... I’ve got too many other projects in line in front of it anymore (and Linux’s video support is still just as pitiful as it ever has been, really).”
  • Ever heard of fansubbing? Anime fans write their own subtitles. See FAQ and list. (Somebody’s gotta do it. Or do you want only native Japanese-speakers to understand your show?) You can, with not much difficulty, find transcription files online, but not in any particular subtitling format, necessarily, though those do occasionally pop up. (A bare-bones cue model follows this list.)
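
All of these formats boil down to the same data: a start time, an end time, some text, and – in the flashier fansub scripts – colour and position. A minimal TypeScript cue model, invented for illustration and tied to no particular format:

    interface SubtitleCue {
      startMs: number;
      endMs: number;
      text: string;
      style?: { color?: string; position?: "top" | "bottom" };  // the fansub extras
    }

    // Render a timestamp in a typical h:mm:ss.mmm script style.
    function timestamp(ms: number): string {
      const h = Math.floor(ms / 3_600_000);
      const m = Math.floor(ms / 60_000) % 60;
      const s = Math.floor(ms / 1_000) % 60;
      const frac = ms % 1_000;
      return `${h}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}.${String(frac).padStart(3, "0")}`;
    }

    const cue: SubtitleCue = {
      startMs: 61_500,
      endMs: 64_000,
      text: "Example line",
      style: { color: "yellow", position: "top" },
    };
    // timestamp(cue.startMs) → "0:01:01.500"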

Who is Cynthia Delmar?

A captioning supervisoress at NCI, back in the day. She did the smart thing – which would no doubt have been squelched in its cradle were John E.D. Ball’s reign of error still underway, and which would never be countenanced at, say, WGBH – and simply asked Usenet if anybody had any credible Babylon 5 scripts they could use.

I work for the company doing the closed-captioning of Babylon 5 for DVD release. We are reformatting the captions from the original files, which were mostly done without scripts....

(Basically, I said to my boss, “Hey, JMS is a very cool guy, posts online all the time – someone ought to contact him and ask for copies of the scripts.” She, naturally, said, “Great idea! Go ahead!”[...])

Anyway, does anyone have suggestions as to where the best, most authentic sites would be for verifying things like spellings of names/places, etc.? Someone who might have had access to those scripts and would therefore have it for sure, right? I’d really like us to get this stuff right.

There was some doubt that Cynthia was on the level, but that was eventually assuaged.

And, later:

I talked to the woman who coordinates our work for Warner. Seems she never asked for the scripts, because we are reformatting the original caption files we did waybackwhen. What she didn’t realize was that we never had the scripts in the first place, nor Internet access to do research or ask the fans. She is contacting them to request the scripts, so hopefully we will get everything right. (I would cringe to admit how many errors were in the ep I worked on the other night. To say nothing of the fact that at that time we did not caption verbatim – we condensed lines into what would encode in the old, old software. Ugly.)

Believe it or not, captioning ain’t as easy as it looks – despite being essentially paid to watch TV all day. And without scripts, we have to go by what we hear, and with a show with created jargon, such as most sci-fi shows use, we end up spelling things phonetically. Again... ugly. I’d much rather have the writers’ own version!

Commendable, really, as are the admissions of inferior NCI captioning.


2003.02.23

The inevitable discussion of Pinocchio

Now, faithful readers (and how many of those do I have?) will be wondering why I have not bothered to engage in a monologue concerning Pinocchio, a worst-case scenario of film dubbing for the 21st century.

The Roberto Benigni vanity project, reputedly the most expensive film ever made in Italy, stars the high-forehead 50-year-old as the lovable cobbler’s school-age son. Already it makes no sense. We’ve already got a punchline and we haven’t even set up a joke yet.

But it gets worse. Initially, Benigni insisted on dubbing himself into English. Now, the man does not speak English. It can work sometimes, as with bilingual or trilingual actors (viz. Jean-Marc Barr, Romano Orzari). It doesn’t work in any other case.

So what ended up happening? The original dubbing job, the one featuring Benigni speaking English, was so horrific Miramax ordered another dubbing job, this one featuring B-list Hollywood actors.

  • “Life Is Still Beautiful; It’s Just More Complicated”: “It was a little more than a week before the Italian opening of his new movie, Pinocchio, and Mr. Benigni was dubbing into English all of the Italian dialogue of the title character.... That task called for a degree of quiet composure that Mr. Benigni infrequently manages, but it was even more noteworthy for another reason. Mr. Benigni had never before dubbed one of his movies for its distribution abroad, and his decision to do so for Pinocchio underscored how dramatically his career had changed and how high the stakes had been raised.... But Americans will hear Pinocchio in English, rather than reading subtitles, a necessary adjustment if the movie is to click with children. Miramax is approaching many of the dubbing assignments the way voice-overs for high-profile animated movies are handled, luring stars with signature tones and cadences. So far, Queen Latifah and Cheech Marin are among those on board.”
  • “Pinocchio gets U.S. dubbing”: “The dubbed Pinocchio... will feature rapper-actor Queen Latifah as the Dove, star Kevin James as the Fire Eater, Eddie Griffin as the Cat, Topher Grace as Pinocchio’s best friend, Lucignolo, Cheech Marin as the Fox and David Suchet as Geppetto. Stars Roberto Benigni and Nicoletta Braschi have dubbed their own roles into English.” Still. Apparently. According to this story. But:
  • “Benigni’s ‘vanity project’ targeted to go up in flames”: “The official reason for the secrecy... is that the English dialogue tracks are still being recorded and they won’t be ready until just before Christmas. Miramax has decided not to release the subtitled Italian version of the film, as it did for Benigni’s Oscar-winning Life Is Beautiful, but no convincing answers have been given for this change of heart. The company has also decided not to use Benigni’s own voice for the title character, or his wife Nicoletta Braschi’s voice for the Blue Fairy, even though their fractured English is supposedly all part of the charm.” Hardly. No professional dubbing actor would countenance “fractured English” in a dubbed picture.

We should have seen this coming. What happened to Life Is Beautiful?

The English version had been originally thought of only for a video and airplane viewing, but, following the three Oscar wins by Benigni’s picture, a bigscreen version will be out there as well as the subtitled. Who will dub the lead roles? “The original cast,” said Weinstein, “Benigni and [his wife] Nicoletta Braschi.” It’s not clear if Giorgio Cantarini, who played Benigni and Braschi’s son Giosue, will voice himself.

One challenge for the dubbed version is Benigni’s limited English capabilities. After receiving his second Oscar, for best actor, he admitted he “used up all my English” when he made his first acceptance speech for foreign-language film.

That means Benigni should have seen it coming, too.

Q. Why didn’t you keep your own voice in the movie?

A. It didn’t work. Two hours with this thick Italian accent was too risky. And somebody could think that I won an Academy Award and now I try to act in English. It sounds preposterous. Imagine Woody Allen talking in Italian. You cannot do that for two hours. Also I couldn’t control my acting because I was concentrating on the language.

Q. But unlike most foreign actors, Americans know what your voice sounds like. So there’s this weird disconnect in hearing an American voice coming out of your mouth.

A. Your culture is with subtitles. Ours is with dubbing. They are both revolting things. You have to choose which one is the less revolting. This movie is for children, and they can’t read that fast. You cannot subtitle it.

Q. Will we ever see the subtitled version?

A. Of course! In January, the second week, we come out with the original version with subtitles. Of course! You know, when I hear my voice dubbed by another actor, I faint in my chair! I can’t stand it! It’s terrible! Terrible!

A typically scathing review:

Released in the U.S. with an English-language track, it also suffers from bad voice casting and atrocious dubbing: Breckin Meyer is totally wrong as Benigni’s voice, and Glenn Close (Blue Fairy) and John Cleese (Cricket) should fire their agents. Everything in the film but Benigni is wooden. A misbegotten idea, ineptly done.

They who do not learn from history, &c

So this month, Miramax put out a subtitled version, but did so solely within the confines of the only cities in the United States where they actually know the difference.

The gamble didn’t pay off for Miramax, who had hoped families looking for something warm and fuzzy would flock to the film. The $35 million movie grossed less than $4 million over the December holidays.

Now Miramax has re-released the film at theaters in New York and Los Angeles in Italian, with English subtitles.

Except that the subtitled version was always around anyway. Toronto got both versions at Christmas, for example.

The whole enterprise is pointless as far as I’m concerned anyway. You say “Pinocchio” to me and all I can think of is “Pin-Itchy-o.”

NARRATOR: Roger Meyers’ next full-length feature was the wildly successful Pin-Itchy-o.

SCRATCHY [Italian accent]: Now you be good, Pin-Itchy-o, and don’t you lie.

ITCHY: I promise I will never hurt you.

[His nose grows suddenly, spearing Scratchy’s eyeball]

SCRATCHY: Ouch!

Now, there’s a punchline.


Jake Lloyd still cannot act, even in Telugu

India Times on Star Wars: Episode II in India:

Fox will fan out with 59 prints across 40 Indian cities in the first phase. Of the 59 prints, 35 will be the original English version, while 24 are being dubbed in Hindi. There is no decision yet to field dubbings in languages like Tamil or Telegu [sic]. [...]

In terms of print numbers, Episode II ranks second after Fox’s September 2001 release of Planet of the Apes when the studio had gone with 112 prints. Planet, of course, had also seen South Indian dubbings.

“We laid more stress on the English version this time because one felt English would gel better with this movie,” [a functionary] said. Interestingly, led by Lucas’s prescription, the mixing of the Hindi dubbing was executed at Lucas Lab in the US.

I am told by a reputable source that not only can a competent audio engineer mix a language he or she cannot understand, but directors can also direct actors who speak an incomprehensible language – right in the recording studio. Apparently a great many emotional features of the human voice are “universal,” a rather broad claim, but one that is at least passably true in many cases.


Go, National Working Party on Captioning divatrix!

In my experience, the National Working Party on Captioning down in Oz is composed of conservative, prudent, well-meaning people. I rather wish they had a Web site.

En tout cas:

Karen Dempsey knows all too well about the difficulties of a life without the ability to hear – the television makes no sense, the radio is useless and people avoid speaking to you for fear they will not be able to communicate effectively.

That’s why she has spent the last 12 years of her life campaigning for captions on television and in theatres.

Karen’s dedication was recognised recently when she was presented with the Roma Wood OAM Community Service Award as part of the 2002 Supertext Awards. [...]

As a member of the National Working Party on Captioning, Karen was successful in pushing legislation through parliament which requires all news, current affairs and prime-time television programs between 6pm and 10.30pm to be closed captioned. Regional news has until 2004 to implement the changes, unless it converts to digital before then. [...]

Karen was also responsible for getting open-captioned movies screened in Newcastle, the first regional cinema to show open-captioned movies outside the capital cities. Greater Union Glendale now shows one captioned movie each month.


Not actually written by Neal Pollack

“Why I Need a Playstation 2,” not by Neal Pollack:

Dear America,

I want a Playstation 2 for Christmas. Everyone has one but me. And that sucks.

As it stands, I have yet to purchase or own any DVD, DVD-related or video game entertainment system. Or, better yet, have one purchased on my behalf. I hear people talk about extra features or deleted scenes, but do I understand? No! Do I get to watch Revenge of the Nerds in French with English subtitles from the comfort of my own living room? No! Do I get to blow up supervillains while smashing the corporation? Hell no!


Ask Gary Shapiro? Let’s not

Last fall, I took LawMeme’s bait and submitted a question for its interview with Gary Shapiro of the Consumer Electronics Association:

Open-ended question about legal requirements for accessibility (e.g., for deaf or blind users of consumer electronics). The U.S. already has mandatory caption decoders in TVs. What the U.S. does not have are mandatory accessible interfaces for on-screen menus, easy ways to turn audio descriptions on and off, and other accessibility improvements built into devices themselves.

What legal changes do you think are required to improve accessibility, and what changes would CEA support? The two responses may well be different, which will be of particular interest. (I should also ask what changes CEA would oppose.)

LawMeme brilliantly failed to forward the first paragraph of my question. Shapiro’s next-to-useless answer, its next-to-uselessness due entirely to LawMeme’s ineptitude?

I am baffled by what you mean by accessibility. Here in Washington speak it means access by people with disabilities – and we are doing a lot in that area with our products, captioning, etc. Assuming that’s what you mean, I think product makers keep trying different ways to reach new markets so as long as a need is perceived it will be met. The challenge is when a mandate is imposed; we spent a lot of time fighting a proposal to require every product to be accessible to a person who had any type of disability. Try building a phone for someone who can barely see, that is also good enough for someone who can’t lift things, with a loud enough ringer and visual keyboard for someone who is hearing impaired – get the idea?

If you mean accessibility to copyrighted product [I don’t, so let’s not bother]....

My intro graf was 60 words. Shapiro wrote 174 words on the mistaken assumption I was talking about something else.

Do you want these LawMeme lawyers representing you in court? They’d lop off the “not” in your not-guilty plea.


Vitac’s Xmas card

“Leechburg girl wins card-design contest”:

A Leechburg girl placed first in a Christmas card design contest for a national closed-captioning company that sponsored the contest for the Western Pennsylvania School for the Deaf.

Danah Richter, 14, best portrayed the theme “The Joy of Giving” with her card depicting a pair of hands holding a globe wrapped in a bow. An eighth-grader at the school, Danah won a certificate and $50 for her artwork....

Her card will be the corporate Christmas card this year for Vitac, a national closed captioning firm that offers captioning services to broadcast outlets such as television networks, syndication, cable and educational program suppliers. ABC, CBS, FOX, NBC, PBS, CNN, Discovery Channel and the Learning Channel are among its clients.


If Salon covered it, it must be important

“Why do movie subtitles stink?” Who frigging said they do? Jeez. Fucking Salon. Go under already, willyez?

Take, for example, Jean-Luc Godard’s film For Ever Mozart, which recently screened in San Francisco. The movie opens with a man standing on the side of the road, tapping his feet impatiently. When a young woman comes running toward him, he screams a steady stream of insults at her.

At least, that’s what seems to be happening, but it’s hard to tell – five dialogue-filled minutes have passed by and not a word of translation appears on the screen. After a few moments, the man appears to soften a bit. As he leans over to kiss the woman on the cheek, the first words finally appear: “You’re late. Kiss me.” By the time another sentence appears on the screen, half the audience has walked out. [...]

“It’s an adaptation, not a translation,” says Luis Manuel Rodriguez, a former dentist-turned-translator who wrote the Spanish subtitles for Primary Colors. One might assume that Rodriguez – the person who is essentially responsible for how millions of Latin Americans will understand the film – worked closely with the director Mike Nichols to ensure that every subtlety was not lost in translation....

Instead, he gets a “spotting list” that tells him how much time, or how many frames, he has for each title – and a week to write about 1,200 to 1,500 snippets of dialogue.... The fact that translators have so little to work with goes a long way toward explaining why subtitles can be so dreadful. [...]

Rodriguez blames bad translations on the fact that translators are last in the assembly-line process of film production. “With American movies, translators don’t even get a film credit,” he complains. “I would imagine a person who is willing to put their name on a film would be much more conscientious about their work.”

Actually, I strongly agree with Rodriguez. I like the way CaptionMax does things: Captioners are credited by first name (“Captioning by Joe at CaptionMax”). Personal touch, more accountability.
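
The spotting-list constraint Rodriguez describes is easy to make concrete. A rough sketch of the arithmetic, using assumed round numbers (24 fps, 15 characters per second) rather than anyone’s official standard:

    FPS = 24  # assumed theatrical frame rate
    CPS = 15  # assumed comfortable reading speed, characters per second

    def max_chars(frames):
        """Characters a viewer can read in a title held for `frames` frames."""
        return int(frames / FPS * CPS)

    def fits(frames, text):
        return len(text) <= max_chars(frames)

    print(max_chars(48))                      # 48 frames = 2 s -> 30 chars
    print(fits(48, "You're late. Kiss me."))  # -> True

A two-second title buys you about 30 characters, which goes some way toward explaining why “adaptation, not translation” is the honest job description.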


And just who is Jim Ellwanger?

He’s a captioner at Vitac (résumé). I think he’s also into model trains or something. You know captioners and their picayune interests.

At any rate, I like what he writes. He’s got two good Usenet pieces on pre-censored music videos he’s forced to caption in that form (Everlast and something on BET).

And someone even used Jim’s postings as feedstock for a killer captioning punchline!

Today, I closed-captioned next week’s episode of Birds of Prey, a one-hour prime-time drama that airs on the WB network.

Please let us in on it the next time you caption next week’s sports results or stock-market report.

– Danny “or, even better, Greenspan’s interest-rate decision” Burstein

Or how ’bout this zinger?

JIM: I didn’t until I had to closed-caption Sports Night.

PETE: You had to closed-caption Sports Night?

JIM: I had to closed-caption Sports Night.

PETE: Did you like closed-captioning Sports Night?

JIM: I did not like closed-captioning Sports Night.

PETE: Why did you not like closed-captioning Sports Night?

JIM: Everyone talks too fast on Sports Night.

PETE: Everyone talks too fast on Sports Night?

JIM: Everyone talks too fast on Sports Night. The scripts are 70 pages long for a half-hour show.

PETE: The scripts are 70 pages long for a half-hour show?

JIM: The scripts are 70 pages long for a half-hour show, which is about twice as long as the usual script for the usual half-hour sitcom, but most of it is repetitive dialogue such as this.

PETE: This is repetitive dialogue?

I laughed until I stopped!


All right. So who is M.G. Timoshenko?

Someone who shills “money-making program[s]”:

Testimonial Letter From M.G. Timoshenko, who is an audio-description writer for the blind in California:

I know that what you are about to read will knock you for six [sic]. Your first question will probably be, “Why is a successful author involved in a money-making program like this?” Simple! I am always looking for lawful ways to make a lot more money. If money doesn’t make one happy, at least it allows one to be unhappy in luxury. And the Product should make anyone happy!

WTF?


Again craziness!

Now, what was Salon saying about bad subtitles?

To top it all off, the subtitling job is full of typos and very strange translations (“Digesting your thoughts will push open the door of mystery”). The dub voices are kind of annoying, but the dub script itself is pretty good, with some liberties taken to make the dialogue flow better in English.


More with the anime. Oy

Trolling through Usenet, I found a delicious list of anime subtitling/dubbing/captioning annoyances. (Apparently-newer list.)

I made this list because I am sick and tired of subtitle fans being treated as second-class citizens, given shoddy product, late releases, and high costs. I’m not sure how often I’m going to post it; probably whenever it accumulates enough changes that a new posting is worth it. [...]

Dubtitled: using a dub script for a subtitled version.

[Remainder excerpted – Ed.]

  • Kishin Heiden Nº 1–7: Subtitled in all capital letters. Songs not translated.
  • Moldiver Nº 1–6: Dubtitled and in all capital letters. Songs not translated.
  • Street Fighter II V: Most of the tapes of this (but not every one) use the dub version of the opening, which is newly hacked for America, uses American music, and leaves out the songs. (Couldn’t those cheapskates make a separate subtitled master from scratch?)
  • Green Legend Ran: All capital letters.
  • Kiki’s Delivery Service: Dubtitled.
  • Macross Plus Nº 1–3?: Dubtitled.
  • Ranma 1/2, “Desperately Seeking Shampoo”: Dubtitled.
  • Slayers: No closed captioning on LDs 2 and 4.
  • Tenchi Muyou OAV Nº 1–3?: Dubtitled and in all capital letters.
  • Tenchi Muyou OAV Nº 4?–7: All capital letters.

Now, here is my complaint. Dubtitling (a reasonably useful term) is desirable for deaf accessibility. Captions have to transcribe what people say, so the captioning of a dubbed audio track should match it. A case could certainly be made that, for hearing animephiles, a more idiosyncratic translation tailored for the written medium is desirable.

What this means is I want both.


How to learn to caption

Couple o’ items concerning training courses for real-time captioners.

  • “Closed Captioning Classes Taught at AIB”: “That FCC mandate has led to a boom in closed-captioning classes and created a niche market for AIB College of Business. AIB College of Business is one of the only schools in the Midwest offering such classes and 120 students are currently enrolled.” Real-time only.
  • “Local schools may get funding”: “Huntington Junior College could begin a program in closed-captioning services for broadcast media, thanks to the $900,000 federal grant supported by Byrd and Democratic Rep. Nick Rahall. ‘We already have a court reporting program in place... Basically, broadcast captioning and communication access real-time translation are two specialties that come out of the court-reporting field.... Once we receive the funding, we hope to offer some programs to maybe retrain some court reporters to meet the immediate needs. It looks like it should bring some good economic opportunities to the state.’ Two companies do most of the broadcast captioning in the United States.... Because of technology, people anywhere in West Virginia can work at home and closed-caption live events such as news, weather and sports.”

How not to learn to caption

Media Composer now available for Mac. And you want us to do what with it?

The release of version 11 Media Composer software in the middle of 2002 saw a number of additional features for Media Composer and Symphony. One of these was MetaSync tracks – an additional set of tracks that the editor can use to put in markers or trigger points for other software to use. For instance, an editor might use MetaSync tracks to place points for closed-captioning text changes or chapter points for DVD authoring.

So let’s see. You’re going to do half the work of closed captioning (timing the titles) without writing any actual titles, dooming yourself later to more than the remaining half of the work, since it’s gonna take longer to match your caption text precisely against preset in and out times than it would to just caption the whole show in one pass?

Shouldn’t this software have captioning capability rather than – yet again – fobbing captioning off as Not the Sort of Thing Our Software Should Really Be Expected to Do?
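
To make the objection concrete, here is a sketch – invented cue times, invented text, and an assumed 15-characters-per-second reading rate – of the second pass that workflow forces on you: the in and out points already exist, so every caption whose text won’t fit its preset window has to be re-timed by hand.

    CPS = 15  # assumed reading speed, characters per second

    # Preset (in, out) points in seconds, laid down in the first pass
    # with no text attached -- the workflow being criticized.
    cues = [(0.0, 2.0), (2.0, 3.0), (3.0, 6.5)]

    texts = [
        "So let's see.",
        "You're going to do half the work of captioning?",
        "Without actually writing any titles?",
    ]

    for (t_in, t_out), text in zip(cues, texts):
        budget = (t_out - t_in) * CPS           # readable characters
        verdict = "ok" if len(text) <= budget else "RE-TIME"
        print(f"{t_in:5.1f}-{t_out:4.1f}  {verdict:7}  {text}")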


Heck – we just cured deafness!

Fred Reed:

My last night in Arlington, chilly, portending rain, traffic heavy on Wilson Boulevard. I was doing ribs at Red, Hot, and Blue on the theory that a coronary occlusion was better than saving myself for the tumor. A benefit of life is that you have choices.

RH&B claims to have the “best barbecue you’ll ever eat in a building that hasn’t already been condemned.” That sounds about right. I like condemned buildings. On the lobotomy box, a psychologist, expounding on the mental state of a sniper then popular with the press, babbled about self-actualization. I could tell because the closed-captioning said so. Captioning is supposed to be for the deaf, but really it’s for bars.


Surtitles: They’re a survivor

I have a dim racial memory of the “controversy” caused by surtitles in opera. Yet here we are 20 years later.

The inventor of opera surtitles was John Leberg, the COC’s director of operations at the time. (He’s retired now, and lives in St. Mary’s, Ont.) But in a recent interview, Leberg was quick to share credit with then-general director Lotfi Mansouri: “It started with Lotfi, who said, ‘Even with a good singing translation, it’s impossible to hear most of the words – especially in a theatre the size of the O’Keefe’ ” (as the Hummingbird was then called). [...]

Recalls Leberg: “We had purists who said, ‘I’m German-speaking, I already understand every word.’ And some people said, ‘I have to look up, and it takes away from the action.’ But in an audience poll, approximately 80% gave their approval.”

Since the early 1980s, surtitle technology has undergone extensive modification. “At first, we didn’t have computers,” explains Gunta Dreifelds, the COC’s current surtitles producer. “It was typewriters and slides. We’ve gone through at least three different projection systems.” Until the advent of computerized text projectors, between 800 and 1,000 slides were needed per opera, each one containing a sentence or two of the libretto. But if the equipment is newer, the principle has stayed the same. [...]

But by 1995, the Met found itself virtually the only opera house on the continent without titles, and the company installed its “Met titles” – little electronic display screens on the back of every seat in the house – at the cost of $2-million. [...]

In North America, titles have caused the near-extinction of foreign-language opera productions sung in translation. As well, they have led to new demands on opera performers: Now that audiences understand all the words, the singers are expected to act like they do too. [...]

And, of course, there have been a few disasters along the way. When Mimi lay dying at the end of Puccini’s La Bohème in a Washington production, she begged Rodolfo not to leave her. The surtitle replied, “Your battery is failing and your screen has been dimmed to conserve power.” (The titles were being run off a laptop.)


ViSiCAST (sic)

Our dear British friends insist on adding sign language to television broadcasts. Well, we did that here, too – in 1977. (Anyone remember the Deaf Television Resource Centre?)

Anyway, now they want to encode sign language for Webcasting, telecasting, and the “high street.” It’s fine, really, I suppose.


Description for the partially-sighted

Firsthand account of how useful description actually is to a person with some usable vision, in this case the U.K. DVD of Dancer in the Dark:

Yes, I really like this film a lot! It’s a great film to go and see/listen. I found the audio description did not intrude into the speech of the actors and the information it gave me was amazing!

For instance, when Selma pulled something out of her bag, I could see it was some kind of paper; however, the audio description told me it had large letters on it. My reaction was “oh really?” Anyway, in another scene where I could see movement and a building, the A.D. would describe people/workers leaving, which I can gather from the level of sight I have. However, I laughed when it actually read out the sign on the building and announced the company name. This must be, I imagine, what sighted people must see in films! It also told me who was there, in the audience watching, when Selma was due to be hanged. I wouldn’t have distinguished one person from the other well enough to know.

What I also found helpful was that I can place the characters much better than normal. Meaning that I can put each face to a name much better using audio description. It tells you who’s doing what, so I can place the name of the character that way rather than see a character and wonder which one he/she is.

Sometimes I can see what people do on screen if it is obvious and, other times, I don’t have a clue, to be honest. So audio description doesn’t allow me to miss a single minute of the film. I found my concentration was much better too. Because of not having the full sight to see films, I can get distracted a lot of the time and I’ll miss bits, especially if it’s very visual and fast-paced.


Israeli deaf in Alabama

...are affected by a lack of captioning. Shouldn’t be surprising, I suppose.

Joy and Shaul Anter [name misspelled in the original article], a deaf couple living in Fairhope, had their world shattered by suicide bombers thousands of miles from home. They communicate with graceful hand gestures. Their message is often heard through an interpreter. Yet, even if there were no spoken words, this couple’s anguish would have been felt loud and clear at a recent memorial service in Mobile for their nephews, who were killed in the attacks. Standing in front of a large crowd at Ahavas Chesed Synagogue, Joy and Shaul thanked everyone for coming. [...]

The Anters first learned about the bombing from the news on TV, but because the broadcasts were not completely closed captioned, it was difficult for them to get many details. That night, Shaul curiously logged on to a Hebrew news Web site. It was then that the bombing halfway around the globe in Africa completely shook their world in quiet Fairhope, Alabama.

“Their names were right there in the article,” Shaul says. “I couldn’t believe it.”

Sixteen people died, including the three suicide bombers. Rami’s wife and daughter were in critical condition after being severely burned.


Lord of the Rings: Muffed for Js?

Lord of the Rings subtitling is a known problem area (op. cit.). In Japan especially.

  • Usenet: “There was mention not long ago of the problems with the subtitles of the Japanese release of FotR which resulted in Jackson ‘firing’ the translator and making sure there’d be a different subtitler. The issue recently came up on a list I read for professional translators, and I came across a few of the errors. Here is a small sampling of some of the more egregious ones” (check the link).
  • Asahi Shimbun: “Some Japanese fans of author J.R.R. Tolkien didn’t like what they saw... But it wasn’t filmmaker Peter Jackson’s take on the story that bothered them so much. It was the subtitles. While some fans took their complaints to the Internet, petitioning the film’s Japanese distributor, Nippon Herald, and Jackson himself to ‘improve’ the subtitles, the distributor has decided to stick with their chosen translator for the sequel. The original translator of the trilogy and its publisher also lent a hand in the subtitling. Whether the outrage is justified in this case, however, is beside the point. The fact remains that the movie has become so successful that it can inspire such an outcry.”

2003.02.02

Der talentierte Mr. Ripley sample

Die Deutsche Hörfilm gGmbH does audio description in Germany, as you may know from reading the DVD pages.

Now they’ve got an audio sample (a biggie – 984 K MP3) posted on their welcome page from Der talentierte Mr. Ripley. Even without understanding the language,

  1. English terms are discernible, including an apparent description of the appearance of Matt Damon rather than Tom Ripley
  2. the cadences of the description narrator are immediately familiar
  3. even given the longer word length of German, they can still fit descriptions into pauses

Kooky fun fact: When I saw the filename of Ripley.mp3, I immediately thought “Oh, goody! Sigourney Weaver!” That’s how deeply embedded the Alien tetralogy is. “Get away from her, you bitch!”


Timed Text now official

The Timed Text working group (op. cit.) is now official.


Anarchy in the U.K.?

The British wouldn’t know accessibility if it came on a plate with bangers and mash, but at least somebody’s trying to change that. “Henry to join digital demo”:

A blind man from Kidderminster is taking part in a mass lobby of Parliament to protest at alleged discrimination against blind people using digital services.

Henry Brugsch likes to listen to the news and documentaries on TV but finds it virtually impossible to use new digital TV, radio and mobile phone services.

“I have to struggle to use a remote control with tiny numbers to activate an on-screen menu just to change channels,” he said.

“When I eventually find the right channel I struggle to follow the programme because there is no audio description to fill in the gaps between the dialogue.”

Henry Brugsch, as it turns out, is an American.

Continuing: The RNIB has a protest coming up on Tuesday the 4th concerning a U.K. communications bill.

The Communications Bill fails blind and partially-sighted people because:

  • Promoting Inclusive Design is not among OFCOM’s functions even though this is key to preventing exclusion.
  • Access to audio description is compromised by insultingly low targets: The Government wants 10%, with no provision for people to actually receive a service on all platforms. We want 50%.
  • There is no mention of building access provisions into requirements for electronic program guides or digital teletext services and no provision for making the user environment for interactive services accessible.
  • There is no guarantee that blind, partially-sighted or other disabled people will be effectively represented on OFCOM’s Consumer Panel.

If we don’t challenge these failings and lobby for changes to the Bill now, it will be too late and we risk losing access to TV and radio. The Deaf [sic] lobby has lobbied en masse and has made great gains. Now it’s time for us to take action. Please join us.

Yeah. And where is the “Deaf [sic] lobby” now that they’ve gotten what they wanted?

You give deaf people captioning and they figure the entire exercise is completed. They’re perfectly happy for other disabled groups to continue to be excluded because, of course, deaf people aren’t “disabled,” and the accommodations they nonetheless require, despite failing to be “disabled,” can in no way compare to those required by actual disabled people.

It boils down to this: Captioning is more important than anything else. Give us that and everybody should be happy. Once we get captioning, we can all go home, right?

We see this now as Sony, Dreamworks, and Miramax caption some movies for first run but don’t bother with expensive audio description. Why should they? Film executives all at least vaguely understand captioning, and it’s cheap, and they reuse the captions on home video anyway, plus deaf people can see, meaning it’s nice and easy for them to write letters to studios, rather too often in broken English, that request captioning.

But description? It costs too much and we haven’t had much demand for it. Right?

Deaf people – deaf groups, deaf activists – sit by and permit such an inaccessibility to happen. I don’t see anyone acting on the ethical grounds that accessibility for one disabled group is insufficient. Since it’s perfectly possible to make a single movie accessible to deaf and to blind moviegoers, who can sit in the same auditorium as hearing and sighted people and people with other disabilities, deaf organizations’ silence on this issue is a tacit concession that they truly do believe they are special among peoples and that their needs take precedence over everybody else’s.


2003.02.01

All your Two Towers subtitles-cum-captions are belong to us

Via Boing Boing: Astonishing malapropist fansubbing of a Lord of the Rings bootleg. Archive this one to your computer before it gets DMCAed.

See also: Fellowship of the Ring Japanese-subtitle abominations.

UPDATE: Also Chinese-source malapropisms.


2003.01.30

Kooky Flash karaoke demo

The typography is crap, but I like the approach: FM Systems ActionScene (other samples).


Hearing kids love captions too!

San Jose Mercury News:

Closed-captioning provides reading practice

My young daughter never willingly picks up a book to read, so to encourage her to exercise her reading skills in a fun way, I began to use the closed-captions on our television while she was watching her favorite shows. Almost immediately, she began to click on the closed-captions by choice when she watched television, without any reminders. She seems to enjoy reading what the characters are saying, and I feel better knowing that her TV time is no longer only a passive activity.

– S.D., Longmeadow, Mass.

What about homosexualist hearing-son-of-deaf-parents Jim Verraros?

There are so many things about living with deaf parents that made me who I am. For example, the closed captioning on the bottom of the TV screen: I grew up as an amazing speller because of that, because I would never really look at the picture. I’d just look at the words all the time. And sometimes they’d have misspellings – it wasn’t perfect all the time – but you got a good sense of the word. And I placed seventh [in spelling] in the state of Illinois when I was in the seventh grade.


Novel Japanese excuse

Why doesn’t Japanese TV do more captioning?

The disapproval of a major broadcaster is thwarting the efforts of a support group to implement a text display system on its Web site that would allow the hearing-disabled to more fully enjoy television viewing.

A few years ago, Jinko-naiji tomono kai (Association of Cochlear-Implant-Transmitted Audition) started transcription of the spoken content of TV and radio programs into text and posting it on its Web site, so that the hearing impaired could watch TV and also read the text on their home computers.

Although the law allows dubbing of programs if the copyright holder gives authorization, Japan Broadcasting Corp. (NHK) doesn’t allow it, for fear that doing so would open the company to potential litigation.

“We hold the copyrights for our programs, but we don’t have the permission of individual performers or interviewees to put their words in text form. We can’t take responsibility if the text contains mistakes,” an NHK official said. [...]

Hiroaki Yamada, a lawyer who also has a hearing disability, says, “Broadcasters aren’t fairly addressing the needs of the disabled because they’re overly concerned with that particular aspect.” [...]

Last year, only 18.2% of NHK programs and 6.3% of five local networks’ programs were subtitled.

Wow. What happens if a writers’ union bothers to protest caption editing? Someone once mailed me to say the Writers Guild of America had come very close to doing just that, but when I pressed for details, none came; and no, I have not called them up to verify.


Caption Colorado can dub in Spanish? ¿Cómo?

“New York TV Station Plans Spanish Translation”:

The WB11 News at Ten, the prime-time newscast on Tribune Broadcasting’s New York WB Television Network affiliate, WPIX Channel 11, will be heard in a simultaneous Spanish translation on the Second Audio Program (SAP) beginning February 3. The move, a first for a major New York English-language news program, is prompted by the rapid growth in the Hispanic population in the New York area.... The Spanish translation, to be handled by Caption Colorado, which does closed captioning for The WB11 News at Ten, will be sponsored by Pontiac.

Huh?

What do captioners know about real-time interpretation?

Ask me sometime about what went on with Le Téléjournal and Le Point ten years ago.


Captions in cabs? ¿Cómo?

“NYC Tests Interactive Video in Taxis”:

Next month, City Media Corp. is to launch its system, City Media InCabTV – The Taxi Channel [You can just feel the trademark symbols scattered through that phrase – Ed.]. Once a passenger enters the cab and closes the door, the “Buckle your seat belt” message will air. The ads then pick up from where they left off when the previous passenger exited. City Media’s monitors can be muted and will contain closed captioning.

No descriptions, though, of course, right?


Rule Nº 1: Don’t play football if you don’t want to go deaf

“Mike Davis no longer hears cheering”:

“In the grand scheme of things, I know I don’t have it that bad,” [Mike] Davis said. “But when you look at a life like mine that’s been full of vim and vigor, a guy that’s always ready to go places on a moment’s notice. Now I’m limited where I can drive. I had taken for granted the beauty of the sounds of nature. I can no longer distinguish forms of verbal communication, when someone’s being serious or sarcastic or funny.” [...]

“You know how badly I wanted to hear how excited Tim Brown was while speaking to the Oakland crowd on the podium after the AFC championship, how badly I wanted to hear what Jerry Rice sounded like about going back to the Super Bowl?” Davis asked. “Closed captioning only gives you so much.”

It is, in fact, true, particularly with live captioning.


“Remote captioning”

A rare mention of the use of real-time court reporting (I suppose it’s not really captioning, since there is no video or film source) in classrooms:

During class sessions, [Richard] Haws, associate professor of journalism and mass communication, uses the headset, which is part of a hearing-assistance system called remote captioning.

Through remote captioning, the professor’s voice is carried over a phone line to a captionist, who transcribes everything the instructor says during class.

Greiman’s communication class captionist works in Colorado [or just for Caption Colorado? – Ed.]. She types the text of the lecture in a fashion similar to a court dictation. The text is then transmitted back to the computer that Greiman uses to watch the lecture unfold word for word on her computer screen.

Before using the captioning software, Greiman had a note-taker who would give her a carbon copy after each class. But she says the new technology lets her be more involved in the lecture, not just mentally disengaging from the instructor she can’t hear and waiting for her notes after class. “In the past I didn’t have any assistance in my classes and I missed many of the things that were going on,” she said.

Greiman wasn’t aware of the technology’s existence during her first two years of college. Then a friend informed her of the technology and that she qualified under Iowa State’s disability policy. Greiman then started using remote captioning. [...]

Not all of her lecture dictation takes places far away from Ames. In her architecture class, the captionist sits next to Greiman and types out the class transcript, trailing the professor’s lecture by only a few words. [That’s odd – they’ve got captioners in Iowa? – Ed.]

Most professors are supportive of Greiman’s needs. This sentiment isn’t universal, however. [...] Todd Herriot, director of disability resources, said whether professors like the technology is irrelevant – they’re legally obligated to work with disabled students. [...]

It isn’t always easy to adapt because “there’s an expectation that I change my teaching style, like repeat a student’s response so [Greiman] can receive a perfect transcript of the class,” he said. “I don’t do that and it’s awkward to make the change.”

Haws said it hasn’t been hard to adapt to the headset, however. [Photo shows you in fact wear it around your neck; the microphone projects upward – Ed.] [...]

Keeping up with the lecture still isn’t simple for Greiman. She must look at the professor, take notes and watch the computer screen – all at the same time. “It’s hard to do three things at once,” she said. “You also can’t always depend on the technology. [The captioning system] doesn’t always work, which means from time to time I have to do things the old fashioned way – go to the front of the class and listen as hard as possible.”
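
The data path here is as simple as it sounds: the lecture audio goes out over one line, and transcribed text comes back over another to be displayed as it arrives. A bare-bones sketch, with the TCP transport, host, and port all invented for illustration (the real services of the era ran over phone lines and modems):

    import socket

    HOST, PORT = "127.0.0.1", 9876  # hypothetical endpoint

    def captionist(lines):
        """Captionist side: push each transcribed line as it is written."""
        with socket.create_connection((HOST, PORT)) as s:
            for line in lines:
                s.sendall(line.encode("utf-8") + b"\n")

    def student():
        """Student side: print lines the moment they arrive."""
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn, conn.makefile("r", encoding="utf-8") as f:
                for line in f:
                    print(line, end="")  # the lecture unfolds line by line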

Now, Kathryn Woodcock used a similar CART system for her Ph.D., and that was a couple of years ago.


2003.01.28

“Sam, go slower when you declaim!”

What is this trope going around that The West Wing is so fast-paced you need captions to understand it, and even then it’s iffy?

  • “Hard to understand West Wing”: “The readers have spoken and they came through loud and clear: My wife and I are not the only ones who can’t understand the dialogue on The West Wing. By E-mail and phone, they responded to a recent column that asked the question, ‘What’d he say?’ [...] ‘We found the only way to enjoy it was to tape it with “closed caption” on and then watch it that way. Heck of a way to watch a program, but it’s the only recourse we have.’ [...] ‘We tried the closed-captioned option and they speak too fast for that, too.’ ”
  • “Shedding light on murky look of West Wing”: “[A] lot of us, including Schlamme’s 78-year-old dad whose hearing is just fine [!], have to switch on the closed-captioning just to keep up with the banter. ‘Many people say they love the show the second time even more than the first time,’ he said. ‘It’s because there are so many nuances that I don’t think, in that context, you can catch.’ ”

My complaint about West Wing captioning pertains to ill-advised caption divisions that are, in some cases but (let’s get real) nowhere near all of them, dictated by super-long declamatory dialogue and editing. I should really start collating these little perversions.


2003.01.26

The I Am Noticeably Less Embarrassed by the Design Corner

Well, I hated the design of this page as much as you did. It was originally an experiment in creating an all-stylesheets layout that even Netscape 4 could display. That used to be true. But I checked it recently and found I had since added enough of my own homebuilt jiggery-pokery to queer the layout in that cœlacanth of a browser.

So I engaged in further jiggery-pokery. The site now uses tables for layout. Please, go ahead and laugh.


The CRTC learns the word “accessibility”

The CRTC, the smug, careless, antiquated, calamity-prone broadcast regulator without whom so very much would be possible, issued a report on broadcasting in 2002. Strangely enough, CRTC functionaries seem to have learned the word “accessibility,” no doubt from my repeated interventions and complaints. The report is at least useful for collating broadcaster requirements. You could also print the PDF.


Boomers, you’re next

Press release makes a hard-to-refute point.

“ACB and other members of the Coalition are weighing our options and considering a number of next steps,” said Christopher Gray, of San Francisco, President of the ACB. “The population of blind and visually impaired people continues to expand as the baby boom generation enters senior citizenship. People who lose their vision later in life have grown up watching TV, and they aren’t going to like the idea of having to do without access to this mainstream medium, just when it began to appear that described video would allow them to continue to enjoy it.”

Take that, Jack.


Just what is it that makes “TheatreVision™©” so infuriating?

Helen Harris’s “TheatreVision™©” is this not-particularly-competent audio-description outfit down in California that does its level best to confuse the rest of the world about audio description – first by claiming that TheatreVision is some new and unique process rather than yet another brand name for the generic process of audio description and, most importantly, by engaging in publicity stunts that seem actively harmful to the cause of increased access for blind people.

One such stunt? The use of celebrity narrators, which everyone else in the industry agrees is unnecessarily distracting. (I’ve had a guideline decrying such a practice online for a year and a half.) And ignorant television writers have a habit of falling for it:

  • “ ‘Wonderful’ play-by-play for visually impaired”: “In a new twist on an old classic, NBC next month will air a version of It’s a Wonderful Life for visually-impaired viewers. Called TheaterVision, the special process [sic] uses a narrator to describe the action during breaks in the dialogue. For this broadcast, former President George Bush has done the narration.”
  • “At This Juncture, George, See” (Jim Knipfel): “According to this morning’s Daily News [care to do your own reporting there, fella?], the film will come complete with an extra audio track to describe the action for those viewers who can’t see the screen. Called TheaterVision, it’s sort of like the ‘director’s commentary’ tracks you find on your DVDs – but instead of having some asshole explain how hard it was to get funding, or the ‘artistic vision’ behind the shower scene, here you have a narrator saying things like, ‘Eddie gets up from his chair, walks across the room and grabs the samurai sword.’ TheaterVision is not exactly a new process – Disney has released a number of its animated features to home video with a blindo track.... This, however, represents the first time a classic film has been shown in such a way on a major network.”
  • “Stars Lend Their Voices to the Blind”: “Comparing other descriptive audio services with TheatreVision is like ‘taking a Model T and comparing it to a Porsche,’ said Harris. ‘There are other descriptive services, but not with the TheatreVision attention to great script writing and performance’ ”
  • “Curious George Wonders: Will It Be Bedford Falls, or Cheneyville?”: “On TV this weekend, George Bush, Sr. tells the story of George and the bush. ‘George stares down at the empty robe, then picks it up, looking puzzled,’ the former president says. ‘Mary’s eyes peer out of a large, flowering bush. George starts to toss the robe, then reconsiders, eyeing the robe slyly.’ It’s a Wonderful Life was on NBC, and the visually impaired could tune in to a version in which Bush père charmingly narrated the action, including the scene where Donna Reed loses her bathrobe and jumps into a hydrangea bush so Jimmy Stewart can’t see her. Poppy Bush did the voice-over as a favor for a blind woman from California who is an advocate of TV audio description for the blind.”

Described films run every week on U.S. networks; Lifetime even maintains a Web page listing them. I suppose the proviso “classic film” is key here.

The issue of Disney is worth discussing. I own the DVD of Dinosaur, with a description track created by TheatreVision™©. As narrator, it uses one of the movie’s own actors, which, if anything, is an even worse idea than hiring a celebrity: the same voice ends up doing double duty as character and narrator.

TheatreVision™©’s parent organization, Retinitis Pigmentosa International, appears to view blindness as a curse that must be “fought.” (Listen to the plug at the end of the description of Dinosaur.) Moreover, RPI discourages professionalism in description. I use the term not in the context of getting paid but in taking the whole task seriously. Washington Post again: “Helen Harris... said the celebrities are not paid for reading the descriptions. TheatreVision also uses volunteer writers. ‘The directors are involved with it’ too, said Harris, who called the stars’ participation a ‘humanitarian effort for the blind.’ ”

Some of us believe accessibility is a right. And we have the law on our side.

TheatreVision™©’s practices are so egregious they elicit the unwelcome temptation to custom-craft guidelines to make what they do illegal. More profitably, we should use TheatreVision™© as a worst-case scenario to be avoided in the future.

But it’s not all bad

Hats off to Jim Knipfel for his punchy, charming, offhand use of the term blindos. I just love the new slang the kids have got going.


Virtual real-time captioning

You can do real-time captioning anywhere there’s an audio feed and a phone line, as Fremont, California, is learning.

But who is the mysterious typist transcribing dialogue for the hearing impaired at local city council and school board meetings? Fremont school district trustees wondered themselves at a recent board meeting. One school official finally asked: “Where are you?”

They waited for a moment. And finally, the word “Wisconsin” came up on the monitor.

The typists, formally called closed captioners, can be located anywhere in the country. Linda Chavez, whose San Francisco-based closed-caption company contracts with the city of Fremont, has workers transcribing local meetings who live and work as far away as Hawaii. Because it still is a relatively new profession, the demand for good captioners requires that concessions be made, Chavez said. “There’s not a lot of good captioners out there,” said Chavez, whose family-owned company, Chavez Group, also contracts with the city of San Francisco.

However, as with interpreters, it is vastly preferable also to have a video feed. You understand speech better when you can see the speaker.