Joe Clark: Accessibility | Design | Writing

Submission to HREOC Issues Paper on Television Captioning

Note: In 1998, the Human Rights and Equal Opportunity Commission in Australia undertook an inquiry into the question of whether or not the failure to provide captioning constituted unequal treatment on the grounds of disability. Since nobody involved in the process, least of all the other intervenors, seemed to know anything about captioning, let alone captioning outside Oz, I wrote the following submission. (See also my submissions to the CRTC. At present, the only one documented online concerns CTV.)

Background

I am a freelance writer in Toronto who has followed captioning for some 22 years. A full biography is provided elsewhere. I have written a dozen or so articles on captioning (many available online), and have given lectures and presentations on the topic. I run the Media Access mailing list, the only Internet mailing list dealing with captioning, audio description, subtitling, dubbing, and other means of making media and information accessible to people with disabilities and others.

As a hearing person with background and training in typography, linguistics, computers, and engineering, and with two decades’ exposure to the seemingly obscure medium of captioning, I am uniquely situated to clear up many of the misapprehensions presented in the HREOC inquiry’s report. I am also the only person in the world qualified to expose and repudiate the lies and distortions officially submitted by Australian television services and others, who appear to assume that their own levels of ignorance of and disdain for captioning are definitive and universal.

Responses to the Issues Paper

Answers to questions, corrections, and elucidations are given below, with appropriate quotation from the source document.


Closed captioning is an assistive technology designed to provide access to television for persons with hearing disabilities.

While that may be the case in Australia, where captioning is underdeveloped, in North America the main audience for captioning is now hearing people. In the dying days before decoder chips were required in new TV sets, it was understood that most buyers of set-top decoders were ESL learners and other hearing people, the implication being that pretty much everyone with a hearing impairment who needed a decoder owned one already. However, with decoders built into nearly every TV set sold in North America, the demographics of the captioning audience have changed.

Analysis: In the initial decade of closed-captioning, something like 300,000 external or set-top decoders were sold in North America. (An exact number is unavailable. The National Captioning Institute in the U.S. manufactured and sold nearly all those decoders but, unaccountably, has always refused to divulge accurate sales figures.) Assume 92% of those decoders – 276,000 – are in the U.S., the rest in Canada.

Let’s assume that each of those decoders is watched by one person – a conservative estimate, since many households consist of more than one person. Let’s also assume that all those people are deaf or hard-of-hearing – also a conservative assumption, since NCI propaganda stated that the largest single group buying decoders in the late 1980s and early 1990s consisted of people learning English as a second language, nearly all of whom are hearing, and since a small number of hearing people have been watching captioning since its early days.

Since July 1993, all U.S. TVs with screens 13 inches or larger have carried decoder chips as standard equipment. U.S. sales of TVs, according to the Electronic Industries Association in Washington, were 23,005,000 in 1993, 24,820,000 in 1994, and 25,600,000 in 1995.

Let’s assume 90% of those sets are 13 inches or larger, that sales are more or less consistent month-to-month (so only the second half of 1993’s sales count), and that annual sales since 1996 are 25,000,000. Thus a reasonable estimate of the total number of decoder-equipped TVs in the U.S. at the end of 1998 is:

0.9 × (0.5 × 23,005,000 + 24,820,000 + 25,600,000 + 3 × 25,000,000)
= 123,230,250

123,230,250 is 446 times the number of U.S. external decoders sold in captioning’s first decade. Again assuming that each decoder-equipped TV is watched by one person, and assuming that 90% of those people are hearing, the total hearing audience capable of watching captioned TV is 110,907,225 – that is, 401 times the number of set-top decoders watched, in this model, only by deaf people.

Hence, if slightly more than 1/401, or 0.2489%, of those hearing people watch TV with captions on, hearing people become the majority audience of captioning. And as successive years come and go, a lower and lower proportion of hearing people will need to turn their decoders on in order for hearing people to become the majority audience of captioning.
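The arithmetic above can be checked directly. A minimal sketch, using the sales figures quoted in the text and the submission’s own assumptions (90% of sets are 13 inches or larger, 90% of viewers are hearing, one viewer per set):

```python
# Estimate of decoder-equipped TVs in the U.S. at the end of 1998,
# using the sales figures and assumptions stated in the text.
us_sales = 0.5 * 23_005_000 + 24_820_000 + 25_600_000 + 3 * 25_000_000
decoder_tvs = 0.9 * us_sales            # 90% of sets are 13 inches or larger
print(round(decoder_tvs))               # 123230250

set_top = 276_000                       # U.S. share of first-decade set-top sales
hearing = 0.9 * decoder_tvs             # assume 90% of viewers are hearing
print(round(hearing))                   # 110907225

# Share of hearing viewers needed for hearing people to outnumber the
# (assumed all-deaf) set-top audience, as a percentage:
print(round(set_top / hearing * 100, 4))  # 0.2489
```

The threshold falls every year new decoder-equipped sets are sold, which is the point the paragraph above makes.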

In Canada, the working assumption is that 90% of TVs 13 inches or larger do contain decoder chips even without a legal requirement. Using Canadian figures provided by the Consumer Electronics Marketers of Canada – 1,511,000 sets sold in 1993 (again counting only the second half of the year), 1,545,000 in 1994, and roughly 1,540,000 a year from 1995 through 1998 – the number of decoder-equipped TVs in Canada adds up to:

0.9 × (0.5 × 1,511,000 + 1,545,000 + 4 × 1,540,000)
= 7,614,450

A somewhat optimistic estimate of the number of set-top decoders sold in Canada during captioning’s first decade is 24,000. Assume all of those are watched by individual deaf people only. Thus by the end of 1998, using the same assumption as above (that 90% of decoder-equipped TV owners, or 6,853,005, are hearing), 285 times as many decoders will be found in hearing Canadian homes as accumulated in the first ten years of captioning. Hence if slightly more than 1/285, or 0.3502%, of those TVs have their decoders turned on, hearing people become by far the majority audience of captioning in Canada.
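The same back-of-the-envelope model applies to the Canadian figures. A sketch with the model factored into a small function (the 90% shares and the one-viewer-per-set simplification are the assumptions used throughout this section):

```python
def hearing_majority_threshold(tv_sales_total, set_top_decoders,
                               chip_share=0.9, hearing_share=0.9):
    """Return (decoder-equipped TVs, hearing viewers, threshold fraction)
    under the one-viewer-per-set assumptions used in the text."""
    decoder_tvs = chip_share * tv_sales_total     # sets with decoder chips
    hearing = hearing_share * decoder_tvs         # sets watched by hearing viewers
    return round(decoder_tvs), round(hearing), set_top_decoders / hearing

# Canadian TV sales: half of 1993, plus 1994, plus four years at 1,540,000.
canada_sales = 0.5 * 1_511_000 + 1_545_000 + 4 * 1_540_000
tvs, hearing, threshold = hearing_majority_threshold(canada_sales, 24_000)
print(tvs, hearing)                   # 7614450 6853005
print(round(threshold * 100, 4))      # 0.3502 (percent)
```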

These numbers have no direct application to Australia, but this pattern will inevitably replicate itself if and when Teletext and Line 22 decoders are required by law in new Australian-market televisions.


In what if any respects are non-broadcast television services different from broadcast regarding possible and required captioning?

None. A signal is a signal irrespective of its transmission source. We are not talking about cats versus dogs here: In every case a television signal is involved, whether transmitted over-air or via cable. For those who maintain that those services truly are separate and distinct, think about how the signal actually gets to the ultimate distribution point before it is viewed at home: Is it the case that the same signals ultimately delivered by antenna or cable actually funnel through the same machinery? In other words, is it not true that television signals in a country as big as Australia all travel through satellite and microwave connections before reaching an over-the-air transmitter or a cable head end?

How, then, are the signals truly different merely because they are ultimately delivered by antenna or cable? For the purposes of regulation, a signal is a signal is a signal if it is intended for the viewing of consumers.


What weight should be given in decisions under the DDA to the provision of public financial resources to the ABC and SBS, which are not available to other television service providers?

ABC and SBS stand apart from other broadcasters given their public funding and distinct noncommercial mission. The fact that, with rare exceptions, commercial broadcasters do not derive their funding from government is simply that: A fact. And it is largely an irrelevant fact where captioning is concerned.


What weight should be given in decisions under the DDA to the digital conversion arrangements for free “loan” to existing broadcasters of public resources by way of broadcasting spectrum?

The forthcoming digital conversion, if it actually happens, is an event in the future. To borrow a phrase from elsewhere in the Issues Paper, “’the Australian community’ means the community as it exists in fact, including people who are deaf or have a hearing impairment, rather than a community artificially defined as excluding people with a disability.” Accordingly, “the Australian community” must also mean the community as it exists in fact at present, with its demonstrable needs in the present day. The promise of eventual digital conversion is irrelevant to the day-to-day viewing lives of Australian consumers.

While it will be necessary to ensure that a level of captioning at least as high as that required of analog TV also be required of digital TV if and when digital TV comes along in the future, at the moment the entire issue of digitization is an irrelevancy that is advanced by broadcasters as a tool to evade responsibilities to caption their existing programming now.


The Federation of Australian Commercial Television Stations has the audacity to submit baldfaced lies and distortions to the HREOC inquiry.

FACTS’ submission recommends that the Commission in relation to the captioning requirements of the Television Broadcasting Services (Digital Conversion) Act 1998, recognise that FACTS is working with the Government and the Department to arrive at achievable standards because:

Let’s take these one at a time.

it is not practicable to require commercial television to caption news and current affairs outside of prime time

In fact, experience in North America proves that it is eminently practicable to caption news and current affairs outside of prime time. Why? Because, at prime time, most if not all stenocaptioners are actually at work captioning the news. At other hours of the day, there is less demand; there are fewer bodies at work. Accordingly, it is easier to schedule stenocaptioning of an off-peak program than a program during prime time or other peak hours.

For the purposes of this specific discussion, “prime time” should really refer to the period in which newscasts are concentrated, which, from having visited Australia, I believe to mean “after dinner.” Given this, the crunch time for stenocaptioning in Australia is the same as in North America: after dinner, when most stenocaptioners are hard at work simultaneously. Australia would require fewer stenocaptioners to caption programming that airs in prime time but outside the after-dinner news block (from, say, 1930 to 2230 hours, when fewer news and current-affairs programs are aired), and fewer still to caption programming at other times of the day or night.

Another complication that is not mentioned in the Issues Paper, presumably because all respondents and the HREOC are inexperienced in these matters, concerns time zones. In North America, it is common for national newscasts to be produced live one, two, or three times so that they air live in different time zones. In Canada, CBC’s The National is read live to a camera up to four times a night. American network newscasts are broadcast on tape-delay in Central and Mountain time zones some or all of the time, but are read live for Eastern and Pacific Time viewers.

However, stenocaptioners of any specific program are not themselves spread out over the continent. They typically work from exactly one office. This too reduces the demand for stenocaptioners working simultaneously: It’s possible for the same captioner to caption the Eastern and Pacific newscasts separately because they take place separately. For live events that are truly live everywhere, like the Academy Awards, there is no extra burden on the sheer number of captioners because there is only one broadcast.

Further, the issue of live versus live-display captioning has specific relevance here. As a live program is captioned, it is also recorded somewhere within the broadcaster’s equipment. That tape, with the live captions, can be run as-is later for other time zones. The appearance is the same as captions actually created live, but what you’re actually seeing is a tape of the previous live show with its live captions. Remember, as far as the captioning company is concerned, this is only one job: They stenocaption the show live from, say, 1830 to 1900 hours, and the network simply runs that tape an hour or two later for the other time zones.

But really good captioners – and there aren’t many of them – will caption a live program via stenography while it is being transmitted, then save and correct the stenocaptioned text for repeats later that day. These so-called cleaned-up captions make the experience for later viewers actually better than that of live viewers, because the captions lag less behind the original utterance and are 100% verbatim, or very close to it. Some U.S. newscasts seen in Central and Mountain Time areas have these cleaned-up captions, which require someone to postprocess the stenocaptioned program and then scroll up the corrected captions when the program airs in those locales. The person doing this work isn’t always a stenocaptioner; years ago, one of CBC Newsworld’s caption providers used an employee who was a champion typist on a standard QWERTY keyboard.

Given this experience, repeat telecasts require either exactly zero extra stenocaptioners if the broadcaster runs a tape of the live show with its live captions, or zero extra stenocaptioners if the captioning firm uses a standard transcriptionist to clean up the captions and re-scroll them. And remember, too, that these cleaned-up captions are sent out at different times from the original live show: The captioner may caption the live feed at 1830 hours, clean it up from 1900 to 1930 hours, and then re-scroll the corrected text at 1930 and 2030 hours for Central and Mountain Time. Then, at 2130 hours, the live telecast to Pacific Time viewers is itself captioned live. The only crunch times for qualified stenocaptioners are at 1830 and 2130 hours.
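The staffing argument above can be sketched as a toy schedule. The times and roles are the illustrative ones used in the preceding paragraphs, not any captioning firm’s actual roster; the point is simply that a qualified stenocaptioner is needed only for the two live feeds:

```python
# Toy model of the time-zone workflow described above. Only the two
# live feeds (Eastern and Pacific) need a trained stenocaptioner; the
# Central and Mountain repeats reuse cleaned-up text re-scrolled by a
# transcriptionist. All times and job titles are illustrative.
schedule = [
    ("1830", "live newscast, Eastern feed",  "stenocaptioner"),
    ("1900", "clean up captured caption text", "transcriptionist"),
    ("1930", "re-scroll captions, Central",  "transcriptionist"),
    ("2030", "re-scroll captions, Mountain", "transcriptionist"),
    ("2130", "live newscast, Pacific feed",  "stenocaptioner"),
]

# The only "crunch times" for scarce stenocaptioning talent:
crunch_times = [t for t, _, role in schedule if role == "stenocaptioner"]
print(crunch_times)  # ['1830', '2130']
```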

The North American experience does have relevance to Australia, with its multiple statewide newscasts. Since there are three time zones in Australia, it is quite conceivable to spread the work out over time: Captioners can reside in the same time zone as the live broadcasts or they can reside in only one time zone and work at different hours for the live telecasts in other states. This is not rocket science. We’ve been doing it over here for more than a decade. No one involved in captioning in Australia should pretend that it’s impossible Down Under.

The upshot, then, is that there is nothing in the airing of news and current-affairs programming outside of prime time that hinders its captioning.

for metropolitan stations, requirements to caption news and current affairs in prime time must be phased in

Not necessarily. It is understood that there is a shortage of qualified stenocaptioners in Australia. Presumably, though, there are more qualified captioners than there are nightly newscasts. Australia is a small country, but it isn’t that small. Even if the number of newscasts exceeded the number of captioners, Australian broadcasters could make use of the country’s time zones to shift the load across the evening for a captioner located in exactly one place who serves multiple stations. And yes, it is quite conceivable for one captioner to serve more than one broadcaster.

It is conceivable that there really aren’t enough qualified captioners to caption every prime-time newscast in big cities, but it’s unlikely. What is more likely is that Australians are inexperienced and uncreative in addressing captioning problems and are trying to reinvent the wheel when we in North America already worked out a solution years before. Feel free to adopt our procedures. We’ve got the bugs worked out by now.

for regional stations, it is not practicable to caption news and current affairs in or out of prime time

Ridiculous. We do it here all the time. I regularly watch live-captioned newscasts from Buffalo, New York and Peterborough and Hamilton, Ontario, along with Toronto’s own newscasts. There is no technical obstacle whatsoever. All that is required is a trained captioner and the money to pay for the captioner’s work. It is conceivable that regional stations in areas where TV stations are relatively impoverished might not be able to afford stenocaptioning, but the issue of affordability is one that the HREOC will specifically address.

captioning requirements are not practicable for live sport and some other live programming in prime time

Claims of the impracticability of captioning live sport are the most insulting of the many lies presented by broadcasters in these proceedings. Captioning live sport is purely a question of captioners and money. We do it here all the time. Even small regional stations live-caption some of their sportscasts – CKVR in the small Southern Ontario city of Barrie, for example, stenocaptions its triathlon coverage. “Other live programming” can be and is captioned in prime time in North America, including awards shows, news and interview specials, and anything else conceivable.

FACTS seems to labour under the misconception that, since FACTS’ members do not want to expend money and effort on captioning and thus have maintained a wilful ignorance of the reality of captioning, everyone else shares their ignorance and disdain and buys into their poppycock excuses. FACTS wants its own industry evasions enshrined in legislation and DDA-enforcement practices.


What levels and standards of captioning are required and achievable to meet the requirements of the DDA prior to the commencement of digital broadcasting?

The question is too broad to be answered. It is an essential question of the inquiry and can be answered only by the HREOC.

What levels and standards of captioning are required and achievable to meet the requirements of the DDA prior to the commencement of digital broadcasting?

It is an artificial deadline. Digital TV is vapourware: Like Microsoft Office 2000, Mac OS X, or Sasquatch, it is merely rumoured to exist. Well after the introduction of digital TV, analogue TV will remain in widespread use – outnumbering digital, probably, by orders of magnitude. Establish levels and standards of captioning for analogue TV and use those as absolute minimums for digital.

In interpreting and applying the DDA, and in making recommendations, should the Commission give priority to news, current affairs and prime time programming, as with the digital conversion legislation, or to other program types? Or should the U.S. FCC approach be followed, which sets overall quotas and leaves the mix of programming captioned to stations and the market?

It’s a bit late in the century to engage in the tokenistic practice of picking and choosing the programming made accessible to captioning viewers. If Australia were as advanced in captioning as Canada or the United States, a uniform, nondiscriminatory rollout of captioning to all programming types could easily be justified. However, that is not the case. In all good conscience, I can only concur that news, current-affairs, and prime-time programming should take priority. That’s the way we did it here.

The only bottlenecks are trained stenocaptioners and money. There are no other “difficulties” whatsoever, no matter what broadcasters claim.


Should the Australian Broadcasting Authority regulate captioning on commercial broadcast television under its existing powers?

Yes.

Could broadcasting industry codes on captioning contain more specific requirements?

Of course they could. Children’s Christmas wish lists for Santa Claus could be a bit more realistic, too, but it isn’t going to happen. Australian broadcasters are recapitulating the worst habits of craven Canadian and American broadcasters: They resent captioning, do not get captioning in any sense whatsoever, and move heaven and earth to oppose any increase in captioning levels or quality. Broadcasting industry codes are per se irrelevant to the HREOC’s mission; they are disinformation.

Should the Australian Broadcasting Authority regulate captioning on subscription television under its existing powers?

Yes. A signal is a signal. The Authority should regulate platform-agnostically: TV is TV whether it’s received via antenna, cable, or satellite. A case could be made that digital television is a different animal, but not as far as captioning is concerned: Digital TV’s levels of captioning can never be lower than analogue TV’s levels.


The “Canadian Radio and Television Commission” does not exist. The Canadian Radio-television and Telecommunications Commission does. We call it the CRTC.

In Decision CRTC 96-611 (4 September 1996), approving an application for music-video channel MuchMoreMusic, the Canadian commission stated:

When questioned at the hearing regarding its plans to provide closed captioning of its programming, the licensee stated that captioning 90% of the programming offered by MuchMoreMusic would not be feasible, given the scarcity of captioned videos. As noted in Public Notice CRTC 1996-120, the Commission agrees that it would not be appropriate to apply its general approach for English-language services to MuchMoreMusic. Nonetheless, the licensee did make a commitment to spend $525,000 on closed captioning over the licence term. The Commission expects the licensee to adhere to this commitment. Furthermore, the Commission encourages the licensee to close caption, by the end of the licence term, at least 90% of all non-music programming broadcast by MuchMoreMusic, including presentations by program hosts.

I have intervened repeatedly in license renewals for various Chum Ltd. networks, including MuchMusic and MusiquePlus. In all cases the CRTC ignored my detailed factual disprovals of Chum Ltd.’s lies and exaggerations. (In the MuchMusic renewal, not only was I ignored but the CRTC elected to refuse to discuss captioning whatsoever – the only license renewal of that period to do so – rather than deal with my proof of their own regulatory incompetence and Chum Ltd.’s ongoing pattern of malfeasance. In the MusiquePlus case, I was summarily dismissed.)

I am an expert on captioned music videos. In 1989, I wrote a guest editorial for Billboard explaining that the time was ripe to begin captioning. A later article for the Village Voice described the superiority of the captioned version of Snow’s “Informer” video over the bastardized MTV subtitled version. I have over 900 music videos on indexed videotape, of which some 140 are captioned. And those are merely the captioned videos I liked enough to keep. I have made pitches to various entities, including Chum Ltd., about captioning videos. VideoFACT – Video Fund to Assist Canadian Talent, the funding arm of the three Chum Ltd. music-video stations (MuchMusic, MusiquePlus, MuchMoreMusic) – rejected my plan to require captioning of music videos in the early ’90s, though VideoFACT presently crows that it requires captioning on all videos it funds. Apparently VideoFACT is still not equipped with a decoder, because at least once every week I spot a new VideoFACT-funded clip with no captions whatsoever. And quality of captioning on all Canadian videoclips varies from poor to atrocious. Don’t dismiss this as hyperbole: I can prove it.

All the major U.S. labels caption videoclips. Exceptions: The various PolyGram labels still haven’t gotten their act together enough to systematically caption everything; Universal Records, formerly MCA, is not up to speed, either. And indie labels are not keen on captioning. However, for all intents and purposes, nearly all mainstream music videos are captioned in the U.S. Most of the major labels in Canada also caption their videos, though the quality of captioning is appalling, as it always is in Canadian captioning of prerecorded programs.

It is nothing short of a falsehood for Chum Ltd. to state that captioned videos are “scarce.” The scarcity, if it exists, was caused by Chum Ltd. itself. Ten years ago, Chum could have woken up and smelled the future and required captioning on all videos irrespective of source. It did not do so. Music-video stations are a license to print money: Though labels do increasingly levy a small handling charge on broadcast outlets to receive videos, effectively all of a video station’s programming is provided free or at laughably low cost by outside suppliers. In a span of, say, two years, and virtually for free, MuchMusic and MusiquePlus could have achieved 100% captioning of new videos, meaning that the only videos left uncaptioned would represent back catalogue.

Neither did the Canadian music-video oligopoly insist that Canadian subsidiaries of American labels import the captioned version of U.S. videos, something mighty Chum Ltd. had the power to do. (The labels, with whom I have spoken at exasperating length, claim that it is difficult to persuade U.S. labels to stop thinking of Canada as “international.” On this sole count, Canadians should demand continentalism and insist that U.S. labels acknowledge that both the United States and Canada use NTSC Line 21 captions. It only stands to reason that all videos flowing north of the border should arrive captioned in Canada if they are captioned for the U.S. market. To date, only a minority of major-label U.S. videos enter Canada with captions, as I have discussed elsewhere, an article the Issues Paper itself cites.)

Moral of the story: Don’t believe Chum Ltd., and don’t believe the CRTC, either, which is too daft and incompetent to do its own research and explode its applicants’ falsehoods. In fact, it’s too incompetent even to acknowledge submissions to its own license proceedings, which, I suppose, is not unexpected given that I revealed that neither Chum nor its chum the CRTC were wearing clothes.

Does the finding of the Canadian Commission regarding scarcity of captioned music video remain current? Is it applicable in Australian circumstances?

No and no.

Is more information available on captioning of music videos in Australian circumstances?

Captioning of videos in Australia will have to start from scratch. On the plus side, the experience in North America may help defuse any naysayers. I am willing to assist interested parties in making the case for captioned videos in Australia. Note that the whole point of captioning is to make a program accessible everywhere, so we must not be swayed by issues of cable vs. over-the-air music-video programming: However a music video reaches a viewer’s home, it ought to be captioned.


The Issues Paper goes on to cite CRTC license decisions for broadcasters operating in languages other than English that do not impose captioning requirements.

I suspect that regulation of accessibility issues like captioning is within the ABA’s mandate, and yes, that power should be exercised or else it is meaningless. As in the Canadian case, allow licensees to get away with bad captioning, no captioning, or not enough captioning and it is impossible to mount a credible threat later on. The ABA, starting effectively from scratch, is in the position to stand unwavering in its captioning requirements and to exact real penalties for noncompliance; any other approach dooms the ABA to emulate the CRTC’s history of wrist-slapping ineffectuality. (And in fact, we’d be lucky if the CRTC even bothered to slap an industry wrist.)

A phase-in of captioning requirements based on gross income, à la the CRTC model, might work in Australia. But monitoring compliance is crucial; quarterly written reports, with computerized logs of every program actually transmitted with captions, will be the minimum necessary reporting.
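As a minimal sketch of what such a computerized caption log might look like – the field names and the sample entry are entirely hypothetical, not any regulator’s prescribed reporting format:

```python
# Hypothetical per-program record for a quarterly captioning-compliance
# log, written as plain CSV so it can be audited mechanically.
import csv
import io

fields = ["date", "start", "end", "title", "captioned", "method"]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerow({"date": "1999-01-04", "start": "1830", "end": "1900",
                 "title": "Evening News", "captioned": "yes",
                 "method": "live stenocaptioning"})

print(buf.getvalue().splitlines()[0])  # date,start,end,title,captioned,method
```

A quarter’s worth of such rows, one per program actually transmitted, would let the regulator verify claimed captioning levels instead of taking broadcasters’ word for them.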


The ABC’s submission argues that research is required on use of existing captioned television services:

In the absence of independent research concerning levels of use for captioning services, the Corporation does not support a further extension of the legislative requirement for television broadcasters to provide captioned services for the deaf and hearing communities. The ABC believes an independent assessment of the patterns of use for captioning services in the fifteen years since their introduction is an important element in the development of policy on this issue.

We now have absolute proof of the ABC’s contempt for captioning and captioning viewers. If these are the viewpoints of the supposedly-more-enlightened public broadcaster, what must the profit-hungry media barons of Australia be thinking?

To follow ABC’s logic, without proof that existing captioning services are being “used,” those existing services should be terminated. Why is it necessary to research “levels of use” for “further extension” of captioning? Doesn’t this presuppose that ABC does not know the current “use” rate of existing captioning? Shouldn’t we worry about the present day before worrying about the future?

Does ABC suggest that the entire industry’s general disinterest in captioning, and its ongoing knowing denial of access to captioning viewers, is so profound that nothing short of ironclad scientific proof will prompt the industry to improve quantity and quality of captioning in Australia?

More relevantly, ABC fundamentally misunderstands basic concepts of disability access. By its reasoning, sidewalk corners would not include curbcuts for wheelchair users unless “independent research” proved in advance that actual wheelchairs would traverse those specific curbcuts. Such a policy requires disabled persons to prove ahead of time that they absolutely will make use of a provision before it will be made accessible to them.

Under this policy, no accessibility improvements would ever be made. It is impossible to prove a priori that any specific sidewalk in Alice Springs, Launceston, or Geelong will actually require a curbcut. But that misses the point. Nondisabled people, assuming they have the ability to use a sidewalk not equipped with a curbcut, do not have to phone ahead to some central politburo, and presumably file written requests in triplicate six months beforehand, to cross the street.

Would ABC like to impose similar conditions on, say, women, Aboriginals, Asians, white males, welfare recipients, or some other group? ABC misunderstands fundamental democratic concepts like freedom and equality: If nondisabled people have the option to cross the street, so should wheelchair users. If hearing people have the option to watch and understand ABC television programs, so should deaf people.

In the specific context of captioning, ABC’s malarkey holds even less water. No one is required to watch a program on ABC. The network produces or acquires the program and transmits it; what happens after that is up to the viewer, who may ignore or watch the program as he or she wishes. We are not dealing with the Ludovico Technique from A Clockwork Orange: ABC cannot force anyone to watch a specific show or even any show on ABC at any time.

Unlike pay-per-view television, there is no requirement to declare in advance that you will watch a program in order to receive it. If that is true for nondisabled viewers, it must equally be true for viewers with a disability: The absence of captioning restricts the freedom of the viewer to make the same choice of watching or ignoring the program a nondisabled viewer can make.

This issue goes to the heart of the jurisdictional and definition discussion that opens the Issues Paper. Yes, it is indeed true, as HREOC states, that

[t]he transmission of programs without captioning therefore appears highly likely to involve imposing a condition or requirement, rather than being an inherent part of the nature of the service concerned. The condition is that a person be able to hear for the person to be able to have access to the service on an equal basis.

Think of it this way: Australian broadcasters, like broadcasters worldwide, think nothing of the cost involved in presenting visuals and audio in their programs. It would be inconceivable to broadcast a program without visuals or without audio (with the notable exception of archaic silent films: in our era, we are long past that technological limitation). The cost of recording video and audio is such an accepted component of all aspects of the television industry that it is essentially invisible. In fact, these costs become visible only in budgeting for creative and crew time (e.g., directors of photography; sound rerecordist; boom operator) and equipment (DAT deck; Cubase software; microphones, booms, and headphones). Those budget line-items make clear how intricately sound and visuals are woven into television. Indeed, TV is not TV without both.

Yet broadcasters go to great lengths to avoid making visuals and audio accessible to viewers who cannot see or hear them. ABC in particular wishes us to believe that “no condition or requirement beyond the inherent nature of the service is in fact involved in lack of captioning”: In other words, the essential nature of TV is visuals and sound, and if you can’t perceive or understand sound, it’s not our problem. ABC and other broadcasters argue elsewhere, in effect, that producing a show in the first place – a show without captioning – must take primacy.

But the argument is self-contradictory. Producing a show means capturing, manipulating, and transmitting sound and video. If sound and video are so important and essential that ABC and all other broadcasters worldwide invest visible and invisible sums of money in recording, editing, polishing, and transmission, they are important enough to be buttressed by assistive technologies that take their place for viewers who cannot perceive or use them. If sound and pictures are that important, how can broadcasters refuse to compensate for some viewers’ inability to perceive them? If broadcasters see sound and video as important to the nature of their programming, such importance does not disappear if the viewer happens to be deaf.

Fundamental rights cannot be subject to double-entry bookkeeping. It is possible that captioning may never “pay for itself” in crass profit/loss terms, though given the low cost of captioning relative to TV production budgets, I suspect it pays for itself over and over again. The HREOC is quite right in stating:

The existence of the DDA means that the level of captioning to be achieved is not a purely discretionary decision for management of television stations to determine on the basis of assessed demand relative to other policy or commercial priorities. It is not properly to be viewed as solely a matter of voluntary commitment.

Now, if broadcasters really are concerned with maximizing their return on captioning investment, they should push for mandatory inclusion of decoding chips in television sets, any device that can be used to display television pictures (like circuit boards in computers), and video recorders and players. Within five years, the captioning audience will increase by orders of magnitude. Captioning will no longer really be an opt-in technology, where a viewer interested in watching captions has to buy something special. It will simply be a matter of pressing keys on a remote control. We will return to this subject later.

Also:

Even in cases where priorities have been negotiated in good faith, or determined after consultation with organisations with apparent expertise and authority, it is not clear that the complaint mechanism under the DDA provides individuals or organisations any binding authority to negotiate regarding the rights of other potential complainants (other than in the settlement of a representative complaint).

The Independent Television Commission in the U.K. has pretty much solved this problem. Largely irrespective of any previous action, it decided, rightly, that all genres of programming should be made accessible via captioning, sign language, and audio description. Existing contracts are largely irrelevant given that the ITC does not require previous or legacy programming in a broadcaster’s back catalogue to be made accessible.

Rights cannot be bargained away, either by the person involved or in absentia. “Priorities... negotiated in good faith” are irrelevant under this philosophy: Captioning is a right, not a privilege, and, with the limitation mentioned previously, broadcasters’ priorities about which exact programming or genres will be captioned carry very little moral or legal weight.

Note that this blanket applicability of access technologies is not quite blanket: It may be reasonable, given Australia’s low standard of captioning, to permit the prioritization of news and current-affairs programming. This should not be taken as license to caption only or mostly news or current affairs. It should also not be taken as license to continue refusing to caption, or rarely captioning, certain other genres of programming, like music videos or live sport.


On the topic of emergency captioning:

FACTS draft code of practice states that its members will, “when broadcasting emergency, disaster or safety announcements, provide the essential information visually, whenever practicable. This should include relevant contact numbers for further information.”

A typically weak requirement, one with no policing procedures or penalties, that also misunderstands what access really is. It would not take very long to set up a system whereby all television stations in Australia, or at least in areas often affected by sudden serious weather conditions, keep on call the names and contact numbers of qualified stenocaptioners who can start work with minimal notice. It’s done all the time in North America, and it is commonplace for captioning companies to work round the clock captioning disaster or weather coverage, often for free. The captioner does not need to be in the same city or state as the TV station: Over the short term, stenocaptioning can be done adequately via a telephone headset.

Also, the requirement isn’t really a requirement, because it provides the loophole “whenever practicable.” If hearing people’s lives are worth saving through warnings of severe weather, so are deaf people’s lives. Television stations in Australia, and central sources of information for disasters and severe weather, must be equipped with TTYs, and “relevant contact numbers for further information” must be given as voice and TTY numbers.

Under battle conditions, a reasonably good QWERTY typist can type more-or-less-acceptable captions for broadcast during emergencies until a real stenocaptioner can be engaged. If the warnings in question are brief bulletins, even those presented every few minutes or every hour, it is not too difficult to enter the script of the bulletin into an electronic newsroom system or simply into the character generator every TV station owns. A station could laser-print warnings on ordinary paper and telecast pictures of those if need be. The bottom line is that there is no technical reason why every single disaster or severe-weather warning cannot be captioned.

On this topic, the Commission states:

D. Bradley’s submission notes that there are large numbers of hearing impaired people among older Australians. Many of these people may be particularly dependent on television for information and entertainment. The Commission’s experience on other issues indicates that many of them may not identify closely with or have their views directly represented by disability community organisations.

This was precisely the reasoning behind the passage of the Television Decoder Circuitry Act in the United States. Intervenors pointed out that there is indeed a stigma attached to deafness, particularly late-onset deafness linked to aging. Experience showed that this stigma was strong enough to keep some people who would benefit from captioned TV from buying a decoder; also, it offends the sensibilities that any viewer, particularly one who feels some degree of shame about hearing loss, should pay extra for a separate device to make television understandable. (It doesn’t stop there: How does one communicate with a TV salesperson, who probably knows nothing about caption decoders in the first place, when one has a hearing impairment and/or is a bit shy about having to buy a decoder at all?)

Further, it is known that deaf people are not the only group that watches captions, as the Commission itself recognizes.


Mr Bradley’s submission also notes that the audience for captioning of the Olympic Games will be a worldwide audience rather than being restricted to deaf and hearing-impaired Australians.

This is a much trickier issue than is suggested here or in the other brief mentions of international captioning. Olympic Games are telecast by host broadcasters. The exact nature of the feed varies widely. It is not uncommon, especially when Americans are at the helm, to force foreign broadcasters to accept a feed with English type, graphics, and supers even if the home audience does not read English. In more elaborate and better-designed cases, multiple feeds are available – one thinks of NBC’s multiple feeds during the Atlanta Olympics, most of them on pay channels. Also, inevitably the host broadcaster declines to air domestically some events that other nations do want to watch, though a feed is still provided. In short, pretty much every second of Olympic competition is covered by the host broadcaster.

But because of TV format differences – Australian World System Teletext captions don’t immediately work on NTSC Line 21 decoders and vice-versa (but see below) – and because, even if the visuals are exactly the same everywhere, the commentary in different countries will also differ, it is simply impossible for the host broadcaster to caption the Olympics for all the other broadcasters taking the feed. It cannot be done. We certainly know this in Canada, where English- and French-language broadcasts, and American-language cablecasts, are all available. It’s quite possible to watch the same event on three different feeds from a Canadian living room, and, while the English feeds will differ, they’re always captioned; some French feeds are captioned, too.

However, it is possible for a broadcaster to caption all or part of the Olympics in its own country. We do it all the time over here, and in fact Olympic Games require the rollout of pretty much every single competent stenocaptioner. There is simply too much going on for one firm to handle. Yes, this does mean that competing firms all caption different parts of the Olympics. It’s been a good ten years since any part of the Olympic telecasts in Canada or the U.S. appeared without captions. It can be done. It is being done.

In the Australian case, it is certain that there won’t be enough stenocaptioners in the entire country to caption the Olympics just for the home market. It may be necessary to import some talent from North America or England, or simply contract for the captioning to be done remotely from here or the U.K. It is technically possible to transcode Line 21 live captions to World System Teletext. Or, if U.K. captioners were used, all that’s necessary is a satellite feed and a phoneline to the encoder in Australia. For the purposes of this discussion I am omitting many details, but these plans are by no means outlandish or particularly expensive.


Do the provisions of the HREOC Act and the Convention on the Rights of the Child require the Commission to promote priority for access to children’s programming in performing functions regarding closed captioning under the DDA, including in considering complaints or exemption applications?

No. Children are not more important than other groups; they also are not less important. By the Commission’s own admission, captioning appeals to and is used by diverse groups.

However, the Convention provides yet another reason why captioning must not be avoided: in this case, a reason to caption children’s programming. And we need these additional requirements, since broadcasters are already trying hard to avoid captioning kids’ shows. Pace ABC, “there is clearly no need to caption programs for preschool children as they are not able to read the words.” This, naturally, is laughable. Children can’t understand all the audio, either. And, while other respondents note that parents of deaf children have a right to understand what their kids are watching, both sides miss two crucial points:

Given the known utility of captions in ESL and the mounting evidence of captioning’s utility in educating deaf kids, are there really any valid reasons at all not to caption children’s shows?

Are programs such as Playschool exclusively aimed at and used by pre-reading children?

Perhaps ostensibly, but not in actual fact. Pre-literate children are not the only audience, and in any event, a pre-literate sighted child benefits from captioning.

Is captioning possible and worthwhile for pre-school programming?

Yes. It’s done all the time in North America. Sesame Street and Shining Time Station are only two examples, though NCI edits the dialogue of Sesame Street, and takes other unusual measures, in a half-arsed attempt to make the captions “more understandable.” (There is no research backing up that practice. It is best to simply caption a children’s show like any other program. Captioners should not delude themselves into thinking they act in loco parentis. Provide in captioning all the information a hearing adult would glean from the program.)


The SBS submission notes that many SBS programs are open-captioned. That is, they have subtitles which can be read without use of decoding equipment. This reflects the high proportion of non-English-language programs broadcast.

SBS and the HREOC are both mistaken on this count. (So is the CRTC, for that matter.) Subtitles are not captions. Subtitling translates dialogue in one or more languages into written words in another language. Despite their seeming similarity, captioning and subtitling have very little in common.

  1. Captions are intended for deaf and hard-of-hearing audiences. The assumed audience for subtitling is hearing people who do not understand the language of dialogue.
  2. Captions move to denote who is speaking; captions can also explicitly state the speaker’s name (e.g., “Homer:”, “[NARRATOR]”, “Tennison:”). Subtitles are almost always set at bottom centre.
  3. Captions notate sound effects and other dramatically significant audio. Subtitles assume you can hear the phone ringing, the footsteps outside the door, or a thunderclap.
  4. Subtitles are always open. Captions are usually closed.
  5. Captions are in the same language as the audio. Subtitles are a translation.
  6. Subtitles also translate onscreen type in another language, e.g., a sign tacked to a door, a computer monitor display, a newspaper headline, or opening credits.
  7. Subtitles never mention the source language. A film with dialogue in multiple languages will feature continuous subtitles that never indicate that the source language has changed. Captions tend to render the language of dialogue in its own writing system or state the language (for example, JE VOUS EN PRIE, MONSIEUR or [SPEAKING RUSSIAN]).
  8. Captions ideally render all utterances. Subtitles do not bother to duplicate some verbal forms, e.g., proper names uttered in isolation ("Jacques!"), words repeated ("Help! Help! Help!"), song lyrics, phrases or utterances in the target language, or phrases the worldly hearing audience is expected to know ("Danke schön").
  9. A subtitled program can be captioned (subtitles first, captions later). Captioned programs aren’t subtitled after captioning.

Subtitles are not a substitute for captioning. If you don’t believe me, try watching subtitled films (including those on DVD) with the sound off for a few weeks and see if you can figure out what’s going on.

Subtitled programs can be and are captioned. SBS, whose disdain for captioning and whose ongoing deployment of subtitled programs as a wrongheaded substitute for captioning are well-known, writes:

SBS hopes that, in terms of making programs accessible to people with hearing impairments, the inquiry will treat SBS subtitles as achieving a similar outcome to specifically designed closed captions. It is worth noting here that to provide closed captions on top of subtitled programs would make the television screen extremely cluttered for those people choosing to access closed captions.

SBS will say anything to avoid increasing captioning on its network, which one respondent to this inquiry gauged at 4% of SBS programming. SBS is ignorant of the correct method of captioning subtitled programs. SBS executives’ phobia of captioning, hindered literacy, and inability to handle information-dense screens are apparent here. In captioning subtitled programs, the captions do not replicate the subtitles. Instead, they add speaker identifications, note sound effects and other nonvocal noises, and generally make up for the deficiencies of subtitling for a deaf audience. Captions also render English-language narration and dialogue.

Captioned subtitled programs are not uncommon in North America. The Wonderful Horrible Life of Leni Riefenstahl, captioned by WGBH, showed subtitles, captions for the English narration, and captions giving speaker IDs and sound-effect information. It’s a very clean and agreeable, and very multimedia, experience.

(There’s another interesting example, the R.E.M. music video “Everybody Hurts,” where onscreen titles caption the thoughts of various people stuck in a traffic jam. The closed-captions sometimes replicated the titles when those titles and the sung lyrics coincided, but that was uncommon; also, captions and titles appeared and disappeared according to their own schedules. All in all, a very pleasing viewing experience – for someone who can handle it. Apparently SBS cannot.)

The FCC was wrong to permit subtitled programs to count toward accumulated hours of captioned TV. Captioning and subtitling are separate.

The Commission’s question, “Is closed captioning of subtitled material necessary?” can be answered only one way: Yes.


In what circumstances would it involve unjustifiable hardship to require captioning of retransmitted overseas news bulletins such as those presented by SBS WorldWatch where this material is not captioned in its country of origin?

It is probably excessive to caption a program whose intended viewer is not English-speaking. Again, though, we have experience with this phenomenon here in North America. In 1994, PBS ran the Russian news program Vremya with dubbing. The production of the show emulated that of the original Captioned ABC News of the 1970s: The show was recorded and translated, and then rebroadcast with interpreters reading the translated copy over top of the original audio.

Vremya was produced by WGBH in Boston, home to the Caption Center, where policy holds that each production carry a line-item for captioning. But there were no captions on Vremya. I asked why, and an industry source told me:

Vremya wasn’t captioned because it was aired on a shoestring budget. Many discussions were held as to how it might be ’titled someday, on such fast turnaround, in such a huge quantity (30 minutes a day forever is a huge amount of captioning). So the answer, as always, is [money].

Also, about five years ago CBC Newsworld experimented with airing the main French-language news shows, Le Point and Le Téléjournal, with English open captions. In this process, the show was rebroadcast with intact French audio; captioners listened to simultaneous interpretation via speakerphone and captioned that. Captions became combination captions/subtitles.

Problems: Pretty much every proper name was omitted or paraphrased given that the captioners did not know the spellings in advance; delay between utterance and title was considerable; the titles were ugly.

The right way to do this would be the Vremya approach, with a sufficient leadtime to transcribe proper nouns. The show could then be broadcast and the captions live-displayed in close synchrony with the audio. Apart from money, that was impossible because the two shows air live starting at 2200 hours. The Vremya approach would have bumped the subtitled/captioned English version to the middle of the night Eastern time.

Also, CFMT, a multilingual TV station in Toronto, uses electronic-newsroom captioning for its Italian- and Portuguese-language weeknight newscasts. None of the live segments is captioned, and there are no accented characters, but it is an interesting experiment. Naturally, CFMT can caption via ENR because it scripts and broadcasts its own news programming.

None of these cases exactly matches SBS’s circumstances. Newscasts transmitted in an original language other than English should not be required to be captioned. However, newscasts in Latin-alphabet languages could in fact be captioned, and perhaps in the fullness of time SBS’s imagination will expand sufficiently to give that a try. But don’t hold your breath.


As proposed elsewhere in this Submission, the Corporation believes decisions about the further extension of captioning services should be informed by an independent assessment of patterns of use for the service by Australia’s deaf and hearing-impaired communities.

Deaf and hearing-impaired viewers are not the only group that benefits from captioning. The entire rationale for this “independent assessment” is suspect, as described before, but if it happens it must tally all the users of captioning in Australia.


[T]he DDA clearly requires issues of cost to be considered in making decisions on whether providing access to a service would involve unjustifiable hardship or would be reasonable. This is not to say that cost issues will necessarily be decisive against captioning of particular programs. This decision would also require consideration of:

Cost-benefit analysis of this type must take into account the broadcasters’ existing infrastructure to capture, manipulate, and transmit sound and video. If their budgets can accommodate sound and video, they must accommodate captioning, too.


Would it be appropriate for the Commission to adopt a percentage level of revenues as indicating or determining unjustifiable hardship and reasonableness issues?

No. Under such a plan, broadcasters could simply eat up their assigned captioning budget in the first couple of months and then could legitimately assert that they have met their requirements. Perhaps the CRTC model of assigning captioning requirements by income level would work better. (The CRTC’s approval of the MuchMoreMusic license, and others from Chum Ltd., fell hook, line, and sinker for the fixed-budget approach to captioning. Allotting a set sum of money, or a set budget percentage, does not guarantee minimum levels of captioning. It’s too easy for a broadcaster to cook the books and wiggle out of its obligations.)


How far does a fixed budget for captioning necessitate a more or less fixed level of captioning?

The terminology is misleading. Few budgets anywhere are not “fixed” – on the expenditure side of the balance sheet, at least – because broadcasters rarely have unlimited money to spend on any line-item. One could equally ask “How far does a fixed budget for audio and video production necessitate a more or less fixed level of audio and video production?” Captioning must be seen as an essential component of the production process, as the Commission implicitly concedes in its discussion of the “imposition of a condition or requirement.” If a broadcaster has the funds to produce, acquire, or broadcast a program, the broadcaster also has the funds to caption it.

Of course, this vantagepoint requires broadcasters to give up their age-old cover story that captioning is too expensive, serves too few people, and is too extraneous to their central mission of broadcasting (or, more accurately, of making money). If there’s money for audio, there must be money for captions, because, as discussed before, audio is an essential element of television and is too valuable for a broadcaster to permit some viewers not to understand it.


It appears however that at least some of these comments were discussing and dismissing the foreseeable prospects for voice recognition as a fully automatic and fully comparable substitute for comprehensive high quality captioning, rather than as a partial but still effective measure for increasing access, or as a means of facilitating more comprehensive captioning through low cost provision of text for editing and addition of captions on non-spoken material.

We are stuck, here in Canada, with astonishingly inept captions produced via ostensibly magical voice-recognition software (aided by ill-trained, careless, illiterate human operators with all the linguistic facility of a Valley girl). The proof of the pudding is in the eating: Captions created with this software (see Chum Ltd.’s baldfaced claims) are the worst of the worst and barely qualify as English, let alone as good captions.

It might also be noted that these comments are from organisations highly experienced in developing and using existing captioning methods, and in the case of VITAC, an organisation commercially in the business of promoting these methods. It is not clear on the materials available to the Commission whether these organisations are equally engaged and expert in speech recognition technology.

The Commission unfairly insinuates that the existing captioning providers are knowingly suppressing or downplaying useful voice-recognition technology. The Commission could, I suppose, be forgiven for this since the Commission’s lived experience of watching captioned programming is limited or nil. There is no VR system available for sale anywhere that can do what a human being does: Listen to and understand continuous utterances from multiple speakers whose voices compete against other audio sources. Speaker-independent VR may exist in certain secret research labs, but you cannot buy such a system today, nor will such a system be on the market anytime soon.

The reference in this comment to “speaker independence” identifies a limitation of current speech recognition technology[...]. This might prevent available voice-recognition systems’ being used for some news and current-affairs purposes (such as interviews). It is not clear that it presents a barrier to use of such systems in live or near-live programming where there are only one or two speakers (perhaps including sports commentary).

This passage is self-contradictory. In an interview show, according to HREOC precept, only one person speaks at a time. But sports commentary is placed in this category, too. If VR works for sports commentary with limited numbers of speakers, why wouldn’t it work on an interview show with similarly limited speakers?

In any event, the real world of television doesn’t work that way. Voices uttered in silence are rare. Current and near-future VR systems cannot readily tease out a human voice from a complicated soundscape, a task even human children find trivial.


Does the electronic-newsroom method for captioning offer a viable and acceptable means of achieving captioning of news programming, whether generally or as a short- or medium-term solution or a solution for some types of programming or some types of provider?

Use of ENR must be tightly regulated. It is of use in disaster or weather warnings when stenocaptioning cannot immediately be provided. It is useful in pre-scripted brief news bulletins, but the experience in North America tells us that stenocaptioning produces consistently better results given that succeeding bulletins often reword the same stories to ease monotony; somehow those changes are not always updated in the ENR script sent out as captions.

ENR fails as a generally-applicable method of captioning news and other programming for one reason: It doesn’t give you the whole story, and as such it lies to the viewer. Which is worse, watching a film where the cinema doesn’t even have the final reel in the projection booth or not watching the film at all? Which is more frustrating, understanding half (or three-quarters, or 90%) of a news story or not understanding it at all? Would hearing people accept being able to understand the stand-up pre-scripted intro by a news reporter but not any other part of a report?

ENR does not provide equal access, even under circumstances described in the Issues Paper:

WGBH [stated to the FCC] that, if carefully and intelligently prepared, ENR captioning can provide access to large portions of news programs. In this regard, WGBH states that we should indicate that users of the ENR method need to enter additional script transcriptions into their systems. It suggests that we require that a percentage of a program (e.g., 50% or 75%) be accessible through captions if ENR is used.

I am on good terms with WGBH and approve of most of what they do, but WGBH is simply wrong here. I’ve been watching various ENR abominations for a decade and have never seen a newscast that captions 75% of its content via ENR. More relevantly, the 25% that’s missing is crucial to understanding the story: It is the meat, not the potatoes.

Are there any other low-cost captioning methods available which avoid some of the identified limitations of the ENR method?

No.

What possibilities are there for using computer voice recognition, as contemplated by the [CRTC] in its 1995 decisions?

None.


On each and every night, NBN Television effectively broadcasts five separate news bulletins throughout Northern NSW from the Newcastle studios. These multiple services are achieved by pre-recording local stories from each market, such that four areas receive a one hour bulletin that is part live and part recorded and one area receives a fully live bulletin. The split-second timing of this process and the enormous time required to successfully operate this “windowed” production format makes it virtually physically impossible to provide captioning for our multiple services.

Again the Australian broadcast industry seeks to project its own ignorance and naïveté onto the regulatory process.

It is quite easy to caption multiple simultaneous material. It is done every day of the year in North America, and in many cases the captioners work from remote studios watching exactly the same feed every other viewer watches as they do the captioning. It is an unusual feedback loop (watching the same show you are captioning), but there is no reason whatsoever it could not be done in Australia.

The number of stenocaptioners (with modems linked to encoders at the broadcast site) must simply equal the number of newscasts. For this particular example, the convenience of multiple time zones does not come into play. It is not impossible to caption such simultaneous newscasts; it is merely expensive.

Even if a technical method could be found to provide for all services, the practicality and cost of finding five separate stenographers to provide the captioning would be restrictive.

There is no technical limitation. Cost of stenographers is an issue. “Finding” stenographers must become an Australian national priority: The country must more aggressively promote the profession of stenography to keep pace with demands for captioning (and general court reporting). It takes a good three years to train a court reporter from scratch, and only some trainees have the speed necessary to caption TV. Most students can be brought to 120 words-per-minute stenography fairly easily. Only a small number can achieve 180 to 200 words per minute, the working minimum for TV. There are neurological and physiological limitations at work. The only way to find out if you can achieve 180 to 200 wpm is to learn from scratch. Everyone plateaus at a different level. Accordingly, Australia needs to encourage many more people to take up court reporting in order to find the small fraction who can stenocaption fast enough for TV.

Of course, the other option is to import stenographers from overseas. Canadian and American court-reporting schools might be willing to promote an exchange program whereby high-ranking graduates work in Australia for a certain period of time. Naturally, immigration paperwork becomes an issue here, and only some students will be keen to leave the U.S. or Canada for Newcastle, NSW, but this possibility should be pursued.


These bulletins have modest audiences but they serve local communities well. They operate on tight budgets and marginal returns. Any significant increase in news costs will result in a decline in the number of news services provided for sub-markets, in favour of composite bulletins, as well as a possible decline in the number of composite bulletins, in favour of no local news at all.

FACTS threatens to cut off its nose to spite captioning viewers’ faces. Given the choice between a larger number of uncaptioned broadcasts and a smaller number of captioned ones, the latter wins. Access is fundamental. There is, however, no reason to cancel local news altogether, though if any broadcaster did so it would be proof positive of the broadcaster’s determination to evade captioning completely, evidence of which can be read between the lines of FACTS’s submission generally.

In addition, there are very few or no people with necessary specialised live captioning skills in many regional areas.

Captioners not only do not have to be in situ, they usually aren’t, at least in North America. Satellite dishes or simple speakerphones are sufficient technologies to caption local news.


[ABC TV notes that l]ive sport broadcasts present particular challenges for captioners, as it is difficult to provide a captioned description of play which is simultaneous with the action.

A blatant lie. Sports captioning has been going on for more than ten years in North America, and there is nothing about the Australian experience that makes it impossible to caption sport. Indeed, it’s already been done: The AFL grand final was captioned on Seven. If it’s so “difficult,” why was Seven able to manage it?

Since Australia’s broadcast industry is ignorant of sports-captioning techniques, let me bring them up to speed. There are three main approaches:

  1. Caption everything all the time. In this method, all the commentary is captioned as it is spoken – and it is all displayed, usually in two lines and very often at a screen position designed to avoid covering up scores and suchlike. This method is almost identical to news captioning in that an attempt is made to caption all utterances.
  2. Caption at important moments, but hold back from captioning certain commentary. We see this commonly in hockey games. The run-on patter of the commentator is not necessarily captioned, but whenever play stops all commentary is captioned, and anything absolutely necessary to understand the game is captioned. For example, when a whistle blows and a call is made that the commentator explains or repeats, that sentence is captioned. Or if a player makes an apparent foul that is not caught by an official, the commentator’s remarking on it will be captioned. If an on-ice or on-field microphone picks up some utterance by a player, that will be captioned. This selective approach is in many cases a reasonable compromise between making the commentary accessible and making the action easy to follow.
  3. A technique used in the 1988 Olympics, particularly for figure-skating where viewers find it important to watch a performance continuously, involved stenocaptioning every utterance but saving up the text and displaying it using pop-up caption blocks (exactly as you’d see in a dramatic program captioned before telecast). This technique is no longer in use: The delay between utterance and caption usually reached 20 seconds or more, there was no real attempt to chunk the captions at semantically meaningful points, and usually the result was three badly-formatted lines of caption at a time, working against the goal of distraction reduction.
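The buffering in approach 3 can be sketched in a few lines. This is my own minimal illustration, not broadcast software: live stenocaption text is saved up and flushed as pop-up blocks. Note that this naive version breaks purely on a character budget, which is precisely the semantic-chunking weakness the 1988 Olympic captions exhibited.

```python
# Minimal sketch (illustrative only) of the "save up and pop on"
# technique: a stream of stenocaptioned words is buffered into
# pop-up caption blocks of up to max_lines lines of max_chars each.
# Real captioning would break at semantically meaningful points;
# this version breaks wherever the character budget runs out.

def chunk_captions(words, max_chars=32, max_lines=3):
    """Group a stream of words into pop-up caption blocks."""
    blocks, lines, current = [], [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
            continue
        lines.append(current)           # current line is full; start a new one
        current = word
        if len(lines) == max_lines:     # block is full; pop it on screen
            blocks.append(lines)
            lines = []
    if current:
        lines.append(current)
    if lines:
        blocks.append(lines)
    return blocks

for block in chunk_captions("the skater lands a clean triple axel to huge applause".split()):
    print("\n".join(block), end="\n\n")
```

A production system would also have to manage the display delay the text above describes, which is where the 20-second lag came from.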

Something broadcasters (and regulators) fail to understand is that regular viewers of captioning are far more visually sophisticated than they are. Whether hearing, deaf, or in-between, we can and do take in a great deal of information at once. ABC’s claim, in effect, recapitulates the very philosophy that caused captioning to become closed in the first place: Captions are distracting. Sure – at first. But after a couple of weeks of watching TV with captions on at all times, not only do you get used to the addition of text to sound and image, you learn to manage all the visual inputs simultaneously. Captioning viewers, in effect, were the original screenagers, to borrow Douglas Rushkoff’s term from his book Playing the Future.

It is quite easy, therefore, for a regular captioning viewer to balance watching the action, looking at onscreen graphics, and reading captioned commentary. ABC executives may find it difficult, but they are not the audience, and their blithe, jejune dismissal of text-dense television must not make its way into the regulatory sphere.

Further, captioning multiple simultaneous sports events is done all the time here. Most weekend hockey games, and nearly every CFL football game, are all split telecasts, with certain parts of the country receiving one game and the rest the other. The same is true in the U.S. All that’s required is to engage a number of captioners, modems, and encoders equal to the number of games. Admittedly, Australia needs more stenocaptioners, but the claim that live sport is uncaptionable or nearly so is false.

FACTS malaprops:

There is a considerable labour cost and difficulty in captioning sport because of the unusual hours, the multiplicity of some events (e.g. the different AFL matches broadcast in each market on weekends) the length of some events (e.g. cricket matches and tennis and golf tournaments), location, and improvised and sporadic commentary. Given the visual nature of sport and the high level of on screen statistics and other information, the need for captioning is not as pronounced as in other areas of programming.

The “labour cost” of captioning sport is in proportion to the cost of putting on the telecast in the first place. Length of tourneys is irrelevant: Hockey Night in Canada lasts from 1830 to at least 0100 hours each Saturday night, and it’s captioned. Prolonged tennis matches, including the U.S. Open, Wimbledon, and Australian events broadcast here, are routinely captioned. For heaven’s sake, the O.J. Simpson trial was captioned almost from start to finish. Length is not an issue.

This business about “the visual nature of sport” is sophistry and betrays the intellectual shortcomings of FACTS administrators. If sport is so visual, why use commentators at all? Does FACTS suggest that viewers learn nothing from commentators that they could not see from the video? The CRTC has also bought this nonsense before (e.g., in licensing the Weather Network/Météomédia without captioning), but it is an untruth.

The distortions from FACTS get worse.

There exist enormous difficulties in the captioning of live, fast-moving variety-style programming such as The Footy Show and Hey Hey Its Saturday. In this type of programming the rapid interplay of conversation, graphic visuals and music on an impromptu basis makes captioning a difficult process, and an extremely expensive one.

Here we have further evidence of FACTS executives’ visual unsophistication. Captioning viewers have no problem at all assimilating all those elements plus captions, and FACTS must not be permitted to project its own inability to keep up with its own members’ programs onto everyone else. And in any event, there is nothing inherent in sports talk shows that “makes captioning a difficult process.” We caption commentary shows like those all the time in North America, such as the rollicking Coach’s Corner segment on Hockey Night in Canada. CBC Newsworld runs a good half-dozen shows with rapid commentary – not all of it about sport, of course, but the whole point here is that FACTS’s assertion of the “difficulty” of captioning such shows is a lie.

To what extent do the difficulties and expense of captioning live material constitute unjustifiable hardship for DDA purposes?

To no extent whatsoever. It is not difficult to caption live sports.


Should the Commission accept that captioning for Olympic events must draw on an existing captioning budget for the network concerned? What account should be taken of sponsorship and other funding sources for these events?

The Olympics are too big a job to be subsumed under everyday captioning budgets; they require special funding. Increasingly, Olympic captioning is paid for by advertisers in the country of captioning. Since major corporations are the ones who pony up the cash to advertise on the Olympics, they could easily afford to sponsor captioning. And remember, caption sponsors are credited by an opening and closing caption. (Sometimes they’re credited every hour or half-hour.) It’s an easy, relatively inexpensive advertising vehicle for corporations. If they’re unreceptive to the idea, point out that their counterparts in Canada and the U.S. (very often the same corporations!) do it, so why can’t they?


What if any actions should the Commission take or recommend regarding quality of captioning?

Caption quality is an area of concern. Canadian captions for prerecorded programs are a disaster from stem to stern. Most American captioned programs are at least acceptably well-done and, in the case of WGBH, usually very well-done. It is impossible to explain in print what does and does not make for quality captioning; the only way to discuss these issues is in person with tapes rolling.

About the best the HREOC can do is to insist that errors pointed out to a captioner or broadcaster be corrected – and that must include re-encoding the program so that tapes no longer carry the error.

However, on the issue of pass-through of NTSC Line 21 captions in Australian broadcast of imported programming, I think it’s time for the Australian Caption Centre quasi-monopoly to loosen the strings a bit and clean up American captioners’ mistakes. For example, during the first two years of the telecast of the series E.R. I regularly spotted mistakes, some subtle, some egregious, and E-mailed corrections to the Caption Center at WGBH. Even though the Caption Center is the best in the business, at root it really does not believe in quality. When an error is spotted, the caption file may be corrected (may: WGBH is territorial about its mistakes), but the actual show is never re-encoded, meaning that subsequent telecasts of that same tape always carry the error.

If the show is later re-encoded, which rarely ever happens (usually when a show is re-cut for syndication, with its more frequent commercial breaks), the corrected caption file may be used. In the case of The Simpsons, I know for a fact that my corrections to that series’ caption errors did not make it into the syndication edit. (WGBH claims they were required to hand the caption files over to an outside company to do the reformatting. Uh, you handed them files you knew had errors?)

When Australians, then, watch American programs with American captions, 99 times out of 100 the captions have not been corrected of errors spotted since the first airing. The Australian Caption Centre told me in person that they don’t monkey with other captioners’ files unless something is blatantly wrong; the claim is that copyright law prevents such changes. This is of course untrue. You’re already creating a derivative work from the original caption file by synchronizing it to PAL program timing; the original file is not maintained in its pristine entirety. And besides, correcting unintentional mistakes is not a violation of the moral right of the creator.

What if any actions should the Commission take or recommend regarding pass through of existing caption files?

In this case, the HREOC should state that preserving imported captions on imported programs is an acceptable practice because it saves money. However, the captions must be vetted for errors and corrected before being broadcast in Australia. This applies equally to British programs or others using exactly the same teletext format as Australia. Whoever handles the words in a caption file is responsible for their accuracy, including the Australian Caption Centre or its competitors.


Is the allocation of responsibility in the first instance under the DDA (and the United States FCC rules) to television stations rather than producers the most appropriate, efficient and effective approach?

Yes. They’re the ones transmitting the signal.


the logistics of distribution of commercials may also impose an economic burden that outweighs the benefits of requiring captions. Video programming distributors receive large numbers of advertisements, often close to air time, and to monitor whether each individual commercial is captioned could be burdensome.

Another lie, this time from the American industry. Captioning of commercials is so routine in North America that commercial captioning is a cash cow for most big captioners. Captioning costs at most a few hundred dollars per ad, and same-day turnaround is typical. (Bigger agencies often ask that the caption file be sent via modem to the agencies’ preferred tape houses, which encode the commercial on the spot. The only bottlenecks are transporting the original cassette to the captioner and the actual captioning.)

In the U.S., the annual Super Bowl is known as the showcase for “cutting-edge” commercials, if that is not an oxymoron, and many deaf activists have been tracking the incidence of captioning of Super Bowl ads for years. The Caption Center in New York prepares every year by gearing up for fastest-possible turnaround times, with couriers at the ready to ensure that everything really does get captioned. The claim of short turnaround times is a red herring. A commercial whose postproduction occurs so close to airtime that it can’t be captioned is a commercial put together by people who don’t know how to schedule. Build captioning into the schedule beforehand and there will rarely, if ever, be problems.

(There are other ways to speed up the captioning process. If an advertiser were really in a hurry, the script or storyboard could be fax-o-grammed or E-mailed to the captioner while the tape is being sent via messenger to the captioning office. The basic text of the ad would therefore reside in a captioneer’s computer when the tape arrives. In a matter of minutes, a finished caption file could be sent via modem to the tape house. There really are no excuses.)

Are there any circumstances in which captioning of advertisements would impose unjustifiable hardship under the DDA?

None whatsoever, assuming the language of the intended viewer is English or is written in Latin characters. It is expensive to make a TV commercial, but inexpensive to caption it.


Would it be appropriate for Australia to have regulation comparable to the US and UK requirements that television receivers above a certain size have caption decoder capacity be appropriate, and if so in what form?

Yes. Separate legislation would be required. Adopting the exact language of the Television Decoder Circuitry Act would be a good start: Any device capable of displaying television images with a screen size larger than 13 inches (which may be 14 inches in the Australian measurement system; insert appropriate metric equivalent) must carry caption-decoding circuitry. World System Teletext and NCI PAL Line 22 captions must be equally supported. Similar requirements must carry over to digital TVs and devices.
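For drafting purposes, the metric equivalent of the Act's threshold is simple arithmetic (1 inch is exactly 2.54 cm, measured on the screen diagonal):

```python
# Unit conversion for the draft wording above: the Television Decoder
# Circuitry Act's 13-inch screen-size threshold, expressed in
# centimetres. 1 inch = 2.54 cm exactly (diagonal measure).

INCH_TO_CM = 2.54

def inches_to_cm(inches):
    return inches * INCH_TO_CM

print(inches_to_cm(13))  # 33.02
```

So a 13-inch threshold corresponds to roughly 33 cm, and a 14-inch one to roughly 36 cm.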

And on the topic of captioned home videos: The NCI system uses Line 22 of the vertical blanking interval in PAL captioning, not Line 21 as in NTSC captions. The Issues Paper is in error.

Most video distributors are now attempting to ensure that any videos that they release contain Line 21 captions (which are created for the American and British markets).

This is inaccurate. Assume a Hollywood picture is captioned first in the U.S. These days NCI captions only a minority of Hollywood films; Captions Inc. has so undercut NCI’s price that it captions nearly everything. (The Caption Center is almost entirely out of the business of captioning first-run home video.) If the distributor wants a captioned Line 22 version for PAL countries, in practice the only company that creates such captions routinely is NCI, though the Caption Center may or may not have that capability.

If the U.S. release were captioned by another company, NCI has to caption it from scratch for PAL Line 22. If NCI captioned it in the first place, the file still must be manipulated because Line 22 captioning takes place in upper- and lowercase, while Line 21 captioning is largely in uppercase for reasons of illegible fonts. (The fonts in Line 22 decoders aren’t any more legible, in fact; no one has ever hired a qualified digital type designer to design real caption fonts. But U&lc is the NCI practice for Europe.) Thus it is not a case of simply reloading the existing U.S. captions for the European release.

The industry is working on a standard identification symbol to show consumers which videos carry closed captions.

Oh, here we go again. I know of no fewer than six pictographs used to denote captioning in North America, fully four more than necessary. The generic Caption Center symbol – two Helvetica Condensed Cs in a TV frame – can be freely used anywhere. Designer Chris Pullman can provide an EPS file that any interested party may use. Don’t reinvent the wheel. Australia is not so distinct that it needs an Australia-only symbol to denote captioning. And in any case, if NCI does the Line 22 captioning, NCI will invariably insist that only its registered servicemark be used.

What recommendations should the Commission make and what actions should it take regarding captioning of videotape material?

Require captioning of everything within, say, five years. And that really and truly means everything. Access must be universal.

World System Teletext captions cannot be recorded on ordinary VHS tapes. The only way to distribute a truly closed-captioned home video in PAL countries is to use the Line 22 system. Note that DVDs are quite capable of encoding Line 22 and World System Teletext captions, but players must be equipped with circuitry to reconstitute the MPEG-encoded signal into signals a Line 22 or WST encoder can understand. To date, no such circuitry has been developed, and only North American DVD releases contain closed captions (all of which, naturally, are in NTSC Line 21 format). (Some early NTSC DVD players lacked the circuitry to translate MPEG-compressed Line 21 signals back to understandable caption codes. DVD authoring in general is a black art, and captioning of DVDs has not been fully debugged.) A legislative requirement anywhere in the PAL domain would wake manufacturers up rather quickly. Their objections – cost, complexity, etc. – are unlikely to hold water.


whether a captioning credits trading system (similar to systems used in pollution regulation) would be appropriate and possible to provide incentive and reward for stations achieving better than minimum mandated results and to provide market based discipline for stations falling below quotas rather than relying exclusively on more traditional regulatory mechanisms

No. The whole premise is flawed and, indeed, corrupt. Such a system would ensure merely that all broadcasters would transmit the averaged minimum quantity of captioned programming. Broadcasters who would otherwise exceed the minimum will simply trade those credits to someone else. Or, if there is no minimum at all and broadcasters are simply allocated a certain number of credits to use as they please, a broadcaster might sell all its credits at a cheap rate to other broadcasters and caption nothing. Transferable credits are a systematized method of evading captioning.

Fundamental rights cannot be bartered away by third parties. Access to television is a right, not a privilege, and it is not a gaming chip to be scooped up by the croupiers of the Australian television system.


Submitted 1999.01.02 | Updated 2001.07.15
