Joe Clark: Media access

Response to BCI access consultation document

This response pertains to BCI’s consultation on the draft code on providing television access services. Comments on its technical report are also included.

Permanent location

This submission is permanently located at the address:

BCI’s original postings are located as follows:

The technical report is not available online for some reason.


I, Joe Clark, am the sole author of this intervention. I have been involved in the field of accessibility for more than 20 years, starting with a fateful encounter with The Captioned ABC News in the 1970s. In my career as a journalist and author, I have written over a dozen articles on captioning, audio description, and Web accessibility, the latter being the topic of my book Building Accessible Websites (New Riders, 2003). In a September 2001 profile, the Atlantic Monthly called me “the king of closed captions.”

I have done and continue to do paid consulting work with public- and private-sector clients on accessibility. I have written audio-description scripts for first-run cinema. I maintain a large Web presence on media access at

Summary: Not ready for prime time

The consultation document is rife with errors of fact and interpretation. BCI and its advisors do not understand even basic facts about captioning, which, among other errors, BCI consistently misnames as “subtitling.” Many accessibility provisions that BCI claims are impracticable are already being done and are manifestly possible. Moreover, the consultation is based on the premise that accessibility will be provided where possible, but with the maximum available exemptions and loopholes for broadcasters.

There is no apparent understanding of the reason why we provide accessibility in the first place: Equality. You’re either equal or you’re not, and even if all the consultation paper’s plans were put into effect, blind and deaf people in Ireland would still be unequal in television viewing. I cannot readily imagine what other minority group would be singled out in this manner, except of course gays and lesbians. If the minority we were talking about were, say, Irish nationals living in England, how would our discussion differ?

Inaccessibility of process

As if presaging the error and misguidedness of the consultation paper itself, BCI showed no understanding of Web accessibility in presenting the document. There is no reason whatsoever to have provided the consultation paper in Word format, save for the fact that BCI staff use Windows computers and apparently cannot imagine that a document might not be saved in a proprietary Microsoft format. As a simple text-only document (notably unstructured in the original), it should have been provided in an HTML file with valid, semantic markup.

The site uses invalid HTML and violates many of the Web Content Accessibility Guidelines. The consultation page itself validates as HTML 4.01 Strict, but it is in part a shell for a document in an inaccessible proprietary format. Providing audio readings of sections of the page is misguided and unhelpful; blind people have their own screen-reader settings that BCI’s attempted speech output should not override.

In short, BCI attempts to consult people with disabilities through methods that are inaccessible to them. All of those methods have perfectly viable accessible alternatives. That BCI failed to avail itself of those methods bodes ill for the current proceeding.

Expected outcome

Experience has shown that any accessibility policy proposal that is set out for public comment is never substantively changed. Ofcom was the most recent example: Despite an avalanche of evidence that many of its proposals made no sense, the differences between the proposed and final rules were trivial.

I expect that BCI shall follow in Ofcom’s footsteps and refuse to correct its many errors. It will, I anticipate, react with stubbornness to the mere fact that its errors were pointed out.


It is important to correct the myriad errors of fact and interpretation in the consultation document. I will refrain from correcting the equally myriad copy errors.

The most serious mistake of all

The committee’s utter misunderstanding of the true nature of captioning and subtitling may be the undoing of this entire process. Since this is the most serious error in the entire consultation paper, I expect that correcting it will meet with the fiercest possible resistance. This single error threatens to sink the whole ship.

Technical considerations

In the early discussions of the Access Consultative Forum, it became apparent that there was a lack of clarity and common understanding of the technical issues surrounding the provision of [captioning], audio description and signing. [...]


Subtitling is onscreen text which represents what is being said on the screen.

No, it is not. Subtitling is a written translation of dialogue and of certain onscreen type.


The terms “captioning” and “subtitling” are often used interchangeably,

Not true. Alone among English-speaking countries, the U.K. and Ireland use subtitling to mean both subtitling and captioning. It is “subtitling,” in other words, that gets pressed into service for “captioning”; given my two decades of experience in this field, BCI needs to believe me when I say that “captioning” and “subtitling” are not “used interchangeably,” save by the ignorant. (Occasionally, native French-speakers using English as a second language will make the mistake in question, given that “captioning” and “subtitling” share the same root word in French, sous-titrage. That case is not applicable here.)

In particular, the meaning in the U.S. is different to that in Europe and especially the U.K. In Europe, “captioning” usually refers to onscreen text which represents what is being said on the screen, such as when a foreign film is translated into English.

False two ways.

  1. A caption, in the technical usage of people in the U.K. broadcasting industry, is any onscreen text not meant as a caption or subtitle in the true sense. We would call them Chyrons or keys. They are used throughout the broadcasting industry to display names of people speaking in a news segment, place names and locations, phone numbers, and any other words that are not transcriptions of dialogue, save for the rare edge case of a source in a news program whose dialogue is displayed onscreen (e.g., when captured on hidden camera or when speaking an accent deemed too thick for the equally-thick viewing audience).
  2. The foregoing definition by BCI is the definition of subtitling. Subtitles are a translation; captions are a transcription.

In the U.S., however, the term refers to a form of text hidden from normal viewing unless accessed through a special decoder in the TV set or a “closed-caption reader.” U.S.-style closed captioning is similar to subtitling in that the hidden text is especially designed for access by deaf people to assist them in the interpretation and understanding of text and to link it to the dialogue.

Inaccurate. Every country that uses captioning save for the U.K. and Ireland – not just the U.S., but Canada, Australia, and New Zealand, too – refers to captioning as captioning and subtitling as subtitling. Only in the U.K. and Ireland is it impossible to clarify which of those you mean. We are not discussing a dialect difference like lift/elevator or boot/trunk; those examples use different words for the same thing. The U.K./Irish case uses the same word for different things.

Tell me, what is the meaning of the following hypothetical example? “I can’t meet you down at the pub on Friday after all. Chris has come in from Cork and is taking me out to the subtitled movies.”

The term “closed-caption reader” is unknown in countries with closed captioning, which, surprisingly enough, includes the U.K. and Ireland. The term used everywhere outside the U.K. and Ireland is “closed-caption decoder,” and they’re built into TV sets in the U.S. and Canada, just as they are in teletext countries.

“U.S.-style closed captioning” – whatever does that mean? – is not “similar to subtitling.” What you erroneously call subtitling is captioning. Subtitling is a translation and is applied to foreign-language works.

I can say with some confidence that “the hidden text is especially designed for access by deaf people to assist them in the interpretation and understanding of text and to link it to the dialogue” is the worst attempt to define closed captioning I have ever read.

Subtitling is more sophisticated; there are differences in formatting which are designed to assist the interpretation and understanding of the text and to link it more accurately with the onscreen action. For example, in subtitling the colour of the text changes to alert the viewer that a different person is speaking in the scene. There are also standards with regard to the font size, the speed of reading and number of lines of text carried on the screen at one time. U.S. closed captioning on the other hand may not include this formatting and is a more basic representation of what is being said onscreen, sometimes having only one colour, [being] verbatim and... only in upper case.

Deceptive when not actually false (and ungrammatical).

  1. Line 21 captioning has supported colour since Day 1; colour has seen limited practical use since 1993, when enough built-in decoders (with access to the colour guns of television sets) came into the field. PAL teletext captions have always been decoded within the television set at the consumer level, so it was easier for teletext captioners to use colour at an earlier stage. I can assure BCI that competent Line 21 captioners are far better at communicating who is speaking in a scene. My experience of teletext captions bears out the unwitting admission in the paragraph above that the only real task is showing that a different person is now speaking.
  2. Line 21 captioning has font-size standards. The difference is that we don’t get to choose between, for example, regular and double-high captions. As for reading speed, BCI ignores common practice in Line 21 captioning and is, I infer, referring to the habit of U.K. captioners of editing captions down to an alarming 150 words per minute, a practice directly contradicted by user testing of caption comprehension.
  3. Moreover, BCI suggests that “verbatim” captions are a deficiency. I guess it really is true that caption viewers in the U.K. and Ireland prefer the lie of edited captions to the truth of verbatim captions.
  4. Upper-case captions are, as I have documented extensively, a historical artifact of U.S. broadcast engineers’ ineptitude in designing legible screenfonts. All-upper-case captions were deemed less illegible than mixed-case captions with no descenders. Only two leading U.S. captioners even bother with all-caps captions anymore; everyone else has switched to mixed case some or all of the time, save for real-time captioning. We don’t like all-upper-case captions either, but since the consultation paper states elsewhere that buying captions already produced costs a mere €150 per hour, your choices are taking what you get; re-editing the captions into mixed case (an error-prone process); or captioning from scratch at BCI’s quoted price of €400 per hour.

To say the same thing another way:

  1. What you call subtitling is actually captioning.
  2. You’ve been watching captioning all along.
  3. Subtitled works are in a foreign language. You may have been watching those all along, too.
  4. Teletext captioning and Line 21 captioning are both captioning.
  5. Your captions aren’t necessarily better than ours.

Differentiation between broadcast services

The Access Rules will apply to RTÉ 1, Network 2, TV3 and TG4. In discussing the rules, the Forum agreed that there are differences [among] these four services and these differences should be reflected in the Access Rules. The practical effect of acknowledging the differences [among] the various broadcasters is through the use of different targets and timeframes for each broadcasting service, in the areas of [captioning], audio description and sign language.

These factors are not about placing more onerous targets on some broadcasters or about permitting some broadcasters to reduce their obligations in this regard. It is a recognition that certain factors can [affect] the ability of the broadcaster to reach the same targets as other broadcast services.

The only viable “factor” given is cost, which is a red herring in the first place, as broadcasters have enjoyed cost savings for years by avoiding the provision of full accessibility. The claimed difficulties of the Irish language (discussed later) are also not germane.

In deciding on the targets and timeframes, the Rules differentiate between broadcast services based on the following criteria: [...] How long has the broadcaster been in operation? How much experience does the broadcaster have of providing access services? Is there already a level of expertise within the broadcasting service in the provision of access services?

While accessibility is not as straightforward as some would claim, neither is it rocket science. Assigning different “targets and timeframes” to broadcasters based on their “experience... providing access services” is another way of rewarding broadcasters who did next to nothing beforehand. A broadcaster that cannot handle at least the technical necessities of captioning, audio description, and sign language is too technically inept to run a station in the first place. Purely at the broadcaster level, providing accessibility is not like medical school; you don’t need years of training to manage accessibility, and in any event there is no such training. A broadcaster can simply hire outside experts, among other methods.

Perhaps BCI could help us out by explaining what kind of “expertise” they’re talking about here. Reiteration of common myths (captioning is “subtitling”; U.S. captioning is some strange inferior beast; sign language can’t be provided from original Irish speech; entire classes of programming don’t require accessibility)? Or something else?


The costs of [captioning] relate to the technical facilities needed to insert [captioning], the cost of the equipment and personnel needed to generate [captions] and the cost of purchasing [captions] for acquired programming.

The equipment required to insert conventional [captions] into a program costs in the range of €17,000 to €27,000 per television channel.

Are you talking about encoders and other equipment at the head end, or is this an issue of the encoders, modems, and related equipment required for real-time captioning? There are many methods of inserting PAL teletext captions already in use in the U.K.; I would hate to think that BCI’s advisors have quoted the most expensive options.

RTÉ is the only Irish broadcaster that has in-house [captioning] capabilities at present. The cost of [caption]-generation equipment is in the region of €50,000 per workstation. RTE currently uses three such workstations.

That quote is wildly out of line even compared with the most expensive captioning software on earth, Softel Swift, which costs about $13,000 Canadian per workstation (about €8,500). Even allowing for additional costs for ingestion stations and server hardware, we’re talking about a difference of half an order of magnitude. Perhaps RTÉ needs to shop for better deals. We later learn that the majority of the cost is tied up in supremely expensive DigiBeta tape decks.
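To put the gap in perspective, here is a quick back-of-the-envelope check using only the figures cited above (the €50,000 quote and the roughly $13,000 Canadian/€8,500 software price; the exchange rate is simply the one implied by those two numbers):

```python
# Sanity check on the quoted per-workstation cost, using the figures
# cited in the text. All numbers are from the document itself; the
# CAD-to-EUR rate is the approximate 2004 rate the text implies.
import math

quoted_cost_eur = 50_000        # BCI's figure per captioning workstation
swift_cost_cad = 13_000         # approximate price of Softel Swift
cad_to_eur = 8_500 / 13_000     # implied rate (~0.65)

swift_cost_eur = swift_cost_cad * cad_to_eur
ratio = quoted_cost_eur / swift_cost_eur
orders_of_magnitude = math.log10(ratio)

# The quoted figure is nearly six times the cost of the most
# expensive software on the market.
print(round(swift_cost_eur), round(ratio, 1), round(orders_of_magnitude, 2))
```

Even on the most generous reading, software accounts for well under a fifth of the quoted figure.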

The costs per hour for [captioning] vary from broadcaster to broadcaster and also depend on the type of programming in question. Live programming, particularly news, is the most expensive at around €500 per hour, while [captions]... bought for acquired programming cost, on average, €150 per hour. Costs for home-produced pre-recorded programming are in the region of €400 per hour.

At those rates, U.K. stenocaptioners are making a very good living indeed. They are obscenely excessive by North American standards and suggest that buyers are not getting their money’s worth.

Sign Language

The cost of sign language is higher than [captioning] and is, on average, about €850 per hour.

This estimate needs to be given in greater detail. Sign-language interpreters do not cost €850/hour. Does this estimate include studio time, with director and crew? Are there billable hours in advance of taping that are needed for translation or research? On the face of it, the estimate above is too high.

Sign Language

Irish Sign Language is the indigenous language of the deaf community in Ireland. It is a visual–spatial language with its own syntax and complex grammatical structure.

All languages have syntax, which is the same as grammar. Stating that ISL has a “complex” grammar is a value judgement. (Which languages have “simple” grammars?)

Signing can be presented onscreen through the use of a signer as part of the program content, or by the use of a signer interpreting the dialogue as part of the program content (either a real person or avatar) in a box superimposed in the corner of the screen. An avatar, which is a virtual representation of a human image has been developed primarily for digital services because it consumes less space in the digital transmission system than a real human image.

And these so-called avatars, which are not in wide use now and will not be in our lifetimes, are a non-starter as far as actual accessibility goes. Viewers need a real human interpreter if they are actually to understand the interpretation.

The main difficulty with regard to sign language provision is the technical inability to provide closed signing.... In other jurisdictions, there is a tendency for viewers to complain about the intrusive nature of signing.

Yes, it is the case that broadcasters, who don’t want to provide accessibility anyway, and their friends and future employees at broadcasting regulators all like to say this, but I have read no published evidence whatsoever that it is actually true.

How many complaints are enough to use as a justification not to provide open sign language? Five? Five thousand? Versus how many ISL users who prefer that method of accessibility?

In any event, the claim of “technical inability to provide closed signing” is false:

For this reason, broadcasters in the U.K. tend to broadcast signed programming in off-peak hours, e.g. overnight for recording on VCRs. Closed signing can only be achieved using a digital distribution system using a separate image. The image can be in the shape of an avatar or a lower quality picture of a human signer. The latter would require more digital bits or transmission space than an avatar. Both would require the development in Ireland of a national digital terrestrial television or distribution system.

The problem here is the assumption that the only way to provide sign language optionally is to pretend it is vertical-blanking-interval data, i.e., to encode it in some way. In fact, on any digital distribution system (whether terrestrial, satellite, or cable), it is perfectly possible to set up virtual channels that duplicate a main channel with a small difference. Channel 100 could be RTÉ and channel 800 could be RTÉ with sign-language interpretation. Given the tiny quantities of signed programming we’re talking about here, substituting a few hours of programming a week is not a technically complex task at the distributor head end and does not require two 24-hour feeds from the host broadcaster.

The Canadian approach could be adapted: Bell ExpressVu runs virtual channels with open audio description. In that case it is a simple matter of programming the system to provide main audio on the main channels and the Second Audio Program, where the descriptions reside, on the virtual channels. The same method would work for signing. Nothing says you have to try to encode an interpreter.
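The substitution logic at the head end is trivial. As a hypothetical sketch (the channel line-up and programme names are invented for illustration), the virtual channel simply mirrors the main feed except in the few hours for which a signed version exists:

```python
# Minimal sketch of virtual-channel substitution for signed programming.
# Programme names and hours are hypothetical; the point is that the
# virtual channel passes the main feed through unchanged except where
# a signed version has been produced -- no encoding of the interpreter
# into the signal is needed.

main_schedule = {
    "19:00": "News",
    "20:00": "Drama",
    "21:00": "Documentary",
}

# Hours for which a signed version exists this week (assumed).
signed_versions = {"19:00": "News (ISL-interpreted)"}

def virtual_channel(hour: str) -> str:
    """Return the programme airing on the signed virtual channel."""
    # Substitute the signed version where one exists; otherwise
    # relay the main channel.
    return signed_versions.get(hour, main_schedule[hour])

print(virtual_channel("19:00"))  # the signed version is substituted
print(virtual_channel("20:00"))  # the main feed is relayed
```

Given the tiny quantities of signed programming at issue, this is the entire extent of the "technical" problem.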

In relation to TG4, there are some other considerations, namely the availability of sign-language [interpreters] with fluent Irish.

If necessary, relay interpreters or any method familiar to the subtitling field (e.g., master lists) can be used.

Audio Description

Digital services, however, are better able to carry the second sound track but a fully closed audio description as provided by a digital distribution system is not yet possible in Ireland at present.

A fully closed audio description is similar to [captioning] in that the viewer who does not wish to have the additional sound track describing what is happening on the screen has the option to turn the audio description off. Open audio description as used in some European countries means that the viewer does not have this choice.

Use virtual channels on a digital system. It works fine for us.

Use of captioning

The first [issue] relates to whether the Access Rules should permit broadcasters to use captioning to attain their subtitling targets.

What you call subtitling is captioning.

As discussed earlier, there are differences in quality between closed captioning, as used in the U.S.[,] and foreign-language captioning, when compared to subtitling.

This sentence is incomprehensible.

For example, in subtitling, the colour of the text changes each time the speaker changes, also subtitling will include a description of other non-speech sounds such as “phone ringing.” Captioning does not include this formatting or the off screen sounds.

Again, there’s a false assumption that teletext is the true, essential, or basic form of captioning and everything else is somehow deficient.

Very often closed captioning rather than subtitling is available for purchase with acquired programming, particularly American-produced programming. Broadcasters have highlighted the fact that for many programs subtitles are not available for purchase but U.S. closed captions or open English language captions are available. The Draft Rules currently stipulate that subtitling be used as standard and the guidelines which will be produced by the BCI will lay down a number of stipulations that are common to subtitling. This will mean that broadcasters will not be able to use captioning, even though captions may be available for purchase. Instead they will have to generate subtitles in-house for these programs. The issue is whether the Access Rules should permit captioning as well as subtitling or whether the rules should permit broadcasters to use a percentage of captioning each year to attain their targets.

Yet again BCI is fabulously aswim in contradictions based on its inability to simply get the terminology right.


The BCI will produce a set of guidelines and standards for [captioning].

The BCI is not competent for that task. It cannot even get the terminology right. At no stage has it been made clear that BCI staff are actually conversant with accessibility.

The existing U.K. captioning guidelines are demonstrably inadequate, as are all in-house style guides not created through research and evidence-gathering. BCI guidelines could only be worse.

The use of an 18-hour broadcast day as the timeframe for daily targets

In the Draft Rules it is proposed that the period of time over which the targets for [captioning] should be set and measured should be an 18-hour broadcast day. This is usually taken as the period of time from 7:00 A.M. to 1:00 A.M. The 18-hour day stipulation has been included based on the experience of deaf groups in other jurisdictions who have argued that if based on a 24-hour day, broadcasters have tended to [caption] programs in off peak hours, including overnight.

False. On this side of the ocean, broadcasters go out of their way not to make overnight programming accessible. There is no accountant on the planet who would endorse spending money to make infomercials and repeats of Starsky & Hutch accessible while leaving out high-viewership programming during the day, early evening, and prime time.

In any event, exempting six clock hours from accessibility tells people with disabilities that they are entitled to equality 3/4 of the time. Deaf and blind people watch TV overnight and have the same right to enjoy it as they do television at any other time of day.


A ten-year timeframe is being proposed for each broadcaster and the interim targets to be reached each year are specified. This is in keeping with the general practice in other jurisdictions and the principle of incremental progression. Within a ten-year timeframe, it is important to note that the challenge of meeting the yearly increment is not the same over the ten-year period. As the level of [captioning] increases each year, the cost, effort and expertise needed to reach the higher levels is greater than those required in the early stages of development. In the initial years the broadcaster may decide to prioritize pre-recorded programming or the purchase of [captions]. In the latter years, the broadcaster must develop the skills and capacity to [caption] live programming. The difference between the early years of the ten-year timeframe acknowledges not just the financial cost but also the training and human resources that are required and the skills that have to be built up over time.

Compliance gets harder as the level of compliance increases? Yes, it does, but it is a question of amount rather than kind. BCI has no cause to believe that only prerecorded programs will be captioned in early years, nor is there any reason whatsoever to think that a broadcaster requires “skills and capacity to [caption] live programming”; as with offline captioning, it can simply be farmed out.

You’re simply doing more captioning over time, not necessarily different kinds of captioning. However, the above-cited paragraph does seem to give implicit license to broadcasters to ignore live programming – including news and current affairs, perennially the most-sought-after captioned programming – until the end of the term.

The technical and human resource cost

In order to facilitate the development of the two RTÉ services in tandem, the targets for both are spread over a ten-year timeframe. It could be argued that shortening the timeframe for RTÉ 1 would have implications for the ability of the station to develop the Network 2 [captioning] service to the desired levels.

That might be true if one assumes, as BCI and Irish broadcasters clearly do, that accessibility is an added-on feature – that the essential form of a television program has no captions, descriptions, or sign language. Could BCI and broadcasters please back up that assumption with facts?

The stage of development of the broadcast provider

It is a relatively young broadcaster at an early stage of development, with no in-house capacity to generate subtitling.

What does BCI mean? Does the broadcaster:

Sign Language

The use of a 24-hour day as the basis for targets

In relation to sign language there are technical differences which mean it may be more appropriate to permit broadcasters to use a 24-hour broadcast day as the basis for targets. There are technical limitations that do not permit the use of closed signing or audio description which allows the viewer discretion as to whether or not to have the access service on their screen. Open signing is regarded as intrusive by many viewers who do not require the service and for this reason it is sometimes broadcast during non-mainstream or overnight hours.

I have already demonstrated that the claim that sign-language programming can only be broadcast out of the way of sensitive hearing people is unproven and technically insubstantiated. And isn’t it interesting that BCI is perfectly willing to tell deaf and blind viewers that they have no right to watch television with accessibility from 0100 to 0700 hours – yet, to avoid disturbing hearing people, we’re all willing to make an exception for sign language?

Is it not a Freudian slip for BCI to concede that off-peak telecast of signed programming relegates deaf people to the “non-mainstream”?

Audio description

The use of a 24-hour day as the basis for targets

As with sign language there are technical limitations which mean that closed audio description is not possible. This means that if audio description is being broadcast all viewers hear the audio description and do not have the ability to turn this service off. Viewers in other jurisdictions have tended to complain that this is overly obtrusive on their own viewing. Therefore, broadcasters have tended to broadcast open audio description during non-mainstream hours or overnight. The Rules propose using a 24-hour day as the basis for targets to accommodate overnight broadcasting of audio description.

As above. I see that BCI’s real intent is to ensure that deaf people have no access to captioning overnight, since we seem to be willing to gerrymander a range of exceptions to avoid what isn’t even an issue in the first place – disturbing sensitive nondisabled people.

Also, just how are blind people supposed to operate the purely-visual menu systems on their VCRs to record these overnight broadcasts so they could later be watched at a civilized hour?

5.2 Definitions


Subtitling is onscreen text that represents what is being said on the screen.

No, it is not. The rest of this definition is equally incorrect, as explained previously.


Captioning refers to onscreen text that represents what is being said on the screen.

Correct. Could BCI explain why it has unintentionally given a correct definition of captioning that is at odds with the extensive malapropisms of its previous discussion of the topic?

Comments on technical report

The in-house authoring of conventional [captions] (Page 888) for pre-recorded programming requires a capital investment of circa €50,000 per workstation.

As mentioned in my response to the main BCI consultation, this suspiciously high figure needs to be explained in greater detail. It would be hard to run up a bill that high even with the most expensive captioning software available, Softel Swift.

TG4 outsource all captioning and subtitling requirements for pre-recorded material.

What does this sentence mean? That TG4 outsources captioning (same-language transcription for deaf viewers) and subtitling (translations for hearing viewers)? Or something else?

As ever, the U.K. and Irish stubbornness in using the word subtitling to mean both “subtitling” and “captioning” is unique anywhere in the English-speaking world and causes no end of trouble. For deaf and hard-of-hearing viewers, we are only talking about captioning.


Closed signing involving the use of a real person, as opposed to a computer-animated avatar, can only be achieved using digital distribution systems.

True, but not in the manner the Subcommittee seems to have in mind. It’s simply easier to run a virtual channel with the signed program. The fixation with encoding a huge video signal of an interpreter inside the vertical blanking interval has to come to an end.

Open signing may be provided whereby a signer is shown in part of the screen and the full video image is shown behind but at a reduced size.

Just run the interpreter or actor in a cameo. You don’t have to scrunch the main picture.

In the U.K., some open signing on analogue TV consists of previously broadcast material that is rerun in the overnight hours with signing and may be recorded for watching at a more appropriate time.

I am grateful for the admission that ghettoizing sign-language programming in the overnight hours is not “appropriate.” It isn’t.

Further information is required in relation to studio facilities and costs required. In general, though, deaf signers read the script from an autocue, but because the process is lengthy the costs are higher than [captioning]. That is to say about €850 per hour; there is also a problem in recruiting suitable interpreters.

If “[f]urther information is required in relation to studio facilities and costs required,” why do both this document and the consultation paper nonetheless forge right ahead and quote a price of €850/hour?

Audio Description

[...] Closed audio description would require digital distribution. It would be possible to upgrade the number of audio circuits on the digital satellite system from two to four.

You can just run virtual channels with open audio description. Apart from being technically simpler, it’s actually accessible to blind viewers since they do not have to fiddle with onscreen menu choices they cannot see in the first place.

Speech-Based EPGs

It was noted that a speech-based EPG service may be provided on the BSkyB platform. This could result from the ITC’s VISTA prototype electronic program guide that viewers can access by their own voice. The system performs a search of standard EPG data and answers the viewers’ queries in synthetic speech.

That’s only one approach to an accessible EPG.

Linear [captioning] prep workstation

Now we see why your quoted costs are so high: You’re using the crème-de-la-crème of tape decks. Why not use a nonlinear system, or Beta SP, or VHS or Super VHS, all of which are viable options? (Until this year, one U.S. captioner still used 3/4″ U-matic tape.)

The mention of software “protected by dongle” is a clear reference to Softel Swift. I assume the Subcommittee is aware that all other captioning software is cheaper.

Open signing may be provided whereby a signer is shown in part of the screen and the full video image is shown behind but at a reduced size.

No, that is not the only method, as mentioned before.


Posted 2004.11.09