On behalf of MTS Allstream Inc.
Broadcasting Notice of Public Hearing CRTC 2008-11: Notice of consultation and hearing – Canadian broadcasting in new media
Submitted December 5, 2008
The Canadian Radio-television and Telecommunications Commission (the “Commission”) has asked a series of questions relating to the availability of “broadcasting content” over the Internet, business models for content in new media, and whether measures are required to support the creation, promotion and visibility of Canadian broadcasting content in new media.
These are not easy questions, because there is much about the Internet that we still don’t know. It is the ultimate work-in-progress. New, Internet-supported media are appearing alongside the old, but are in such flux that we don’t yet know much about their long-term qualities. Moreover, seemingly contradictory trends commingle at the present time because “old” media persists even as New Media rises.
Quantifying activity on the Internet is also a difficult task. Network technology could have been designed so that it would be far easier to gather accurate data about what content is being conveyed. The willful blindness of the design of the Internet is a legacy of varied historical circumstances. These range from the origins of the Internet as a military device to later phases when it was enhanced by designers with entirely different motives.
The Internet as it exists today should therefore not be taken as a given, or as a point of arrival.
One thing we have learned, however, is that there is no workable stand-alone business model for traditional broadcast content in a purely online setting.
It is possible that the Internet will give rise to a great many promising early creative careers, but few, if any, lifetime content careers, unless some form of the mainstream media can continue as a parallel option that generates revenues for content producers.
The good news is that the potential of the Internet as a commercial medium for content producers has not been realized yet, but might very well be realized in the next decade. Future improvements to the technology are likely to create a much more favorable environment for content producers, including Canadian producers.
This report divides content distribution into two classes to reveal potential business plans. One class of distribution uses the “Open Internet.” That means that a user accesses content through a generic browser, like Firefox or Internet Explorer, or some other generic tool.
The Internet presents regulators with a uniquely tricky environment. The problems facing content producers in the online environment are, in some cases, almost inversions of the problems faced in longer-established media categories. Policies and assumptions that addressed the well-understood problems of traditional media can therefore have almost inverted consequences in the context of new media.
One example of such an assumption is the idea that professionally-produced content available online retains commercial value. This is essentially an illusion. A product is only a product if there is a way to make money from it. Online content on the “Open Internet” has so far not made a profit, so it should not be thought of as “commercial.”
The other class of distribution is a custom hardware delivery method that receives content from the Internet but provides a gated hardware destination (“commercializing hardware”). The canonical example is the iPod.
Within the Open Internet, there are only three reasons commercial broadcast-like content might currently appear online:
It might be there to promote a different, commercial version of itself on commercializing hardware.
It might be there illegally, because it was pirated.
It might be part of an experiment in commerce, but thus far, despite extraordinary hype, such experiments have yielded disappointing results.
Virtually all professional-seeming content on the Internet serves to promote genuinely commercial content that is not delivered over the Open Internet, but over gated hardware devices. A content business must now rely on “Commercializing Hardware” that is keyed to a particular vendor to earn revenues.
It would be a mistake to assume that new players in the online space are making money from content in the way that a TV broadcaster makes money by selling ads that appear during a show, or a theater operator makes money by selling tickets. There are widespread illusions that people are making money in ways that are related to these traditional business plans. In fact, there is a constant parade of new chimerical businesses that appear to be succeeding in this way. But an honest appraisal of available evidence leads to the opposite conclusion: none of these models has succeeded in making money purely off its online incarnation.
A close look at advertising indicates that Google and other companies that offer online advertising are not simply siphoning money that would otherwise go into broadcast TV. AdWords (search-driven ads) accounts for the lion’s share of Google’s business. Google is not enticing people to look at ads through content, but through facilitating everyday activities via text snippets that provide a link to an advertiser’s website. Content similar to the traditional category of professional broadcast content does not play a central role in the way Google or its competitors make money.
There has been a constant, cluttered series of attempts in video-over-the-net ventures since broadband connections started to appear. The desire to find a path to success is so great that we can call experimental online video ventures a persistent phenomenon, and the raison d’être for a significant portion of the video content that flows over the Internet. However, the activities of online video delivery ventures can best be labeled as “experimental” business practices, rather than demonstrations of plausibly profitable future business plans.
The failure of online content business models, sans gated delivery hardware, relying on either consumer fees or advertising for revenues, should by now be treated as a persistent negative result.
It is not impossible to think about the future of the Internet. The problem is not complete inscrutability, but a high level of volatility. It is therefore easier to imagine a variety of rough outlines of futures for the Internet in ten or twenty years than it is to foresee the events that might occur next year.
This report envisions three potential future scenarios:
Persistence of the current situation, in which a large amount of content will always be available for free over the Open Internet but will serve as a promotional device for the paid versions accessed through custom gadgets, the “commercializing hardware”;
A scenario in which someone finds the “missing trick”, i.e. a way to present a content delivery service on the Open Internet that consumers, or the advertisers trying to influence consumers, are happy to pay for; and
A third scenario in which an improved infrastructure might change the fundamental value proposition and create opportunities for new revenues for content developers.
The Internet seems to provide unlimited opportunities at virtually no cost for the luckiest content entrepreneurs, as if by magic, like an infinite cornucopia. But this mirage of an almost weightless, supernatural spring of wealth has obscured the fact that the infrastructure of the Internet is physical, and has required substantial investment over a significant period of time.
There is a danger to this kind of magical thinking. If we constrain ourselves to think only about Facebook-like tales of sudden prominence which can arise out of the existing Internet infrastructure, we blind ourselves to greater possibilities that might exist if we consider improving that infrastructure.
Under the third scenario, significant upgrading of Internet infrastructure, possibly at the level of network architecture as well as capacity, will be required to support applications that may include 3-D content or tele-immersion. An educated guess is that the infrastructure for immersive content will require about one order of magnitude of improvement.
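To make the scale of that guess concrete, consider a rough back-of-envelope sketch. Every figure below is an illustrative assumption, not a measurement: a single compressed HD stream on the order of 5 Mbit/s, and an immersive scene built from several synchronized camera views carrying extra depth data.

```python
# Back-of-envelope estimate of tele-immersion bandwidth.
# All figures are illustrative assumptions; real requirements would
# depend on codecs, resolution, and scene complexity.

HD_STREAM_MBPS = 5      # assumed: one compressed HD video stream
CAMERA_VIEWS = 8        # assumed: simultaneous views of a 3-D scene
DEPTH_OVERHEAD = 1.5    # assumed: extra per-pixel depth information

immersive_mbps = HD_STREAM_MBPS * CAMERA_VIEWS * DEPTH_OVERHEAD
print(f"Tele-immersion estimate: {immersive_mbps:.0f} Mbit/s")
print(f"Versus one HD stream: {immersive_mbps / HD_STREAM_MBPS:.0f}x")
# Roughly 60 Mbit/s, i.e. on the order of ten times a single HD
# stream, which is the "one order of magnitude" referred to above.
```

Under those assumptions the arithmetic lands at roughly ten times the bandwidth of a single conventional HD stream, which is what “one order of magnitude” means in practice.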
The technical challenge to bring immersive content into the home is great, but the commercial opportunity is also great. Not only would viewers be treated to an entirely new kind of entertainment experience, but that experience would inherently require complex hardware, thus neutralizing the problem of value loss on the Open Internet from the start.
It will be to content producers’ benefit, and to the benefit of all other players, to have infrastructure upgraded as quickly as possible – certainly within the next ten years, if applications such as tele-immersion are to be possible.
In the case of potential regulatory interventions in the evolution of the Internet, a scenario-driven investigation yields useful results.
This paper examines how a particular policy proposal, namely the suggestion that a levy be imposed on Internet service providers (ISPs) as a source of funds to support content producers and thereby promote Canadian content (the “Subsidy Proposal”), is likely to play out in each of the future scenarios that have been envisioned.
This Subsidy Proposal would likely generate inverted, destructive, or chaotic effects in all three scenarios. Therefore it should be considered a highly risky idea.
The infrastructure of the Internet as it now exists, including that portion which is maintained by ISPs, is barely adequate for the transmission of video content. This is why some ISPs have found that the heavy flow of video content by a portion of their customer base is choking off their ability to serve all their customers well all the time. There are only two ways out of such a dilemma. Either disfavor certain flows of information on the Internet, which in practice amounts mostly to video content, or invest heavily to upgrade the infrastructure.
Under the Open Culture scenario, it is in the interests of content producers to promote their work online for use on Commercializing Hardware. If the flow of the free version of the content is reduced, then the paid version becomes harder to promote, and therefore becomes less valuable.
A tax or fee regime would create incentives for ISPs not to invest in new infrastructure. Therefore, the only option available to them would be to adopt policies that disfavor certain flows of bits. If ISPs are disfavoring the flow of video, it will become harder to promote video content on those channels that remain, or that come into existence, in which video is commercially valuable. The result would be a tax on promotion, not a tax on sales or consumption. It would reduce the commercial viability of content that is subject to the promotional tax.
Under the scenario in which immersion, tele-immersion or other new media technologies transform the worlds of entertainment, arts, and general communications, the Subsidy Proposal would burden the build-out of required new infrastructure. Draining capital from infrastructure providers will delay the day when content producers can adopt new business models.
The Commission should favor policies that will allow the Internet to grow as quickly as possible into a new state that inherently creates a more favorable environment for content providers. To impose a differential drag on that process in Canada can only hurt Canadian culture in the long term, even if it might seem to bolster some individual content providers in the short term.
See complete Curriculum Vitæ attached as Schedule 1.
Technical background: Former Chief Scientist of Advanced Network and Services, the parent organization of the Engineering Office of Internet2, the principal academic organization devoted to Internet technologies
Business background: Former Chief Scientist of the company that became the Machine Vision portion of Google; Present Scholar-at-Large for Microsoft (not an employee of Microsoft or speaking for that company in any way whatsoever.)
Cultural background: One of the original propagandists of the “Open Culture” ideal, turned apostate.
Pop-culture role: Principal personality behind idea and technology of “virtual reality”
An artist too: Former recording artist for Philips Classics, contributor to movies (soundtracks and conceptual design)
This is a moment in the evolution of media technology that is filled with contradictions. The Internet has brought sweeping changes to the cultural habits of almost everyone, especially younger people. But at the same time, traditional media forms continue to exist.
The “traditional” media are very much alive. Teenagers still flock to movies like Twilight, for instance.
One immutable characteristic of traditional media is limited “shelf space.” No town has the space for an infinite number of physical movie theaters, so if Twilight or some other hit movie is taking up a certain number of screens, it will be inevitable that some other movie cannot be shown. Theatrical movie distribution is a zero-sum game.
By coincidence, the first generations of electronic distribution of video and movie content to homes had similar zero-sum limitations. There was a limited amount of spectrum for over-the-air television channels, for instance. Later, there was an expanded but still limited number of channels that could be reasonably supported on cable TV services.
Since multiple generations of media distribution technology had all presented a similar zero-sum game, policy began to evolve as if that game would be eternal. Various ideas were proposed to ameliorate what seemed to be an inborn limitation to all possible media technologies.
One class of ideas concerned promoting certain cultural goals or classes of content producers. In the case of Canada, these included reserving some shelf space for Canadian content, and creating some subsidies for creators of that content.
The rationale was simple: Market forces might otherwise take all the existing shelf space for offerings generated by the huge American media market. Canadian content would be so disadvantaged that it would wither.
This author is sympathetic to the goal of promoting cultural content. Content producers face a variety of daunting threats as the Internet becomes more influential. The threats are different than the ones presented by traditional media distribution technologies, however. Shelf space is not a problem on the Internet.
The Internet, on one level, appears to be supporting a global culture, or a Global Village, as a famous Canadian put it. The new question is whether Canadian identity might become drowned out in the emerging globalization of culture. I say “on one level” because the situation is considerably more complex than this characterization suggests. There are some levels at which the Internet supports locality and the persistence of specific cultures.
The Internet as it exists today is generous in granting fame, but stingy in granting fortune. I’ll draw an example from recorded music because that market segment has already experienced several aspects of the digital revolution that the movie and television businesses are only just beginning to encounter.
The production of professional-level recorded music has become profoundly inexpensive because of the virtualization of music editing (meaning it is now implemented in software) and the availability of cheap computers, software, and studio equipment such as microphones. That means that the production side of music need not require much investment, provided that the labor can be provided on a cheap, free, or speculative basis. Similar tools are coming to traditional TV and movie making, but have not quite arrived as yet. For instance, there is still not a cheap way to light a set to professional standards.
(Online video production costs are typically already cheaper than traditional production simply because video on a computer screen will rarely be assessed with the same critical eye that a viewer might bring to material seen on a large, high-quality TV. The cost-lowering foreseen here will be far more profound than that, however, and could eventually affect full-resolution TV production, not just computer-based video. For instance, once lighting can be changed after the fact, as a digital postproduction effect, the cost of lighting will fall to almost zero.)
The lowered cost of production combined with the minimal cost of online distribution has caused commercial music to enter into a radically transformed phase. Any teenager with a Webcam might upload a video of a guitar solo to YouTube and suddenly be admired anonymously by hundreds of thousands of fans for a brief time.
None of those viewers are likely to know or care where the guitar player is from. There is no opportunity given for context setting. But this does not mean that Canadian musical identity will be lost in a global mush. Quite to the contrary, Canadian music is thriving on the Internet.
For example, a Canadian musician such as City and Colour (a.k.a. Dallas Green) can gain a worldwide following with little capital investment and a lot of hard work. Under the previous system, in which people were accustomed to paying a significant amount for recorded music, a figure like Green might have required substantial investment from a music label, and might also have become quite wealthy. Neither is the case at this time.
A Canadian artist today can find a less expensive path to fame than was available before, but the paths to wealth must be improvised from scratch. It is not clear that any such paths will ever exist. We might not see figures like Céline Dion, Leonard Cohen, or Joni Mitchell in the future.
But it is certainly true that Canadian voices are being heard loudly in the global online context. An American reviewer of Green’s recordings began a worshipful 2008 review with the comment that “It’s important to note how much good music is coming from our northern neighbors these days.”
In the case of music, online content can at least function as advertising for other aspects of an artist’s career, like live performance ticket sales, merchandising, or promotion to help raise the profile of an artist for endorsement deals and the like. “Labels” that invest in individual musicians have come to demand “360” contracts, which assume no revenues to speak of from recorded music sales, but demand cuts from live performances and all other activities.
In the case of movie and TV production, there isn’t an analog to a live performance, so the trend is certainly worrisome. The only way that TV and movie content creators can stay in business that seems to be working is to team up with providers of specialized viewing hardware. This strategy will be explained and explored below.
A question presents itself: Can a nation’s creative community remain vibrant if it is no longer earning much money? As a musician like Dallas Green lives his life, has children, and so on, will he be able to continue to tour and interact with fans to the degree needed to self-promote on the Internet? It is possible that the Internet will give rise to a great many promising early creative careers, but few, if any, lifetime careers, unless some form of the old mainstream media can continue as a parallel option that generates revenues for content producers.
For the purposes of this response, let us suppose that it might be desirable to adopt policies to bolster Canadian content producers in relation to the online world. One question, then, is whether the Subsidy Proposal suggested by some commentators [Peter Grant, “National Content on New Media”, presentation to CRTC Invitational Session on New Media, 1 October 2007; Eli Noam, “TV or Not TV: Three Screens, One Regulation?” at 19] would be effective.
One emergent characteristic of the Internet that we can be confident of is effectively limitless shelf space. The Internet already supports an uncountable number of musical and video productions of all types, including oceans of amateur content.
This isn’t necessarily a pleasant development, as it destroys some business models that are already understood.
Producers of movies have probably longed from time to time for more theaters to be built, so that they would have more revenue-generating opportunities to distribute movies. Likewise, television producers have sometimes longed for more TV channels. When the advent of cable enlarged the number of TV channels, media entrepreneurs quickly found profitable ways to put them to use.
But if the number of theaters or channels becomes effectively infinite, content producers face a new kind of problem. If there is no scarcity of a certain thing, then there is no value for that thing in a market system. Customers can always find a cheaper theater if there are an infinite number of accessible theaters, so the price of a ticket rapidly drops to zero.
Subsidies must be well-targeted to have any positive effect. If there is a practical means for Canadian content producers to demonstrate their potential to become successful, even if that means is obscure, indirect, or hard-to-interpret, then it can be sensible to enhance their prospects with subsidies.
Let’s use music as an example again. An extremely amateurish video of a music performance posted on YouTube or MySpace will occasionally gain a huge instant audience. The reasons are somewhat random. The crowd of eyeballs has to converge on something once in a while. Attention in the online universe is like weather, and sometimes there is a random storm.
It would obviously be senseless to provide subsidies to some of the amateurs who have stumbled into a fleeting storm of attention. But how should we draw the line between them and the more traditional recipients of subsidies?
In the pre-Internet world, there was enough of a functioning market that there was at least a little bit of objective data to help locate a promising professional musician. The collapse of the music business has been so extreme, however, that it is becoming harder to find such data.
In 2008 I conducted an experiment. I used one of the most influential Open Culture blogs, the one operated by Kevin Kelly, to ask for leads on musicians who had promoted themselves without label support in the online environment and were now earning enough from their music to raise a child. To my shock, the Open Culture community was able to identify only a handful of candidates.
Under these circumstances, there is a danger that the process by which a young musician comes to be recognized as deserving of a subsidy can be dislodged from the real world entirely. The choice of subsidy would necessarily come to reflect only the politics of the subsidy-giving institution, since no other data would be available.
This is one of the hidden perils of the loss of the music-commerce business. It can indirectly give rise to a new form of overbearing patronage, in which musicians or other content producers no longer attempt to reach the audience, but instead prioritize manipulating the source of subsidies.
If that were to happen, it would amount to a lessening of the scope of Canadian culture, and the lowering of the ideals of Canadian expression.
The Internet presents regulators with a uniquely tricky environment. The problems facing content producers in the online environment are, in some cases, almost inversions of the problems faced in longer-established media categories. Policies that addressed the well-understood problems of traditional media can therefore have almost inverted consequences in the context of new media.
Here is an example from another topic in Internet regulation: Parents might be pleased that a physical theater demands proof of age before selling a ticket to view certain types of content unsuitable for minors. Parents might be concerned, however, if a Web site demands the same kind of proof without a further layer of legal protections, because it is hard to feel assured that the data gathered to screen out minors will not find its way to aggressive online marketers trying to target minors.
Unfortunately, some policy discussions still make use of idealistic, but unrealistic, assumptions about the nature of online commerce. It is unrealistic to assume that an Internet business is actually in the business that it claims to be in. In the above example, a provider of videos might actually be trying to gather marketing databases of children that would otherwise be hard to create.
A more general example of such an assumption is the idea that professionally-produced content available online retains commercial value. This is essentially an illusion. A product is only a product if there is a way to make money from it. Online content on what I’ll call the “Open Internet” (to be defined below) has so far not made a profit, so it should not be thought of as “commercial.”
In fact, traditional-seeming online content thus far almost always functions only as a “loss leader” promotional device that might, in the best case, help the content provider to make money in other ways, which will be discussed below.
Pulling money out of the Internet delivery chain, as from the ISP, in order to support Canadian content producers would effectively be a tax on the promotion of those content producers, and therefore counterproductive.
The Subsidy Proposal is an example of a well-intentioned idea that is likely to backfire, causing damage to the people it was trying to help.
The bad news is that the present level of Internet technology does not support viable, stand-alone business models for commercial content producers unless they embrace “commercializing hardware” strategies, which will be explored below.
Fortunately, if we think in the terms of the next decade instead of the next year, there are likely to be new incarnations of technology which support new forms of online commerce that have the potential to improve the prospects of Canadian content producers, even over the Open Internet.
One extraordinary example is tele-immersive live performances, which will be explained below.
The Internet as it exists today should not be taken as a given, or as a point of arrival. The potential of the Internet as a commercial medium for content producers has not been realized yet, in my view, but might very well be realized in the next decade. Future improvements to the technology are likely to create a much more favorable environment for content producers, including Canadians.
Policy discussions should prioritize bringing these fundamental improvements about.
The true nature of the Internet is one of the most common topics of online discourse. It is remarkable that the Internet has grown enough to contain the massive amount of commentary about its own nature.
One of the sub-genres of this self-obsession concerns the fate of “traditional content” such as TV and movie production. There are a variety of commonly expressed opinions.
It is sometimes claimed that the Internet is breaking down a traditional cartel that excluded ordinary people from making and distributing their own content. From this point of view, we are entering a new era in which there will be a vast expansion of media production. Amateurs and professionals will become less distinguishable, and movie making will become an everyday skill, like writing a blog. This means that content producers might arise from a much larger population of candidates. A golden age of media might result, in which new kinds of artists and personalities have a chance at stardom.
At other times, the Internet is interpreted as a tool to destroy copyright for whoever engages in media production, whether the producer is professional or amateur. From this point of view, we are entering an era in which the population is converging on the production of one giant collective movie, so to speak. Barriers between productions, such as between different movies, fall away to the practices of mash-ups, clip browsing in wikis, and so on. The identities of individual content producers and their productions will become subservient to the machinations of what is sometimes called the “Hive Mind.”
In this author’s point of view, the available evidence supports neither of these common interpretations thus far. Instead, a third scenario is unfolding at the present time, in which un-mashed and remunerative media forms like movies and television shows survive only outside of the “the Open Internet,” as delivered by a plethora of custom hardware strategies like the iPod. The Internet itself becomes a promotional vehicle for these strategies.
In the longer term, this trend is likely to give way to other, more interesting outcomes, which will be explored in Section Three.
By some estimates, about half the bits coursing through the Internet originated as television, movie, or other traditional, commercial content. BitTorrent, a company that maintains only one of the many protocols for delivering such content, has at times claimed that its users alone are taking up more than half of the bandwidth of the Internet. (BitTorrent is used for a variety of content, but a primary motivation to use BitTorrent is that it is suitable for distributing large files, such as television shows and feature-length movies.)
Of course, it is difficult to come up with a precise accounting. The Internet is only roughly charted, even though it is a human invention.
Network technology could have been designed so that it would be far easier to gather accurate data about what content is being conveyed. The willful blindness of the design of the Internet is a legacy of varied historical circumstances. These range from the origins of the Internet as a military device to later phases when it was enhanced by designers with entirely different motives.
The Internet was originally conceived during the Cold War to be capable of surviving a nuclear attack. Parts of it can be destroyed without destroying the whole, but that also means that parts can be known without knowing the whole. The core idea is called “packet switching.”
A packet is a tiny portion of a file that is passed between nodes on the Internet in the way a baton is passed between runners in a relay race. The packet has a destination address. If a particular node fails to acknowledge receipt of a packet, the node trying to pass the packet to it can try again elsewhere. The route is not specified, only the destination. This is how the Internet can hypothetically survive an attack. The nodes keep trying to find neighbors until each packet is eventually routed to its destination.
In practice, the Internet as it has evolved is a little less robust than that scenario implies. But the packet architecture is still the core of the design.
The decentralized nature of the architecture makes it almost impossible to track the nature of the information that is flowing through it. Each packet is just a tiny piece of a file, so even if you look at the contents of packets going by, it can sometimes be hard to figure out what the whole file will be when it is reassembled at the destination. [This is why it is even possible for there to be a controversy over uTorrent and UDP.]
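The relay mechanism described above can be sketched in a few lines of code. The following toy simulation is purely illustrative: the node graph, the retry rule, and the packet structure are invented for the example, and real Internet routing is far more elaborate. What the sketch preserves is the essential property that a packet carries only a destination, not a route, so traffic keeps flowing around failed nodes.

```python
import random

# Toy illustration of packet forwarding (not a real routing protocol).
# A node tries its working neighbors until the packet reaches the
# destination; no route is fixed in advance, only the destination.

NETWORK = {            # assumed topology, invented for the example
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def forward(packet: dict, node: str, failed=frozenset()) -> list:
    """Relay the packet hop by hop; return the path it happened to take."""
    path = [node]
    while node != packet["destination"]:
        candidates = [n for n in NETWORK[node] if n not in failed]
        if not candidates:
            raise RuntimeError(f"packet stranded at {node}")
        node = random.choice(candidates)   # try a neighbor, any neighbor
        path.append(node)
    return path

packet = {"destination": "D", "payload": "one tiny piece of a larger file"}
print(forward(packet, "A", failed={"B"}))   # still arrives, routed via C
```

Each packet in the sketch is, as in the text, only a tiny piece of a file; an observer watching a single node would see fragments pass by with no reliable way to reconstruct, or even identify, the whole file.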
In more recent eras, ideologies related to privacy and anonymity joined a fascination with emergent, “out of control” systems similar to some conceptions of biological evolution, to influence engineers to reinforce the opacity of the design of the Internet. Each new layer of code has furthered the cause of deliberate obscurity.
Because of the current popularity of “cloud architectures,” for instance, it has become difficult to know which server you are logging in to from time to time when you use certain software. That can be an annoyance in certain circumstances where latency, the time it takes for bits to travel between computers, matters a great deal.
There is a social dimension to the trend as well, in which anonymity becomes more cherished than personal identity. This is why the Wikipedia does not provide the true names of people who have changed an entry.
The appeal of deliberate obscurity is an interesting anthropological question. Some of the explanations that I find to have merit include a desire to see the Internet come alive as a meta-organism: many engineers hope for this eventuality, and mystifying the workings of the net makes it easier to imagine it is happening. There is also a revolutionary fantasy: engineers sometimes pretend they are assailing a corrupt existing media order, and demand the covering of tracks and anonymity from all involved in order to enhance the fantasy.
If these comments betray weariness with the youth culture I helped define long ago, it can hardly be helped.
At any rate, the result is that we must now measure the Internet as if it were a part of nature, instead of from the inside, in the way that we can examine the books of a financial enterprise. We must explore it as if it is unknown territory, even though we laid it out.
The means of conducting explorations are not comprehensive. Leaving aside ethical and legal concerns, it is possible to “sniff” packets traversing a piece of hardware comprising one node in the net, for instance. But the information available to any one observer is limited to the nodes being observed.
Using a variety of techniques, it is at least possible to estimate what is going on online. For the purposes of this Report, we’ll use the typical estimate stated earlier: that about half the bits flowing over the net originated as traditional movie, TV, or recorded-music content.
It is common to divide content distribution according to the screen it will be viewed on. For instance, a movie might be viewed on a large screen TV in the living room, on a mobile device, or on a laptop computer.
I will divide content distribution into two classes using a different principle, which I think cuts deeper to reveal potential business plans. One class of distribution uses the “Open Internet.” That means that a user accesses content through a generic browser, like Firefox or Internet Explorer, or some other generic tool. The content can be sent directly between users, or through services. There is no central log of what content has been sent where.
As it happens, when users access content this way, they currently tend to view it on a computer instead of a TV. There are many reasons for this. One important reason arises from ergonomics. One has to follow fine print to use a browser, at least as browsers are currently designed, so for the moment one must be close to a screen in order to use the Open Internet, instead of relaxing the eyes while sitting on a couch.
Even so, it is always possible that a design will take off that allows a large screen in the living room to function as a Web browser. The differentiations between screen usages are evolving, and are not critical to the differences between available business plans.
The alternative class of distribution to the Open Internet is a custom hardware delivery method. The canonical example is the iPod. While it is possible to load content onto it from the Open Internet, the whole point of the device is that it is exceptionally easy to load content from a particular service that is not open. The iTunes service does maintain a log of what content has been sent to which device. It is designed to be tracked, at least by its operators, instead of remaining obscure to all.
There are many portable devices that are, taken apart from an online service, similar to the iPod. These devices often receive content from the Open Internet. The iPod is by far the most successful such device, however. The iPod exists not to perform the same functions as other similar devices, but to provide a gated hardware destination for the iTunes service.
Other examples of hardware devices that exist only to create a channel in which it is possible to earn money from content include video game consoles like the Xbox and Wii, the Kindle, and the DVD player. It is also reasonable to place the cable TV box, the TiVo and other DVRs, the OTA DTV receiver, and even the traditional movie theater into this category.
These hardware devices can be called “gated hardware” or “content-commercializing hardware.” They are motivated by certain business plans. While some of these devices can receive content from the Internet, they often support other options, such as closed mobile phone or satellite transmissions.
The Internet has been the scene of some unprecedented commercial success stories, like Google, and some equally unprecedented failures, like the failure to convince young Internet users that they should pay for traditional content on the Open Internet.
As explained above, the Internet lacks a global log of its activities, analogous to accounting records. But because accounting records do exist in commercial firms, we can know for certain that the apparently enormous amount of commercial content that flows on the Internet is usually not flowing in a commercial way. No one is making money from it, or at least no one is earning a profit.
It would be a mistake to assume that new players in the online space are making money from content in the way that a TV broadcaster makes money by selling ads that appear during a show, or a theater operator makes money by selling tickets. There are widespread illusions that people are making money in ways that are related to these traditional business plans. In fact, there is a constant parade of new chimerical businesses that appear to be succeeding in this way.
An honest appraisal of available evidence leads to an opposite conclusion, however.
There are actually only three reasons commercial broadcast-like content might currently appear online:
It might be there to promote a different, commercial version of itself on gated hardware.
It might be there illegally, because it was pirated.
It might be part of an experiment in commerce, but thus far, despite extraordinary hype, such experiments have yielded disappointing results.
Each of these categories bears further examination:
Neither advertising nor fees for content have sufficed to fund analogs to broadcast content on the Internet.
Advertising online is not sufficient to support broadcast-style content over the Internet, and the impact of online advertising on traditional broadcasters is uncertain at best.
As stated in the introduction, seemingly contradictory trends commingle at the present time because “old” media persists even as new media rises.
One example of a pair of contradictory trends concerns the role of advertising in media commerce. On the one hand, advertising revenues are declining in some traditional media niches. Television advertising has been on a slow decline, for instance, while newspaper ad revenues have declined sharply.
On the other hand, advertising revenues based in new, online models, are rising dramatically. This phenomenon is to an extraordinary degree the story of a single company’s success. I am referring, of course, to Google.
These two events might seem to be mirror images of each other. Is the Internet enticing advertising dollars away from traditional venues like television? The answer is complex.
An “unfair” truth about the Internet is that content from the traditional mainstream media producers – the same ones who are being challenged by the rise of “Open” and “Free” Internet designs – is still an obsession for all those non-customers on the Internet. Although there is no precise accounting, BitTorrent users (who buy no tickets and see no ads) seem to gravitate to free versions of feature movies in current release.
Since Internet titans like Google and Yahoo are in the advertising business, it’s natural to assume that they must be competing with traditional broadcasters for advertising dollars. But that is not the case.
In a way, it’s unfortunate that what Google does and what TV broadcasters do are both described by the word “advertising” when the ways they make money have very little in common. Google is not enticing people to look at ads through content, but through facilitating everyday activities.
While Google is engaged in chasing hundreds of unproven, experimental sources of revenue, just a few ad placement services provide the vast majority of its revenues. [The Google annual report (Feb. 15, 2008) states that 99% of its revenues are from advertising programs.] AdWords (search-driven ads) accounts for the lion’s share of Google’s business. AdSense (ads that are automatically placed in sidebars on third party websites) is another example.
These aren’t really ads in the traditional sense, because they are only tiny text snippets, without production values, and the primary utility they provide is a link to an advertiser’s site.
Google places ads by tracking what people are interested in as they go about the most quotidian aspects of their lives. The most popular search topics aren’t what make the money. The money is instead made by what might be called the “most desperate” searches.
Google is not in the habit of disclosing the details of its business, but it is possible to infer a considerable amount. For instance, here is a typical, independent compilation of the likely top Google keywords for a particular week, as measured by price instead of popularity:
mesothelioma
structured settlement
vioxx attorney
drug rehab
contract management software
car accident lawyer
It is the non-glamorous events of life where most people spend most of their time and focus most of their concerns, and Google has found a way to commercialize them. The highest-paying Google advertisers are apparently American lawyers.
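The economics behind a keyword list like the one above can be illustrated with a toy auction. The sketch below is an illustration only: the bids are invented, and the simplified second-price rule is an assumption for the example, not a description of Google’s actual, undisclosed auction mechanics.

```python
# Toy keyword auction (illustrative only; not Google's actual mechanism).
# Advertisers bid per click on a keyword; the highest bidder wins and
# pays just above the runner-up's bid, a simplified "second-price" rule.

bids = {  # invented bids, in dollars per click
    "mesothelioma": [("law_firm_a", 54.00), ("law_firm_b", 49.50),
                     ("clinic", 12.00)],
    "car accident lawyer": [("firm_x", 21.00), ("firm_y", 19.75)],
}

def run_auction(keyword: str):
    ranked = sorted(bids[keyword], key=lambda bid: bid[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] + 0.01 if len(ranked) > 1 else ranked[0][1]
    return winner, price

for keyword in bids:
    winner, price = run_auction(keyword)
    print(f"{keyword!r}: {winner} wins, pays ${price:.2f} per click")
```

The point of the sketch is that the price of a keyword is set by competition among advertisers for whom a single click may be worth a great deal, which is why “desperate” legal and medical terms outprice glamorous ones.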
It is remarkable that Google earned a healthy profit on over US$16.5 billion revenue in 2007, and that represented revenue growth of two and a half times in only two years.
At any rate, content similar to the traditional category of professional broadcast content does not play a central role in the way Google or its competitors make money.
Google has a “long tail” when it comes to keywords. There is no reason to believe that Google would go bankrupt if mesothelioma were suddenly cured. There is no practical limit to the number of keywords Google can support.
The situation is different in the world of television. The top 100 advertisers account for over 40% of revenues, and they are dominated by carmakers and other branded consumer products like mobile phones.
The top ten TV advertisers in North America in 2007 [Advertising Age, June 23, 2008, TV ad spend only] bear this out.
A traditional TV ad has production values similar to or exceeding the content that is adjacent to it. A Google ad has no production values at all. It is nothing but a text fragment with a link.
TV ads address the glamorous aspirations of people. They entice a viewer to crave the new car or the new consumer electronics gadget. The content that precedes the ad will typically be pretty glamorous as well; a drama with a dashing detective, for instance. Or, the content might be connected with one of the celebrity names in the top Yahoo search list.
Google, by contrast, probes the private corners of life, often resonating with the least glamorous aspects of personal experience. No one produces traditional commercial content that resonates with the topics that come up.
Some of the other big Google keywords of the moment, that didn’t make the top six listed above, include “Yellow Teeth” and “Laser Hair Removal.” These topics can only show up on the screen because someone typed them into a search engine, not because they afflict a glamorous detective.
The differentiation is not absolute. There are some cases of overlap between important advertisers in the online and TV domains. For instance, both are excellent venues for selling consumer electronics like cameras. There are some other instances of minor overlap. Asbestos litigators have occasionally advertised on TV, for instance.
Nonetheless, it is clearly the case that Google and other companies that offer online advertising are not simply siphoning money that would otherwise go into broadcast TV.
Meanwhile there is a class of online advertising that is growing and does compete with a traditional form of advertising, but not with broadcast advertising.
Banner ads and other similar designs can be thought of as “neo-print” advertising. They are placed next to journalistic text or other content on a Web page in the same way that ads are placed in a physical paper newspaper.
Online display advertising is projected to earn about US$8 billion in revenue in North America in 2008, but there has been a worrisome downward trend in pricing and a slowing of market growth in this segment. Unlike the search market that Google dominates, online display advertising is a highly competitive marketplace with many significant players.
(Unfortunately for the newspapers, a mostly-free service, Craigslist, has siphoned away another kind of advertising: the classified ad business. This is a classic Internet story. Free services make established business plans obsolete.)
There are also new types of ads that are placed before, during, after, or, most recently, over and within online video content. This category is currently not successful enough to be the sole source of revenue on a commercial Web site. Sites like YouTube are heavily engaged in trying to find a design formula to make this type of ad successful, but have not as yet done so. This quest will be explored more below.
One reason for the success of Google-style tiny contextual text ads over content-tied ads with higher production values is that the results of Google ads are more easily measurable.
It is trivially measurable when a customer clicks through and might actually complete a transaction, while the effects of persuasion are indirect and harder to measure.
Therefore advertisers are drawn to the more concrete alternative. This is perhaps unfair to those who attempt to sell more traditional ads online, but it is true, and likely to remain true.
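The pull of measurability can be made concrete with a minimal sketch of the arithmetic a direct-response advertiser can do; all of the figures below are invented for illustration.

```python
# Minimal sketch of direct-response ad arithmetic (figures invented).
clicks = 1_000             # assumed: measured click-throughs on an ad
cost_per_click = 2.50      # assumed: dollars paid per click
conversions = 40           # assumed: measured completed transactions
revenue_per_sale = 120.00  # assumed: dollars earned per transaction

spend = clicks * cost_per_click
revenue = conversions * revenue_per_sale
print(f"Spend ${spend:,.0f}, revenue ${revenue:,.0f}, "
      f"return {revenue / spend:.1f}x")
# Every number above is directly observable. A brand-building TV spot
# offers no comparably direct measurement of its persuasive effect.
```

An advertiser who can run this calculation every morning will naturally favor the channel that makes it possible.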
To summarize, there is little reason to believe that the most successful online advertising techniques are drawing revenues away from broadcast media. On the other hand, it is probably true that certain less successful online advertising techniques are competing directly with print advertising.
To restate what is familiar news: People don’t pay for content on the Open Internet. The obvious reason is that there is no effective impediment to making digital copies of content, so content suffers no scarcity at all, and therefore has no value.
A thought experiment can illustrate this situation:
Suppose you sell tomatoes in a store, and a truck pulls up next to your store that offers free tomatoes.
You might ask why anyone would offer tomatoes for free. The reason would be that there are an infinite number of other trucks with tomatoes pulling up to park next to the first truck.
Your store will not be able to compete unless you also make your tomatoes free, not merely cheap, or you come up with a clever, seductive scheme that gets people to spend money they don’t have to spend.
At least that first truck might get a nice parking space and might make some money from selling the advertising space on its siding.
Let’s call that ideally positioned truck Google.
If you’re a canny storeowner, you might be able to get a few customers to pay a little for tomatoes because they are packaged with your stylish, innovative dicer and saucepan. If you can do that, you might call yourself Apple.
In this thought experiment it’s hard to have more than one truck called Google, because only one driver can get to the most ideal parking space first. The second driver, who we’ll call Yahoo, can perhaps sell some advertising, but only at a significant disadvantage.
It’s astoundingly hard to come up with those seductive schemes to get people to pay for things that are otherwise available for free. Consider the real world Apple. It has managed to become the biggest music store, but despite a series of attempts, has made almost no progress selling movies or TV shows.
If you are a tomato grower, you see a precipitous drop in your revenues. Neither the ads on the side of the Google truck nor the gadget-cleverness in the Apple store are providing you with anything like the revenues you used to get before there were an infinite number of tomato trucks.
The music and newspaper industries are like the tomato growers.
The function and context of “content” has shifted to such a degree that it might be reasonable to state that there is virtually no directly-commercial broadcast-like content on the Internet.
The new pattern that has emerged is that the professional-seeming content available over the Internet is non-commercial. It can be called “professional-seeming” instead of genuinely professional because the term “professional” usually suggests that money making is involved.
Such content does not generate access fees or advertising revenues to speak of, however. True, there has been an endless parade of optimistic, high visibility ventures trying to find the secret sauce that will reverse the trend. After more than a decade of failures, however, it is becoming reasonable to treat all such quests as quixotic.
This conclusion might strike some readers as overly dark, or a premature call, and indeed there is room for differing interpretations. At some point, however, a result as persistent as the failure of schemes to commercialize content on the Open Internet should be treated as an operative assumption.
Virtually all professional-seeming content serves to promote genuinely commercial content that is not delivered over the open net, but over gated hardware devices.
For instance, essentially no one pays to hear music on a computer, because it is free to listen to music on mainstream sites like MySpace or YouTube. It is almost as easy to use shadier free content services like PirateBay. And yet, customers pay to transfer music easily to an iPod.
The role of traditional broadcast productions in the online world extends beyond literal replication and transmission, however. It might be true that the majority of amateur online video production that involves a significant degree of planning or staging, as opposed to spontaneous Webcam presence, takes the form of fan-generated parodies, tributes, and other derivative material.
We have witnessed a remarkable inversion of the fundamental dynamics that preceded the appearance of the Internet. The majority of seemingly-professional content flows over an open system, the Internet, where there is no need to apportion spectrum, and also no way to earn money from content. Instead, entrepreneurs develop proprietary hardware devices to deliver the same content. When content flows to proprietary gadgets it can earn money, and only then.
Absent such hardware, there is no such thing as commercial content. There is no such thing as an analog to commercial broadcasting over the public spectrum on the Internet. Many an entrepreneur still hopes to prove that there can be such a thing, but at this point it would be quite a surprise if anyone succeeds.
In the past, if content flowed through resources owned by the public, it could be commercial only if the public licensed it to be so. Today, content can only be commercial if the journey the content makes from provider to consumer is constrained at each end by private, proprietary gadgets.
These gadgets are prevented from conveniently “tuning in” to the content available to the general public. The qualifier related to convenience is crucial, since the barriers are not absolute. A subculture of hackers enjoys scaling them, but on a statistical basis they are effective for the population as a whole.
The Commercializing Hardware becomes the anchor of the profit center, and therefore the new invariant in an Internet-related media venture. An iPhone owner can access video content either over the Open Internet or over one of the commercial wireless telephony networks, for instance. Only the Apple server and the gadget in the consumer’s pocket are the same in each case. The broadcast infrastructure used to be the invariant core of media distribution, but that is not true for successful Internet-based business plans.
Since an analog TV had to tune into limited spectrum controlled by the public, each nation had no choice but to have a policy about how to allocate that spectrum. Now things are different. There is no practical limit to the variety of Commercializing Hardware that might be developed, so regulators have to struggle to stay aware of, much less regulate, a fast-changing situation. There is therefore less difference between national approaches to commercial content than there used to be.
There is a vast amount of unauthorized file sharing online. It is increasingly rare to find anyone of college age or younger who thinks of file sharing as an unethical practice. The Open Culture movement has been driving the message for quite some time, and it has sunk in.
The large file sizes required for movies and television shows have been an impediment to a certain degree, though clever file sharing sites break large files into pieces to get around this problem. (I might have played a role in motivating the cat-and-mouse game that has characterized attempts to protect copyright, when I was an early advocate of “Open Culture.”)
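The piece-splitting technique is easy to sketch. The fragment below is a simplified illustration and not the actual BitTorrent protocol, which adds trackers, peer discovery, and much more; it shows only the core idea that a large file is cut into fixed-size pieces, each fingerprinted with a hash, so that different pieces can be fetched from different sources and verified independently.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # assumed piece size for the example (256 KiB)

def split_into_pieces(data: bytes) -> list:
    """Cut a file into fixed-size pieces, each paired with its hash."""
    return [
        (hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest(),
         data[i:i + PIECE_SIZE])
        for i in range(0, len(data), PIECE_SIZE)
    ]

movie = b"\x00" * (3 * PIECE_SIZE + 1000)  # stand-in for a large video file
pieces = split_into_pieces(movie)
print(f"{len(pieces)} pieces; first hash {pieces[0][0][:12]}...")
# Each piece can come from a different peer and be checked against its
# hash, so no single source ever needs to hold or send the whole file.
```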
It has become extraordinarily expensive and time-consuming to prosecute file sharing, so the practice has become tacitly accepted. Attempts to prevent file sharing have essentially failed. The Open Culture movement has won the hearts of the young generations.
When the most anticipated video game of the year, Spore, was recently introduced with copy protection, the Open Culture movement organized a boycott of it, which seems to have substantially reduced its success in the marketplace.
The Open Culture movement counsels content producers to treat piracy as promotion, and indeed that is what is happening.
One unintended consequence of the Open Culture movement is that it makes the much-anticipated event known as “convergence” in the consumer electronics market less likely. Convergence in the world of media technology refers to the idea that disparate devices such as TVs and personal computers will be far less functionally distinct in the future, even as aspects of their designs, like screen size, will vary a great deal. Each device would gain the functionality of the other, and any data, content, or services available on one would also be available on the other.
If media gadgets converged into a single standard, then there would be only one protocol to “crack,” making it a simple target for the Open Culture movement, and useless for commercializing content.
All in all, ever-more-varied gated hardware gadgets should be expected as a concurrent condition of the popularity of Open Culture.
If you want to talk to Internet content business skeptics, talk to executives from the traditional content provision industries. NBC CEO Jeff Zucker has been widely quoted, for instance, as predicting that the Internet will turn “analog dollars” into “digital pennies.”
If you want to talk to optimists, talk to Silicon Valley entrepreneurs. If there is an “official belief system” in the world of Internet entrepreneurship, it might include the notion that it’s only a matter of time before someone figures out how to charge for video content on the Open Internet. There is a constant rhythm of ventures being born and dying in this area.
For instance, as this response is being written, one high visibility video-over-the-net venture, called Mobuzz, died. Mobuzz hoped to make money from advertising, and its specialty was producing and releasing versions of videos in multiple languages at once. It was shuttered on November 27, 2008, after a doomed last minute ploy of simply asking its users for donations.
Meanwhile, on November 26, 2008, a bright new star was ignited in the firmament of online video businesses. Sling.com was launched. Since there is a related hardware product, a set-top video-on-demand device called a Slingbox, sold by the same company, there might be some hope for Sling.com as a promotional vehicle. I doubt anyone seriously expects it to make money on its own.
If it seems remarkable that video-over-the-net ventures were being launched and sunk on the same week as this report was written, it shouldn’t. There has been a constant, cluttered series of attempts in this “space” (category of business plan) since broadband connectivity started to coexist with traditional cable TV service in the home.
The desire to find a path to success is so great that we can call experimental online video ventures a persistent phenomenon, and the raison d’être for a significant portion of the video content that flows over the Internet.
These ventures are either privately held, or units of large public companies, so information about how their business plans are working out has to be inferred. Here is a list of some of the better-known names in the space at the moment. Each has a footnote pointing to current reporting on its results as a business. The reporting is consistent for all these examples: There is no evidence yet of a profitable business plan.
Babelgum [an interesting case where an Internet “broadcaster” decided to switch gears and produce content while waiting for someone else to figure out how to make money from it]
Brightcove [“Brightcove is no longer a direct competitor in the market but now enables other hopefuls”]
(There is apparently a law that Silicon Valley ventures in this space must have nursery-school names.)
These companies vary in their strategies. There are differentiations in user interface and streaming or file transfer technologies. Search, content recommendation, and social networking capabilities are common add-ons. The various contenders might seek different mixes of user-generated vs. professionally-sourced content, or real-time vs. pre-recorded streams, or might have different approaches to advertising or fees.
The ventures in the above list are joined by assorted offerings from the Internet majors, like Apple, Microsoft, Yahoo, and AOL, as well as from the traditional broadcasting companies that also run websites, like Viacom, Disney, and Time Warner.
That’s quite a lineup. With all these companies competing in the space, one might assume that there is a lot of money to be made.
Maybe there will be someday. But thus far, there is only money to be lost.
YouTube is the biggest player in the space, by far. Nonetheless, it loses money, despite huge capital infusions from its parent company, gargantuan viewing audiences, and an ambitious, flexible commitment to perpetual, creative rethinking about how to incorporate advertising.
It is important to distinguish the business of video on the Open Internet from video to gated devices. Apple turns a profit on its iTunes service if it is considered a portion of its iPhone and iPod business. (Though music is the primary product on iTunes, video is also delivered.)
In comparing Apple to a business like YouTube, one should only consider revenues Apple can earn from activities in the Open Internet, sans gated devices. Few customers (or advertisers) pay anything at all, to Apple or anyone else, for content that users will view only on the computer. Customers pay a little, however, to get the content easily into their portable devices. Some of the video ventures above have strategies that include support for gated devices, such as set-top boxes or smart phones.
The second-place slot, after YouTube, has recently been held by either Fox or Yahoo. Neither has made money on Open Internet video. Likewise, no smaller player has as yet turned a profit, at least according to information in the public record. There are periodic rumors that one of the startup ventures has started to turn a profit. For instance, in late 2008 such rumors circulated around Hulu, only to be debunked shortly after.
Since one particular, delicate question is often asked, I might as well address it: Some readers might wonder if the profitability outlook for pornographic content is any different. It is occasionally claimed that pornographic content providers pioneer business models that are later adopted by mainstream content providers.
For a while, this claim appeared to have merit. From the late 1990s until the mid-’00s, there were successful entrepreneurs in the business of delivering pornographic content online. Starting around 2007, though, peer-to-peer file sharing assaulted their business model.
The online pornography business has contracted with extreme rapidity since the introduction of free sites that offer the same content. Free pornographic sites are now created and maintained by entrepreneurs hoping to capture some dwindling advertising dollars (without much success, according to available accounts); by criminals hoping to infect computers with viruses or pry loose personal financial data; or by Open Culture ideologues, who believe that “free” content is an enlightened goal.
In sum, the activities of online video delivery ventures can be best labeled as “experimental” business practices, rather than demonstrations of plausibly profitable future business plans.
Might all the experimentation eventually lead to success after all? We can turn to recorded music and newspapers for precedents. These are two media business categories that “went digital” sooner than movies or TV because they require less bandwidth. They have found huge audiences online, but no successful business plans, so they are experiencing rapid, disastrous business contractions, even as their audiences have become larger than they ever were before.
All is not entirely bleak. The newspaper-publishing world does provide some examples of niches in which a publisher has been able to identify an audience that will pay for online content.
For instance, the Wall Street Journal has been able to maintain a paid subscription service for certain portions of its Web offerings, even after many other newspapers, such as the New York Times, failed when they tried. The explanation might be that the Journal’s audience can more readily afford to pay, the content is timelier (reducing the value of unauthorized copies of articles that take even a little longer to arrive), and the habits of the audience in the financial trades include routinely paying for information that flies across screens.
In the world of music, there do not appear to be any comparable happy examples. Some music subscription services, like Rhapsody, have a base of subscribers, though they also appear to rely on deals with hardware providers, such as makers of cell phones and set-top boxes, for the core of their revenues.
There continue to be experiments in ways to make money from either selling videos or ads related to videos over the Internet, but after a decade of failed designs, the outlook is not positive.
How is it that hope can spring eternal in Silicon Valley, when a seemingly endless variety of “pure” video-over-the-Web businesses have failed to become profitable? The answer is in part that optimism is a cultural characteristic of the high-tech industries, but there is also a deeper reason.
Trends can gain momentum with tremendous speed on the Internet. The density of connection and contact between users results in a highly exaggerated network effect. This means that a new service with a vast user base can appear virtually overnight. This has happened many times.
Wikipedia became a worldwide standard bearer for reference information in less than a year. YouTube saw similar explosive growth. Merely being the beneficiary of a network effect explosion of usage doesn’t guarantee profits, of course. But it does cause huge, global change in culture practically overnight.
The way the rise of a new online habit takes hold is generally called an “S-curve.” The metaphorical S we are concerned with is imagined to look more like an integral sign than a conventional S, like this: ∫.
The idea is that there is only a tiny, often ambiguous warning of a rise before the huge, sudden, main portion of the rise sweeps upwards. The “hockey stick” is what an “S-curve” looks like in a business plan: the venture loses money at first, but losses will soon turn to easy, tremendous gains, symbolized by a long handle pointed towards the sky.
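For readers who want the curve made precise: the standard mathematical model for such adoption curves is the logistic function, offered here as a generic illustration rather than a claim about any particular venture:

$$N(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$

Here $N(t)$ is the number of adopters at time $t$, $K$ is the saturation level, $r$ is the growth rate, and $t_0$ is the inflection point. For $t$ well below $t_0$ the curve is nearly flat, which is the tiny, ambiguous warning; around $t_0$ it sweeps suddenly upwards before leveling off at $K$.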
The network effect has a catch. If a particular service catches the network-effect rocket ship, it becomes inherently more valuable than competing services, even if it would be considered inferior based on features or design.
Once Wikipedia caught on, there was no hope that an “Alterpedia” would follow. All the links lead to Wikipedia, so it is what search engines point to, and it is therefore where people are motivated to contribute articles. A self-reinforcing cycle becomes almost unstoppable. This is also why so many personal computers have the same operating system, and why one search engine enjoys practically all the revenue from search-based advertising.
Digitally connected systems often prefer singleton winners to a distribution of winners.
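A toy simulation makes the tendency vivid. In the following sketch (every parameter value is an arbitrary assumption, chosen only for illustration), each new user picks a service with probability proportional to a superlinear function of its current user count, a crude stand-in for the network effect:

```python
import random

def simulate(n_services=10, n_users=100_000, alpha=1.5, seed=1):
    # Each new user joins a service with probability proportional to
    # (current user count) ** alpha. With alpha > 1, popularity feeds on
    # itself superlinearly -- a crude model of the network effect.
    random.seed(seed)
    users = [1] * n_services          # every service starts as an exact equal
    for _ in range(n_users):
        weights = [u ** alpha for u in users]
        winner = random.choices(range(n_services), weights=weights)[0]
        users[winner] += 1
    return sorted(users, reverse=True)

final = simulate()
print([round(u / sum(final), 3) for u in final])
# A typical run ends with one service holding nearly all the users.
```

Run it repeatedly with different seeds: almost every trial ends with a single service holding the overwhelming majority of users, even though all ten began as exact equals. Which service wins is an accident of early luck, not of merit, which is precisely the point.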
And this is why investors are willing to bet over and over again on a series of silly-sounding startups to pump video over the Internet. They are all likely to fail. But on the off chance one takes off, it will enjoy an advantage that is almost, but not quite “winner-take-all.”
It is for this chance that the dream never dies. It is for this shot at winning almost everything with a small initial investment that the small investments never stop coming.
When I work as a musician, I always wander to the back of the audience when someone else is on stage and hoot and clap, in order to get the audience to do the same. All musicians do that. It’s to our mutual benefit.
In Silicon Valley we do exactly the same thing. There is a mutual benefit in keeping alive the hope that certain commercial niches on the Internet might yet work out for some lucky entrepreneur, even after many have failed trying. It is not impossible that a business like Hulu could become profitable on the Open Internet at some point in the future, but how many years have to go by before we start to treat that as an unlikely event instead of a likely one? A decade ought to be enough.
The failure of online-content business models, sans gated delivery hardware, relying on either consumer fees or advertising for revenues, should by now be treated as a persistent negative result.
The Internet is a most curious phenomenon, because it is immediately accessible to all of us, and yet it is extraordinarily hard to get a fix on what is going on with it.
For instance, is it making broadcasting, as we know it, obsolete? Or will the Internet and something similar to television continue to coexist for generations?
Because trends can arise with extreme speed online, there is still dramatic volatility in the most basic patterns of how people use the Internet, even a decade and a half after the introduction of the World Wide Web.
In just the last year, for instance, we have seen the momentous rise of Twitter and other services that provide constant ambient connection between users. It has only been a little longer since Facebook suddenly became the ubiquitous mode of teenage self-definition.
These events are not mere shifts in consumer behavior, but reconsiderations of basic societal values. The Facebook generation has thus far adopted different ideas about privacy, socialization, friendship, and status than any previous people on Earth.
And yet, within this profound state of flux it is possible to discern some emergent trends that have begun to demonstrate persistence and even solidity. These often come in the form of what a scientist would call a “negative result.” This is not a disparaging term, but rather a recognition that it is often more feasible to disprove assertions than to prove them.
The philosopher of science Karl Popper emphasized the way an accumulation of negative results can create an improving scientific theory. Those ideas that have not been disproved are likely to be more and more robust as the process of disputation proceeds.
This author will address the Commission’s questions in a Popperian spirit. It is possible at this time to dispute some ideas about the notions of “broadcasting” and “content” as they are applied to the online world. We still don’t know where the online world is going, but we can start to say something about where it is not likely to be going.
A Popperian approach can address some of the questions posed by the Commission in a useful way.
The repeated phenomenon of sudden changes in mass behavior should not lead us to conclude that it is impossible to think about the future of the Internet. The problem is not complete inscrutability, but a high level of volatility. It is therefore easier to imagine a variety of rough outlines of futures for the Internet in ten or twenty years than it is to foresee the events that might occur next year.
The author has worked with the “Scenario Method” as one of the “Remarkable People” of the Global Business Network for over twenty years. The scenario method starts by bundling a wide range of plausible futures (the ones not excluded by negative results) into a small number of representative scenarios. Once that is done, it is possible to ask questions about what the various representative scenarios have in common, and what they do not.
Successful commercial content might still appear in the public online Internet, for instance, but it might demand substantial new infrastructure investment to support entirely new media forms, such as enhanced live performances via tele-immersion.
This is only one of a number of scenarios that should be considered in any discussion of policy related to the Internet.
This response will explore this scenario and some others, and will consider the implications of various potential regulatory interventions in each scenario.
In the case of potential regulatory interventions in the evolution of the Internet, a scenario-driven investigation yields useful results. Some interventions, for instance, are likely to have negative effects in multiple scenarios and should therefore be avoided. Others might be sensible as events unfold, but are premature to consider at this time.
The first scenario, “Open Culture Forever,” should be considered the current normative scenario. It is the scenario preferred by many powerful stakeholders, including the largest Internet companies, like Google, and the “Open Culture” movement.
In this scenario, a large amount of content will always be available for free over the Open Internet, but it will serve as a promotional device for the paid versions that are accessed through custom gadgets, the “commercializing hardware.”
I will now briefly exceed the scope of the questions asked by the Commission in order to address another way that trends on the Internet might have an effect on the future of culture, including Canadian culture. I am no longer an enthusiast of Open Culture. It has a tendency to lead to designs with a collectivist quality, in which the creative outputs of large numbers of people are aggregated anonymously into giant structures that erase individual perspective. For that reason, it could eventually become a drain on Canadian identity as well.
A second criticism I will raise of this scenario, which also doesn’t fall within the scope of the Commission’s questions, is that it is a needlessly expensive, wasteful, and anti-green scenario, since it involves the creation of extra hardware in the world that would not otherwise be needed.
At any rate, the way to support Canadian content developers in this scenario is to encourage the success of the physical gadgets and venues that deliver paid content, and not to place any burdens on the promotional use of the Internet.
Therefore, under this scenario, there should be no extra fees or taxes considered for ISPs. This argument will be examined in detail below.
In the second scenario, someone invents the missing trick. Some new startup venture with a kindergarten name actually rides the fabled S-curve to riches.
Somehow, someone finds a way to present a content delivery service on the Open Internet that consumers, or the advertisers trying to influence consumers, are happy to pay for.
Don’t hold your breath. Entrepreneurs have been trying for over a decade.
There are two known possibilities for revenues: advertising and fees for content. Maybe the long-sought trick will involve a third possibility that has evaded detection until now.
If the missing trick appears, then it will be new information. There is little to be said about it here.
The third scenario is more hopeful, but more challenging, to explore.
The Internet seems to provide unlimited opportunities at virtually no cost for the luckiest content entrepreneurs, as if by magic, like an infinite cornucopia. Relatively little capital was required to start ventures like YouTube, Facebook, and so on. (Of course, once they gave an indication of catching the network-effect rocket ship, they couldn’t beat off investors with a stick.) The engineering insight required in these cases was not earth-shaking. YouTube made use of an existing video delivery technology, for instance. The primary trick they had to perform was to arrive with plausible support for an online service niche at just the right moment to catch a ride on the network-effect rocket ship.
Perhaps it is the attractiveness of this mirage of an almost weightless, supernatural spring of wealth that has obscured the fact that the infrastructure of the Internet is physical, and has required substantial investment over a significant period of time.
There is a danger to this kind of magical thinking. If we constrain ourselves to think only about Facebook-like tales of sudden prominence (even if it is non-remunerative) which can arise out of the existing Internet infrastructure, we blind ourselves to greater possibilities that might exist if we consider improving that infrastructure.
When the bandwidth and other properties of the net have improved in the past, the uses of the net have fundamentally changed. When users connected using modems, for instance, it was too much trouble to upload and share videos, but when broadband became common, suddenly user-generated video became a major category of cultural expression.
The typical current level of Internet service came about as a compromise between the costs of technology at the time (mostly in the 1990s) and the needs of the media forms that were already understood. In other words, the existing Internet infrastructure is just shy of being good enough to serve as a delivery medium for movie and television content. Being “just shy” means that ISPs can sometimes be overwhelmed by customers sharing videos, but not to the degree that the activity is made completely impossible – at least not yet.
Do we have any information about what media forms might appear if the specifications of the Internet were improved beyond the requirements of video-related services?
One of the reasons this author and many others find the world of New Media intriguing is that we suspect that fundamental surprises still await discovery. There is a hopeful quality to digital-media research.
Here is one recent example. This author was the first researcher to demonstrate the placement of multiple people inside a shared, general-purpose simulation. This happened in the 1980s. I dubbed this media form “virtual reality” over a quarter of a century ago, and speculated that one day people would enjoy “VR” in a way that was quite different from TV. It would be a form of shared, waking-state, user-designed dreaming. Users would invent surreal personal bodies and environments and enjoy maximizing weirdness for each other. At the time, this was considered a bizarre and radical concept.
In the ’00s, the typical personal computer finally gained the 3D graphics capability to render high-quality games, while at the same time, many households were upgraded to broadband connectivity. As a result, it became possible to conduct an early test of whether the wild notion of Virtual Reality would appeal to large numbers of people.
In 2002, I became an advisor to a venture called Second Life, which planned to bring a user-designed world of avatars and strange environments to the public. Second Life does not convey the Virtual Reality experience, since it is only viewed on a conventional computer screen. Even so, the content of Second Life evokes what Virtual Reality might be like in the future.
To my delight, it turns out that large numbers of people decided it was worth substantial time and dedication to design avatars and virtual places. Second Life rapidly gained a large following.
Like many New Media ventures, it isn’t clear if there will ever be a path to Google-like success for Linden Lab, the purveyor of Second Life. But there is at the very least a demonstration that the public is curious enough about New Media experiences to rise to the challenge and adopt new habits to enjoy them, even if those habits are labor-intensive.
Second Life is a particularly interesting case, since it has had the opportunity to explore a wide variety of different revenue sources, including monthly fees, commissions on the sales of virtual goods and services (like clothing for avatars), virtual real estate (the sale of land in the virtual world), and currency exchange (between Linden Dollars and real-world currencies).
The enormous question that my colleagues and I obsess over is, “What undiscovered media forms might be possible if Internet connectivity is upgraded beyond the typical current level? What new forms of cultural expression might arise?” Might there be new business plans for ventures related to as-yet unknown media forms? Maybe the dilemmas that have damaged the recorded music and newspaper businesses would play out in a very different way at some higher magnitude of Internet performance.
There is a large, unexplored space of possibilities. I will present two potential examples in order to flesh out this scenario, but there are undoubtedly many others.
In a market economy, at least some degree of scarcity must be present for something to have value. In digital systems as we know them, a file of information can be copied. That means the contents of a digital file are not inherently scarce, and are therefore not valuable. Any particular collection of bits can be infinitely copied at an infinitesimal expense.
Furthermore, even if a large file for content such as a full-length movie takes longer to copy over the net than it would take to view, there is still utility in copying it. Some online video distribution designs copy movies overnight, for example.
The reason for the qualification “as we know them” is that the initial conception of digital media foresaw a different architecture. Ted Nelson was the inventor of the link and the notion of hypermedia. He articulated a vision of something like the Web in the 1970s. Had his design taken hold, there would have been only one copy of each file. (Of course, as a matter of engineering efficiency there might have been caches with copies of files. But from a logical point of view, each file would exist only once.)
Nelson’s plan was that each content provider would charge a small amount each time a file was accessed. There would be no continuation of the idea of copyright, or attempts at technology for copy protection, because there would be no copies. Any citizen could be a content entrepreneur, charging for access to their content. This idea was more democratic and libertarian in its qualities than the “free” ideal which has taken hold.
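To make the contrast with today’s copy-everywhere architecture concrete, here is a purely hypothetical sketch of the flavor of such a design; every name and price below is invented for illustration:

```python
# A hypothetical sketch of Nelson-style hypermedia: one canonical copy of
# each work, with a micropayment credited to the author on every access.
# Names and prices are invented for illustration only.

class HypermediaStore:
    def __init__(self):
        self.works = {}       # work_id -> (author, content, price)
        self.balances = {}    # author -> accumulated micropayments

    def publish(self, work_id, author, content, price=0.001):
        self.works[work_id] = (author, content, price)
        self.balances.setdefault(author, 0.0)

    def access(self, work_id):
        author, content, price = self.works[work_id]
        self.balances[author] += price   # paid per access; no copy is made
        return content                   # served from the single master copy

store = HypermediaStore()
store.publish("essay-1", "any_citizen", "the text of the work")
for _ in range(3):
    store.access("essay-1")
print(store.balances)   # ~{'any_citizen': 0.003}, up to float rounding
```

The essential property is that reading and paying are the same event, so there is nothing to pirate and no copy-protection arms race.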
At any rate, Nelson’s design was not the one that was adopted. The Internet as we have accepted it is a sort of infinite, free public copying machine. Observers can have different opinions about whether that design is desirable or not, but it is worth remembering that it was not inevitable.
Given the nature of the Internet design that has become entrenched, what kinds of bits can be conveyed over it that will retain value in an environment of endless copying? The strategy of gated “commercializing hardware” has already been discussed. Are there any other strategies?
Here is one: Real-time interactivity cannot be captured in a downloaded file, because such a file only captures the past. Right now, real-time interactivity over the Internet exists in a variety of forms. There are text messages and chat, audio telephony, virtual worlds like Second Life, and poor-quality video calls over services like Skype.
The bandwidths required by these services are not huge, so the business plans available to entrepreneurs who offer them are similar to those available to content providers. If a service is channeled through gated, commercializing hardware, like a smart phone or a video game console, then consumers are willing to pay for the service, but it is hard to make much of a business on the Open Internet.
But what if there are forms of real-time interaction that will require, and drive demand for, higher levels of Internet performance?
3D movies have recently found success in theatrical venues. There is now a substantial and high quality pipeline of movies in production with 3D techniques built in from the start. This is a triumph for a dedicated community of 3D entrepreneurs who worked towards their goals for many years.
Unfortunately, it is hard to transfer that success into the home. While a number of vendors are already selling consumer 3D TVs, they don’t function in the home setting as well as they do in theaters.
3D screens have to fill much of the viewer’s field of view in order not to seem like distant windows. Adding 3D to a TV with a given form factor can have the effect of making what used to seem like a large screen into an inadequate, smallish screen.
There is currently tremendous energy in discussions about how to solve this problem. A great many players in the marketplace have an interest in finding a solution.
I have my own guesses as to what solution will take hold, but my purpose here is not to discuss 3D designs. What is important is that it is likely that whatever design emerges will have a few qualities that are highly relevant to the topic of this Report.
Consumer 3D is fairly likely to evolve in a way that includes at least certain layers of interactivity, even when the content being viewed is not interactive. For instance, a successful home 3D solution might involve a subconscious level of interaction related to the motion of the viewer’s head.
If the user’s head motion can be measured in real time, then a small physical display might be able to simulate a large enough display to make the 3D effect appealing. For instance, a small display might be worn like eyeglasses, such that wherever one looks, one seems to be looking into a different portion of a vast screen that is always there, even when one looks away.
Another potential design would be a small display that hangs from a delicate moving arm in such a way that as you look around, it stays in front of your head, creating the illusion that a much bigger display is persistently surrounding you.
Even if the display is to be located on the wall, like a present-day flat screen TV, the camera angle of the content you are viewing might be constantly adjusted to compensate for your head position, creating the illusion that what you are looking at is positioned in the same space that you are.
Or, displays might emit a much more complicated form of visual information than they currently do, becoming more like holograms.
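The common thread in the first three of these designs is a simple geometric trick: use the head’s position to decide which portion of a much larger virtual screen the small physical display should show. A minimal sketch follows; the tracker interface and all field-of-view numbers are hypothetical, chosen only for illustration:

```python
import math

# Minimal sketch of the head-tracked "window onto a vast virtual screen"
# idea. Assumes some hypothetical tracker supplies head yaw and pitch in
# radians; all field-of-view values are illustrative assumptions.

VIRTUAL_FOV_H = math.radians(160)   # the simulated screen spans 160 degrees
VIRTUAL_FOV_V = math.radians(90)
DEVICE_FOV_H = math.radians(40)     # the small physical display shows 40
DEVICE_FOV_V = math.radians(25)

def visible_window(yaw, pitch):
    """Return the normalized rectangle (left, bottom, right, top) of the
    virtual screen that the physical display should render for this head
    orientation."""
    cx = 0.5 + yaw / VIRTUAL_FOV_H      # map angles to 0..1 screen coords
    cy = 0.5 + pitch / VIRTUAL_FOV_V
    half_w = (DEVICE_FOV_H / VIRTUAL_FOV_H) / 2
    half_h = (DEVICE_FOV_V / VIRTUAL_FOV_V) / 2
    cx = min(max(cx, half_w), 1 - half_w)   # never leave the virtual screen
    cy = min(max(cy, half_h), 1 - half_h)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

# Looking 30 degrees to the right and slightly upward:
print(visible_window(math.radians(30), math.radians(5)))
```

Re-running this computation every frame, as the head moves, is what creates the illusion that a vast screen persistently surrounds the viewer.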
It is intriguing to think about how these displays might work, but the purpose here is to consider how they might impact the range of available business plans.
First, note that 3D movies in the home will require new hardware.
Second, unlike in the theater, home 3D will probably incorporate at least an element of interactivity. This means the standard they adhere to will be more complex than the standards associated with non-interactive media. And that means the new hardware will be non-trivial, and will be an ideal platform for a naturally occurring instance of what I called “commercializing hardware.”
Taken together, this suggests that home 3D movies flowing over the Open Internet will not interfere with commercial 3D opportunities in the way that is happening for conventional movies. Immersive content might not have an “Open Internet” state at all.
There is one piece of bad news. This means that the way theatrical 3D movies are being produced currently will probably not transfer into the home. A new production method will be needed.
Most experts predict that the initial content is likely to be sports. There have already been successful experiments in gathering viewpoint-independent 3D “sculptural movies” of sporting events. The usual example that has been given, for decades now, is that you’ll be able to experience the puck’s perspective in a hockey game.
There will be surprising winners and losers in immersive content creation. Digitally-enhanced stage theater production companies might be advantaged over cinema or television production companies because their staging, lighting, acting, and so on already account for viewing from a wide range of perspectives.
Digital enhancements to stage productions might include virtual sets and special effects, so that theatrical staging would take on some of the production values of cinema or television.
The technical challenge to bring immersive content into the home is great, but the commercial opportunity is also great. Not only would viewers be treated to an entirely new kind of entertainment experience, but that experience would inherently require complex hardware, thus neutralizing the problem of value loss on the Open Internet from the start.
Internet infrastructure might have to be upgraded to support immersive content. Immersion as described here will involve a complex set of synchronized data streams and latency-critical initiations of streams. For instance, a viewer might suddenly look at a part of the immersive environment that had not been visually rendered recently, because it wasn’t being looked at, and that might trigger a need to send fresh data to make sure it is rendered properly. It is possible that intermediate data caches and computational capabilities will be beneficially placed in locations that can provide lower latency service to customers. An ISP might have to turn into a host of computational services in a new way. It is still too early to say.
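To make the idea concrete, here is a minimal sketch of the kind of logic involved; everything in it is hypothetical and illustrative. When the viewer turns toward a region with no fresh data, a latency-critical request is issued, ideally answered by a nearby cache or compute node:

```python
# Hypothetical sketch of view-triggered streaming: regions of an immersive
# scene are fetched only when the viewer looks at them, so the fetch path
# must be very low latency -- hence the appeal of nearby caches.

class ViewDrivenStreamer:
    def __init__(self, fetch):
        self.fresh = set()   # regions of the scene with up-to-date data
        self.fetch = fetch   # callback standing in for a network request

    def on_gaze_change(self, visible_regions):
        for region in visible_regions - self.fresh:
            self.fetch(region)          # must complete within a frame or two
            self.fresh.add(region)
        # A real system would also expire regions as their data goes stale.

streamer = ViewDrivenStreamer(
    fetch=lambda r: print(f"low-latency request for '{r}' to a nearby node"))
streamer.on_gaze_change({"stage_left", "ceiling"})
streamer.on_gaze_change({"stage_left"})   # already fresh: no new request
```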
We can, however, make some estimates concerning bandwidth requirements. A 3D movie, as shown in a movie theater, requires between two and three times the amount of information storage that would be required for a conventional version of the movie at the same resolution.
An immersive movie or television show will require a far larger amount of storage. An educated guess is that the infrastructure for immersive content will require about one order of magnitude of improvement [meaning on the order of ten times the current level].
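As a back-of-envelope illustration of these multipliers (the baseline figures below are assumptions typical of the period; only the multipliers come from the estimates above):

```python
# Rough arithmetic behind the estimates above. Baselines are assumptions.

conventional_movie_gb = 4.0                        # assumed file size
stereo_3d_movie_gb = conventional_movie_gb * 2.5   # "two and three times"

current_broadband_mbps = 5.0                       # assumed current service
immersive_infra_mbps = current_broadband_mbps * 10 # "one order of magnitude"

print(f"theatrical-style 3D movie: ~{stereo_3d_movie_gb:.0f} GB "
      f"(vs. {conventional_movie_gb:.0f} GB conventional)")
print(f"immersive-ready home service: ~{immersive_infra_mbps:.0f} Mbps")
```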
Definition: The term “tele-immersion” generally refers to a hypothetical future media technology that would simulate the co-presence of distant people well enough for them to partially forget that a simulation is involved. Or, if that is too high a bar, the goal can be defined as a communication technology good enough to persuade people to forego air travel in many cases where it is considered essential today, such as to testify at a hearing. Yet another way to define tele-immersion is as the eventual overlap between the highest quality videoconference technology and Virtual Reality.
Companies like Cisco and HP use the term “telepresence” to describe the highest quality currently available video conferencing products. Tele-immersion can be thought of as the term for some future level of performance that transcends what we think of as today’s telepresence. Some of the currently available offerings are impressive, but they haven’t yet achieved the level of performance that is needed to effectively compete with air travel.
Cisco has pioneered the commercial provision of high-quality network services for business telepresence customers. The Cisco approach leverages infrastructure components sold by the company to sustain a more constrained and reliable network path than would be available over generic Internet services. Something similar will probably eventually take place in the consumer market.
Videoconference technology has a long history. The first demonstration of a long-distance videoconference meeting took place in 1927. It is sometimes thought of as a “cursed” technology, since, despite many attempts at product launches, it never seems to take off. Users always seem to be enthused for a few months, but then conclude that making either a voice-only call or a physical trip makes more sense than staging a simulated trip.
But it is often the case that a technology seems cursed for a long period of time, and then starts to work. Virtual-world services like Second Life seemed unlikely to work until one day they did.
It is worth distinguishing a persistent negative result of a business plan that attempts to make use of a demonstrated technology from a persistent negative result about whether a technology can be made to work at all. Video streaming works on the Open Internet, but charging for it has not worked. When a business plan fails repeatedly, the results should be taken to heart. Getting a technology to work is a different kind of game. All technologies fail to work at first. Persistence often pays off.
There is an active academic tele-immersion research community. It is concerned with several tracks of discovery. One concerns the subconscious cues that people pass between one another to facilitate trust. Researchers such as MIT’s Sandy Pentland and Stanford’s Jeremy Bailenson have identified a number of these cues. Some of these cues, such as subtle changes in the eyes and mouth, are important because it has been hypothesized that the reason simulated presence doesn’t yet compete with the real thing is that our current designs fail to faithfully transmit them.
Another track of tele-immersion research concerns the design of equipment that could support tele-immersion sessions. Some of the designs described above in regard to immersion might also be useful in tele-immersion.
I had the honor of leading the National Tele-immersion Initiative of Internet2 during my tenure there in the 1990s, and have since continued to conduct research in tele-immersion in both academic and corporate settings. In conducting this work I have become convinced that we are converging on cracking the problem of tele-immersion.
We are likely to understand much more about the world of subconscious cues that pass between people within the coming years. We are also learning to build new kinds of displays that will open these subconscious channels between distant people for the first time. My best guess is that tele-immersion ought to become commercially available within ten years.
It is worth pointing out the market forces outside of entertainment that might drive demand for tele-immersion, because these greatly increase the chances of tele-immersion being developed and adopted. The entertainment business could get a free ride on other markets in this case.
One primary benefit of tele-immersion will be to reduce the need for air travel. Air travel is a negative factor in climate change. It consumes fuels that should be conserved, and generates greenhouse gasses. If air travel is to scale with predicted demand, there will be land-use and airspace issues in many major cities. In the event of an outbreak of an airborne viral epidemic more serious than SARS, an alternative to air travel will be needed.
Another benefit of tele-immersion is a little subtler: the reduction of the coordination and travel-time costs associated with collaborative activities. The enhanced efficiency could accelerate global productivity.
There are also personal benefits. I am particularly interested in improving the lives of families dispersed by economic opportunity. Perhaps tele-immersion will deepen the degree to which people can stay in touch with aging parents in distant locations, for instance.
In a sense, the next idea I will present might be seen as stretching the definition of “professional content” so far that it is no longer applicable. I think, however, that I am merely describing the next natural stage in the evolution of electronic media.
There was a time, before movies were invented, when live stage shows offered the highest production values of any form of human expression. They were spectacles, inducing awe.
If canned content becomes a harder product to sell in the Internet era, a return of live performance in a new technological context might be the source of new kinds of successful business plans.
Let’s approach this idea first by thinking small. What if you could hire a live musician for a party, even if that musician was at a distance? The performance might feel “present” in your house because of the immersive, “holographic” projectors in your living room.
Imagine telepresent actors, orators, puppeteers, and dancers delivering real-time interactive shows that include special effects and production values that surpass those of today’s most expensive movies.
For instance, a puppeteer for a child’s birthday party might take children on a magical journey through a unique immersive fantasy world designed by the performer.
This design would provide performers with an offering that could be delivered without the burdens of travel, even as they grow older and have children. As a working musician, I can testify that travel is the hardest and most time-consuming part of putting on a performance.
Telepresent performance would also provide a value to customers that file sharing could not offer. It would be immune to the problems of online commerce that have shriveled the music labels.
Here, at last, is a scenario that might solve the problem of how musicians can earn a living online. Obviously, the idea of “tele-performance for hire” remains speculative at this time, but the technology appears to be moving in a direction that will make it possible.
Now let’s think big. Suppose big stars and big-budget virtual sets and big production values in every way were harnessed to create a simulated world that home participants could enter in large numbers. This would be something like a cross between Second Life and tele-immersion.
In many ways this sort of support for a mass fantasy is what digital technology seems to be converging on. It is the vision many of us had in mind decades ago, in much earlier phases of our adventures as technologists.
Today’s movie and television producers – together with ISPs – might both evolve to take on new roles providing the giant dream machine foreseen in a thousand science-fiction stories, and do so in a way that does not impoverish but instead enriches Canada’s creative people.
Two examples have been given of future media forms that would reverse negative trends in media business models: Immersion and tele-immersion. Both would require infrastructure investments, however.
The Canadian cyberpunk novelist William Gibson famously said, “The future is here. It’s just not evenly distributed yet.”
One could also say that the Internet infrastructure of the future is here already, but not generally distributed as yet. Academic researchers in immersive media, tele-immersion, and a variety of other new media forms make use of advanced research Internet infrastructures, for instance. This is why research in tele-immersion often takes place in the facilities of organizations like Internet2, which is a major provider of advanced research infrastructures.
All indications are that the existing conception of broadband service is inadequate. Cisco, for example, demands 15 Mbps for the most minimal version of its current generation of telepresence. This is about three times the level of service available on a typical top-of-the-line home broadband connection in the download direction, and about five times the best service in the upload direction.
The academic implementations of full-bore tele-immersion have sometimes required (PDF) about 100 times the capacity of a typical current “broadband” connection. Tele-immersion has been known to consume the entirety of an OC3 connection. Bandwidth has not been the only issue. It has also been critical to improve other aspects of network performance, such as latency and jitter.
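The multiples just cited can be checked with simple arithmetic. The home-broadband figures below are assumptions typical of the period in which this was written; the OC3 line rate is a standard figure:

```python
# Rough arithmetic behind the multiples cited above.

top_home_down_mbps = 5.0     # assumed top-of-the-line download, ca. 2008
top_home_up_mbps = 3.0       # assumed top-of-the-line upload
typical_home_mbps = 1.5      # assumed typical "broadband" connection
telepresence_mbps = 15.0     # Cisco's stated minimum, per the text
oc3_mbps = 155.52            # standard OC3 line rate

print(f"telepresence vs. top download: "
      f"{telepresence_mbps / top_home_down_mbps:.0f}x")
print(f"telepresence vs. top upload:   "
      f"{telepresence_mbps / top_home_up_mbps:.0f}x")
print(f"full tele-immersion (OC3) vs. typical broadband: "
      f"{oc3_mbps / typical_home_mbps:.0f}x")
# ~3x, ~5x, and ~100x respectively, matching the estimates in the text.
```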
The level of performance required by academic experiments might not be predictive of the final consumer level that will be required, however.
On the one hand, it might be too high, because we are learning more about compression and other techniques that can reduce the network performance requirements for tele-immersion. On the other hand, we are still in a phase in which we are continuing to increase the performance requirements of tele-immersion cameras and displays as we learn more about how people perceive one another.
Even if the totality of the vision of tele-immersion in the arts that I have presented does not come to be, an intermediate version of it might come about, and that would still probably require infrastructure improvements.
Two potential new-media technologies have been presented that could change the rules and restart the business of content production in the online space, albeit in highly nontraditional ways.
Both ideas can only be realized if substantial infrastructure investments are made.
The first example, immersive content, would require approximately one order of magnitude of improvement, while the second, tele-immersion, would require an improvement of about two orders of magnitude.
While the future of the Internet is uncertain, it’s possible to make educated guesses about some specific questions. Three distinct scenarios have been outlined in the previous section. Each one proposed a different, but plausible, fate for the status of traditional “professional broadcast-like” content in the online world.
There is no guarantee that the actual future, as it unfolds, will correspond with one of these three. Scenarios can be thought of as points on a continuum of possible futures. If one can identify plausible scenarios with very high degrees of distinction from one another, then it increases the likelihood that the actual future will fall in between the scenarios, instead of outside of them.
The trio of scenarios presented defines a sufficiently wide space that, this author argues, it circumscribes the most likely paths of Internet evolution in the future.
Such a scenario set can be used in various ways. One way is to examine how a particular policy philosophy is likely to play out in each scenario.
If a policy philosophy is likely to be beneficial in all the scenarios, then it is likely to be beneficial in the space between them as well. This is an indication that the risks of a policy are relatively low.
If, on the other hand, a policy philosophy is likely to be harmful in all the scenarios, then it is likely to be risky for the actual future as it unfolds.
That is the case with the present scenario set and the idea that ISPs should be treated as a source of funds to supplement content producers as a way of promoting Canadian content.
This policy would likely generate inverted, destructive, or chaotic effects in all three scenarios. Therefore it should be considered a highly risky idea.
Under two of the scenarios, the effects of the Subsidy Proposal are easy to assess:
The “missing trick” scenario is inherently mysterious. It would be a surprise, with unpredictable qualities.
It is reasonable to predict, however, that subsidies to content producers in anticipation of such a scenario would have a random effect, because there is no way to determine which content producers might harmonize well with an emergent “trick.”
Under the new-media-forms scenario, a core challenge will be infrastructure upgrades.
New kinds of service offerings, perhaps along the lines of tele-immersion, will appear that cannot be supported on the current Internet infrastructure.
In this scenario, it will be to content producers’ benefit, and to the benefit of all other players, to have infrastructure upgraded as quickly as possible (though we will have to wait a few more years for research results to inform us about the precise nature of the upgrades).
Draining capital from infrastructure providers will delay the day when content producers can adopt new business models.
The normative scenario requires a deeper analysis, however.
Consider the first scenario, “Open Culture Forever.” This is the scenario in which fees and advertising on the Open Internet never amount to anything beyond pilot projects, but the net does function as an effective promotional vehicle for a variety of gated or pseudo-gated specialty hardware that deliver commercially successful content.
Under this scenario, an intervention along the lines we are imagining would impede content providers from promoting their offerings.
For instance, let us suppose that in the future there is a popular gated hardware device for viewing video content. It might be a descendant of today’s video iPods that provides a large-screen experience from a compact, portable piece of hardware. For instance, perhaps a viewer wears special video eyeglasses, or a video-displaying clip over existing glasses. Another possibility is that a large display might be unrolled or unfolded from the pocketable, compact form it takes on when it is not in use. Such devices already exist, so it is no great extrapolation to imagine cost-reduced versions becoming popular. [The author’s company, VPL Research, Inc., sold the first general purpose head mounted display products, in the 1980s.]
Under this scenario, a content business model associated with these devices might take the following form. Video content is available on services like MySpace and shadier file sharing services, but this content is usually fragmented, low resolution, or otherwise not optimal. There is also a vast amount of user-created fan expression, which consists of parodies, mash-ups, and annotated versions of the official content.
No direct money is made from any of the flow of content on the open net at all, from either advertising or fees. It is purely promotional.
However, there is an easy way to tune in to content through the eyewear or pocketable hardware, analogous to loading music on an iPod. A clever technical person can contravene the system and get free content, but the convenience factor is sufficient to persuade most people to pay to have paid content loaded on their favorite hardware device.
As it happens, the infrastructure of the Internet as it now exists, including that portion which is maintained by ISPs, is barely adequate for the transmission of video content. It is possible to stream some good quality video content to some customers some of the time, and poor quality video to most customers most of the time. But it is not possible to stream high quality video, randomly chosen from an arbitrarily large catalog, to all the customers all of the time.
This is why some ISPs have found that the heavy flow of video content by a portion of their customer base is choking off their ability to serve all their customers well all the time. There are only two ways out of such a dilemma. Either disfavor certain flows of information on the Internet, which in practice amounts mostly to video content, or invest heavily to upgrade the infrastructure.
But consider this dilemma from the point of view of a content producer under the “Open Culture Forever” scenario.
On the one hand, the producer would not want the most desirable hardware devices, such as the video eyewear imagined above, to be able to easily gather all possible content, including the most premium versions of content, from the Open Internet. The danger of that happening is slight, because hardware designers can create a more desirable and profitable product by coupling hardware to a gated delivery system, as is done with the iPod and the other examples given earlier.
On the other hand, the flow of either pirated or allowed examples of the producer’s video on the Open Internet is the most powerful promotional resource available to the producer. It is the free online versions, often reduced in either resolution or duration, that might be seen as the result of search, or a social network interaction, or the viewing of fan-based derivative productions. If the flow of the free version of the content is reduced, then the paid version becomes harder to promote, and therefore becomes less valuable.
This principle has been tested and borne out many times. Authors have given away online versions of their books to promote the versions on paper, and so on.
It has also become commonplace, though not ubiquitous, for movie producers to allow clips of current releases to circulate, and to encourage derivative fan productions. The Web sites associated with TV networks increasingly post whole TV shows in order to at least lure some of the eyeballs that would otherwise be lost to P2P file sharing sites.
This author does not consider this scenario to be nirvana for content producers for reasons stated earlier, but it has proven viable, and is probably sought by more stakeholders than other scenarios at this time.
A tax or fee regime would create incentives for ISPs to not invest in new infrastructure. Therefore, the only option available to them would be to adopt policies that disfavor certain flows of bits.
If ISPs are disfavoring the flow of video, it will become harder to promote video content on those channels that remain, or come into existence, in which video is commercially valuable.
The proposal would be a tax on promotion, not a tax on sales or consumption. It would reduce the commercial viability of content that is subject to the promotional tax.
Because of the global nature of the Internet as it is today, the effect of the Proposal would not be as simple as it would be if physical goods were being taxed in a given locality. But the effects could still be quite dramatic.
If the Canadian home market is one in which the promotion of content is taxed, then content from foreign markets, where there is no such tax, will be able to evolve and improve in a less expensive manner. While any particular promotion of content would be similarly disfavored, whether that content was Canadian or not, Canadian content careers would be more disfavored than foreign content production careers.
An American producer, for instance, would be able to build up a following in America before taking their content global. Once success on a global level has been achieved, whether in capitalization, name recognition, or skill, the increased difficulty of promotion in the Canadian market will be a lessened burden.
A Canadian producer, on the other hand, would find the cheapest Internet promotion to be available in foreign markets, where the distribution of promotional materials is untaxed. This suggests some interesting sub-scenarios.
For instance, a Canadian producer who is descended from immigrants from a certain part of the world might find it cheaper to promote their wares in that part of the world than in Canada.
A tax on the promotion of content within Canada would have the effect of diluting a sense of Canadian identity in the content that becomes successful, whether it originates in Canada or not.
This is one way that the Internet changes things. In the past, a subsidy to a Canadian content producer could be expected, in a common-sense way, to promote Canadian content.
In the Internet era, a similar subsidy system can create an incentive for a Canadian content producer to develop an early career in serving foreign markets before being able to engage in the more expensive promotion needed to engage the home market. Canadian identity is diluted not because of the incentive structure of any particular content production or distribution deal, but because of the way that the careers of content promoters are swayed in their early years.
For this reason, there is little hope in pulling money out of promotion and putting it into production, if the goal is to encourage Canadian identity. Of course, that policy would benefit certain recipients, but it would discourage emerging content producers, for whom promotion is a major challenge.
The purpose of the proposed subsidies would be to support future generations of content producers, but unfortunately those producers with established careers would be the primary beneficiaries, to the detriment of the next generations.
Using a truncated version of the scenario method, we have explored three different future paths for the evolution of the role of professional- or broadcast-like content on the Internet.
The Subsidy Proposal would ultimately be likely to increase risks for content producers under all three scenarios.
It is instead time to think now about how the next wave of Internet infrastructure will be financed.
The problem motivating the Subsidy Proposal is real, but the Subsidy Proposal would create policy on the basis of one frame in a sequence that comprises a moving image. In doing so, it might have the effect of holding in place precisely the circumstances it is meant to correct.
The imposition of financial burdens on ISPs might not only delay the arrival of an intrinsically better era for content providers, but would in the meantime tax the promotional use of the Internet, which is currently the main benefit they can derive from it.
The Internet is still evolving. It is possible, even likely, that future levels of technological advancement and service provision will naturally undo disadvantages that have beset some content producers at this time.
The path to a new dynamic in the online world is innovation. The most publicly celebrated form of innovation is the Silicon Valley Internet-service startup attempting to launch at just the right moment to catch the “network-effect rocket ship.” When such startups succeed, they rapidly gain tremendous popularity, and even a semi-monopolistic niche, at low cost. This is the primary type of online innovation taking place in the context of the existing standard of Internet service. Unfortunately, the latest crop of these niches that relate to “content” does not support profitable business plans.
But in my view, the most important forms of innovation are in core technologies and infrastructure. If, for instance, effective Internet service to the home were improved tenfold at about the same time that desirable “holographic” displays became available, a new kind of content business would rapidly emerge. Immersive content might have tremendous appeal, and is likely to be remunerative because the new types of displays would naturally function in the way “commercializing hardware” like the iPod already does – as arenas in which users are inclined to pay for content.
The Commission should favor policies that will allow the Internet to grow as quickly as possible into a new state that inherently creates a more favorable environment for content providers. To impose a differential drag on that process in Canada can only hurt Canadian culture in the long term, even if it might seem to bolster some individual content providers in the short term.