FCC to World: How Should We Do This Broadband Plan?


    Last week, the FCC held a number of workshops to start discussing how to create and implement the National Broadband Plan called for in the Stimulus Bill. The workshops brought together academics, industry representatives, and public interest groups, and gave them an opportunity to highlight issues that the FCC should consider when putting together the plan.

    Benchmarks – are we there yet?

    The first workshop was titled “Benchmarks” and was designed to explore how we will know if the National Broadband Plan is actually working. All of the panelists agreed that finding and implementing effective benchmarks would be hard because there are so many ways to measure broadband (availability, performance, price) and it is unclear how much is “enough.” Some panelists, such as AT&T’s Richard Clarke, seemed satisfied to leave it at that. Others, however, clearly viewed the complexity of the issue as a challenge and tried to suggest ways to overcome it.

    Gregory Rosston of Stanford suggested that, instead of trying to quantify the benefit of broadband in and of itself, it might be better to view broadband as a “general purpose technology,” like electricity. The true value of a general-purpose technology comes from the fact that it drives innovation, not from the technology itself. If this is the case, it may be that the best way to measure broadband goals is to look at how many other innovations are built upon it.

    PK’s own Harold Feld had a slightly different take. Harold pointed out that one thing that the entire panel could agree on was that no one was sure how to measure a national broadband plan. As a result, the FCC should be prepared to revise benchmarks as the plan progresses. However, this need to revise benchmarks as we better understand how to measure the effectiveness of the plan should not be confused with a desire to revise the benchmarks merely to reflect what has already been achieved – a “declare victory and go home” strategy.

    Going forward, Harold felt it was important for the FCC to accumulate as much real-time data as possible – probably more than it thinks it needs. By making this data available to the public, the FCC would let researchers test different ways of measuring broadband effectiveness and penetration against real data. Hopefully the results of those tests will help to revise the benchmarks.

    Catherine Sandoval, Assistant Professor of Law at Santa Clara University, put forward the most concrete suggestions for how to measure broadband effectiveness. Prof. Sandoval pointed out that the traditional use of speed as a way to evaluate broadband is flawed. The more relevant inquiry is not “is my internet faster than your internet?” but rather “would I trade my internet for your internet?” In other words, look to the substitutability of services to decide whether they are comparable. It is unlikely that anyone would replace a wired internet connection with a wireless one, even if the two were nominally the same speed, if the wireless connection came with bandwidth caps and limits on what applications and hardware could be used.
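
    To make the substitutability test concrete, here is a minimal sketch – purely illustrative, not anything proposed at the workshop – of how it differs from a raw speed comparison. The attributes, thresholds, and numbers below are all hypothetical; the point is only that two offerings with the same advertised speed need not be substitutes for a given household.

```python
# Toy illustration of a substitutability comparison (hypothetical attributes).
# Two offerings are substitutes for a household only if either one would
# satisfy that household's actual needs -- matching speeds alone are not enough.
from dataclasses import dataclass

@dataclass
class Offering:
    name: str
    mbps: float              # advertised downstream speed
    monthly_cap_gb: float    # use float("inf") for an uncapped plan
    allows_any_device: bool  # no restrictions on applications or hardware

@dataclass
class Household:
    min_mbps: float
    monthly_usage_gb: float
    needs_any_device: bool

def meets_needs(o: Offering, h: Household) -> bool:
    return (o.mbps >= h.min_mbps
            and o.monthly_cap_gb >= h.monthly_usage_gb
            and (o.allows_any_device or not h.needs_any_device))

def substitutable(a: Offering, b: Offering, h: Household) -> bool:
    # "Would I trade my internet for your internet?" -- both must work for me.
    return meets_needs(a, h) and meets_needs(b, h)

wired = Offering("wired", mbps=10, monthly_cap_gb=float("inf"), allows_any_device=True)
wireless = Offering("wireless", mbps=10, monthly_cap_gb=5, allows_any_device=False)
family = Household(min_mbps=5, monthly_usage_gb=60, needs_any_device=True)

# Same nominal speed, but the capped, locked-down plan is not a substitute.
print(substitutable(wired, wireless, family))  # False
```

    A real version of this comparison would fold in latency, usage restrictions, reliability, and price; the sketch just shows why equal speeds do not settle the question.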

    Finally, Jon Eisenberg of the National Academies highlighted one of the central conflicts in trying to measure the adequacy of broadband. He shared the two logical, yet conflicting, proposed definitions of broadband from a recent report published by his organization. According to the definitions, broadband is adequate either when local broadband access is no longer the limiting factor in developing new applications or when local broadband is such that it motivates deployment of new applications.

    The Future of the Internet – how are we actually going to do this?

    The second workshop was on the future of internet architecture. Unlike the other two workshops, every panelist’s name was prefaced with “Dr.,” and all of them were concerned with how the broadband network of the future is actually going to be structured.

    Surprisingly, the discussion actually started with a comment on benchmarks. Dr. David D. Clark of MIT suggested that uptake, not buildout, is the most relevant metric for any broadband plan. While it is important in his view to get the network out to the more rural parts of the country, a more interesting question is why people who already have access to the internet are not using it.

    After bridging the gap from the prior day’s workshop, Dr. Clark quickly moved back to the topic at hand. He posed an interesting question that was repeated in a number of different ways throughout the workshop: is video the end? In other words, is there a future application that will require another bandwidth increase over video similar to the bandwidth increase video required over music?

    From that big-picture question, the workshop quickly moved into more technical architectural problems afflicting the internet. As Dr. Van Jacobson of the Palo Alto Research Center noted, today’s internet was designed to share resources, not to share data. This distinction is at the heart of a number of problems manifesting themselves today – especially security problems. A number of panelists pointed to shortcomings in security infrastructure as the single largest challenge to the future of internet architecture.

    Perhaps the most radical idea was put forward by Dr. Scott Shenker of UC Berkeley. Dr. Shenker described the impact that the migration from industrial routers controlled by proprietary software toward “cheap” routers running free, open source software could have on the future internet. In his view, this development could rapidly increase the rate of architectural innovation going forward. Dr. Taieb Znati of the National Science Foundation, among others, agreed. Dr. Znati characterized the history of the internet up to today as a search for a single protocol that would be all things to all people. Going forward, cheap hardware and free software could allow large-scale parallel networks to develop, each with protocols designed for specific applications. These networks could be used to test innovations at scales approximating today’s internet and fuel innovation.

    The other thing that the entire panel could agree on was that there was a role for the Federal Government to help fund long-term basic science research. Dr. Robert Atkinson of the Information Technology and Innovation Foundation stressed the importance of generic technology research. Dr. Richard Green of CableLabs, among others, drew a distinction between small, incremental innovations that the current VC model is good at supporting and the kind of long-term generic research that creates leaps forward, which it is not.

    One of the final questions asked of the panelists was how their prior predictions of the future had fared. While the answers varied, the unifying element was that it was not necessarily network development that darkened their crystal balls. Instead, it was a seemingly peripheral technology that delayed the predicted future. As an example, it was suggested that consumer adoption of HDTV was held up not because the technology to send and receive it did not exist, but because no one had figured out how to economically manufacture televisions large enough for consumers to appreciate the difference in picture quality.

    While I cannot say whether consumer HDTV rollout was really slowed by the lack of HDTV monitors, the example was an important reminder to keep all of the pieces of a prediction in mind. Just because one part is capable of supporting an innovation does not mean that the innovation is ready for widespread adoption. I know that I have a fast enough internet connection to support HD video, and a TV that can display it, but it will not be until the Linux version of Flash can offload video decoding onto my video card that I will truly bring web video to my TV.

    Internet TV – how, and when?

    The last workshop focused on the coming storm of internet TV. There has been a great deal of discussion about what impact full-quality HD-over-internet video will have on both the internet and television industries. Many people think that over-the-top video providers will emerge to compete with cable companies. The hope is that these over-the-top providers will be able to offer a video service that competes with cable but is distributed over the internet. This would allow them to avoid the cost of building their own physical network, which has traditionally limited local cable competition.

    Dr. Green of CableLabs was back for this panel. He described the current situation as “wonderful,” and insisted that cable operators would not move to disadvantage customers because competition in the broadband market was effectively keeping cable providers in check.

    The two panelists who were directly involved in new internet television distribution companies, Giles BianRosa of Vuze and Phil Wiser of Sezmi, disagreed. Both insisted that it was critical that the FCC support emerging disruptive technologies in order to encourage innovation and expand the reach of broadband.

    Mr. BianRosa described his concerns about the ability of incumbent video providers to slow innovation with an analogy to electric cars. A world in which incumbent video providers control the development of internet video distribution would be like allowing the oil companies to control the development of electric cars. Just as electric car drivers do not want to be forced to pay $200 a month to Chevron when they no longer need gas, internet video customers do not want to be forced to pay a cable company when they no longer need its video services.

    In order to avoid this type of future, PK’s Gigi Sohn put forth a number of suggestions on how best to protect internet-based competitors to traditional cable and satellite companies. She called for a “Cable Carterfone” that would allow anyone to create cable set-top boxes that could integrate content from a number of sources, including the internet. Additionally, she insisted that the FCC should force incumbents to separate out billing for internet and cable television in order to give customers a chance to truly compare different options. Ultimately, the more steps the FCC takes to encourage the growth of internet-based video offerings, the more competition incumbents will feel. This should force innovation on both sides and result in better options for all consumers.

    And so…?

    These workshops are clearly a first step in the FCC’s process of developing the National Broadband Plan. While they opened with short prepared statements, panelists quickly started answering questions put forward by FCC staffers, often leading to open-ended debate. Over the course of the workshops, two ideas surfaced that were new to me. The first was Prof. Sandoval’s insistence on substitutability as the most important way to compare different broadband offerings. Substitutability seemed like the best way to balance the innumerable factors that consumers consider when purchasing broadband without resorting to a formal index that can quickly become outdated.

    The second was Dr. Shenker’s description of the shift from expensive, proprietary routers running the backbone of the internet toward commodity routers running open source software. According to him, this shift will greatly reduce the costs associated with reorganizing the internet. Instead of having to physically go out and install new (expensive) routers, changes can be implemented merely by updating software. Dr. Shenker compared this change to the shift from mainframes to PCs in personal computing. If he is even a little bit right, everything we think we know about how the internet works could be in for quite a change.