
The Economics of the Wireless Last Mile

Arnold Kling*

Introduction
 
"The wireless approach may pose a regulatory dilemma for Congress and the FCC. The terms of existing spectrum licenses are not flexible enough to accommodate the wireless last mile."
Imagine what the Internet might do for you if high-speed access were available anywhere and everywhere. You could access the Internet in all of the rooms of your house, or in your yard, or in your car, or on the beach, as easily as listening to the radio today. Imagine that this connectivity is at broadband speed, meaning equal to or better than the download speeds of cable or digital subscriber lines (DSL). Now, imagine something else—this pervasive, high-speed Internet access available for a small monthly fee—or even for free.

This is the vision of the Wireless Last Mile. Rather than trying to expand broadband Internet access by bringing fiber or cable to homes, the partisans of the Wireless Last Mile argue that by changing the way that we use radio spectrum, we can achieve dramatic improvements in Internet access, at low cost. The potential for the Wireless Last Mile reflects the progressive improvement in computing power, which is changing the terms of the trade-off between different ways of using the radio spectrum for communication.

The architecture for the Wireless Last Mile raises issues of public policy. Current regulation of the radio spectrum is not compatible with the proposed new architecture. How should these regulatory barriers be addressed?

Another issue concerns the need, if any, for a government subsidy for the Wireless Last Mile. For years, some pundits have argued that widespread Internet access at broadband speed is valuable enough to warrant government resources to be spent to achieve such a goal. Is the Wireless Last Mile a public good in the economic sense of the term, and therefore deserving of a subsidy?

Engineering Background
 

The Last Mile Problem: 5 Solutions

The goal of widespread access to the Internet at broadband speed has attracted the interest of many public officials. For example, in the debate over regulation of local phone companies, both the pro-regulatory and anti-regulatory forces claim that their respective approaches would provide a better incentive to expand the provision of broadband services.

According to recent survey data, about four-fifths of the more than fifty million households with Internet access continue to use dial-up connections over modems that offer much slower speeds than broadband. The same data show that about two-thirds of broadband connections are through cable modems, with the remainder over phone lines using DSL.

The Internet "backbone" (the main communication links between hubs on the Internet) already has sufficient capacity or could readily have its capacity increased to handle more broadband users. The challenge is in connecting the hubs to individual households. Getting these endpoints connected to broadband is known as the "last mile" problem.

For potential solutions to the "last mile" problem, there are five contenders. DSL and cable modems are the two leaders thus far. Another option, which was popular among pundits five years ago but which is less in favor today, is bringing "fiber to the curb." High-speed fiber-optic communication lines make up the Internet backbone, and this network could be extended to consumers. As with cable television, this would require a massive effort to bring new underground cables into every household.

Because burying yet another set of cables underground is such an expensive and daunting prospect, two other options are under consideration. One of the dark horses is using the electrical wiring grid to carry communication signals. There have been a few small-scale demonstrations showing that this is feasible, but the technology has yet to take off commercially.

The final contender in the "last mile" derby is wireless Internet access. If it were feasible, it could save the nation's streets and yards from another enormous excavation project. Furthermore, wireless access is the kind of access that people want. As I found with my DSL line at home, having a broadband Internet connection that only works in one room is frustrating, particularly when more than one person needs to use the Internet at the same time. People want broadband wherever they may be, not just in the room that has the broadband modem.

For more information, including some additional last-mile alternatives, see the Scientific American articles of October 1999 and July 2002.

Today's radios assume that signals arrive on separate frequencies. If the radio station at 99.3 FM suddenly started broadcasting at 99.5 FM, where another station is already broadcasting, our radios would not be able to sort out the two stations. We have come to call this "interference."

Contemporary engineers say that "interference" is not given by the laws of physics. Rather, it is a characteristic of an architecture in which radios are relatively dumb. Smarter radios, they suggest, would not have to "tune" to one frequency at a time. Instead, they could interpret multiple signals coming in over multiple frequencies. In other words, a single frequency can be used by more than one transmitter, and a single message might be broken up and sent over multiple frequencies.

What the engineers propose is substituting smart radios and a less restrictive allocation of frequencies for the current architecture of dumb radios and limitations on access to frequencies. Whether this substitution makes sense depends on economics.

To understand the economics of this new vision for radio spectrum, it may help to compare it with the relationship between the Internet and traditional telephony. A traditional phone network tends to use switches efficiently and lines inefficiently: by keeping the circuit open during a pause in the conversation, it reduces switching costs but wastes the line. The Internet does the opposite. Because it breaks each conversation into packets, many conversations can share the same line, but every packet requires computational work to route. The Internet is thus relatively more efficient in its use of lines, and less efficient in its use of switches.
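A back-of-the-envelope calculation may make the trade-off concrete. In the Python sketch below, the call length and talk fraction are purely illustrative assumptions: a circuit-switched call holds its line even during pauses, while packet switching fills those pauses with other conversations' traffic, at the cost of routing work on every packet.

```python
# Back-of-the-envelope comparison of circuit vs. packet switching.
# All numbers are illustrative assumptions, not measurements.

CALL_MINUTES = 10        # length of one phone conversation
TALK_FRACTION = 0.4      # fraction of the call that is actual speech;
                         # the rest is pauses and listening

# Circuit switching: the line is reserved for the entire call,
# so the pauses are wasted line time.
circuit_utilization = TALK_FRACTION

# Packet switching: only the talk spurts travel the line, so the
# pauses can carry other conversations' packets. The cost moves to
# the switches: every packet must be routed computationally.
calls_per_line = CALL_MINUTES / (CALL_MINUTES * TALK_FRACTION)

print(f"circuit-switched line utilization: {circuit_utilization:.0%}")  # 40%
print(f"packet-switched calls per line:    {calls_per_line:.1f}")       # 2.5
```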

Shared spectrum represents a similar revolution in wireless communications. The traditional model was to reserve slots of spectrum for particular users. For example, the FM frequency of 99.3 would be reserved in a particular area for one radio station. An alternative model allows spectrum to be shared among many users, with something like the Internet's packet-based model. Radio signals would be routed to their destination by using an addressing system, rather than by adhering to a specific frequency. A given frequency could be used to carry many different signals from many different types of users, rather than being limited to a single broadcaster.

Under the traditional model, sharing spectrum is impossible because of "interference." However, the problem is not that signals literally block one another. Rather, receivers cannot interpret multiple signals without intelligence built in. Interference is in the ear of the receiver. With intelligent receivers, spectrum can be shared, as long as the devices adhere to certain standards and protocols.
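As a toy illustration of this point (the packet fields and addresses below are hypothetical, not any real radio protocol), consider several transmitters placing addressed packets on one shared channel, with each receiver keeping only the traffic meant for it:

```python
# Toy model of "interference is in the ear of the receiver."
# Several transmitters share one channel; a smart receiver sorts
# the traffic by destination address, not by frequency.

from dataclasses import dataclass

@dataclass
class Packet:
    dest: str      # address of the intended receiver
    payload: str   # fragment of a phone call, TV show, email, ...

# One shared channel carrying several kinds of content at once.
channel = [
    Packet("alice-phone", "voice fragment 1"),
    Packet("bob-tv", "video fragment 7"),
    Packet("alice-phone", "voice fragment 2"),
    Packet("carol-pc", "email chunk 3"),
]

def receive(channel, my_address):
    """A smart receiver interprets everything on the channel and keeps
    only what is addressed to it; a dumb radio can select only by
    frequency, so overlapping signals look like interference."""
    return [p.payload for p in channel if p.dest == my_address]

print(receive(channel, "alice-phone"))
# -> ['voice fragment 1', 'voice fragment 2']
```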

The shared-spectrum model is the basis for a number of wireless networks used by early adopters. In my home, I am able to access my DSL from any room, including my porch, using a wireless network. Starbucks offers wireless Internet access in many of its coffee shops. The University of Maryland offers wireless access in some of its common areas, and I predict that within two or three years it will provide free wireless access to the entire surrounding community, in order to be able to provide high-speed Internet connections to the many students who live in nearby off-campus housing.

Shared-spectrum advocates, such as David P. Reed, believe that these small-scale examples could be expanded to make wireless Internet access universally available. He cites the work of Tim Shepard, who argues that one can design a wireless network in which the capacity of the network actually increases with the number of devices communicating over it. As I understand it, devices would act as relay stations as well as senders and receivers.
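Here is a rough sketch of the relay intuition. It is my own simplified scaling model, not Shepard's actual analysis, and every number in it is made up. The point is that each added device is also an added relay, so short, low-power hops can be reused across space:

```python
# Toy scaling model of a relay-based wireless network (a simplified
# illustration, not Shepard's analysis; all numbers are made up).
# Devices are spread over a fixed area; each transmits just far
# enough to reach a nearby neighbor, so widely separated devices can
# reuse the same frequency at the same time ("spatial reuse").

import math

AREA_KM2 = 100.0    # coverage area (illustrative)
LINK_MBPS = 10.0    # data rate of one short hop (illustrative)

def end_to_end_capacity(n_devices):
    spacing = math.sqrt(AREA_KM2 / n_devices)      # neighbor distance
    concurrent_links = n_devices                   # one short hop each,
                                                   # reused across space
    hops_to_cross = math.sqrt(AREA_KM2) / spacing  # relays per message
    # Total carrying capacity, discounted by the relaying overhead:
    return concurrent_links * LINK_MBPS / hops_to_cross

for n in (10, 100, 1000):
    print(f"{n:5d} devices -> {end_to_end_capacity(n):8.1f} Mbps")
# Capacity rises (here, with the square root of the device count)
# because each new device is also a new relay.
```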

The traditional spectrum-allocation model is inefficient in its use of spectrum. However, it saves on the computational capacity required by devices that use the airwaves. The shared-spectrum model allows a given amount of spectrum to carry more information, but it requires devices with computational intelligence. Just as the Internet uses communication lines more efficiently at the cost of requiring smart computers in place of less-smart phones, the Wireless Last Mile uses spectrum more efficiently at the cost of requiring smarter devices in place of less-smart radios, televisions, or cell phones.

Economists Hal R. Varian and Jeffrey K. MacKie-Mason wrote a seminal paper that explained how the Internet came to be favored by Moore's Law. In 1965, Gordon Moore, who later co-founded Intel, observed that the power of computer chips had just about doubled every year. What came to be known as Moore's Law is the striking, and so far accurate, conjecture that the power of computer chips would continue to double about every eighteen months. Thanks to Moore's Law, the Internet has become progressively more economical over time in comparison with the traditional telephone network, as the cost of intelligent devices to route and interpret Internet packets has fallen faster than the cost of lines.
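The arithmetic behind that claim compounds quickly. The sketch below simply projects the cost of a fixed amount of computing power under the assumed eighteen-month doubling:

```python
# Moore's Law arithmetic: if chip power doubles every 18 months, the
# cost of a fixed amount of computation halves on the same schedule.

DOUBLING_MONTHS = 18

def relative_compute_cost(years):
    """Cost of a fixed computation, relative to today (= 1.0)."""
    doublings = years * 12 / DOUBLING_MONTHS
    return 0.5 ** doublings

for years in (0, 3, 6, 9, 12):
    print(f"after {years:2d} years: {relative_compute_cost(years):.3f}")
# after  0 years: 1.000
# after  3 years: 0.250
# after  6 years: 0.062
# after  9 years: 0.016
# after 12 years: 0.004
```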

Similarly, the cost of providing devices with the intelligence to communicate in a shared-spectrum setting has plummeted. In fact, Pat Gelsinger, Intel's Chief Technology Officer, sees radio embedded in a chip as a technology that is within reach. That is, a computer chip can have embedded in it hardware that allows it to send and receive radio signals.

Imagine being able to integrate all the features of wide-area, local-area, and personal-area networks into a single piece of silicon. What if we were able to add transmit and receive, intelligent roaming, network optimization, and permanent IP connectivity capabilities? And what if we were able to combine data, voice, and video services on that same piece of silicon? Pretty cool, right?

Let's take it to the next level. If we can shrink this technology down to where it sits on the corner of a die, then we'll have radio on chip (RoC). Every processor will have integrated multiradio capabilities. The result will be ubiquitous radios that are always connected and seamlessly networked across offices, buildings, and even cities.

The radio-on-chip could make it possible to inexpensively embed communication capabilities into just about any ordinary product, including clothing, toys, appliances, medical devices, and dangerous substances. One can conceive of using these capabilities for anything from tracking the movement of explosives to enabling remote medical treatment to being able to find your eyeglasses when you cannot remember where you put them down.

Outdated Regulations

The existing paradigm for regulating radio spectrum would tend to frustrate the development of the shared-spectrum wireless solution for the last mile. The problem is that spectrum licenses are not flexible enough to allow the license owners to dedicate spectrum to alternative uses. The owner of a television (or radio) license may use that license only for sending television (or radio) signals over the licensed frequency.

Existing license restrictions have the effect of fencing off specific frequencies for specific types of content. For example, the television frequencies are reserved for television content, meaning that they cannot be used for phone calls. These restrictions conflict with the elegance of the packet-based approach, in which any content can use any frequency. With shared spectrum, the interpretation of the content takes place in the receiver. A given collection of frequencies might be carrying phone calls, television signals, and email all at once, to be sorted out at the point of delivery.

The current owners of spectrum licenses could not open their frequencies to shared-spectrum uses, even if they wanted to do so. The terms of the license for an AM radio station, for example, say that the frequency must be used for AM radio broadcasting.

 

Coase and the FCC

Nobel laureate Ronald Coase was the first to argue that market mechanisms could be used to allocate spectrum. The problem of "interference" is much like the externalities addressed in Coase's famous paper, "The Problem of Social Cost." In that paper, Coase argues that externalities can be dealt with in the private sector if property rights are properly defined. This came to be known as the Coase Theorem. Moreover, as economist Thomas W. Hazlett points out, Coase was analyzing FCC issues, including spectrum allocation, at the time that he was working on what became the Coase Theorem.

Reed and other engineers who advocate shared-spectrum wireless tend to support the idea of a spectrum "commons." In that model, existing spectrum licenses would be canceled. Instead, the relevant spectrum would be in the public domain, to be used by any device that adheres to the standards and protocols necessary to enable spectrum sharing to work.

The alternative point of view is represented by Thomas W. Hazlett. He argues that the property rights of the owners of spectrum licenses ought to be strengthened rather than weakened. He makes a case first argued by Ronald Coase, whose famous theorem can be applied to spectrum allocation. Hazlett says that if license owners had complete freedom to choose the uses of their spectrum, then profit-maximizing behavior would lead to more spectrum allocated to shared-spectrum packet-based communications.

For example, television stations own spectrum licenses that are declining in value. Given the widespread adoption of cable, broadcasting over the airwaves is an expensive proposition that on the margin serves very few viewers. In fact, as Hazlett points out, the FCC allocates 67 channels for broadcast television, even though only a handful are used in most areas. The owner of a television station is not free to re-sell the spectrum license to someone who might have a better use for it.

If they were given the freedom to sell their licenses to other users, some television stations might find willing buyers among wireless Internet access providers. This would shift spectrum away from an uneconomic use and toward a more productive one.

The traditional argument for licensing spectrum in segregated blocks is to protect license-owners from interference. However, this would appear to be a case in which the Coase theorem applies. As long as property rights are clear, a license-owner who wants to do traditional broadcasting could bargain with those who want to do spectrum sharing.

The "spectrum commons" approach amounts to confiscating spectrum licenses from owners where the government deems the original purpose of the license to be outmoded. The Hazlett-Coase approach allows spectrum owners themselves to make the decision of whether or not to shift the use of their spectrum from its original purpose to shared-spectrum packet communications.

Reed and Hazlett appear to be talking past one another. Hazlett argues that if spectrum were treated as a "commons" and made available for free, then people would try to use it too intensively, creating congestion. Reed argues that Hazlett does not understand the technology, which in Reed's view means that spectrum need not be a scarce resource. In Reed's view of the shared spectrum model, users supply the physical network infrastructure by employing devices which relay signals to other users. One can imagine this as a cell phone network where the connectivity is supplied by the phones on the network, without requiring cell towers.

If Reed is correct, then it may be that in a competitive market the price for using spectrum ultimately would fall to zero. That is, if adding more devices that adhere to proper protocols tends to increase the capacity of a given amount of spectrum, and congestion does not arise, then competition among spectrum license owners would tend to drive the price of spectrum use to zero. In that case, implementing Hazlett's strong property rights would lead to a result that from a consumer's standpoint would be indistinguishable from the "commons" that Reed advocates.

The current regulatory regime clearly is biased against the shared spectrum solution. We will return to the question of how best to change that regime in the conclusion.

Network Effects and Switching Costs

If the FCC changed its regulations today in a way that encouraged shared spectrum, tomorrow we would still wake up to a world in which consumers have cell phones, radios, and televisions that rely on today's signal-segregating regulatory scheme. With the exception of some recent-vintage laptop computers, none of the electronic equipment is built for the shared-spectrum model of communication over the airwaves. Until people switch to newer devices, they will not want to see frequencies allocated away from their traditional uses. For example, if AM radio were changed to a shared spectrum solution, nobody's car radio would work (until people get new cars or new radios). On the other hand, if no changes are made to the way that spectrum is allocated, it may not be possible to provide the Wireless Last Mile, and people then would have no incentive to buy new devices. Thus, the Wireless Last Mile must overcome network effects and switching costs.

A network effect occurs when my benefit from joining a network depends on how many others have joined the network. If the Wireless Last Mile is going to involve relay stations embedded in consumer devices, as required by some business models, then clearly there are going to be network effects. This could retard adoption of the wireless approach.

When radio-on-chip becomes available, adoption may be slow, even if it adds little to the marginal cost of new devices. Consumers may be perfectly content to stick with their existing televisions, cell phones, and so forth. The need to obtain new equipment gives rise to a cost of switching to the newer technology, and that cost must be weighed against the benefits. Even if consumers would agree that, starting from scratch, they would choose the newer devices over traditional ones, the fact that they already own older devices makes switching to shared-spectrum wireless costly.
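A toy model, with entirely made-up parameters, shows how these two forces interact. Each consumer adopts only if the benefit, which grows with the share of other adopters, exceeds a fixed switching cost, so adoption can stall below a critical mass:

```python
# Toy critical-mass model of adoption; all parameters are made up.
# A consumer's benefit from the wireless network grows with the share
# of others who have adopted, and switching requires a one-time cost
# of replacing old devices.

FULL_NETWORK_BENEFIT = 100.0   # value to a user if everyone adopts
SWITCHING_COST = 30.0          # cost of replacing older devices

def equilibrium_share(seed_share, periods=50):
    """Iterate: each period, consumers switch if the network benefit
    at the current adoption share exceeds the switching cost. With
    identical consumers, adoption either tips to 100 percent or
    stalls at the seed level; the critical mass here is 30/100 = 0.3."""
    share = seed_share
    for _ in range(periods):
        benefit = FULL_NETWORK_BENEFIT * share
        share = 1.0 if benefit > SWITCHING_COST else seed_share
    return share

print(equilibrium_share(0.1))   # stalls at 0.1 (benefit 10 < cost 30)
print(equilibrium_share(0.5))   # tips to 1.0   (benefit 50 > cost 30)
```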

Of course, new technology always must overcome network effects and switching costs. When audio CDs were introduced, people who already owned record players and tape players were reluctant to adopt the new technology. This reluctance eventually was overcome, because the newer technology offered superior quality, durability, and convenience.

In the case of shared spectrum, are the network effects and switching costs strong enough to justify a government subsidy to support the new alternative? David Isenberg and David Weinberger believe so.

Arguably, building the best network is a Public Good. It will boost the economy, open global markets, and make us better informed citizens, customers and business people.

However, most of these benefits are private benefits, for which individual users should be willing to pay. It is not clear that the purely social benefits—those that cannot be captured by individual firms and consumers—are so high that a subsidy is warranted. Also, it is not clear that policymakers have the information they need to arrive at a reasonable estimate for a subsidy, or even to be sure which is the best technology to subsidize. This is worrisome, because as Weinberger and Isenberg also point out, "Big governments tend to make big, costly, persistent mistakes."

Conclusion

The wireless solution for the "last mile" has something in common with the popular technologies of the Internet and cell phones. Like the Internet, it employs an elegant network architecture that benefits from the powerful economic force of Moore's Law. Like cell phones, it enables people to enjoy mobility as they take advantage of communication services—in this case, broadband access to the Internet.

The wireless approach may pose a regulatory dilemma for Congress and the FCC. The terms of existing spectrum licenses are not flexible enough to accommodate the wireless last mile.

Because the Wireless Last Mile gets less expensive to deploy with every iteration of Moore's Law, it seems almost certain that at some point the Wireless Last Mile will become reality. However, is the cost-benefit calculation for adopting the Wireless Last Mile favorable now? Or will it not be favorable for several more years? As with any futuristic technology, such as hydrogen fuel cells or obtaining fresh water from seawater, the market should help to determine when the wireless last mile makes economic sense. That is, given the uncertainty about the technical feasibility, costs, and benefits of the Wireless Last Mile, the best public policy may be to rely on the information supplied by individuals and firms via the market.

Perhaps one solution would be for the FCC to hold another auction. In the new auction, current license owners could put their spectrum up for sale, and the spectrum could be bid on by new or existing owners. Once the spectrum has been re-auctioned, it could be used for any purpose, and it could be sold at any time.

By putting all spectrum up for sale at once, the FCC could enable Internet service providers to obtain blocks of spectrum large enough to support the Wireless Last Mile. However, by using an auction mechanism rather than confiscating spectrum, the FCC could ensure that existing owners are able to realize some economic value for the licenses they now own.
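To make the mechanism concrete, here is a minimal sketch of the reallocation logic; the licenses, bidders, and valuations below are all hypothetical:

```python
# Toy re-auction of spectrum licenses; every bidder and valuation
# here is hypothetical. Each license goes to its highest-value user,
# and (in the spirit of the proposal) the proceeds flow to the
# current owner, so nothing is confiscated.

licenses = {
    # license: {bidder: valuation, in $ millions}
    "UHF channel 41": {"incumbent broadcaster": 20, "wireless ISP": 85},
    "UHF channel 52": {"incumbent broadcaster": 60, "wireless ISP": 35},
}

for name, bids in licenses.items():
    winner = max(bids, key=bids.get)
    # Second-price rule: the winner pays the runner-up's bid, a
    # standard device for encouraging truthful bidding.
    price = sorted(bids.values())[-2]
    print(f"{name}: to {winner} for ${price}M")
# UHF channel 41: to wireless ISP for $20M
# UHF channel 52: to incumbent broadcaster for $35M
```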

The alternatives to a new auction are simply confiscating spectrum to create a "commons" or de-regulating existing licenses to allow owners to use or re-sell their licenses for any purpose. Confiscating spectrum would assume that the government can determine, as of right now, that the benefits of the Wireless Last Mile exceed the costs. De-regulating existing licenses could be too chaotic, and it might leave manufacturers and service providers too uncertain about which blocks of spectrum will ultimately be available for the Wireless Last Mile.

The Wireless Last Mile represents an intriguing opportunity. It raises many questions about costs and benefits. The challenge is to find a way for the market to supply the answers.


*Arnold Kling has a Ph.D. in economics from the Massachusetts Institute of Technology. He is a frequent contributor to TechCentralStation. He maintains two web logs on economic topics, one called EconLog [formerly called "Great Questions of Economics"—Econlib Ed. update, 2003] and the other called Corante bottomline.