The future of networking is wireless, and we shouldn’t want it any other way. Despite the well-deserved hype about the boundless capacity of fiber optic wired networks, most network interactions in the future will begin and end without wires. There are two obvious reasons for this: mobility and flexibility. Not only do wireless networks allow for access while we’re in motion, they also allow us to rearrange our gear without the hassle of pulling new wires. And wireless networks are fundamentally safer, neater, and more reliable than their wired counterparts, as they do away with cable clutter and are immune to cable cuts.
Emerging technologies promise to increase the efficiency of wireless networks a hundred-fold (0): parallelism technologies such as Multi-User Multiple Input/Multiple Output (MU-MIMO), Space Division Multiplexing (SDM), and beamforming permit multiple devices to use the same spectrum at the same time in the same place, something that previously wasn’t possible without a loss of performance. Today’s Wi-Fi, by contrast, uses a time-sharing technique that buckles under congestion when too many people use the same network; we experience this today in airports and at trade shows.
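The difference in scaling behavior can be seen in a minimal Python sketch. The link rates and stream counts below are entirely hypothetical; the sketch only contrasts a time-shared channel, whose aggregate throughput is capped at the single-link rate no matter how many users share it, with a spatially multiplexed one, where several users transmit at once:

```python
# Illustrative sketch (not a standards model): time-shared airtime vs.
# MU-MIMO-style spatial multiplexing. All numbers are hypothetical.

def time_shared_throughput(link_rate_mbps: float, num_users: int) -> float:
    """Classic Wi-Fi style: users take turns, each getting a slice of airtime."""
    per_user = link_rate_mbps / num_users
    return per_user * num_users  # aggregate never exceeds the single link rate

def spatially_multiplexed_throughput(link_rate_mbps: float, num_users: int,
                                     spatial_streams: int) -> float:
    """MU-MIMO style: up to `spatial_streams` users transmit concurrently."""
    concurrent = min(num_users, spatial_streams)
    return link_rate_mbps * concurrent

users = 8
print(time_shared_throughput(100.0, users))               # 100.0 Mbps aggregate
print(spatially_multiplexed_throughput(100.0, users, 4))  # 400.0 Mbps aggregate
```

The point is the shape of the curves, not the numbers: time-sharing divides a fixed pie among users, while spatial multiplexing grows the pie with each additional concurrent stream.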
For the most part, the performance of personal computers, laptops, smartphones and data center servers also increases by parallelism with each generation of technology. Hence, there’s a multiplier effect when faster and faster devices are interconnected with faster and faster networks.
The new technologies work in both licensed and unlicensed spectrum. This functionality enables better integration of licensed mobile networks – which cover wide areas – and unlicensed networks, such as Bluetooth and Wi-Fi, whose primary value is indoors and between relatively stationary devices.
One of the most important developments for the integration of licensed and unlicensed networks is LTE-Unlicensed, or LTE-U (1). This technology manages the ways that wireless devices interact with networks in small, mainly indoor settings. It leverages the information that network management has about network activity to improve efficiency, performance, and quality.
One of the shortcomings of traditional Wi-Fi is that devices affect each other in ways they can’t predict. Shifting more control to the network access points where these interactions can be seen can prevent harmful interactions and make gaming and voice/video conferencing work better.
Parallelism makes networks faster and more efficient, but it doesn’t solve all the problems we have with pervasive networks, as we’ve learned from small cell deployments in the mobile space. In order to connect incoming calls and deliver incoming data packets, the network needs to locate the user. When cells shrink to increase data capacity, more information needs to be exchanged for location info to stay up-to-date. 5G mobile networks will track device motion more efficiently by integrating small micro cells with large macro cells. With Licensed Assisted Access (LAA), LTE and LTE-U networks can leverage this information as well.
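The location-update cost of shrinking cells can be shown with a toy model (the straight-line geometry and all figures are assumptions, not measurements): a device moving the same distance crosses many more small-cell boundaries than macro-cell boundaries, and each crossing can trigger a location update.

```python
import math

# Toy model: a device moving in a straight line crosses roughly one cell
# boundary per cell diameter traveled. All figures are illustrative.

def cells_crossed(path_m: float, cell_radius_m: float) -> int:
    """Rough count of cell boundaries crossed along a straight path."""
    return math.ceil(path_m / (2 * cell_radius_m))

# A 10 km trip across macro cells (1 km radius) vs. small cells (100 m radius):
print(cells_crossed(10_000, 1_000))  # 5 crossings
print(cells_crossed(10_000, 100))    # 50 crossings
```

Shrinking the cell radius by 10× multiplies the boundary crossings, and hence the signaling load, by roughly the same factor, which is why small cells need the macro-cell integration described above.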
While parallelism, small cells, and licensed/unlicensed integration offer the means to increase network efficiency by 300 times or more in the aggregate, this won’t be enough to meet our future needs. The problem, of course, is the massive explosion in new applications that depend on wireless access. The figures on the growth of wireless data use since the advent of the iPhone are staggering; according to Cisco (2), “[2013’s] mobile data traffic was nearly 30 times the size of the entire global Internet in 2000.” And this growth isn’t slowing down.
It’s not unreasonable to target a thousand-fold increase in overall wireless capacity from the 2013 level, as Qualcomm has done (3). While this goal seemed ludicrously optimistic two years ago, it’s now easy to see how it could happen: combine the projected 300-fold improvement in network efficiency over the next five to ten years with a tripling of spectrum allocations, and 300 times three is roughly a thousand.
This is where Washington needs to do its part. The current Internet regulation kerfuffle tends to obscure the fact that spectrum policy doesn’t have to be a bitterly partisan dispute. The fundamental technique is to shift legacy systems to more advanced technologies with smaller spectrum footprints, and then to repurpose the spectrum they no longer need to more useful applications, systems, and networks. The shorthand for this process is “upgrade and repack.”
The best example in recent history is the digital TV (DTV) transition: DTV allowed broadcasters to transmit five times as much video in half the spectrum they needed in the old analog days. The legacy systems that need to be upgraded and repacked today mainly belong to the government, and they have been upgraded only sluggishly. While the engineering community begins developing new networking standards as soon as the last generation’s standards are nearly complete, government systems remain static for decades.
We have a structural problem with spectrum policy in the US. Government use is overseen by one agency – NTIA – while commercial use is managed by the FCC, a much more powerful agency. Combining spectrum responsibilities into a single agency with more say over government usage, independence from agency lobbying, and incentives to improve overall value is the only way to shift spectrum from government to public use; I’ve discussed this in a rough plan for a Federal Spectrum Service (4).
The math is compelling: technology will increase spectrum efficiency by 100 times, and small cells will triple or quadruple that number; government needs to triple it again simply by using the tools the private sector has provided. That shouldn’t be too hard, even for Washington.
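That multiplication can be written out explicitly; the factors below are the article’s own round numbers, not measurements:

```python
# Back-of-the-envelope version of the 1,000x capacity target.
efficiency_gain = 100   # parallelism: MU-MIMO, SDM, beamforming
small_cell_gain = 3     # densification triples that (4 in the "quadruple" case)
spectrum_gain = 3       # upgrade-and-repack roughly triples usable spectrum

total = efficiency_gain * small_cell_gain * spectrum_gain
print(total)  # 900 -- on the order of the thousand-fold target
```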
Beyond the reallocation of the spectrum we use today, technology offers some easy opportunities for policy makers in frequency bands that aren’t heavily used. For example, the 10 GHz band is useful for wireless backhaul and possibly for some mobile uses, although there are questions about its value for battery-powered devices. The 60 GHz band is extremely useful for very short-range but highly data-intensive multimedia scenarios around entertainment systems. The application of new technology to millimeter wave frequencies can increase their efficiency, and there are opportunities to use frequencies below today’s TV band for sensors and other Internet of Things devices.
The opportunities are boundless, and the future belongs to the bold. Spectrum policy would do well to take a page from the engineer’s playbook and make planning for progress a deliberate and ongoing activity.