Editor’s note: Tom Snyder, executive director of rapidly growing Raleigh-based RIoT and a thought leader in the emerging Internet of Things, is the newest columnist to join WRAL TechWire’s list of top-drawer contributors. His “Datafication Nation” columns are part of WRAL TechWire’s Startup Monday package.
RALEIGH – Last week I wrote about 5G and how it has struggled to gain traction in the enterprise market. You may want to read that piece for context for today’s article, where I’ll discuss how emerging 6G standards are likely to evolve, based on historical norms. Next week I’ll suggest a better approach that I believe would more closely serve market needs, but would disrupt the current power structure and require a paradigm shift in government policy.
To start, let’s be clear. 5G is here for a while – a LONG while in technology terms. The 3GPP, the organization that manages cellular standards, is expected to release the first 6G specification, called Release 20, in 2025. The industry will review, tweak, and then ratify Release 21 in 2028, leading to commercial deployment of 6G in 2030. Even once a specification has been released, it takes years for the semiconductor companies to engineer, build and test chips, which then take years to get designed into phones, tablets and infrastructure equipment. And all this network infrastructure has to be built out and deployed. It is a long process.
But work has already begun to determine the technical specifications for the 6th Generation of cellular technology. Most of the early work involves industry experts and lobbyists working with governments around the world to determine future radio spectrum allocation. The issue, in short, is that society (and the industries that serve it) wants more and more wireless communication. All equipment and devices are getting connected to each other and to the internet, but there is limited bandwidth to do this wirelessly.
There is no way to magically create more radio frequencies. The physics of radio waves is governed by natural science. But we do have choices on how to use the spectrum that is available. The problem is that most of the usable spectrum is already allocated to other applications.
In theory, there are infinite higher frequencies to explore – but in a practical sense, very high frequency signals only travel extremely short distances and require massive amounts of energy to transmit and receive, which is not practical for most applications.
Inside the spectrum
The alternatives to creating new spectrum are to reallocate existing blocks of spectrum, and to use existing spectrum more efficiently. For example, in the analog TV days, each VHF and UHF channel in the US was allocated 6 MHz of spectrum within the various bands. Wayne’s World, the Saturday Night Live sketch, claimed to air on Aurora Public Access Channel 10. This would have broadcast at 192-198 MHz. Digital broadcast is more spectrum efficient, and an equivalent Channel 10 today would only require about 2.5 MHz of bandwidth.
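The 6 MHz channel arithmetic is simple enough to sketch in a few lines of Python. This is just a back-of-the-envelope illustration (not part of the column); the band edges come from the published US VHF high-band plan, where channels 7 through 13 occupy 174-216 MHz in 6 MHz slices:

```python
# US analog TV, VHF high band: channels 7-13 occupy 174-216 MHz,
# with each channel allocated a 6 MHz slice.
CHANNEL_WIDTH_MHZ = 6
BAND_START_MHZ = 174  # lower edge of channel 7

def vhf_high_band_range(channel: int) -> tuple[int, int]:
    """Return the (low, high) edge frequencies in MHz for VHF channels 7-13."""
    if not 7 <= channel <= 13:
        raise ValueError("VHF high band covers channels 7-13 only")
    low = BAND_START_MHZ + (channel - 7) * CHANNEL_WIDTH_MHZ
    return low, low + CHANNEL_WIDTH_MHZ

print(vhf_high_band_range(10))  # Channel 10 -> (192, 198)
```

The same slicing logic explains why freeing even a handful of channels returns tens of megahertz to the auction pool.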
When the government sunset analog TV broadcast and replaced it with a mandate for digital broadcast only, it freed up about 25% of the original frequency bands, including some 700 MHz spectrum originally used for public safety broadcasting. The government auctioned the newly freed up spectrum to cellular carriers to increase the market for wireless broadband. Companies like Verizon and AT&T won those auctions, which netted the federal government $19.6B.
The difficulty with a reallocation strategy is that it involves shutting down old protocols, which in turn can shut down entire industries. The radio frequencies between 4200 and 4400 MHz, for example, are allocated for radio altimeters. This is a significantly wide band of frequencies in a favorable frequency range that cellular and other industries would love to re-purpose. While almost all modern aircraft have more accurate digital sensors and tools for measuring and reporting altitude, there is a massive population of older planes, including most single-engine personal aircraft, that rely on old altimeters. Taking the existing band allocation away would render them obsolete, creating a huge negative impact on the aircraft industry.
Globalizing our access
Another challenge is that most industries today have globalized. As such, we want devices that work no matter where we are in the world.
Most countries prioritize their spectrum allocation around national defense. In the early days of spectrum allocation, it was strategic for nations to put their most critical radio communications on different frequencies than their enemies might select. It was really expensive to design radios and antennas that worked across all frequencies, so putting your own communications on your own frequencies made it costly and difficult for others to listen in.
That set in motion a problem for the market. Since countries allocated different frequencies to their own militaries, there were few common, unallocated frequency ranges for global coordination. Product globalization for any industry was relatively rare at that time. Companies tended to build products for domestic markets, and the few products sold globally did not yet have wireless radios embedded in them. There was little pressure from industry to address the economic benefits of frequency harmonization.
You may be surprised to know that even today, your mobile phone works almost anywhere in the world not because every country has put cellular communications on the same frequency bands, but because phone designers and chipset manufacturers put multiple radios and antennas into their products, so the phone can detect and switch from radio to radio and frequency channel to channel based on where it is at the time. This is part of what makes our phones expensive and makes achieving great antenna performance so hard. The industry compromises on “perfect” performance at one frequency to instead prioritize “good enough” performance on the many frequency bands the product may need to operate on during its life.
Historically, cellular standards efforts have centered on how to leverage technology to generate the most market value from the finite frequency spectrum resource. The good news is that most of the Generations, or G’s, of the industry have created benefits for the operators, industry and consumers alike, which has incentivized governments to work closely with the cellular industry to allocate spectrum.
The road to 5G
1G to 2G was mostly about transformation from analog to digital voice communications, creating bandwidth and capacity for the operators and greatly extending battery life and antenna performance for consumers.
2G to 3G brought digital data. We were now able to connect our phones to the internet, opening new value for consumers and revenue streams for the operators.
3G to 4G widened the pipe. We can now stream video seamlessly and conduct truly high bandwidth applications like augmented and virtual reality. 4G truly unlocked the enterprise market and operators flourished.
All of these generational improvements created efficiencies for the cellular carriers, which translated to the ability to serve more customers.
4G to 5G is where we’ve seen a bump in the road. Very little “new value” has been realized by consumers or enterprises, as I discussed last week. But there has been significant benefit to the operators themselves. A stronger embrace of software, network slicing and data compression algorithms and other “techy” stuff allows 5G networks to handle far more customers in each cell, and process more total bandwidth of communications. This makes networks more efficient to manage and keeps quality of service higher. This is good for operator profits. But so far we’ve not seen cost savings passed down to consumers, and these efficiency improvements have not enabled new killer apps.
My fear is that the 6G standards are starting with this same “operator efficiency” mindset. The biggest new component in discussion today is how to implement AI into the standards. There are ways that AI can enable software defined networks that persistently manage and moderate radio traffic, sub-channel bandwidth, power levels, data compression and a host of other variables to maximize performance in real time. Operators will be able to carry more data on today’s spectrum and to manage more customers and services, which will make them more money.
The final piece of the puzzle takes us back to spectrum allocation. Carriers would love to get more, and if the government follows old habits, it will hold new 6G spectrum auctions. Governments will reallocate frequency bands from old applications to new 6G commercial use, and the big telcos will bid for license rights to maintain monopoly control of that spectrum.
It is not yet apparent, from what I have read of the very early 6G discussions, how 6G is envisioned to create new transformational applications. And if 6G follows the 5G paradigm, there is a risk history repeats itself.
Next week, I’ll share ideas of how we might want to behave differently.
The post Next generation in wireless (6G) is coming – will world learn hard lessons from 5G? first appeared on WRAL TechWire.