Video: 5G: Is all the hype deserved?
It is the fourth time in history that the world’s telecommunications providers (the telcos) have acknowledged the need for a complete overhaul of their wireless infrastructure. This is why the ever-growing array of technologies, listed by the 3rd Generation Partnership Project (3GPP) as “Release 15” and “Release 16” of its standards for wireless telecom, is called 5G. It is an effort to create a sustainable industry around the wireless consumption of data for all the world’s telcos.
One key goal of 5G is to dramatically improve quality of service, and extend that quality over a broader geographic area, in order for the wireless industry to remain competitive against the onset of gigabit fiber service coupled with Wi-Fi.
New business models
The initial costs of these improvements may be tremendous, and consumers have already demonstrated their intolerance for rate hikes. So, to recover those costs, telcos will need to offer new classes of service to new customer segments, for which 5G has made provisions. These include:
- Fixed wireless data connectivity in dense metropolitan areas, with gigabit per second or better bandwidth, through a dazzling, perhaps bewildering, new array of microwave relay antennas;
- Edge computing services that bring computing power closer to the point where sensor data from remote, wireless devices would be collected, eliminating the latency incurred by public cloud-based applications;
- Machine-to-machine communications services that could bring low-latency connectivity to devices such as self-driving cars and machine assembly robots;
- Video delivery services that would compete directly against today’s multi-channel video program distributors (MVPDs) such as Comcast and Charter Communications, perhaps offering new delivery media for Netflix, Amazon, and Hulu, or perhaps competing against them as well.
“It’s not only going to be we humans that are going to be consuming services,” remarked Nick Cadwgan, director of IP mobile networking, speaking with ZDNet. “There’s going to be an awful lot of software consuming services. If you look at this whole thing about massive machine-type communications [mMTC], in the past it’s been primarily the human either talking to a human or, when we have the Internet, the human requesting services and experiences from software. Moving forward, we are going to have software as the requester, and that software is going to be talking to software. So the whole dynamic of what services we’re going to have to deliver through our networks, is going to change.”
Driving for higher yields
5G comprises several technology projects in both communications and data center architecture, all of which must collectively yield benefits for telcos as well as customers for any of them to be individually considered successful. The majority of these efforts fall into one of three categories:
- Spectral efficiency — Making better use of multiple frequencies so that greater bandwidth may be extended across greater distances from base stations (historically, the main goal of any wireless “G”);
- Energy efficiency — Leveraging whatever technological gains there may be for both transmitters and servers, in order to drastically reduce cooling costs;
- Utilization — To afford the tremendous communications infrastructure overhaul that 5G may require, telcos may need to create additional revenue-generating services such as edge computing and mobile app hosting, placing them in direct competition with public cloud providers.
It was during the implementation of 4G that telcos realized they wished they had different grades of infrastructure to support different classes of service. 5G allows for three service grades that may be tuned to the special requirements of their customers’ business models:
Enhanced Mobile Broadband (eMBB) aims to service more densely populated metropolitan centers with downlink speeds approaching 1Gbps (gigabits per second) indoors, and 300Mbps (megabits per second) outdoors. It would accomplish this through the installation of extremely high-frequency millimeter-wave (mmWave) antennas throughout the landscape — on lampposts, the sides of buildings, the branches of trees, existing electrical towers, and in one novel use case proposed by AT&T, the tops of city buses. Since each of these antennas, in the metro use case, would cover an area probably no larger than a baseball diamond, hundreds, perhaps thousands, of them would be needed to thoroughly service any densely populated downtown area. And since most would not be omnidirectional — their maximum beam width would only be about 4 degrees — mmWave antennas would bounce signals off each other’s mirrors, until they eventually reached their intended customer locations. For more suburban and rural areas, eMBB would seek to replace 4G’s current LTE system, with a new network of lower-power omnidirectional antennas providing 50Mbps downlink service.
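The antenna-density claim above can be sanity-checked with a rough calculation. The service-area and per-cell coverage figures below are illustrative assumptions (a mid-sized downtown and roughly one baseball field of coverage per antenna), not published numbers:

```python
# Rough estimate of how many mmWave cells a downtown might need.
# Both input figures are illustrative assumptions for this sketch.

downtown_km2 = 10.0      # assumed downtown service area
cell_m2 = 8_000.0        # ~one baseball field of coverage per antenna

cells_needed = (downtown_km2 * 1e6) / cell_m2
print(int(cells_needed))  # → 1250
```

Even under generous assumptions, the count lands in the "hundreds, perhaps thousands" range the article describes.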
Massive Machine Type Communications (mMTC) enables the machine-to-machine (M2M) and Internet of Things (IoT) applications that a new wave of wireless customers may come to expect from their network, without imposing burdens on the other classes of service. Experts in the M2M and logistics fields are on record saying that 2G service was perfectly fine for the narrow service bands their signaling devices required, and that later generations actually degraded that service by introducing new sources of latency. mMTC would seek to restore that service level by implementing a compartmentalized service tier for devices needing downlink bandwidth as low as 100Kbps (kilobits per second, right down there with telephone modems) but with latency kept low at around 10ms (milliseconds).
Ultra Reliable and Low Latency Communications (URLLC) would address critical-needs communications where bandwidth is not quite as important as speed — specifically, an end-to-end latency of 1ms or less. This would be the tier that addresses the autonomous vehicle category, where decision time for reaction to a possible accident is almost non-existent. URLLC could actually make 5G competitive with satellite, opening up the possibility — still in the discussion phase among the telcos — of 5G replacing GPS for geolocation.
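The three service grades above amount to a small lookup table of quality-of-service targets. A minimal Python sketch follows; the bandwidth and latency figures come from the descriptions above where stated, while the remaining values (eMBB's latency target and URLLC's bandwidth) are placeholder assumptions, as is the tier-selection heuristic itself:

```python
# Sketch of the three 5G service grades as a QoS lookup table.
# eMBB latency and URLLC bandwidth are invented placeholders;
# the other targets are the headline figures quoted in the text.

from dataclasses import dataclass

@dataclass
class ServiceTier:
    name: str
    downlink_bps: float   # target downlink bandwidth
    latency_ms: float     # target end-to-end latency

TIERS = {
    "eMBB":  ServiceTier("Enhanced Mobile Broadband", 1e9, 10.0),
    "mMTC":  ServiceTier("Massive Machine Type Communications", 100e3, 10.0),
    "URLLC": ServiceTier("Ultra Reliable and Low Latency Communications", 10e6, 1.0),
}

def pick_tier(needed_bps: float, max_latency_ms: float) -> str:
    """Return the tier meeting both requirements with the least bandwidth headroom."""
    candidates = [
        key for key, t in TIERS.items()
        if t.downlink_bps >= needed_bps and t.latency_ms <= max_latency_ms
    ]
    return min(candidates, key=lambda k: TIERS[k].downlink_bps)

# A self-driving car needs modest bandwidth but 1 ms latency:
print(pick_tier(5e6, 1.0))   # → URLLC
```

The point of the sketch is simply that tier assignment is a matching problem: a factory sensor fleet and a 4K video stream land in different tiers even on the same physical network.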
Plotting the inflection point
“The first generation of mobile systems that were launched around 1991 — popularly known as 2G/GSM — was really focused on massive mobile device communication,” explained Sree Koratala, head of technology and strategy for 5G Wireless in North America for communications equipment provider Ericsson, speaking with ZDNet. “Then the next generation of mobile networks, 3G, launched starting in 1998, enabled mobile broadband, feature phones, and browsing. When 4G networks were launched in 2008, smartphones popularized video consumption, and data traffic on mobile networks really exploded.
“All these networks primarily catered towards consumers,” Koratala continued. “Now, when you look at this next generation of mobile networks, 5G, it is very unlike the previous generation of network. It’s truly an inflection point from the consumer to the industry.”
Engineers worldwide remain optimistic, at the time of this writing, that the full release of the first complete set of 5G standards (officially “Release 15”) by the wireless industry’s leading standards body will take place in June 2018 — literally a matter of weeks.
Why cooling made 5G an urgent necessity
In May 2017, AT&T President of Technology Operations Bill Hogg declared the existing wireless business model for cell tower rental, operation, and maintenance “unsustainable.” Some months earlier, a J. P. Morgan analyst characterized the business model for wireless providers in Southeast Asia the same way, warning that the system then in place had rendered it impossible for carriers to keep up with customer demand. And as research firm McKinsey & Company asserted in a January 2018 report, the growth path for Japan’s existing wireless infrastructure is becoming “unsustainable,” rendering 5G for that country “a necessity.”
One senses a theme.
The world’s telcos need a different, far less constrained business model than the one 4G has left them with. The only way they can accomplish this is with an infrastructure that incurs radically lower costs than the current one, particularly for maintaining, and mainly cooling, their base station equipment.
Cooling and the costs associated with facilitating and managing cooling equipment, according to studies from analysts and telcos worldwide, account for more than half of telcos’ total expenses for operating their wireless networks. Global warming (which, from the perspective of meteorological instrumentation, is indisputable) is a direct contributor to compound annual increases in wireless network costs. Ironically, as a 2017 study by China’s National Science Foundation asserts, the act of cooling 4G LTE equipment alone may contribute as much as 2 percent to the entire global warming problem.
The world’s biggest example
The 2013 edition of a study by China Mobile, that country’s state-licensed service provider, examined the high costs of maintaining energy-inefficient equipment in its 3G wireless network, which happens to be the largest on the planet in both territory and customers served. For 2012, China Mobile estimated its network consumed 14 billion kilowatt-hours (kWh) of electricity. As much as 46 percent of the electricity consumed by each base station, it estimated, was devoted to air conditioning.
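Putting those two figures together implies a striking amount of energy spent purely on air conditioning. A back-of-the-envelope calculation, treating the 46 percent per-station share as if it applied network-wide (a simplifying assumption, since the study stated it per base station):

```python
# Back-of-the-envelope arithmetic on China Mobile's 2012 figures.
# Applying the per-station 46% cooling share network-wide is a
# simplifying assumption for illustration only.

annual_consumption_kwh = 14e9   # 14 billion kWh network-wide (2012 estimate)
cooling_share = 0.46            # up to 46% of base-station power went to A/C

cooling_kwh = annual_consumption_kwh * cooling_share
print(f"{cooling_kwh / 1e9:.2f} billion kWh/year on cooling alone")
# → 6.44 billion kWh/year on cooling alone
```

On the order of six billion kilowatt-hours a year spent on air conditioning makes clear why eliminating active cooling became the architectural priority described next.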
That study proposed a new method of constructing, deploying, and managing network base stations. Called Cloud Radio Access Network (C-RAN), it’s a method that has greatly influenced the development of 5G. Some telcos have embraced C-RAN in its entirety — for instance, AT&T, in trials of its intermediate “5G Evolution” system in cities including Indianapolis.
One of the hallmarks of C-RAN cell site architecture is the total elimination of the on-site baseband unit (BBU) processors, which were typically co-located with the site’s radio head. That functionality is instead virtualized and moved to a centralized cloud platform, where multiple BBUs’ control systems share tenancy, in what’s called the baseband pool. The cloud data center is powered and cooled independently, and linked to each of the base stations by no more than 40 km of fiber optic cable.
Moving BBU processing to the cloud eliminates an entire base transmission system (BTS) equipment room from the base station (BS). It also completely abolishes the principal source of heat generation inside the BS, making it feasible for much, if not all, of the remaining equipment to be cooled passively — literally, by exposure to the open air. The configuration of that equipment could then be optimized, like the 5G trial transmitter shown above, constructed by Ericsson for Japan’s NTT DOCOMO. The goal for this optimization is to reduce a single site’s power consumption by over 75 percent.
What’s more, it takes less money to rent the site for a smaller base station than for a large one. Granted, China may have a unique concept of the real estate market compared to other countries. Nevertheless, China Mobile’s figures show that rental fees with C-RAN were reduced by over 71 percent, contributing to a total operational expenditure (OpEx) reduction for the entire base station site of 53 percent.
Keep in mind, though, that China Mobile’s figures pertained to deploying and maintaining 3G equipment, not 5G. But the new standards for transmission and network access, called 5G New Radio (5G NR), are being designed with C-RAN ideals in mind, so that the equipment never generates enough heat to require active cooling — the tripwire that would effectively quadruple OpEx.
The new cloud at the new edge
It appears that much of the success of 5G rests upon this new class of cloud data centers, into which the functionality of today’s baseband units would move. As of late April 2018, there is still considerable uncertainty as to where this centralized RAN controller would reside. There are competing definitions.
Some have taken a good look at the emerging crop of edge data centers sprouting adjacent to today’s cell towers, and are suggesting that the new Service Oriented Core (SOC) could be distributed across those locations. Yet skeptics are wondering: why bother eliminating the BTS equipment room in the first place, if the SOC would only put it back? Alternatively, a separate SOC station could be established that services dozens of towers simultaneously. The problem there, obviously, is that such a station would be a full-fledged data center in itself, with real estate and cooling issues of its own.
Either option might be more palatable, some engineers believe, if the servers operating there could divide their computing infrastructure between internal operations and special customer services — edge computing services that could compete with cloud providers such as Amazon and Microsoft Azure, by leveraging much lower latency. The ability to do so is entirely dependent upon a concept called network slicing. This is the subdivision of physical infrastructure into virtual platforms, using a technique perfected by telecommunications companies called network functions virtualization (NFV).
The dicey subject of slicing
Exactly what routes these network slices would take through the infrastructure is completely up in the air. T-Mobile and others have suggested slices could divide classes of internal network functions — for instance, dividing eMBB from mMTC from URLLC. Others, such as the members of the Next Generation Mobile Networks Alliance (NGMN), suggest that slices could effectively partition networks in such a way (as suggested by the NGMN diagram above) that different classes of user equipment, utilizing their respective sets of radio access technologies (RAT), would perceive quite different infrastructure configurations, even though they’d be accessing resources from the same pools.
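Conceptually, slicing along service-class lines — the T-Mobile suggestion above — amounts to carving one pool of physical capacity into isolated virtual partitions. A toy Python sketch of that idea follows; the class names, shares, and capacity figure are invented for illustration, and real NFV orchestration is of course far more involved:

```python
# Toy illustration of network slicing: one physical capacity pool
# carved into isolated virtual slices. All numbers are invented.

class PhysicalNetwork:
    def __init__(self, capacity_gbps: float):
        self.capacity_gbps = capacity_gbps
        self.slices: dict[str, float] = {}

    def allocate_slice(self, name: str, share: float) -> None:
        """Reserve a fraction of total capacity for one virtual slice."""
        if sum(self.slices.values()) + share > 1.0:
            raise ValueError("physical capacity exhausted")
        self.slices[name] = share

    def slice_capacity(self, name: str) -> float:
        return self.capacity_gbps * self.slices[name]

net = PhysicalNetwork(capacity_gbps=100.0)
net.allocate_slice("eMBB", 0.70)    # bulk consumer broadband
net.allocate_slice("URLLC", 0.10)   # latency-critical traffic
net.allocate_slice("mMTC", 0.15)    # high-density sensor traffic

print(net.slice_capacity("URLLC"))  # → 10.0
```

The key property the sketch illustrates is isolation: each slice sees only its own reserved share, so a burst of eMBB video traffic cannot starve the URLLC partition — the same guarantee the NGMN partitioning scheme seeks at far greater sophistication.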
Another suggestion being made by some of the industry’s main customers, at 5G industry conferences, is that telcos offer the premium option of slicing their network by individual customer. This would give customers willing to invest heavily in edge computing services more direct access to the fiber optic fabric that supports the infrastructure, potentially giving a telco willing to provide such a service a competitive advantage over a colocation provider, even one with facilities adjacent to a “carrier hotel.”
But depending upon whom one asks, slicing networks by customer may actually be impossible. There are diametrically split viewpoints on whether slicing could host telco functions and customer functions together on the same cloud. Some have suggested such a convergence is vitally necessary for 5G to fulfill the value proposition embodied in C-RAN. Architects of the cloud platforms seeking to play a central role in the SOC, such as OpenStack and CORD, argue that this convergence is already happening, and is the whole point of the architecture in the first place.
Others in the data center space have argued that aggregating an essentially multi-tenant infrastructure like a customer edge data center, with an essentially single-tenant infrastructure like a telco core facility, would carry significant security risks for both parties. Juniper Networks, one of the pioneers in the software-defined networking (SDN) space, suggests such a security architecture is not feasible. And AT&T has gone so far as to suggest the argument is moot and the discussion is actually closed: Both classes of functions have already been physically separated, not virtually sliced, in the 5G specifications, its engineers assert.
But AT&T isn’t the “Bell System” any more — it doesn’t get the final say. Thus one of the most critical decisions in 5G architecture may end up being the result of trial and error.
However this gets resolved, the very fact that slicing must take place somehow, if only to virtually separate those functions that will not have already been physically separated, suggests that 5G will not be “a fully meshed world of wirelessly connected everything.” Security — the topic that always waits until the last moment — will ensure that certain things remain strategically disconnected, for our own good.
The emergence of fixed wireless
Ericsson’s own forecasts of wireless connectivity have been known to fool people. In June 2017, its annual Mobility Report estimated that mobile data traffic would grow at an average compound annual growth rate of 42 percent through 2022, having grown eightfold by the end of that period. “By the end of the forecast period,” stated Ericsson, “more than 90 percent of mobile data traffic will come from smartphones.”
That forecast generated a truckload of headlines. A half-billion 5G mobile subscriptions are expected worldwide by 2022, reported ZDNet’s Corinne Reichert. Ericsson’s updated report, published last November, doubled that forecast number for 2023, adding that 5G access would reach one-fifth of the world’s population by the end of that year.
The keyword in the above paragraphs is “mobile.” Up to now, all the “Gs” have pertained to the wireless access technology we’ve historically perceived as synonymous with mobility. For 5G to be truly successful, Ericsson’s Koratala told us, it will need to open up access to a broader range of devices, many of which are actually not the least bit mobile.
The not-so-mobile proposition
“These connections are expected to be going into devices in factories, transportation, and the grid,” said Koratala. “So the range of applications means a huge diversification of performance and requirements for communication. Then there are some use cases that might be demanding a 5x improvement in latency, a 100x or 1000x data volume, as well as [extending] battery life. So when you look at that set of requirements, it’s very clear that it is not a single use case. It really becomes an enabler for a wide variety of use cases, that will have different requirements to be met to make them viable.”
The key mission of mMTC is to service wireless devices that don’t move. Its transmission scheme will be tuned for very high density — for situations like factory floors where thousands of individual mechanical elements are sending operational data, simultaneously, to an off-site location for instant analytics.
Viewed in this light, the prediction that nine-tenths of mobile data will be consumed by the largest class of mobile devices seems about as spot-on as a forecast that rain will continue to be wet. What is completely unpredictable at this point is whether a fixed wireless use case will be competitive in an environment where wired broadband is also undergoing a revolution.
Exchanging yesterday’s new technology for today’s
You will hear from many sources that 5G is not about what anything is, but rather what it enables you to do. No, it isn’t. 5G is about the things in which the telecom industry, and to a growing extent the data center networking industry, must invest in order to produce the latest editions of platforms such as V2X and mMTC, so that it can start earning revenue from those services. 5G is all about what it is.
If you end up watching smoothly streaming 4K video on a new class of smartphone, allowing yourself to be ferried between cities in an otherwise unoccupied vehicle, or participating in a virtual, real-time football tournament with a few dozen goggle-wearers scattered across the planet, then you will be fulfilling the hopes of the engineers working to make 5G viable. The truth is, none of these consumer technologies are the real reason 5G is being engineered. They are the side benefits.
The big gamble
5G is a collective bargain between the telecommunications industry and society. To allow for anything close to evenly distributed coverage over a metropolitan area, the base stations containing the transmitters and receivers (the “cells”) must be smaller, much lower in power, and much greater in number than they are today. Essentially, the new cell towers must co-exist with the environment. An outdoor photograph taken in any direction will be just as likely to include a 5G tower as not. (The example above, provided by AT&T, includes three.)
It would not be unprecedented. We’ve borne telephone and electric poles through our neighborhoods and, not all that long ago, willingly installed TV aerials the size of kites on our chimneys. Some of us still use their old mounting posts for our satellite dishes. In exchange for the hopefully minor blemish on our landscapes that 5G may bring, we’d all wave a cheerful good-bye to dead spots.
All these things must happen, and in relatively quick succession, in order for telcos to afford the infrastructural overhaul they now have no choice but to make.