A TowerXchange Interview
5G may be all the buzz, but transformation at the tower has already begun as tower owners invest to meet the needs of 4.5G. And there’s much more transformation to come, starting with cluster sites at base stations and, before long, extending to rooftops and parking lots. Schneider Electric is at the forefront of this transformation, working with leading telecom and tower operators to understand future requirements and define future-ready infrastructure.
TowerXchange: Please introduce yourself and Schneider Electric’s role in the communications infrastructure ecosystem.
Steven Carlini, VP Innovation and Data Center, Schneider Electric: Schneider Electric is uniquely positioned as a provider of secure power and cooling systems for both the data center and telecom tower markets. We have attended several TowerXchange Meetups to investigate the convergence of these infrastructure classes, particularly when it comes to edge computing, and to understand how Schneider can be an enabler.
Schneider Electric manufactures a range of UPS and cooling solutions, and also architects data center solutions – from medium voltage and low voltage switchgear to busway, technical cooling systems and IT room solutions: IT racks, PDUs, cooling systems, telco racks, et cetera.
I’m a Vice President in Schneider’s secure power innovation team, part of the CTO office, and I spend a lot of my time investigating new technologies and new markets, writing and presenting about those opportunities, and conceiving and incubating solutions for different applications. The software group also reports to the CTO office, so we have a robust understanding of data center management software, and of how communication cards integrate with gateways to send performance information to the cloud. Edge computing has become an increasing focus of my research. As we progress from 4.5G and LTE-A to 5G, data center architecture needs to evolve to enable this technology innovation.
TowerXchange: What do you see as the edge computing opportunity for cell site owners, both tower companies and Mobile Network Operators (towercos and MNOs)?
Steven Carlini: 4.5-5G requires content delivery and computational power in closer proximity to users, and Mobile Edge Computing sites (MECs) have to go somewhere, so it is logical to look at proximity to base stations.
From a technology point of view, it can be tempting to say, “there are a lot of cell sites with space for equipment – it’s logical to apply MEC there.” But when these 2-3G tower sites were conceived, it was with a specific application in mind. For example, there are constraints on the amount of power delivered to these sites. There are also heavy telecommunications regulations which (along with associated fees) vary from country to country and State to State. There is the question of moving fibre from one location to another. And there is a complex ecosystem of stakeholders in cell sites: who will be the ultimate decision maker on locating MEC at a cell site – the towerco, the real estate owner, or the MNO?
The way MNOs tend to map their coverage zones and cluster circles in dense areas makes cell sites a natural value proposition for locating MECs in densely populated areas. But there’s also demand for computational power and content delivery in less densely populated and rural areas.
TowerXchange: What do you see as Schneider Electric’s potential role as an enabler of edge computing at cell sites?
Steven Carlini: Schneider is committed to being an enabler of edge computing. We’re exploring different types and sizes of enclosures, different battery technologies according to anticipated runtime, and where MECs might require more power than is currently delivered to a site, we’re exploring integration with renewables, for example hooking into solar grids and fuel cells.
We’re able to design new solutions for deployment in densely populated areas: narrow streets, parking garages, rooftops – locations that are often more exposed in order to sustain the signal. Ultimately, it’s not Schneider making the decision on where these urban sites are located – our clients come to us with a grid reference and an idea of what they need to put there, and Schneider is able to devise an end-to-end solution encompassing power, cooling, enclosures and management software.
TowerXchange: You talk about rooftops and parking lots as potential early sites for cluster sites, but providing discreet, quiet UPSs can be challenging in such locations – how do we overcome some of these practical challenges?
Steven Carlini: There could be several different approaches to that. One theoretical answer to the space and noise constraints is to use liquid cooling, which eliminates fans, compressors and condensers, bringing noise levels down to almost nothing. Liquid cooling also doesn’t blow air through, so there’s less filter clogging, and you can create a more concealed environment. But the challenge is that you have to have compatible telco and IT equipment.
In this age of increasing awareness of emissions, generation can’t be based on fossil fuels; it has to be renewable. But solar arrays need more real estate than is typically available at a cell site. It’s a problem the industry hasn’t solved yet, although we’re exploring what we can do with increased photovoltaic (PV) cell performance in terms of remote power generation.
Schneider has a microgrid solution for different remote applications, such as a mine or oil exploration site that needs its own redundant power in a confined area, but such applications are not as dispersed as cell towers.
So many of the 5G demonstrations I see are in stadiums or similar environments with line of sight for antennas, and easy access to power; but how is 5G going to work in this distributed world? Perhaps 5G use cases will initially be confined to densely populated areas – for applications like factory automation, stadiums, shipyards and harbours.
TowerXchange: In tower industry terms, availability tends to connote uptime targets between 99.5% and 99.97%. In data center terms, uptime means 100%. How do we economically close the gap?
Steven Carlini: Data centers achieved this level of uptime by having redundant generators, cooling systems, et cetera. You can achieve almost any level of uptime you wish by adding increasing levels of redundancy – but that reduces efficiency. The most extreme cases are the (rare) Tier IV data centers that have fully redundant chillers – they can use four times the electricity!
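To put the redundancy-versus-uptime trade-off in concrete terms, here is a minimal sketch (an illustrative model, not a Schneider calculation) of how duplicating a component improves system availability, assuming independent failures:

```python
def parallel_availability(a, n):
    """Availability of n redundant components, each with availability a.
    The system is up as long as at least one component is up, and
    failures are assumed independent -- a simplifying assumption."""
    return 1 - (1 - a) ** n

HOURS_PER_YEAR = 8760

# Compare the tower-industry and data-center availability targets
# quoted above, with and without one level of redundancy:
for a in (0.995, 0.9997):
    for n in (1, 2):
        avail = parallel_availability(a, n)
        downtime_h = (1 - avail) * HOURS_PER_YEAR
        print(f"unit {a:.2%}, N={n}: system {avail:.6%}, "
              f"~{downtime_h:.3f} h/yr downtime")
```

A single 99.5% unit implies roughly 44 hours of downtime a year; duplicating it drops that to about 13 minutes, which is why closing the gap with pure redundancy gets expensive quickly.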
Whether it is a data center or a cell site, if you standardize manufacturing and build in enough autonomy, you can cover most outages. It’s advantageous to have a UPS to take care of most blips, but longer outages are going to rely on battery banks. You can spend as much as you can justify on batteries to increase uptime.
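As a rough illustration of that batteries-for-uptime trade-off, autonomy can be estimated from bank capacity and load; the depth-of-discharge and efficiency figures below are assumptions for the sketch, not vendor specifications:

```python
def runtime_hours(bank_kwh, load_kw, depth_of_discharge=0.8, inverter_eff=0.95):
    """Rough battery autonomy estimate.
    depth_of_discharge and inverter_eff are illustrative assumptions;
    real values depend on chemistry, temperature and equipment."""
    usable_kwh = bank_kwh * depth_of_discharge * inverter_eff
    return usable_kwh / load_kw

# e.g. a hypothetical 40 kWh bank backing a 10 kW cell site load:
print(f"{runtime_hours(40, 10):.2f} h")  # ~3 h of autonomy
```

Doubling the bank doubles the runtime, which is the sense in which one can "spend as much as you can justify on batteries to increase uptime."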
The cooling systems in larger data centers are highly complex. For smaller, edge data centers, redundancy must be standardized, and one can fail over to different types of cooling. This all needs to be addressed in the design phase, so we can leverage standardization and volume manufacturing, with customization to meet Service Level Agreements (SLAs) achieved by adding as many batteries as needed.
TowerXchange: To what extent can we hope to achieve a standardized approach?
Steven Carlini: There’s been very little standardization to date. Even the Internet Giants, as they move from one data center to the next, are always changing something.
But if 4.5-5G and edge computing is going to require thousands or tens of thousands of new sites, then those sites need to be standardized. We need to drive costs down by building in volume.
Standardization is also good for reliability and availability. If you standardize across 10,000 sites then servicing and troubleshooting becomes much more efficient. If every site is different, it’s a nightmare.
Schneider has developed cloud-based management systems, custom designed for data centers. Our thinking now is to take data from all these new sites, centralize that data in the cloud, and develop analytics to monitor performance and benchmark sites against each other, enabling us to diagnose any recurring problems.
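In simplified form, that kind of fleet-level benchmarking amounts to flagging sites whose telemetry deviates sharply from the fleet norm. The site names and readings below are hypothetical, and a z-score test stands in for whatever analytics a real management platform would run:

```python
from statistics import mean, stdev

def flag_outliers(site_metrics, threshold=2.0):
    """Return the sites whose metric lies more than `threshold`
    standard deviations from the fleet mean.
    site_metrics: {site_id: reading}."""
    values = list(site_metrics.values())
    mu, sigma = mean(values), stdev(values)
    return {site: v for site, v in site_metrics.items()
            if sigma and abs(v - mu) / sigma > threshold}

# Hypothetical battery-temperature readings (deg C) across a small fleet:
readings = {
    "site-001": 24.8, "site-002": 25.1, "site-003": 24.9, "site-004": 25.0,
    "site-005": 25.2, "site-006": 24.7, "site-007": 25.1, "site-008": 24.9,
    "site-009": 25.0, "site-010": 41.0,
}
print(flag_outliers(readings))  # flags only site-010
```

With 10,000 standardized sites feeding one data model, the same test surfaces recurring problems across the whole estate rather than one ticket at a time.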
Once data is in the cloud, APIs can be developed to enable reporting. The first wave of management systems generated primitive emails, which can quickly become overwhelming, but the reporting of data is now normalized, reported at regular intervals, and integrated into dashboards with access to images, highlighting problems.
TowerXchange: Do you buy in to the notion that fibre regen (signal boost) sites are also potential edge computing locations?
Steven Carlini: Whether it’s a signal boost site or a BTS hotel, if there’s enough power, there’s an opportunity to change from having just one dedicated function to multiple use sites. And MEC could be one of those multiple purposes.
We see this already where cable TV companies utilize space in their head end systems, or where telcos leverage central offices.
While there’s a way to go before it cools communications infrastructure, liquid cooling isn’t a new technology. For example, supercomputers and high performance computing sites are heavy users of liquid cooling. Even consumers can buy premium US$10,000 gaming PCs that are liquid cooled. And with the advent of AI, where the GPUs being used can generate three to four times typical heat output, processing needs are driving advances in liquid cooling. Liquid cooling can be direct to chip, with cooling to heat sinks on the backs of processors, or it can be immersion, where the circuit boards with processors on them are immersed in dielectric fluid.
Turning to the power side for urban sites, lithium-ion battery technology lends itself to being packaged in different ways. Lithium-ion is not as susceptible to extreme temperatures as lead acid – it can operate in extremes without degradation. The biggest potential issue is lithium-ion batteries getting hot during the charging and discharging cycle, but one could use the same liquid cooling system for batteries.
TowerXchange: How would the power load differ between a conventional cell site and one with a co-located MEC?
Steven Carlini: A conventional cell site might have 10-12kW of power to the site. While there are many different potential sizes of MEC, you might be adding approximately 20kW of load. So where do you get the power to double or treble the load on a site?
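That doubling or trebling can be sketched with a simple power budget. The overhead factor below (total MEC draw relative to its IT load, covering cooling and UPS losses) is an assumed illustrative value, not a measured one:

```python
def upgraded_site_load_kw(radio_kw, mec_it_kw, mec_overhead=1.5):
    """Rough power budget for a cell site with a co-located MEC.
    mec_overhead is an assumed PUE-style ratio of total MEC draw
    (IT + cooling + conversion losses) to IT load -- illustrative only."""
    return radio_kw + mec_it_kw * mec_overhead

# A 12 kW radio site plus a 20 kW-IT-load MEC:
print(f"{upgraded_site_load_kw(12, 20):.0f} kW total")  # 42 kW
```

Even before cooling overhead, 12 kW of radio plus 20 kW of IT load is nearly triple the original supply, which is the gap the local-generation discussion below is trying to close.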
Getting grid power to that site will take time, so at the very least the site owner will need governance processes and an oversight committee to collaborate with utility companies. There is often no business case to run power to remote sites, so we need to innovate local power generation.
TowerXchange: Who is going to be the customer – MNOs, colocation data center companies, FANGs (Facebook, Amazon, Netflix, Google) and what will be their requirements of edge computing?
Steven Carlini: I’d add to your list the IP vendors that are engaged with this. As technology changes, and more open software is disaggregated from hardware, and you have the move to more open systems for NFV (Network Function Virtualization), this opens the door to IP companies and cable TV providers also trying to make a play.
At the moment, the most demanding requirements tend to come from the Internet Giants who in collaboration with colocation data center companies, could deploy the micro data center architecture that 4.5-5G will need.
Carriers are typically less experienced and don’t want to self-deploy the data center architecture needed – they’re also spending their money on spectrum.
We have people asking us to help from all those different groups, and Schneider will do our best to enable their visions. There will be different groups deploying the architecture of 5G, and a battle to control the customer – but those end customers will be in the strongest position and will ultimately drive how this is built out.
TowerXchange: In conclusion, what is Schneider Electric’s vision of a combined cell site / MEC for the 5G era? What do we need to know about the physical conditions, security, power and maintenance of such a site?
Steven Carlini: Depending on form factor, there will be a mix of AC and DC equipment at such sites. The answer is to try to architect systems in such a way that we can reproduce them to meet different requirements for power load, security, cooling and the management of those systems. There will be an opportunity to bundle off the shelf solutions, but in the future, I believe we will see a more purpose-built, standardized system that delivers the necessary runtime and cooling.
It may start out as a modular solution, but to reach scale it’ll be manufactured in a single, containerized enclosure – that product could be 5-10 years from now.
If we’re going to use the spectrum the industry is calling for, and if we’re going to realize the 1ms latency that will make 5G genuinely transformational, then data centers must be dispersed. That value proposition requires mass production; otherwise there is a risk that 5G only works in pockets and is not broadly adopted. Schneider stands ready to enable the mass production of Mobile Edge Computing.
August 5, 2019