Hello and welcome to another episode of Climate Watch. I’m Fei Fei.
In today’s episode, I’m taking you to the city of Guiyang in the southwestern part of China.
If you’re not familiar with China’s geography, Guiyang is the capital of Guizhou Province—home to many of those towering super bridges you’ve seen perched on mountaintops.
But beyond impressive infrastructure, the province is also known for something else: data centers.
That’s what brought me here—to Gui’an New Area, a district known for its data center cluster. And I had the chance to step inside one of them.
The moment I entered the building, I was greeted by a constant, low hum.
The space was more organized—and less high-tech looking—than I’d expected.
Before visiting, I imagined rows upon rows of shelves packed with blinking servers, wires dangling like jungle vines, and lights flickering to show unimaginable computing power.
But what I saw instead were aisles of closed doors—almost like a tightly packed apartment block. Apart from the hum, there didn’t seem to be much going on.
But that’s just what my untrained eyes could see.
Jian Chonghai, an engineer overseeing power operations at this China Mobile data center, explained why I wasn’t seeing any blinking lights or exposed wires.
The servers are all hidden—behind those doors.
And no, it's not for aesthetic reasons. The setup helps improve airflow, reduce energy use, and prevent dust from reaching the equipment.
"Here we use hot and cold aisle containment. The hot air is directed into a dedicated hot aisle and then cooled using air conditioning. This helps avoid energy loss so that all the energy is only used for cooling the servers and other equipment. This way, we can maximize the efficiency of the air conditioning system."
In other words, the design stops hot and cold air from mixing, so the air conditioning cools only the servers’ hot exhaust instead of chilling the entire room again and again.
And that’s just one cooling method. Jian told me they also use two other systems: maglev air conditioning and liquid cooling.
"Take this building as an example — we use in-row cooling systems with magnetic levitation compressors. There are 80 large outdoor units and 640 indoor units."
I looked it up—these maglev AC units can cost up to $300,000 each. But the investment seems to pay off: they can reduce energy use by 30 to 40%.
Spending just an hour in one room of one building gave me a tiny glimpse into the complexity of data center operations.
Earlier this year, the International Energy Agency released a report called Energy and AI, which explores the role data centers play in our energy systems.
The report breaks down how data centers use energy: about 90% of their electricity goes to running servers and cooling them.
Most of us know what a server is—we might even have a small one at home. But data centers don’t just run one or two servers; they run thousands. According to the IEA, servers alone account for around 60% of a data center’s electricity use.
Because servers run non-stop, cooling equipment is critical. As Jian showed me, that includes air conditioning and humidity controls. In highly efficient data centers, cooling makes up only 7% of energy use—but in older or less efficient ones, it can be more than 30%.
This Gui’an data center mostly offers hosting services—storing and managing clients' servers and providing networking support using China Mobile’s extensive telecom infrastructure.
But the future lies in artificial intelligence.
Li Haiyan is an Intelligent Computing Product Manager at China Mobile. She oversees the development and operation of AI-related products and smart data center services.
Li told me data centers are the “soil” of digital technologies—from cloud storage to AI. But while cloud backups require moderate computing power, AI needs far more.
"The desktop computers we used at home represent an early and limited form of computing power. In contrast, traditional CPU-based servers in data centers provide greater capabilities and are typically used for cloud computing tasks. Today, GPU-based systems have greatly advanced computing performance — a single GPU can deliver several hundred times the processing power of a standard CPU. This has led to a substantial boost in our overall compute capacity."
To support AI workloads, the data center is upgrading its hardware. She showed me four new buildings under construction—part of their expansion plan to meet the growing demand driven by tools like DeepSeek.
"At the S-layer—representing the foundational hardware infrastructure—we have strengthened our computing power, enabling us to offer intelligent computing services such as AI model inference and training."
But higher computing power means higher energy demand—and that’s Jian’s next challenge.
For AI workloads, they now rely on liquid cooling systems.
"Liquid cooling technology offers an effective way to reduce energy consumption by directly cooling key components such as CPUs and GPUs. It concentrates on heat dissipation at the chip level, allowing for precise and efficient thermal management of high-performance computing hardware."
They also make use of the natural environment—through a method called free cooling.
"In winter, when the outdoor temperature drops below 8°C, the compressor can stop running. Instead, the system uses the refrigerant pump to harness natural outdoor cooling, achieving the same cooling effect for the data center without relying on the compressor."
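The switch Jian describes is essentially a temperature threshold. Here’s a minimal sketch of that decision logic (the function and names are my own illustration, not China Mobile’s actual control software):

```python
FREE_COOLING_THRESHOLD_C = 8.0  # threshold Jian mentioned for winter operation

def cooling_mode(outdoor_temp_c: float) -> str:
    # Below the threshold, the compressor can stop entirely: a refrigerant
    # pump circulates coolant against the cold outdoor air ("free cooling").
    # Above it, mechanical (compressor-driven) cooling takes over.
    if outdoor_temp_c < FREE_COOLING_THRESHOLD_C:
        return "free cooling (refrigerant pump only)"
    return "mechanical cooling (compressor running)"

print(cooling_mode(5.0))   # a cold winter day
print(cooling_mode(25.0))  # a mild early-summer day like my visit
```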
Jian is proud of their local weather.
"We have a very pleasant environment here. For example, with the Grain in Ear solar term just passed and the Summer Solstice approaching, the weather remains quite mild — today, even in early summer, the outdoor temperature is only around 25 or 26 degrees Celsius."
Guiyang markets itself as “爽爽的贵阳”—meaning “cool and refreshing Guiyang.” And it’s true: I visited in early June, and it was a comfortable 22°C—much cooler than Shenzhen, where temperatures hit 35°C that same week.
That’s why this data center cluster exists in Gui’an—more than 1,000 kilometers from coastal tech hubs like Shenzhen.
By the end of last year, Gui’an had 23 operational data centers, including ones by Tencent and Huawei.
All of this ties into a national program China launched in 2022 called “East Data, West Computing”—or 东数西算.
At first glance, it seems counterintuitive: most data is generated in the populous, economically powerful east. Why process and store it in the west?
According to an official explanation published in 2022, the move is designed to cut energy use.
"Currently, there is strong demand for computing power in eastern China. However, the eastern region faces challenges in climate, resources, and environmental conditions, which are not conducive to the development of low-carbon, green data centers. By relocating computing infrastructure to the western region, it is possible to leverage advantages such as favorable climate, abundant energy resources, and better environmental conditions... [this] supports the low-carbon, green, and sustainable development of China’s data centers and contributes to the goals of carbon peaking and carbon neutrality."
Under the initiative, China is building eight national computing hubs and ten data center clusters. In western regions like Guizhou, Inner Mongolia, Gansu, and Ningxia, the focus is on tasks like AI training, data storage, and analytics—things that don’t require instant response.
Guizhou’s cluster mainly serves Guangdong Province, where Shenzhen is located.
Even before this initiative, China had already begun experimenting with greener data centers. In 2021, it released a three-year plan to cap PUE—power usage effectiveness—at 1.3 for large-scale facilities.
A 2024 action plan set a target of bringing the average PUE below 1.5 by the end of 2025.
If you’re unfamiliar with PUE, it’s a ratio that measures energy efficiency. A perfect PUE is 1.0, meaning every bit of electricity is used by the servers, with nothing wasted on cooling or overhead.
The Guiyang China Mobile data center I visited boasts a PUE of 1.19. That’s on par with leading tech companies like Google, Alibaba, and Tencent.
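As a quick illustration (my own sketch, not the facility’s reporting method), the ratio works like this:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy.

    A value of 1.0 would mean zero overhead; everything above 1.0 is
    cooling, power-distribution losses, lighting, and so on.
    """
    return total_facility_kwh / it_equipment_kwh

# At a PUE of 1.19, every 1 kWh of server load costs 1.19 kWh overall,
# i.e. 0.19 kWh of overhead for cooling and everything else.
print(pue(1.19, 1.0))  # → 1.19
```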
But does a lower PUE mean a greener data center?
I asked Wang Yongzhen, an energy expert at Beijing Institute of Technology who researches data center energy use.
"A large data center can easily consume hundreds of millions or even billions of kilowatt-hours annually. The national average PUE has been pushed down from 1.5 to 1.3, or even 1.2—and with some advanced tech, 1.1. Why? Because lowering PUE means lower electricity bills. Companies now offer incentives for data center teams to improve efficiency."
But Wang also warned that PUE alone isn’t enough.
"Energy use, carbon emissions, and utilization rates are all interrelated. PUE is not a one-size-fits-all metric. Sometimes, even with a higher PUE, if carbon emissions are low, there may be a beneficial trade-off."
According to the IEA, data centers consumed around 1.5% of global electricity in 2024—not a huge share, but one growing rapidly.
Electricity use in data centers grows about 12% a year—four times faster than total electricity demand.
China and the U.S. will account for nearly 80% of global data center energy growth by 2030.
In China, the IEA expects electricity use by data centers to rise 170% from 2024 to 2030.
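As a back-of-the-envelope check (my own arithmetic, not a figure from the IEA report), a 170% rise over six years implies roughly 18% compound annual growth:

```python
# A 170% rise from 2024 to 2030 means 2030 use is 2.7x the 2024 level.
growth_multiple = 1 + 1.70
years = 2030 - 2024
cagr = growth_multiple ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 18% per year
```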
Wang now considers data centers one of China’s major energy-intensive industries, alongside steel, cement, and petrochemicals. He estimates that by 2030, Chinese data centers will demand around 105 gigawatts, use 26.3 billion liters of water, and emit 310 million metric tons of CO₂.
And there’s no going back on this wave of AI advancement. So the key isn’t just achieving a lower PUE.
We need to ask deeper questions:
Who’s supplying electricity to data centers? How green is that power?
How can society make better use of the enormous heat generated by servers?
And as we move toward electrification in other sectors—like transport—what other roles can data centers play, beyond being pure energy consumers?
Scholars like Wang Yongzhen envision a new kind of system—one where data centers don’t just use electricity and cooling, but also interact with the broader energy infrastructure. A coordinated network that links computing power, electricity, and thermal energy.
"One of the key goals is to increase the share of green electricity used in data centers—to make AI computing greener. The second goal is to reduce energy consumption—not just through isolated technical upgrades, but by improving overall efficiency through coordination. The third goal is to enable data centers to interact with the power grid, helping shift loads, balance supply and demand, and provide grid flexibility."
A flexible yet stable grid that taps into every possible resource across society. I’ve heard similar visions from other experts too.
In the future, individual EVs could become part of the power system: charging when there’s excess electricity, and feeding power back to the grid when demand is high.
Data centers, too, could act as buffers—especially useful when the grid has to cope with unstable wind and solar power supplies.
"We’re operating within the broader context of building a new power system. As the share of renewable energy increases, grid operators face growing challenges. It’s becoming harder to balance loads, and it’s not feasible to build enough transmission infrastructure to absorb all the new renewable output.
"But AI-focused data centers—especially those doing batch processing or training models—don’t need to run in real time. They can shift workloads across time or location, allowing their energy use to better match grid conditions. That flexibility could be a game-changer."
But to make that vision a reality, China needs to get ready—through improved infrastructure, smarter market design, and better talent training.
Wang believes a more market-oriented power system could encourage more players to join this network and contribute.
"From now through 2030, we’re entering a pilot and demonstration phase. National and provincial agencies—including the Ministry of Industry and Information Technology—are pushing for system integration. We’re running pilot projects in places like Inner Mongolia, Zhangjiakou, and Qinghai, trying to connect computing, electricity, and thermal systems."
"Through these pilots, companies are figuring out which technologies and platforms are needed. It’s also a fertile ground for innovation—driving upgrades in equipment and creating new tech iterations. And yes, revenue is a real motivator. It might sound utilitarian, but innovation that leads to profit is a powerful force for progress."
But there's also a cultural and organizational challenge: getting different teams within a data center to communicate better.
Wang pointed out that IT engineers and operations staff still don’t talk enough—and that could become a serious bottleneck.
In his view, people like Engineer Jian Chonghai and Product Manager Li Haiyan will need to collaborate more closely.
They need to stay on top of developments in both computing and energy systems, to keep pace with the vision of a flexible, integrated power and computing network.
A huge thank you to Associate Professor Wang Yongzhen from the Beijing Institute of Technology, Engineer Jian Chonghai, and Product Manager Li Haiyan of China Mobile in Guizhou—and to everyone else who helped make this episode possible.
There’s still so much more we could explore when it comes to AI and energy.
If you liked this episode and want to hear more, share this post.