The high energy demands of GenAI and other LLMs are accelerating the need for more power-efficient systems. AMD CEO Lisa Su is confident the company is on the right path to increase data center power efficiency by 100x in the next three years.
Everywhere you look, there is a new AI service promising to improve your personal or work life. Google Search now incorporates its Gemini AI to summarize search results, but at the cost of roughly ten times the energy of a non-AI search, often with underwhelming results. The global popularity of generative AI has accelerated the need for rapid expansion of data centers and the power to run them.
Goldman Sachs estimates that data center power demand will grow by 160% by 2030. This is a serious problem for regions like the US and Europe, where the average age of the power grid is roughly 50 years and 40 years, respectively. In 2022, data centers consumed 3% of US power, and projections suggest this will rise to 8% by 2030. "There's no way to get there without a breakthrough," says OpenAI co-founder Sam Altman.
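To put those growth figures in perspective, a quick back-of-the-envelope calculation shows the annual rate they imply. This is a sketch, not part of the Goldman Sachs estimate: the 160% total growth is from the figure above, while the roughly seven-year window (2023 to 2030) is an assumption for illustration.

```python
# Annualized growth implied by a 160% total increase in data center
# power demand over an assumed 7-year window (2023-2030).
total_growth = 1.60                        # +160% means demand ends at 2.6x today
years = 7
final_multiple = 1 + total_growth          # 2.6x
cagr = final_multiple ** (1 / years) - 1   # compound annual growth rate

print(f"Final demand multiple: {final_multiple:.1f}x")
print(f"Implied annual growth: {cagr:.1%}")
```

A sustained ~15% annual increase in demand is what makes the aging grids above such a bottleneck.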
AMD CEO Lisa Su discussed past successes and future plans for improving compute node efficiency at the ITF World 2024 conference. Back in 2014, AMD committed to making its mobile CPUs 25 times more energy efficient by 2020 (the "25x20" goal). The company exceeded that target, achieving a 31.7x improvement.
In 2021, AMD saw the writing on the wall regarding the exponential growth of AI workloads and the power required to run these complex systems. To help curb that demand, AMD set a "30x25" goal: a 30x improvement in compute node energy efficiency by 2025, built on several key areas.
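The scale of these multiples is easier to appreciate as an annual rate. A minimal sketch, assuming 2014-2020 as the window for 25x20 and 2020-2025 for 30x25:

```python
def annual_gain(multiple: float, years: int) -> float:
    """Compound annual efficiency improvement implied by an overall multiple."""
    return multiple ** (1 / years) - 1

# 25x20: a 25x target over 6 years; AMD reported a 31.7x result.
print(f"25x20 target: {annual_gain(25, 6):.1%} per year")
print(f"25x20 actual: {annual_gain(31.7, 6):.1%} per year")

# 30x25: a 30x compute node efficiency target over 5 years.
print(f"30x25 target: {annual_gain(30, 5):.1%} per year")
```

In other words, 30x25 requires roughly doubling compute node efficiency every year, which is why it spans process, packaging, architecture, and software rather than any single lever.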
It starts with improvements in process node and packaging, the fundamental building blocks of CPU/GPU manufacturing. Moving to 3nm Gate-All-Around (GAA) transistors, an evolution of today's FinFET 3D transistors, improves power efficiency and performance per watt. Additionally, the continual refinement of packaging techniques (e.g., chiplets, 3D stacking) gives AMD the flexibility to combine various components in a single package.
The next area of focus is AI-optimized accelerator hardware architectures. Neural Processing Units (NPUs) have appeared in mobile SoCs, such as the Snapdragon 8 Gen series, for years now. Earlier this year, AMD released the Ryzen 8700G, the first desktop processor with a built-in AI engine. This dedicated hardware lets the CPU offload compute-intensive AI tasks to the NPU, improving efficiency and lowering power consumption.
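As a loose illustration of the offload idea, a host runtime might route AI-heavy work to an NPU when one is present and fall back to the CPU otherwise. This is a toy sketch, not AMD's actual software stack; every device name and function here is a hypothetical placeholder.

```python
# Toy sketch of CPU->NPU task dispatch. All names below are hypothetical
# illustrations, not AMD's real runtime or driver APIs.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    supports_ai: bool  # does this device accelerate AI inference?

def dispatch(task: str, is_ai_task: bool, devices: list[Device]) -> str:
    """Route AI-heavy tasks to an AI-capable device if one exists."""
    if is_ai_task:
        for d in devices:
            if d.supports_ai:
                return f"{task} -> {d.name}"
    # Everything else (or systems without an NPU) runs on the CPU.
    return f"{task} -> cpu"

devices = [Device("cpu", False), Device("npu", True)]
print(dispatch("image_upscale", True, devices))   # AI work lands on the NPU
print(dispatch("spreadsheet", False, devices))    # general work stays on the CPU
```

The efficiency win comes from the routing itself: the NPU executes inference at lower power than the general-purpose cores, so the scheduler's job is simply to send eligible work there.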
The final pillars of the 30x25 goal are system-level tuning and software/hardware co-design. System-level tuning, an extension of the advanced packaging initiative, focuses on reducing the energy needed to physically move data within these computing clusters. Software/hardware co-design aims to tune AI algorithms to work more effectively with next-generation NPUs.
Lisa Su is confident that AMD is on track to meet the 30x25 goal, and she sees a pathway to a 100x improvement by 2027. AMD and other industry leaders are all working to meet the power demands of our AI-enhanced lives in this new era of computing.