Intel Promises Better AI Performance, Longer Battery Life with Upcoming "Lunar Lake" Chips
Built to meet Microsoft's Copilot+ requirements, the Lunar Lake chips include a 48 TOPS NPU on top of a GPU delivering up to 67 TOPS.
Intel has announced its "Lunar Lake" chips for on-device machine learning and artificial intelligence in laptops and compact computing systems, featuring a 48 TOPS neural coprocessor and the promise of a 40 percent reduction in power draw compared with the company's previous generation.
"AI is driving one of the most consequential eras of innovation the industry has ever seen," claimed Intel's chief executive officer, Pat Gelsinger, during a keynote presentation at the Computex trade show this week. "The magic of silicon is once again enabling exponential advancements in computing that will push the boundaries of human potential and power the global economy for years to come."
To power that era, Intel has announced its Lunar Lake processor architecture. This, the company says, delivers a 40 percent improvement in energy efficiency and up to 60 percent longer real-world battery life. More importantly, it comes with a fourth-generation neural processing unit (NPU) designed to accelerate on-device machine learning and artificial intelligence workloads, including generative AI (gen AI).
The Lunar Lake chips are built from three distinct components. The "compute tile" delivers a combination of energy-efficient and high-performance 64-bit x86 CPU cores, Intel's latest Xe2 "Battlemage" graphics processing cores, and the dedicated NPU 4 accelerator, which is claimed to provide 48 tera-operations per second (TOPS) of low-precision compute for AI and ML on top of the up to 67 TOPS available from the GPU.
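Taken at face value, the NPU and GPU figures Intel quotes add up to roughly 115 TOPS of combined AI throughput; the back-of-the-envelope sketch below (a simple sum in Python, with the CPU cores' unstated contribution omitted) just totals the numbers from the announcement.

```python
# Back-of-the-envelope total of the AI throughput figures Intel quotes
# for the Lunar Lake compute tile. The CPU cores' contribution is left
# out because it is not broken out in the announcement.
npu_tops = 48   # NPU 4 accelerator, per Intel
gpu_tops = 67   # Xe2 "Battlemage" GPU cores, per Intel (an "up to" figure)

combined_tops = npu_tops + gpu_tops
print(f"Combined NPU + GPU throughput: {combined_tops} TOPS")  # 115 TOPS
```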
A "platform controller tile" adds security features and a connectivity suite including Wi-Fi 7, Bluetooth 5.4, PCI Express Gen. 5 and Gen. 4, and Thunderbolt 4; finally, up to 32GB of on-package memory is designed to drop latency and enable faster access to data — helping to push down the system's overall power consumption, the company claims.
Intel's latest NPU-equipped chips follow hot on the heels of those from rival AMD, as the two compete to follow Qualcomm into powering portable devices built to Microsoft's Copilot+ specification, which mandates a minimum of 40 TOPS of NPU performance and the presence of a dedicated keyboard key for launching Windows 11's Copilot AI assistant. AMD, however, appears to have beaten its longstanding competitor in the NPU performance stakes, promising 50 TOPS from its Ryzen AI 300 series NPU against Intel's 48 TOPS.
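As a rough illustration of how both chips sit against that requirement, the short Python sketch below compares the vendors' claimed NPU figures with the 40 TOPS Copilot+ floor; the numbers are those quoted above, not benchmark results.

```python
# Compare claimed NPU throughput against Microsoft's Copilot+ PC floor.
# Figures are the vendors' own claims as quoted in the announcements,
# not measured benchmark results.
COPILOT_PLUS_MIN_TOPS = 40

claimed_npu_tops = {
    "Intel Lunar Lake (NPU 4)": 48,
    "AMD Ryzen AI 300": 50,
}

for chip, tops in claimed_npu_tops.items():
    margin = tops - COPILOT_PLUS_MIN_TOPS
    print(f"{chip}: {tops} TOPS, {margin} TOPS above the Copilot+ minimum")
```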
At the same time, Intel unveiled its latest hardware for AI at the data center level. The company published pricing for its Gaudi 2 and Gaudi 3 accelerators: a "standard AI kit" built around eight Gaudi 2 accelerators is priced at $65,000, a claimed two-thirds discount over "comparable competitive platforms," while the eight-accelerator Gaudi 3 equivalent comes in at $125,000. Intel also unveiled its Xeon 6 E-core "Sierra Forest" and P-core "Granite Rapids" chips.
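Dividing Intel's quoted kit prices through gives a sense of the per-accelerator cost, and the claimed two-thirds discount implies a rough price for the unnamed competing platforms; the sketch below simply works through that arithmetic using the figures above.

```python
# Per-accelerator prices implied by Intel's quoted eight-way kit pricing,
# plus the competitor price implied by the claimed two-thirds discount.
GAUDI2_KIT_PRICE = 65_000    # USD, eight Gaudi 2 accelerators
GAUDI3_KIT_PRICE = 125_000   # USD, eight Gaudi 3 accelerators
ACCELERATORS_PER_KIT = 8

print(f"Gaudi 2: ${GAUDI2_KIT_PRICE / ACCELERATORS_PER_KIT:,.0f} per accelerator")   # $8,125
print(f"Gaudi 3: ${GAUDI3_KIT_PRICE / ACCELERATORS_PER_KIT:,.0f} per accelerator")   # $15,625

# A "two-thirds discount" means the kit costs one third of the competing
# platform's price, so the implied competitor price is roughly:
print(f"Implied 'comparable competitive platform' price: ~${GAUDI2_KIT_PRICE * 3:,}")  # ~$195,000
```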
The Lunar Lake chips will be available in retail systems in the third quarter of the year, Intel promises, "in time for the holiday buying season." Pricing and full specifications had not been disclosed at the time of writing.