Intel has announced a shift in strategy that impacts its XPU and data-center product roadmap.
XPU is an effort by Intel to combine multiple pieces of silicon into one package. The plan was to mix CPU, GPU, networking, FPGA, and AI accelerator and use software to choose the best processor for the task at hand.
That's an ambitious undertaking, and it looks like Intel is admitting that it can't do it, at least for now.
Jeff McVeigh, corporate vice president and general manager of the Super Compute Group at Intel, presented an update to the data-center processor roadmap that involves taking a few steps back. Its proposed combination CPU and GPU, code-named Falcon Shores, will now be a GPU-only chip.
"A lot has changed in the past year. Generative AI is transforming everything. And from our standpoint, from Intel's standpoint, we feel it's premature to be integrating the CPU and GPU for the next-generation product," McVeigh said during a press briefing at the ISC High Performance conference in Hamburg, Germany.
The previous plan called for the CPU and GPU to be on the same development cycle, but the GPU could take longer to develop than the CPU, which would have meant the CPU technology sitting idle while the GPU was being developed. Intel decided that the dynamic nature of today's market dictates a need for discrete solutions.
"I'll admit it, I was wrong. We were moving too fast down the XPU path. We feel that this dynamic nature will be better served by having that flexibility at the platform level. And then we'll integrate when the time is right," McVeigh said.
The result is a significant change in Intel's roadmap.
In March, Intel scrapped a supercomputer GPU code-named Rialto Bridge, which was to be the successor to the Max Series GPU, code-named Ponte Vecchio, which is already on the market.
The new Falcon Shores chip, which is the successor to Ponte Vecchio, will now be a next-generation discrete GPU targeted at both high-performance computing and AI. It includes AI processors, standard Ethernet switching, HBM3 memory, and I/O at scale, and it's now due in 2025.
McVeigh said that Intel hasn't ruled out combining a CPU and GPU, but it's not the priority right now. "We will at the right time … when the window of weather is right, we'll do that. We just don't feel like it's right in this next generation."
Other Intel news
McVeigh also talked up enhancements to Intel's oneAPI toolkit, a family of compilers, libraries, and programming tools that can execute code on the Xeon, Falcon Shores GPU, and Gaudi AI processor. Write code once, and the API can pick the best chip on which to execute it. The latest update delivers speed gains for HPC applications with OpenMP GPU offload, extended support for OpenMP and Fortran, and accelerated AI and deep learning.
On the supercomputer front, Intel has delivered more than 10,624 compute nodes of Xeon Max Series chips with HBM for the Aurora supercomputer, which includes 21,248 CPUs, 63,744 GPUs, 10.9PB of DDR memory, and 230PB of storage. Aurora is being built at Argonne National Laboratory and will exceed 2 exaFLOPS of performance when complete. When operational, it is expected to dethrone Frontier as the fastest supercomputer in the world.
Intel also discussed servers from Supermicro that appear to be aimed at taking on Nvidia's DGX AI systems. They feature eight Ponte Vecchio Max Series GPUs, each with 128GB of HBM memory, for more than 1TB of total HBM memory per system. Not surprisingly, the servers are targeted at AI deployments. The product is expected to be broadly available in Q3.
Copyright © 2023 IDG Communications, Inc.