At the Hot Chips conference last week, Intel showcased its latest neural network processor accelerators for both AI training and inference, along with details of its hybrid chip packaging technology, Optane DC persistent memory and chiplet technology for optical I/O.
Designed to train deep learning models at scale within a given power budget, Intel’s forthcoming Nervana NNP-T, codenamed “Spring Crest,” is built “with flexibility in mind, striking a balance among computing, communication and memory,” according to Intel. “While Intel Xeon Scalable processors bring AI-specific instructions and provide a foundation for AI, the NNP-T is architected from scratch, building in features and requirements needed to solve for large models, without the overhead needed to support legacy technology.”
The chip features four PCIe Gen 4 interconnects, support for four stacks of 8GB HBM2-2400 (high-bandwidth memory) devices, up to 24 tensor processing clusters, 64 lanes of SerDes (the functional blocks used in high-speed communications), up to 119 TOPS of performance, and 60 MB of on-chip distributed memory.

On the inference side, the Nervana NNP-I, a.k.a. “Spring Hill,” is billed by Intel as a high-performing deep learning inference chip that leverages Intel’s 10nm process technology with Xeon “Ice Lake” cores and offers programmability. “As AI becomes pervasive across every workload, having a dedicated inference accelerator that is easy to program, has short latencies, has fast code porting and includes support for all major deep learning frameworks allows companies to harness the full potential of their data as actionable insights,” the company said.
Intel’s new Lakefield hybrid CPU is the industry’s first product to combine three-dimensional stacking with a hybrid IA computing architecture, the company said, targeting new thin-form-factor, 2-in-1 and dual-display mobile devices that operate always-on and always-connected at low power. “Leveraging Intel’s latest 10nm process and Foveros advanced packaging technology, Lakefield achieves a reduction in standby power, core area and package height over previous generations of technology,” the company said.
TeraPHY is an in-package optical I/O chiplet for high-bandwidth, low-power communication developed by Ayar Labs in conjunction with Intel (see “Ayar Labs to Demo Photonics Chiplet in FPGA Package at Hot Chips”). The two companies called their demonstration last week “the industry’s first integration of monolithic in-package optics (MIPO) with a high-performance system-on-chip (SOC).” The optical I/O chiplet is co-packaged with the Intel Stratix 10 FPGA using Intel Embedded Multi-die Interconnect Bridge (EMIB) technology, “offering high-bandwidth, low-power data communication from the chip package with deterministic latency for distances up to 2 km,” Intel said, adding that the interconnect is designed for the next phase of Moore’s Law by removing the traditional performance, power and cost bottlenecks in moving data.
“To get to a future state of ‘AI everywhere,’” said Naveen Rao, Intel VP and GM, AI Products Group, “we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources. Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”