Intel launches 3rd-gen Xeon Scalable processors to help businesses deploy AI solutions faster – Times of India
Intel has introduced its 3rd-gen Intel Xeon Scalable processors, enabling customers to accelerate the development and use of artificial intelligence (AI) and analytics workloads running in data centres. Intel’s new 3rd Gen Xeon Scalable processors make AI inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition and language modeling.
Facebook recently announced that 3rd-gen Intel Xeon Scalable processors are the foundation for its newest Open Compute Platform (OCP) servers, and other leading cloud service providers, including Alibaba, Baidu and Tencent, have announced they are adopting the next-generation processors.
Intel is further extending its investment in built-in AI acceleration in the new 3rd Gen Intel Xeon Scalable processors through the integration of bfloat16 support into the processor’s unique Intel DL Boost technology. Bfloat16 is a compact numeric format that uses half the bits of today’s FP32 format yet achieves comparable model accuracy with minimal — if any — software changes required, claims the company.
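To see why bfloat16 needs so few software changes, note that it is simply the upper 16 bits of an IEEE 754 float32 (1 sign bit, the same 8 exponent bits, and 7 rather than 23 mantissa bits), so it keeps FP32’s dynamic range while trading away precision. The sketch below, written in plain Python (not Intel’s implementation, and the rounding choice is an assumption for illustration), shows the conversion:

```python
import struct

def float_to_bits(x: float) -> int:
    """Pack a float into its 32-bit IEEE 754 bit pattern."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_to_float(bits: int) -> float:
    """Unpack a 32-bit IEEE 754 bit pattern back into a float."""
    return struct.unpack(">f", struct.pack(">I", bits))[0]

def to_bfloat16(x: float) -> float:
    """Reduce a float32 to bfloat16 by keeping only its top 16 bits
    (1 sign + 8 exponent + 7 mantissa). Round-to-nearest-even is
    assumed here; the result is widened back to float for comparison."""
    bits = float_to_bits(x)
    rounding = 0x7FFF + ((bits >> 16) & 1)  # round half to even
    bits = (bits + rounding) & 0xFFFF0000   # drop the low 16 bits
    return bits_to_float(bits)

print(to_bfloat16(3.141592653589793))  # 3.140625: full FP32 range, ~2-3 decimal digits of precision
```

Because the exponent field is identical to FP32’s, an FP32 model can typically run in bfloat16 without rescaling, which is why frameworks can adopt it with little code change.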
The addition of bfloat16 support accelerates both AI training and inference performance on the CPU. Intel-optimised distributions of leading deep learning frameworks (including TensorFlow and PyTorch) support bfloat16 and are available through the Intel AI Analytics toolkit. Intel also delivers bfloat16 optimisations in its OpenVINO toolkit and the ONNX Runtime environment to ease inference deployments.
The 3rd Gen Intel Xeon Scalable processors (code-named “Cooper Lake”) evolve Intel’s 4- and 8-socket processor offering. The processor is designed for deep learning, virtual machine (VM) density, in-memory database, mission-critical applications and analytics-intensive workloads.
As part of the 3rd-gen Intel Xeon Scalable platform, the company also announced the Intel Optane persistent memory 200 series, providing customers up to 4.5TB of memory per socket to manage data-intensive workloads, such as in-memory databases, dense virtualisation, analytics and high-performance computing.
For systems that store data in all-flash arrays, Intel announced the availability of its next-generation high-capacity Intel 3D NAND SSDs, the Intel SSD D7-P5500 and P5600. These 3D NAND SSDs are built with Intel’s latest triple-level cell (TLC) 3D NAND technology and an all-new low-latency PCIe controller to meet the intense IO requirements of AI and analytics workloads, along with advanced features to improve IT efficiency and data security.
Intel also disclosed its upcoming Intel Stratix 10 NX FPGAs, its first AI-optimised FPGAs, targeted at high-bandwidth, low-latency AI acceleration. Intel Stratix 10 NX FPGAs include integrated high-bandwidth memory (HBM), high-performance networking capabilities and new AI-optimised arithmetic blocks called AI Tensor Blocks, which contain dense arrays of lower-precision multipliers typically used for AI model arithmetic.