One of the big movements in the tech sector is the rise of AI. From self-driving cars to smart assistants, AI is nearing reality. To power more complex AI workloads, both AMD and Nvidia have built custom deep learning accelerators. Not one to be left out of the loop, Intel is getting into the fray, supplementing its current Xeon Phi compute accelerators with the new Lake Crest deep neural network accelerator.
Unlike Nvidia’s and AMD’s solutions, which are modified GPU designs, or Xeon Phi, which is modified x86, Lake Crest is a whole new architecture tailored for deep learning. This Flexpoint architecture is combined with HBM2 on an interposer in a multi-chip module (MCM), not unlike AMD’s Fiji. In total, each Lake Crest chip features 32 GB of HBM2 delivering 1 TB/s of bandwidth, with each 8 GB stack getting its own memory controller. Within each Lake Crest chip there are 12 compute clusters, each with several cores. The clusters are connected by a new interconnect with a total of 12 links, each claimed to be 20x faster than PCIe.
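The quoted memory figures can be sanity-checked with some quick arithmetic. This sketch assumes the aggregate bandwidth is split evenly across the four HBM2 stacks, which is a reasonable guess given that each stack has its own controller, but not a figure Intel has published:

```python
# Back-of-envelope check of the Lake Crest memory specs quoted above.
STACK_CAPACITY_GB = 8          # per the article: 8 GB per HBM2 stack
TOTAL_CAPACITY_GB = 32         # per the article: 32 GB total
TOTAL_BANDWIDTH_GBPS = 1000    # per the article: 1 TB/s aggregate

# Number of stacks implied by the capacity figures.
stacks = TOTAL_CAPACITY_GB // STACK_CAPACITY_GB

# Assumption: bandwidth is divided evenly across stacks.
per_stack_bandwidth = TOTAL_BANDWIDTH_GBPS / stacks

print(f"{stacks} stacks of {STACK_CAPACITY_GB} GB each")
print(f"~{per_stack_bandwidth:.0f} GB/s per stack (assuming an even split)")
```

That works out to four stacks at roughly 250 GB/s each, which is in line with what HBM2 was expected to deliver per stack at the time.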
In the future, Intel also plans to combine a Xeon or Xeon Phi with a Lake Crest chip to form Knights Crest, an all-in-one solution. For now, Lake Crest will be paired with Skylake Xeons and Knights Mill Xeon Phis. The end goal is a 100x improvement in machine learning performance by 2020. With so many different options available, it will be interesting to see whether certain platforms become dominant or they all coexist side by side going forward.