PIM Solves the AI Data Dilemma
The growth and evolution of AI algorithms and applications have driven sharp increases in data processing requirements. It is clear that current memory solutions, even with incremental improvements in capacity and bandwidth, will not be enough to meet the evolving needs of application areas such as healthcare, speech recognition, and autonomous driving, which must process ever larger volumes of data at ever increasing rates to gain deeper insights. A challenge as big as the one facing the future of AI calls for a revolutionary breakthrough.
A way to ease the constraints that current memory places on the growth of AI applications has emerged in the form of Processing-in-Memory (PIM), and in an industry first, Samsung has incorporated PIM into High Bandwidth Memory (HBM). PIM provides a timely bridge between the growing demands of AI data processing and the memory technologies struggling to meet those demands.
PIM in and of itself is not a new technology, but it had previously been explored only as a high-level concept in academia and industry. PIM works by integrating compute and memory, giving a memory device the logic to perform computation on data locally, a task usually reserved for high-performance logic devices such as CPUs, GPUs, and NPUs. Performing computation on data locally minimizes latency, increases the rate of processing, and improves energy efficiency. Samsung has implemented the PIM concept within HBM for the first time by incorporating an AI engine called the Programmable Computing Unit (PCU) within an HBM device.
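To illustrate why computing locally in memory helps, the toy Python sketch below models the off-chip data traffic of a reduction (such as a dot product) done the conventional way versus in memory. This is purely conceptual: the function names and the simple traffic model are assumptions for illustration, not Samsung's hardware interface.

```python
# Conceptual sketch (not a real hardware API): compare the bytes that must
# cross the memory bus when a reduction over two large vectors is computed
# on the host processor versus inside the memory device itself.

def host_compute_traffic(n_elements: int, bytes_per_elem: int = 2) -> int:
    """Conventional path: both operand vectors are fetched across the
    memory bus to the processor, and the scalar result is written back."""
    reads = 2 * n_elements * bytes_per_elem   # fetch both vectors
    writes = bytes_per_elem                   # store the scalar result
    return reads + writes

def pim_compute_traffic(n_elements: int, bytes_per_elem: int = 2) -> int:
    """PIM path: the arithmetic runs next to the DRAM banks, so only the
    final scalar result crosses the bus."""
    return bytes_per_elem                     # read back one result

if __name__ == "__main__":
    n = 1_000_000  # one million 2-byte (e.g. FP16) elements
    print("host traffic:", host_compute_traffic(n), "bytes")
    print("PIM traffic :", pim_compute_traffic(n), "bytes")
```

The point of the sketch is the asymmetry: conventional traffic grows linearly with the data size, while in-memory reduction keeps bus traffic constant, which is the source of the latency and energy savings described above.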