
AI Driving Renewed Interest in Processing-in-Memory

Samsung's new processing-in-memory architecture places AI computing capabilities inside a high bandwidth memory (HBM).

www.eetasia.com, May 25, 2021 – 

The ever-increasing demands of artificial intelligence (AI) mean that memory concepts that have been around for decades are getting renewed attention, and Samsung Electronics Co. Ltd's recent announcement of a high bandwidth memory (HBM) integrated with AI processing power is a good example.

First presented virtually at the International Solid-State Circuits Conference (ISSCC) earlier this year, Samsung's new processing-in-memory (PIM) architecture places AI computing capabilities inside an HBM. This is a different approach from the commonly used von Neumann architecture, which relies on separate processor and memory units to carry out millions of intricate data processing tasks. Because the von Neumann architecture processes data sequentially and requires it to move constantly back and forth between processor and memory, it creates a system-slowing bottleneck, especially under the ever-growing data volumes of large-scale processing in data centers, high performance computing (HPC) systems, and AI-enabled mobile applications.
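To see why data movement, rather than arithmetic, tends to dominate such workloads, the rough sketch below compares the time a memory-bound AI kernel spends shuttling data over an off-chip memory bus with the time spent on the computation itself. The bandwidth and throughput figures are illustrative assumptions only, not numbers from Samsung's announcement.

```python
# Toy estimate of the von Neumann bottleneck for a memory-bound AI kernel.
# All numbers below are illustrative assumptions, not vendor specifications.

def kernel_time_s(bytes_moved, flops, bandwidth_gbs, compute_gflops):
    """Return (data-movement time, compute time) in seconds for one kernel pass."""
    transfer_s = bytes_moved / (bandwidth_gbs * 1e9)   # time to shuttle data to/from memory
    compute_s = flops / (compute_gflops * 1e9)         # time to do the arithmetic
    return transfer_s, compute_s

# Example: an elementwise activation over 1 GB of FP16 data (2 bytes per element).
elements = 500_000_000
bytes_moved = elements * 2 * 2          # read each element once, write it back once
flops = elements * 1                    # roughly one operation per element (memory-bound)

# Assumed off-chip HBM-class bandwidth and processor throughput (hypothetical values).
transfer_s, compute_s = kernel_time_s(bytes_moved, flops,
                                      bandwidth_gbs=400, compute_gflops=10_000)

print(f"data movement: {transfer_s*1e3:.2f} ms, compute: {compute_s*1e3:.2f} ms")
# Under these assumptions the data transfer takes about 5 ms while the arithmetic
# takes about 0.05 ms: the kernel is limited by moving data, which is the gap PIM
# targets by performing simple operations inside the memory banks instead of
# shipping every operand out to the processor and back.
```

Under this toy model the kernel spends roughly 100x longer moving data than computing on it, which is the imbalance the PIM approach is meant to reduce.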
