Samsung develops high-bandwidth memory with integrated AI processing


Samsung Electronics has announced that it has developed the industry’s first High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power — the HBM-PIM.

According to the company, the new processing-in-memory (PIM) architecture brings powerful AI computing capabilities inside high-performance memory to accelerate large-scale processing in data centres, high-performance computing (HPC) systems, and AI-enabled mobile applications.

Most of today’s computing systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data processing tasks. Because data must constantly shuttle back and forth between the two units, this sequential approach can become a bottleneck in data-intensive workloads.

However, the HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimised AI engine inside each memory bank — a storage sub-unit — enabling parallel processing and minimising data movement.
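The benefit of placing compute inside each memory bank can be illustrated with a toy model. The sketch below is purely conceptual and not based on Samsung's actual design: it compares a classic model, where every element crosses the bus to the processor, with a PIM-style model, where each bank reduces its own data locally and only one partial result per bank crosses the bus. The bank layout and transfer counting are illustrative assumptions.

```python
from typing import List


def von_neumann_sum(banks: List[List[int]]) -> int:
    # Classic model: every element travels over the bus to the processor.
    transfers = 0
    total = 0
    for bank in banks:
        for value in bank:
            transfers += 1  # one bus transfer per element
            total += value
    print(f"von Neumann transfers: {transfers}")
    return total


def pim_sum(banks: List[List[int]]) -> int:
    # PIM model: each bank's engine computes a partial sum in place,
    # so only one partial result per bank crosses the bus.
    partials = [sum(bank) for bank in banks]  # computed "inside" each bank
    transfers = len(partials)  # one bus transfer per bank
    print(f"PIM transfers: {transfers}")
    return sum(partials)


# Four hypothetical banks, four values each.
data = [[1, 2, 3, 4] for _ in range(4)]
assert von_neumann_sum(data) == pim_sum(data) == 40
```

With 16 elements spread across 4 banks, the classic model performs 16 bus transfers while the PIM model performs only 4, which is the intuition behind the reduced data movement and energy consumption.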

When applied to Samsung’s existing HBM2 Aquabolt solution, the new architecture is able to deliver over twice the system performance while reducing energy consumption by more than 70%. The HBM-PIM also does not require any hardware or software changes, allowing faster integration into existing systems.