
How Samsung Memory Is Powering the AI Revolution

Samsung's latest innovations in memory technology are helping make artificial intelligence more powerful and efficient than ever before.

New hyperscale artificial intelligences (AIs) like Midjourney, Google Bard and ChatGPT/GPT-4 are disrupting industries and revolutionizing workstreams. But behind the scenes, these human-seeming AIs require the kind of state-of-the-art memory technologies that only #SamsungSemiconductor can provide: new types of memory, solid state drives, and even interfaces specifically designed to break through traditional performance bottlenecks. Here's a primer on just some of the innovations that Samsung has come up with for AI and machine learning (ML) applications.

PIM technology for reduced AI data movement

AI and ML models move far more data through DRAM than standard applications do, and all that data movement comes with a performance hit. Here's why. In a traditional computer using the von Neumann architecture, data moves sequentially between the processing unit and memory, so an instruction fetch and a data operation cannot happen at the same time. In AI, where far higher volumes of data must be processed than in traditional applications, this limitation creates logjams, resulting in lower speeds and higher power usage.

To meet the exponentially growing demands that hyperscale AIs place on traditional memory solutions, Samsung created Processing-in-Memory (PIM) technology for high-bandwidth memory (HBM). With PIM technology, we implement a processor right inside the HBM DRAM, offloading some of the data calculation work from the host processor onto the memory itself, thereby reducing data movement and improving the energy and data efficiency of AI.

The results of using PIM technology to power an AI application speak for themselves. In the case of 6B (6 billion parameter) AI models, it is estimated that more than 80% of the total computational functions can be accelerated through PIM, improving performance by 3.5x.
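The data-movement savings PIM targets can be pictured with a toy accounting model. This is purely illustrative, not Samsung's API: the element size and reduction factor are assumptions, and real HBM-PIM executes simple operations (such as multiply-accumulate) inside the DRAM banks rather than in Python.

```python
# Toy model: bytes that must cross the memory bus with and without
# in-memory processing. Illustrative only; parameters are assumptions.

def bytes_moved_conventional(n_elements, elem_size=2):
    # Conventional path: every operand (e.g. an FP16 weight) travels
    # across the bus to the host processor before it can be used.
    return n_elements * elem_size

def bytes_moved_pim(n_elements, elem_size=2, reduction_factor=1024):
    # PIM path: a reduction (e.g. a dot product) runs inside the memory,
    # so only the much smaller partial results cross the bus. The
    # 1024:1 reduction factor is a made-up figure for illustration.
    return max(1, n_elements // reduction_factor) * elem_size

weights = 6_000_000_000  # a "6B-parameter" model, as in the article
conv = bytes_moved_conventional(weights)
pim = bytes_moved_pim(weights)
print(f"conventional: {conv/1e9:.1f} GB moved, PIM: {pim/1e9:.3f} GB moved")
# → conventional: 12.0 GB moved, PIM: 0.012 GB moved
```

The point of the sketch is only the ratio: when the reduction happens where the data lives, bus traffic drops by roughly the reduction factor, which is where the energy and bandwidth savings come from.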
Advanced SSDs for advanced intelligences

Just as AI moves more data through memory than traditional applications do, these models also need to write and access data on disk more frequently, and in smaller chunks. To address the unique demands AIs place on traditional solid state drives (SSDs), Samsung has introduced two new technologies.

First, Memory Semantic SSDs. Used in AI, a memory semantic SSD can deliver up to 20x performance improvements. Optimized to read and write small-sized data chunks, memory semantic SSDs increase the drive's random read speed while decreasing latency, making them a perfect solution for workloads like AI/ML that require fast processing of smaller data sets.

Next, we've introduced Samsung SmartSSDs, powered by the AMD Adaptive Platform. Similar to the PIM technology we have integrated into memory, a SmartSSD is a computational storage drive that puts a processor in storage. By pushing data processing closer to where the data itself is stored, SmartSSDs can dramatically accelerate data-intensive applications like hyperscale AIs.

Faster interfaces through CXL for AI applications

The Compute Express Link (CXL) interface is an open standard for high-speed, high-capacity connections between processors and memory. By leveraging the unique strengths of the CXL standard, Samsung Semiconductor has created innovative new solutions that improve the performance and efficiency of hyperscale applications.

Technologies like AI, machine learning and cloud computing require memory of greater density and bandwidth than conventional DRAM design allows. To help businesses keep up with this extreme pace of AI innovation, Samsung created CXL Memory Expander technology, which allows servers to expand memory capacity to tens of terabytes while increasing bandwidth to hundreds of gigabytes per second. In addition, there's CXL-PNM.
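The computational-storage idea behind SmartSSD described above can be sketched the same way: run the filter next to the flash and ship only the matches. This is a hypothetical Python model, not the SmartSSD interface; the real drives run such filters on an FPGA inside the drive, and the record size here is an assumed figure.

```python
# Toy comparison of a plain SSD scan vs. a near-storage ("SmartSSD-style")
# scan, counted in bytes crossing the storage interface. Illustrative only.

RECORD_SIZE = 64  # bytes per record (assumed for illustration)

def conventional_scan(records, predicate):
    # Plain SSD: every record crosses the interface to the host,
    # which then runs the filter on its own CPU.
    matches = [r for r in records if predicate(r)]
    return matches, len(records) * RECORD_SIZE

def near_storage_scan(records, predicate):
    # Computational storage: the filter runs next to the flash,
    # so only matching records are transferred to the host.
    matches = [r for r in records if predicate(r)]
    return matches, len(matches) * RECORD_SIZE

records = list(range(10_000))
is_hot = lambda r: r % 100 == 0  # suppose 1% of records match

_, host_bytes = conventional_scan(records, is_hot)
_, drive_bytes = near_storage_scan(records, is_hot)
print(host_bytes, drive_bytes)  # → 640000 6400
```

With a 1%-selective filter, the drive-side scan moves 100x fewer bytes to the host; the same locality argument motivates PIM and CXL-PNM below, just applied at the storage tier.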
Like PIM and SmartSSDs, CXL-PNM (Processing-near-Memory) is a cutting-edge technology that reduces data movement between the CPU and memory. By placing processing units closer to memory, CXL-PNM improves the loading speed of AI models by 2x and capacity by up to 4x.

The takeaway

Because we know the importance of AI will only grow through the 21st century, #SamsungSemiconductor is dedicated to developing next-gen technologies for hyperscale and machine learning applications, as well as releasing the software and simulators needed to support them. Because when it comes to AI, we at Samsung know that the revolution is just getting started.

The Heart of Tech is a monthly newsletter about the way technologies from Samsung Semiconductor are transforming the world. Subscribe and never miss an update.
