Samsung's cutting-edge CXL® solutions are built on Compute Express Link (CXL®), an open standard interface that enables high-speed, low-latency connections between processors and devices, delivering expanded memory capacity and bandwidth beyond traditional DDR channels.
Up to 50% higher
total memory capacity
compared to RDIMM only
Up to 100% greater
total memory bandwidth
compared to RDIMM only
Maximize memory utilization
across multiple hosts
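The capacity and bandwidth callouts above can be made concrete with a back-of-the-envelope calculation. The sketch below uses hypothetical configuration values (channel counts, module capacities, and link rates are illustration assumptions, not product specifications):

```python
# Back-of-the-envelope memory scaling with CXL expansion.
# All figures are hypothetical illustration values, not product specs.

RDIMM_CHANNELS = 8          # DDR channels per socket (assumed)
RDIMM_CAPACITY_GB = 64      # capacity per RDIMM (assumed)
RDIMM_BW_GBPS = 38.4        # DDR5-4800 peak per channel, ~38.4 GB/s

CXL_MODULES = 4             # CXL memory expanders added (assumed)
CXL_CAPACITY_GB = 128       # capacity per module (assumed)
CXL_BW_GBPS = 32.0          # PCIe Gen5 x8: ~32 GB/s raw per direction

base_cap = RDIMM_CHANNELS * RDIMM_CAPACITY_GB          # 512 GB
total_cap = base_cap + CXL_MODULES * CXL_CAPACITY_GB   # 1024 GB
cap_gain = (total_cap - base_cap) / base_cap           # 1.0 -> +100%

base_bw = RDIMM_CHANNELS * RDIMM_BW_GBPS               # 307.2 GB/s
total_bw = base_bw + CXL_MODULES * CXL_BW_GBPS         # 435.2 GB/s
bw_gain = (total_bw - base_bw) / base_bw               # ~0.42 -> +42%

print(f"capacity: {base_cap} -> {total_cap} GB (+{cap_gain:.0%})")
print(f"bandwidth: {base_bw:.1f} -> {total_bw:.1f} GB/s (+{bw_gain:.0%})")
```

The actual uplift depends on how many RDIMM channels and CXL modules a given platform supports, which is why the headline figures are stated as "up to" values.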
We live in the age of AI, where imagination is becoming reality. Artificial intelligence, once the stuff of science fiction, is now used in many parts of everyday life in ways we could not have imagined.
New devices powered by artificial intelligence are changing the way humans interact with and utilize technology, making our lives smarter.
Find out what artificial intelligence is and how it is changing lives with Samsung Semiconductor.
In the age of big data and evolving IT technologies, data centers are undergoing continuous innovation and change. In particular, as technologies advance, the amount of data to be processed and the speed at which it must be processed are increasing exponentially.
Data centers therefore need high-speed networking infrastructure that supports fast data transfer and minimizes bottlenecks. In line with these changes, memory semiconductor technologies are also constantly evolving to process large amounts of data efficiently. Samsung views performance and power as the core of semiconductor solutions for servers, enabling business users to build fast, reliable, and cost-effective infrastructure.
Compute Express Link (CXL®) is a high-speed interconnect technology based on the PCIe protocol. It is designed for high-speed data communication between CPUs and various peripheral devices, and it enhances system performance and efficiency through memory sharing and expansion.
CMM-D is Samsung's CXL® memory product name, which stands for CXL® Memory Module – DRAM.
CMM-D expands both memory capacity and memory bandwidth beyond traditional DIMM channels, offering cache coherence and efficient data transfers across devices. Another key to its innovation is CXL®'s ability to support memory disaggregation by allowing multiple hosts to utilize the same memory pool, driving a more scalable and resource-optimized data center infrastructure.
CXL® memory is ideal for memory-intensive workloads such as AI/ML, high-performance computing (HPC), and in-memory databases (IMDB), where large memory capacity and bandwidth are critical for efficient processing. For data centers and cloud infrastructures, CXL® enables memory pooling and sharing across multiple hosts, optimizing resource utilization and significantly reducing total cost of ownership (TCO).
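The TCO argument comes down to stranded memory: without pooling, every host must be provisioned for its own worst case, while a shared pool only needs to cover the peak of the combined demand. A minimal sketch with hypothetical per-host figures:

```python
# Why pooling reduces provisioned capacity (illustrative, hypothetical numbers).
# Without pooling, each host is sized for its own peak demand; with pooling,
# hosts draw from a shared pool sized for the peak of the *combined* demand.

peak_demand_gb = [400, 150, 300, 100]   # per-host peak demands (assumed)
concurrent_peak_gb = 700                # peak of combined demand (assumed)

dedicated = sum(peak_demand_gb)         # 950 GB provisioned without pooling
pooled = concurrent_peak_gb             # 700 GB in a shared pool
saved = dedicated - pooled              # 250 GB of stranded memory avoided

print(f"dedicated: {dedicated} GB, pooled: {pooled} GB, saved: {saved} GB")
```

The saving grows as per-host peaks are less correlated, since the combined peak then sits well below the sum of individual peaks.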
A CXL Switch is a high-speed switching device that efficiently connects CPUs, memory, accelerators, and other devices in a CXL (Compute Express Link)-based system. Simply put, it acts as a hub or intersection managing data flow between devices in a CXL network. CXL switches are developed by multiple controller vendors as well as traditional PCIe switch makers.
※ Reference:
- CXL (Compute Express Link): A next-generation interconnect technology based on PCIe, enabling high-speed, low-latency connections between CPU, GPU, and memory.
- CXL Switch: A device that connects multiple CXL devices to a single CPU or across multiple CPUs, allowing flexible resource sharing.
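The "hub" role of a switch can be pictured as a binding table that decides which downstream device a given host's requests may reach. This is purely an illustrative data-structure sketch (class and device names are made up); real switches bind ports and forward CXL.mem transactions in hardware under a fabric manager's control:

```python
# Illustrative sketch of a CXL switch's hub role: a routing table that
# binds hosts to downstream memory devices. Names are hypothetical.

class SwitchSketch:
    def __init__(self):
        self.bindings = {}              # host -> set of bound device names

    def bind(self, host, device):
        """Attach a downstream device to a host's logical hierarchy."""
        self.bindings.setdefault(host, set()).add(device)

    def route(self, host, device):
        """Return True if a request from host may reach device."""
        return device in self.bindings.get(host, set())

switch = SwitchSketch()
switch.bind("host0", "cmm-d0")
switch.bind("host1", "cmm-d1")
print(switch.route("host0", "cmm-d0"))   # True
print(switch.route("host0", "cmm-d1"))   # False: cmm-d1 is bound to host1
```

Rebinding a device from one host to another is exactly the flexibility the reference note above describes.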
Standards beyond CXL 2.0 have not yet been fully commercialized, and the overall ecosystem, including CPUs, memory, switches, and devices, is still immature. Because the market is in an early stage, challenges include developing the ecosystem, ensuring compatibility, and managing costs. The market also needs time to identify optimal applications and develop a variety of use cases.
It is mounted on the system board over a PCIe-based connection, serving as a bridge between the host and memory devices.
CMM-D pooling is a technology that integrates multiple CMM-D devices into a single logical memory pool, allowing multiple hosts to share the pool and dynamically allocate memory as needed. This approach differs from the traditional DIMM model, in which each module operates independently.
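The allocate/release semantics described above can be sketched as a simple shared-pool data structure. This is an illustration only (sizes and host names are hypothetical); in practice a fabric manager arbitrates allocation across hosts:

```python
# Sketch of CMM-D pooling semantics: several hosts allocate from and
# release to one logical pool, instead of each owning fixed DIMM capacity.

class MemoryPool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.allocations = {}                    # host -> GB currently held

    def allocate(self, host, gb):
        """Grant gb to host if the pool has room; return success."""
        if gb > self.free_gb:
            return False                         # pool exhausted
        self.free_gb -= gb
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host, gb):
        """Return up to gb of host's holdings to the pool."""
        held = self.allocations.get(host, 0)
        gb = min(gb, held)                       # cannot release more than held
        self.allocations[host] = held - gb
        self.free_gb += gb

pool = MemoryPool(capacity_gb=1024)
pool.allocate("hostA", 512)
pool.allocate("hostB", 256)
pool.release("hostA", 256)          # hostA shrinks; capacity returns to pool
print(pool.free_gb)                 # 512
```

The key contrast with DIMMs is the last step: capacity released by one host immediately becomes available to any other host on the pool.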
DIMMs are directly connected to CPUs as main memory, enabling fast access. However, current system designs limit each CPU to fixed memory slots without sharing, restricting scalability and flexibility.
CMM-D, as a CXL-based memory expansion device, uses the CXL protocol to extend CPU connectivity. It provides additional memory, enables pooling, and allows hosts to dynamically scale available memory capacity, enhancing overall system performance.
CMM-D (CXL® Memory Module – DRAM) follows an E3.S-based form factor and uses the CXL 2.0 interface over PCIe Gen5. It typically supports x8 lanes and offers flexible capacity configurations ranging from hundreds of GB to multi-TB levels, depending on the DRAM density inside. The design allows plug-and-play scalability and pooling within a CXL memory hierarchy.
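The PCIe Gen5 x8 link mentioned above bounds the per-module bandwidth. A quick arithmetic sketch, using the Gen5 rate of 32 GT/s per lane and 128b/130b line coding (protocol-level overheads beyond line coding are ignored here):

```python
# Rough link-bandwidth arithmetic for a CXL 2.0 / PCIe Gen5 x8 link.

GT_PER_S = 32            # PCIe Gen5 transfer rate per lane, in GT/s
LANES = 8
ENCODING = 128 / 130     # 128b/130b line-coding efficiency

raw_gbps = GT_PER_S * LANES                  # 256 Gb/s raw per direction
effective_gbytes = raw_gbps * ENCODING / 8   # ~31.5 GB/s per direction

print(f"~{effective_gbytes:.1f} GB/s per direction before protocol overhead")
```

That roughly 32 GB/s per direction is why a handful of x8 modules can add a meaningful fraction of a socket's DDR bandwidth on top of expanded capacity.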
CXL memory addresses limitations of traditional DDR channels by enabling scalable, shared, and tiered memory expansion. It helps increase effective memory bandwidth and capacity without redesigning CPU memory controllers. In large-scale AI, analytics, and HPC systems, CXL memory enhances workload efficiency by pooling memory resources across multiple hosts and reducing data movement overhead.
Yes. Applications can access CXL-attached memory through OS-level drivers and middleware that expose it as memory expansion or as a separate tier. For instance, AI frameworks, in-memory databases, and cache systems can leverage CXL memory via APIs or runtime environments optimized for tiered memory usage. Software enablement in the Linux kernel and around the consortium-defined protocols (such as CXL.mem and CXL.cache) continues to mature.
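The tiered-usage idea can be sketched as a placement policy: hot data stays in direct-attached DRAM, colder data moves to the CXL tier. The threshold, tier names, and data labels below are hypothetical; real policies live in the kernel or the runtime, not in application code like this:

```python
# Illustrative sketch of a tiered-memory placement policy. Thresholds,
# tier names, and workload names are hypothetical assumptions.

def choose_tier(accesses_per_sec, hot_threshold=1000):
    """Place frequently accessed data in the faster tier."""
    return "dram" if accesses_per_sec >= hot_threshold else "cxl"

placement = {
    "feature_cache": choose_tier(50_000),   # hot -> direct-attached DRAM
    "cold_embeddings": choose_tier(120),    # cold -> CXL expansion tier
}
print(placement)
```

Runtime environments apply the same logic transparently, demoting cold pages to the CXL tier and promoting them back when access frequency rises.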
C-TAP is Samsung's performance measurement tool exclusively designed for the CXL® interface, capable of verifying quality of service (QoS) by measuring the tail latency of CMM-D at the system level. Measure the latency, bandwidth, and tail latency performance across various workloads, and optimize the performance of CMM-D products within your system.
The features of C-TAP are as follows:
- Performance measurement tool for CXL®
- QoS latency measurement support
- Measures bandwidth and QoS latency together in loaded-latency tests
- Controls bandwidth utilization in loaded-latency tests, taking the system environment into account
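To make the QoS terminology concrete: tail latency is the high-percentile latency (e.g., p99) rather than the mean, and it is what loaded-latency testing is designed to expose. The generic percentile calculation below illustrates the concept only; it is not C-TAP's methodology, and the latency samples are hypothetical:

```python
# What "tail latency" means in a QoS context: the high-percentile latency
# (e.g., p99), not the mean. Generic illustration with hypothetical samples.
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-access latencies in nanoseconds under load:
# most accesses are fast, but a few stragglers dominate the tail.
latencies_ns = [300] * 97 + [900, 1200, 5000]

print(f"mean: {statistics.mean(latencies_ns):.0f} ns")   # 362 ns
print(f"p99 : {percentile(latencies_ns, 99)} ns")        # 1200 ns
```

The gap between the mean and the p99 is why a tool that only reports averages can miss the stalls that matter for service-level objectives.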