Samsung’s groundbreaking CMM-D1 is a memory expander built with next-generation CXL®2 technology. The CXL® interface seamlessly connects multiple processors and devices, increasing memory capacity and optimizing memory management. Integrating CMM-D into existing data centers provides significant memory expansion at minimal cost, unlocking new possibilities for data-intensive workloads. Its integrated CXL® controller enhances reliability and security, ensuring a smooth and stable environment for data centers.
1 CXL® Memory Module-DRAM
2 Compute Express Link
Discover the next-generation interface:
Compute Express Link (CXL®)
Endless compatibility,
exceptional flexibility
Maximize system performance with ease
3 Based on internal test results using CMM-D 256GB EDSFF (E3.S), achieving maximum performance at 2DPC configuration.
All data are approximate and actual product or performance may vary depending on use conditions and environment.
Secure. Reliable.
The next level of stability.
Reference software download links
All product specifications reflect internal test results and are subject to variation depending on the user's system configuration.
All product images shown are for illustration purposes only and may not be an exact representation of the product.
Samsung reserves the right to change product images and specifications at any time without notice.
For further details on product specifications, please contact the sales representative of your region.
Compute Express Link (CXL®) is a high-speed interconnect technology based on the PCIe protocol.
CXL is an interconnect technology designed for high-speed data communication between CPUs and various peripheral devices.
It enhances system performance and efficiency through memory sharing and expansion.
CMM-D is Samsung's CXL® memory product name; it stands for CXL® Memory Module - DRAM.
CMM-D expands both memory capacity and memory bandwidth beyond traditional DIMM channels, offering cache coherence and efficient data transfers across devices. Another key to its innovation is CXL®’s ability to support memory disaggregation by allowing multiple hosts to utilize the same memory pool, driving a more scalable and resource-optimized data center infrastructure.
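The capacity and bandwidth expansion described above can be sketched with a back-of-the-envelope model. All figures here (DDR5 channel rate, CXL link rate, module sizes) are illustrative assumptions, not Samsung specifications:

```python
# Illustrative model: CXL expanders add capacity and bandwidth headroom
# beyond the fixed DIMM channels on a socket. All numbers are assumptions
# chosen for illustration only.

DDR5_CHANNEL_GBPS = 38.4   # assumed peak for one DDR5-4800 channel
CXL_DEVICE_GBPS = 32.0     # assumed peak for one PCIe Gen5 x8 CXL link

def system_totals(ddr_channels, dimm_gb, cxl_devices, cxl_gb):
    """Return (total capacity in GB, aggregate peak bandwidth in GB/s)."""
    capacity = ddr_channels * dimm_gb + cxl_devices * cxl_gb
    bandwidth = ddr_channels * DDR5_CHANNEL_GBPS + cxl_devices * CXL_DEVICE_GBPS
    return capacity, bandwidth

# Baseline: 8 channels x 64 GB DIMMs, no CXL expanders.
base_cap, base_bw = system_totals(8, 64, 0, 0)
# Expanded: same DIMMs plus four hypothetical 256 GB CMM-D-style expanders.
exp_cap, exp_bw = system_totals(8, 64, 4, 256)

print(f"baseline: {base_cap} GB, {base_bw:.1f} GB/s")
print(f"expanded: {exp_cap} GB, {exp_bw:.1f} GB/s")
```

The point of the sketch is that expansion happens without touching the DIMM channels themselves: the CXL devices contribute additively over PCIe lanes the platform already has.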
CXL® memory is ideal for memory-intensive workloads such as AI/ML, high-performance computing (HPC), and in-memory databases (IMDB), where large memory capacity and bandwidth are critical for efficient processing. For data centers and cloud infrastructures, CXL® enables memory pooling and sharing across multiple hosts, optimizing resource utilization and significantly reducing TCO.
A CXL Switch is a high-speed switching device that efficiently connects CPUs, memory, accelerators, and other devices in a CXL (Compute Express Link)–based system. Simply put, it acts as a hub or intersection managing data flow between devices in a CXL network. CXL switches are developed by multiple controller vendors as well as traditional PCIe switch makers.
※ Reference:
- CXL (Compute Express Link): A next-generation interconnect technology based on PCIe, enabling high-speed, low-latency connections between CPU, GPU, and memory.
- CXL Switch: A device that connects multiple CXL devices to a single CPU or across multiple CPUs, allowing flexible resource sharing.
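The hub role described above can be pictured as a routing table: each host port maps address windows to downstream devices, so several hosts can reach (and share) the same pooled expander. This is a toy model, not the CXL switch protocol; the port names, window sizes, and class are invented for illustration:

```python
# Toy model of a CXL switch as a routing hub: hosts reach memory devices
# through the switch, whose per-host port map decides which downstream
# device serves each address window. All names/sizes are illustrative.

class CXLSwitchModel:
    def __init__(self):
        # host port -> list of (window_start, window_end, device)
        self.routes = {}

    def bind(self, host, start, end, device):
        """Map an address window on a host port to a downstream device."""
        self.routes.setdefault(host, []).append((start, end, device))

    def route(self, host, addr):
        """Return the downstream device serving `addr` for `host`."""
        for start, end, device in self.routes.get(host, []):
            if start <= addr < end:
                return device
        raise LookupError(f"no device mapped at {addr:#x} for {host}")

switch = CXLSwitchModel()
# Two hosts share one pooled expander; hostB also sees a second device.
switch.bind("hostA", 0x0, 0x4000, "cmm-d-0")
switch.bind("hostB", 0x0, 0x2000, "cmm-d-0")
switch.bind("hostB", 0x2000, 0x6000, "cmm-d-1")

print(switch.route("hostA", 0x1000))  # cmm-d-0
print(switch.route("hostB", 0x3000))  # cmm-d-1
```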
Standards beyond CXL 2.0 have not yet been fully commercialized, and the overall ecosystem—including CPUs, memory, switches, and devices—is still immature. Being in the early market stage, challenges include ecosystem development, ensuring compatibility, and managing costs. Additionally, the market needs time to identify optimal applications and develop various use cases.
It is mounted on the system board and connects over PCIe, serving as a bridge between the host and memory devices.
What is CMM-D pooling and how is it different from conventional memory modules?
DIMMs are directly connected to CPUs as main memory, enabling fast access. However, current system designs limit each CPU to fixed memory slots without sharing, restricting scalability and flexibility. CMM-D, as a CXL-based memory expansion device, uses the CXL protocol to extend CPU connectivity. It provides additional memory, enables pooling, and allows hosts to dynamically scale available memory capacity—enhancing overall system performance.
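The benefit of pooling over fixed per-host DIMM slots can be sketched numerically: with fixed DIMMs each host is provisioned for its own peak, while a shared pool only needs to cover the worst combined demand. The demand traces below are invented for illustration:

```python
# Sketch of why pooling reduces stranded memory. With fixed per-host
# DIMMs, each host is sized for its own peak; with a shared CMM-D pool,
# hosts draw from one pool sized for the combined peak. Numbers invented.

def fixed_provisioning(peak_demands_gb):
    """Each host gets DIMMs sized to its own peak; total installed GB."""
    return sum(peak_demands_gb)

def pooled_provisioning(demand_traces_gb):
    """Pool sized to the worst-case *combined* demand across time steps."""
    return max(sum(step) for step in zip(*demand_traces_gb))

# Three hosts whose memory peaks do not coincide (GB used at t0, t1, t2).
traces = [
    [100, 400, 150],
    [400, 100, 100],
    [150, 150, 400],
]
peaks = [max(t) for t in traces]

print("fixed :", fixed_provisioning(peaks), "GB")    # 1200 GB installed
print("pooled:", pooled_provisioning(traces), "GB")  # 650 GB installed
```

Because the hosts' peaks are staggered, the pooled configuration needs roughly half the installed memory; the gap is exactly the memory that would sit stranded in fixed slots.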
CMM-D (CXL® Memory Module - DRAM) follows an E3.S-based form factor and uses the CXL 2.0 interface over PCIe Gen5. It typically supports x8 lanes and offers flexible capacity configurations ranging from hundreds of GB to multi-TB levels, depending on the DRAM density inside. The design allows plug-and-play scalability and pooling within a CXL memory hierarchy.
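For the PCIe Gen5 x8 link mentioned above, the raw per-direction ceiling follows from the lane rate and line encoding (32 GT/s per lane, 128b/130b). Protocol overheads such as flit and header bytes reduce the usable figure further, so treat this as an upper bound only:

```python
# Back-of-the-envelope peak bandwidth for a PCIe Gen5 x8 link:
# 32 GT/s per lane with 128b/130b line encoding. This is the raw link
# ceiling per direction; protocol overheads lower the usable number.

GT_PER_S = 32          # PCIe Gen5 transfer rate per lane
LANES = 8
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency

def link_peak_gbytes_per_s(gt_per_s=GT_PER_S, lanes=LANES):
    raw_gbits = gt_per_s * lanes * ENCODING
    return raw_gbits / 8  # bits -> bytes, per direction

print(f"~{link_peak_gbytes_per_s():.1f} GB/s per direction")
```

This lands at roughly 31.5 GB/s per direction, i.e. in the same ballpark as a single DDR5 channel, which is why a handful of expanders adds meaningful aggregate bandwidth.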
CXL memory addresses limitations of traditional DDR channels by enabling scalable, shared, and tiered memory expansion. It helps increase effective memory bandwidth and capacity without redesigning CPU memory controllers. In large-scale AI, analytics, and HPC systems, CXL memory enhances workload efficiency by pooling memory resources across multiple hosts and reducing data movement overhead.
Yes. Applications can access CXL-attached memory through OS-level drivers and middleware that expose it as memory expansion or as a separate tier. For instance, AI frameworks, in-memory databases, and cache systems can leverage CXL memory via APIs or runtime environments optimized for tiered memory usage. Software enablement continues to mature, from the Linux kernel's CXL subsystem to runtime support for the CXL.mem and CXL.cache protocols.
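The tiering idea described above can be sketched as a small two-tier store: hot objects live in the fast local-DRAM tier, colder objects spill to the larger CXL tier, and an access to the cold tier promotes the object. Real systems do this in the kernel or runtime (e.g. NUMA-based tiering); the class, capacities, and promotion rule here are invented for illustration:

```python
# Minimal sketch of tiered memory placement: a small hot DRAM tier plus
# a larger CXL capacity tier, with promote-on-access. Illustrative only;
# real tiering lives in the OS/runtime, not application dictionaries.

class TieredStore:
    def __init__(self, dram_capacity):
        self.dram_capacity = dram_capacity
        self.dram = {}   # hot tier: fast local DRAM
        self.cxl = {}    # capacity tier: CXL-attached memory

    def put(self, key, value):
        # New objects fill DRAM first, then spill to the CXL tier.
        tier = self.dram if len(self.dram) < self.dram_capacity else self.cxl
        tier[key] = value

    def get(self, key):
        if key in self.dram:
            return self.dram[key]
        # Promote on access: a CXL hit moves the object into DRAM,
        # demoting an arbitrary victim to make room.
        value = self.cxl.pop(key)
        if len(self.dram) >= self.dram_capacity:
            victim, vval = self.dram.popitem()
            self.cxl[victim] = vval
        self.dram[key] = value
        return value

store = TieredStore(dram_capacity=2)
for k in ("a", "b", "c", "d"):
    store.put(k, k.upper())
print(sorted(store.dram), sorted(store.cxl))  # ['a', 'b'] ['c', 'd']
print(store.get("c"))  # CXL hit: promotes 'c' into the DRAM tier
```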