
Breaking the Memory Wall with Samsung CMM-D for Next-Generation IMDB Infrastructure

Enabling scalable memory and efficient performance for IMDB and memory-intensive workloads

Memory scaling limitations and the rise of CXL®

Modern CPUs have advanced rapidly, but memory has remained a key bottleneck due to DRAM slot, channel, and cost limitations—creating the “memory wall” that restricts full processor utilization.

Samsung CMM-D, a CXL 2.0–based memory module connected via PCIe®, is designed to overcome these constraints by expanding memory capacity and bandwidth beyond native CPU channels. It enables flexible memory expansion and pooling across servers, improving resource utilization in data centers. As a result, CMM-D helps build more balanced and efficient infrastructure for data-intensive applications.

 

Why CMM-D matters in modern in-memory databases

CMM-D introduces flexible and adaptable memory built on PCIe 5.0 and the CXL 2.0 standard, enabling cache-coherent memory expansion that integrates seamlessly with modern server platforms. This architecture is designed to support demanding workloads such as in-memory databases, AI/ML, and large-scale cloud applications.

A key advantage is the ability to expand memory bandwidth and capacity beyond the limits of traditional DDR DIMM slots, allowing systems to keep pace with rapidly growing datasets while ensuring CPUs can operate without memory bottlenecks. CMM-D also delivers meaningful total cost of ownership (TCO) benefits by reducing DRAM over-provisioning, extending existing infrastructure, and lowering the need for costly server upgrades.

Finally, flexible system operation through memory pooling enables dynamic allocation of shared memory across multiple servers, improving utilization and supporting highly variable workloads. Together, these capabilities make CMM-D a strong foundation for modern in-memory database environments such as SAP HANA.
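The pooling model described above can be sketched in a few lines. This is a hypothetical, highly simplified illustration of dynamic capacity sharing, not a real CMM-D or CXL fabric-manager API; the class and method names are invented for this example.

```python
# Illustrative sketch: servers draw capacity from a shared memory pool
# and return it when demand subsides, instead of each host being
# over-provisioned for its own peak. Names are hypothetical.

class MemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}

    def free_gb(self) -> int:
        # Capacity not currently granted to any server.
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, server: str, size_gb: int) -> bool:
        # Grant the request only if enough pooled capacity remains.
        if size_gb > self.free_gb():
            return False
        self.allocations[server] = self.allocations.get(server, 0) + size_gb
        return True

    def release(self, server: str) -> None:
        # Return a server's share to the pool when its workload shrinks.
        self.allocations.pop(server, None)

pool = MemoryPool(capacity_gb=512)
pool.allocate("server-a", 256)   # peak demand on one host
pool.allocate("server-b", 128)
pool.release("server-a")         # freed capacity flows back to the pool
pool.allocate("server-c", 384)   # and can be reused by another host
```

The point of the sketch is the utilization argument: the same 512 GB serves three servers' staggered peaks, where static per-server provisioning would have required far more total DRAM.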

 

SAP HANA as the evolution platform

SAP HANA (High-performance ANalytic Appliance) is an in-memory database management system designed for real-time analytics and transactions, combining OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) workloads on a single platform. It stores data in compressed, columnar form in main memory, enabling fast query processing while maintaining a low memory footprint.

Columnar table data resides in the read-optimized "main storage," while a separate write-optimized "delta storage" captures newly inserted or modified data and is periodically merged into the main storage to balance performance and efficiency. In addition, a designated portion of memory, referred to as "HEX heap memory," is allocated for operational data and intermediate results during query processing.

Because large datasets must remain in memory and analytical queries consume significant memory bandwidth, SAP HANA is a highly memory-intensive, industry-recognized platform, making it well suited for evaluating the performance impact of CMM-D.
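The main/delta split described above can be illustrated with a minimal sketch. This is a conceptual model only: real SAP HANA uses compressed, dictionary-encoded columnar structures, and the re-sorting on merge shown here is an assumption made to suggest a read-optimized layout, not HANA's actual merge algorithm.

```python
# Conceptual sketch of a column with read-optimized main storage and
# write-optimized delta storage. Simplified; not SAP HANA internals.

class Column:
    def __init__(self):
        self.main = []    # read-optimized main storage
        self.delta = []   # write-optimized delta storage

    def insert(self, value):
        # New and modified values land in delta storage first,
        # so writes never disturb the read-optimized main layout.
        self.delta.append(value)

    def scan(self):
        # Queries must see main storage plus not-yet-merged delta rows.
        return self.main + self.delta

    def merge(self):
        # Periodic delta merge folds accumulated rows into main storage
        # (re-sorted here to stand in for a read-optimized layout).
        self.main = sorted(self.main + self.delta)
        self.delta = []

col = Column()
col.insert(3)
col.insert(1)
col.merge()      # main becomes [1, 3], delta is emptied
col.insert(2)    # new write goes to delta, main stays untouched
```

The design trade-off is visible even at this scale: inserts are cheap appends, reads pay a small cost to union two structures, and the merge periodically restores a fully read-optimized main storage.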

 

CMM-D and RDIMM performance evaluation in SAP HANA

This study evaluated the feasibility of applying CMM-D in an in-memory database environment using SAP HANA. The experiments compared performance when the main storage and HEX heap memory were placed on either RDIMM or CMM-D, and the baseline device performance was first measured using Intel MLC (Memory Latency Checker). In performance analysis, TPC-C results showed no meaningful performance difference between RDIMM and CMM-D, while TPC-DS showed various types of performance degradation on CMM-D depending on the memory access pattern.

Further workload analysis examined access patterns, read/write ratios, and sequential versus random behavior. Main storage showed predominantly sequential read access, whereas HEX heap memory exhibited weaker sequential characteristics. Outstanding* request analysis revealed that HEX heap memory generates significantly higher outstanding requests, which increases latency and creates overhead on the CMM-D device, explaining the observed performance differences.

* Outstanding: the number of in-flight requests a device must process after receiving commands from the host. (The outstanding count increases by one when a request arrives from the host and decreases by one after the device processes it and returns the response.)
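The counter semantics in the footnote can be modeled directly. This is a toy illustration of the definition above, not a representation of any real device firmware.

```python
# Minimal model of the "outstanding" counter: +1 when the host issues
# a request, -1 when the device completes it and responds.

class Device:
    def __init__(self):
        self.outstanding = 0

    def receive(self, n: int = 1):
        # Host issues n requests; they are now in flight.
        self.outstanding += n

    def respond(self, n: int = 1):
        # Device completes n requests and returns responses.
        self.outstanding -= n

dev = Device()
dev.receive(8)   # host issues 8 in-flight requests
dev.respond(3)   # device completes 3 of them
print(dev.outstanding)  # 5 requests still in flight
```

A sustained high outstanding count means the device's queues stay deep, which is exactly the condition the analysis identifies for HEX heap memory: requests wait longer in the queue, so observed latency on the CMM-D device rises.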
Diagram comparing baseline RDIMM memory configuration with CXL-based Samsung CMM-D architecture, illustrating CPU-connected main storage and HEX heap memory distribution and expanded memory pooling through CXL.
Figure 1. SAP HANA system configuration driven by main storage and HEX heap memory deployment

Implications for IMDB scalability

This study evaluated CMM-D across multiple scenarios to assess its suitability for in-memory database environments. Results showed that in OLTP workloads (TPC-C), CMM-D can deliver performance comparable to RDIMM, confirming its viability as a high-capacity memory alternative. In OLAP workloads (TPC-DS), CMM-D also proved effective for applications with sequential access patterns and relatively moderate traffic demands, demonstrating its potential for broader in-memory database use when aligned with workload characteristics.

These findings highlight the growing impact of memory scalability challenges in modern in-memory database systems. By expanding both memory bandwidth and capacity, CMM-D helps overcome DRAM limitations, enabling higher concurrency, more stable latency, and improved scalability for data-intensive workloads.

 

Learn more

For a deeper dive into the detailed evaluation results, workload analysis, and system-level measurements based on real hardware configurations, we invite you to download and read the full whitepaper[1].

 


 
References
 
[1] White Paper: Samsung CMM-D Utilization in IMDB Applications
 
The whitepaper provides quantitative insights and experimental data that complement the architectural perspectives discussed here, offering a deeper look at why Samsung CMM-D is a key solution for breaking the memory wall in next-generation in-memory database infrastructure.
 

* The contents of this blog may also include forward-looking statements. Forward-looking statements are not guarantees of future performance, and the actual developments of Samsung, the market, or the industry in which Samsung operates may differ materially from those made or suggested by the forward-looking statements contained in this blog.
* All product specifications and performance data included in this article reflect internal test results and are subject to variation depending on the user's system configuration. Actual performance may vary depending on use conditions and environment.
* All images shown are provided for illustrative purposes only and may not be an exact representation of the products.
* Compute Express Link® (CXL®) is a registered trademark of the Compute Express Link Consortium.
* PCI Express® and PCIe® are registered trademarks of PCI-SIG.