
Hyperscalers Embrace Flexible Data Placement (FDP) to Increase Performance and Lower TCO


by Michael Allison

Senior Director, Samsung Device Solutions America

Back in December of 2022, a new NVMe® specification named 'Flexible Data Placement' (FDP) was published with the potential to dramatically improve storage TCO for the world's biggest hyperscalers. The specification was the result of collaboration between Meta and Google, who had been independently working on the same problem: how to reduce or eliminate write amplification (WA) and avoid overprovisioning SSDs, a long-time industry 'best practice' for data centers. Developing FDP together was a strategic move, since hyperscalers, OEMs, and SSD manufacturers (including Samsung) would need a common standard in order to move forward. Since being ratified by NVM Express®, FDP has been a hot topic at Flash Memory Summit (FMS) and the Open Compute Project (OCP) Summit, and while it's arguably the most exciting data storage innovation to come along in years, it's not yet widely known.

Before we dive into FDP, how it works, and what it means to hyperscalers, I should disclose that I was the lead author of the NVMe® TP4146a Flexible Data Placement technical proposal, which you can find here. I worked with Meta, Google, and other NVMe members to combine efforts and develop a single standard. Getting the FDP specification ratified has been a story of innovation and collaboration between the biggest players in the industry (for the benefit of everyone) and, for me personally, one of the highlights of my career at Samsung and as an active member of NVM Express. At FMS 2023, Samsung demonstrated an SSD with FDP, and FMS awarded NVM Express the Best of Show, Most Innovative Memory Technology award for FDP, showing how collaboration turns ideas into reality.

FDP aims to solve a very expensive top-level problem: how to reduce CapEx and OpEx and improve QoS for hyperscale datacenter SSD deployments. It's not difficult to grasp why hyperscalers need better storage architecture solutions more than ever with the rise of AI, cloud applications, and media-rich content.
We humans are now producing zettabytes of data every week while our storage tech struggles to keep up.1 Disaggregated storage infrastructures separate a server's storage and compute resources, allowing storage capacity and performance to be scaled and optimized independently based on workload requirements. However, the standard disaggregated storage model isn't optimized to handle multiple applications running simultaneously, with vastly different workloads that come and go and change over time. As a result, device performance remains unstable, which is inefficient and, of course, costly.

To understand how an SSD with FDP outperforms a conventional SSD, let's consider how write amplification (WA) occurs in a standard disaggregated storage environment. WA is the additional writing of host data to media after that data is initially written by the host. This additional writing is undesirable for several reasons: it requires additional storage capacity, it causes wear not induced by the host, and the extra media reads and writes require more power and hurt overall system performance. Hyperscalers have to accommodate this extra data by overprovisioning.

Write Amplification Factor (WAF) is the ratio of the total data written to the media to the data written by the host. In a typical example, a WAF of ~2.5 means that for every 1 KB of data the host writes to the device, an additional 1.5 KB of incidental data gets written. A system with FDP enabled gets around this by sharing information with the host about where to allocate data on the media; the host may then re-write or de-allocate that data before the SSD is required to move it, which reduces the need for overprovisioning.
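The WAF definition above can be stated as a one-line calculation. This is a minimal sketch; the function name and the example byte counts are my own illustration, not part of the specification.

```python
# Write Amplification Factor (WAF): total bytes written to the media
# divided by the bytes the host actually wrote. A WAF of 1.0 means the
# device writes only what the host asked for; anything above 1.0 is
# extra internal writing (e.g., garbage collection relocating data).

def waf(host_bytes_written: float, media_bytes_written: float) -> float:
    """Ratio of all media writes to host-initiated writes."""
    return media_bytes_written / host_bytes_written

# A WAF of ~2.5 means the media absorbs 2.5 KB in total for every
# 1 KB of host data: the 1 KB itself plus 1.5 KB of relocated data.
print(waf(host_bytes_written=1.0, media_bytes_written=2.5))  # → 2.5
```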
With FDP, a WAF of ~1 is now possible, and the implications for hyperscalers are, simply put, huge. According to an OCP presentation by Christopher Sabol of Google, "Writes are one of the more power-intensive parts of an SSD's operations; doing 2 1/2 times what you need pushes you up against that power envelope much more quickly." For a more technical description of how FDP works, watch "Flexible Data Placement using NVM Express® Perspective" from OCP Summit 2022 here.

Flexible Data Placement (FDP) Use Case

Let's compare how FDP and a typical disaggregated storage model each handle data distribution across the media for multiple applications. In this example, mixed data from applications A, B, and C is written to available 'super blocks' across the media. Application A then de-allocates all of its data, which must be garbage collected. Once complete, WAF is measured for both models.

Without FDP Data is written to open super blocks sequentially (timing determines the layout)

● Data from applications A, B, and C is mixed within each super block
● Device performance is unstable and never reaches a 'steady state' due to mixed workloads
● Overprovisioning is increased until WA is low enough and performance appears stable
● When application A de-allocates its data, holes (i.e., invalid data) are left behind that need to be 'cleaned up'
● Changing workloads cause the above process to repeat

Under the standard disaggregated storage model, the host has no way to determine what gets written where on the device. Overprovisioning is required to accommodate WA, often more than 25%, and GC is more resource-intensive.

With FDP Host is able to tag writes and place data into specific media units

● Data from applications A, B, and C is organized into separate super blocks
● Device performance is able to reach a 'steady state'
● Overprovisioning is no longer required
● Changing workloads are handled with significantly fewer resources

With FDP enabled, the host provides media placement 'hints' that allow similar data to be written into the same super block. When the data from an application is de-allocated, only a single block needs to be erased and no GC (i.e., no increased WA) is needed.
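The two layouts above can be contrasted with a toy model. The block size, the write patterns, and the counting function here are illustrative assumptions of mine, not drawn from any real drive; the point is only to show why mixed placement forces relocation writes and segregated placement does not.

```python
# Toy model: super blocks hold a fixed number of data units. In the
# mixed layout, writes from applications A, B, and C interleave; in
# the FDP-style layout, each application gets its own super blocks.
# When application A de-allocates, we count how many still-valid units
# must be relocated (rewritten) before blocks can be erased; that
# relocation is the extra write traffic behind write amplification.

def gc_rewrites(blocks: list, dealloc_app: str) -> int:
    """Valid units that must be copied out of every block holding the
    de-allocated app's data before those blocks can be erased."""
    moved = 0
    for block in blocks:
        if dealloc_app in block:  # block holds invalid data, must be reclaimed
            moved += sum(1 for unit in block if unit != dealloc_app)
    return moved

# Mixed placement: A, B, and C interleaved across three super blocks.
mixed = [["A", "B", "C", "A"], ["B", "A", "C", "B"], ["C", "A", "B", "C"]]
# FDP-style placement: one application per super block.
separated = [["A", "A", "A", "A"], ["B", "B", "B", "B"], ["C", "C", "C", "C"]]

print(gc_rewrites(mixed, "A"))      # every block is touched → 8 rewrites
print(gc_rewrites(separated, "A"))  # A's block erased in place → 0 rewrites
```

With segregated placement, de-allocating application A leaves one fully invalid block that can simply be erased, which is exactly the "no GC needed" outcome the bullets describe.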
According to an OCP presentation by Ross Stenfort of Meta, "FDP significantly improves write amplification, reduces device wear, and improves performance and QoS."

Google Write Amplification (WA) Case Study

The world's biggest hyperscalers take performance, power savings, and TCO seriously. Last October at OCP in San Jose, CA, Chris Sabol (Google) presented a Google datacenter infrastructure case study to demonstrate the impact of reducing WA on CapEx and OpEx.2 It was the first time the world had seen the potential bottom-line impact of FDP. In the example, Google specifies random 4KiB writes from the host, 28% OP, and a greedy GC algorithm to establish a WAF of ~2.5. The case study then shows the potential benefits of reducing to a WAF of ~1.25.
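The headline ratios follow directly from the two WAF figures in the case study. This is my own back-of-envelope arithmetic, not Google's model: treating media write bandwidth (and, equivalently, media endurance) as a fixed device budget, halving WAF doubles the host writes that budget can absorb.

```python
# Media write capability is a fixed budget per device; the host write
# rate (or lifetime host writes) it supports scales as 1 / WAF.

MEDIA_WRITE_BUDGET = 1000.0  # arbitrary units, hypothetical device

def sustainable_host_rate(waf: float) -> float:
    """Host writes the fixed media-write budget can absorb."""
    return MEDIA_WRITE_BUDGET / waf

before = sustainable_host_rate(2.5)   # case-study baseline WAF
after = sustainable_host_rate(1.25)   # case-study improved WAF
print(after / before)                 # → 2.0: double the host write rate
```

The same 2x factor applies whether the budget is read as bandwidth (double the application write rate) or as endurance (double the drive lifetime), which matches the case-study claims discussed below.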
What the Google case study tells us is that the potential for savings and performance increases from implementing FDP is real. Just try to think of another (single) standard supporting random writes with the potential to eliminate 28% OP, enable 2x drive size with the same application write density, double drive lifetime, and double the application write rate. There's currently nothing else on the horizon (that I am aware of at the time of writing) that is poised to impact hyperscale TCO in such a significant way.

What Hyperscalers Need to Know About FDP

The prospect of FDP-enabled system architectures where a WAF of ~1 is the new normal should be enough to get any hyperscaler's attention. Moreover, FDP is very easy to implement. It's backwards compatible with legacy hosts, so infrastructure doesn't need to be upgraded. Device reads and other behaviors do not change. And because FDP is an optional feature in NVMe, it can simply be enabled or disabled. Hyperscalers interested in this latest iteration of solving the data placement problem can expect FDP-enabled SSDs to become available in the near future. At Samsung, we will be supporting FDP in our latest generation of datacenter SSDs, and we look forward to bringing this technology, along with all its benefits, to the entire hyperscale community.
1 Statista. (2022, November 30). Use of big data analytics in market research worldwide 2014-2021.
2 Open Compute Project. (2022, November 1). Flash innovation: Flexible data placement [Video]. YouTube.
