Batch Cache Management for Page-level FTL

Notes on "An Efficient Page-level FTL to Optimize Address Translation in Flash Memory", published at EuroSys 2015.

Problem Definition

None of the existing cache management policies takes a deep look at the distinctive features of SSDs, such as erase-before-write and the limited number of P/E (program/erase) cycles. This paper first presents a detailed trace analysis targeted at SSDs to identify the key factors that incur extra operations, and then designs TPFTL, which employs two-level LRU lists to organize cached mapping entries so as to minimize those extra operations.

The most important observations are:

cached_translation_page

This result shows that only a small fraction (less than 15%) of the entries in a cached translation page have been recently used.

We can see that 53%-71% of cached translation pages have more than one dirty entry cached, and the average number of dirty entries per page is above 15.

access_distribution

We see that sequential accesses make the number of cached translation pages first decline sharply and then rise back.

The cache policy clusters the cached entries of each cached translation page into a translation page node (TP node). The loading policy performs selective prefetching, and the replacement policy performs batch-update replacement. The core concept is a two-level page-mapping FTL:

tpftl

TPFTL clusters the cached entries of each cached translation page into a translation page node (TP node). This design enables the following optimizations:
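The two-level structure can be sketched as a page-level LRU of TP nodes, each holding an entry-level LRU of mapping entries. This is a minimal Python sketch; the class names, the `OrderedDict`-based LRU, and the value of `entries_per_page` are my illustrative choices, not details from the paper.

```python
from collections import OrderedDict

class TPNode:
    """A translation page node: clusters the cached entries of one translation page."""
    def __init__(self, vtpn):
        self.vtpn = vtpn              # virtual translation page number
        self.entries = OrderedDict()  # offset -> (ppn, dirty); entry-level LRU order

class TwoLevelLRU:
    """Page-level LRU list of TP nodes; each TP node keeps an entry-level LRU list."""
    def __init__(self):
        self.pages = OrderedDict()    # vtpn -> TPNode; page-level LRU order

    def insert(self, lpn, ppn, dirty=False, entries_per_page=512):
        vtpn, offset = divmod(lpn, entries_per_page)
        node = self.pages.setdefault(vtpn, TPNode(vtpn))
        node.entries[offset] = (ppn, dirty)
        node.entries.move_to_end(offset)  # most recent at the hot end
        self.pages.move_to_end(vtpn)

    def lookup(self, lpn, entries_per_page=512):
        vtpn, offset = divmod(lpn, entries_per_page)
        node = self.pages.get(vtpn)
        if node is None or offset not in node.entries:
            return None                   # cache miss
        node.entries.move_to_end(offset)  # refresh entry-level LRU
        self.pages.move_to_end(vtpn)      # refresh page-level LRU
        return node.entries[offset][0]    # cached PPN
```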

Compressed Mapping Cache

Because the mapping entries in a translation page are stored in order, the LPN of each mapping entry can be derived from the VTPN and the entry's offset inside the translation page, so the LPN itself does not need to be stored in the cache.
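The VTPN/offset arithmetic behind this compression is just an integer division and its inverse. A short sketch, assuming an illustrative `ENTRIES_PER_PAGE` of 512 (the actual value depends on the translation-page and entry sizes):

```python
ENTRIES_PER_PAGE = 512  # assumed example value, not from the paper

def lpn_to_location(lpn):
    """Split an LPN into (VTPN, offset): the LPN need not be stored in the cache."""
    return divmod(lpn, ENTRIES_PER_PAGE)

def location_to_lpn(vtpn, offset):
    """Reconstruct the LPN from the TP node's VTPN and the entry's in-page offset."""
    return vtpn * ENTRIES_PER_PAGE + offset
```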

Page-level LRU

We define page-level hotness as the average hotness of all the entry nodes in a TP node to measure the hotness of the TP node. The position of each TP node in the page-level LRU list is decided by its page-level hotness.
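The page-level metric above reduces to an average per TP node. A minimal sketch, assuming per-entry hotness values are already tracked (how entry hotness is measured is not modeled here):

```python
def page_hotness(entry_hotness):
    """Page-level hotness = average hotness of all entry nodes in the TP node."""
    return sum(entry_hotness) / len(entry_hotness)

def order_by_hotness(tp_nodes):
    """tp_nodes: {vtpn: [per-entry hotness values]} -> VTPNs ordered hottest first,
    i.e. the order the page-level LRU list would keep them in."""
    return sorted(tp_nodes, key=lambda vtpn: page_hotness(tp_nodes[vtpn]), reverse=True)
```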

Selective Prefetching

If the number of TP nodes continues to decrease by a threshold, TPFTL assumes sequential accesses are happening and performs selective prefetching when a cache miss occurs. If the number begins to continuously increase by the threshold, TPFTL assumes sequential accesses are over and stops selective prefetching.
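The on/off decision can be sketched as a small controller watching the TP-node count. This is my interpretation of the threshold rule; the class and its bookkeeping (a reference count reset at each phase change) are illustrative assumptions:

```python
class PrefetchController:
    """Toggle selective prefetching based on the trend in the number of TP nodes.

    A sustained drop of `threshold` TP nodes suggests sequential accesses have
    begun; a sustained rise of `threshold` suggests they are over.
    """
    def __init__(self, threshold):
        self.threshold = threshold
        self.reference = None       # TP-node count at the last phase change
        self.prefetching = False

    def observe(self, num_tp_nodes):
        if self.reference is None:
            self.reference = num_tp_nodes
        delta = num_tp_nodes - self.reference
        if not self.prefetching and delta <= -self.threshold:
            self.prefetching = True     # sequential phase detected
            self.reference = num_tp_nodes
        elif self.prefetching and delta >= self.threshold:
            self.prefetching = False    # sequential phase over
            self.reference = num_tp_nodes
        return self.prefetching
```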

Replacement Policy

The first technique is batch-update replacement… when a dirty translation page is chosen as the victim, multiple dirty entries can be flushed in a single translation-page update, and the hit ratio does not decrease.

The second technique is clean-first replacement… choosing a clean page as the victim rather than a dirty one avoids a write-back and thus reduces the number of writes to flash memory.
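The two techniques combine naturally in victim selection: prefer the coldest clean TP node (no write-back needed), and if every node is dirty, evict the coldest one and flush all of its dirty entries in one translation-page update. A hedged sketch; the tuple representation of TP nodes is my simplification:

```python
def choose_victim(tp_nodes):
    """Pick an eviction victim from TP nodes ordered cold -> hot.
    Each node is (vtpn, set_of_dirty_offsets).
    Returns (vtpn, dirty offsets to batch-update in one page write)."""
    # Clean-first: a clean victim costs no flash write.
    for vtpn, dirty in tp_nodes:
        if not dirty:
            return vtpn, []          # drop it, nothing to write back
    # All dirty: evict the coldest node, batch-updating every dirty
    # entry of its translation page in a single update.
    vtpn, dirty = tp_nodes[0]
    return vtpn, sorted(dirty)
```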