In a direct-mapped cache structure, the cache is organized into multiple sets[5] with a single cache line per set. Based on its address, a memory block can occupy only one particular cache line. The cache can be framed as an n × 1 column matrix.[6]
Consider a main memory of 16 KB organized as 4-byte blocks, and a direct-mapped cache of 256 bytes with a block size of 4 bytes. Because the main memory holds 16 KB = 2^14 bytes, a minimum of 14 bits is needed to uniquely represent a memory address.
Since each cache block is 4 bytes and each set holds a single line, the total number of sets in the cache is 256/4 = 64.
The incoming address to the cache is divided into bits for offset, index, and tag: the 4-byte block size gives a 2-bit offset, the 64 sets require a 6-bit index, and the remaining 14 - 6 - 2 = 6 bits form the tag.
The sketch below works through a few sample memory addresses and shows which cache line each maps to.
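It is a minimal C sketch, assuming the parameters derived above (14-bit addresses, a 2-bit offset, a 6-bit index, and a 6-bit tag); the sample addresses are illustrative, chosen so that 0x0000, 0x0100, and 0x1F00 share index 0 and therefore compete for the same single cache line:

```c
#include <stdio.h>
#include <stdint.h>

/* Direct-mapped decomposition: 14-bit address, 4-byte blocks (2 offset
 * bits), 64 sets (6 index bits), leaving 14 - 2 - 6 = 6 tag bits. */
#define OFFSET_BITS 2
#define INDEX_BITS  6

int main(void) {
    /* Sample addresses (illustrative): 0x0000, 0x0100 and 0x1F00 share
     * index 0, so they all compete for the same single cache line. */
    uint16_t addrs[] = { 0x0000, 0x0004, 0x00FF, 0x0100, 0x1F00 };

    for (size_t i = 0; i < sizeof addrs / sizeof addrs[0]; i++) {
        unsigned a      = addrs[i];
        unsigned offset = a & ((1u << OFFSET_BITS) - 1);
        unsigned index  = (a >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        unsigned tag    = a >> (OFFSET_BITS + INDEX_BITS);
        printf("addr 0x%04X -> tag 0x%02X, line %2u, offset %u\n",
               a, tag, index, offset);
    }
    return 0;
}
```

Because a direct-mapped cache has one line per set, loading 0x0100 evicts the block for 0x0000 even if every other line is empty; this is a conflict miss.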
In a fully associative cache, the cache is organized into a single cache set with multiple cache lines. A memory block can occupy any of the cache lines. The cache organization can be framed as a 1 × m row matrix.[10]
Consider a main memory of 16 KB organized as 4-byte blocks, and a fully associative cache of 256 bytes with a block size of 4 bytes. Because the main memory holds 16 KB = 2^14 bytes, a minimum of 14 bits is needed to uniquely represent a memory address.
The total number of sets in the cache is 1, and the set contains 256/4 = 64 cache lines, since each cache block is 4 bytes.
The incoming address to the cache is divided into bits for offset and tag only: the 4-byte block size gives a 2-bit offset, and with no index the remaining 12 bits form the tag.
Since any block of memory can be mapped to any cache line, the memory block occupies one of the cache lines chosen by the replacement policy (for example, least recently used), as the sketch below illustrates.
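This C sketch models the lookup under the same assumptions as above (14-bit addresses, 4-byte blocks, 64 lines). The sequential loop stands in for the parallel tag comparison that real hardware performs, and LRU is used as one possible replacement policy:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES   64   /* 256-byte cache / 4-byte blocks */
#define OFFSET_BITS 2

/* With no index field, every line stores the full 12-bit tag
 * (14 address bits minus the 2 offset bits). */
typedef struct {
    bool     valid;
    uint16_t tag;
    unsigned last_used;   /* timestamp driving LRU replacement */
} Line;

static Line cache[NUM_LINES];
static unsigned now;

/* Returns true on a hit; on a miss, installs the block in an empty
 * line if one exists, otherwise evicts the least recently used line. */
bool access_addr(uint16_t addr) {
    uint16_t tag = (uint16_t)(addr >> OFFSET_BITS);
    now++;

    /* The block may reside in any line, so every tag is compared. */
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == tag) {
            cache[i].last_used = now;
            return true;                      /* hit */
        }
    }

    /* Miss: choose a victim line by the replacement policy. */
    int victim = 0;
    for (int i = 0; i < NUM_LINES; i++) {
        if (!cache[i].valid) { victim = i; break; }
        if (cache[i].last_used < cache[victim].last_used) victim = i;
    }
    cache[victim] = (Line){ true, tag, now };
    return false;
}
```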
A set-associative cache is a trade-off between a direct-mapped cache and a fully associative cache.
A set-associative cache can be imagined as an n × m matrix. The cache is divided into n sets, and each set contains m cache lines. A memory block is first mapped onto a set and then placed into any cache line of that set.
The range of caches from direct-mapped to fully associative is a continuum of levels of set associativity. (A direct-mapped cache is one-way set-associative and a fully associative cache with m cache lines is m-way set-associative.)
Many processor caches in today's designs are either direct-mapped, two-way set-associative, or four-way set-associative.[13]
Consider a main memory of 16 KB organized as 4-byte blocks, and a 2-way set-associative cache of 256 bytes with a block size of 4 bytes. Because the main memory holds 16 KB = 2^14 bytes, a minimum of 14 bits is needed to uniquely represent a memory address.
Since each cache block is 4 bytes and the cache is 2-way set-associative, the total number of sets in the cache is 256/(4 × 2) = 32. The incoming address is divided into a 2-bit offset, a 5-bit index for the 32 sets, and the remaining 14 - 5 - 2 = 7 bits of tag.
The sketch below works through a few sample memory addresses, showing which set each maps to; within a set, a block may occupy either cache line.
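It is a minimal C sketch assuming the parameters above (14-bit addresses, a 2-bit offset, a 5-bit index for 32 sets, and a 7-bit tag), with illustrative addresses chosen so that three of them land in set 0 and must share its two ways:

```c
#include <stdio.h>
#include <stdint.h>

/* 2-way set-associative decomposition: 14-bit address, 4-byte blocks
 * (2 offset bits), 32 sets (5 index bits), 14 - 2 - 5 = 7 tag bits. */
#define OFFSET_BITS 2
#define INDEX_BITS  5

int main(void) {
    /* Illustrative addresses: 0x0000, 0x0080 and 0x0100 all map to
     * set 0, so any two can be cached at once but the third evicts one. */
    uint16_t addrs[] = { 0x0000, 0x0080, 0x0100, 0x0084 };

    for (size_t i = 0; i < sizeof addrs / sizeof addrs[0]; i++) {
        unsigned a     = addrs[i];
        unsigned index = (a >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        unsigned tag   = a >> (OFFSET_BITS + INDEX_BITS);
        printf("addr 0x%04X -> tag 0x%02X, set %2u (either of 2 ways)\n",
               a, tag, index);
    }
    return 0;
}
```

Any two of 0x0000, 0x0080, and 0x0100 can be resident at once; accessing the third forces the set's replacement policy to evict one of them.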
Other schemes have been suggested, such as the skewed cache,[14] where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function.[15] Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; LRU tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.[16]
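The sketch below shows what such a pair of index functions might look like in C, reusing the 32-sets-per-way geometry of the 2-way example. The XOR-based skew for way 1 is an illustrative stand-in, not Seznec's published function:

```c
#include <stdint.h>

#define OFFSET_BITS 2
#define INDEX_BITS  5   /* 32 sets per way, as in the 2-way example */

/* Way 0: the conventional direct index. */
unsigned index_way0(uint16_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}

/* Way 1: a skewing hash. XORing the index bits with the next higher
 * address bits is a simple illustrative choice (not Seznec's exact
 * function); two addresses that collide in way 0 because they share
 * index bits will usually differ in these higher bits, and so land
 * on different lines in way 1. */
unsigned index_way1(uint16_t addr) {
    unsigned index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    unsigned high  = (addr >> (OFFSET_BITS + INDEX_BITS))
                     & ((1u << INDEX_BITS) - 1);
    return index ^ high;
}
```

For instance, 0x0080 and 0x0100 both map to index 0 in way 0, but to indexes 1 and 2 in way 1, so the conflict disappears in the skewed way.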
A true set-associative cache tests all the possible ways simultaneously, using something like a content-addressable memory. A pseudo-associative cache tests each possible way one at a time. Hash-rehash caches and column-associative caches are examples of pseudo-associative caches.
In the common case of finding a hit in the first way tested, a pseudo-associative cache is as fast as a direct-mapped cache, but it has a much lower conflict miss rate than a direct-mapped cache, closer to the miss rate of a fully associative cache.[17]
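A minimal C sketch of the probe sequence, reusing the direct-mapped geometry from earlier. The rehash used for the second probe (flipping the top index bit) is in the style of a column-associative cache; the bookkeeping a real design needs, such as a rehash bit per line or swapping a second-probe hit into the fast location, is omitted:

```c
#include <stdbool.h>
#include <stdint.h>

#define OFFSET_BITS 2
#define INDEX_BITS  6   /* 64 lines, as in the direct-mapped example */

typedef struct { bool valid; uint16_t tag; } Line;
static Line cache[1u << INDEX_BITS];

/* Probes ways one at a time: a hit on the first probe is as fast as a
 * direct-mapped lookup; the second probe costs extra latency. */
bool lookup(uint16_t addr, int *probes) {
    unsigned index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint16_t tag   = (uint16_t)(addr >> (OFFSET_BITS + INDEX_BITS));

    *probes = 1;                              /* fast path */
    if (cache[index].valid && cache[index].tag == tag)
        return true;

    /* Rehash: flip the top index bit and try the alternate line.
     * (A real design must also mark blocks stored at their rehash
     * location; that bookkeeping is omitted here.) */
    unsigned rehash = index ^ (1u << (INDEX_BITS - 1));
    *probes = 2;                              /* slow path */
    return cache[rehash].valid && cache[rehash].tag == tag;
}
```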
"The Basics of Cache" (PDF). https://cseweb.ucsd.edu/classes/su07/cse141/cache-handout.pdf ↩
"Cache Placement Policies". Archived from the original on Feb 21, 2020. https://web.archive.org/web/20200221213947/http://web.cs.iastate.edu/~prabhu/Tutorial/CACHE/bl_place.html ↩
"Placement Policies". Archived from the original on August 14, 2020. https://web.archive.org/web/20200814000302/http://fourier.eng.hmc.edu/e85_old/lectures/memory/node4.html ↩
Mattson, R.L.; Gecsei, J.; Slutz, D. R.; Traiger, I (1970). "Evaluation Techniques for Storage Hierarchies". IBM Systems Journal. 9 (2): 78–117. doi:10.1147/sj.92.0078. /wiki/Richard_Mattson ↩
Solihin, Yan (2015). Fundamentals of Parallel Multi-core Architecture. Taylor & Francis. pp. 136–141. ISBN 978-1482211184. 978-1482211184 ↩
"Cache Miss Types" (PDF). http://meseec.ce.rit.edu/eecc551-winter2001/551-1-30-2002.pdf ↩
"Fully Associative Cache". Archived from the original on December 24, 2017. https://web.archive.org/web/20171224054857/http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Memory/fully.html ↩
André Seznec (1993). "A Case for Two-Way Skewed-Associative Caches". ACM SIGARCH Computer Architecture News. 21 (2): 169–178. doi:10.1145/173682.165152. /wiki/Andr%C3%A9_Seznec ↩
C. Kozyrakis. "Lecture 3: Advanced Caching Techniques" (PDF). Archived from the original (PDF) on September 7, 2012. /wiki/Christos_Kozyrakis ↩
Micro-Architecture "Skewed-associative caches have ... major advantages over conventional set-associative caches." http://www.irisa.fr/caps/PROJECTS/Architecture/ ↩