Look-Through vs Look-Aside


Look-through and look-aside are the two read policies of a cache architecture.

First, let's look at the difference between them.

(1) LOOK-THROUGH policy: When the processor wants some content, it first looks in the cache. On a cache hit, it gets the content from the cache. On a cache miss, it searches the next level (L2, if present) and only then goes to main memory, reads the block from main memory, and copies the block into the cache for future accesses.

Here is how to calculate the access time for a single cache level:

h = hit rate

c = cache access time

m = main memory access time

Access time = h * c + (1 - h) * (c + m)

Using the figures from the question in the comments below:

for L1: 2 + 10 = 12 ns

for L2 (through L1): 12 + 5 + 100 = 117 ns

for main memory (through L1 and L2): 117 + Mem ns, where Mem is the main-memory access time (the question does not give it)

The weights follow from the hit rates: 0.8 for an L1 hit, 0.2 * 0.9 = 0.18 for an L1 miss that hits L2, and 0.2 * 0.1 = 0.02 for a miss in both levels.

Access time = (0.8 * 12) + (0.18 * 117) + (0.02 * (117 + Mem)) ns
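
A minimal sketch of this look-through arithmetic in Python (the figures come from the question in the comments; the main-memory latency mem is left as a parameter since the question does not give it, and the 200 ns in the example call is hypothetical):

```python
# Look-through effective access time for the two-level hierarchy above.
# Figures are from the question below; main-memory latency (mem) is a
# free parameter because the question does not specify it.

def look_through_eat(mem: float) -> float:
    l1 = 2 + 10                      # L1 access + L1<->CPU transfer = 12 ns
    l2 = l1 + 5 + 100                # miss L1, then L2 access + transfer = 117 ns
    memory = l2 + mem                # miss L1 and L2, then main memory

    p_l1 = 0.8                       # probability of an L1 hit
    p_l2 = (1 - 0.8) * 0.9           # miss L1, hit L2 = 0.18
    p_mem = (1 - 0.8) * (1 - 0.9)    # miss both = 0.02

    return p_l1 * l1 + p_l2 * l2 + p_mem * memory

# Hypothetical: if main memory took 200 ns,
# 0.8*12 + 0.18*117 + 0.02*317 ≈ 37.0 ns
print(look_through_eat(200))
```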

(2) LOOK-ASIDE policy: The processor looks for the content in the cache and in main memory simultaneously.

Look-aside requires extra signaling on every access (to both the cache and main memory), and when the content is found in the cache, a cancel signal must be sent to main memory. This is the biggest disadvantage of the look-aside policy.

Here, to calculate the access time, you have to account for the signaling time of every operation.
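
For contrast, here is a minimal sketch of the look-aside arithmetic under a simplified model: the cache and main-memory probes start in parallel, so a miss costs only the memory latency, and the signaling/cancel overhead mentioned above is ignored. The figures in the example call are hypothetical:

```python
# Look-aside effective access time for a single cache level (simplified
# model: cache and memory are probed in parallel, so a miss pays only the
# memory latency; cancel-signal overhead is ignored).

def look_aside_eat(h: float, c: float, m: float) -> float:
    return h * c + (1 - h) * m

# Hypothetical figures: 80% hit rate, 12 ns cache path, 200 ns memory path.
print(look_aside_eat(0.8, 12, 200))   # 0.8*12 + 0.2*200 ≈ 49.6 ns
```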

Note: Most caches use the look-through policy because, nowadays, cache hit ratios exceed 95%, so most of the time the content is available in the cache.


Comments

  • Hemanshu Sethi, over 1 year ago

    Suppose there are 2 caches, L1 and L2:

    • L1
      • Hit rate of L1 = 0.8
      • Access time of L1 = 2 ns
      • Transfer time between L1 and the CPU = 10 ns
    • L2
      • Hit rate of L2 = 0.9
      • Access time of L2 = 5 ns
      • Transfer time between L2 and L1 = 100 ns

    What will be the effective access time in the case of the look-through and look-aside policies?

  • Peter Cordes, about 4 years ago
    And more importantly, outer caches or memory run at lower frequency, and/or are shared with other cores, so having inner caches filter the bandwidth of requests is an important property. You want to save your L2 / L3 / mem request tracking capacity for tracking in-flight L1 misses that you're waiting for (memory-level parallelism is a thing with modern L1d caches supporting hit-under-miss and miss-under-miss.)
  • Peter Cordes, about 4 years ago
    Also, even for private L2 caches, it would need extra read ports to support starting an access every time L1d or L1i did, on top of handling HW prefetch into L2 in the background! For example, a modern x86 CPU can do 2 reads from L1d cache per clock, and one from L1i cache. With a unified L2 cache, that would be 3 requests per clock, totally defeating the purpose of having split L1 caches.
  • Peter Cordes, about 3 years ago
    This question appears to be about hardware cpu caches (I added that tag based on the question body). CPU caches are always transparent, so the hardware still maintains coherency and consistency even if it starts probing L2 or main mem before knowing whether L1 hit or missed. The diagrams in your linked slides are useful, but the point about the software having to manually maintain the cache on read misses only applies to software / application caches, not something as fundamental as a CPU cache of main memory.
  • jouell, about 2 years ago
    simultaneously?