lexicologist

**Important for this slide:** there are two kinds of locality in cache access. Spatial locality is the use of data located near each other, and is exploited by increasing the cache line size. Temporal locality is the reuse of the same data within a relatively short time.

The important one here is spatial locality: if a program loads address X, it is likely to load subsequent addresses as well. However, as cache line size increases, so does the chance of false sharing.
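
A minimal sketch of spatial locality, assuming 64-byte cache lines and 4-byte ints (the array and stride are illustrative, not from the slide): the sequential loop uses every word of each fetched line, so at most 1 in 16 accesses can miss, while the strided loop uses one word per line and can miss on every access.

```cpp
#include <cstddef>
#include <vector>

int main() {
    std::vector<int> x(1 << 24);
    long sum = 0;

    // Sequential scan: x[i]'s neighbors arrive "for free" on the same line.
    for (std::size_t i = 0; i < x.size(); i++)
        sum += x[i];

    // Stride of one cache line (16 ints * 4 bytes = 64 bytes): each access
    // touches a new line, so there is no spatial locality to exploit.
    for (std::size_t i = 0; i < x.size(); i += 16)
        sum += x[i];

    return sum == 0 ? 0 : 1;  // keep the loops from being optimized away
}
```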

haiyuem

A larger cache line size better exploits spatial locality, but increases the possibility of false sharing.

wooloo

To expand on the previous comments, it seems larger cache lines increase the possibility of false sharing, since it becomes more likely that the pieces of data we're operating on are smaller than a cache line, or that two processors will want data on the same line.

mkarra

I'm sort of confused as to what a true sharing miss means. I understand what a false sharing miss means, but when would we ever have a true sharing miss?

haiyuem

@mkarra My understanding of a true sharing miss: when two processors try to write to/read from exactly the same place, one must wait for the other to finish its operation and then fetch the cache line from it - it can't just use its local copy.

orz

@mkarra Just to add: a true sharing miss is one you cannot avoid by decreasing the cache line size - the processors want to access exactly the same data.
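
To make that concrete, here is a minimal sketch of true sharing (the counter and loop bounds are illustrative): both threads update the *same* atomic, so ownership of its cache line must ping-pong between cores no matter how small the line is.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<long> shared{0};      // the one location both threads touch
    auto work = [&] {
        for (int i = 0; i < 1'000'000; i++)
            shared.fetch_add(1);      // every write steals the line from the other core
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::printf("%ld\n", shared.load());  // 2000000: the sharing is real, not false
}
```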

mziv

Conversely, to build off of ^that discussion, a false sharing miss means that two processors want to read/write different slots in the same cache line. They shouldn't need to fight for access, because in reality they're writing to different memory, but they do, because processors can't load anything smaller or more granular than a cache line. Reducing the size of the cache line helps with this problem because processors become less likely to collide when writing memory that is close together but not the same memory.
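
A minimal sketch of that collision, assuming 64-byte cache lines (the struct layout and iteration counts are illustrative): in Packed the two counters share a line, so every write invalidates the other core's copy; in Padded, alignas(64) gives each counter its own line and the coherence traffic disappears.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

struct Packed {                    // a and b land on the same 64-byte line
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

struct Padded {                    // alignas(64) forces each onto its own line
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <typename T>
void hammer(T& c) {                // two threads, each writing its own counter
    std::thread t1([&] { for (int i = 0; i < 10'000'000; i++) c.a++; });
    std::thread t2([&] { for (int i = 0; i < 10'000'000; i++) c.b++; });
    t1.join();
    t2.join();
}

int main() {
    Packed packed;
    Padded padded;
    hammer(packed);                // the shared line ping-pongs between two cores
    hammer(padded);                // same work, typically much faster
    std::printf("%ld %ld\n", packed.a.load(), padded.b.load());
}
```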

arkhan

Would one way to decrease the chances of false sharing be to give the threads that access nearby data an affinity for the same processor?
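
For what it's worth, a minimal sketch of that idea on Linux/glibc, using the pthread_setaffinity_np extension (the core number is arbitrary). Pinning the threads that share a line to the same core keeps the line in one cache, but it also stops those threads from running in parallel, so it's a trade-off rather than a fix.

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE        // CPU_SET and pthread_setaffinity_np are glibc extensions
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>

int main() {
    std::thread worker([] {
        // ... work on data shared with a neighboring thread ...
    });

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);      // restrict the worker to core 0
    if (pthread_setaffinity_np(worker.native_handle(), sizeof(set), &set) != 0)
        std::perror("pthread_setaffinity_np");

    worker.join();
}
```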

haofeng

In the figures one can observe that cold misses, capacity misses, and conflict misses all decrease as cache lines grow bigger. This comes from better exploiting spatial locality.

There are two types of sharing misses: true sharing and false sharing. True sharing refers to two processors accessing exactly the same memory location. False sharing means the two processors write to different memory locations that happen to lie on the same cache line, so one processor's write invalidates the entire line in the other's cache. With a larger cache line size, it is more likely that the two processors' target locations fall on the same line, so false sharing misses increase. Can anyone explain why true sharing misses seem to decrease as cache lines become larger?

tspint

How does one measure the miss rate or type of miss in the cache?

haiyuem

@tspint Usually there's a hardware counter on chip that counts the different types of misses.
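
On Linux, for example, `perf stat -e cache-references,cache-misses ./prog` reads such counters, and `perf c2c` is designed specifically to detect false sharing (the exact event names vary by CPU).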
