Slide 61 of 81
haiyuem

We often think of DRAM access latency as constant, but DRAM actually has a "row locality" similar to cache locality: if the currently open row is accessed again, the access is faster than opening (activating and charging) a different row.

wooloo

Is this row locality exploitable in software? Or are improvements from targeting it too minor to be notable?

wooloo

(As opposed to optimizing cache access patterns)
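One way to see how this could matter to software: row-buffer behavior depends on the address stream, so streaming accesses tend to hit the open row while large strides force a precharge/activate on every access. Below is a minimal, hypothetical model of a single DRAM bank with an open-page policy (assumed 8 KiB rows, one bank, no refresh, no channel/bank interleaving), just to illustrate the hit/miss pattern, not any real controller:

```python
# Toy model of one DRAM bank with an open-page policy, illustrating how
# access patterns map to row-buffer hits vs. misses.
# Assumptions (illustrative only): 8 KiB rows, one bank, no refresh.

ROW_BYTES = 8 * 1024  # assumed row (page) size


def row_hits(addresses, row_bytes=ROW_BYTES):
    """Count row-buffer hits and misses for a sequence of byte addresses."""
    open_row = None
    hits = misses = 0
    for addr in addresses:
        row = addr // row_bytes
        if row == open_row:
            hits += 1    # row already open: fast column access
        else:
            misses += 1  # precharge + activate a new row: slow
            open_row = row
    return hits, misses


# Streaming through 64 KiB in 64-byte lines: 8 activations, 1016 hits.
print(row_hits(range(0, 64 * 1024, 64)))        # -> (1016, 8)
# Striding by exactly one row per access: every access is a row miss.
print(row_hits(range(0, 1024 * ROW_BYTES, ROW_BYTES)))  # -> (0, 1024)
```

Under this model, the same number of DRAM accesses can differ substantially in cost depending only on their ordering, which suggests row locality is at least in principle a software-visible effect, though on real systems it is entangled with caching, prefetching, and the controller's address-interleaving scheme.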

thread17

To add to @haiyuem: if the same row is accessed again soon after, its cells still hold most of their charge, so the access can be performed faster. If too much charge has leaked (for instance, because the second access comes long after the first), the subsequent access no longer gets this benefit. I came across a paper that exploits this row locality to reduce DRAM latency by adding a table to the memory controller that tracks recently accessed rows.
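A rough sketch of the mechanism described above: the controller keeps a small table of recently activated rows, and a row found in the table can be activated with reduced timing parameters because its cells are still highly charged. The table size and LRU replacement here are my own illustrative assumptions (a real design would also invalidate entries after a charge-leakage deadline, which this sketch omits):

```python
# Hypothetical sketch of a "recently accessed rows" table in the memory
# controller. A hit means the row was activated recently enough that the
# controller could apply reduced (fast) timings. Capacity and LRU policy
# are illustrative assumptions, not taken from any specific design.

from collections import OrderedDict


class RecentRowTable:
    def __init__(self, capacity=32):
        self.capacity = capacity
        self.rows = OrderedDict()  # row id -> None, ordered by recency

    def access(self, row):
        """Record an activation; return True if fast timings could apply."""
        fast = row in self.rows
        if fast:
            self.rows.move_to_end(row)  # refresh LRU position
        else:
            if len(self.rows) >= self.capacity:
                self.rows.popitem(last=False)  # evict least recently used
            self.rows[row] = None
        return fast


table = RecentRowTable(capacity=2)
print(table.access(7))  # False: first activation, normal timings
print(table.access(7))  # True: re-activated while still tracked
```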

dishpanda

I have a question related to activation: pretend the system had no cache and the program (for some reason) kept reading from the same memory location. At what point would the row have to be re-activated?
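One relevant bound for the question above: even if repeated reads keep a row open (each activation restores the row's charge through the sense amplifiers), periodic refresh still closes it, forcing a re-activation afterwards. Standard JEDEC DDR3/DDR4 parameters put the retention window at 64 ms with 8192 refresh commands spread across it, so as a back-of-the-envelope figure:

```python
# Back-of-the-envelope refresh timing for a typical DDR3/DDR4 device.
# 64 ms retention window and 8192 refresh commands per window are the
# common JEDEC values; check a specific part's datasheet to confirm.

RETENTION_MS = 64
REFRESH_COMMANDS = 8192

tREFI_us = RETENTION_MS * 1000 / REFRESH_COMMANDS
print(f"average refresh interval: {tREFI_us:.4f} us")  # ~7.8 us
```

So, under these assumptions, a refresh arrives roughly every 7.8 microseconds on average, and the open row would need re-activation at least that often, regardless of the access pattern.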
