mem: Split the hit_latency into tag_latency and data_latency
author     Sophiane Senni <sophiane.senni@gmail.com>
           Wed, 30 Nov 2016 22:10:27 +0000 (17:10 -0500)
committer  Sophiane Senni <sophiane.senni@gmail.com>
           Wed, 30 Nov 2016 22:10:27 +0000 (17:10 -0500)
commit     ce2722cdd97a31f85d36f6c32637b230e3c25c73
tree       72993532267d3f1f99e8519be837dd7c523a722f
parent     047caf24ba9a640247b63584c2291e760f1f4d54
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. the "sequential_access"
parameter is set to "False", tags and data are accessed in parallel.
Therefore, the hit latency is the maximum of tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. the "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit latency
is the sum of tag_latency and data_latency.
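
For illustration, a minimal sketch of how the split latencies might be
set in a Python cache configuration (parameter names follow
src/mem/cache/Cache.py after this change; the size, associativity and
MSHR values are arbitrary examples):

    from m5.objects import Cache

    class L1DCache(Cache):
        size = '32kB'
        assoc = 4
        # Latencies are expressed in cycles of the cache's clock domain.
        tag_latency = 2        # tag array access
        data_latency = 2       # data array access
        response_latency = 2
        mshrs = 4
        tgts_per_mshr = 20
        # sequential_access defaults to False: tags and data are looked
        # up in parallel, so a hit costs max(tag_latency, data_latency)
        # = 2 cycles. With sequential_access = True it would cost
        # tag_latency + data_latency = 4 cycles.
        sequential_access = False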

Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
15 files changed:
configs/common/Caches.py
configs/common/O3_ARM_v7a.py
configs/example/arm/devices.py
configs/example/memcheck.py
configs/example/memtest.py
configs/learning_gem5/part1/caches.py
src/mem/cache/Cache.py
src/mem/cache/base.cc
src/mem/cache/base.hh
src/mem/cache/tags/Tags.py
src/mem/cache/tags/base.cc
src/mem/cache/tags/base.hh
src/mem/cache/tags/base_set_assoc.hh
src/mem/cache/tags/fa_lru.cc
src/mem/cache/tags/fa_lru.hh