Causality

In Sniper (as in Graphite, from which Sniper is derived), each memory access is simulated to completion in a single function call. This means that, inside the memory subsystem, time advances during the simulation of one memory access and is then potentially set backwards to start the simulation of the next memory access. As a result, time inside the memory subsystem does not always advance monotonically, a property known affectionately as the "fluffy time" problem.
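
To make this concrete, here is a minimal, self-contained sketch (with hypothetical names and illustrative latencies, not Sniper's actual code) of how simulating each access to completion in one call leads to non-monotonic time inside the memory subsystem:

    #include <cstdint>
    #include <cstdio>

    typedef uint64_t Time;

    // Simulate one memory access to completion and return its completion time.
    // The latencies are illustrative only.
    Time simulateMemoryAccess(Time t_issue, bool l1_hit)
    {
        Time t = t_issue;
        t += 3;               // L1 access latency
        if (!l1_hit)
            t += 100;         // miss penalty down to DRAM
        return t;
    }

    int main()
    {
        // Core 0 issues an access at cycle 1000 that misses and completes at cycle 1103.
        Time done0 = simulateMemoryAccess(1000, false);
        // Core 1, lagging behind in simulated time, then issues an access at cycle 980:
        // from the memory subsystem's point of view, time has just moved backwards.
        Time done1 = simulateMemoryAccess(980, true);
        std::printf("completion times: %llu, %llu\n",
                    (unsigned long long)done0, (unsigned long long)done1);
        return 0;
    }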

Because of this, the contention queue model can only be an approximation, as it does not account for requests arriving out of order (the only state in this model is the last time the resource became available). The history-list queue model improves on this by keeping a list of previous times when the resource was in use. When a request arrives with an earlier timestamp, that timestamp can be looked up in the history list, and the model can determine whether the resource was free at that time and, if not, the earliest time at which the resource becomes free, which determines the queueing delay.
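
The sketch below illustrates such a history-list queue model; the class and method names are hypothetical and simplified, and do not correspond to Sniper's actual source.

    #include <cstdint>
    #include <iterator>
    #include <map>

    typedef uint64_t Time;

    class HistoryListQueueModel
    {
      private:
        // History of busy intervals already granted: start time -> end time.
        std::map<Time, Time> m_busy;

      public:
        // Compute the queueing delay for a request arriving at t_arrival that
        // occupies the resource for t_service, and record the resulting busy
        // interval. Because the full history is kept, a request with an
        // earlier timestamp than previously handled ones can still be placed
        // into a free slot at its own point in simulated time.
        Time getQueueDelay(Time t_arrival, Time t_service)
        {
            Time t_start = t_arrival;

            // If the arrival falls inside an existing busy interval, wait
            // until that interval ends.
            std::map<Time, Time>::iterator it = m_busy.upper_bound(t_start);
            if (it != m_busy.begin())
            {
                std::map<Time, Time>::iterator prev = std::prev(it);
                if (prev->second > t_start)
                    t_start = prev->second;
            }
            // Push the start time past any further busy intervals until the
            // request fits into a free slot of length t_service.
            while (it != m_busy.end() && it->first < t_start + t_service)
            {
                t_start = it->second;
                ++it;
            }

            m_busy[t_start] = t_start + t_service;   // record the new busy interval
            return t_start - t_arrival;              // queueing delay for this request
        }
    };

By contrast, a contention-style model that only remembers the last time the resource became available has no history to consult when a request arrives with an earlier timestamp, and can therefore only approximate the delay such a request would have seen.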

Of course, even the history-list queue model cannot handle actual causality errors, which occur when requests that are earlier in simulated time affect later requests, but are themselves simulated later in wall-clock time (because they were generated by a core that was lagging behind). This is why the Graphite developers have moved to their cycle-level mode; we have found, however, that the fluffy-time approach works well enough for memory subsystem trade-off studies, while yielding significantly faster simulation speeds.