Article: AMD's Mobile Strategy
By: Rohit (.delete@this..), December 24, 2011 7:52 am
Room: Moderated Discussions
hobold (hobold@vectorizer.org) on 12/24/11 wrote:
---------------------------
>Rohit (@.) on 12/23/11 wrote:
>---------------------------
>[...]
>
>I have not followed the evolution of DRAM since the days of EDO ("Extended Data
>Out"), and even then my grasp on the details was loose at best.
>
>But I am under the impression that main memory is still accessed in bursts today,
>i.e. with some granularity. And that the bits are organized in fairly large blocks,
>which back then were called "pages" (there was also a concept of "rows" and "columns"
>that imposed further constraints, but I never got to the bottom of this).
>
>The number of transactions in flight used to be so limited that it wasn't possible
>to saturate the memory bus with random accesses even with very deep buffers and very deep pipelining.
>
>
>Did this situation change significantly? Because if it didn't, I think "locality"
>would still be topmost on the list of rules that the term "regularity" implies, no?
---------------------------

By regularity, what I really meant was predictability of locations. If you were chasing a linked list, even one whose nodes were cacheline-sized, you would not reach anywhere near peak performance, which you can with array-like accesses.
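
To make the distinction concrete, here is a minimal C sketch (illustrative only, not from the original discussion; the 64-byte line size, node layout, and element count are assumptions). The pointer chase serializes on each load, because the next address is not known until the current load returns, while the array sum exposes every address up front, so prefetchers and multiple outstanding misses can approach peak bandwidth.

/* Contrast of the two access patterns discussed above:
 * dependent pointer chasing vs. predictable array-like streaming. */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* One node padded to an assumed 64-byte cache line. */
struct node {
    struct node *next;
    char pad[64 - sizeof(struct node *)];
};

/* Latency-bound: each load depends on the previous one, so the memory
 * system cannot issue the next request until the current one returns. */
size_t chase(struct node *head) {
    size_t hops = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        hops++;
    return hops;
}

/* Bandwidth-bound: addresses are known in advance, so the hardware
 * prefetcher and many outstanding requests can keep the bus busy. */
long sum(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void) {
    enum { N = 1 << 20 };

    /* Build a simple linked list (laid out contiguously here; a real
     * benchmark would shuffle the link order to defeat prefetching). */
    struct node *nodes = malloc(N * sizeof *nodes);
    long *array = malloc(N * sizeof *array);
    if (!nodes || !array) return 1;
    for (size_t i = 0; i < N; i++) {
        nodes[i].next = (i + 1 < N) ? &nodes[i + 1] : NULL;
        array[i] = (long)i;
    }

    printf("hops = %zu, sum = %ld\n", chase(&nodes[0]), sum(array, N));
    free(nodes);
    free(array);
    return 0;
}

Timing the two loops over the same footprint shows the gap: the array loop runs near the machine's streaming bandwidth, while the (shuffled) chase is limited to roughly one cache line per memory latency.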