Article: AMD's Mobile Strategy
By: Brett (ggtgp.delete@this.yahoo.com), December 21, 2011 10:36 pm
Room: Moderated Discussions
Seni (seniike@hotmail.com) on 12/21/11 wrote:
---------------------------
>Exophase (exophase@gmail.com) on 12/21/11 wrote:
>---------------------------
>>Seni (seniike@hotmail.com) on 12/21/11 wrote:
>>---------------------------
>>Likewise I don't think programs will often have individual data structures larger
>>than 4GB and needing displacements into the middle of them.
>>
>>And I'd feel pretty bad if I really did have to embed
>
>It just seems incredibly shortsighted to assume that executables will never be
>larger than 4GB. Granted, it'll be a while.
The largest program I know of is Unix, and it's actually a collection of libraries loaded with address-space randomization to hinder attacks.
An iPhone running OSX/iOS with Siri gives you something more powerful than the Star Trek computer.
Centralized 5GB code executables just do not make sense; distributed databases/code do.
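For what it's worth, you can watch that library randomization happen yourself. Here's a little demo I knocked together (my own sketch, Linux/glibc assumed, dladdr is a GNU extension; build with g++ aslr_demo.cpp -ldl). Run it a few times and the base address of libc moves:

// Toy demo: print where the shared library containing printf got loaded.
// With ASLR enabled, the base address changes from run to run.
#include <dlfcn.h>
#include <cstdio>

int main() {
    Dl_info info;
    // dladdr resolves the shared object containing a given address.
    if (dladdr(reinterpret_cast<void*>(&std::printf), &info) != 0)
        std::printf("%s loaded at %p\n", info.dli_fname, info.dli_fbase);
    return 0;
}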
>Occasionally, you might run into some oddball case like a computed jump into a
>12GB lookup table, each entry of which is a block of code.
>There's no reason why machine-generated code would have any particular maximum
>size, so if RAM and caches are adequate for it, you can expect freakish large programs to become more common over time.
You can build a 12GB table with C++ templates, but the compiler and the linker will complain, forcing you to use override flags. It's just not a good idea, and I bet those override flags will still be required 20 years from now.
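To make that concrete, here's roughly what I mean (a sketch of mine, C++14, scaled way down so it actually builds). Crank the entry count toward ~1.6 billion 8-byte entries and you get a >12GB object, "relocation truncated to fit" link errors, and the need for flags like GCC's -mcmodel=medium or -mcmodel=large on x86-64 to escape the default 2GB static-data limit:

// Sketch: a compile-time-generated static table. The count is tiny here
// so it compiles quickly; a 12GB version is where the trouble starts.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <utility>

template <std::size_t... I>
constexpr std::array<std::uint64_t, sizeof...(I)>
make_table(std::index_sequence<I...>) {
    // Each entry is just a function of its index.
    return {{ static_cast<std::uint64_t>(I * I)... }};
}

// ~1.6e9 entries * 8 bytes would be ~12GB, far past the small code
// model's 2GB static-data limit -- hence the override flags.
constexpr auto kTable = make_table(std::make_index_sequence<1024>{});

int main() {
    std::printf("%llu\n", static_cast<unsigned long long>(kTable[10]));
    return 0;
}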
20 years from now a server CPU die will have ~256 cores with ~16GB of embedded RAM each. (The idea of external RAM will have died a decade before; we will see the transition start in ~3 years.)
Large code/data programs will use multiple cores, in the worst case using the extra CPUs for paging. (One CPU runs; the extra CPUs and their RAM page in more address space.)
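To sketch what I mean by that last point (purely illustrative, every name here is invented by me, and software threads stand in for cores): one thread computes inside a small local window while a helper thread demand-pages blocks in from a bigger backing store.

// Toy model: the "compute core" only sees a one-block window; the
// "pager core" copies blocks in from its own larger memory on demand.
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <mutex>
#include <thread>
#include <vector>

constexpr std::size_t kBlock = 4096;

struct Pager {
    std::vector<std::uint8_t> backing;   // the pager core's big RAM
    std::vector<std::uint8_t> window = std::vector<std::uint8_t>(kBlock);
    std::mutex m;
    std::condition_variable cv;
    std::size_t wanted = SIZE_MAX, loaded = SIZE_MAX;
    bool stop = false;

    void serve() {                       // runs on the "pager" thread
        std::unique_lock<std::mutex> lk(m);
        while (true) {
            cv.wait(lk, [&] { return stop || wanted != loaded; });
            if (stop) return;
            std::memcpy(window.data(), backing.data() + wanted * kBlock, kBlock);
            loaded = wanted;             // window now holds the block
            cv.notify_all();
        }
    }

    std::uint8_t* fetch(std::size_t block) {  // called by the compute thread
        std::unique_lock<std::mutex> lk(m);
        wanted = block;                  // "page fault": ask the pager
        cv.notify_all();
        cv.wait(lk, [&] { return loaded == wanted; });
        return window.data();
    }
};

int main() {
    Pager p;
    p.backing.assign(16 * kBlock, 7);    // pretend this is huge
    std::thread pager(&Pager::serve, &p);
    std::uint8_t v = p.fetch(3)[0];      // demand-load block 3
    { std::lock_guard<std::mutex> lk(p.m); p.stop = true; }
    p.cv.notify_all();
    pager.join();
    return v == 7 ? 0 : 1;
}

Real hardware would obviously do this at the memory-controller level rather than with mutexes, but the division of labor is the same: one core runs, the others serve its address space.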
I welcome other ideas of what the future holds.