By: hcl64 (mario.smarq@gmail.com), April 28, 2012 2:23 pm
Room: Moderated Discussions
anon (anon@anon.com) on 4/28/12 wrote:
---------------------------
>hcl64 (mario.smarq@gmail.com) on 4/28/12 wrote:
>---------------------------
>>Paul A. Clayton (paaronclayton@gmail.com) on 4/20/12 wrote:
>>---------------------------
>>>Kira (kirsc@aeterna.ru) on 4/20/12 wrote:
>>>---------------------------
>>>[snip]
>>>>What was the purpose of using a shared decoder even
>>>>supposed to be? Is the size/power overhead of a pair of
>>>>4-wide decoders really that large in a modern
>>>>desktop/server CPU?
>>>>
>>
>>If I understand this correctly, the *decoder* is NOT shared in the sense that
>>it only crunches from 1 thread at a time.
>>
>>http://www.realworldtech.com/forums/index.cfm?action=detail&id=128835&threadid=128602&roomid=2
>>
>>I believe it uses a scheme of interleaved multithreading (1 inst from each thread,
>>but in consecutive cycles) mixed with block or switch-on-event multithreading (several
>>insts from one thread before switching to the other). At no point is it executing from more than 1 thread.
>
>The decoder is shared. There is no nitpicking of semantics that will allow you
>to say the decoder is not shared. And definitely not within the context of the thread you are replying to.
>
What I was trying to say is that it is not SMT in any case... and if I read and hear correctly, at no point does the decoder process instructions from more than 1 thread at a time.
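Just to make the kind of policy I have in mind concrete, here is a tiny sketch (my own illustration with made-up details, not AMD's actual arbitration logic) of a time-shared decoder: every cycle it picks exactly one thread, normally alternating, but it can stay on a thread for a short burst or switch early on an event such as an empty instruction buffer.

/* Hypothetical sketch of a time-shared decoder arbiter: at most one
 * thread feeds the decoder in any given cycle.  Interleaved by default,
 * staying on a thread while it has a burst, and switching early when the
 * current thread's instruction buffer runs dry.  Illustrative only --
 * not AMD's real policy. */
#include <stdbool.h>
#include <stdio.h>

struct thread_state {
    bool has_insts;   /* thread's instruction buffer holds something to decode */
    int  burst_left;  /* cycles left in this thread's current decode burst     */
};

static int pick_thread(struct thread_state t[2], int cur)
{
    int other = cur ^ 1;

    if (!t[cur].has_insts && t[other].has_insts)
        return other;                         /* switch-on-event: buffer empty */
    if (t[cur].has_insts && t[cur].burst_left > 0)
        return cur;                           /* block MT: finish the burst    */
    return t[other].has_insts ? other : cur;  /* interleave: alternate threads */
}

int main(void)
{
    struct thread_state t[2] = { { true, 2 }, { true, 0 } };
    int cur = 1;

    for (int cycle = 0; cycle < 8; cycle++) {
        cur = pick_thread(t, cur);            /* exactly one thread per cycle  */
        if (t[cur].burst_left > 0)
            t[cur].burst_left--;
        printf("cycle %d: decode thread %d\n", cycle, cur);
    }
    return 0;
}

The point of the sketch is only that the decoder is time-shared, never fed by two threads in the same cycle.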
>The answer is yes: fast, wide, low latency x86 decoders are power hungry. Intel
>has been trying to reduce / remove the decoder from the critical paths since the
>Pentium 4, with significant additional complexity. Also AMD and Intel both share
>decoders among threads/cores. So we have empirical evidence.
>
>>
>>>>Perhaps a single beefy 4-issue or 6-issue core with SMT
>>>>would have been a smarter move.
>>>
>>
>>A 6-wide issue processor for x86 is simply a pipe dream... until there are
>>ways to considerably break the "strong dependency model" of x86 it will be out of
>>reach. 4 may already be too much (BD is a false 4-wide issue), since even the strongest
>>Intel u-arch doesn't average above 2 IPC (instructions per clock)...
>
>There is a grain of truth to that, but "average IPC" is largely parroted for the wrong reasons.
>
>When there *is* parallelism, you want to take advantage of it. If you can execute
>33% of the time at 2 IPC, and 67% of the time at 0.5 IPC, then you're averaging
>1 IPC. But it does not mean your decoders are a waste of space.
>
Neither did I imply that. Actually, to run two threads well on the same front end, be it Intel's SMT or AMD's scheme, I think a 5th pipe would be welcome. They haven't done it yet, perhaps because it might be very hard to accomplish efficiently.
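For what it's worth, the time-weighted arithmetic in the quote above works out as below; note the 33%/67% weights are fractions of time, not fractions of instructions.

/* Time-weighted average IPC from the quoted example: 33% of the time at
 * 2 IPC, 67% of the time at 0.5 IPC. */
#include <stdio.h>

int main(void)
{
    double avg_ipc = 0.33 * 2.0 + 0.67 * 0.5;   /* close to 1 IPC */
    printf("average IPC ~ %.2f\n", avg_ipc);
    return 0;
}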
>Nobody should use the "average IPC" statistic without deeply understanding what they are talking about.
>
>> makes one
>>wonder, under the law of exponentially diminishing returns, whether even a *3-wide issue*
>>like NH/SNB/IB makes sense (perhaps that is why Intel has
>
>Core2, NH, WM, SNB, and IB are 4-wide issue.
>
>>SIMD and FP on the same ports as INT... and SMT)
>>http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937&p=6
>>
>>And no, NH/SNB/IB are NOT true 6-wide issue u-archs; they only dispatch 4 uops per cycle,
>>so 4 is the theoretical sustainable max, but only under certain
>
>Ah, you are using IBM terminology of dispatch to back end, and issue to execution units. Fair enough.
>
:)
>I guess people try to claim they are wider than they are because of instruction
>fusion and such, but of course that does not change the actual width, only perhaps the effective width.
>
>>conditions, because they
>>only have 3 uop exec ports and there are considerable dependencies to attend to (the average is always much, much less).
>
>Issue/dispatch width has nothing to do with what the microarchitecture can execute
>on average, of course (as I said above).
>
>>http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937&p=5
>>
>>
>>>As discussed here earlier, the motive seems to have been
>>>to allow substantial sharing between threads in a high
>>>frequency design without the data cache issues that the
>>>early Pentium4 SMT suffered.
>>
>>As above, the only things shared between threads in BD are the FlexFPU and the L2..
>
>Are we looking at the same Bulldozer? BD shares the entire front end, L1I, branch
>predictors, ITLBs, fetch, decode and issuing logic. As well as FPU and L2.
>
Well, it's semantics to settle what is SMT and what is not... actually, when it comes to BD it is very hard to pinpoint a specific multithreading scheme for the shared resources. BD uses almost every multithreading scheme I can think of (SMP, SMT, CMT, interleaved, block, switch-on-event).
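To put my reading in one place, here is roughly how I would classify the sharing inside one BD module (just my own illustrative breakdown based on this discussion, not an official AMD description):

/* Rough per-module resource map for Bulldozer as I read it -- an
 * illustrative classification only, not an official AMD description. */
#include <stdio.h>

typedef enum {
    PER_CORE,        /* duplicated for each of the two integer cores (CMT) */
    TIME_SHARED,     /* one thread at a time (interleaved / block / SoE)   */
    SMT_SHARED       /* both threads active in the same cycle              */
} sharing_t;

static const struct { const char *resource; sharing_t how; } bd_module[] = {
    { "L1I, branch predictors, ITLB, fetch", TIME_SHARED },
    { "Decode",                              TIME_SHARED },
    { "Integer schedulers, ALUs, L1D",       PER_CORE    },
    { "FlexFPU",                             SMT_SHARED  },
    { "L2 cache",                            SMT_SHARED  },
};

int main(void)
{
    static const char *names[] = { "per-core", "time-shared", "SMT-shared" };
    for (unsigned i = 0; i < sizeof bd_module / sizeof bd_module[0]; i++)
        printf("%-38s %s\n", bd_module[i].resource, names[bd_module[i].how]);
    return 0;
}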
>>not even a remote resemblance to P4, more so because BD's pipeline length is only
>>15 stages, and the first 2 or 3 stages are decoupled and can run ahead, so like
>>SNB's L0, it has the property of pipeline stage compression, in this case to 13 or 12.
>
>How does this pipeline stage compression work?
>
The same way as the L0, but from the IBBs... if fetch is decoupled and can run ahead, chances are that when an instruction is needed it is already in the IBB.
The L0 has the advantage of being much larger, of being a cache, and of sitting closer to the execution pipes in the pipeline... so the stage compression or contraction effect is larger and lasts longer in terms of consecutive clock cycles.
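A toy way to picture that effect (stage counts are only illustrative, matching the 15 -> 12 numbers above): if the decoupled fetch stages have already put the bytes in the IBB by the time the rest of the pipe needs them, those stages drop out of the visible latency.

/* Toy model of "stage compression" via a decoupled, run-ahead fetch:
 * if the instruction bytes are already sitting in the IBB (or in SNB's
 * L0 uop cache), the front fetch stages are hidden and the effective
 * pipeline looks shorter.  Stage counts here are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_STAGES  15   /* nominal BD pipeline length per the post   */
#define FETCH_STAGES   3   /* decoupled front stages that can run ahead */

static int effective_depth(bool hit_in_ibb)
{
    /* On an IBB hit the fetch stages were done ahead of time, so only
     * the remaining stages contribute to the latency the instruction
     * actually sees. */
    return hit_in_ibb ? TOTAL_STAGES - FETCH_STAGES : TOTAL_STAGES;
}

int main(void)
{
    printf("IBB miss: %d stages\n", effective_depth(false)); /* 15 */
    printf("IBB hit : %d stages\n", effective_depth(true));  /* 12 */
    return 0;
}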