Organizing Windows for Performance


Test Setup

The three drive models I decided to focus on were a Western Digital 36400 (6.4GB) UDMA/33 drive, an IBM 75GXP (45GB) UDMA/100 drive and a Western Digital 21600 (1.6GB) PIO 4 drive, with the main focus on the first two. My assumption, however poor it might be, is that most users today have at least a UDMA/33 drive. However, just to provide an additional data point, I also included the older Western Digital 1.6GB PIO 4 drive. Since that drive is so small, the only way I could effectively test with the same size swap file was to put the IBM 75GXP on the second IDE channel and locate the swap file there. As you will see, this configuration did not affect the base performance to any great degree.

All other components in the tests were identical:

  • AOpen AK73 Pro(A) w/KT133A chipset
  • Athlon 1.2GHz at 100MHz FSB
  • 512MB Crucial Technology PC133 SDRAM at CL3
  • Diamond Viper V770 Ultra (TNT2) w/ 32MB
  • 16-bit color at 1024×768 resolution
  • Windows 2000 Professional with default Windows 2000 drivers
  • BIOS set to ‘Turbo Defaults’

Note that I could have set the BIOS parameters more aggressively and run at a 133MHz FSB; however, my intent here is not to show the optimal performance of the system with every component and setting tweaked. Instead, the intent is to look at the differences between various IDE configurations. I did maximize the RAM because I wanted to minimize the effect of the swap file on performance. With today's memory prices, it is hard to imagine anyone limiting himself or herself with regard to memory.

The benchmarks I chose are the industry-standard Winstone 2001 and Content Creation 2001, both from Ziff-Davis. While there are some concerns circulating about these benchmarks, I consider them the best overall benchmarks currently available for general PC performance testing. The reason is that they use real applications, the scores include load time for a few of those applications, and they focus primarily on measuring the functions that cause users to wait. In other words, rather than timing periods where the system is waiting for user input, the Winstone benchmarks primarily time those activities that typically make the user wait for the system. More about this can be found in the Benchmark Insider articles Hitting the Hot Spots and Idle Moments in Winstone99.

The benchmarks were run five times each, with a disk defrag and reboot between runs. Ziff-Davis recommends reporting the high score as the final number, since it supposedly represents the best-case performance of the system. However, I feel this is somewhat misleading, and an average score is a better indicator of what most users will see. ZD also recommends throwing out the first run as a 'training run' because, for reasons unknown, it is usually the lowest score. My results show that the training run is sometimes the lowest, sometimes the highest and sometimes in the middle, so I decided that throwing out the high and low scores and averaging the rest provides the best 'normal' scenario. In the charts provided, I report only this average score; the Table of Results at the end, however, also includes the high and low scores omitted from the averages.
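To make the scoring method explicit, the following is a minimal Python sketch of that calculation: drop the single highest and single lowest of the five runs and average the remaining three. The run scores shown are hypothetical placeholders, not results from these tests.

    # Sketch of the reporting method described above: discard the high and
    # low scores, then average what remains. Example values are invented.

    def trimmed_average(runs):
        """Average the runs after dropping the single highest and lowest scores."""
        if len(runs) < 3:
            raise ValueError("need at least three runs to drop the high and low")
        kept = sorted(runs)[1:-1]  # remove the lowest and highest scores
        return sum(kept) / len(kept)

    if __name__ == "__main__":
        runs = [37.2, 38.1, 37.8, 37.9, 36.5]  # hypothetical Winstone 2001 scores
        print(f"reported score: {trimmed_average(runs):.1f}")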

For purposes of this research, I identified four different 'pieces' of a Windows system: the OS, the swap file, the applications and the application data. I assumed the OS would always be installed on the primary master hard disk. To generate base scores, I ran the benchmarks with a single drive and all files on that drive. I also tested with partitions in different places on the drive (inner tracks, outer tracks), but these runs revealed no difference whatsoever as long as all files were on the same drive. As a result, I have included only the scores for the single-drive setup where all of the files reside on a single partition, just as most systems ship by default.

I then installed a second drive as either a primary slave (i.e., a single IDE channel) or a secondary master (i.e., a two-IDE-channel setup) and ran the benchmarks with various placements of the applications (the benchmarks themselves), the swap file and the application data (the benchmark test data) across the drives. As it turned out, putting the applications on the second IDE drive produced almost no performance improvement, most likely because these benchmarks depend only slightly on application load time (only two of the six applications are loaded while the timer is active). Therefore, I focused on the scenarios where the swap file and/or data files were moved to a second IDE device.
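For readers who want to check which drive currently holds the swap file without clicking through System Properties, here is a small read-only Python sketch. It is an illustration added for this article rather than part of the test procedure: it reads the PagingFiles value under the standard NT-family Memory Management registry key, and the sample output shown in the comment is an assumption for a typical system. It requires Python with the standard winreg module, run on Windows.

    # Read-only illustration: list the configured paging files on an
    # NT-family Windows system by reading the PagingFiles registry value.

    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    def list_paging_files():
        """Return the paging-file entries ("path initialMB maximumMB" strings)."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            entries, _value_type = winreg.QueryValueEx(key, "PagingFiles")
            return entries  # REG_MULTI_SZ is returned as a list of strings

    if __name__ == "__main__":
        for entry in list_paging_files():
            print(entry)  # e.g. "D:\pagefile.sys 512 512" (hypothetical output)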

