Virtual Memory Optimization Guide!

Discussion in 'Reviews & Articles' started by Dashken, Oct 29, 2004.

  1. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Thanks, Bill. :mrgreen:

    Actually, no. Windows reads from the paging file 4KB at a time, irrespective of how large your cluster size is. Here's a quote from the page on Page Size Vs. Cluster Size:

    Hope that helps you some! :mrgreen:
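Adrian's point can be sketched with a few lines of arithmetic. This Python snippet is purely illustrative (the function and the 64KB example are not from the guide; 4KB is the standard x86 page size): it shows that the number of paging reads depends only on the page size, with the cluster size dropping out entirely.

```python
# Illustration: paging I/O happens in 4 KB pages, so the NTFS cluster
# size does not change how many reads a page-in takes.
# PAGE_SIZE is the standard x86 page size; the sizes below are made up.

PAGE_SIZE = 4 * 1024  # bytes

def paging_reads_for(bytes_needed, cluster_size):
    """Number of 4 KB page reads needed to fault in `bytes_needed` bytes.

    `cluster_size` is accepted only to show that it drops out of the math.
    """
    return -(-bytes_needed // PAGE_SIZE)  # ceiling division

# Faulting in 64 KB takes 16 page reads whether the volume
# uses 4 KB clusters or 64 KB clusters:
print(paging_reads_for(64 * 1024, cluster_size=4 * 1024))   # 16
print(paging_reads_for(64 * 1024, cluster_size=64 * 1024))  # 16
```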
     
  2. Wanderer

    Wanderer Newbie

    First-time writer in this forum.
    Long-time reader.

    Have to say . . luv the name 'rojak'. Miss eating it a lot too!

    Well Done on your guide. It's a lot better than others.
    You leave very little to be questioned.

    I hope you can help . .

    I am going to run XP Pro. I just bought an additional hard drive and want to set my PC up optimally, mostly for video editing & stuff.

    After reading your comprehensive guide, here's what I am going to do .. (non-RAID)
    ----------------------------------------
    IDE 1
    Master: Hard Drive 1 - 120GB, 8MB cache
    Slave: CD-Burner
    IDE 2
    Master: Hard Drive 2 - 80GB, 2MB cache
    Slave: DVD-Burner
    ----------------------------------------
    HDD1: All to Drive C:
    HDD2: (Min&Max) 1.5GB partitioned to Drive D:
    and the rest partitioned to Drive E:
    ---------------------------------------
    The 1.5GB (partitioned as Drive D:) will be solely for the paging file.
    There will be no paging file on Drive C: . . . as per your guide.
    Video rendering/encoding will mainly be done on Drive C:.
    DVD-Burner is on IDE2 Slave . . . also as per your guide.

    Am I doing it correctly? Or rather, did I understand your guide correctly?

    Please help ! Thanks in advance.

    Damo
     
  3. Adrian Wong

    Adrian Wong Da Boss Staff Member

    LOL! I hope you will become a long time forumer too! :mrgreen:

    Well, as far as I can see, it's quite a good solution, if you can spare 1.5 GB of space just for the paging file.

    Alternatively, you can choose to put a 500MB paging file in drive C and 1GB in drive D. This allows Windows to dynamically decide which is faster for its needs at any particular moment.
     
  4. Wanderer

    Wanderer Newbie

    Hi again,

    Recently, a friend found he has no need for his Promise Ultra100 TX2 PCI IDE controller card.

    Would there be any issue with my DVD burner on this PCI IDE controller card on its own? (That way, all devices are 'Masters'.)

    I mean, would my burner still perform as fast and flawlessly on it?

    (*By the way, if my DVD burner is indeed a 'Slave', would it suffer any issues functioning as one on the motherboard's IDE2?)

    Thanks
     
  5. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Err.. Actually, this is off-topic. :mrgreen:

    What does your DVD burner have to do with virtual memory? :mrgreen:

    But going off topic for a while, yes, your burner will have no problem on that IDE controller.

    If it's sharing an IDE channel with another drive, well... Only one device can be active at any one time on the same IDE channel, so if your other drive is active, then your DVD burner will be affected.

    Hope that helps you some.
     
  6. Wanderer

    Wanderer Newbie

    Hi,

    Well, it sounded related to me . . but I apologise if it is off-topic.
    You see, I couldn't decide if I should go this way,

    Scene 1 ...........................................................
    IDE1
    Master: HDD1

    IDE2
    Master: HDD2
    Slave : DVD Burner

    Conclusion: Possible compromise to the DVD Burner (being a Slave) but ideal for virtual memory on HDD2.
    ---------------------------------------------------
    or this way,

    Scene 2 ...........................................................
    IDE1
    Master: HDD1

    IDE2
    Master: DVD Burner
    Slave : HDD2

    Conclusion: Possible compromise to virtual memory on HDD2 but ideal for DVD Burner (or so I have read).
    -------------------------------------------------------

    To get the ideal installation, I thought okay, right, just hook
    HDD1 to IDE1 as Master, hook HDD2 to IDE2 as Master, and
    get a PCI IDE controller card . . . and hook the DVD burner up off it . .

    Then I thought of another potential problem . . . would it be treated as a Master or a Slave? Also, I have read that some DVD burners do not work at all off a PCI IDE controller card.

    Sorry Adrian . . as you can see . . I am doing this all in the name of virtual memory optimisation . . for my DV to DVD adventures. How would you place them, in my situation ?

    Damo W.
     
  7. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Err.. To be honest, it doesn't matter if you put any drive as Slave or Master.

    The designation Master does not mean the drive will be given a higher priority or anything special.

    So, both setups are essentially the same. Hope that helps you some. :mrgreen:
     
  8. pigasus

    pigasus Newbie

    Moving the paging file to outer tracks

    I have found the guide extremely helpful. I was particularly interested in being able to move my permanent paging file (which is on my C: drive) to the outer tracks. To this end I did a boot-time defrag of the paging file using Diskeeper Pro. After rebooting, I checked to see where the paging file was, and it was still in the middle of my C: drive. I contacted the helpful people at Executive Software and asked them if I was doing something wrong. Here is their reply:

    "Your paging file should be in one contiguous file, if it wasn't that's when Diskeeper would Defragment it and put it together, but Diskeeper cannot change the location, since this is controlled by Windows."

    Any thoughts on this? I'd really like to get the paging file as close to the outer tracks as possible.

    Thanks,
    Sally Schreiber
     
  9. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Well, it used to be possible with older versions of Norton Utilities, etc. But it appears that newer defragmenters no longer allow this. :think:

    You can actually still do it but it's not an easy procedure.

    You will need to format the partition, copy several large files into the drive to take up the space you want to use for the paging file, let's say 1GB. Then install Windows into that partition.

    After installation, set it to use no paging file. Reboot. Delete those temporary files you used to reserve the outer tracks. Set Windows to create a paging file of the size you want and reboot.

    That should do it. But as you can see, it's a lot of hassle. Like I posted before, the key is achieving an optimal balance between effort and absolute performance.

    As long as the semi-permanent paging file is created just after installing Windows, it should be practically as fast as a paging file at the absolute edge of the platter. This is especially true for the higher-density platters used in hard disks these days.
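For anyone wondering why the outer tracks are faster in the first place: the platter spins at a constant angular rate, and with zoned bit recording the outer tracks pack in more sectors, so more data passes under the head per revolution. A rough back-of-the-envelope sketch, using assumed round numbers rather than figures from any real drive:

```python
# Back-of-the-envelope illustration of why outer tracks are faster:
# constant rotation rate, but outer zones hold more sectors per track.
# RPM and sector counts are assumed round numbers, not measurements.

RPM = 7200
REVS_PER_SEC = RPM / 60  # 120 revolutions per second

def zone_throughput_mb_s(sectors_per_track, sector_bytes=512):
    """Sequential transfer rate of one recording zone, in MB/s."""
    return sectors_per_track * sector_bytes * REVS_PER_SEC / 1e6

outer = zone_throughput_mb_s(1000)  # assumed outer-zone sector count
inner = zone_throughput_mb_s(500)   # assumed inner-zone sector count
print(outer, inner)  # the outer zone transfers roughly twice as fast
```

With these assumed numbers the outer zone moves about twice the data per second, which is why a paging file near the start of the disk reads and writes faster.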
     
  10. pigasus

    pigasus Newbie

    Well then, I guess I'll just let the chips fall where they may. :)

    Sally
     
  11. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Hehe.. That's what I would do too. :mrgreen:
     
  12. jedisb

    jedisb Newbie

    The only thing I would add to this great article is that the paging file can be made contiguous without having to shell out for Diskeeper.

    PageDefrag from Sysinternals is a freeware utility that will defrag the paging file in Win NT/2000/XP at boot time. It also defrags log files and the registry hives.

     
  13. quux

    quux Newbie

    RAID 1 not slower

    On page 36, you say

    "A paging file on a RAID 1 array may benefit from a faster access time. But because the paging file has to be mirrored on a second hard disk, this greatly degrades the paging file's write performance."

    I disagree. In many (if not most) cases, RAID 1 write performance will be the same as, or only marginally slower than, a single disk because:

    1. Data is written to the array controller's cache memory, then spooled out to the mechanical drives. (In short writes this makes the array seem almost as fast as RAM. In longer writes, things slow to mechanical speed after the cache is full.)
    2. Mirroring to a second disk doesn't take more time - writes to both disks happen simultaneously. So it takes the same time as writing to a single disk (if both disks have the same speed characteristics).

    By the way, when there is a slowdown in RAID 1 performance, it can probably be attributed to error-checking, which is about the only extra work the RAID controller needs to do. Other than that, the controller is simply sending the exact same command stream to two disks simultaneously.
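The argument above reduces to a one-line model: since both writes are issued simultaneously, a mirrored write costs max(a, b) rather than a + b. A toy Python sketch under that assumption (the millisecond service times are invented, and real controllers add command overhead this model ignores):

```python
# Toy model: a RAID 1 controller issues the same write to both disks at
# once, so a mirrored write finishes when the slower disk finishes --
# its cost is max(a, b), not a + b. Service times below are assumed.

def single_write_ms(disk_ms):
    """Cost of a write to one disk."""
    return disk_ms

def raid1_write_ms(disk_a_ms, disk_b_ms):
    """Cost of a mirrored write: bounded by the slower of the two disks."""
    return max(disk_a_ms, disk_b_ms)

# Two identical disks: mirroring costs nothing extra in this model.
print(raid1_write_ms(8.5, 8.5))   # 8.5, same as a single disk
# A mismatched pair: the array is only as fast as the slower member.
print(raid1_write_ms(8.5, 12.0))  # 12.0
```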

    It's hard to find actual RAID performance benchmarks (as opposed to theories based on how people think RAID works), but I did find these graphs, produced during Tweakers.net's RAID card shootout. There are multiple lines representing multiple RAID controllers. Keep in mind the general trend, which is straight or nearly straight lines between single-drive and a two-drive RAID1 array. All of the below show write performance measured in different ways:

    1. pg 19 - Sequential write transfer rate
    2. pg 21 - StorageMark2000. This is a combination read/write test in various usage scenarios. Not really illustrative of my point, but thrown in for fun. Basically RAID1 is slightly faster than a single disk overall (probably due to a combination of interleaved READs and whatever cache RAM is on the RAID card)
    3. pg 23 - backup server. In this test the array is being written to by 7x clients
    4. legend (which drive is which colored line)

    There's more but I think you see the trend. There's really not much degradation in write performance between a single drive and a RAID1 mirror set.

    But I don't hold it against you! There's a lot of old data and out-of-date theories about RAID out there on the net, and even in textbooks. I don't think many people are updating the base knowledge from the old days, to account for the more-sophisticated RAID controllers we have these days.

    I really wish someone had the time and money to do more RAID benchmarking like this recent Tweakers article (especially with high-end server RAID controllers).
     
  14. KilleenWizard

    KilleenWizard Newbie

    Some comments

    I've read the article, which I thought was very interesting, and I've updated my own memory-related articles accordingly: http://www.local.nu/HelpDesk/index.php?title=Memory and http://www.local.nu/HelpDesk/index.php?title=64-bit_CPUs. I have also been reading the forum messages. Here are my comments on some of the forum messages:

    I saw someone mention a 2G limit. My understanding is that each application gets (with 32-bit CPUs) 2G of application data space and 2G of system data space. The MS article implies that it's 2G/2G for the entire system, which doesn't sound right, considering that the system virtual memory limit for 32-bit CPUs is 64G, and the physical memory limit for Xeon CPUs is 16G.

    Under XP, the max size of any one paging file is 4G. I'm not sure if that's mentioned in the article.
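The 2G/2G split and the 4G ceiling Scott mentions both follow directly from pointer width; a quick, purely illustrative check of the arithmetic in Python:

```python
# Quick check of the 32-bit arithmetic above: a 32-bit pointer
# addresses 2**32 bytes = 4 GB, which 32-bit Windows normally splits
# into 2 GB of user space and 2 GB of system space per process; 4 GB
# is also the XP per-paging-file ceiling mentioned above.

GB = 2 ** 30

address_space = 2 ** 32        # bytes addressable by one 32-bit pointer
user_space = address_space // 2

print(address_space // GB)     # 4 (GB total per process)
print(user_space // GB)        # 2 (GB of application space)
```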

    I agree that programs that claim to recover memory are a no-no; if you find that they seem to help, then chances are that some program is misbehaving (leaking memory). Fix the problem, not the symptoms. Check that all programs are current versions.

    On the problem of the split paging file on an empty drive: this is probably due to the $Mft file, if it's an NTFS partition. This can be seen if you use Diskeeper. Try optimizing the drive and the size of $Mft (Diskeeper 9 allows resizing it) before you create the paging file on the drive. I ran into the same problem myself, but moved on to other things before I tried this suggestion, so I don't know if it'll work or not.

    There seems to be a problem with definitions. Windows uses virtual memory. All applications run in virtual memory, and my understanding is that nearly all system-level routines run in virtual memory also. The underlying low-level support drivers implement virtual memory from a combination of physical memory and paging files. Thus, any (theoretical) discussion of "spillage" should be referring to "spilling" into the paging file.

    On memory usage, I would expect to set it for "Programs" for systems that are compute-bound, and "System cache" for programs that are disk-bound (i.e., a file server). Since the CPU is faster than the hard drive, if Task Manager shows that the CPU spends a lot of time near 100%, then it should be considered compute-bound, no matter how much the hard drive is running.

    A 38MB partition (I originally thought it said 38GB) is small, only a single cylinder on modern hard drives, and probably not worth the effort of messing with. If nothing else, it helps keep the heads away from the critical Master Boot Record (MBR), although it wouldn't help much with a head crash. It might actually be a good idea to have this, especially if you have multiple partitions. The BOOT.INI file is probably also on this partition, which is another reason to keep it there, to help prevent accidental deletion.

    --Scott.
     
  15. KilleenWizard

    KilleenWizard Newbie

    There are a LOT of things that could stand to be reviewed properly. Organizations like Consumers Union use small sample sizes, so that the results would get sneered at by a statistician. There's tons of biased reviews out there, such as much of the MS vs Linux stuff. There's reviews by people who don't know how to configure things properly -- although that should be a loud hint that things are not documented well enough, or too hard to configure. And so forth. Of course, a lot of things aren't well-documented simply because no one has sat down and done a proper study of the situation, to find out what actually works best.

    --Scott.
     
  16. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Hey, thanks for the tip, jedisb! I will update the guide when I can. :thumb:
     
  17. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Hello quux,

    Actually, I must say that I didn't clarify that properly. I wrote that in the context of the cheap IDE RAID controllers that come with many motherboards.

    These IDE RAID controllers do not have any cache memory. Also, some users may not know enough to keep the drives on separate IDE channels instead of on the same channel. In such cases, mirroring would definitely take a much longer time.

    If you take a look here (http://www.rojakpot.com/showarticle.aspx?artno=141&pgno=9), that is what happened...

    [benchmark graph from the linked article]
     
  18. quux

    quux Newbie

    Hi Adrian,

    Thanks for the reply. I had one of those Promise FastTrak RAID controllers once upon a time, and it was just a terrible, terrible device. Mine was running 4x 75GB drives in RAID 5, and every so often it would just lose its configuration - and all data! Luckily I kept good backups. I think this happened five or six times over the course of a year and a half. When I finally got a better file server setup for home, I smashed that old Promise card to smithereens!

    I never did any performance testing of the device, but to see such horrible performance in a mirrored situation suggests to me that the controller (or configuration) was horribly flawed in some way. Mirroring is the simplest (calculation-wise) of all the RAID operations; it shouldn't take such a huge performance hit. Come to think of it, RAID0 is not getting any better performance than a single disk either. Also, the graph isn't marked - are those IOPS, or MB/sec, or what?

    Just a couple of years ago, IDE RAID solutions sucked pretty badly. I think things are better now, as the results I linked show (those were all IDE too!). IDE RAID may never be as good as the professional fiber & SCSI gear I get to use in the datacenter at work, but that's why we pay the big bucks in there!
     
  19. Adrian Wong

    Adrian Wong Da Boss Staff Member

    Hehe.. Professional-level RAID is REALLY expensive. :hand:

    Oops. I think we missed the label.. It's supposed to be MB/s.
     
  20. skywalka

    skywalka Newbie

    Hi guys. :wave:

    My first post! The pagefile has always baffled me & this great guide has answered many of my ponderings. But I signed up hoping someone may be able to answer some more questions for me.

    I remember I once was using a program (maybe Adobe Photoshop?) & I received an error when starting it because I was not using a pagefile. Is that what this statement is referring to? These days I am not using any programs that tell me I need to have a pagefile. So do some (or all) programs have pagefile requirements that are not openly divulged?

    Let's assume I had a million gigs of RAM & I was using Windows XP. Apart from the memory dump requirements, would I still need a paging file?

    Am I correct in saying Windows will not do a memory dump if there is no pagefile?

    I moved my paging file to another physical hard drive after reading your guide. Windows recommends 1534 MB for my system. This is on my D drive.

    Using Diskeeper I got the following notification:
    There is no paging file on your boot volume. Microsoft highly recommends that you have at least a small paging file on your boot volume. Why is that?

    I tried keeping my D drive's setting & changing my C drive setting to be managed by the system. After rebooting, my page file was 1534 MB ON EACH hard drive! So I then tried setting my C drive to the 2 MB minimum, but noticed that the total is displayed as 1534 MB, not 1536 MB. The C drive's 2 MB is not included.

    Thanks for looking.
     
