September 22, 2006
Windows Vista has a radically different approach to memory management. Check out the "Physical Memory, Free" column in my Task Manager:
At the time this screenshot was taken, this machine had a few instances of IE7 running, plus one remote desktop. I'm hardly doing anything at all, yet I only have 6 megabytes of free physical memory.
Now compare with this screenshot of Windows XP's Task Manager under similar low-load conditions:
Under "Physical Memory, Available" I have approximately 1.5 gigabytes of free physical memory, as you'd expect.
So what's going on here? Why is Vista using so much memory when I'm doing so very little?
To answer that question, you have to consider what your computer's physical memory (RAM) is for. Just as a hypothetical, let's say you wanted to create a new text file:
- You double-click on the notepad icon.
- The Notepad executable loads from disk into memory.
- Notepad executes.
- Notepad allocates free memory to store your text document.
So Notepad clearly needs a little memory for itself: enough to execute, and to store the contents of the text document it's displaying. But that's maybe a couple megabytes, at most. If even that. What about the other 2,046 megabytes of system memory?
You have to stop thinking of system memory as a resource and start thinking of it as a cache. Just like the level 1 and level 2 cache on your CPU, system memory is yet another type of high-speed cache that sits between your computer and the disk drive.
And the most important rule of cache design is that empty cache memory is wasted cache memory. Empty cache isn't doing you any good. It's expensive, high-speed memory sucking down power for zero benefit. The primary mission in the life of every cache is to populate itself as quickly as possible with the data that's most likely to be needed-- and to consistently deliver a high "hit rate" of needed data retrieved from the cache. Otherwise you're going straight to the hard drive, mister, and if you have to ask how much going to the hard drive will cost you in performance, you can't afford it.
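A rough sketch of why the hit rate matters so much: expected access time is just a weighted average of the hit and miss costs, and even a few percent of misses can dominate it. The latencies below are illustrative round numbers I've assumed, not measurements:

```python
# Back-of-the-envelope average access time for a two-level hierarchy:
# RAM backed by a hard drive. Latencies are illustrative, not measured.
RAM_NS = 30            # ~30 ns to hit data already cached in RAM
DISK_NS = 10_000_000   # ~10 ms average seek if we miss and go to disk

def avg_access_ns(hit_rate: float) -> float:
    """Expected access time given the fraction of reads served from RAM."""
    return hit_rate * RAM_NS + (1.0 - hit_rate) * DISK_NS

for rate in (0.90, 0.99, 0.999):
    print(f"hit rate {rate:.1%}: {avg_access_ns(rate):,.0f} ns")
```

Even at a 99% hit rate, the average is utterly dominated by the 1% of misses, which is why a cache that aggressively keeps itself full pays off.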
Diomidis Spinellis published an excellent breakdown of the cache performance ratios in a typical PC circa January 2006:
| Component | Nominal size | Worst-case latency | Sustained throughput (MB/s) | $1 buys | Productivity, worst case (bytes read / s / $) | Productivity, best case (bytes read / s / $) |
|---|---|---|---|---|---|---|
| L1 D cache | 64 KB | 1.4 ns | 19022 | 10.7 KB | 7.91·10^12 | 2.19·10^14 |
| L2 cache | 512 KB | 9.7 ns | 5519 | 12.8 KB | 1.35·10^12 | 7.61·10^13 |
| DDR RAM | 256 MB | 28.5 ns | 2541 | 9.48 MB | 3.48·10^14 | 2.65·10^16 |
| Hard drive | 250 GB | 25.6 ms | 67 | 2.91 GB | 1.22·10^11 | 2.17·10^17 |
In summary, here's how much faster each cache memory type in your computer is than the hard drive:
| System memory | 37x faster |
| CPU Level 2 cache | 82x faster |
| CPU Level 1 cache | 283x faster |
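Those multipliers fall straight out of the sustained-throughput column of the table: each cache level's throughput divided by the hard drive's 67 MB/s. A quick sketch with the table's numbers hard-coded reproduces them:

```python
# "x faster" = sustained throughput of each level / hard drive throughput,
# using the numbers from the Spinellis table above.
throughput_mb_s = {
    "Hard drive": 67,
    "DDR RAM": 2541,
    "L2 cache": 5519,
    "L1 D cache": 19022,
}

disk = throughput_mb_s["Hard drive"]
for component in ("DDR RAM", "L2 cache", "L1 D cache"):
    ratio = throughput_mb_s[component] // disk
    print(f"{component}: {ratio}x faster than disk")
```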
Those figures explain why I only have 6 megabytes of "free" memory in Windows Vista. Vista is trying its darndest to pre-emptively populate every byte of system memory with what it thinks I might need next. It's running a low-priority background task that harvests previously accessed data from the disk and plops it into unused system memory. They even have a fancy marketing name for it-- SuperFetch:
In previous versions of Windows, system responsiveness could be uneven. You may have experienced sluggish behavior after booting your machine, after performing a fast user switch, or even after lunch. Although too many carbohydrates might slow you down after lunch, your computer slows down for different reasons. When you're not actively using your computer, background tasks -- including automatic backup and antivirus software scans -- take this opportunity to run when they will least disturb you. These background tasks can take space in system memory that your applications were using. After you start to use your PC again, it can take some time to reload your data into memory, slowing down performance.
SuperFetch understands which applications you use most, and preloads these applications into memory, so your system is more responsive. SuperFetch uses an intelligent prioritization scheme that understands which applications you use most often, and can even differentiate which applications you are likely to use at different times (for example, on the weekend versus during the week), so that your computer is ready to do what you want it to do. Windows Vista can also prioritize your applications over background tasks, so that when you return to your machine after leaving it idle, it's still responsive.
This isn't a new concept, of course. But Vista treats system memory like a cache much more aggressively and effectively than any other version of Windows. As alluded to in the above lunch anecdote-- and as you can see from the Task Manager screenshot above-- Windows XP has no qualms whatsoever about leaving upwards of a gigabyte of system memory empty. From a caching perspective, this is unfathomable. Vista tries its damndest to fill that empty system memory cache as soon as it can.
Although I am a total believer in the system-memory-as-cache religion, SuperFetch can still have some undesirable side effects. I first noticed that something was up when I fired up Battlefield 2 under Vista and joined a multiplayer game. Battlefield 2 is something of a memory hog; the game regularly uses a gigabyte of memory on large 64-player multiplayer maps. During the first few minutes of gameplay, I noticed that the system was a little sluggish, and the drive was running constantly. This was very unusual and totally unlike the behavior under Windows XP. Once the map is loaded and you join the game, the entire game is in memory. What could possibly be loading from disk at that point? Well, SuperFetch saw a ton of memory freed to make room for the game, and dutifully went about filling the leftover free memory on a low-priority background disk thread. Normally, this would be no big deal, but even a low-priority background disk thread is pretty noticeable when you're playing a twitch shooter online with 63 other people at a resolution of 1600x1200.
I'm perfectly fine letting SuperFetch have its way with my system memory. The question shouldn't be "Why does Vista use all my memory?", but "Why the heck did previous versions of Windows use my memory so ineffectively?" I don't know. Maybe the rules were different before 2 gigabytes was a mainstream memory configuration.
The less free memory I have, the better; every byte of memory should be actively working on my behalf at all times. However, I do wish there was a way to tell SuperFetch to ixnay on the oadinglay when I'm gaming.
Posted by Jeff Atwood
I'm all for more performance. (Who wouldn't be?) But super-aggressive brainiac schemes like this don't appeal to me, at least, not in theory.
Responsiveness is important in a UI, sure. But another thing that's important is *predictability*, which makes a system more learn-able, something a user can become familiar with and adapt to.
If Vista is guessing what apps I'll want to load at a particular time of day or on a particular day of the week, its responsiveness will change when my routine is broken.
I think that's bad. A major app that always takes a few seconds to start is OK. It would be hugely irritating to have major apps *usually* start up almost instantly but *sometimes* take many seconds.
With hybrid flash-cached hard disks and pure-flash hard disks supposedly on the horizon, the benefits of such a complicated scheme may be short-lived anyway.
This performance dip will go away as people programming games learn to deal with Vista. This is only a problem for the first generation of things; it will go away, probably quickly. If Linux is suddenly the best option for running XP and older games, M$ is going to be in for a big fat surprise.
All it would really take for them to lose their home-market dominance in a few years would be WoW / Starcraft / Warcraft III not running as smoothly on Vista as it did on Linux.
So the question becomes: why do they bother having the graph if it's going to read 100% all of the time?
The second question becomes: how do you know how much memory is really being used, and how much is a cache that the OS will throw away if needed?
Linux systems have a similar philosophy - any file read goes into the cache until memory is full (Linux lacks the SuperFetch approach, though). But at least you can easily tell what's really used and what's a cache.
Caching... hm, well...
Maybe Vista is better at this, but I'm currently using XP with 2GB, and for me XP sucks most of the memory into its System Cache. Especially when doing lots of file activity it uses every bit of memory, and after a while the computer just gets sluggish. It looks like it can't handle the size of the system cache and doesn't throw things out. So I reboot the system, and it acts normally again.
I really hope this will work better in Vista.
As long as there's some sort of manual control over what to prefetch (similar to how the system tray will guess your commonly used tasks, but still give you manual control over what shows), then it could be kinda neat.
Fast startup times since it asynchronously loads after boot, combined with fast app launching times for Mozilla and WoW... that's about all I'd want anyway.
Oh come on you're clearly covering up the fact that memory dealers are in bed with Microsoft to sell unnecessary RAM to poor old grandma.
But seriously, I've noticed this happening more with OS X with each release as well (though others may call it bloat, I swear it's more aggressively caching things in memory).
That said, I've often wondered; as there is a miss penalty for caches, is preloading the main memory in a cache-like manner only going to increase thrashing for many end users who may still somehow be surviving on less than a gig or two of ram under xp?
e.g. if you have 4gb of ram you can pretty safely start throwing stuff in there and still have room to spare. With a more paltry (normal?) 512 it seems like you'd have to have a pretty smart and judicious algorithm for managing what goes in there. Though I suppose we do have decades of cache population/eviction theories to rely on. It's just the penalty of swapping to disk is so tangible, and even audible.
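Those decades of population/eviction theory mostly boil down to policies like least-recently-used. A minimal LRU sketch in Python, purely illustrative and nothing Windows-specific, shows the basic mechanics of a bounded cache under pressure:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: the classic eviction policy
    that a page cache approximates when memory pressure forces evictions."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()   # key -> cached data, oldest first

    def get(self, key):
        if key not in self.pages:
            return None              # miss: caller must go to disk
        self.pages.move_to_end(key)  # mark as most recently used
        return self.pages[key]

    def put(self, key, value):
        if key in self.pages:
            self.pages.move_to_end(key)
        self.pages[key] = value
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")            # touch "a", so "b" becomes the eviction victim
cache.put("c", 3)         # capacity exceeded: "b" is evicted
print(cache.get("b"))     # prints: None (would have to be refetched)
```

With only 512MB, the cache capacity is small relative to the working set, so evictions (and the audible disk penalty that follows) happen constantly; with 4GB there's room to spare.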
I think any perceived dip in performance, per Jeff's anecdotal game performance story, is unlikely to have anything to do with SuperFetch. There is no way to know that SuperFetch itself is causing this difference - it could be any number of complex side effects that cause this to be the observed behavior. After all, there are thousands of differences in Vista, any of which could be impacting the behavior directly or indirectly.
The issue SuperFetch looks to address is this:
In a demand-paged virtual memory system, things get flushed out to disk automatically based on the need to make more physical memory available. That is, less recently used pages get pushed to the page file to make more room available. A user is affected by this when they go to lunch and something low-priority (or periodic) runs, causing all the desktop apps and data to page to disk. However, when the operation finishes, nothing causes anything to get swapped back INTO physical memory proactively. Therefore you have a bunch of free physical memory after returning from lunch, because everything is now idle, including the background processes that ran.... It takes the user returning from lunch and trying to click on the MS-Word window or something to swap stuff back in... again, on demand.
SuperFetch looks to address this proactively for the user by attempting to bring things back into physical memory automatically, via some clever usage statistics that predict what you are going to try to do when you return to using the computer.
This is strictly on an opportunistic basis, as a low-priority operation itself, and should never affect actual normal-priority behaviors. Anything proactively restored to physical memory can easily be discarded, since it is already backed by the pagefile... no differently than the case where you were already using MS-Word and simply switched to a new application.
The only cases where SuperFetch should be able to affect anything negatively are the following pathological ones:
-It predicts wrong at the exact instant you were going to do something else and it starts to chew up some physical memory you all of a sudden need for something else. This impact should be trivial as SuperFetch's actions will cease as soon as you do something else more important.
-The added size and complexity of the operating system to implement this feature and to track statistics and such. Again, should also be fairly trivial in the grand scheme of things.
on my RC1 build, it initially appears that you can start/stop superfetch just like any other service.
As far as having manual control over what's cached and what's not, I personally don't care. The L2 and L1 caches have served us pretty well through the years without us nerds micro-managing what is/isn't cached. I'm not approaching SuperFetch any differently.
So it is a service that you can go and shut down?
Also, if you look under Processes, does it show how much memory it is taking up?
did game play speed up after the first few frags? How about trying to window out of the game hitting IE7 and then back to the game, are there bad performance hits for that?
Caches buy you less and less the bigger they are. I just can't imagine what it could even populate 2GB of RAM with. I don't think I even have 2GB of *applications* on my computer. Maybe if it cached data files too, but even then I'd be hard pressed to come up with 2GB of data I use on a regular basis.
This would be a tool I'd definitely want to play around with before enabling. Loss of game performance to keep grandma's cookie recipe in memory at all times doesn't seem like a very good trade-off to me.
Jeff, you need to read "Windows Internals, Fourth Edition". Task Manager in XP is a big fat liar. Windows Vista's Task Manager is not comparable.
The figure quoted as 'available' in XP is the sum of zero, free, standby and modified lists. The 'system cache' figure is the sum of the system cache working set (amount of physical memory used by the file system cache plus the physical memory used by pageable code and data in drivers, plus the kernel's paged pool) and the standby and modified lists. Your screenshot shows the double-counting: Available + System Cache is 1.5 times physical memory!
When a page is taken out of (trimmed from) a working set, it isn't immediately reused. Instead it is put on either the modified (if modified since last written to the page file or memory-mapped file it belongs to) or standby list. Links are kept from the process to the page. If the process then references the page again before it's paged out, it causes a page fault to occur but Windows can satisfy it simply by fixing up the page table entry - this is termed a 'soft fault'.
So how do pages get onto the other lists? Modified pages become standby pages by being written out to disk - a couple of background threads do this, mapped file pages are written after a maximum of five minutes on the list, and all pages start to be written after the modified list reaches 800 pages (about 3MB). Standby pages become free pages as the balance set manager (a thread that wakes up every second) runs, if the free list is too small. Finally, free pages become zero pages as the system zero page thread (runs at priority 0 and therefore only runs when at least one processor is idle) writes zeros to free pages.
When allocating physical memory Windows prefers zero pages for private user-mode page allocations, and free pages for kernel mode or mapped-file page allocations. (The use of zero pages is a security requirement to prevent processes being able to read other processes' data). If the zero page list is exhausted, Windows will use the free list and zero the page on demand; if the free list is exhausted for a free page allocation, it will then use the zero page list. If both lists are exhausted it will then try the standby list - it then needs to unlink the page from its original process at this point - and in the pathological case that that is empty, it has to take a modified page, write it out, unlink it, zero it, and then give it to a process.
This is why these lists exist - Windows has done work when idle to ensure that there is physical memory available on demand.
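The allocation preference order described above can be sketched as a toy model. This is my own deliberate simplification for illustration, not the real memory manager's logic, which tracks far more state per page:

```python
# Simplified model of the page-list preference order for a private
# user-mode allocation: zero pages first, then free pages (zeroed on
# demand), then standby, then (worst case) modified pages that must
# be written to disk first. Illustrative only.
def allocate_private_page(lists):
    """lists: dict with 'zero', 'free', 'standby', 'modified' page counts."""
    if lists["zero"] > 0:
        lists["zero"] -= 1
        return "zero page, ready to use"
    if lists["free"] > 0:
        lists["free"] -= 1
        return "free page, zeroed on demand"
    if lists["standby"] > 0:
        lists["standby"] -= 1          # unlink from its original process
        return "standby page, unlinked and zeroed"
    if lists["modified"] > 0:
        lists["modified"] -= 1         # must be written out to disk first
        return "modified page, written out, unlinked and zeroed"
    raise MemoryError("no physical pages available")

lists = {"zero": 1, "free": 1, "standby": 1, "modified": 1}
for _ in range(4):
    print(allocate_private_page(lists))
```

Each successive fallback costs more work at allocation time, which is exactly why Windows spends idle time moving pages down toward the zero list.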
Speaking of on-demand: when VirtualAlloc is called to allocate memory, no physical memory is actually allocated. The 'commit charge' is simply incremented. It reserves space in the page file to make sure it can't overcommit on memory, but this reservation is a logical one - nothing is written to the page file. Only when a page is touched, and a page fault ensues, does a physical memory page get allocated (and the data loaded from disk if touching a memory-mapped file).
This commit charge is the value actually shown on the 'PF Usage' meter in XP's Task Manager, and in 'Commit Charge' total. 'Limit' is the sum of all page files plus physical memory minus whatever Windows can't page out.
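That commit-charge accounting can be modeled with two counters: one incremented at commit time, one incremented only when a page is first touched. A toy sketch (the class and names here are illustrative, not Windows APIs):

```python
# Toy model of commit charge vs. physical memory: committing only
# reserves page-file space; a physical page is assigned the first
# time a page is actually touched (the "demand" in demand paging).
PAGE = 4096

class ToyVM:
    def __init__(self):
        self.commit_charge = 0     # bytes reserved against the page file
        self.physical_used = 0     # bytes actually backed by RAM
        self.touched = set()

    def virtual_alloc(self, nbytes):
        self.commit_charge += nbytes    # no physical memory allocated yet

    def touch(self, page_index):
        if page_index not in self.touched:
            self.touched.add(page_index)
            self.physical_used += PAGE  # page fault assigns a real page

vm = ToyVM()
vm.virtual_alloc(16 * PAGE)   # commit 16 pages
vm.touch(0); vm.touch(1)      # fault in just two of them
print(vm.commit_charge, vm.physical_used)  # prints: 65536 8192
```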
There's no limit to how large the standby list can grow. The file system cache has a limit on its virtual address size of approx 300MB IIRC, but since this is implemented as a working set, pages discarded from the cache go on the standby or modified list! However, if files are opened as sequential scan, they go on the front of those lists, not the end, so are likely to be reused more quickly.
While on the subject of Task Manager, the Process tab's 'Mem Usage' column is actually the process's working set. However, this includes a lot of shared pages from system DLLs, so the sum of 'Mem Usage' normally is significantly larger than physical memory. It's impossible to tell if your program has a memory leak from this column. Switch on the 'VM Size' column (actually process private bytes) to monitor this, although this won't necessarily drop when you free from a heap.
So XP normally has a lot of moderately-recently-referenced data sitting in memory (but not recently enough to keep it in the working set) - well, once the system has been running for a while. The difference is that Vista is actively preloading data that it thinks you might use soon.
A few questions.
First, even though the Task Manager text says only 6MB is free, the graph shows 905 MB in use versus 334 MB back in XP. Is the OS truly using that much more memory?
Second, why is your page file 4gb?
I wonder what effect Vista will have on the life expectancy of RAM...
hurts realtime game performance badly
I wouldn't say "badly". It's just annoying for the first minute or two of gameplay. The game is plenty playable.
I should also note that I'm preternaturally sensitive to hitches in framerate.
2GB a mainstream memory configuration? Maybe among us techies, but not for 95% of the PC-buying population.
I've no evidence to back that up other than what I see non-developer friends buying, and they generally get machines with 512MB in.
I also share some of Aaron G's concerns. Memory intensive applications such as video editing and image editing are optimised around the OS's current memory management strategies. Until they're updated there may well be some performance drops.
Time will tell I guess.
This is very interesting, but can you tell me why my system doesn't FEEL any faster?
I just can't imagine what it could even populate 2GB of RAM with.
Just wait for Office 2007, or Visual Studio 2007
I wonder if this has any effect on power consumption (higher RAM usage), especially on notebooks?
So, what does it save to the HD when it goes into hibernate mode?
RAM _usage_ isn't important, except in certain server chipset (and, apparently, the upcoming DDR3 Sonoma chipset), since all the memory is always on. But if it's reading and writing to it often, that will definitely increase power consumption. Just depends on how stable it is, I guess.
I'd be happy to have XP preload .Net libraries at some point after startup. The only time I ever have a slowdown, even on a cheapo 5400 laptop drive, is loading the .Net runtime up for an app. Once it's up, no slowdowns anywhere.
The two big similarities in XP are the indexing service and the realtime defragmenter. The indexing service was a miserable failure, since it didn't integrate into windows search without bizarre syntax, and caused a significant performance hit at times. Defrag is much better; not amazing but it runs much less and does a reasonable job of keeping performance to certain levels.
It remains to be seen which one the huge Vista cache will take after.
Non-power users open only one Internet Explorer window, start reading their mail, and infect their computer with viruses. That's why computers have only 256 MB of RAM; you can open more programs, but that's not normal use.
But future users will open more programs and leave others idle. Linux users are a good example of power users: some open Firefox on Monday and close it on Sunday, keeping the application open for seven days. Once you have more memory, you realize you don't really need to close apps.
With 768MB or more, people will start using more than 5 IE windows, and every window will have more than 3 tabs. So RAM usage will grow a lot, and more Windows users will work like Linux power users.
disks will wear out faster, CMOS DRAM will use more power (proportional to state changes), fans will run more, busses will be busier (sic) ... basically, you're using more of your hardware more of the time. perhaps that's why our brains only use 10% of their capacity -- using more would wear them prematurely and it's difficult to cool a brain as it is now... :)
my mom told me to put stuff back where i found it. displacing a cache line is fine, as long as you put the original contents back when you're done using the cache. but what if that original line is no longer available, or needed? then choose something else to put back. that's all that superfetch is doing. microsoft should make "refetch" APIs available so that third-party companies can tune it to prefer real-time apps, drivers that consume lots of RAM, and locking apps into VM... seems that the old unix "sticky" bit is now required again... remember this:
chmod +t vi
Very interesting article. Many people should read it instead of bashing Vista for no reason at all and thinking you will need at least 2GB to run it fine.
The way SuperFetch works, it seems like your PC would be faster after a few weeks of use, once it knows what you do and when you do it, quite like some thermal paste takes 200 hours to reach 100% efficiency. The only thing I'm also wondering is: if you change your routine, will it get slower/worse?
On my current computer, WinXP would recommend 2GB of page file, so I would set min/max to 2048MB. Now it seems like Vista recommends more ("3GB") even if it doesn't use more.
Something I'd really like to know: if you plug in a USB stick to use the "Windows ReadyBoost" feature, will the OS use our fast RAM first, or that USB key? Because if it used the key, which is obviously slower than any RAM, it would result in slower performance. I'd really like to know more about how it works, as it could be really interesting when the "next generation" of USB keys is out. Those new ones will be small like a 25 and be ~6GB! And mostly, would it be much faster than the page file? I won't keep my USB key plugged in if it's the same as or worse than the PF. With the low price of huge HDDs, it's no longer a problem to raise the size of the swap file.
I'm sure it would need a day or three to relearn your new habits, but in the meantime things will only be as slow as they'd be without the cache, assuming the memory manager can allocate over the cache at will without overhead.
Readyboost is not a RAM expander, it's a hard drive cache. It wouldn't make any sense whatsoever to use it as main memory when DRAM is orders of magnitude faster.
Disks are the only hardware components that are really going to wear out from overuse rather than from age or abuse, and this won't even put extra strain on them. Most components die from a bad power supply or improper cooling, and the amount of heat these caching technologies generate doesn't compare at all to how Aero will keep the CPU and GPU burning.
This performance dip will go away as people programming games learn to deal with Vista.
This is the real kicker, and I'm guessing that it's not the kind of thing that can be solved with a "10 Ways to Make Your Game R0XX0R On Vista" article in MSDN.
It's something that happens with each major change in the technology stack, from the CPU on up through drivers, OS, and utilities, and it's only solvable through trial and error.
For example, take MSFT's own MechCommander 2. The disk usage pattern of that game is such that it interoperates horribly with realtime virus scanners. It seems obvious now, but it probably didn't when the game was being developed.
A change to the stack always hurts initially, no matter how good the developers making the stack change or the games might be.
Anyone else here remember when Win95 would not only boot in 4mb of ram but was actually useful?
905MB of RAM vs 334?? I think the more realistic question is: what is that CD's worth of RAM doing?
And why would I want memory allocations and drive access to be constantly warring with some background thread that thinks it is "helping" by loading something I might, but probably won't, actually need?
There had better be a way to tame/turn this off or I'm going to be pissed. Especially since I plan on setting up a fast RAID config sooner or later, in which case app start-up time will drop considerably anyway but I'll still want to be able to use my RAM quickly.
Also, Windows historically sucks at managing memory. It gets very fragmented, which leads to lots of swapping which leads to sluggish performance which eventually leads to crashes. As a matter of fact, the ENIAC (yes, that thing with the vacuum tubes that were constantly burning out and had to be replaced) was actually more stable than my Windows system is now due to the fact that I regularly use many memory-intensive apps whose demands Windows cannot meet for extended periods of time and therefore eventually crashes. Ergo, constantly swapping possible hard disk data to and from the RAM seems like an incredibly bad idea to me, unless MS miraculously managed to write a competent memory manager for Vista.
Okay, I got work to do so I'm done flaming.
Ok, So I upgraded to 2.5GB on my games machine so I wouldn't have to deal with the annoying first 3 minutes of slowdown I got with BF2. And now if I upgrade to Vista and use SuperFetch I'll get that back regardless of the ram I have? No thanks.
Hopefully you can just turn this off. But since I've heard nothing about Vista that makes me want to switch and plenty to get me to stick to XP, I'll think I'll be part of the majority that stays.
I think the real question here is why hasn't Windows ever allowed the user to specify how much memory to dedicate to a process manually? Or assign a priority so that certain processes are never swapped out to disk if at all possible? I'd kill for that...
"...Vista is trying its darndest to pre-emptively populate every byte of system memory with what it thinks I might need next..."
Microsoft is trying its darndest to pre-emptively populate every byte of system memory with what it thinks I might need next. They are usually wrong.
I wonder if this is worth a push to Linux for the gaming industry. I see specialized/optimized gaming kernels in the future, where pre-emptive caching is kept to a minimum.
Microsoft's ideal predictive disk caching system:
IE: L2 cache
Visual Studio: RAM!
Apple web graphics: floppy disk
Linux ISO: they'd never use that, recycle bin
I think that SuperFetch is getting some undeserved criticism here. In particular, it seems that a lot of people believe that SuperFetch results in long load times in the case where:
1) all your memory is used to cache a specific set of programs, and,
2) you decide to start a program X, which isn't in the cached set.
I have seen nothing that leads me to believe that program X would take longer to start (with SuperFetch) than it would if SuperFetch weren't in use.
What happens if the SuperFetch algorithm guesses your program choice incorrectly? Well, program X is swapped into RAM from the hard disk. Guess what? This is exactly what happens if SuperFetch is turned off. There's no need to write back a cached program to the hard drive if the program isn't in use (and if the program is in use, you'd hit the hard drive penalty with or without SuperFetch).
Of course, not actually knowing the details behind SuperFetch, I could be wrong here. But it seems that, in theory, there's no added penalty for SuperFetch. I'm not sure why so many people are "hoping" that this feature can be turned off. (By the way, as of RC1, it can be.)
I hope they have the most advanced theories on fragmentation, scheduling, paging, prediction, and hard drive thrashing. Microsoft has never had a great track record. I will await the final version.
Something to remember is that superfetch runs in background io priority. Because of the way io requests are scheduled, your foreground io request will never wait for superfetch longer than the latency of a single io request (max wait time is bounded).
Well, SuperFetch saw a ton of memory freed to make room for the game, and dutifully went about filling the leftover free memory on a low-priority background disk thread.
I think that's not what's happening. 'A ton of memory freed for the game' would be 'freed' by the game by committing or reserving it. So it would not be free, but reserved.
I think it's about the 'low-priority background disk thread'. It is likely the problems you notice are due to badly prioritized threads. Whether it's the 'low-priority background thread' of a pre-release Windows or whether it's some of Battlefield's necessary threads running at too low a priority, I don't know.
how u turn off superfetch? its not lettin me use my pc at all with music programs , it lags them
how u turn off superfetch?
open up a command prompt, type 'net stop superfetch', and press enter. (you'll probably need admin privileges).
its not lettin me use my pc at all with music programs , it lags them
It's probably not superfetch that's at fault (but disabling it could be a good experiment). I'd make an educated guess that it's a driver issue, but of course I can't be sure. good luck!
program X is swapped into RAM from the hard disk. Guess what? this is exactly what happens if superfetch is turned off.
This is correct, as I see it. I can't see any disadvantage to running SuperFetch. You don't have to erase data in RAM before changing it, you just change it. If you're not going to use the superfetched pages then it'd just load the program you chose as usual. Because of SuperFetch's low I/O priority, there's effectively no latency (as already said)
i tried but i got the message accesss denied and im the system admin?
i tried but i got the message accesss denied and im the system admin?
when you open up the command prompt, open it by right clicking on the cmd.exe file, and select 'run as administrator' from the right click menu. You will have to go through the UAC dialog bs.
after doing this, you'll have a cmd window which *actually* has system admin rights. Even though you are already the sys-admin, you still have to do this... Confused? you should be. In my opinion, The UAC dialogs, unlike superfetch, are worthy of some criticism. Even if they do technically make the system more secure. More info here:
Everything runs faster now. The only problem I've found so far is that when I start Windows, it takes longer; something makes my computer think a lot after boot.
There is a way to disable SuperFetch in Vista by setting the following registry value to 0:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters\EnableSuperfetch
A value of 1 enables prefetching for boot only, 2 for applications only, and 3 for both.
They should have exposed this setting in the Computer Management MMC, but that works.
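For anyone who'd rather not click through regedit, the same setting can be applied by merging a .reg file. This is just a sketch of the value described above (reboot and re-check afterwards, since results in this thread seem to vary):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters]
"EnableSuperfetch"=dword:00000000
```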
OK, still not working. I did set the value to 0 in the registry, and SuperFetch is still active; I had to turn it off again from cmd.
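If the registry change doesn't stick, the service itself can be stopped and kept from restarting at boot. This is a sketch that assumes the underlying service name is SysMain (the name Vista registers the Superfetch service under); run it from an elevated command prompt:

```
net stop SysMain
sc config SysMain start= disabled
```

Note the space after `start=`, which sc's odd argument syntax requires. To undo it later, use `sc config SysMain start= auto` followed by `net start SysMain`.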
Question: my memory is still at 700 MB when I don't have anything running. Is Vista using those 700?
The theory on caching algorithms is still incomplete, and the practice very dependent on hardware for efficiency. I don't see how SuperFetch can manage to have predictors reliable enough to offset the (albeit small) additional cost of:
a) Reading data from the disk and filling the RAM.
b) Resolving data dependencies when apps reserve some of the RAM that was used for caching.
The difference between this and L1/L2 processor-level caching is that processor cache memory is NEVER needed for anything else. That lets the designers implement caching strategies tailored precisely to the respective characteristics of the hardware and cache sizes.
If Vista actually computes a caching policy on the fly, to fit the hardware/software characteristics, think of the additional cost!
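To make the hit-rate argument concrete, here is a toy sketch (nothing like SuperFetch's real predictor, and the workload and page names are invented) of an LRU cache where prefetching into otherwise-empty memory turns future misses into hits at no cost to the running workload:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page cache, purely for illustration."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)   # mark as recently used
            self.hits += 1
        else:
            self.misses += 1               # a trip to the disk
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used

    def prefetch(self, page):
        # Only fill memory that would otherwise sit empty; a correct
        # guess turns a future miss into a hit.
        if page not in self.pages and len(self.pages) < self.capacity:
            self.pages[page] = True

workload = ["mail", "browser", "editor", "mail", "browser"]

cold = LRUCache(capacity=4)            # empty cache: first use always misses
for p in workload:
    cold.access(p)

warm = LRUCache(capacity=4)            # predictor guessed the usual pages
for p in ["mail", "browser", "editor"]:
    warm.prefetch(p)
for p in workload:
    warm.access(p)

print(cold.hits, warm.hits)  # 2 5
```

The catch the comment above raises is real: the sketch assumes the prefetched pages were guessed correctly, and a wrong guess just wastes the disk reads.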
Mmm, I was guessing that it turned itself back on after boot. I'll try the registry edit.
Does anyone wonder why they had to create a cache optimizer for Windows in the first place? It's a sad day when an operating system needs a crutch despite this abundant amount of system resources!
What they really ought to have done was design Windows NT/2000/XP with a more customizable memory manager, like Windows 9x had.
Windows 95's overall memory management was pretty bad, but the idea of holding off on using the swapfile until physical memory was almost gone wasn't half bad. At least, it wasn't if you had lots of RAM in your computer, like 64 or 128 megabytes.
Windows 98/Me introduced a supposed "improvement" in memory management by performing swapfile operations when only about a quarter of memory was in use. The main idea with this seemed to be to improve the performance of memory-starved systems. The trouble was that systems obese with memory got the "improved" treatment, too, which often led to unnecessary swapfile operations when nowhere near all physical memory was in use, leading to reduced system performance.
Thanks to the "ConservativeSwapfileUsage" and "vcache" system options, power users with loads of RAM in their systems could customize Windows' behavior when it came to memory allocation for programs and disk-caching and virtually eliminate swapfile access to bring out the best performance their systems could offer.
In Windows 2000/XP, like Win98/Me's stock configuration, the disk cache gets priority in system memory, and little-used loaded program data is shoved out to the swapfile, even when there's plenty of RAM to go around, all leading to increased swapfile activity. There is no customizability for swapfile behavior or disk cache size range. The most customizability there is for memory management is a swapfile size setting and a setting for whether applications or background tasks get more attention. The rest is hard-coded. There's no condoned way to configure the system to potentially eliminate swapfile usage.
The best that I've been able to do to get zero swapfile activity in a WinXP system is to stuff my system full of RAM (at least 512 MB, 1 GB for games) and disable the swapfile completely. This has worked extremely well for me, but many experts warn that doing this supposedly does more harm than good.
Windows 9X may have been less stable and supposedly had a far less efficient memory manager than Windows NT/2000/XP, but the level of customizability in that memory manager, in my opinion, allowed for the potential to run far more responsively than Windows NT/2000/XP over a longer period of time. I'll admit, I still run Windows 98 SE most of the time, because it stays fully responsive longer, and XP doesn't unless I use that discouraged no-swapfile setting.
I totally agree with you on disabling the swapfile. Windows XP (on my notebook) seems to like to grind the HDD for no reason. With the swapfile disabled, everything FLIES... although there's the odd application that demands the swapfile be turned on (e.g. Photoshop).
Yeah, disabling the swapfile in XP makes the computer a lot more responsive.
Actually, Photoshop is just being stupid. When it complains and asks if you want to continue, clicking either yes or no will continue (I wonder why it asks at all). Then you can work normally (I've never had any problem).
Some people really don't know what they're talking about.
SuperFetch will slow your computer down no matter how well its scheduler is coded, because it's something like emulating another machine. The scheduler is a program, and a program managed by a program doesn't get much benefit from the CPU's speed-up features unless it's accelerated in hardware with some parallel technique. It will silently eat your resources, because the threshold calculations are where it stalls frequently; that's where the bottleneck lies. It's efficient enough that with a powerful CPU you won't notice anything, but with, say, a 1 GHz CPU, you WILL suffer.
I suffered. After just disabling SuperFetch, my lag stopped. I use a 1 GHz CPU with 655 MB of RAM; I tried changing everything and it still lagged like hell, and after disabling SuperFetch it stopped.
Of course it helped the launch time of programs like games, and it really works pretty well, but given the slightest bit of idle time the scheduler will take the chance to run SuperFetch-related code, and that's where your computer stalls slightly. That happens something like 5 times a second, and you're sunk.
Turn everything on, but turn SuperFetch, Prefetch, and the adaptive start menu off if you are using a slow machine. Forcing unused DLLs to unload helps too.
Be careful disabling the swapfile. The benefits are questionable, and it generally adds more risk than the negligible benefit is worth.
The benefit is exactly the same as SuperFetch's: not having my applications become sluggish due to the system cache stealing all their RAM. Only I don't have to wait for the action which caused the system cache to balloon to complete.
When I first encountered the bizarro situation where more RAM made my system more sluggish and realized the cause, I tried reducing the size of my swapfile. Guess what happened? Windows XP would complain constantly about not having enough virtual memory even when plenty of physical RAM was being consumed for cache! So I disabled the swapfile and have continued to do so on every desktop system that I've upgraded to 2GB+ RAM.
What's the risk? No crash dumps. Running out of RAM is probably a Very Bad Thing, but in my experience the system becomes sluggish as RAM nears exhaustion, so I take that as my cue to close something unimportant, like Outlook.
So I disabled the swapfile and have continued to do so on every desktop system that I've upgraded to 2GB+ RAM
I had some very unusual things happen to me with the swapfile disabled. The last thing I need is more voodoo on my computer at this point.
Can you point to a specific benchmark that demonstrates a concrete, factual, data-backed benefit of disabling the swapfile? If not, then why do it? You're assuming quite a bit of risk for no benefit.
Overall system performance is unlikely to be helped by disabling swap and any benchmark with a significant disk I/O component will probably perform marginally worse since the system will be unable to trade swap for cache.
On a system with excess RAM, disabling swap can solve the problem of "My long-running applications take a great deal of time to respond when they are first accessed following some disk-intensive activity." I don't mind that a new VMware instance may be marginally slower to boot if it means that I won't experience a swap-induced lag when I switch back to Visual Studio. It's a trade-off which works well for my usage patterns. Disabling swap would not be a viable option for someone whose memory needs regularly approach the amount of RAM they have and I would not be surprised by any weird behavior that occurs once RAM becomes exhausted.
PS: Bigger textarea, please.
Does ReadyBoost make hybrid hard disk drives unnecessary? As I understand it, ReadyBoost allows a USB thumb drive to be used as a cache for frequently-used hard drive content. Is that the same thing that the flash memory on a hybrid hard drive does? If one is already using ReadyBoost, how much of a performance improvement will one see from replacing one's hard drive with a hybrid hard drive?
Does ReadyBoost make hybrid hard disk drives unnecessary?
No, because the flash RAM on the hybrid hard drive is accessible before the OS has fully booted, so you can speed up the boot and hibernate/suspend sequences.
For the vast majority of users, disabling the page file is a very bad idea unless you have perhaps 2-3x the amount of RAM you normally use. The ill effects are described all over the place. For example, some apps allocate a significant amount of memory without ever using it (or take a very long time to use it). If you have a page file, this memory won't occupy RAM until it's used; if you don't have a page file, you have a big chunk of RAM doing nothing. Perhaps 0.0001% of people out there will actually gain from disabling the page file; the rest will lose out significantly. The trouble is there is a lot of FUD out there about the page file and the cache, and most people can't be bothered to read carefully enough to understand what they're talking about. For example, a lot of people call it a swap file. It's not; it's a page file. This isn't just a difference in name, it's an important conceptual difference. I currently have 2 GB, which is enough for me at the moment, but I would never consider disabling the page file. Perhaps if I had 6 GB or so, but that's way too expensive.
Forgot to mention: while there are probably some users (very few) who benefit from disabling the page file, as I mentioned, another reason disabling it is a bad idea is that Windows simply wasn't designed to run without a page file. You may not agree with this design philosophy, but it was Microsoft's decision, and you chose Windows, so you have to live with it. You should at least be aware, if you plan to disable the page file, that MS basically creates the page file in RAM, because, as I mentioned, by design Windows isn't capable of operating without one. Obviously it's all still in RAM, so you've fulfilled your primary purpose of preventing it ever going out to disk, but this should at least tell you something about the Windows design philosophy and why disabling the page file is nearly always a bad idea.
Unless you've actually seen a properly conducted benchmark, you should be skeptical of anyone's claims of improvements. People have a tendency to see what they want to see and hear what they want to hear. Double-blind tests, for example, have shown how bogus Monster cables are, but a lot of people still claim they are super wonderful. Similarly, I've seen people make bizarre claims about how much faster their computer is thanks to something, when that something is obviously not going to provide a speed improvement, or at least not to the level they are ascribing to it.
A lot of people say, OMG, why is my page file in use when I have however much RAM? Or, why is my disk thrashing every so often? Who cares, if it isn't affecting performance? (It may cause added wear and tear, but that's unlikely to be significant unless you need a major RAM upgrade.) My point is: don't care whether your disk thrashes or your page file sees heavy use. Just care about performance, and when I say performance, I mean properly measured, not 'my disk is thrashing so it must be slow'. I wonder, too, how many people are so happy that an unused program can load up a bit faster, but ignore the fact that they are degrading overall performance in many other areas. You need to consider overall performance, not single-issue performance. Sure, disabling the page file may improve the situation when you open up unused background programs, but what about the thousand performance hits you also suffer as a result? Humans aren't good at measuring overall performance, so again, don't assume it's better; measure it properly. Perhaps it's because I'm a scientist, but for me the most important thing is repeatable, measured performance, not 'well, I think it's faster...'
Why would you want Windoze to use ANY memory on a slow, spinning hard drive if you have plenty of way faster RAM memory available?
Suppose you have a PC with 1G of memory and a 1G swap file. Now, suppose you add another 1G of memory so you have a total of 2G of RAM. If everything was running fine before, you won't need ANY swap file because you now have the same amount of total memory as before, except that now it's all in speedy RAM.
What does Windoze do? It now recommends a 2G or so swap file, as if you mysteriously now need a total of 4G of memory. Why? Windoze does not do any analysis whatsoever. It just multiplies the amount of installed RAM by a constant and then 'recommends' the result. Stupid!
Here is another thing. Windoze does not make much, if any, distinction between fast RAM and way-slow virtual memory on your HD. This is true for Windoze XP Pro, and I would be really surprised if they changed anything in Vista.
You would think that the OS would use up all available RAM and then swap to disk. Not so. The only way to stop swapping is to configure the swap file size to zero. If Windoze really needs some room on the HD, it will just take it, and politely notify you, regardless of your settings.
Case in point. I was running QUAKE 4 some time ago, and did not realize that my system was using the "Let Windoze Manage Virtual Memory" setting.
When I exited the game, I found myself waiting, and waiting, and waiting, for 20 seconds or more. I glanced at the HD light and saw it was on solidly. Aha: Windoze is using the HD for virtual memory.
I have 2G of fast RAM in my system, and no way does Quake 4 need 2G of ram (yet). So, why the h*ll was Windoze using the HD?
I reset the swap file setting to zero, like I have always had it, and tried running Quake 4 again. When I exited the game, it was instantaneous. Your mileage may vary, but if you want way faster response from your apps (especially video editing and such), and absolutely no problems running anything at all, buy more RAM and disable the swap file!
BTW, you regain 2G (or who knows how much) on your HD as a bonus.
If you want to know what's using up the rest of your HD space, I recommend Sequoia View, from the computer science department of the Technische Universiteit Eindhoven. It uses an amazing visualization technique called cushion treemaps to provide you with a single picture of the entire contents of your hard drive. It can be downloaded for free from
As NE said, this is Microsoft's "choice" (and, by the way, their privilege to choose). Quake, and practically any "Windows game", leaves memory management to be handled by the OS, so the creators of the game have to live with the OS. But Windows IS a multitasking system, so it must optimize globally, not just do favors for a particular program or a particular class of programs (that's one of the reasons NT was born in the first place). So why not blame id Software for letting the OS manage their program's memory, given that (I suppose) they knew the consequences? The OS offers services. If you use them, you take the risks.
There is no viable multi-purpose solution to caching problems. If you want gaming performance, go and buy a gaming platform, which was designed for gaming and which can be blamed if the gaming performance is not good enough; but that is another situation.
It just multiplies the amount of installed RAM by a constant and then 'recommends' the result. Stupid!
Do you know the underlying algorithm? Do you know the factors that resulted in that "constant"?
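For what it's worth, the commonly cited rule of thumb is about 1.5x installed RAM. As a toy sketch of the "multiply by a constant" claim (the function and the 1.5 multiplier are assumptions for illustration, not Windows' documented algorithm):

```python
def recommended_pagefile_mb(installed_ram_mb, factor=1.5):
    # factor=1.5 is the commonly cited rule of thumb, not a documented
    # Windows algorithm; treat it as an assumption.
    return int(installed_ram_mb * factor)

# Doubling RAM doubles the "recommendation", regardless of whether the
# extra RAM removed the need for paging in the first place.
print(recommended_pagefile_mb(1024))  # 1536
print(recommended_pagefile_mb(2048))  # 3072
```

Whether the real recommendation involves more analysis than this is exactly the open question in the two comments above.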
A few answers to a few questions from a while back. First: yes, on a high-memory machine (1 GB), Vista uses around 700 MB of memory at idle. Second: I currently have 3 GB of RAM, so my view may be biased, but I find that with SuperFetch turned off my PC performs FAR more sluggishly (i.e. average app startup time up to 2 seconds slower). Starting up a more memory-intensive application (such as Dark Messiah) seems to take no longer than with SuperFetch enabled, and it seems to "free" the needed memory from the SuperFetch cache with no apparent problem, i.e. no performance decrease or stutters. This might be attributable to my 3 GB of memory, but it still leads me to believe that Microsoft might actually have thought about the odd gamer who suddenly needs to free up 1.5 GB of his "cached" memory for a game. Also, as far as I can see, the SuperFetch thread is totally inactive during some high-memory, full-screen 3D applications (maybe another indication that Microsoft thought about this carefully).
Also, slightly more off topic: yes, Vista does use a larger page file. I'm not sure if it is sized in proportion to available disk space or to installed memory, but my current (automatically configured) page file is just over 6 GB in size.
Hmmmm. Seems to me personal preference also matters. I do NOT want some cache eating all of my free RAM, as I would like to know how much of it I actually have at any one time. Turning off SuperFetch helps a lot, as it reveals the standard Windows memory leak and lets me tell when to reboot.
Even with SuperFetch off, Windows still shows several hundred megabytes in use for caching purposes. I won't mind a bit if an app takes a couple seconds longer to load the first time I use it, if I get a good idea of my actual free RAM in return.
Thanks to all who contributed to this discussion, but my mind's made up: SuperFetch is OFF on my Vista PC.
Dang! Spoke too soon. Yeah, it helps to turn off superfetch, but all memory still (eventually) winds up in the cache. It just takes longer. If anyone has any idea how to prevent that, please post it! I'll be searching elsewhere on the net, but this is by far the most helpful discussion I've yet found, so I'll be checking back here frequently.
Cushion treemaps... ugh. That is such a stupid-looking app that it's not useful at all. What a joke. I use Drivescan from Stardock, and man, it rocks. It shows me how much HD space is used by each folder; expand a folder and it shows each subfolder, sorted by size or alphabetically. I immediately find out which app is taking up so much HD room. And since it's just a few files in a folder, I put it on a USB device when working on a client's system to free up room. It saves me tons of time.
Now, on SuperFetch: I don't know if I like it at all so far. Only time will tell, but if I had 2 GB of RAM I would set my swap file to its bare minimum and barely have Windows touch the HD at all. I like that memory to be used by the app I am using, like a game, not by some app I am not using at the time.
Vista is waaaaay faster when SuperFetch is disabled...
I have a new gaming PC, and Vista is quite slow with SuperFetch enabled.
My apologies, I'm no expert on these things, and it's nice to have an explanation of why my new laptop is struggling so much. I only have 1 GB of RAM, and the only thing I was running was the chess game that comes with Windows Vista (nothing fancy), yet after an hour my system started to struggle and I eventually had to shut it down. It was using 96% of available RAM and was continually flagging that it was low on memory.
As explained here, it's running 20+ applications in the background. Now, I'm sorry, but I don't need them all to be there just to save me 2 seconds when I actually want to use one.
JP, RAM is not the only thing that makes your system fast. It's not like you can have a 2.0 GHz Celeron processor and 2 gigs of RAM and expect it to be blazing fast just because you have 2 gigs of RAM. There is more to the "speed" of your computer than just RAM. You say you would like an explanation of why your laptop is sluggish? We need more information to help. What kind of laptop? What kind of processor? Etc.
ps... this thread is nice :)
Great post. I just built a new system with 2 gigs of RAM and Vista, and let me tell you, it's the best desktop experience I have ever had.
Checked this out from Digg. I dugg it.. :)
Thanks to EVERYONE for the very constructive and informative comments!
Wow, things Linux has been doing for ages. Welcome to the new millennium.
What I found better than SequoiaView and Drivescan is something called SpaceMonger. Think of SequoiaView's picture boxes mixed with Drivescan's flexibility: it lists the folders and subfolders inside the boxes, whichever one is taking up more space.
Check it out. I've been using it for quite a while and it's saved me quite a lot of disk space.
If you don't want to pay for it, download the old 2.1 version; it works just as well.
I'm glad someone is finally explaining to people how Windows works. Many of my friends have seen that "Vista eats up all their memory" and switched back to XP because of this false perception. Thank you for the excellent article!
WTF? If you're so concerned about gaming, do yourself a favour: upgrade your RAM for starters. 4 GB if you're dead-set serious about gaming (and not the cheap stuff, either; I'm running Corsair TwinX C2 Pro Dual 3200, which is synchronised as a pair, you-beaut for dual cores).
If you're so concerned about the OS chewing up the page file while it tries to recover its original state after you played that all-encompassing game for 6 hours straight, just buy yourself another HDD and move the page file via a "virtual RAID" to the 2nd HDD. Want more page file for Vista to play with? No worries, just set values for the 2nd HDD (as well as the original values for the 1st HDD) and you can double your page file capacity.
If you're so concerned about the life expectancy of your hardware, then don't be. Any sane PC enthusiast will have long since upgraded to superior technology before that old HDD becomes unstable and collapses in on itself, causing a black hole which grows larger as it twists and absorbs our dimension into the nether regions of time and space. In other words, don't be a miser; thrash it while you have it. Push it to the extreme and beyond. If you have a decent enough system, i.e. you built it yourself using QUALITY parts, she'll love you for it.
Remember, there is no substitute for quality. You get what you pay for; just make sure you do your homework first.
I agree with Western Infidels about speed and responsiveness. A program being sluggish because SuperFetch is caching a whole bunch of stuff into my RAM is more than enough reason for me to put off considering Vista for another year or two until it gets ironed out. Either that, or just switch to Ubuntu and "Wine" all my favorite older programs.
I'm in IT, and having spent some time talking with a few guys from Microsoft at a company showing of Vista, I can say they really recommended that everyone use 2 GB.
1 GB is what's recommended as a minimum on the box.
If you run 3 GB+, Vista DOESN'T use all your RAM (on my system it seems to stop at just under 3 GB used). After evaluating quite a lot of the new technologies, I think SuperFetch is awesome, but as I said, you really need a lot of RAM for it...
As for upgrading to Vista, I've seen tons and tons of reviews slagging it off with no technical commentary at all. These reviews seem to be written either by Mac/Linux users or by people who just can't be bothered to do any research into what Vista can actually do, and who hate Microsoft just because.
As an OS, Vista has got some AWESOME features, and SuperFetch is one of them, but you WILL NEED a lot of RAM to make the most of it. For those without lots of RAM, try adding a 2 GB flash drive (they're cheap) and configuring it with ReadyBoost (another new feature) to improve performance.
Out of all the people in this thread, I wonder how many have objectively reviewed Vista, and how many are Linux fanboys who have just looked for an excuse not to use it?
At present there are a lot of driver issues around the 64-bit version, but for anyone who has used it, the performance in processor-intensive apps is nothing short of phenomenal.
This happens on my Mac too. No matter how much RAM I have, it uses all of it within a few hours. Pisses me off, as I like to have at least 25% of my RAM unused for opening new applications. I can hear the disk grinding like hell when memory usage goes to 95%.
Actually, I don't even know if it's caching. I tend to think the Mac is just leaking memory like crazy. Hopefully 10.5 will fix it with Objective-C garbage collection.
I turned SuperFetch off on my Vista machine (with 2 gigs of RAM) and there's a noticeable performance improvement. However, as some people said, it just delays the fill-up. Eventually it uses everything, even when you close all your applications. I wish they would fix this and give power users an option to manage our own memory.
This seems to be a fundamental shift in computer architecture. It used to be that the hard drive was 'virtual memory'. Now the memory is 'virtual hard drive'...
Times are changing.
FINE. Your hard drive runs like a maniac because everything in memory has to be swapped out to the page file so there can be enough room in RAM to load your new program.
That is why memory is best EMPTY, so we do not have thrashing hard drives. Yes, it's sometimes less efficient, but I want my hard drive stopped and my memory empty when I run my computer.
Is that so bad? Hell no. A thrashing hard drive is bad, not just because of the noise, but because it lags the computer.
You have 1277 MB of cache with 2045 MB total... so you're using almost 800 MB of RAM doing what? Where is the REST of your RAM?
Microsoft's caching methods are exceptionally piss poor, and always have been.
Patrick: all operating systems have always used spare RAM as drive cache, if they were capable of doing so. It's just that Microsoft's form of it sucks, all the way back to SMARTDRV (or whatever the hell DOS 5's cache program was called). It's horrible. And, of course, there's the fact that until gigabytes of RAM became common, Windows had a tendency to use nearly all of your RAM while doing only its minimum of caching.
It'd be sweet if someone could write programs to replace Microsoft's piss-poor internal parts.
Matt, you don't understand how this works. The cache is not a set size equal to, say, 80% of your RAM. If the system needs RAM to run a program, it should immediately free cache RAM. Also, if running the hard drive while you're not directly using what it's caching lags your system, then you need to invest in some newer hard drives, or SCSI drives.
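The "cache gives way to programs" claim above can be put as toy arithmetic (a sketch; the numbers are invented, and real eviction happens page by page rather than as one subtraction):

```python
TOTAL_MB = 2048  # hypothetical machine; every figure here is made up

def launch_app(need_mb, app_mb, cache_mb):
    # If truly-free memory can't cover the allocation, shrink the cache.
    # Cached pages are clean copies of disk data, so they can be dropped
    # instantly, with no write-back to disk.
    free = TOTAL_MB - app_mb - cache_mb
    if need_mb > free:
        cache_mb -= need_mb - free
    return app_mb + need_mb, cache_mb

apps, cache = 700, 1300            # memory looks "full", but it's mostly cache
apps, cache = launch_app(500, apps, cache)
print(apps, cache, TOTAL_MB - apps - cache)  # 1200 848 0
```

So a 500 MB launch succeeds even though Task Manager showed almost nothing "free": the cache simply shrinks from 1300 MB to 848 MB to make room.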
So, here is a simple question:
How do you know when you need more memory for a system? I'm looking for a simple way, instead of having to add up columns.
It's very interesting, but it doesn't seem to explain why Vista overall is slower than XP.
If these new methods are so clever at prefetching pages and managing the cache, why is it slower than XP?
I would have expected each Windows release to be tuned and faster than the previous version, especially with these new features.
I want only the programs I choose to run. I want the computer to use as close to zero CPU cycles as possible, and the HD stopped, except when programs I choose to run need them. I want my memory empty.
Vista's slower because of the so-called optimizations like prefetch and UAC, and especially all the indexing.
It's like built-in malware...
OK, first off, this little story is a new spin on a typical corporate myth to explain why their latest program uses more memory than the last: "we reserve memory as a cache", etc. I remember Adobe using it a few times, and even XP used it.
Remember that prefetching DOES NOT MAKE YOUR SYSTEM RUN FASTER! It only loads commonly used junk at startup rather than when you double-click the icon. This is logical; think about it. Now, if you boot your computer and decide you don't want to run Office first, or whatever, or you don't have just 5 or 6 apps installed on your computer, it will actually be far, far slower with prefetch.
Also, you don't really know how memory works. When "notepad" in your example is loaded into memory, there is no clearing of memory or anything like that; it simply overwrites an area and is assigned a set of RVAs. The program doesn't know whether it is loaded into swap or physical memory. Windows is so terrible that it tends to swap out to disk even when only 10-20% of your memory is used... would you trust prefetch not to swap after this?
Lastly, filling up your memory with speculative data is generally a BAD idea, since any new data (and there will always be new data loaded, even with commonly used programs; think documents, images, etc.) will push itself into memory, causing cached stuff to swap out. So prefetching will most definitely cause MORE swapping rather than less.
Congrats on listening to the Microsoft PR campaign; you absorbed the ambiguous details really well.
The question should actually be: "Why did previous versions of Windows use memory so inefficiently, when so many other operating systems (e.g. Linux) have been doing this for ages?"
Very interesting! Thanks for posting.
I just turned off SuperFetch and the indexing service, and I still managed to type this. Funny how my HD is quiet and I have 1.2 gigs of free RAM.
The CPU usage on both my cores is close to zero.
OMG, MY COMPUTER IS NOT OPTIMIZED. Good.
The only reason I switched from openSUSE to Windows XP is that some games are made with DirectX, and WINE is not optimized enough to make them playable under Linux-like systems.
I'll stick with my TinyXP Beast Edition that only takes 40 MB of RAM! Can you believe that?!
I hope this guy is kidding. For one thing, only half of his memory is being used; and secondly, has he not read anything about Vista at all? There is a new memory management system that uses much more RAM to keep the speed of the system up. Get more RAM if you're worried about it.
4 GB of DDR1... Move up to DDR2 and enjoy some real performance. Read up on Corsair: they only take the same ol' RAM anyone else has and put their name on it with a heat sink, sometimes overclocking the RAM, and then people wonder why the RAM goes bad when it's supposed to be so nice. If you're running PC3200 on a dual core, you've wasted your money; you will not get the performance out of a Socket 939 dual core that you would get out of a Socket AM2 dual core that uses DDR2. So in other words, my 2 GB of DDR2-800 would blow your 4 GB of DDR1 (400 MHz) Corsair out of the water. Vista does run better than XP; SuperFetch seems to be a nice answer, and the Windows team has a whole new group of people. In due time things will be worked out. I'm all for Linux and playing games on Linux, but I feel my 8800 would go to waste playing older DirectX games. I prefer new games, not something that's been out for 3 years. Except for BF2 :P
Hehe, quality?! So if you pay more, it must be nicer?? Hahaha.
No no, if it has LEDs on the RAM, it MUST be nice! hahahahaha
What's so efficient about 92 MB paged kernel memory anyway?
So they finally started doing the sort of memory management that *nix has used for 25 years. YAWN!