April 18, 2008
Dual core CPUs are effectively standard today, and for good reason -- there are substantial, demonstrable performance improvements to be gained from having a second CPU on standby to fulfill requests that the first CPU is too busy to handle. If nothing else, dual-core CPUs protect you from badly written software; if a crashed program consumes all possible CPU time, all it can get is 50% of your CPU. There's still another CPU available to ensure that the operating system can let you kill CrashyApp 5.80 SP1 Enterprise Edition in a reasonable fashion. It's the buddy system in silicon form.
My previous post on upgrading the CPU in your PC was more controversial than I intended. Here's what I wrote:
In my opinion, quad-core CPUs are still a waste of electricity unless you're putting them in a server. Four cores on the desktop is great for bragging rights and mathematical superiority (yep, 4 > 2), but those four cores provide almost no benchmarkable improvement in the type of applications most people use. Including software development tools.
It's unfortunate, because this statement overshadowed the rest of the post. All I wanted to do here is encourage people to make an informed decision in selecting a CPU. Really, pick any CPU you want; the important part of that post is being unafraid to upgrade your PC. Insofar as the above paragraph distracted readers from that goal, I apologize.
However, I do have strong feelings on this topic. All too often I see users seduced by Intel's marketing department, blindly assuming that if two CPU cores are faster than one CPU core, then, well... four, eight, or sixteen must be insanely fast! And out comes their wallet. I fear that many users fall prey to marketing weasels and end up paying a premium for performance that, for them, will never materialize. It's like the bad old days of the Pentium 4 again; for absurd megahertz clock speeds, substitute an absurd number of CPU cores.
I want people to understand that there are only a handful of applications that can truly benefit from more than 2 CPU cores, and they tend to cluster tightly around certain specialized areas. To me, it's all about the benchmark data, and the benchmarks just don't show any compelling reason to go quad-core unless you regularly do one of the following:
- "rip" or encode video
- render 3D scenes professionally
- run scientific simulations
If you frequently do any of the above, there's no question that a quad-core (or octa-core) is the right choice. But this is merely my recommendation based on the benchmark data, not iron-clad fact. It's your money. Spend it how you like. All I'm proposing is that you spend it knowledgeably.
Ah, but then there's the multitasking argument. I implored commenters who felt strongly about the benefits of quad-core to point me to multitasking benchmarks that showed a profound difference in performance between 2 and more-than-2 CPU cores. It's curious. The web is awash in zillions of hardware review websites, yet you can barely find any multitasking benchmarks on any of them. I think it's because the amount of multitasking required to seriously load more than two CPU cores borders on the absurd, as Anand points out:
When we were trying to think up new multitasking benchmarks to truly stress Kentsfield and Quad FX [quad-core] platforms, we kept running into these interesting but fairly out-there scenarios that did a great job of stressing our test beds, but a terrible job of making a case for how you could use quad-core today.
What you will find, however, is this benchmarking refrain repeated again and again:
Like most of the desktop applications out there today, including its component apps, WorldBench doesn't gain much from more than two CPU cores.
That said, I think I made a mistake in my original statement. Software developers aren't typical users. Indeed, you can make a reasonable case that software developers are almost by definition edge conditions and thus they should seek out many-core CPUs, as Kevin said in the comments:
How would you suggest developers write applications (this is what we are, and what we do, right?) that can actually leverage 4, 8, etc... CPU cores if we are running solo or dual core systems? I put this right up there with having multiple monitors. Developers need them, and not just to improve productivity, but because they won't understand just how badly their application runs across multiple monitors unless they actually use it. The same is true with multi-core CPUs.
I have two answers to this. One of them you probably won't like.
Let's start with the first one. I absolutely agree that it is important for software developers to consider multi-core software development, and owning a multi-core machine is a prerequisite. I originally wrote about this way, way back in 2004 in Threading, Concurrency, and the Most Powerful Psychokinetic Explosive in the Universe. In fact, two of the people I quoted in that old article -- true leaders in the field of concurrent programming -- posted direct responses to my article yesterday, and they deserve a response.
Rick Brewster, of the seriously amazing Paint.NET project, had this to say in a comment:
Huh? Paint.NET, for one, shows large gains on quad-core versus dual-core systems. There's even a benchmark. I'd say that qualifies as "applications most people use."
He's absolutely right. A quad-core Q6700 @ 2.66 GHz trounces my dual-core E8500 @ 4.0 GHz on this benchmark, to the tune of 26 seconds vs. 31 seconds. But with all due respect to Rick -- and seriously, I absolutely adore Paint.NET and his multithreading code is incredible -- I feel this benchmark tests specialized (and highly parallelizable) filters more than core functionality. There's a long history of Photoshop benchmarking along the same lines; it's the 3D rendering case minus one dimension. If you spend a significant part of your day in Photoshop, you should absolutely pick the platform that runs it fastest.
But we're developers, not designers. We spend all our time talking to compilers and interpreters and editors of various sorts. Herb Sutter posted an entire blog entry clarifying that, indeed, software development tools do take advantage of quad-core CPUs:
You must not be using the right tools. :-) For example, here are three I'm familiar with:
- Visual C++ 2008's /MP flag tells the compiler to compile files in the same project in parallel.
- Since Visual Studio 2005, we've supported parallel project builds in Batch Build mode.
- Excel 2007 does parallel recalculation. Assuming the spreadsheet is large and doesn't just contain sequential dependencies between cells, it usually scales linearly up to at least 8 cores.
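Herb's Excel example is the easy case: when cells don't depend on each other, recalculation is embarrassingly parallel. Here's a rough sketch of that shape in Python -- the "sheet" and "formula" are invented for illustration, not a claim about how Excel works internally:

```python
# Recalculating independent cells in parallel: each "formula" depends only
# on its own inputs, so the work splits cleanly across cores. Sequential
# dependencies between cells would force the serial path instead.
import math
from concurrent.futures import ProcessPoolExecutor

def recalc_cell(inputs):
    # Stand-in formula: a bit of pure, independent number crunching.
    return sum(math.sqrt(x) for x in inputs)

def recalc_sheet(sheet):
    # The framework decides how to split the cells across worker processes.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(recalc_cell, sheet))

if __name__ == "__main__":
    sheet = [[i, i + 1, i + 2] for i in range(8)]
    print(len(recalc_sheet(sheet)))  # 8 cells recalculated
```

The "scales linearly up to at least 8 cores" claim hinges entirely on that independence; one chain of dependent cells and you're back to serial.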
Herb is an industry expert on concurrent programming and general C++ guru, and of course he's right on all three counts. I had completely forgotten about C++ compilation, or maybe it's more fair to say I blocked it out. What do you expect from a guy with a BASIC lineage? Compilation time is a huge productivity drain for C++ developers working on large projects. Compilation time using time make -j<# of cores + 1> is the granddaddy of all multi-core programmer benchmarks. Here's a representative result for compiling the LAME 3.97 source:
CPUs  Processor                        Compile time
1     Xeon E5150 (2.66 GHz dual-core)  12.06 sec
1     Xeon E5320 (1.86 GHz quad-core)  11.08 sec
2     Xeon E5150 (2.66 GHz dual-core)  8.26 sec
2     Xeon E5320 (1.86 GHz quad-core)  8.45 sec
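The reason make -j parallelizes so well is that each translation unit compiles independently. A sketch of that fan-out in Python -- the file names and the checksum "compiler" are stand-ins for illustration, not real build tooling:

```python
# The make -j idea: hand independent "compile jobs" to a pool of worker
# processes, one per core plus one (the -j<# of cores + 1> rule of thumb,
# the spare worker keeping the queue full while another waits on disk).
import hashlib
import multiprocessing
import os

def compile_unit(source_name):
    # Stand-in for running the compiler on one translation unit.
    digest = hashlib.sha256(source_name.encode()).hexdigest()
    return source_name.replace(".c", ".o"), digest

def parallel_build(sources, jobs=None):
    jobs = jobs or (os.cpu_count() or 1) + 1
    with multiprocessing.Pool(processes=jobs) as pool:
        return dict(pool.map(compile_unit, sources))

if __name__ == "__main__":
    objects = parallel_build(["lame%02d.c" % i for i in range(16)])
    print(len(objects))  # 16 object files
```

No job talks to any other job, which is exactly why the quad-core wins above even at a much lower clock speed.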
The absolute numbers seem kind of small, but the percentages are incredibly compelling, particularly as you add up the number of times you compile every day. If you're a C++ developer, you need a quad-core CPU yesterday. Demand it.
But what about us managed code developers, with our lack of pointers and explicit memory allocations? Herb mentioned the parallel project builds setting in Visual Studio 2008; it's under Tools, Options, Projects and Solutions, Build and Run.
As promised, it's defaulting to the number of cores I have in my PC -- two. I downloaded the very largest .NET project I could think of off the top of my head, SharpDevelop. The solution is satisfyingly huge; it contains 60 projects. I compiled it a few times in Visual Studio 2008, but task manager wasn't showing much use of even my measly little two cores:
I did see a few peaks above 50%, but it's an awfully tepid result compared to the make -j4 one. I see nothing here that indicates any managed code compilation time improvement from moving to more than 2 cores. I'm sort of curious whether Java compilers (or compilers for other .NET-like languages) do a better job of this.
Getting back to Kevin's question: yes, if you are a software developer writing a desktop application that has something remotely parallelizable in it, you should have whatever number of CPU cores on the desktop you need to test and debug your code. I suggest starting with a goal of scaling well to two cores, as that appears to be the most challenging part of the journey. Beyond that, good luck and godspeed, because everything I've ever read on the topic of writing scalable, concurrent software goes out of its way to explain in excruciating detail how hellishly difficult this kind of code is to write.
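The canonical small example of that difficulty is the lost update: an increment is really a read-modify-write, and two threads can interleave between the read and the write. A minimal sketch (the thread and iteration counts are arbitrary):

```python
# Why scaling past one core is hard: two threads incrementing a shared
# counter will silently lose updates unless the read-modify-write pair
# is made atomic. The lock below restores correctness.
import threading

counter = 0
lock = threading.Lock()

def add_safely(n):
    global counter
    for _ in range(n):
        with lock:       # remove this and updates can vanish under load
            counter += 1

threads = [threading.Thread(target=add_safely, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- every time, because of the lock
```

That's the trivial case; real concurrent code has to worry about the same interleaving across every shared structure it touches, which is where the "hellishly difficult" reputation comes from.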
Here's the second part of the answer I promised you earlier. The one you might not like. Most developers aren't writing desktop applications today. They're writing web applications. Many of them may be writing in scripting languages that aren't compiled, but interpreted, like Ruby or Python or PHP. Heck, they're probably not even threaded. And yet this code somehow achieves massive levels of concurrency, scales to huge workloads, and drives some of the largest websites on the internet. All that, without thinking one iota about concurrency, threading, or reentrancy. It's sort of magical, if you think about it.
So in the sense that mainstream developers are modelling server workloads on their desktops, I agree, they do probably need as many cores as they can get.
Posted by Jeff Atwood
Jeff, I find this article disappointing. I didn't think I would put you in the "those doomed to repeat the mistakes of history" camp. This is the same sort of mistake Bill Gates made when exclaiming with all seriousness, "Nobody will ever need more than 640K of RAM!" At least you didn't say "nobody". More and more software will be able to take advantage of the extra cores. Games will start using extra cores to compute physics while handling other tasks; I believe Crysis already scales pretty well to additional cores. I can think of lots of uses for at least four cores, and I'm sure there are many more scenarios that I can't think of. Given that a quad core CPU is not much more expensive than a dual core, I can't think of any real reason not to get a quad even if the clock speed is a little slower. I consider a quad core more future-proof than a dual core even if the dual runs at a higher clock speed. You can easily overclock an Intel quad core, but you can't add more cores to a dual core. Unfortunately, AMD quads haven't shown much overclocking headroom.
Also, if you can't think of what to do with your extra cores you can always run the Fight AIDS At Home grid software from www.worldcommunitygrid.org. It makes use of multiple cores quite nicely for a good cause.
I often find developers, myself included, touting their threaded applications; that is, until someone sat down one day and showed me how hard it really is to write for multiple processors.
But it's not hard to write multithreaded applications. My Master's was in high performance computing, and once I got proficient, I never wrote codes that would deadlock or have other synchronization errors from running on multiple processors.
The problem is that most people just haven't taken a class on the subject.
What about us guys that use VM Workstation and have 4-5 VMs running all the time?
Yes, I use a VM for VS2008 that contains all my projects, third party apps, configuration, etc. I hate rebuilding this crap every time something hoses my host system and I have to reload everything.
I've spent months of my life (over the last 18+ years) re-installing DOS, Windows 3.1, Windows 3.51, NT 4.0, Windows 2000, Win95, XP, Windows 2003, Windows 2003 R2, Vista x86 and x64 approx. 8 times for home stuff already, and Windows 2008. Good grief!
I highly encourage everyone to stop loading your primary PC up with crapware and begin having dedicated VMs for your primary needs.
I have one for:
1) Outlook2007, NNTP Reader, and RSS
2) VS2005 + with third party apps and SQL2005
3) VS2008 + with third party apps and SQL2005, Expression Web
4) Visual Studio 6.0 with all third party apps and SQL2000
5) Test XP VM for loading any/all crapware applications for testing, etc
6) Ubuntu Workstation 7
7) Ubuntu Server 7
8) Adobe CS3 apps (Dreamweaver, Contribute, Photoshop, etc.)
With VMware Workstation 6.0 and its multi-monitor support, it's almost a no-brainer now to use a VM.
The new 6.5 release will support DirectX9, which means all my Expression Suite products will have their own VM as well.
I'd say quad core is a must for any serious PC user.
DavidM -- what about Windows Server 2008 x64 and the Hyper-V stuff? How does that compare with VMWare Workstation 6? I hear people talk about doing the same sort of thing you just described, except via 2008's Hyper-V thing.
To be honest, I have Windows 2008 loaded within VM Workstation, but have not attempted to yet again blow away my host machine and reinstall another virtualization product. Besides, it's not fully baked yet anyway...
I do not think I'm going to really gain any of the benefits that VM Workstation provides using any other product. Multi-monitor support and DirectX and the soon-to-be Unity (where I can have a VM application appear on my host desktop as a regular application) is something that none of the other virtualization products can touch.
Again, I'm using VM Workstation 6 for a home PC. If I was in the office and trying to have multiple desktops and whatnot, I'd likely use ESX or Windows 2008 Hyper-V with some sort of RDP connection.
Most developers doing web apps? Yeah right, and just how many applications on your PC are web based... that you can stand to use for more than one minute? Thank you.
Web developers are the majority of those chatting about web apps thus it may seem to be a larger category than it actually is.
You must understand that to a web app, the OS is the web browser: a very non-standard, loosey-goosey, gonna-change-for-the-next-10-years platform. Serious applications are programmed against a real operating system that has real, stable standards.
Thus the future of "desktop rich clients" will probably be served by Citrix-like servers, where each executable thinks it is inside its own OS. This way developers can develop as they always have, plus get the benefits of pushing software to the web browser.
I think it's been mentioned already, but compiling is usually I/O bound, not CPU bound.
On big projects that becomes more pronounced, as your hard disk groans under all those files.
oh, my gosh. Miss a couple of days and... Stand in line.
A couple of questions here. (They might seem trivial, but I would like to know.)
Q #1: Does compiling on a multi-core CPU produce programs that depend on a multi-core CPU? Or can it happen that way? As in, could a program developed in C++ and compiled in parallel across multiple cores hit runtime errors on a non-multi-core machine? Is that an issue? (In my little mind it could be an error factor.)
Q #2: On multi-threaded program compiles. I'm relatively sure that most processes use multi-threading in their operation. How does a person correctly compile a multi-threaded program across multi-core CPUs? And then, as in Q #1, could that cause problems on non-multi-core systems?
Anyone care to try to get me up to speed on this? (Please, and thanks as well.) -d
A P.S. as well... Would those times mentioned as benchmarks also depend on thread management? It seems that thread management, maybe even basic thread management, would somehow come into the overall process. No? -
"640K ought to be enough for anyone."
Developers -- or at least testers -- using single CPU computers are still important -- especially when you develop device drivers. Here's an example from my work why it is important:
My company makes PCI cards (for motion control), the firmware for the cards, the device drivers, and libraries for different OSes to access the cards. A few weeks ago, I upgraded the firmware on the card and my computer froze. It turned out that the card kept asserting the interrupt line. My computer's CPU spent the entire time trying to service the interrupt, not allowing anything else in the operating system to run. None of my co-workers experienced the problem. I knew that I had one of the older computers, and probably the only one with a single-core CPU. As soon as I suggested my co-workers look at their CPU usage before and after upgrading the firmware, they noticed one core being completely locked up. However, they had other cores available to run the rest of the operating system.
We are now trying to come up with a test to monitor CPU usage, but the scary thing is that if it weren't for my old single-core computer, we may well have shipped the software to customers only to discover the problem in the field.
The post sounds like it's trying to solve my issues before it knows what they are! Next will be a 32 vs. 64 bit diatribe... how no one 'needs' more memory than some silly number in the sky.
Badly written software rarely multithreads, and certainly not effectively. I think the point stands.
I have literally never seen a crashed, consume-all-cpu program take out more than one CPU. Have you?
If you wish to run anything that's more than a few processes or a few threads per process, the more cores the better.
I consider this pure wishful thinking. There are no benchmarks, outside of highly specialized tasks, that support what you're describing for typical -- even "developer" typical -- tasks.
Yes, dual core brings clear benefits for 80-90% of general purpose computing. Quad-core, more like 10% at best. You'll suffer from diminishing returns unless you live in those edge conditions.
It's true that eventually all CPUs will be quad and this will be a moot point. But why pay more for the same performance *today*?
"dual-core CPUs protect you from badly written software"
No they don't. It only takes two errant threads and you're done.
Ahem - bye bye goes the 'why pay more' argument then? http://www.custompc.co.uk/news/602457/intel-slashes-core-2-quad-prices-by-50.html
Jeff - it sounds like you're whining about making the wrong choice of CPU. Don't sweat it, it happens to us all. Next hardware cycle just try to step back a bit and look at the bigger picture. Don't be embarrassed with just 2 small cores.
Besides, we know you just write these 'John Dvorak' style posts to get the page views up - and who can blame you, you're in the 'content' business now.
I suggest a 'Why I think hi-speed internet is a waste of money for most people' article - that'll get them chomping onto your new ad-sponsored stackoverflow.com venture, eh? Click those ads, monkey boy!
An open source example is SharpDevelop. On a typical 2.4 GHz Celeron, their code editor control scrolls at about 2-3 fps, almost unusable. But this bug is marked as "wontfix worksforme", because the developer has one of the fastest desktops money can buy, so it scrolls at like 10-15 fps and the problem is barely noticeable.
I have a single-core AMD 64 3500+, hardly one of the fastest desktops nowadays. I've also tried SharpDevelop on my old computer (1.2 GHz); and scrolling was still fine.
And I did look at the drawing performance, and found that all the time is spent in the GDI drawing functions, not in our code. I have no idea why GDI is slower for some people, and why this happens only in SharpDevelop. I'm certainly no GDI expert, so it would be great if someone could review the drawing code to check if there's something wrong, but I don't know how to debug and test this without a machine where I can reproduce the problem.
I know this blog entry has you up to your knees or above in comments. ;-)
I'm hoping though that a link to my recent comment / request for your advice or thoughts, on your now nine-day-old blog entry re: upgrading your 18-month-old main home power PC, will get you to notice it, and hopefully want to respond!
Your PC building and overclocking posts, both on Scott Hanselman's system and your own upgrade, have I think nudged me over the edge towards rolling my own once again, though I haven't done that for a decade now (and am using a laptop as my main home system at the moment). Time for a Vista x64 / 8 GB RAM foray, I think. But I need your mobo and other advice. Hence the link!
How silly. I don't care how many processors a single app can use as long as my OS can distribute processes across processors. After all who on earth only runs a single application on their box these days?
I have a dual core 2.3 GHz processor in front of me and I regularly have to restrict the number of programs that I'm running because my machine will grind to a halt.
Eclipse + iTunes + MS Word + Excel + OmniGraffle + Chat + Mail + Calendar + MySQL + web server + a Linux build server + a Linux target server (both communicating with each other over encrypted channels) + XP running MySQL for compatibility testing is not an unusual combination. And if I had enough RAM and processing power I'd regularly be setting up another six or seven VMs so I could accurately simulate our production environment.
As a user, how snappily my system responds is more important than how quickly it does things. Even if *none* of those tasks knows how to use more than a single processor, a quad core is a big win for me, and I suspect that I'm not alone.
I am typing this on an 8-core MacPro. I am a Java developer. At any given time I will have Eclipse running (a CPU hog), I will be debugging one or more servlets, I will have at least one DBMS running, and I will have several web browser pages open (possibly several different browsers if I am trying to figure out why something works in IE vs. Firefox vs. Safari).
I could quite possibly be running one or more VMs in VMWare and maybe several other tools (profilers, debuggers and so on).
I have found that this really taxes a two CPU system and sometimes even taxes my 8 core system (I need to get more memory too).
Other benefits: I have seen multi-threaded apps work fine on a single-core system, then crash on a multi-core system due to bugs in the code. I can also assign certain processes to certain CPUs and keep other CPUs from being loaded down.
As for being able to test on a single core, as I mentioned, I can either assign one core to the one process, or I can easily disable one or more CPUs dynamically (CPU Palette, an app that comes with Leopard, lets me just click on a CPU icon to keep it from being used by any process).
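For what it's worth, the same single-core experiment can be scripted on Linux, where os.sched_setaffinity plays roughly the role CPU Palette does on the Mac. A sketch (Linux-only; the call doesn't exist on macOS or Windows):

```python
# Pin the current process to core 0 so multi-threaded code runs as if on
# a single-core machine, then restore the original affinity mask.
import os

original = os.sched_getaffinity(0)       # 0 means "this process"
try:
    os.sched_setaffinity(0, {0})         # run on core 0 only
    print(len(os.sched_getaffinity(0)))  # 1
finally:
    os.sched_setaffinity(0, original)    # back to all allowed cores
```

Running your test suite under a mask like this is a cheap way to catch the "works on my quad-core, hangs on the customer's single-core" class of bug.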
I was going to buy a MacBook Pro, but I am going to wait until they are available with more than 2 CPUs and 4 GB of memory (another reason I bought my MacPro - the ability to stuff 32 GB into it so I can run multiple VMWare sessions).
I remember when the 16 MHz Intel 386 came out. A magazine pundit stated that they would only ever be used in high-end workstations or "file servers", as they were more powerful than the typical user would ever need. Famous last words.
Dual core doesn't really protect you against badly written software. The most common problem is 100% memory use (and swapping), not 100% cpu use, and dual core doesn't do anything to protect against that.
Just to beat the horse a little more, I would like to respond to the now infamous "waste of electricity" comment. I just got a Dell Inspiron 530 with the following specs:
Q9300 quad core
8 GB PC4200 RAM
500 GB Hard drive
I use it as a host machine for multiple VM development servers (Visual Studio 2008, SQL 2005, Sharepoint 2007, Domain controller, etc).
After plugging in a Kill-a-Watt meter, I've found this PC uses LESS electricity than my old P4s, and even a one-year-old Athlon dual-core.
The Dell quad core measures on average 62 watts which is close to the IBM Thinkpad T61. Here's my Kill-a-watt readings:
Dell Inspiron quad core Q9300 - 62 watts
HP Athlon X2 dual core 3800+ - 84 watts
Following up - here are the rest of the results of my unofficial energy audit. These are the watt readings using a Kill-a-Watt reader while on and running multiple applications. All the machines are using on-board graphics, as I don't need fancy graphics while I'm coding.
Dell Inspiron quad core Q9300 - 62 watts
Asus P4P800 with P4 - 95 watts
HP Athlon X2 dual core 3800+ - 84 watts
Dell Dimension 4600 P4 - 70 watts
Lenovo T61 Laptop - 50 watts
Lenovo T61 Laptop w/ screen off - 40 watts
HP dv6525 Core 2 Duo Laptop - 40 watts
Samsung 22 inch monitor - 34 watts
As you can see, the Dell quad core uses less energy than most of my older PCs. The 45nm technology is very energy efficient. I'm not arguing that 4 cores is faster than 2 cores, just that 4 cores does not necessarily mean a waste of electricity. I think newer computer components are much more efficient in general, but it's also a factor of graphics cards, power supplies, etc.
I would be interested to see your energy results for the overclocked dual-core E8500 @ 4.0 GHz using the Kill-a-watt reader.
I've detailed the quad core build specs here: http://bluesurftech.com/TechBlog/Lists/Posts/Post.aspx?ID=34
Let's remember that Intel and AMD are pushing quad+ core processors NOT because the computing industry has realised it's the way to go. It's because they've failed in their quest to keep bumping up the clock speeds. In order to keep their market alive they've HAD to change tack and convince us that we all need multi-cores. It's the only thing they can deliver.
Whilst multiple cores clearly have some benefits in very specific circumstances, their adoption is NOT being driven by the software developer community, or the computing industry, like it should be. It is being driven by the chip manufacturers, who need to keep their market alive but find themselves unable to deliver what we actually want, which is ever higher serial throughput.
Intel and AMD are pushing a solution for which there is no clearly articulated problem (very specialised applications aside). Basically, they have failed, and we - the programming community - are having to jump through some entirely artificial hoops.
There is a company that offers compilers for C and C++ that compile code to run on multiple cores, at www.codeplay.com
If you're a C++ developer, you need a quad-core CPU yesterday.
Definitely true. I am a C++ native developer, using Visual Studio 2005 with IncrediBuild. For me the quad-core is much more productive than a dual-core: compilation speed scales almost linearly with the total GHz summed across all cores, so 4x2.4 GHz is much, much better than the 2x2.8 GHz I could have had instead.
As for power costs, the Quad Core 2.4 GHz I have now is using much less power (and noise) than the PIV 3.6 GHz I have used before, therefore the upgrade was good in this respect as well.
Basically, they have failed
I agree they have "failed", but I do not blame them for the failure. The fact that two competing companies have "failed" at the same time tells me there is probably some reason for it. Both AMD and Intel would like to increase scalar performance if they could (and to a certain extent they still do), but technology limits currently seem to prevent any major leaps, and no one has come up with any better-performing technology so far.
Has Intel finally implemented their version of DEC/AMD's HyperTransport bus?
No, but Intel now have independent FSBs for each physical processor.
If not, there's hardly any point going above a single Core2Duo CPU on an Intel platform.
I can understand this, but thanks to independent FSBs, that recommendation should now be interpreted as a maximum of two cores per physical processor.
Take a look at the LAME benchmark cited in this article if you don't believe me.
On that subject, the 1600 MHz FSB will certainly help, but there is no substitute for HyperTransport. Intel has its own version of HyperTransport, called QuickPath Interconnect, which should be available by late 2008. Google it for more info.
Wouldn't a quad-core still be able to better cope with CPU usage spikes? I'm talking about responsiveness, not throughput.
While Ruby might use threads, everyday Ruby applications usually come in the form of a Ruby-on-Rails web application -- which uses threads *on the web server*.
So multi-core is absolutely important for servers of any size and shape.
However, I do agree that there are web developers who might benefit from the multi-core on multi-VM environment -- those who stress test their web applications across different browsers across different browser versions (think about, one VM instance for each major release of IE, Firefox, Opera and Safari).
I write a lot of LabVIEW code, and in that environment it's trivial to make multithreaded apps. I do a lot of data collection and analysis using it. What multiple cores have let me do is more real-time analysis, rather than just collecting data and analyzing after the fact. It helps me know that the data I'm collecting is good as it's streaming in. This saves my company, and our customers, time and money.
The thing I'm finding is that computer power is making this moot as well; our PCs are just so insanely powerful that two cores can now do what four used to be able to do.
What most people aren't mentioning is that even if your code is beautifully threaded, and you're running multiple applications, the CPU is rarely going to be the bottleneck in everyday workloads.
The disk is still the main reason users need to wait for the computer, and in many cases RAM can help with that.
It's true. Most developers today are writing web applications instead of desktop applications, and in most cases it's totally unnecessary. Shame, that.
I'm surprised no one has mentioned Parallel FX and PLINQ (Parallel Language Integrated Query), which move the very difficult work of optimizing multi-core usage to the framework level.
Replacing foreach() with Parallel.For()
var q = from x in data.AsParallel() where p(x) orderby k(x) select f(x); //will optimize the data filtering and ordering for multicore
Hopefully people will see beyond the obvious theoretical question:
"But why can't my compiler do this automatically."
to the practical application:
"Compilers can't determine all parallelizable operations without some hinting, and hand writing parallelized operations is really hard without a solid framework."
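Outside .NET, the same "let the framework fan it out" shape shows up elsewhere. A rough Python analogue of the PLINQ query above, where p, k, and f are placeholder predicate, key, and projection functions just as in the comment's snippet (this is an illustration of the idea, not a PLINQ equivalent):

```python
# Filter with p, order by k, project with f -- with the projection fanned
# out across a process pool. The query stays declarative; the pool decides
# how to split the work across cores.
from concurrent.futures import ProcessPoolExecutor

def p(x):  # predicate: keep even values
    return x % 2 == 0

def k(x):  # ordering key: descending
    return -x

def f(x):  # projection: square
    return x * x

def parallel_query(data):
    kept = sorted((x for x in data if p(x)), key=k)
    with ProcessPoolExecutor() as pool:
        return list(pool.map(f, kept))

if __name__ == "__main__":
    print(parallel_query(range(10)))  # [64, 36, 16, 4, 0]
```

The hinting point stands here too: nothing in the code proves f is side-effect-free, so the programmer, not the compiler, is asserting that the parallel map is safe.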
As a desktop developer, I make serious use of my quad core processor. I often run two virtual machines on my box while developing and testing the client/server software I build. Having extra cores is not about performance with one application, but rather the whole picture of using your computer for multiple things at the same time. Even web developers have multiple applications and browsers open at the same time.
Our build process is not just compiling the source code, but packaging it inside an installer and running tests. A typical build takes 45 minutes on a single processor. I'm working on making it multi-threaded to speed it up.