August 8, 2006
Dual core CPUs were a desktop novelty in the first half of 2005. Now, with the introduction of the Mac Pro (see one unboxed), dual core is officially passé. Quad core-- at least in the form of two dual-core CPUs-- is where it's at for desktop systems.
And sometime early next year, the first true quad core CPUs will hit the market.
I think there are clear multitasking benefits in a dual-core configuration for typical computer users. All you need to do is run two applications at once, and who doesn't do that these days?
However, the benefits from moving to quad-core and beyond are less clear. Effectively utilizing 4 or 8 CPU cores requires extremely aggressive multithreading support within applications. How aggressive? Rewrite your entire application in a new language aggressive. That's a much more difficult problem. It's also not a common optimization, except within very specific application niches.
Dual CPU desktop systems weren't twice as fast as single CPU desktop systems. But they were a substantial, worthwhile speed bump. With quad CPU systems, we've hit the point of diminishing returns.
Current benchmark data definitely bears this out. I distilled results from these GamePC and TechReport reviews of the Opteron 275 (dual core, 2.2 GHz), which also included the Opteron 247 (single core, 2.2 GHz). Since the Opteron shares its core with the Athlon 64, it's an apples-to-apples comparison between dual and quad configurations running at the same speed-- 2.2 GHz.
3D Studio Max 7.0 Radiosity Render
Cinebench 2003 Rendering
Alias Maya 6.0 Zoo Render
Photoshop CS Filter Benchmark
Flash MX 2004 MPEG import
Windows Media Encoder 9.0 MPEG to WMV
LAME 3.97 WAV to MP3
Apache 2.0 10k user stress test
Apache 2.0 50k user stress test
Half-Life 2: Airboat chase
Doom 3: Site 3 timedemo
I eliminated most of the synthetic benchmarks; I tried to focus on real desktop applications that people actually use. The Sysmark 2004 results are particularly telling.
However, the results I did find are so poor that I wonder if any quad CPU system is good for much more than bragging rights. Of the desktop apps, only three truly benefit from a quad CPU configuration: 3D Studio Max, POV-Ray, and Cinebench 2003. Notice a pattern? Rendering and encoding tend to parallelize well.
Unless you're often running a specific application that is optimized for multithreading, there's no compelling reason to run out and buy a quad-CPU desktop system today. And I don't see that advice changing over the next few years. At least, not until the state of software development changes quite radically to embrace multithreading across the board.
Posted by Jeff Atwood
doesn't Windows use other CPUs to run different applications?
I mean, you wouldn't get much benefit from a single application (unless the application takes advantage of it), but from the whole machine, running different applications at the same time
Hmmm...one of the things that having multiple CPUs really helps with for me, as a developer on linux, is speeding up compiles. "make -j [n]", where [n] is the number of sources you want make to compile at a time is a real help. "scons", which I actually use more often, also has a "-j [n]" option that does the same thing.
In fact, this often helps on single CPU systems as "-j 2" can cause slight speedups as one process can be actually compiling while the other is waiting on disk IO.
Just wondering - can Visual Studio do parallel compiles on multi-CPU systems? Having a 4-CPU system with the VS equivalent of "scons -j 8" running (although it would probably be multi-threaded instead of multi-process due to NT's poor process creation speed, the synchronisation problems wouldn't be that different) would be great for those big compiles.
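The win described above-- one compile doing CPU work while another waits on disk-- is why `-j 2` can help even on a single CPU. A toy sketch in Python (the file names and the sleep are hypothetical stand-ins for real compile jobs, not anything `make` actually does):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compile_unit(name):
    """Stand-in for compiling one source file: a simulated
    disk-I/O wait, then the 'object file' it would produce."""
    time.sleep(0.05)           # simulated disk read
    return name.rsplit(".", 1)[0] + ".o"

def build(sources, jobs=2):
    """Run up to `jobs` compiles at once, like `make -j jobs`.
    While one task sleeps on (simulated) I/O, another can run,
    so jobs=2 helps even with a single core."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        # map preserves input order, so object files come back
        # in the same order as the sources
        return list(pool.map(compile_unit, sources))

objects = build(["main.c", "util.c", "net.c", "ui.c"], jobs=2)
```

The same structure scales naturally as cores are added: just raise `jobs`.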
For performance increases in individual applications you will need advances in thread management. For multiple applications you should see a much bigger benefit sooner.
An example of where I think this could shine is a web server running Apache with pre-forked processes.
Time to start delving more into functional languages.
My thoughts exactly: as the number of processing units increase, we might have to solve more problems using functional languages to benefit from parallelism.
Adam: There are several powerful build solutions for VS that even go beyond single workstation builds, for instance IncrediBuild (http://www.xoreax.com/).
There is of course the issue of how much of a bottleneck the CPU is in the first place.
Disk I/O, RAM I/O, and various latencies are more and more the real bottleneck. Especially for desktop systems, where we often have 1 process running and 30 others sleeping, without much chance of parallelizing work.
And in come the natural inefficiencies of multiple CPUs: the need for locking the bus, maintaining cache coherency, affinity, TLB misses and flushes, and so on.
For example, Apache load tests can be dominated by network I/O. And throwing more CPUs at the problem is not really the way to go. Which is why the linux networking people are looking for example at Jacobson's net channels.
It's not _necessarily_ as hard as you make it out to be. Look at the results of a single "#pragma omp parallel for" on an application: http://www.knowing.net/PermaLink,guid,59d8fdd5-54fb-4ddf-8858-c784ac6209d6.aspx
Much (most?) media processing will involve at least some hotspots without loop-carried dependencies that can be sped up similarly. With dual cores, the benefit of doing some multithreading is perhaps 1.7x-- a debatable gain. With quad cores, it may reach 3x, and with 8 cores perhaps 5-6x. I can't see any processor-intensive niche (media, gaming, database) ignoring that.
"I mean, you wouldn't get much benefit from a single application (unless the application takes advantage ot it), but from the whole machine, running different applications at the same time"
I think Eber has hit the nail on the head here.
I agree that there is a point where the returns on optimising an application for more cores won't be worth the effort expended. But then you can look at different ways of working.
Let's have a hypothetical situation where you're a video editor-- you've just finished editing one part of a show. So you click encode, and away go two of your CPUs/cores.
On a dual core system you can't do much during the encode without affecting performance.
On a quad core system, you could potentially pull up another project and finish some editing on that.
With a rewrite of the video editing software, it might be possible to have it encode multiple parts of the show at once - eg: each core gets 1/4 of the show to render.
For software developers, quad core might not provide any immediate benefits on "regular" applications.
You could however implement continuous unit testing - perhaps automated stress testing on your desktop.
Heck, for those developing software that works with database intensive stuff, you can do it all on your dev machine, rather than waiting for a shared resource (eg: the dev server) to become available.
Instead of just looking at how one application can benefit from many cores - start looking at how running many applications can provide a benefit.
A regular end user probably won't see much improvement at first (except that their spyw*re runs faster), even for games. But then again-- I don't think these are being marketed for regular end users.
Spot the programs that only use two threads.. :)
As long as you don't lose CPU bandwidth to bus contention, and the operating system sets thread affinity randomly or even intelligently, more cores should result in a better user experience when you are multitasking applications.
What would be nice is if you added some tests where you ran some of those applications simultaneously on dual and quad core systems.
So I have not seen any price breakdowns of the quad cores yet, but you know for sure it will be a selling point. The initial first use will be simultaneous apps - but I do agree with the apps being disk-bound, bus contention, etc. I do love the dual cores - my favorite part is being able to kill a process that has pegged one of the CPUs.
Couldn't agree more, quad cores are going to be a lot tougher to keep busy. I bet the folding/seti/mersenne/whatever guys are rejoicing though.
I wonder if quad cores will bring more benefits for virtualized operating systems. I would like to say yes, but I don't know for sure.
If you use a lot of virtual machines, you win (developers take note); if you regularly run lots of apps, you win again. For me, I suspect that my main time-critical tasks would speed up with more cores because the apps I use are usually multithreaded. When I process digital photos with www.bibblelabs.com, it runs a queue in the background as edits to each photo are committed, and I not uncommonly get 30 or more entries in the queue. More cores, shorter queues :)
RAM does become more of an issue, and disk speed too - but I'm willing to boost those to get faster throughput (I already have, RAID5 + mirrored boot disk)
What I do want is multithreaded input - I'd love it if Windows could give me one cursor per input device so that my mouse and tablet could be used independently. Ideally with some way to grab the keyboard with either.
Will and James are right. Sure, single apps won't find much of a boost, but multiple apps will. With quad cores we could switch to being our own continuous integration server, continuously running tests in the background. There is already a framework for this in Ruby. My desktop basically is a server. It actually does more than a server. I listen to music, while writing code, against a web server on my machine that hits a database also on my machine. I hope things would be speedier with 4 CPU's.
I am eager to see benchmarks along the lines of what James suggests. Benchmark a simulation of 50 users hitting an Apache server that hits a PHP application which pokes a MySQL database while playing music. At what point does it become a test of how well the OS handles multiple CPU's?
Or even just run all the benchmarks at the same time.
The C++ compiler in VS2005 has some pragmas to automagically use multiple CPUs in loops, and some of my coworkers have gotten fairly hefty speedups using them, but it's pretty specialized stuff. It's also apparently fairly .NET hostile, and can't be used anywhere near managed code.
It would be nice to run a benchmark of a game that was multiprocessor aware. IIRC, early versions of Quake 3 took advantage of multiple CPUs, but that feature got broken with an early update. It would be interesting to run the original game on a quad core system and see the results.
Right now, I have the following apps all running:
TextMate (a text editor)
NetNewsWire (a news reader)
Adium (a chat client)
Transmit (an FTP client)
Parallels (virtual machine)
Finder (file explorer)
In addition, I have the following servers up and running:
Lighttpd (web server)
Now, most of the time, they are idling. But, there are periods when I have TextMate open on one screen, and Parallels on another. Inside the Parallels VM I am running IE, which is calling Lighttpd, which is calling PHP, which is calling MySQL.
At the same time, I am often pushing the last milestone out to the server via ssh or Transmit. And, there's usually iTunes playing in the background.
1 core for Parallels
1 core for the web server, php and mysql (since they are serving one client and can be scheduled synchronously)
1 core for iTunes and Transmit/ssh
1 core for Mail (checking every 30 minutes), the OS services, and the idling apps
Running all that on a single core machine is painful. So painful that even with enough RAM, I would still break it up into discrete groups of apps and only run one group at a time. On a dual core system, it runs along nicely, but does get slow as the level of concurrency goes up. A quad core system would scale even higher.
It's not single application performance that counts, it's the ability to scale when you need it.
On a dual core system, it runs along nicely, but does get slow as the level of concurrency goes up. A quad core system would scale even higher.
Unlikely, because moving to dual core satisfies 90% of the CPU and scheduling bottlenecks you saw on a single core.
You'll get no argument from me on the superiority of dual core for everyone. It's a huge improvement! But throwing 2 more cores in there is a negligible perf gain except in highly specialized scenarios.
My desktop basically is a server. It actually does more than a server.
Really? 100+ users are hitting your desktop simultaneously? C'mon. This is a fantasy. 64-bit and quad-core are effective on a server because the load scenario is extreme. A single user, no matter how 1337 he or she may be, won't even come close to the kind of load you see on a server.
First of all, I find myself greatly appreciating hyperthreading (my box predates the dual-core processors) as it allows a processor-intensive task to run without turning the rest of the system into a dog. There's more room to go in that direction, though--I would definitely appreciate a quad core chip.
Second, as such chips become more common I expect we will see more programs written to take advantage of it.
As for games, yes, they are generally video bound but that doesn't mean there isn't room to improve things with multi-threading. As such chips become common enough I expect we will see additional threads used to run AI's. There's always room for smarter enemies and if you can move the enemy logic onto a separate processor you won't bog the system if the enemy hits something that takes too much thinking about.
The Apache results indicate that something else (probably network resource contention) is a limiting factor.
The Quad CPU would appear to be slower (or Jeff has the numbers the wrong way round) than the dual CPU config - which is typically what happens when there are more processes fighting over a shared resource - more locks to manage.
I normally have 5 or 6 apps and 2 virtual machines running at all times. Often, several of the programs are CPU intensive.
Each app is ALSO at least one thread, so it is not hard to see that you can easily grow into 4 or 8 processors just by DOING MORE instead of having apps written differently.
It isn't about running all the processors at 95% all the time, its about being able to GET a few of the CPUs saturated and still having plenty of guts left in your machine to do yet more.
I think the real winner is the user experience, not how fast a single app runs. Even with dual core, how often has your system dogged down on you? Mine does this often. I would definitely pay to have a system that was nearly invulnerable to this.
That being said, I work on developing an application that is heavily multi-threaded (too much in some cases) so the more cores the merrier.
No one needs more than 640K either, right? The multicore desktops will enable new, mostly as of yet unimagined applications that are so CPU aggressive, you need to run them on their own core lest they interfere with desktop apps like office or a browser. A good example would be running a dedicated Halo server on the extra core, which is what I plan to do on my new dual core.
Yeah, but having multiple processors in a system like the XBox 360 allows features like interacting with the dashboard and performing background downloads while you are playing a game.
Multiple cores or processors could also possibly help out with online games where maybe 1 of the cores could handle networking while the other handled game processing? No? It might be a pain to get it all synchronized, but I dunno. I don't write games!
Adam has the right idea. Xcode lets you distribute compilation (with GCC) across all the CPUs on your network-- not just servers, but everyone's workstations. Last year's Power Macs already had four cores, so it's already proved its worth.
"1 core for iTunes and Transmit/ssh"
Seriously -- an entire core for an SSH session and playing MP3s?
Even my "old" AthlonXP 1800 didn't have problems with this kind of thing.
Whack in some visual effects and you might be lucky to hit 15% utilisation - of an AthlonXP 1800. A single core from a Core 2 Duo is what - 2-3x faster?
Here's what I'd be thinking of with 4 cores:
1 Core: OS, AV, other non-major stuff (mail, im, downloader apps, media playing apps)
1 Core: Your chosen IDE
1 Core: Your app in debug mode / server running your app / etc.
1 Core: Database being used by your app, continuous integration, unit tests, etc. Browser session (many windows, not processor intensive)
Sure, at the moment, I could probably get away with two-- it'd be nicer with three, and four is just luxury. Then again, Vista's new UI could probably help use up one of those cores.
Speaking of Vista-- the things you'll be able to do with Windows Presentation Foundation will make it much easier to present information in 3D. I know I have difficulty presenting complex data in 2D; even the pseudo-3D offered by the "3D" charting in Excel doesn't help much.
Letting managers scroll around a 3D representation of the data could be very useful in some fields.
So, yes, 4 cores aren't necessary right now on the desktop-- but like someone else said, making this available will let people come up with a whole bunch of ideas.
Like how things changed (somewhat for the better) when people realised that they could exchange data with the server, without involving a page reload or any special plugins (Yes, AJAX).
At first it was people doing authentication without a page reload, or pulling down little bits of information.
A few months later and suddenly, someone realises they can combine this with a bunch of other technology, and.. Hey-presto! we have a fully functional spreadsheet or email client in this relatively "dumb" browser.
The real bottleneck or diminishing returns to multiple CPU cores is memory bandwidth and contention for other resources, not a lack of things to run on them.
Clearly people have already hit on the idea that having several applications going simultaneously has benefits. This benefit will increase as more applications are designed to operate all the time, not just in response to user input. The programs will be doing work or searches on your behalf in a predictive manner, and the additional CPUs will allow this to be done in a less intrusive manner.
As just an example, think of MS Word. It already does this to a large extent, with threads doing background spell checking, grammar checking, repagination after local formatting changes, print formatting, etc. Those are all threaded up already; it just happens that a single core can handle all those operations pretty effectively.
As you make more horsepower available, those parallel operations can be much more CPU intensive and still not get in the way of the primary interface threads.
I suggest playing with Vista and Office 2007 Betas for a little while and I think you will see some ideas of what all the processors might be doing pretty soon.
Again, what I worry about is bottlenecks of memory, disk and network bandwidth choking the cores off - not the applications running on them.
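The Word-style background checking described above is essentially a worker thread pulling jobs off a queue so the interactive thread never blocks. A minimal sketch-- the tiny dictionary and the sample sentence are made up purely for illustration:

```python
import queue
import threading

DICTIONARY = {"the", "cat", "sat", "on", "mat"}  # toy word list

def spellcheck_worker(jobs, results):
    # Runs on its own thread (and, on a multi-core box, potentially
    # its own core), so the UI thread never waits on checking.
    while True:
        text = jobs.get()
        if text is None:          # sentinel: shut down cleanly
            break
        bad = [w for w in text.split() if w not in DICTIONARY]
        results.put((text, bad))

jobs, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=spellcheck_worker, args=(jobs, results))
worker.start()

jobs.put("the cat sat on teh mat")   # user keeps typing meanwhile
jobs.put(None)                       # tell the worker we're done
worker.join()
text, misspelled = results.get()
```

With more cores, the same pattern lets the background work get heavier (grammar, repagination, indexing) without ever touching the foreground thread.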
But throwing 2 more cores in there is a negligible perf gain except in highly specialized scenarios.
Very true. I wasn't attempting to suggest everyone is hitting a webserver running on their system with a web browser running on a virtual machine on the same system, while tracing the web application's execution and watching the latest episode of Hell's Kitchen in VLC. :)
Quad core means that when I plug in an external drive, I don't notice Spotlight updating its index for the drive. It means that I can start a 40 minute build without causing an H.264 video to skip a single frame. It means that I can devote an entire core to filtering out ambient noise in speech recognition.
The scenarios are uncommon now, but I'd expect them to be approaching commonplace by the end of these new quad core machines' lifecycle. And, of course, to be completely overshadowed by things I haven't thought of. So, while I'd agree that these machines aren't necessary right now, I expect that situation to have noticeably changed within the next 3 years.
I wonder how much of it is that, until now, there was no point in using a lot of threads beyond one for user interaction and one for crunching. Why would anyone bother?
Perhaps now that they've been built, the applications will come...
Will: In regards to video editing and encoding data streams, I mentioned something to Rick Brewster of Paint.NET when they first introduced their multi-proc save routine. It would split the data file into chunks, but if, say, chunk 5/8 was corrupt, the entire file was. That'll be the most likely problem everyone faces when they start to break up their save routines, but it's not particularly difficult to fix.
Assigning applications to cores would solve a lot of concurrency issues but hopefully the designers of operating systems don't allow this to be programmatically altered by the app itself. Why? Your apps will fight for a core all to themselves and this is something the user should be managing anyway. How would you even optimize such a function? If CPU = 100% on core 1, pause, shift higher % down one core, release, rinse, repeat? That would be nice and I believe OSs will eventually need the overhead to detect when a CPU is bogged down and have the ability to auto-correct it. As long as applications can't control their own fate this should be pretty useful.
I agree with both Jeff and Larry though. There will be gains if done correctly but there's a potential to really screw the pooch or just not see any difference when the time actually comes. Developers really can't wait until this becomes a serious issue before tackling it.
I remember a while back, id was talking about a special version of Quake 2, or one of the Quakes..
He was running it on a dual-CPU W2K machine. IIRC, he was pretty impressed with the performance. So maybe PC game developers can start exploiting this new technology. Or maybe they have already and I'm just 5 years behind?
I think it's difficult to make use of more than two apps at a time in a typical desktop situation, so what will determine the usefulness of more than two cores is the ability of single apps to use more than one core.
If one or both of the apps is multi-threaded, then the third and fourth core would be useful. I think the chart hints at that. Those apps that are serious about using many threads get very nice improvements going from two to four cores.
One reason why some classes of apps, like 3D games, may be slow to use multiple cores is because it takes years to develop the 3D engine. The physics and sound processing may be licensed and all of that may need to be upgraded to support multi-core. Some non-3D games already use multiple cores - Football Manager (used to be Championship Manager pre CM5) for example, but then some folks will tell you that's more like a number-crunching spreadsheet than a game.
Actually, I have seen some cases where my MBP bogs down, and another CPU would help. Not the common case when all I am doing is running some tunes, but that common case is not really going to tax a single cpu on a four year old computer.
The quad would help me in the same cases where the dual really seems to be a good thing - IDE at full bore, unit tests and integration running, tunes in the background, perhaps the odd streaming video running muted, and lots of system stuff doing things. At this point, a third CPU would satisfy things, but I would bet you that once those became popular, then a fourth would find things to do.
Obviously, I could cut out all but the core dev process, but it is convenient that I am really not impacted by other things I may happen to have simmering away on a back burner. It also means that I am less concerned about what other processes are up to.
So, is a quad useful for most users? Not yet, but that is partially because they are not yet used to a dual core. Give the typical user a year to find things to do, and the extra cpus will be used.
I remember a while back, id was talking about a special version of Quake 2, or one of the Quakes
It's "r_smp" in Quake:
A handful of games are multithreaded, but the gains, even in the best possible scenario, are not that significant.
All modern games are far more video card limited than CPU limited. That's why most review sites, when testing CPUs, will run the games at absurdly low resolutions like 640x480 or 800x600 with no anti-aliasing. Which, in my opinion, entirely defeats the point. (Note that the game performance numbers I cited in this post are run at typical gaming resolutions, eg, 1024x768 or higher.)
Upgrading from the slowest Pentium D to the fastest Core Duo (which is ~30% faster) is a giant waste of money for a gamer. Spend your money on a better video card instead!
--Sorry to turn this into a discussion board, but --
Mike Swaim said "The C++ compiler in VS2005 has some pragmas ... apparantly fairly .net hostile, too, and can't be done anywhere near managed code."
This is not correct. These are the OpenMP pragmas and, today, they work fine in mixed-mode C++/CLI (/clr) and, they've announced that in the future they will work with /clr:pure and /clr:safe.
As an example of how little CPU matters to modern games, see here:
Note the individual framerate scores for each 3DMark06 game (Return to Proxycon, Firefly Forest, etc) are virtually identical, from the slowest and moldiest Pentium D up to the latest and fastest Core 2 Duo Extreme.
Ars Technica review of the Mac Pro:
"Going from a Dual 2.5GHz Power Macintosh G5 to a quad-core 2.66GHz Xeon doesn't feel that much different under normal usage. Both machines feel about as equally responsive, although the Mac Pro appears to boot up much quicker. Really, under everyday load, this machine doesn't feel that different than the other Macs I use on a day-to-day basis (the aforementioned G5 and iMac) although it's far more responsive than my PowerBook G4."
With the latest release of Xoreax Incredibuild you'll get all CPU's (or cores) being utilized for Visual Studio builds.
Many games that are currently in production (including the one I'm working on) rely heavily on multi-threading for performance gains and do see large improvements on multi-core systems. For example, you run graphics on one thread, simulation on a second, pathfinding on a third and ai on a fourth. You'll be seeing more games like this as all of the current gen game consoles are multi-core, so the threading model will carry over to the desktop ports.
Who benefits from quad core CPUs? Maybe not lay users, but definitely: Developers, Developers, Developers! As previously mentioned, compiling can speed up greatly. I also happen to run MySQL, Apache, JBoss, VMware, and a compiler all on the same desktop while developing web services and applications. I definitely benefit from my quad processor box. It's getting to be a few years old though, so I'll have to upgrade soon.
Virtual machines currently use only one core per VM. If you're running Linux and Windows both, you've already used two cores. So running Mac OS X and two VMs would benefit from at least 3 cores.
Here's my point of view: Software upgrades are easier for the average user than hardware upgrades (lots of effort/money required to reinstall your apps on a new PC). Taking advantage of multi processor architectures requires only a software upgrade, which (with Internet upgrades) is easier than hardware upgrades. Over the next 3-5 years (average lifetime of a PC) there should be lots of software upgrades that take advantage of multiple processors (at least up to 4 CPUs), so why not upgrade to quad CPUs?
A bit depressing when the Activity Monitor registers 796% CPU usage on the 8 core versus 395% usage on the 4 core, yet the total time to complete the task is only 3% faster in the case of Photoshop CS3 and 7% faster in the case of Aperture.
I like my dual processor systems-- well, I own 2 of them. They're both old but have never let me down, and they're better than my laptop, which has a Pentium IV 2.56GHz Northwood (1 core).
Both my desktop computers are dual systems:
a Dual Pentium II 333MHz and a Dual Pentium III 1GHz. They're on almost all the time and have never let me down so far; they both run XP Pro.
Windows Media Encoder 9.0 MPEG to WMV 125 119 1.1 x
Xmpeg/DivX encoding 71 75 1.1 x
LAME 3.97 WAV to MP3 69 67 none
How about encoding 4 movies or songs at the same time?
How about running unit tests while programming something else?
I am a developer and I run Apache 2.2, PHP 5, MySQL 5, ZoneAlarm, AVG, Firefox and sometimes Winamp on a Win XP SP2 P-III ~700MHz 192MB RAM 19GB slow-ass HD Intel onboard a/v. This really sucks all of the time, let alone if I get a hit, and I'd love one of those new-fangled doohickeys they keep coming out with. I've gotten really good at disabling Windows services though which is really good for security as well :)
HAHAHA! I can't wait for people to start throwing out their dual-cores after christmas!!
What we have to remember here is that this technology is not a new idea; all the ideas and concepts of a multi-processor environment existed for decades before somebody decided it was cost effective to put it all onto a single chip.
Diminishing returns are something you'd expect in this situation: doubling the number of processors doesn't double the performance. When you start considering how these separate processors manage the shared resources of the computer, such as memory and HDD access, you start to understand why the returns aren't so great. Coupled with instruction "execution order" dependencies within a single application, it's not particularly easy to automatically divide an application into logical parts that run well concurrently.
I've a quad core at work. It's very handy considering I have Vista and VMware Server running all the time on it. I have to say that Vista still dogs down when using VMware Server, but that's mostly due to pathetic I/O.
A year on since this was originally posted and nothing has really changed. The truth is that few applications can use more than 2 cores, most are still using only one.
What worries me most is that we could be looking at the maximum single threaded performance we'll see for a LONG time. Intel/AMD/everyone is going multiple cores and ignoring single threaded performance, yet most applications do not lend themselves to using them. Intel is also saying we should expect 128 cores in the future.
Saying that we can run a few applications to utilise the cores is true only for very small numbers of cores. I doubt that 128 cores could be utilized in the same way. Indeed, do we honestly want any application to be limited to 1/128th of the CPU's power? That is stagnation in PC performance-- the end of scaling, with current Windows apps being about the best they can be; only new features or a fundamental change in the Windows OS design would allow more utilisation.
The final fact that scares me is that if an algorithm has just 10% that can't be run in parallel, then the maximum speedup you can ever have, regardless of the number of cores, is 10x. It doesn't matter how many cores you throw at it; past that point, all the benefits come from a better algorithm. Alas, better algorithms tend to come along less frequently than new, faster CPUs.
If we don't work out how to get single threaded performance moving again, programming languages will have to change and developers will be rebuilding their entire world to use multiple cores. Reliability is thus going to dive.
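The 10x ceiling mentioned above is Amdahl's law: with serial fraction s and parallel fraction p across n cores, speedup = 1 / (s + p/n), which can never exceed 1/s. A quick sketch makes the diminishing returns concrete:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup is capped by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 10% of the work stuck serial, even absurd core counts
# top out just under the 1/0.1 = 10x ceiling:
curve = {n: round(amdahl_speedup(0.9, n), 2) for n in (2, 4, 8, 1024)}
# → {2: 1.82, 4: 3.08, 8: 4.71, 1024: 9.91}
```

Note how going from 2 to 4 cores still pays reasonably, while going from 8 to 1024 barely doubles the gain-- exactly the diminishing-returns shape the benchmarks in the post show.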
For me it's all about 3D rendering. If I can cut the rendering time of a long complex animation from 4 days to 2.5 days, it's worth it. 3D Studio Max scales quite well with additional cores.
I have had a quad core for a year now... I am impressed. Rendering and video editing are very good. The only problem, as mentioned in several places, is the speed of the hard drives and memory, as well as graphics cards. I do notice an increase in loading times compared to my old P4. The coding of programs must change to utilize more CPU power, spread evenly. But loading times, and games loading, are still limited by hard drive speed. Still very happy.
I'm surprised at the Apache result-- I would have thought that Apache had sufficient concurrency built in (via forked processes or threads) to utilise loads of cores... or is it optimised as a single-threaded process (surely not?).
Anyway - can anyone explain why my surprise is unwarranted?
It may seem excessive, but there are specific applications to consider which are actually quite common with Macs and Mac users.
One of these is music/media production, which I do on PC, but make no mistake, if I could afford one of those high-end Macs I'd get one just for that purpose. In this type of production you have a lot of virtual instruments or effects running, all of which are *very* CPU intensive, but none of which actually depend on each other in any way (unless they're chained together, but that can still be multithreaded by introducing a delay).
In that particular application, you can never have enough CPU power. Every producer is used to this and has to use kludgey workarounds to minimize CPU usage. Dual-core CPUs in this case are, practically, doubling the concurrency limit, and a quad-core CPU will practically quadruple it.
There may be other examples too. I think the Mac Pro (and even the high-end G5) is really directed toward niche markets like producers who really are always starving for CPU time specifically because of massive-scale threading.
It's not a machine for programmers, gamers, internet surfers, or casual users. For those people, it's definitely excessive. Having a dual-core CPU has made a big difference for me at work but I can't see a quad-core making much of an additional difference.
I agree with what Stuart and Christian said about Apache - that's certainly designed to take advantage of multiprocessing, so the results obviously show it hitting disk- or network-IO limits.
Which demonstrates another reason for quad-core being of limited value on a desktop-- having all that CPU power doesn't mean a thing if the rest of the resources can't keep up. Even for developers, a quad box won't speed up compiles much if it spends all its time doing disk access.
Check out Microsoft's Channel 9-- they clearly say that the lambda expressions in .NET 3.0 are meant to handle multicore. They are making functional programming extensions for C# and VB.NET.
Then you have software that is faster on quad core, and on 8, 16, 24, 32 cores as well.
Personally, I wouldn't build an 8 core except for servers.
The next "must have" is ray tracing for video games. That comes from making the 24 core 1 TFlop chip that was shown at the 2006 IDF.
Build that with dual core (+24 floating sub cores) and you have the next major leap in computers. Quad + 24 subcores would be the high end.
Of course Intel can just push all of the floating stuff off to these subcores and make four smaller ALU's very easy. So they might as well build the quad..
Right now I am very happy with my E6400! When the Q6600 comes down under $300, I will upgrade!
Virtualization will continue to drive the number-of-cores arms race for the foreseeable future. And that's a good thing.
I love my quad core with 8 gigs of RAM-- that's all you need.
Last time I was working on an old single core, my god.....never again.
It is incredible the amount of work you can do with a quad core PC or Mac; you don't need to close a single program, just run everything at once.
Thanks for the post!