May 18, 2007
In C# and the Compilation Tax, several commenters noted that they have "fast dual-core computers", and yet background compilation performance was unsatisfactory for them on large projects. It's entirely possible that this is Visual Studio's fault. However, I'd like to point out that not all dual core computers are created equal. Not by a long shot.
Take a look at this Visual C++ compilation benchmark. Details of the benchmark methodology are available on this page, but for now let's assume this is typical compilation performance in a typical IDE. The baseline score of 100 represents a 2.6 GHz Pentium D 805 CPU.
Clearly the multiple core future has already arrived-- every CPU you see here is a dual-core model, including the Pentium Ds, which are dual-core derivatives of the Pentium 4.
The CPU at the bottom of the benchmark results isn't just any garden variety Pentium 4, though. It's the Pentium Extreme Edition 965, the absolute pinnacle of the Pentium 4 CPU family: a 3.73 GHz dual-core CPU with HyperThreading that originally retailed for almost a thousand dollars. The fastest possible Pentium 4 is nearly 50 percent slower at compilation than a midrange Athlon 64 X2 or Core 2 Duo CPU. But wait! It gets worse!
Consider WorldBench - Mozilla 1.4 results. The times shown are in seconds; lower scores are better.
Bringing up the rear, by a large margin, are two members of the Pentium 4 CPU family. The 3.6 GHz Pentium D 960 takes almost twice as long as the 2.66 GHz Core 2 Duo E6700 in Mozilla.
Perhaps this is why Tech Report called the Pentium 4 "[a] CPU based on a lame-duck microarchitecture."
If you're running a Pentium 4 CPU-- even a "fast" 3.4 GHz+ dual-core model-- you could more than double your performance by upgrading to a middle-of-the-road Core 2 Duo CPU. And I'm not talking about meaningless synthetic performance benchmark numbers; I'm talking about performance in real world apps that software developers use every day, meat and potatoes stuff like web browsers and compilers.
If you're using a Pentium 4 CPU of any kind, consider upgrading at the earliest possible opportunity. Given how much software developers are paid, it makes no economic sense to hobble them with old, slow PCs based on the underperforming Pentium 4 CPU. Demand your rights. You can pick up a midrange Core 2 Duo system, sans monitor, for under a thousand dollars. Isn't the value of your time worth at least that?
Posted by Jeff Atwood
Even better, if your office has more than one computer, demand IncrediBuild and get ALL the CPUs working for you.
I think increasing the amount of RAM you have can be as good if not better than increasing your CPU power. If you have the latest Core 2 Duo processor but only 512MB of RAM, you're cutting yourself a bit short.
Upgrading my laptop from 512MB to 1.5GB of RAM provided a more noticeable responsiveness and speed improvement than did switching from a Pentium M to a Core Duo.
Um, for one thing they used Visual C++ 6.0. Which has no parallel compilation to speak of.
(That benchmark study would have been *much* more useful if they also measured CPU utilization.)
Also, if your time is this valuable, it probably makes sense to spend a little time figuring out where your bottlenecks really are. Sure, computers are "cheap" these days compared to developer time, but make sure you're concentrating your upgrade money in the right areas.
CPU performance is one thing... but also measure your CPU utilization over time, as well as your memory utilization, cache miss rate, IO per second, disk utilization, and so on.
Depending on your application, you may realize larger performance gains by upgrading your disk(s) and memory, or your disk controller, than by just upgrading your CPU.
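As a rough first cut at finding that bottleneck, compare a build's wall-clock time against the CPU time it actually consumed: if they're close, you're CPU-bound; a big gap means you're waiting on disk or other IO. Here's a minimal Python sketch of the idea (Unix-like systems only; the command you pass is a stand-in for your real build invocation):

```python
import os
import subprocess
import time

def profile_command(cmd):
    """Run a command; return (wall_seconds, child_cpu_seconds).

    If CPU time is close to wall time (times the core count, for
    parallel builds), the task is CPU-bound and a faster processor
    will help. A large gap means it spent most of its life waiting
    on disk or other IO, so upgrade those instead.
    """
    start_wall = time.perf_counter()
    before = os.times()
    subprocess.run(cmd, check=True)
    after = os.times()
    wall = time.perf_counter() - start_wall
    # children_user/children_system cover the spawned process tree
    cpu = ((after.children_user - before.children_user)
           + (after.children_system - before.children_system))
    return wall, cpu
```

It's crude compared to a real profiler, but it answers the "should I buy a faster CPU or a faster disk?" question in one run.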
"Isn't the value of your time worth at least that?"
Apparently not according to my boss... :(
And 2GB of DDR2-667 memory is less than $80 nowadays. There's no point in having only 1GB anymore.
It matters also that Pentium 4s with HyperThreading are not really multi-core or multi-processor chips in any real fashion. Whereas an X2 or Core 2 actually has two physically separate processor cores on the one chip, a HyperThreaded Pentium 4 pretends to have an extra core by running two threads on a single core's execution resources. In practice it makes stuff faster, but the second 'core' is quite slow compared to the first in most real uses.
"Now if only WD would bring out Raptors in capacities greater than 160GB."
Speed and size are always trade-offs. The super speed is probably the entire reason why they don't have a bigger version.
But really, 160 GB is perfectly fine for an applications drive. Just keep all your media on a second drive.
Time to ditch my Pentium D 820 2.8 GHz then... although my computer never felt slow when running VS2005 with middle-size (college) projects.
The only "problem" is energy efficiency, because this thing has a TDP of 130W! Sometimes in the summer with the stock cooler it would reach 140F. My friend's Core 2 Duo E6600 has a TDP of 65W and it never goes above 95F. Totally insane...
Nick, all of these are Pentium D benchmarks, but they're lumped under P4 because that's what they're made of. If you're stuck on a P4HT, compiling large projects (or background compilation) must be extremely painful.
Pentium 4s with HyperThreading are not really multi-core or multi-processor chips in any real fashion.
That's true. But most modern Pentium 4 CPUs are true dual-core chips with 2 physical cores and no HyperThreading present. The Pentium XE 965 is the rare exception; it is dual-core with HyperThreading, so it appears as 4 logical CPUs.
Also, if your time is this valuable, it probably makes sense to spend a little time figuring out where your bottlenecks really are.
Yes, but we're looking specifically at the CPU in these benchmarks. Upgrading disk and memory are always a good idea (particularly memory now that DDR2 has gotten so cheap) but that's not the point of this post.
The solution I'm looking at has about 50k lines of C# code and takes around 5 seconds for a full compile on a Pentium D 940.
C++ takes a long time to compile. If you want to save time compiling, port away from C++.
Also, ReSharper will help you avoid compiling your C# so often: it points out your mistakes as you type.
Use a Borland compiler to save time...
How much of this is the fault of the language itself?
One of my big bugbears with languages in the C family (C and C++) is their heavy reliance on files - source files that include header files that generally require other header files, and so on. You can have the fastest CPU in the world with a ridiculous amount of memory and you're still massively hampered by file access times. Even the trick of guarding a header with #ifndef ___BLAH_INCLUDED, while it does cut down on the time taken to parse the file, still requires that the file be opened first (in modern systems this is by far the most significant time issue).
Of course you can get around this (to a degree) with pre-compiled headers and so on, but at the ultimate cost of a much more complex compiler, along with all the potential issues that come with that. There is only so much optimisation you can do on this text file based model.
Alternative languages such as Delphi, which rely on compiled unit files rather than re-parsing textual headers, are so much faster in build times; even for a large application on a "slow" machine, the time taken to build and then execute is a fraction of the time taken to compile the C/C++ equivalent.
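To put a rough number on the "files including files" problem, here's a hypothetical sketch (the helper and the file layout are invented for illustration) that follows quoted #include directives the way a naive preprocessor would, counting file opens. Even with include guards, a shared header is opened once per inclusion; only its re-parsing is skipped:

```python
import os
import re

INCLUDE_RE = re.compile(r'^\s*#\s*include\s+"([^"]+)"')

def count_opens(name, search_dir):
    """Count how many file opens a naive preprocessor performs for one
    translation unit, recursing into quoted #include directives.

    Include guards let the compiler skip re-parsing a header's body,
    but the file still has to be opened and scanned to reach the
    guard. (Circular includes would recurse forever here, just as
    they would hang a guard-less preprocessor.)
    """
    opens = 1  # the file itself
    with open(os.path.join(search_dir, name)) as handle:
        for line in handle:
            match = INCLUDE_RE.match(line)
            if match:
                opens += count_opens(match.group(1), search_dir)
    return opens
```

With a main.c that includes util.h and io.h, each of which includes types.h, that comes to five opens for four files: the shared header is opened twice. Scale that up to a real project's header graph and the file-access cost becomes obvious.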
So how much kickback are you getting from AMD/Intel/Dell/HP for posting this?
Here's a hard one to justify because I have no numbers to back it up, but I'd *swear* that my computer's a lot faster running Vista (64 bit) with SuperFetch -- it just doesn't go to disk that often when running Visual Studio, so compiles are lightning fast. Totally unscientific, and I should be ashamed to bring it up without anything to back up my claims.
(Not that running 64-bit Vista as my primary computer is without its problems!)
Now to find a way to get rid of this Celeron at work...
"how much kickback are you getting from AMD/Intel/Dell/HP"
Wow, that'd be a hell of a kickback!
Although I kind of see the point. This article points to the CPU being the bottleneck. Are we sure that the compiler/IDE is as optimized as it can get? Maybe the reason we need a bigger minivan is because our IDE has a fat butt? Maybe we could get by with a VW Bug if our IDE/compiler wasn't so fat?
Are there any benchmarks comparing GCC and OpenWatcom to the VC++ compiler? On different CPUs? Is it better to use NAnt/MSBuild to compile your products instead of the IDE?
It's quite difficult to compare such things, because you have to compare compilation speed as well as benchmark the output, as well. I'm happy to disable LTCG and full optimization for standard compiles, but for release I'll spend a half-hour building with ICL if necessary. (Although it doesn't take that long, it also doesn't help all that much in most cases.) GCC's been getting slower as its full optimization improves, too, from what I've seen. It's a tricky tradeoff!
let me tell you, my pentium 3 with 256mb ram way outperforms my celeron D with 512mb ram
That's not right.
I FEEL your pain - getting this POS that I'm working on upgraded - finally getting up to 2 gig - the thing is, by the end of the year it's supposed to be replaced. Company is on a 3-4 year cycle. I try and tell them that a 3 yo PC for a developer is so slow as to be painful, but
For that matter, I ended up buying my own copy of RefactorPro, and my own copy of CC.net, because the boss shells out for the MSDN and that's it (and took 5 years of arguing to get him to do that - before that it was "You need VB.NET - you GET VB.NET" - he didn't want to
hear that under the Corp licensing agreement, it was cheaper to get the MSDN
Captcha - Orange (aren't you glad I didn't say banana)
"One of my big bugbears with languages in the C family (C and C++) is their heavy reliance on files - source files that include header files that generally require other header files, and so on. You can have the fastest CPU in the world with a ridiculous amount of memory and you're still massively hampered by file access times. Even the trick of guarding a header with #ifndef ___BLAH_INCLUDED, while it does cut down on the time taken to parse the file, still requires that the file be opened first (in modern systems this is by far the most significant time issue)."
Not sure about the MS compiler, but GCC and SunPro compilers track which includes have already been read for each compilation unit. In other words, no multiple opening of each header. For compilers that don't do this, you can wrap each include statement in preprocessor directives:

#ifndef WINGNUT_H
#include "wingnut.h"
#endif

And do the regular ifdef guards in the header:

#ifndef WINGNUT_H
#define WINGNUT_H
// ... header contents ...
#endif // !WINGNUT_H
This was recommended in one of the most boring programming books I've ever read, "Large-Scale C++ Software Design" by Lakos. If you can keep your eyelids open for long enough it imparts some interesting information, but much of it has been obsoleted by things like C++ namespaces.
I'm not surprised that you got more of a (perceived and/or real) speedup by increasing RAM from 512MB to 1.5GB than upgrading a Pentium M to a Core Duo. The Pentium M is derived from the Pentium 3 (with the bus interface and a few new instructions from the P4) and the Core CPUs are derived from the Pentium M. Unless you upgraded significantly on clock speed going from a P-M to a Core Duo, I wouldn't expect a huge improvement except for tasks that can effectively use both cores of the Duo. If you had upgraded from a Mobile Pentium 4 to a Core Duo, you would likely have seen a much larger difference. In your case, RAM would likely help more, especially if you're working with large programs.
A side note for anyone running 16-bit code, the Pentium M and Core Duo are faster (clock for clock) than a Core2 Duo at running 16-bit code. It's not a huge difference, but it's around 15%. It's quickly becoming a moot point as the Core2 CPUs are rapidly replacing the Pentium M and Core CPUs. For 16-bit code, the P-M/Core/Core2 CPUs blow the P4 completely out of the race. Although I haven't had a modern AMD CPU to test, I suspect they also perform well on 16-bit code. One of these days, 16-bit code will be dead, but there are still a lot of legacy in-house DOS/Win3x apps running out there.
The P4 is and always was a dead end. The only reason Intel was able to keep the P4 close to AMD is that their process technology allowed them to move to outrageous clock speeds, and even then, you had to use 32-bit code optimized for the P4 to be competitive.
I avoided the P4 for as long as possible, then finally bought a Pentium D 820 based machine for a good price a little under 2 years ago. I was fortunately able to sell that to one of my clients about 6 months ago, recovering most of my money. I purchased a Core2 Duo to replace that machine.
As someone else noted, benchmarking to find out where the bottlenecks are is the best way to determine what upgrade will make the most difference for you. Unfortunately, unless you have multiple different machines and/or spare parts around to try different combinations, you're still guessing.
I got a Pentium D machine from Dell a little while back thinking "yeah all right! 64-bit! dual core! this is going to be awesome!" But as it turns out, the fool thing CRAWLS. So I'm feeling you on this. Thanks for recommending some upgrades I can look into.
Don't know about the new multi-core, 64-bit machines, but my Xeon desk box is significantly faster than the P4 box target that I occasionally build/debug on.
"my pentium 3 with 256mb ram way outperforms my celeron D with 512mb ram"
haha me too... celerons are so crippled they're basically useless. Especially now that 'office applications' include working with the graphics and such that celerons can't handle
not to mention web browsing
The P4 sucks. Plain and simple. Its deep pipelining killed it. It was a bit of a gamble by Intel and they lost. Thank goodness their mobile division was around to provide an alternative.
Well gee. I was just "upgraded" to a P4 and I'm expected to be thankful to be granted this boon. After all it's better than the Celeron box they hauled away.
Be glad you don't work in a large shop where Soviet-style central planning leaves machines warehoused for 2 years before the box jockeys get off their duffs and deploy them.
Well gee. I was just upgraded to a P4 and I'm expected to be thankful to be granted this boon. After all it's better than the Celeron box they hauled away.
It is better, but yep you are still lagging behind the latest.
If you're running a Pentium 4 CPU-- even a fast 3.4 GHz+ dual-core model-- you could more than double your performance by upgrading to a middle-of-the-road Core 2 Duo CPU.
I hope someone can create an adapter to allow plugging in Core 2s into Socket 775 Pentium 4 motherboards.
In one day, Intel has made its entire Pentium D lineup of processors obsolete. Intel's Core 2 processors offer the sort of next-generation micro-architecture performance leap that we honestly haven't seen from Intel since the introduction of the P6.
The fastest possible Pentium 4 is nearly 50 percent slower at compilation than a midrange Athlon 64 or Core 2 Duo CPU. But wait! It gets worse
I was laughing at this sentence a lot! Text author, browse for more benchmarks and see how stupid you seem here!
I've been looking at listed Pentium M models running socket 479 at a 400 MHz bus for my D550 (HAHAHA, funny to most of you, I'm sure), but it actually scored a benchmark of 247 on its best day. Right now I have a Dothan 1.7 with 2MB L2 cache, socket 479... Is the biggest I can get a 2.1, or is it possible to get a Dual Core? The issue with a Dual Core, even at a 512 cache, is that's only half the buffer rate of my 1.7 M... Anyone got an answer as to what the best (2MB cache with highest CPU freq.) is that I can get?
Also, getting a fast drive helps. I recently switched over to WD Raptor drives and I noticed a dramatic change in performance. Now if only WD would bring out Raptors in capacities greater than 160GB.