December 10, 2006
There are two popular formulations of Moore's Law:
The most popular formulation [of Moore's Law] is the doubling of the number of transistors on integrated circuits every 18 months. At the end of the 1970s, Moore's Law became known as the limit for the number of transistors on the most complex chips. However, it is also common to cite Moore's Law to refer to the rapidly continuing advance in computing power per unit cost, because transistor count is also a rough measure of computer processing power.
The number of transistors on a CPU hasn't actually been doubling every 18 months; it's been doubling every 24 months. Here's a graph of the transistor count of each major Intel x86 chip family release from 1971 to 2006:
The dotted line is the predicted transistor count if you doubled the 2,300 transistors from the Intel 4004 chip every two years since 1971.
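That dotted-line projection is just repeated doubling from the 4004's count. A minimal sketch of the math (the function name is mine, not from any library):

```python
def predicted_transistors(year, base=2300, base_year=1971, doubling_years=2):
    """Project a transistor count by doubling the Intel 4004's
    2,300 transistors every two years from 1971 onward."""
    return base * 2 ** ((year - base_year) / doubling_years)

# Prediction for 2001, the year the Pentium 4 2.0 GHz launched:
print(predicted_transistors(2001))  # 75366400.0
```

The Pentium 4 actually shipped with roughly 42 million transistors, so the two-year doubling curve lands within a factor of two of reality after thirty years of compounding.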
That's why I prefer the second, looser definition of Moore's law: dramatic increases in computing power per unit cost. If you're a stickler for detail, there's an extensive investigation of Moore's law at Ars Technica you can refer to.
But how do we correlate Moore's Law-- the inexorable upward spiral of raw transistor counts-- with performance in practical terms? Personally, I like to look at benchmarks that use "typical" PC applications, such as SysMark 2004. According to page 14 of this PDF, SysMark 2004 scores are calibrated to a reference system: a Pentium 4 2.0 GHz. The reference system scores 100. Thus, a system which scores 200 in SysMark 2004 will be twice as fast as the reference system.
So, what was the first new CPU to double the performance of the SysMark 2004 reference system with a perfect 200? The Pentium 4 "Extreme Edition" 3.2 GHz scores 197 on the SysMark 2004 office benchmark in this set of Tom's Hardware benchmarks. Let's compare the release dates of these two CPUs:
Pentium 4 2.0 GHz: August 27th, 2001
Pentium 4EE 3.2 GHz: November 3rd, 2003
It took 26 months to double real world performance in SysMark 2004. That tracks almost exactly with the doubling of transistor counts every 24 months.
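The arithmetic behind that 26-month figure is worth spelling out, since the 197 score is slightly short of a true doubling. A quick sketch:

```python
from datetime import date
from math import log

launch_ref = date(2001, 8, 27)  # Pentium 4 2.0 GHz, SysMark 2004 score 100
launch_ee = date(2003, 11, 3)   # Pentium 4 EE 3.2 GHz, SysMark 2004 score 197

# Whole months elapsed between the two launch dates
months = (launch_ee.year - launch_ref.year) * 12 + (launch_ee.month - launch_ref.month)
if launch_ee.day < launch_ref.day:
    months -= 1

# 197 isn't quite 2x; extrapolate to an exact doubling interval
doubling_months = months * log(2) / log(197 / 100)
print(months, round(doubling_months, 1))  # 26 26.6
```

Even after correcting for the not-quite-200 score, the implied doubling interval is about 26.6 months, still close to the 24-month transistor doubling rate.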
This isn't a perfect comparison, since other parts of the PC get faster at different rates. But it's certainly a good indication that CPU transistor count is a fairly reliable indicator of overall performance.
Posted by Jeff Atwood
Moore's Law is
1. now actually the benchmark for CPU manufacturers, resulting in a self-fulfilling prophecy.
2. largely sustained by demand from software: not the highly specialized stuff, but FPSs and RPGs, "stand-by-in-tray" programs, not-so-optimized application development, and the Windows family.
"Whatever given by Intel taken by MS" still holds today.
But do you know anyone that uses Notepad to compose their term paper or report?
I did. Real men use LaTeX to write their term papers. Nothing else comes close if you want quality output and good looking formulas.
How do you measure performance? Some number crunching application now runs twice as fast? That's nice, but only the geeky care. What can you do with a PC now that you could not do ten years ago? What would you like to do with a PC now that current PC's cannot do? Me, I am still stuck in the text world. I will occasionally paste a picture, but I do almost no graphics work on a computer. I enjoy the graphics that other people produce. Most is fluff, but occasionally I see something really good. Interactive graphing of algebraic equations was one.
What is the website blank for? I put in a URL, submit and I get some error message saying you bad, you tried to put a URL in the space for a website. What gives?
At around 10 million transistors we had enough computing power to start on (mainstream) garbage-collected languages; with 100 million we got fully debuggable, refactoring-capable IDEs. I wonder what 1 billion will bring!?
For years, it hasn't been Microsoft doing the taking away, it's been crappy antivirus and spyware... Microsoft's finally back in the game of stealing ur megahurtz though, hopefully it's worth it. =p (http://www.knitemare.org/cats/headoutpower.jpg)
Another decent indicator (but harder to research) is how much performance you can buy with a given amount of money (adjusted for inflation), either individual components or a whole system. That particular measure has probably increased faster than overall performance.
In his writings, Moore was quite explicit: this was *exactly* what he wanted to measure. Number of devices and feature size were implementation details.
Actually, the transistor count per core isn't what's doubling anymore. It's all about multi-core processors now, so the average machine can't take advantage of all this doubling unless it runs software that can actually utilize more cores, which is usually not the case.
Everyone I know who does LaTeX uses Emacs. Like MathML, it's not nearly as user-friendly as HTML. (Not that I doubt you in this instance, anyone can learn to do anything by hand eventually.)
Charles, I'd say quick and easy home DVD authoring is one area that's reached mainstream price in the last two years. No more need to suffer while editing and then let it render overnight. In photography, panoramic stitching has entered that realm.
Like MathML, it's not nearly as user-friendly as HTML
Sorry, but I'll have to disagree. LaTeX is a thing of beauty once you get the hang of it. It makes the impossible possible. I still use it whenever I can (whitepapers, documents that don't require editing) and output beautifully typeset PDFs that are easy on the eyes. If anyone here still hasn't discovered LaTeX, I urge you to give it a try. Initial setup can be a bitch, and there's a bit of a learning curve, but once you typeset a good-size document in it you will never go back to Word if you can do without it.
BTW, even if you use emacs, it won't automate much for you. Maybe cross references, maybe matching curly braces, but that's about it.
There's a Mac tool that works great when you can't use LaTeX, it's called LaTeXiT. Basically it gives you LaTeX syntax for formulas and then you can drag and drop the result into your non-LaTeX document.
How fast is software getting slower? I think you could build an OS like Windows/Linux and make it start at least 100 times faster.
Yes, Notepad is awesomely fast. But do you know anyone that uses Notepad to compose their term paper or report?
It's a specious argument.
I have to disagree with your final conclusion that transistor count is a strong indicator of performance.
Transistor count is only vaguely linked to performance. For a simple example, imagine a single-core system vs. a dual-core system. The dual-core system will have almost exactly double the number of 'transistors', but will it get double the performance on your example of an office benchmark? Not a chance.
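The dual-core point can be made concrete with Amdahl's law: the 30% parallel fraction below is an illustrative assumption, not a measured figure for any real office workload.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of
    the workload can run in parallel across cores."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# Assume only ~30% of an office workload parallelizes:
print(round(amdahl_speedup(0.3, 2), 2))  # 1.18 -- far short of 2x
```

Doubling the transistor budget by adding a second core roughly doubles the die, but under this assumption delivers less than a 20% speedup on the serial-heavy workload.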
The benchmarks you cite are using a P4 2.0GHz vs. a P4 3.2GHz. This is, again, an office benchmark, so there are a lot more confounding variables. Even assuming that this is a pure CPU performance measurement, it doesn't look right - you have a pair of P4 machines, one running at 1.6 times the clock speed, but somehow getting double the performance. What's going on there? Is there a difference in cache? Chipset? Hard drive size? Memory types? There's no way to know, and it's probably a combination of these things.
There are even problems with the use of 'transistors' as a measurement, because what constitutes a transistor varies greatly from one hardware designer to another, and transistors generally aren't their lowest-level primitive anyway. In certain situations, a transistor might occupy n units of space. If you combine transistors into a logic gate, they might take 1.5n units of space.
I'd say that at best, Moore's Law is a tool for marketing and evangelism, and not something to be used for any sort of scientific measurement or prediction. Intel *want* you (and their shareholders) to believe that processor performance is ramping up; they'll quietly ignore that period after the P4 3GHz where they took three years to increase their clock rate by 1GHz. Now performance-per-watt and multicore chips are their marketing thrust, for the simple reason that *throwing transistors at a single core was giving diminishing returns*.
The vast majority of the transistor count of a single-core CPU was already cache. There have been plenty of well-thought-out experiments measuring the performance difference gained by increasing cache size, and you'll find that it's a rare day indeed that doubling your cache size doubles your performance.
"...they'll quietly ignore that period after the P4 3GHz where they took three years to increase their clock rate by 1GHz..."
Right, but this is an Intel design issue; the huge 31-stage pipeline and the unfortunate "overclocked-by-manufacturer" tendency they pursued.
I would much rather go back to the Pentium days, where a discrete cooling profile was enough. My dev system is an AMD X2 3800+ 35W and it provides silent performance, but is relatively expensive. In that light, I would prefer Moore's Law to be nonexistent and let CPU architects measure their success by more than mere FLOPS.
A number of people need to review what "correlation" is: http://en.wikipedia.org/wiki/Correlation
Observing a correlation doesn't imply causation or a "link" (as Ian calls it). And Hamilton Lovecraft, there are correlation coefficients other than 0 and 1. Nobody's claiming it's a 1 (which would mean transistor count is 100% determinative of performance, clearly false), just that it's way higher than arguments based around (completely true) architecture differences and the differences in the rate of improvement of other hardware might lead you to believe, if you don't actually examine the evidence.
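The point about intermediate correlation coefficients can be illustrated with a quick Pearson calculation. The data points below are made up for illustration; they are not real benchmark results.

```python
from math import sqrt

# Illustrative (made-up) pairs: transistor count in millions vs. benchmark score
transistors = [42, 55, 125, 169, 291]
scores = [100, 120, 180, 197, 260]

n = len(transistors)
mx, my = sum(transistors) / n, sum(scores) / n
cov = sum((x - mx) * (y - my) for x, y in zip(transistors, scores))
var_x = sum((x - mx) ** 2 for x in transistors)
var_y = sum((y - my) ** 2 for y in scores)
r = cov / sqrt(var_x * var_y)  # Pearson correlation coefficient
print(round(r, 3))
```

A coefficient well above zero but below one is exactly the claim being made: transistor count tracks performance strongly without determining it.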
The benchmarks you cite are using a P4 2.0GHz vs. a P4 3.2GHz. This is, again, an office benchmark, so there are a lot more confounding variables.
You know, Ian, it's funny, because when I originally did this research I was trying to *disprove* the idea that growth in transistor count relates to performance in a modern PC.
Imagine my surprise when I found that the interval for performance doubling in SysMark2004-- 26 months-- almost exactly mirrors the 24 month doubling rate of transistors in CPUs. I did not expect this.
I'm not saying it's a strictly causal relationship (and I agree with all of the criticisms you list), but there's plenty of evidence to support the idea that transistor count is a decent, if very rough, measure of overall performance.
I'll just mention once again that most of the transistor count on a modern CPU die is L2 cache, and you'll just ignore that entirely on your next post, OK?
Shouldn't computer power be a measurement that can be applied across architectures, and not be transistor-based? Maybe this is something SysMark is trying to do.
What did they use before transistors (the vacuum tube era)?
What happens when transistors are no longer used in PC chips?
In autos you have horsepower, which can be measured regardless of engine size, type or complexity.
If transistor count and CPU processing power increases can be correlated, then Moore's Law has some validity. Otherwise, I think you would need a more independent measurement.
*And your point is..?*
What's the difference in performance between a Pentium 4 HT 520, clocked at 2800MHz, with 1MB L2 cache and 125M transistors and a Pentium 4 HT 620, clocked at 2800MHz, with 2MB L2 cache and 169M transistors?