July 26, 2008
I got a call from Rob Conery today asking for advice on building his own computer. Rob works for Microsoft, but lives in Hawaii. I'm not sure how he managed that, but being so far from the mothership apparently means he has the flexibility to spec his own PC. Being stuck in Hawaii is, I'm sure, a total bummer, dude.
Rob and I may disagree on pretty much everything from a coding perspective, but we can agree on one thing: we love computers. And what better way to celebrate that love than by building your own? It's not hard. This industry was built on the commodification of hardware. If you can snap together a Lego kit, you can build a computer.
Maybe this is a minority opinion, but I find understanding the hardware to be instructive for programmers. Peter Norvig -- now director of research at Google -- appears to concur.
Understand how the hardware affects what you do. Know how long it takes your computer to execute an instruction, fetch a word from memory (with and without a cache miss), transfer data over ethernet (or the internet), read consecutive words from disk, and seek to a new location on disk.
In my book, one of the best ways to understand the hardware is to get your hands dirty and put one together, including installing the OS, yourself. It's a shame Apple programmers can't do this, as their hardware has to be blessed by the Cupertino DRM gods. Or, you could build a frankenmac, though you'll run the risk of running a "patched" OS X indefinitely.
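To make Norvig's advice concrete, here's his hierarchy as rough numbers -- order-of-magnitude figures sketched for a circa-2008 machine, illustrative assumptions rather than measurements, so measure your own box before trusting any of them:

```python
# Order-of-magnitude latency guesses for a circa-2008 desktop.
# These are illustrative assumptions only -- not benchmarks.
LATENCY_NS = {
    "execute_instruction": 1,         # roughly one cycle at ~3 GHz
    "l2_cache_hit": 10,
    "main_memory_fetch": 100,         # i.e. an L2 cache miss
    "lan_round_trip": 500_000,        # ethernet ping, same switch
    "disk_seek": 8_000_000,
    "disk_read_1mb_sequential": 20_000_000,
}

def slowdown(fast_op: str, slow_op: str) -> float:
    """How many times slower slow_op is than fast_op."""
    return LATENCY_NS[slow_op] / LATENCY_NS[fast_op]

# A cache miss costs on the order of 100 instructions;
# a disk seek costs on the order of 8 million.
```

The point isn't the exact values; it's that each step down the hierarchy costs one or two orders of magnitude more than the last, which is exactly the intuition you build by assembling the parts yourself.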
As Rob and I were talking about the philosophy of building your own development PC -- something I also discussed on a Hanselminutes podcast -- he said you know, you should blog this. But Rob -- I already have, many times over! Let's walk down the core list of components I recommended for Rob, and I'll explain my choices with links to the relevant blog posts I've made on that particular topic.
ASUS P5E Intel X38 motherboard ($225)
I'm a big triple monitor guy, so I insist on motherboards that are capable of accepting two video cards -- in other words, they have two x8 or x16 PCI Express card slots suitable for video cards. I also demand quiet from my PC, which means a motherboard with all passive cooling. Beyond that, I don't like to pay a lot for a fancy motherboard. After spending the last five years with motherboards packing scads of features I never end up using (two ethernet ports, anyone?), I've realized there are better ways to invest your money. People tend to respect ASUS as one of the largest and most established Taiwanese OEMs, so it's usually a safe choice. I'd go as far down on price on the motherboard as you can without losing whatever essential features you truly need. Save that money for the other parts.
Intel Core 2 Duo E8500 3.16 GHz CPU ($190)
Intel Core 2 Quad Q9300 2.5 GHz CPU ($270)
Ah, the eternal debate: dual versus quad. Despite what Intel's marketing weasels might want you to believe, clock speed still matters very much. Here's an example: SQL Server 2005 queries on my local box, a 3.5 GHz dual core, execute more than twice as fast as on our server, a 1.8 GHz eight core machine. Sadly, very few development environments parallelize well, with the notable exception of C++ compilers. Outside of a few niche activities, such as video encoding and professional 3D rendering, most computing tasks don't scale worth a damn beyond two cores. Yes, it's exciting to see those four graphs in Task Manager (and even I get a little giddy when I see sixty-four of 'em), but take a look at the cold, hard benchmark data and the contents of your wallet before letting that seductive 4 > 2 math hijack the rational parts of your brain.
It's also smart to buy a little below the maximum, with the ultimate goal of upgrading to a whizzy-bang 4 GHz quad core CPU sometime in the future. One of the hidden value propositions in building your own PC is the ability to easily upgrade it later. CPU is one of the most obvious upgrade points where you want to intentionally underbuy a little. Give yourself some room for future upgrades. Until a quad costs the same as a dual at the same clock speed, my vote still goes to the fastest dual core you can afford.
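If you want numbers behind the dual-versus-quad argument, Amdahl's law is the back-of-the-envelope tool. A quick sketch -- the 50% parallel fraction is my assumption, and a generous one for most desktop software of this era:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Best-case speedup when only part of the work parallelizes
    (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If half your workload parallelizes (an optimistic assumption):
dual = amdahl_speedup(0.5, 2)   # ~1.33x over one core
quad = amdahl_speedup(0.5, 4)   # ~1.60x over one core
```

Going from two cores to four buys you roughly a 20% gain in this scenario, not the 2x the "4 > 2" math suggests -- which is why the faster dual core usually wins on real workloads.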
Kingston 4GB (2 x 2GB) DDR2 800 x 2 ($156)
Memory is awesomely cheap. When it comes to memory, I like to buy a few notches above the cheapest stuff, and Kingston has been a consistently reliable brand for me at that pricing level. There's no reason to bother with anything under 8 GB these days. Don't get hung up on memory speed, though. Quantity is more important than a few extra ticks of speed. But don't take my word for it. As an experiment, Digit-Life cut the speed of memory in half, with a resulting overall average performance loss of merely three percent. By the time your system has to reach outside of the L1, L2, and possibly even L3 cache -- it's already so slow from the system's perspective as to be academic. Memory that is a few extra nanoseconds faster isn't going to make any difference. This is also why I specified the latest and greatest Intel CPUs with larger 6 MB L2 caches. Remember, kids, Caching Is Fundamental!
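The Digit-Life result makes sense once you do the arithmetic. Here's a toy model -- the 5% memory-bound share is a number I picked purely to illustrate; the real figure depends entirely on your workload and cache hit rates:

```python
def relative_runtime(mem_fraction: float, mem_slowdown: float) -> float:
    """Runtime relative to baseline when only the memory-bound share
    of the work (mem_fraction) gets slower by mem_slowdown."""
    return (1.0 - mem_fraction) + mem_fraction * mem_slowdown

# If caches satisfy the vast majority of accesses, perhaps 5% of
# runtime actually waits on DRAM. Halving memory speed (a 2x
# slowdown on that share) then costs only ~5% overall -- in the
# same ballpark as Digit-Life's measured 3%.
hit = relative_runtime(0.05, 2.0)
```

This is the arithmetic behind "Caching Is Fundamental": once the caches are doing their job, heroic memory speeds barely move the needle.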
Western Digital VelociRaptor 300 GB 10,000 RPM Hard Drive ($290)
This is arguably the only indulgence on the list. The Velociraptor is an incredibly expensive drive, but it's also a rocket of a hard drive. I'm a big believer in the importance of disk speed to overall system performance, particularly for software developers. At least Scott Guthrie backs me up on this one. Trust me, you want a 10,000 RPM boot drive. Buy a slower large drive for your archiving needs. You want two drives, anyway; having two spindles will give you a lot of flexibility and also help your virtual machine performance immensely.
This new Raptor model is the best of the series. It's much quieter, uses less power, generates less heat, and is by far the fastest -- embarrassingly fast. It's expensive, yes. I won't hold it against you if you decide to disregard this advice and go with a respectably fast, less expensive hard drive. But to me, it's all about putting the money where the most significant bottlenecks are, and considered in that light -- man, this thing is so worth it. As Storage Review said, "[its] single-user scores ... blow away those of every other [hdd]".
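The spindle-speed math is simple enough to check yourself. Average rotational latency is half a revolution (seek time is a separate cost on top of this):

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average wait for the platter to spin the right sector under
    the head: half of one full revolution."""
    ms_per_revolution = 60_000.0 / rpm
    return ms_per_revolution / 2.0

# 10,000 RPM VelociRaptor vs. a typical 7,200 RPM desktop drive:
raptor = avg_rotational_latency_ms(10_000)   # 3.0 ms
typical = avg_rotational_latency_ms(7_200)   # ~4.17 ms
```

A millisecond shaved off every random access adds up fast when your IDE and OS are issuing thousands of them, which is why the boot drive is where spindle speed pays off.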
Radeon HD 4850 512MB video card ($155 after rebate)
Even if you're not a gamer, it's hard to ignore the charms of this amazing powerhouse of a video card. The brand new ATI 4850 delivers performance on par with the very fastest $500+ video card you can buy for a measly hundred and fifty bucks! Modern operating systems require video grunt, either for windowing effects or high-definition video playback. Beyond that, it's looking more and more like some highly parallelizable tasks may move to the GPU. Have you ever read stuff like "even the slowest GPU implementation was nearly 6 times faster than the best-performing CPU version"? Get used to reading statements like that; I expect you'll be reading a lot more of them in the future as general purpose APIs for GPU programmability become mainstream. That's another reason, as a programmer and not necessarily a gamer, you still want a modern video card. For all this talk of coming 8 and 16 core CPUs, eventually the GPU could be the death of the general purpose CPU.
We also want our video card to be efficient. Many don't realize this, but your video card can consume as much power as your CPU. Sometimes even more! The 4850, for all its muscle, is remarkably efficient as well. According to a recent AnandTech roundup, it's on par with the most efficient cards of this generation. Pay attention to your idle power consumption, because power consumed means heat produced, which in turn means additional noise and possible instability.
Corsair 520HX 520W Power Supply ($100 after rebate)
The power supply is probably one of the most underrated and misunderstood components of a modern PC. First, because people tend to focus on the "watts" number when the really important number is actually efficiency -- a certain percentage of energy that goes into every power supply is turned into waste heat. An efficient power supply will run cooler and more reliably because it uses higher quality parts. People think you need 1.21 Jigawatts to run a powerful desktop system, but that's just not true. Unless you have a bleeding-edge CPU paired with two high-end top of the line gaming class video cards, trust me -- even 500 watts is overkill.
The Corsair model I recommend gets stellar reviews. It has modular cables and 80 Plus certification, meaning it's at least 80% efficient across typical loads. Note that a quality power supply is not a substitute for a quality UPS or surge protector, but it helps.
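To see why efficiency matters more than the watt rating on the box, run the numbers. This sketch assumes a 250 W DC load, which is my guess for a build like this under stress:

```python
def wall_draw_watts(dc_load: float, efficiency: float) -> float:
    """Power pulled from the outlet to deliver dc_load to the parts."""
    return dc_load / efficiency

def waste_heat_watts(dc_load: float, efficiency: float) -> float:
    """Everything that doesn't reach the parts becomes heat
    inside the power supply."""
    return wall_draw_watts(dc_load, efficiency) - dc_load

# An 80%-efficient unit turns a 250 W load into 62.5 W of heat.
# A cheap 70% unit produces over 107 W -- an extra light bulb's
# worth of heat (and fan noise) for the same computer.
```

That waste heat is what drives the fan, and the fan is what drives the noise -- which is exactly why the efficiency number matters more than the headline wattage.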
Scythe "Ninja" SCNJ-2000 cooler ($50)
Scythe "Ninja Mini" SCMNJ-1000 cooler ($35)
I'll be honest with you. I have a giant heatsink fetish. These giant hunks of aluminum and copper, and the liquid-filled heatpipes that drive them, fascinate me. But there's a more practical reason, as well: if you want a quiet computer, you don't even bother with the stock coolers that are bundled with the CPU. Over the last few years, I keep coming back to Scythe's classic "Ninja" tower cooler, which is available in tall and short varieties. They're so astoundingly efficient that, with adequate case ventilation, they can be run fanless. I even (barely) managed to squeeze the Ninja Mini into my home theater PC build, and it's now mercifully fanless as well. There are plenty of other great tower/heatpipe coolers on the market, but the Ninja is still one of the best, a testament to its pioneering design. The CPU is (usually) the biggest consumer of power in your PC, so it's sensible to invest in a highly efficient aftermarket cooler to keep noise and heat at bay under load.
There you have it. More than you ever possibly wanted to know about how an obsessive geek builds a PC -- painstakingly analyzing every single part that goes into it. Now, like Rob, you're probably sorry you asked; who needs all the philosophical digressions, just give us the damn parts list! OK, here it is:
The best bang for the buck developer x86 box I can come up with, all for around $1100.
I try to avoid posting about hardware too much, but sometimes I can't help myself. I blame Rob. Enjoy your new system, Mr. Conery.
Posted by Jeff Atwood
I enjoy your posts about hardware. But as awesomely appealing as a great desktop setup is, I find myself exclusively working on a laptop these days. I value the portability, and there's nothing like kicking back and working on the couch.
Nice. Thanks for the input. These kinds of posts are helpful. To be honest I don't really have the time to do this much research (even in all my geekiness). The wife, baby and house just mean I don't have the free time to really read all the reviews.
Even if your recommendations aren't absolutely the most perfect ever, they are pretty good—and a lot better than I would probably pull together with only a few hours of research.
Thanks; these hardware posts are great, Jeff. However, I couldn't help but snicker at the beginning...
'Understand how the hardware affects what you do...'
In my book, one of the best ways to understand the hardware is to get your hands dirty and put one together..
The exact same principle is true (and for programmers, far more relevant) with respect to learning C to inform how you program. You know, for all the nit-pickery about the performance benefit of dual core vs. quad core, etc. it's mind-boggling that you ignore the potential performance benefit of learning C. You ARE a programmer, right?
Lazy programmers piling frameworks on top of libraries on top of frameworks on top of snap-ins on top of abstraction layers ... just to avoid the nitty-gritty is one major reason WHY we all need to keep building faster and faster PCs with more and more RAM and HDD space.
Sorry for the rant, but really. :)
Buying an aftermarket heatsink is a giant waste of money if you're using a Core 2 Duo. The current Intel HSF design is one of the best on the market and comes free with your chip anyway. I have two myself and I wouldn't trade them for those big honking copper things you have to use on AMD chips even if they were free.
About the 64-bit issues: if you need a 64-bit driver, it's an easy task on Windows Vista (because a lot of manufacturers provide one) and a little more difficult on Windows XP (because sometimes you have to use one from, say, the chip maker, or a very generic one).
But overall, dealing with a 64-bit environment is not too bad ... I'm running 2 machines (1 Windows XP 64, 1 Windows Vista Ultimate 64) and I've never been driven to despair :)
I miss CP/M on a Z-80 with 64k of RAM :(
There is a big difference between just assembling parts from a list and actually understanding what all those parts are doing and how they fit together to affect your system performance. E.g., you are 100% correct about speed still mattering. Most software doesn't take advantage of multiple cores. You can run more programs, but in some cases (like with the 1.8 GHz cores) those will actually run slower on multi-core hardware.
My background is in embedded systems, writing code on bare hardware with no OS. I cringe at the thought of developers who have no concept of memory addressing, cache or bus speed. Moore's law may be making these things less significant, but we can't ignore the details yet.
Some of us like to have more than one skill. We have a term for programmers who have never assembled a PC or installed an operating system: one-trick ponies.
A programmer who can build a server or workstation from a shopping basket full of parts that he chose himself, and can install and configure an operating system on it, is more flexible and more valuable than one of equal programming skill who has only the vaguest idea about what's in the beige box under his desk. The former can become the sole IT person for a company; the latter is restricted only to corporate jobs with a clearly defined area of focus. (Once I got a consulting job, which turned into a four-year contract, because their Java programmer couldn't figure out how to install a tape drive in a Sun machine).
My employer knows that he can call me and say: We had a fire, and both servers melted. Get thee hence to Micro Center, get two machines with maximum memory and RAID-5, put Linux on them and bring them in. He never has to worry that his Perl programmer doesn't know which end of a screwdriver is which.
While I concur with some of the criticisms leveled above, I do appreciate you bringing up hardware from time to time, the resulting discussions are both instructive for the inexperienced, and amusing for the overly-experienced (e.g. electrical and computer engineers).
* I agree that snapping together a computer from off-the-shelf parts provides little instructive value as to how your code runs on hardware (though it is a hell of a lot of fun!) I doubt many programmers are so oblivious that they are surprised to learn what a hard drive, motherboard, processor, etc. look like. Moreover, very little of the information out there is any good, being authored largely by gamers, and others with limited technical background, whose scientific acumen in performing benchmarks is questionable, and whose technical chops mostly consist of an impressive ability to parrot back marketing-speak from Intel, AMD, nVidia, and the like. This situation is difficult to remedy, however, as a more sensible treatment of these things requires quite a bit of book-learnin'.
* Stating that C++ compilers are the only notable exception to a non-parallelizable build process is a considerable sin of omission. Indeed, the very example you give, make -jN, does not depend on a multithreaded compiler at all! Any build process, with the common Linux make/gcc variant being no exception, consists of many, many commands, some of which are compiler invocations, some of which are moving files around, adding them to common environment variables (or the registry, depending on your OS religion), or other common tasks. Each one of these commands, in a make-style incremental build process, is fired off in a separate shell. make -j4 simply says how many thread contexts can run these shells, and all the smarts as to which jobs can run at the same time and which need to be serialized is baked into the Makefile!
*Memory speed (and thus bandwidth) does matter! I understand the gist of what you were trying to say, but in the interest of correctness, a few nanoseconds faster is a considerable amount, when the cycle time of modern DRAM is under 1 ns. DDR2-1066 over DDR2-800 will provide a noticeable performance difference, assuming that you aren't severely bottlenecked by something else. The test you mentioned doesn't really answer the general question, the effectiveness of a memory subsystem depends *entirely* on the access pattern and the size/nature of the program's working set and the rest of your hardware (and on your OS's virtual memory subsystem, if you want to get picky). Digit-life makes no effort to convince the reader that their setup is representative of the general case, and such benchmarking is remarkably tricky even when lots of thought is put into it.
Not to put too fine a point on it, but it is very important to distrust nearly all benchmarks you find on the internet, as there really are too many variables to trust a single number, or set of numbers, to truly represent the goodness of a hardware choice.
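For concreteness, the theoretical peak bandwidth figures behind that DDR2-800 vs. DDR2-1066 comparison (per 64-bit channel; real-world throughput is lower, and DDR2-1066 technically runs at 1066.67 MT/s, which is why vendors quote 8533 MB/s):

```python
def ddr2_peak_mb_per_s(megatransfers: int, bus_width_bytes: int = 8) -> int:
    """Theoretical peak bandwidth for one 64-bit (8-byte) DDR2 channel."""
    return megatransfers * bus_width_bytes

ddr2_800 = ddr2_peak_mb_per_s(800)    # 6400 MB/s (sold as PC2-6400)
ddr2_1066 = ddr2_peak_mb_per_s(1066)  # 8528 MB/s (sold as PC2-8500)
```

Whether that ~33% extra peak bandwidth shows up in practice is, as noted, entirely a question of access patterns and working-set size.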
*There is a chicken-and-egg problem not discussed in the article. Wouldn't it make more economic sense for compiler devs (and pretty much all developers) to make their code parallelizable if more developers (users) owned multicore machines and demanded it? What are your thoughts on this? Who's going to win this game of chicken, and who's going to cave first, processor owners or software developers?
*I couldn't believe that commodification was the preferred option on Wikipedia (over commoditization, which doesn't carry the unfortunate side effect of sounding like you meant that we're turning hardware into toilets!)
*Great work on the blog and on the stackoverflow podcast Jeff, I really do enjoy reading/listening, your focus on the human side of code is refreshing. The fact that I disagree with you on technical issues just makes it more fun!
I worked for a big company for several years. As the senior developer in my group, I would (together with my manager) do the interviewing when we were looking to hire more Java programmers.
We'd always ask the candidate to describe their home network, or at least their home PC setup.
Those who didn't get excited at the opportunity to describe what they had built tended not to get hired.
The Q9450 has twice the cache of the Q9300. The Q9300 even has less cache than the older Q6600 (which has 8 MB).
I think quad cores are important for software developers. Well, I think non-parallel makes suck, but, anyway, having multiple cores is helpful in learning to program for multiple cores.
Of course, at this stage you can still take the wrong road of functional separation, which does not scale to the thousands of cores Intel is seeing in our future. But it helps anyway. :-)
Windows is the exception in this department rather than the rule: for just about every other OS I know of -- GNU/Linux, *BSD, Mac OS X, etc. -- drivers that do not work in 64-bit mode are rare.
That is true, since of the OS's you mention, all but Windows and OS X have source drivers, and OS X, well, Jeff's rants about Apple lock-in are frequent enough. Actually, though, my only device that did not work out of the box with Vista 64-bit was my old scanner (for which Canon refuses to supply a 64-bit clean driver).
And no, I don't use Windows exclusively, though I have a pro-MS bias :P My home machine is Vista, but my two work machines are Win Server 2003 and Debian etch (both 64-bit as well).
The fact that someone who works on the ASP.NET team has to call you for hardware advice is horrifying, and gives me fast insight as to why I hate ASP.NET so very very much.
It only takes one hard drive with vibration problems to turn that big flat sidewall of a PC case into a speaker.
There is a reason people have spent years discussing all the ways to eliminate drive vibration turning into sound
http://www.silentpcreview.com/forums/viewtopic.php?t=8240 There is much more at that link but I'll quote some of the big picture type statements here:
Even a small amount of research on quieting HDDs in SPCR should have led you to these oft-repeated facts:
* Noise is made by moving components in two different ways: Acoustically, and via structure-borne vibrations.
* With hard drives, direct acoustic noise is often less intrusive than noise caused by its vibrations when installed normally in the chassis.
* The world's biggest HDD maker, Seagate, says mounting methods matter more to perceived noise than direct air-borne noise.
* Mechanical decoupling is any method to minimize firm contact between structures, usually in an effort to reduce transmission of vibrations that can cause noise.
Subjectively, mechanical decoupling of HDDs not only reduces the low frequency hum that you probably think is an intrinsic part of a PC's noise signature; it also reduces high frequency noise and can virtually eliminate seek noise.
Elastic suspension is unbeatable for mechanical decoupling of HDD vibrations. You can hear and feel the difference.
and OS X, well, Jeff's rants about Apple lock-in are frequent enough.
Clarification: What I meant to say is that *because* Apple controls Mac hardware so tightly, they can force hardware manufacturers to supply 64-bit drivers (or 32-bit drivers that work under 10.5, which is 64-bit) much more easily than Microsoft can.
he said you know, you should blog this.
And so he gives you an excuse to dive into your favorite pastime -- self linking. ;-)
When I was in college, there were two routes for someone that wanted to work with computers. One, the business route, and two, the engineering route. However, you could also double major so the business folks could get their hands on hardware, building circuits, assembly, and OS.
The days of flipping through the rather large Computer Shopper and building affordable Zenith Heath kits are gone, but if you like to tinker and learn by doing, here are three true-blue, gotta-love-'em tech resources: JDR Microdevices, Circuit Cellar, and Nuts and Volts.
The book Upgrading and Repairing PCs was always a useful reference. Toms Hardware is a useful online site (www.tomshardware.com/us).
We used to build our own (and there is definitely value in doing it), but a couple of years ago we replaced our desktops with laptops (docking stations and all). What a huge disappointment (Sony VAIO) all around; too many issues to mention. But we are back to desktops. Although you learn tons when you build a machine (and this should be a requirement for every tech support person :-)), a respected, reliable local shop can supply us with a customized machine the same day. So if time is money, and you are conservative with both, this can be a good route. We just install all software, including the OS of our choice.
Free physical memory is simply *wasted*
I assumed that everyone knows that free physical memory is used for software prefetching on a modern OS. Maybe I should have been more clear.
I'd be interested to see, however, whether anyone has *quantitatively* compared the performance boost of software prefetching into main memory or a certain physical location on the hard disk, versus hardware prefetching into the CPU cache. I would suspect the latter has a significant advantage, though.
Considering the salary of software developers, the time = money equation means that we should all be using machines at least this powerful at our day jobs. When I arrived at my current job (earlier this year), my development machine was a Windows XP box with 512 MB of RAM (Pentium 4, 70 GB HD, 1 monitor). No joke. The IDE took minutes to load, even on a fresh boot. That's right, I was working on a computer you could pick up on craigslist for less than a day's wage. I lobbied for a faster machine, second screen, etc. All I got in three months was a RAM upgrade (admittedly, a big help). I'm starting a new job at the end of this week. I'd love to drag that machine out into a field and get all Office Space on it.
Until a quad costs the same as a dual at the same clock speed, my vote still goes to the fastest dual core you can afford.
What am I missing there? It sounds like you're saying, "When this thing that does more costs the same as this thing that does less, I'll buy the thing that does more." I know that precious little software has actually been written to take advantage of multiple processors, but from a hardware perspective, Intel made one with 4 cores and one with 2 cores -- why would they sell them at the same price?
I think quads are worthwhile because you can set processor affinity: when I'm doing web dev I can set my database stuff to one processor, my IDE to another, and all my job (but not real work) related apps (Outlook, Word, Excel, etc.) to another... The developer's burden is that we have so many computer-intensive apps open at the same time. Having 4 processors instead of two can be really handy if you use this feature. Otherwise, you're right: based on the software's ability to utilize multiple processors, there's just no point in quads on the desktop (yet).
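The affinity trick described above can be sketched in a few lines -- a hypothetical helper, assuming a Linux-style OS where os.sched_setaffinity is available (on Windows the same idea lives behind Task Manager's "Set Affinity" dialog or the Win32 SetProcessAffinityMask call):

```python
import os

def pin_to_core(core: int) -> set:
    """Restrict the current process to a single core, where supported.

    Sketch under stated assumptions: os.sched_setaffinity is
    Linux-only; on other platforms this falls through harmlessly.
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core})   # 0 means "this process"
        return os.sched_getaffinity(0)    # confirm the new mask
    return set()                          # affinity not exposed here
```

Pinning the database to one core and the IDE to another is exactly this call made from (or on behalf of) each process.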
Got to agree at the moment quad core processors aren't really worth it unless they still have a good clock rate and are running something like video editing or database work.
I hope this will change. I have to say, as a programmer, I think more and more about threading whenever I start a new project, as I know it is likely to be running on at least 2 cores, with the possibility of more in the future.
This was previously an unlikely scenario so I think programmers just tended to do enough threading to keep the GUI from freezing.
I find it somewhat interesting that you make the (quite reasonable) comments that quad-core doesn't produce much benefit over dual-core, and then elsewhere in the post make the statement that GPUs may make general-purpose CPUs obsolete. (Yes, in the post you link to, you say for folding and other distributed-computing efforts, but it's nearly buried there and not even mentioned here.)
The set of applications on which GPUs can produce the sorts of speedups that you describe are a small subset of the applications for which quad core will give you better performance than dual core. Not only that, but I would guess that they are a strict subset of the applications for which going to quad core will give you a full 2x speedup over dual-core.
As you point out, on a development box, that's a pretty small set of applications to start with. The fact that GPUs are painfully bad at branching at a fundamental architecture level makes that a much smaller set -- and, given that parsing and compiling are pretty much all branches, it excludes most of what was left of the things we'd want on a development box.
The only thing that I can think of that's left are things that you are not actually doing on your development box now and are unlikely to be doing for the next half-decade at least -- things like doing automated code optimizations by running large numbers of variants in simulation and adjusting the code with genetic algorithms. GPU processing in development boxes may enable radical new technologies like that which aren't possible on current CPUs, but they won't do much for anything we're doing today.
And the things we do today, we'll definitely still need to do. So, no, GPUs will not make general-purpose CPUs obsolete, for the simple reason that general-purpose computing is fundamentally what we do.
 On ATI's current best GPUs, unless all 64 processors in a bank take the same branch of the code they're executing in lockstep parallel, they will all execute all instructions of both branches, and merely discard the results of the instructions from the branch they're not taking. This is why they can be so fast -- 64 processors, but only enough die space for a single instruction sequencer for the lot of them -- so it's not going away.
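That lockstep-execution cost model is easy to simulate. A toy sketch (the cycle counts are made up; the structure of the cost is what matters):

```python
def bank_cycles(takes_branch, cost_if, cost_else):
    """Cycles for a bank of lockstep processors to clear one if/else.

    If every lane agrees, only one side executes. If even a single
    lane diverges, the whole bank executes both sides and masks out
    the results each lane doesn't need.
    """
    if all(takes_branch):
        return cost_if
    if not any(takes_branch):
        return cost_else
    return cost_if + cost_else  # divergence: pay for both branches

uniform = bank_cycles([True] * 64, 10, 30)              # 10 cycles
divergent = bank_cycles([True] * 63 + [False], 10, 30)  # 40 cycles
```

One disagreeing lane out of 64 quadruples the cost here -- which is why branch-heavy workloads like parsing and compiling are such a poor fit for this architecture.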
Maybe this is a minority opinion, but I find understanding the hardware to be instructive for programmers.
And I agree. I also find it pretty amazing that you don't think it is important to learn C programming (sorry to bring that up again) if you find understanding the hardware instructive for programmers. While I agree with you, I also think that knowing C is actually more instructive than understanding the hardware.
Just updated my system and frankly I think you are wrong about the quad core.
Processor GHz is probably the least important number about a processor in this day and age. With each new processor tossing in new tricks -- more cache, smaller die sizes, etc. -- a new 2.6 GHz processor is much better than last year's 3.0 GHz anything.
That said, I have found that GHz follows the law of diminishing returns, just like RAM, for about the same reason: anything outside of the cache is so slow by comparison. An extra 20% of clock speed does not lead to an extra 20% of performance unless your entire OS and applications live entirely in cache -- after that, well, kiss your linear performance curve goodbye.
I found that extra cores provide the boost that more GHz just does not. If you have 2 cores, once they are pegged, that's it -- there is no more, and it's time for your processes (and you) to wait. 4 cores? You can have 2 cores pegged and still have room to maneuver on the other 2.
People talk about multithreaded apps, but they seem to blithely forget: THE OS IS MULTITASKING, and every app is at least one thread. More cores means it's less likely that a CPU-hogging task will dominate your machine, leaving you wondering where the 386 on your desk just came from.
Oh, and a final note on hard drives: full support for AHCI can be more of a boon than spindle speed (even better when combined!) -- make sure your motherboard, hard drives, and OS support it. If you are just upgrading a Vista machine that did not use it in the past, getting it running can be a pain (a failed attempt can lead to boot crashes or even drives running in PIO mode), but it's worth the effort.
Oh, and unless you tweaked your BIOS before you installed your OS, chances are you don't actually have AHCI running. ( http://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface )
I'm using a Corsair HX series power supply and a honkin' heat sink as well, but you missed one important feature: a 200 mm exhaust fan on your case. I put my little screamin' quad core (quad core is very useful for distributed computing apps), GeForce 8800 multi-media/gaming system into an Antec 900 and I can barely hear it running. Very quiet indeed.
And all this is just lying on the bench, right? ;-) So what about a recommendation for a case to put it all in?
What about secondary hard drives? You said "you want two drives, anyway" -- but are you really suggesting multiple VelociRaptors?
@Mattkins The fact that someone who works on the ASP.NET team has to call you for hardware advice is horrifying, and gives me fast insight as to why I hate ASP.NET so very very much.
Wow - that is a slam-dunk argument you have there. If you aren't an expert at EVERYTHING then you can't be an expert at ANYTHING - Great logic man. /sarcasm
On more than one occasion I have worked with developers who were actually afraid to open their case and take a peek.
At the end of the day it doesn't matter if you can follow a set of instructions to put together a machine. What does matter is whether you understand the components and the stresses your application is putting on them.
A prime example is in figuring out performance issues. The first thing to remember is that a spec is only a wet dream. No hard drive can actually transfer at 300 MB/sec, even though they support that interface speed. Some are better at being workstation drives; others excel at being server drives.
If you don't understand hardware, how can you tell you boss where the bottleneck really is?
I'd like to comment on the usefulness of a programmer building their own PC.
Assembling a PC from parts teaches only trivially useful things for a programmer: it's nice to have a concrete image of what the separation between CPU, registers, cache, and RAM looks like in the real world, for example. But that's as far as it goes, if you're just slapping together parts based on a list like Jeff's.
(Slapping parts together will teach you to respect hardware, and how fragile it is. I've learned my lesson, and won't ever touch hardware again... Hard disks catching on fire was the final step. Utmost respect to people who manage not to do that.)
Building a PC from parts can teach a programmer a lot of useful things, but they have to do more than just assemble things. They have to actually understand the parts, and how they interact. Why is this kind of memory better than that kind of memory? Why does a high rotational speed help performance? What's the tradeoff between speed and heat generation, and speed and noise? And so on, and so forth.
A way of achieving that understanding, or some of it, is to do the research yourself. If you just use Jeff's list, you learn little.
All this understanding of the entire abstraction stack from atoms upwards is useful, even for those of us writing code in the highest level languages available. While we don't deal with things like register allocation, performance is a very leaky abstraction: things you do at the HLL level affect how things execute at the lowest level. Just because I write, say, Python code that executes on top of a sophisticated virtual memory system, does not mean I get to ignore the performance impact of overflowing the cache, or the RAM. Or the power/heat impact of using a polling solution over a trigger-event one. And so on, and so forth.
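For instance, the RAM side of that leaky abstraction shows up even in a trivial Python sketch (the sizes are illustrative and vary by interpreter version):

```python
import sys

# A materialized list holds every element in memory at once,
# while a generator produces them one at a time.
n = 1_000_000
as_list = list(range(n))
as_gen = (i for i in range(n))

print(sys.getsizeof(as_list))  # megabytes of pointers alone
print(sys.getsizeof(as_gen))   # a couple hundred bytes

# Same result either way; the difference is the working set.
assert sum(as_gen) == sum(as_list)
```

Same high-level code, wildly different footprint -- exactly the kind of thing you only notice if you think about what the hardware underneath is doing.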
The actual assembling is orthogonal to the understanding, but probably helps most people if they do it at least once. If you don't release the magic smoke, it's even reasonably fun to do, and definitely gives you a feeling of being in control of your destiny.
This is what I think Jeff meant, and said, and what got misunderstood by several earlier commentators.
On memory and CPU cores: my main development machine is my laptop, with a gigabyte of memory. This is, mostly, more than plenty. However, these days a lot of what I do happens in virtual machines, and so my desktop has 8 GB of RAM, to allow me to run several virtual machines at once. Its CPU has four cores for the same reason. This allows me to, for example, do four concurrent Ubuntu installations without speed limitations. Or run test suites on several operating systems at once.
Man, I'm getting a little sick of the little Apple digs Jeff. I've been a Mac user for a few years now, prior to that a long time windows user.
Perhaps you would be interested in reading about the Hostile Media Effect?
How would you have preferred him to have phrased it?
One of the best ways to understand the hardware is to get your hands dirty and put one together, including installing the OS, yourself. However, an even better way to understand hardware is to buy a computer already fully built by the Apple Corporation! They're fantastic!
If you kept reading for ONE MORE SENTENCE, you would have seen that he wasn't even slighting the MacOS itself; he links to a viable alternative to building a Windows machine, then briefly discusses that it's not a perfect solution. (Which it isn't, for the reasons stated. Still pretty cool, though.)
The best part of your post is two paragraphs later when you agree with the very sentiment you're objecting to. You said yourself :
OS X isn't the best environment for everything; nor is windows.
It should really go without saying that one of the things OS X is not the best environment for is running on a MACHINE YOU BUILT YOURSELF.
I used to buy components like this back in college, but these days I simply buy the latest Dell box. I know it isn't the best hardware configuration on the market, but it just works when I turn it on, and is quiet!
I think it was the 'quiet factor' that kept me with Dell, but with this blog post maybe it is worth investigating how quiet a home rig can be...
Why buy a spendy, low capacity 10k RPM drive when you can get 2 or 3 large, cheap 7200 RPM drives and just put them in a RAID 0 stripe for the same amount of money while achieving higher data throughput? You can put together a 1TB RAID 0 array with 14400 effective RPM for less money than that single 300GB 10k RPM drive. Just about every modern motherboard has support for SATA RAID levels 0,1,5,10 out of the box, and the x38 based systems definitely do.
There's an additional x2 right next to the Kingston link, making a total of 8GB.
It's just a hobby, don't go saying how 'important' it is. Millions of people including programmers get work done on a computer that was factory built.
I've been thinking about upgrading my PC because it is a few years old and pretty slow. Would a dual core processor really make a lot of difference? My motherboard is maxed out at 2 GB. I do a lot of video encoding so I would be interested in a quad core processor if it could help with that.
I have several spare computers so my current solution is to just assign a task to an entirely different computer. If you just need to multitask then you might as well use several systems at once. I use my laptop to visit web sites that hog my system resources. I could probably set up a render farm using After Effects and put all my computers to work on video encoding.
Ahhhh; missed that x2
Thank you for pointing it out.
Why would I want a warranty on my computer?
If my computer breaks, I fix it.
I can't imagine anyone I could call on the phone that could get it back up and running faster than I could.
what's up with the dodgy looking clickthrough links?
Ah they're Commission Junction random domains.
No offense, but that's not that good a machine. I just built one last week. Three areas for improvement:
1) Ok, yeah, a 10k drive is fast. Guess what? Two 7200 drives in Raid 0 are faster. Four drives in RAID 0+1 are bigger, cheaper, faster, and... yeah. Better. You have a measly 300GB drive - I have a 1TB (2TB counting mirroring) array that's faster and with automatic mirroring for less than what you spent on one 10k drive. And your cost doesn't include the second slower drive for archiving.
2) You can get 4GB of RAM for about $80.
3) If this isn't a gaming box, just buy a NVIDIA 8200 motherboard. $90 buys you a nice mobo and a small 8xxx DX10-capable video card. That's a $300 savings right there. It can handle twin-monitors on its own - if you want triple monitors, you can install another video card in it, and it auto-SLIs the second card.
I agree that dual core is generally better, and agree on the heat sink / cooling issue. But you're wasting money with the build you use above. You can easily go cheaper and better.
I don't understand why this blog is called coding horror.
I've found the hardware posts informative, and it inspired me to build my own PC a year ago after 8 years of not doing so, despite my initial computer science degree. While I develop software and believe in the power of expressive languages, domain driven design, abstractions and so on, I've seen some trending back towards the hardware, or at least the lower software layers. This move is driven in part for greater need for parallelism as data sets grow, and as more complex problems are tackled. It can't hurt your career to peel back the onion.
Understand how the hardware affects what you do. Know how long it takes your computer to execute an instruction, fetch a word from memory
And yet your pride still prevents you from learning C which is exactly this but for code.
the best posts on this blog have always been hardware-related. Whenever you feel inclined to write about HR-issues, write about hardware instead. Please.
Wow, thanks this is a very convenient guide! I was just looking to start building a computer (rather than using a laptop exclusively), but was getting kind of overwhelmed about the sheer number of parts available. I may look into getting a different hard drive, but this looks like exactly the kind of machine I was hoping to create.
Jeff, how about a setup for a normal computer for people who earn around 1k$ a month? =\
1) Ok, yeah, a 10k drive is fast. Guess what? Two 7200 drives in Raid 0 are faster. Four drives in RAID 0+1 are bigger, cheaper, faster, and... yeah.
This is a dev machine, with mission-critical data... so I disagree with your recommendation of striping, whether or not it's combined with mirroring. Better to take those 4 disks and put them in 2 separate mirror arrays.
2) You can get 4GB of RAM for about $80.
It's hard to see, but he's actually buying 8 GB of memory. Repeating the memory spec line in a single color:
Kingston 4GB (2 x 2GB) DDR2 800 x 2
Look at the end of that line.
3) If this isn't a gaming box, just buy a NVIDIA 8200 motherboard. $90 buys you a nice mobo and a small 8xxx DX10-capable video card. That's a $300 savings right there. It can handle twin-monitors on its own - if you want triple monitors, you can install another video card in it, and it auto-SLIs the second card.
Jeff *is* a gamer (search coding horror for gaming) so choosing a discrete video card is obvious to him.
Peter Norvig's statement reminded me of your discussion with Joel about C/C++ being dead languages, so I was surprised to see that you agree with the quoted part. Then I kept reading, and it appears you're actually talking about completely different things...
Putting together a PC is a good experience, but it doesn't give you *any* insight into how software executes in the CPU.
Asus? Let me tell you a story about Asus. I had one of their AM2 motherboards. I'd see seemingly random crashes every two days or so. No amount of BIOS or driver updates would fix them, and they lasted through several OS reinstalls. After a lightning strike fried that board, I replaced it with a Biostar one, and I have not had a single crash since then.
Also, the fact that I've never gotten more than 10 KB/s off any of their file mirrors is pretty pathetic. (What kind of site tells you to come back at off-peak hours to download files?)
Guess what: All of those parts have warranties. Not only that, but they are usually longer than the warranty you would get from Dell or HP.
Also, if you go through the process of component selection, building, and testing the machine you learn how to identify problem parts.. which is kind of the point.
If it's actually a dev box for work (not a home office, like I have), then not having a high powered video card is actually a good thing, as it keeps the temptation to play games in check. An 8200 will do any business stuff you need, and is 1,000 times faster than what you need for OpenGL screensavers and other office nonsense.
Plus, like I said, it does auto-SLI if you buy a discrete card, and thus enables a tri-monitor setup out of the box, for less than half the price of the mobo he recommended.
As far as RAID goes, 0+1 gives you all the benefits of RAID0 (i.e. +75% drive speed, which blows SSD and 10k drives out of the water in terms of performance) while minimizing the risk. There's a chance of data loss when a RAID0 dies during a power failure (since drives will stop writing out their buffer at unpredictable times), but it's usually recoverable, and with the amount of money you'll SAVE vs a 10k drive which is a third the size, you can easily buy a UPS.
Or go with RAID5 if your controller is decent and handles everything in hardware.
If you're in an office setting (which the person appears to be), you'll have a NFS code repository anyway.
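The back-of-the-envelope math behind that RAID 0+1 vs. single 10k drive tradeoff is easy to sketch (the prices here are 2008-era assumptions, not quotes):

```python
# Hypothetical comparison: one 300 GB 10k RPM drive
# vs. four cheap 500 GB 7200 RPM drives in RAID 0+1.
raptor_gb, raptor_price = 300, 300            # assumed ~$300
cheap_gb, cheap_price, n_drives = 500, 75, 4  # assumed ~$75 each

# RAID 0+1 with 4 drives: half the raw capacity survives
# mirroring, and the striped pair roughly doubles throughput.
usable_gb = (n_drives // 2) * cheap_gb
total_price = n_drives * cheap_price

print(f"single 10k drive: {raptor_gb} GB for ${raptor_price}")
print(f"RAID 0+1 array:   {usable_gb} GB for ${total_price}")
```

Under those assumptions the array gives over three times the usable capacity for the same money, plus redundancy -- which is the whole argument in a nutshell.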
Andy, I assume you can't fix a bad DIMM or a malfunctioning motherboard -- you still need a warranty to replace parts, even if you assemble the computer yourself.
SSD drives instead of HDDs for boot and apps. Google up the Mobi Mtron, OCZ Core, and Memoright GT. They crush a VelociRaptor. Best of all: cheaper than or as expensive as the VelociRaptor.
I am very much surprised that you didn't put those on the list. If you talk hardware seriously, you just can't ignore these beasts.
Does anyone know how well Linux is able to suspend or sleep with Jeff's recommended motherboard? Anyone know the general state of Linux's ability to sleep correctly on common hardware? I've been unable to get Linux to do this, though my MB is several years old by now.
Actually at work we have a situation where we need two ethernet ports. One is for gigabit access to our servers, and believe me we need it. The other is for normal internet, and pretty much everything else too.
Jeff I'm a bit confused by you posting this. You seem to be against learning C and assembly programming but post about building your own PC as a way to learn more about the hardware.
IMHO learning C/assembly programming is going to teach you more than putting together a PC. In fact I think all programmers should know some assembly programming.
Imho it's all terribly overkill. Sure, every bit faster possibly means better productivity, but 8 gigs of RAM and 2 uber graphics cards...? Not my first choice for a development machine. You could do with a nice $600-$750 machine.
You won't be able to play HD video, run Crysis, and compile a monster project at the same time just because you have tons of RAM.
The CPU will be on its knees before you even get close to filling up your memory with active _programs_.
Depending on how your computer is tuned on the hardware side (a dedicated HDD for each purpose, a quad core or dual quad core, ...), you can run a lot of stuff all together.
I usually build my own PCs, but I recently needed to replace an aging laptop that sits in my family room. I wanted it to be desktop-powerful, but ultra-quiet.
After doing a little research, I bought a 20" iMac and slapped Vista on it. It runs great, performs well, and is quiet as a ninja. I got my ultra-quiet PC, with only a modest loss of power.
The only foreseeable problem would be upgrading the hardware. I guess I'll just have to buy a new machine when the time comes! Ha ha!
One problem I have is that most PC software seems to have been written for machines that don't exist. Meaning that the developer had a very high end machine like the one you are describing here to build, test and deploy their software on.
I also think that anyone writing software, especially for desktop use, should have an older, crappy machine that they test on, because most developers have 'it works for me' syndrome (I'm guilty of this as well).
He didn't say building a computer makes everyone a better programmer.
He said understanding the hardware [is] instructive for programmers... One of the best ways to understand the hardware is to get your hands dirty and put one together
The former suggests that plugging in a hard drive gives you insight into file systems. The latter suggests that choosing a hard drive forces you to look at the range of hard drives available, read reviews, etc, and that this will all result in giving you a better perspective on what technologies are out there, and what specs are most common/popular. As a result, next time you write a brand spankin' new file copying utility using some crazy algorithm you devised, and it runs beautifully copying from your 10K RPM drive to your other 10K rpm drive, the memory of picking out hard drives will remind you that there are 5.4K RPM drives with less than 8MB cache still in use. And you'll test it, and go to lunch, and come back, and realize either that your brilliant, genius algorithm needs some tweaking, or that your product needs a disclaimer on the download page. The point (post-wise) of building your own machine is choosing parts. The point of choosing parts is learning about parts. The point of learning about parts is gaining perspective, so that future programming occurs in such a way that you're more aware of the context under which the application is running.
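To make the "works beautifully on my 10k drive" point concrete, here's a sketch of a copy utility; copy_file and its buffer-size knob are hypothetical, but that knob is exactly the kind of thing a slow 5.4K RPM drive with a tiny cache exposes and a fast drive hides:

```python
import os
import tempfile

def copy_file(src, dst, buf_size=64 * 1024):
    """Copy src to dst in buf_size chunks. A tiny buffer means many
    more read/write calls, which a slow drive punishes far more
    severely than a 10k RPM drive ever will on your dev box."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(buf_size):
            fout.write(chunk)

# Quick self-check against temp files.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    dst = os.path.join(d, "dst.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(1_000_000))
    copy_file(src, dst, buf_size=4096)
    with open(src, "rb") as a, open(dst, "rb") as b:
        assert a.read() == b.read()
```

The code is correct at any buffer size; only testing on slow hardware tells you which default is actually acceptable.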
Re: Maybe this is a minority opinion, but I find understanding the hardware to be instructive for programmers
I would say that understanding that hardware is absolutely ESSENTIAL knowledge for the programmer.
It's this lack of knowledge which has spawned a million and one Java and script kids that write stuff that may well work, but is horribly bloated, slow and generally inefficient.
Software can't run without the hardware. Better hardware = better software = better development. Everyone ought to be as gung-ho about their plastic and metal as they are about the light emanating from them.
What would you substitute for the Radeon - NewEgg has run out of stock.
Further updates. That Radeon is now marked as a Deactivated Item at New Egg.
I was also wondering about the video card. I haven't paid attention to hardware in years, but the Radeon looked like a great buy. With it being 'deactivated', I'm not exactly sure which direction to look in next.
Yeah, Jeff, it's great, but what do you think will be the best size of a monitor? What would be better, a 30-inch one or two or more?
What about optical drives, and a case (as mentioned above)?
RE the cheap SSD suggestion - Those cheapie Transcend drives aren't fit for a boot disk. Their read / write speeds are too slow. I believe Transcend has made a public announcement about this and mentioned they are working to release boot-disk worthy drives.
Silly question I'm sure, but how does spec'ing out a few stock parts and plugging together the prefab connectors and slots have *anything* to do with understanding the effects of cache miss penalties and disk latency?
These are geek projects, like building a model airplane. They have no educational value to a professional developer or engineer. I build my own PCs too, but only because I'm not satisfied with the offering of mass-produced shovelware PCs, not out of an illusion that it's helping me understand how any of it works.
Forget how the hardware slots together, that's not Peter Norvig's point.
I'm not saying you need to have memorised http://www.sandpile.org/, but you need to know _why_ things are done in a certain way. I often get laughed at for this belief, but for this reason I think all programmers should start with Basic in order to do some old-school 16-bit x86 Assembly before they start really programming in the real world.
Development machines should be super fast, mostly because it will speed up the development cycles for the developer in question. If someone can manipulate data and compile code very quickly, you eliminate the progress bar syndrome.
However, all developers need to be equipped with a test machine that is utterly awful in comparison. It should be the lowest specification of your target audience (be it the computers in your corporation, or the machine your customer will run the code on) - or even worse.
If your code runs really well on that horrible machine, you've done your job really well.
This rule can be extended to all areas of development. For example, web developers should always try their site on a busy dial-up connection at 800x600@8bpp and see just how friendly it is.
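A quick sanity check of what dial-up actually means for page weight (the efficiency factor and page size are idealized assumptions):

```python
# Rough dial-up budget: how long does a page take over a 56k modem?
modem_kbps = 56                            # nominal line rate, kilobits/sec
effective_kb_per_s = modem_kbps * 0.8 / 8  # ~80% efficiency, bits -> KB/sec
page_weight_kb = 500                       # a fairly heavy 2008-era page

seconds = page_weight_kb / effective_kb_per_s
print(f"~{seconds:.0f} seconds")  # roughly a minute and a half
```

Run that math before shipping, and "just one more 100 KB image" stops looking free.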
Having lived in Hawaii myself, I can tell you that it is almost impossible to get these parts. You can't run over to Fry's and get stuff. The CompUSA closed and there's really only a Best Buy. Also, the computer industry being as it is, the small boutique shops have mostly closed. (If you find a good one, let me know!) Not really great pickings.
I've always found mail order parts to take a long time and if you have problems, you're talking at least a week before you get a replacement part by mail. Not my style.
If you're in Hawaii, just buy a Dell, dude, or take what you can get at Costco/Best Buy. It's just much easier.
1.21 Jigawatts! Hahaha! Great quote.