January 20, 2011
Are you familiar with this quote?
640K [of computer memory] ought to be enough for anybody. — Bill Gates
It's amusing, but Bill Gates never actually said that:
I've said some stupid things and some wrong things, but not that. No one involved in computers would ever say that a certain amount of memory is enough for all time … I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There's never a citation; the quotation just floats like a rumor, repeated again and again.
One of the few killer features of the otherwise unexciting Intel Core i7 platform upgrade* is the subtle fact that Core i7 chips use triple channel memory. That means three memory slots at a minimum, and in practice most Core i7 motherboards have six memory slots.
The price of DDR3 RAM has declined to the point that populating all six slots with 4 GB DIMMs is, well, not cheap -- but quite attainable at $299 and declining.
Twenty-four gigabytes of system memory for a mere $299! That's about $12.50 per gigabyte.
(And if you don't have a Core i7 system, they're not expensive to build, either. You can pair an inexpensive motherboard with even the slowest and cheapest triple channel compatible i7-950, which is plenty speedy – and overclocks well, if you're into that. Throw in the 24 GB of RAM, and it all adds up to about $800 total. Don't forget the power supply and CPU cooler, though.)
Remember when one gigabyte of system memory was considered a lot? For context, our first "real" Stack Overflow database server had 24 GB of memory. Now I have that much in my desktop … just because I can. Well, that's not entirely true, as we do work with some sizable databases while building the Stack Exchange network.
I guess having 24 gigabytes of system memory is a little extravagant, but at these prices -- why not? What's the harm in having obscene amounts of memory, making my system effectively future proof?
I have to say that in 1981, making those decisions, I felt like I was providing enough freedom for 10 years. That is, a move from 64k to 640k felt like something that would last a great deal of time. Well, it didn't – it took about only 6 years before people started to see that as a real problem. — Bill Gates
To me, it's more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care. There's just ... lots. As Clay Shirky once related to me, via one of his college computer science professors:
Algorithms are for people who don't know how to buy RAM.
I mean, 24 GB of memory should be enough for anybody… right?
* it's only blah on the desktop; on the server the Nehalem architecture is indeed a monster and anyone running a server should upgrade to it, stat.
Posted by Jeff Atwood
"Buying RAM is for people who don't know how to write Algorithms." -- Me, just now.
One notable fact to remember is that a typical server motherboard would use ECC RAM, which is priced much higher than consumer desktop RAM modules.
Incidentally, I just upgraded my home server from 12 to 24 GB of ECC RAM. My wallet cried from the bleeding.
+1 "Buying RAM is for people who don't know how to write Algorithms."
It kind of depends how you define "memory," doesn't it?
We dream of one day having a type of memory that unifies it all: the speed of RAM, the capacity of an SSD.
And with multi-core, we are hitting the "wall" on memory bandwidth, not memory capacity.
"What's the harm in having obscene amounts of memory, making my system effectively future proof?"
Unless you need that much memory right **now**...
No! It doesn't work that way. Your processor (and hence your motherboard, knowing how processor sockets change every generation or two) will be long outdated by the time 24 GB is not enough for your needs, by which time DDR3 and 4 GB RAM modules will be ancient technology.
"Algorithms are for people who don't know how to buy RAM."
And lots of RAM is for people who don't know how to write algorithms. :)
I tend to use RAM as a compilation area for performance (compiling Android firmware is significantly faster from a RAM disk than even a flash SSD); that can eat up 10 GB without too much trouble. Add in a couple of virtual machines -- my machine is a Mac Pro (an 8-core, so running some VMs doesn't leave the system CPU-bound), the company accounts package is Windows-only, and Froyo is best built on Ubuntu -- and the 20 GB I have soon gets eaten away. If you also add in a database instance loaded with a few GB of data for performance testing, you can hopefully see that 24 GB or more can be useful.
For consumers I agree, 24 GB is a lot. But for people like me, buying more RAM makes economic sense: I can run VMs instead of buying separate machines to run an OS, and I save compilation time by using RAM instead of a physical disk. I can see the need for some people to go beyond 24 GB.
I built my Core i7 desktop about two years ago now and put 12 GB of RAM in it. It sure cost a lot more than $299 for the RAM then.
CPU-wise the speed of the machine seems fine, and I have hit 12 GB of usage now and then. Perhaps it's time for 24. Oh, and SSDs...
"What's the harm in having obscene amounts of memory, making my system effectively future proof?"
Well, Aphrodite3D (two posts back) beat me to it. But it's even worse. Say you don't really need 12 of those 24 GB of RAM; that's $149.50. If you hadn't bought those 12 GB, and in five years you noticed you really did need more RAM than you already have, that $149.50 would probably buy you more memory then than it does now.
And, as Aphrodite3D said, technology marches on, so it's possible your RAM will be outdated by then. Then you'd need to buy all 24 GB again in the new memory technology, which makes that $149.50 completely wasted.
I like your blog, but please never write something like "algorithms are for people who can't afford more RAM." Memory management and efficiency should always be a goal; otherwise you end up with browsers using 2 GB of RAM with 5 tabs open, and more crap like that.
Yes, memory is cheap. That shouldn't promote terrible code.
The i7 920, clock for clock, made a huge difference over my Core 2 Q6600 on the desktop. I estimate it was between two and three times faster, with both overclocked to 3.4 GHz, depending on whether I was building the source tree or transcoding a video.
Definitely not "blah".
"making my system effectively future proof? "
Except it isn't. That much RAM leaves the system almost obscenely unbalanced. Yes, the video card, hard drive, motherboard, and CPU will probably go out of date first. I just think the extra money should have gone into a second video card or CPU, more hard drive space, etc.; you still could have had 8 GB of RAM.
Algorithms are for people who don't know how to buy RAM.
... or "who have all the time in the world".
One of the most stupid things I've ever heard.
Some of you guys are pretty serious about the Church of the Holy Algorithm. Which is better for you and your clients: writing software that costs $1000 but requires a $500 hardware upgrade, or writing software that requires no upgrade but costs $4000 because you spent four times as long to wring that last 10% of optimization out of it? Remember, you could have spent that extra 75% overhead on improving the product in more tangible ways or creating new software. Pride is good, but pragmatism is better.
Now, that doesn't explain those companies that charge $10,000 more than the old version for something that requires $4,000 in hardware upgrades, despite having outsourced their entire development staff and fired their testers. Some companies are just too dumb to live, and yet somehow keep making money.
"Algorithms are for people who don't know how to buy RAM."
That's probably the most horrible computer science related statement I've ever read.
Relying upon great piles of memory instead of smart usage of it makes me think of the fool and the man who grows rich. The fool spends every penny that comes in, but the rich man thinks long term and avoids unnecessary expenditures. The rich man understands that the future is uncertain and it's better to have extra resources available to handle unexpected opportunities and problems.
For a minute there I thought you were making an excuse that if you buy 24 gigs of RAM you are allowed to write sloppy 'who-gives-a-crap-about-memory' code.
If you are... 'STOP IT, Or I'll bury you alive in a box!' :) - http://bit.ly/hZqPF5
"Which is better for you and your clients: writing software that costs $1000 but requires a $500 hardware upgrade, or writing software that requires no upgrade but costs $4000 because you spent four times as long to wring that last 10% of optimization out of it?"
You are right about the business argument, especially when you use an extreme example like that. However, I do object to "let's just add more hardware to the equation" as an excuse to skip actual craftsmanship.
Call me a hippy, but it is incredibly wasteful to just keep piling on servers (which need a location, power, maintenance, etc.) when you could have optimized, or done it right, within reasonable effort. I mostly have a problem with the mentality that we live in a world with limitless resources and energy, and that money is the only constraint that matters. It is the other way around: money is limitless, as it can be created out of nothing; actual resources cannot.
I know this is merely a philosophical argument that won't help anyone in actual business, but I do hope some of us start to realize the real value of money vs. real things.
It's just not a solution.
Sure, managers and the like can try whatever they want to keep costs low, but the thing is that writing 'optimal' code is not more time consuming than writing non-optimal code. Sure, making optimizations requires time, but optimizations are only necessary if you're building on a base of bad code. The beauty of computing science is that 'the best' is usually also the most concise and the most maintainable.
A software engineer can't rely on hardware to solve his problems. They're separate fields involving separate people. You can't replace the incompetence of one person with the competence of another when that person works in a different field. It's like telling someone to build a fireproof vault, and that person decides to give the vault its fireproof quality by keeping firemen present at all times.
"Algorithms are for people who don't know how to buy RAM."
That's all well and good until you:
a) Need to run software on a minimalist system (think phone or netbook, or even a budget desktop [relevant if you're an organization trying to cut costs])
b) Need to write software that does some serious heavy lifting (database manipulation, simulations, some types of AI).
c) Realise that RAM consumes 20% of an average desktop's power (this is most relevant if you're running a server)
Point (a) in particular is becoming increasingly relevant -- it's the reason MS had to recommend XP for netbooks until Win7 came out. Efficient code can be used in places that bulky code can't, and just because your dev system has 24 GB of RAM doesn't mean that your users' systems do.
While realistically you can't optimise code until it's perfect, efficient code will always be valued by users for its speed, and the reverse is also true (consider how much crap Nero has gotten for being a 1+ GB CD burning package when competing packages are under 10 MB).
This is all to say nothing of the emerging market of netbooks and smartphones (tablets too) -- getting more than 1 GB of RAM in those isn't about to become common anytime soon, and their usage is increasing.
Worth pointing out that not all Core i7 branded processors have triple-channel memory controllers. i7-9xx processors support 1066 MHz triple-channel DDR3 memory, whereas i7-8xx processors have dual-channel 1333 MHz DDR3 memory controllers.
Oh come on people, the algorithms aphorism is a funny little quip that obviously doesn't hold 100% of the time. But there are a great many algorithms out there you can still use, but where it would be cheaper (in terms of total power draw) to just load it into memory and do it there.
For instance, sorting a large dataset. If you don't have the RAM available to do it, fine, you'll use some sort of external sort, involving at least a few trips back and forth to secondary storage. If you do have the RAM, though, it'll be faster to just do it there.
Yes, this doesn't hold 100% of the time. That I need to explain this is kind of sad.
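To make that concrete, here's a toy sketch in Python (chunk size and file handling invented purely for illustration): an in-memory sort versus a bare-bones external merge sort.

```python
import heapq
import tempfile

def sort_in_memory(path):
    # Fits in RAM: one read, one sort, done.
    with open(path) as f:
        return sorted(f)

def sort_external(path, out_path, chunk_bytes=64 * 1024 * 1024):
    # Doesn't fit in RAM: sort ~64 MB runs out to temp files,
    # then do a k-way merge of the sorted runs back to disk.
    runs = []
    with open(path) as f:
        while chunk := f.readlines(chunk_bytes):
            run = tempfile.TemporaryFile("w+")
            run.writelines(sorted(chunk))
            run.seek(0)
            runs.append(run)
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*runs))
    for run in runs:
        run.close()
```

Same result either way; the external version just pays for its frugality with extra disk trips and extra code.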
There's more to performance than what a programmer can do by optimizing their code. This is particularly important for software whose performance also depends on user input, or for software that is supposed to scale. So the quote "Algorithms are for people who don't know how to buy RAM" deserves a little more attention than some of you have been giving it.
It's not stupid, and it's not the worst thing you've ever heard. In fact, it's largely true!
There's only so much a programmer can achieve in terms of code optimization, much of which is well documented and easily understood. There are very few secrets concerning code optimization these days, particularly in well-known and proven areas of development. Code with less-than-optimal performance is pretty much understood these days as either a deliberate strategy (maintenance concerns, etc.), laziness on the programmer's behalf, or inexperience -- not the product of some secret knowledge available only to a few.
Optimization is, for the most part, taken out of our hands and put into those of the compiler, the operating system, and the hardware. Those are the real agents of performance on modern systems. Our say (assuming, of course, good quality code) is pretty limited.
Now, with only a limited capacity for optimization, it's easy to understand that hardware scalability comes into play in terms of what one can or cannot do to increase the performance of our software. Instead of wasting time over-optimizing our code to chase some performance goal that may actually be unattainable, we greatly reduce costs and achieve much better results by increasing the capabilities of our hardware. And that's where this quote fits in.
And what goes for RAM goes for the CPU, or any other relevant piece of hardware. A few microseconds gained by some very smart code optimization technique that took weeks to achieve and introduced new maintenance problems can never replace the elegance and simplicity of a hardware upgrade, and will never compete with it on software that is meant to scale.
One disadvantage of large amounts of RAM (that you don't need) is that suspend/hibernate times are considerably longer.
Writing a 24 GB hibernation file takes much longer than the typical notebook's 4 GB.
I have had 10 GB for the past two years (Mac Pro, 8-core) for VM purposes (live in OS X, work in W7) and I don't look back, but upgrading to, say, 20 right now would probably end up being a waste… I'd rather get OWC SSDs :)
If you have the money, why not? I have 12 GB in my i7 and it works great when I need to use a virtual machine or two. Haven't needed more than that yet.
Future-proof, not so much. The main thing with memory now is speed, not capacity, since we can afford capacity. The RAM you bought runs at 1333 MHz, which is pretty standard, but it's pretty common to get 1600 MHz RAM, especially for i7s, and with overclocking you can go as high as 2200. I can see you doing a DDR3 upgrade for faster RAM before the move to DDR4.
I still have only 1 GB of RAM in both my computers.
Note +1, +24G! "Buying RAM is for people who don't know how to write Algorithms." -- Me, just now.
I am SO using this as my signature now!
The more RAM the better. I would double the RAM I currently have (4 x 4 GB modules = 16 GB) if 8 GB modules become cheap and affordable. With the advent of 64-bit OSes, applications are now beginning to take advantage of RAM. For example, it now takes me less than half the time to edit videos using Adobe Premiere Pro and After Effects CS5.
I also run multiple VMs, servers and workstations, and for this having 16 GB of RAM is nice. I have my development environment on a VM with a RAM allocation of 4 GB. One reason for this is so I can transfer the dev env from one PC or laptop to the next. Another reason is I don't want to clutter my main PC with development tools such as databases, because these and their background services slow down the PC, which I also use for video editing and playing games.
After just upgrading my storage space, after a recent memory upgrade, I generally find storage speed to be the limiting factor more than the amount of memory. 8GB is more than enough for me, and anyway, I'm sticking with AMD.
Now if only I could replace this work iMac with something that can take more than 4 GB of RAM...
"It's not stupid, it's not the worst thing you ever heard. It's actually quite, in fact, true!
There's so much a programmer can achieve in terms of code optimization, much of which is well documented and easily understood. There's very few secrets concerning code optimization these days. Particularly on well-known and proven areas of development. Any code displaying less than optimal optimization is pretty much understood these days as either a strategy (maintenance concerns, etc), laziness on behalf of the programmer, or inexperience. Not the product of some secret knowledge available only to a few."
And that's another ridiculous point of view.
Better algorithms are not the same thing as what you call "optimization." It is hard enough to create optimal algorithms, even considering that most of what you do at corporations is very simple and there are many examples in the literature you can use. But programmers like yourself are uneducated. "Optimization" is what you do to an already optimal algorithm, to speed it up on a given platform/hardware.
What use is buying 24 GB of RAM when you have a naive exponential algorithm? It will happily chew up all your memory and then a hundred more gigabytes, where a better, polynomial algorithm could live with 1 GB or less. It might not even do it right away: it could work fine for months and suddenly explode when the input dataset grows over a threshold.
Usually, "optimizations" are done to minimize those pesky constants the big-O notation hides. That is usually done at the expense of memory, which is fine.
Now, saying that "optimization" is equal to "optimal algorithms" displays an unprecedented level of ignorance. It is a shame that people with no CS education are allowed to write non-toy software.
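Here's a toy subset-sum sketch in Python (the sizes are made up) to show the asymptotics at stake: the naive version materializes every subset, so at 40 items it wants a trillion candidates and no amount of RAM will save it, while the dynamic-programming version lives happily in a few megabytes.

```python
from itertools import chain, combinations

def subset_sum_naive(xs, target):
    # Exponential: builds all 2**len(xs) subsets up front.
    all_subsets = list(chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))
    return any(sum(s) == target for s in all_subsets)

def subset_sum_dp(xs, target):
    # Pseudo-polynomial DP: O(len(xs) * target) time, O(target) memory.
    reachable = {0}
    for x in xs:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable
```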
Question: Why buy a house with 1800sqft when you can buy one with 3600sqft?
Answer: You pay more up front, and you pay more every second of every day that you heat/cool the house to between 69 and 72°F.
"Algorithms are for people who don't know how to buy RAM."
That reminds me of the Jon Bentley book Programming Pearls, where he compares (something like) a 300 MHz DEC Alpha to a 2 MHz Radio Shack TRS-80 running a more efficient algorithm. For problems above a certain size, the TRS-80 is faster. For problems above an even bigger size, the Alpha will effectively never finish.
If your problem is big enough then algorithms matter.
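The crossover is easy to play with in a few lines of Python. The constants below are hypothetical stand-ins, just in the spirit of Bentley's comparison: the fast machine runs a cubic algorithm, the slow machine a linear one with a worse constant factor.

```python
FAST_OPS = 300e6   # "DEC Alpha", running an O(n^3) algorithm
SLOW_OPS = 2e6     # "TRS-80", running an O(n) algorithm

def fast_time(n): return n ** 3 / FAST_OPS
def slow_time(n): return 50 * n / SLOW_OPS  # 50x constant-factor penalty

n = 1
while fast_time(n) <= slow_time(n):
    n += 1
print(f"For n >= {n}, the slow machine with the better algorithm wins.")
```

With these made-up constants the crossover lands around n = 87; past that point, no hardware budget rescues the cubic algorithm.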
Maybe it should be: "Memory is for people who like getting stuff done."? I'd rather spend time getting my code right than fixing someone else's mistakes.
@Vostok4 You are absolutely right. More hardware is no excuse for bad code. Sure, I can make bad code run if I throw enough hardware at it, but imagine if I write a decent piece of code, how well it will scale! I agree that the last little bit of optimization is often a waste, considering the programmer time taken versus the machine time saved, but the work to get to pretty good is worth it.
Being primarily a Linux user, I see great value in that much RAM. Linux is very aggressive about file system caching; my disks will appear to really fly with that much space to work with.
BTW, the new sign in for posting comments is OK, but I miss orange.
Relax guys... obviously having more RAM won't solve all problems. But it's cheap and nice. And Jeff wants you to buy some RAM using his affiliate link. Chill out and go buy some RAM. Or don't. Your choice.
I'm out of words. I'm quite angry that you are actually proposing this. I spend a fair amount of time arguing with newbie programmers who over-allocate and pay no attention to memory simply because "they can." They obviously have no clue that their machine represents nothing close to reality, and as a result they write terrible code you should be hanged for.
Your approach encourages this. I hate this blog post. Sorry.
>> But programmers like yourself are uneducated. "Optimization" is what you do to an already optimal algorithm, to speed it up on a given platform/hardware.
And with this you pretty much betray your own lack of education -- not just because you choose to insult people you disagree with, but because you obviously do not really understand what you are talking about.
If you choose to call code optimization "what you do to an already optimal algorithm," I feel obliged to point you to this: http://en.wikipedia.org/wiki/Code_optimization
There you will hopefully learn that choosing the appropriate algorithm is also part of the optimization procedure. But contrary to what you suggest, the choice of an appropriate algorithm is not always straightforward, since you will often find yourself compromising on other aspects of performance: is it faster but uses more memory? Is it slower but has a smaller footprint? Since these algorithms are proven, you have few chances (if any) of optimizing them further. Yet when you are faced with these questions, a decision must be made on what to compromise.
And then there are the algorithms you design yourself for non-generic needs, where the level of optimization is decided by you as you develop. When will you stop optimizing your algorithm? When you are satisfied with the results, or when you cannot optimize any further? If the former, congratulations! You've made your way into the real world of professional programming. If the latter, unless there's a concrete reason to spend your time on that algorithm (e.g. you are coding a performance-critical application), you will hardly find anyone sympathetic -- much less your boss.
And it's in this context that the quote fits. RAM, CPU, hard drive: all can contribute to better performance in ways that your coding skills cannot, for the simple reason that the programming language's semantics limit what you can do. It's not an invitation to write bad code or make poor decisions; it's the reassurance that once you write good code and make good choices, the hardware will do more for your application than any extra bit of performance you can squeeze from your code.
I hope, if you choose to reply to this, you avoid the insults. The level of emotional response you put into this topic has the opposite effect than you think: it does not intimidate, and it makes you look insecure about your own thoughts.
Wow, I'm surprised by the amount of anger in response to this post. Sure the quote "Algorithms are for people who don't know how to buy RAM." is provocative, and probably meant to be somewhat tongue-in-cheek, but I have to agree overall.
Most of us are already coding on top of many layers of abstraction, each of which comes at some cost in terms of efficiency. However, hardware is cheap and getting cheaper, and each layer helps us to be more productive as developers.
The overall response here really reinforces my opinion that as an industry we're often too focused on the geeky details, such as optimizing for minimal RAM usage, while we fail so often at understanding the problem domain and our customer's needs.
RAM is the new disk.
Processor cache is the new RAM.
Algorithms still matter. Actually, they matter more than before, because thanks to Google, users now expect each of their tasks to be completed as they type.
Shades of gray
48 GB is for people who haven't heard of algorithms
12 GB is for people who know how to avoid writing algorithms some of the time
8 GB is for people who know how _not_ to write algorithms
4 GB is for people who know how to _use_ algorithms (you write them on paper, not memory).
2 GB is for marketing people who write required specs that they know would not work smoothly
32 MB is for people who know how to write algorithms, operating systems and generally rock
In case you wondered: Jeff is in a class of his own :)
I agree with Kevin Krueger. I can't believe so many people are getting upset at the sentence "Algorithms are for people who don't know how to buy RAM". I found it hilarious.
I'd also point out that the alleged trade-off between "spend more time writing better code" and "just buy more memory" can in some cases hide problems like memory leaks, where you are just delaying the inevitable. Disk speed is primarily an issue for latency-sensitive operations (like interactive database queries), but apps that have to churn through a lot of data ought to be batching up reads and issuing async I/O requests so that the OS can fetch or copy the data independently of the application. The need for a program to be mindful of its current working set doesn't go away with more RAM; it just moves up the cache hierarchy to the CPU.
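For what it's worth, here's a minimal read-ahead sketch in Python -- the block size and file handling are made up -- showing the kind of overlap I mean: the next block is fetched on a worker thread while the current one is being processed.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4 * 1024 * 1024  # hypothetical 4 MB read size

def churn(path, handle_block):
    # Single-threaded processing with one block of read-ahead in flight.
    with open(path, "rb") as f, ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(f.read, BLOCK)
        while block := pending.result():
            pending = io.submit(f.read, BLOCK)  # overlap the next read...
            handle_block(block)                 # ...with this block's work
```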
@Seth Heeren So, what about the 1 MHz, 32K machine I have in my basement that still works great? Busicalc is still quick, even after all these years.
RAM RAM everywhere and none of it ECC.
For what it's worth, I think people are taking the algorithms vs. RAM comment way too seriously. I interpreted it as a comical statement made mostly in jest—am I wrong?
On an unrelated note, you are coming to my school in February (CMU Silicon Valley)! I am pretty psyched for that.
I don't find memory to be a future-proof investment. I upgrade my system every year or two, and usually by then there's some slight architectural change that renders my old RAM useless. It might be something small, like an increase in RAM speed, or going from DDR2 to DDR3, but either way, I'm forced to buy new RAM again.
Ummm, excuse me, but when did "algorithm" become synonymous with "optimisation"? An algorithm is a predefined set of steps to achieve an end result. Sounds like a program, doesn't it? Algorithm = program. No algorithm = no program -> no computers, and any conversation about RAM disappears in a puff of smoke.
What you all appear to be talking about is "Optimisation is for people who don't know how to buy RAM".
Apparently a lot of people are incapable of detecting hyperbole.
RAM is cheap... now, but it wasn't always this way. Twenty years ago I was working at Novell, and they showed off their new testing computer.
It cost $1,000,000 just for the 1 GB of RAM it had. Yes, RAM cost $1,000 a megabyte back then.
About a year later someone figured out how to increase RAM yields and prices fell through the floor. We might laugh about how lame computers were not so long ago, but it wasn't always like this.
My computer only has 3.0 Gig of memory... but my phone has 14.5G!
Some 25 years ago, I was sysadminning a time-sharing computer that had a whole 2.4 gigs of disk... that was two drives the size of small washing machines! Last month, I bought my boss a 10G thumbdrive smaller than most of my fingernails....
Yeah, I don't get that quote: "Algorithms are for people who don't know how to buy RAM."
Wouldn't throwing everything into RAM be an algorithm? Also, if an algorithm's space complexity is something like exponential, it won't scale at all, no matter how much RAM you have.
I think "algorithms" was the wrong choice of word here. One of my own professors used to say, "Programmer time is expensive; memory is cheap," which is worded better. He wasn't talking about algorithms when he said this, though. He was usually talking about using programming languages abstracted further from the machine, writing layered/modular/object-oriented code, and stuff like that.
This RAM vs. algorithms quote is stretching the truth, of course. RAM is nothing special in the context of algorithms and optimization; it's just another type of limited resource. As a programmer you should track the usage of these resources and plan to invest in increasing some of them.
So having twice as much RAM does not imply that you will see any improvement in performance. But... in most cases we did.
"Plenty of RAM is great for most applications" -- it's an 80% rule. There is no need to hold it up as a silver bullet. And we should all remember what happened with the GHz race in the CPU space. Ten years ago I could easily have said:
Algorithms are for people who don't know how to buy a new processor.
This sort of thinking drove the whole industry for decades. Well, not anymore.
>Your approach encourages this. I hate this blog post. Sorry.
I'm with you on this. Having recently switched from a Windows Mobile phone to an Android one with a very similar hardware specification (same-speed CPU for sure), the responsiveness of the HTC is so fluid compared to WM. I admire the Android developers (and I suppose Linux) for a non-bloated system. The WM team tried to shoehorn a big-memory approach into a small device.
One big advantage of the smartphone surge is that it has refocused attention on performance, which is closely tied to using memory correctly.
How is power consumption for i7 (versus i5 or i3 I guess)? For me, lower is better.
Why is everyone so angry? Chill, the comment was in jest.
I don't understand all the talk about algorithms here. It would be awfully hard to burn up large amounts of RAM with a bad algorithm in most situations. Bad algorithms burn CPU time far more than they burn RAM.
Regarding "Algorithms are for people who don't know how to buy RAM", one tip-off to the quote is that it was related by a computer science associate of Clay Shirkey's. Given that CS main focus is algorithms, there should be enough clue that there's subtle wisdom in this statement.
I've been around the block a few times, got my master's degree in CS, love algorithms, and still see the wisdom in this statement. Three real-world examples quickly sprang to my mind.
I remember all the time I spent in my CS studies learning mergesort algorithms for two tape-deck sources spooling to a streaming tape output. Do YOU know how to optimize sorting algorithms for 1K of memory and a trio of streaming tapes? No? Thank your RAM. Problem solved.
I also spent a lot of time learning a number of hairy hidden-line and hidden-surface algorithms for the visibility problem in computer graphics. My first graphics computer (Amiga 1000) didn't have enough working RAM to hold one screen's worth of 24-bit pixels. Today? Ray trace or use a Z-buffer. Indeed, modern raster graphics have Z-buffers, A-buffers, bumpmap sources, texture sources, stencil buffers, and on and on. Better algorithms? Nope -- today we finally have enough RAM to dump all the crazy algorithms we used to have to employ. If you use Z-buffers today but don't know the floating-horizon, Roberts, Warnock, or Weiler-Atherton algorithms, then hold your tongue and thank your RAM.
I spent several years of my career working on an advanced rendering solution predicated on the belief that RAM would remain prohibitively expensive and that better algorithms (essentially decomposing scenes into 2D affine sprites with asynchronous error-based updates) would save the day. Several years later, RAM prices had dropped considerably and our hardware was a footnote in history.
Yes, learn your algorithms. Study big O and little O notation and learn about the asymptotic performance of the algorithms you use and write day-to-day. But there are clearly areas where RAM is THE solution to various problems we encounter day-to-day.
If you think algorithms will save your butt every time, then you're no better off than the fool who believes that RAM will save his butt every time.
Well said. Use what works best.
Well, the quote "Algorithms are for people who don't know how to buy RAM" is for people who have never heard of NP problems. :)
1 minute of MP3 ≈ 1 MB.
A 120-year life is about 63 million minutes, so storing music for an entire lifetime takes roughly 63 terabytes.
That should be reachable within 10 years from now...
But then you need to account for video: HD, 3D, 10 channels of audio, plus different angles... :)
Life is complicated, and it takes more and more memory.
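The back-of-the-envelope arithmetic, in Python (the 1 MB/minute bitrate is a rough assumption):

```python
minutes_per_lifetime = 120 * 365 * 24 * 60   # ~63 million minutes
mb_per_minute = 1                            # rough MP3 size assumption
total_tb = minutes_per_lifetime * mb_per_minute / 1e6
print(f"~{total_tb:.0f} TB of MP3 for a 120-year life")  # ~63 TB
```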
"Algorithms are for programmers who want to stay employed in the 21st C" - Me
Because desktop computers are on the decline. Laptops are becoming workstations, and tablets, consoles, and phones are dominating consumer computing.
So mostly you need to write good algorithms that scale well in the cloud, or ones that work well on restricted devices or in a sandbox. Who cares that the desktop dinosaur can be monstrously powerful?
You definitely can't future-proof ANY particular piece of hardware anymore. Your 24 GB of RAM might sound great right now, but in a year, when Intel stops making the i7 and moves on to something else, and that something else only supports DDR4 RAM... you're basically SOL for upgrades with your RAM.
The new P67 / H67 motherboards released just two weeks ago can go to 32 GB once 8 GB DIMMs start appearing... but those will probably be a lot more expensive at first. :-(
"4 x DIMM, Max. 32 GB, DDR3 1333/1066 Non-ECC,Un-buffered Memory"
"According to Intel® SPEC, the Max. 32GB memory capacity can be supported with DIMMs of 8GB (or above). ASUS will update QVL once the DIMMs are available on the market. "
"otherwise unexciting Intel Core i7 platform upgrade"
Jeff, your article timing sucks. In case you missed it, there was another platform upgrade at the beginning of 2011: the Sandy Bridge platform, which introduced a host of changes and new features. The least exciting of them was actually the onboard graphics update, which still comes nowhere near the performance of standalone nVidia and AMD cards.
One of the major changes with a direct bearing on this article is the new memory controller, which is significantly faster than the ones in the older i5/i7 processors.
Anandtech talks about the Sandy Bridge processors here: http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i5-2600k-i5-2500k-and-core-i3-2100-tested
@Javin and others:
"I am surprised Bill gates indeed said that.."
He didn't. In fact, he thinks that quote is stupid.
My old home i7-920 has 6 GB. (It's one of the early four-slotters, but I'm only using three slots, to actually stay triple-channel.)
My newer office workstation has 9 GB.
Nothing wrong with 24, especially if you "really need it" -- but on the other hand, there's not much point in buying RAM that won't be "needed"... before it's time for a newer CPU/MB.
Of course 24 GB will be enough!
Until the people who think that's not practical come out with an incompatible "DDR 2.0" that can hold twice as much per stick next month, and it becomes mainstream in a year and a half :D
Sharepoint 2010 eats RAM.
It's funny to think that 10 years ago hard drives were about 24 GB. I suppose that 10 years from now 1 TB of memory will be enough for anybody.
Anyone aware of linear search versus binary search? (See the sketch below.)
Especially for those in the camp of buying more RAM to solve the problem: you would also need to buy more powerful CPUs, SSDs, and so on, whenever your computer needs to be 500 times faster.
As many have pointed out, optimization is not the same as a better algorithm.
We programmers don't have the moral ground to decide whether to buy more RAM or use a faster machine to solve the problem; that decision is the job of the systems engineer or project manager during deployment.
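On the linear-vs-binary-search point, a quick Python sketch (toy code, assuming a sorted list): on a billion items the linear scan does up to a billion comparisons, the binary search about thirty, and no RAM purchase changes that ratio.

```python
import bisect

def linear_search(xs, target):
    # O(n): a 500x bigger input means ~500x the comparisons.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(sorted_xs, target):
    # O(log n): doubling the input adds a single comparison.
    i = bisect.bisect_left(sorted_xs, target)
    return i if i < len(sorted_xs) and sorted_xs[i] == target else -1
```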
Gates may not have said the 640k thing, but Microsoft did say:
"Each 32-bit application can access up to 2 GB of addressable memory space, which is large enough to support even the largest desktop application." -- Microsoft Windows 98 training kit.
"To me, it's more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care. There's just .. lots."
Maybe I'm not looking in the right places, but I rarely ever see examples of programmers managing memory like that. The most common error-handling routine for a failed malloc is to quit the process -- that is, if they even check for a failed malloc.
I know I'm way late to this post and my comment will probably be read by very few people, but to those telling others to "relax, the algorithm/RAM comment was in jest": you obviously don't read this blog much. I really like this blog, and Stack Overflow is a godsend, but Jeff constantly says these kinds of things. This wasn't a one-time thing.
"Algorithms are for people who don't know how to buy RAM."
What!? A computer science professor said this!? Well, I guess having a doctorate doesn't make you smart...
That professor obviously doesn't understand the difference between time complexity and space complexity.
You're not going to make selection sort (or bubble sort, for that matter) any faster by putting more RAM into your computer, nor will you speed up the Floyd-Warshall algorithm for finding all-pairs shortest paths in a weighted graph (sketched below).
I agree with the people above me, that is the most ignorant thing I've ever heard.
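For reference, here's a bare-bones Floyd-Warshall in Python; the triple loop runs in O(V^3) time no matter how much RAM you add.

```python
def floyd_warshall(dist):
    # dist is a V x V matrix of edge weights (math.inf where no edge).
    # O(V^3) time, O(V^2) memory; mutates dist in place.
    V = len(dist)
    for k in range(V):
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```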
I need algorithms! I'm just a programmer, not a computer scientist or software engineer. I have to process hundreds of megabits per second. Look-up tables help _a_lot_. More memory isn't a help; I could have 24 TB. My problem is the cache: cache misses are killing me. However, I've improved my algorithm three times (each time after the previous "elegant/best" solution), and the improvements have been worth the effort.
"I agree with the people above me, that is the most ignorant thing I've ever heard."
That's how I feel about a lot of the comments that get horribly offended at the idea that RAM might actually be a better solution than clever code in a lot of situations.
Let's say you have a dataset of, oh, let's say 4 GiB. You want to process this data in some way; each "chunk" of data is 64 KiB (65,536 chunks to process).
Now, assume that processing each chunk incurs an overhead of another 256 KiB, and that the processed data also takes up 64 KiB (I'm just making up numbers here). If you dump the whole dataset into RAM, process it using four threads, and keep the entire resulting dataset in RAM as well before saving it back to disk, you will be using roughly 8 GiB of RAM, but writing the code is fast (and you can focus on making the actual processing fast).
Now, the "OMGZORZ!!1 RAM IS A RARE COMMODITY!" approach taken to its extreme would of course be to figure out an algorithm that queues chunks of data so that you use only 4x64 + 4x256 KiB of RAM for processing, with a queue just long enough that it is never empty while still not wasting RAM.
The latter approach makes sense when the dataset is large enough that you can't use the first one (or rather, when the development time for the latter costs less than the hardware for the former). Once upon a time lots of problems had this issue because RAM was expensive; I remember writing little games in C + x86 asm and trying to squeeze as much info as possible out of every byte of RAM. These days it often doesn't make sense to solve these "problems," because they aren't problems.
(Please don't assume that I'm in favor of unnecessary bloat; I just don't get why some people are obsessed with not "wasting" RAM and refuse to just buy more.)
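A hypothetical sketch of the two approaches in Python (the chunk size, thread count, and process() body are all as made up as my numbers above):

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # 64 KiB, per the made-up numbers above

def process(chunk):
    # Stand-in for the real per-chunk work.
    return chunk[::-1]

def simple_way(path, out_path):
    # Slurp everything, process in parallel, write everything.
    # RAM use ~= input + output; the code stays trivial.
    with open(path, "rb") as f:
        data = f.read()
    chunks = (data[i:i + CHUNK] for i in range(0, len(data), CHUNK))
    with ThreadPoolExecutor(max_workers=4) as pool, \
            open(out_path, "wb") as out:
        for result in pool.map(process, chunks):
            out.write(result)

def frugal_way(path, out_path):
    # Stream chunk by chunk; RAM use ~= a couple of chunks,
    # at the cost of giving up the easy parallelism above.
    with open(path, "rb") as f, open(out_path, "wb") as out:
        while chunk := f.read(CHUNK):
            out.write(process(chunk))
```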
For people who are notorious for their mental ability (programmers), many of you act pretty stupid, making that silly, flame-ish fuss over the algorithm/RAM sentence. Do you think Atwood bases a single line of his code on THAT? Have you read more than one post on this blog? Go waste your time elsewhere, flameboys, because this doesn't seem to be worth a penny. I doubt you are even real coders anyway.
"Once upon a time lots of problems had this issue because RAM was expensive, I remember writing little games in c + x86 asm and trying to squeeze as much info as possible out of every byte of RAM, these days it often doesn't make sense solving these "problems" because they aren't problems"
This is somewhat surprising coming from someone who has programmed in assembly. You must have not done much of it because this doesn't make much sense at all, especially for embedded development, when the cost of RAM really isn't as much of a factor as the limitation of it due to design constraints.
I do agree that there is a time space trade off in that the more space you take up in writing an algorithm, the faster it can be and vice versa. This is observed in the use of look up tables and hash tables. However, being able to index those tables can be rather complicated depending on the accuracy you want in your choice of hash function or other index method. It is also hard to analyze their efficiency given their non-deterministic nature and the fact that they rely on heavy sparsing of the data in order to be effective due to the possibility of collisions. And this is just talking about storing the data, you still may want to search it, sort it, and do other forms of manipulation which wouldn't be possible without algorithms.
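That trade-off in its simplest Python form -- spend memory on a table of past results to save CPU (the workload function is a made-up stand-in):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=1_000_000)  # the table: RAM spent to save CPU
def expensive(x):
    # Stand-in for real work; any pure function of its inputs qualifies.
    return sum(math.sin(x * i) for i in range(10_000))

expensive(3)  # computed
expensive(3)  # served from the table
```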
And in that post, you were only talking about the dataset itself, not about the code and other processes occupying RAM at the same time.
My main point is: you're right that RAM is relatively inexpensive, and I wasn't implying that 1 GB of RAM is a rare commodity or anything like that. However, whether to develop or use an algorithm shouldn't be decided solely by how much RAM you have; any respected computer scientist would agree with that, and it's shocking to me that a professor said otherwise.
And I'm not even going to waste my time commenting on the ridiculousness of Mdaj79's comment.
I think you're all dumb. The most intelligent thing said here was about pragmatism.
It isn't better algorithms mean less RAM, nor the opposite.
Every situation is different. There is no silver bullet. Stop thinking there is.
It's not about buying obscene amounts of RAM, but about buying obscene amounts of RAM whilst prices are severely depressed.
ECC RAM prices were at parity with non-ECC RAM whilst I was stocking up.
I was reading an article the other day about programming in C, and one of its main subjects was conserving memory to keep your code running nice and fast -- but that wasn't the only reason given for conserving memory.
The article also pointed out that by having more RAM we consume more power, and therefore more of the Earth's dwindling resources.
Surely this is a good enough reason NOT to get more RAM just for the sake of it?
I understand this might not be related to the original issue in this blog post, but I believe (IMHO) that it supersedes it.
What is the minimum memory requirement for anyone?
And which applications actually use huge amounts of memory (more than 22 GB)? Because 4 GB of memory is more than enough, isn't it?