August 29, 2007
Eric Lippert notes the perils of programming in C++:
I often think of C++ as my own personal Pit of Despair Programming Language. Unmanaged C++ makes it so easy to fall into traps. Think buffer overruns, memory leaks, double frees, mismatch between allocator and deallocator, using freed memory, umpteen dozen ways to trash the stack or heap -- and those are just some of the memory issues. There are lots more "gotchas" in C++. C++ often throws you into the Pit of Despair and you have to climb your way up the Hill of Quality. (Not to be confused with scaling the Cliffs of Insanity. That's different.)
That's the problem with C++. It does a terrible job of protecting you from your own worst enemy-- yourself. When you write code in C++, you're always circling the pit of despair, just one misstep away from plunging to your doom.
Wouldn't it be nice to use a language designed to keep you from falling into the pit of despair? But avoiding horrific, trainwreck failure modes isn't a particularly laudable goal. Wouldn't it be even better if you used a language that let you effortlessly fall into The Pit of Success?
The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.
Rico Mariani coined this term when talking about language design. You may give up some performance when you choose to code in C#, Python, or Ruby instead of C++. But what you get in return is a much higher likelihood of avoiding the miserable Pit of Despair-- and the opportunity to fall into the far more desirable Pit of Success instead.
As Brad Abrams points out, this concept extends beyond language. A well designed API should also allow developers to fall into the pit of success:
[Rico] admonished us to think about how we can build platforms that lead developers to write great, high performance code such that developers just fall into doing the "right thing". That concept really resonated with me. It is the key point of good API design. We should build APIs that steer and point developers in the right direction.
I think this concept extends even farther, to applications of all kinds: big, small, web, GUIs, console applications, you name it. I've often said that a well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things. If we design our applications properly, our users should be inexorably drawn into the pit of success. Some may take longer than others, but they should all get there eventually.
If users aren't finding success on their own-- or if they're not finding it within a reasonable amount of time-- it's not their fault. It's our fault. We didn't make it easy enough for them to fall into the pit of success. Consider your project a Big Dig -- your job is to constantly rearchitect your language, your API, or your application to make that pit of success ever deeper and wider.
Posted by Jeff Atwood
"Lots of C++ developers bash the .NET GC simply because they don't understand it. In short - you should *NEVER* pepper your code with gc.collect() simply because you *THINK* your app is using too much memory. The CLR allocates lots of memory when it's not needed by other apps to minimize the number of iterations of the GC algorithm - but releases it when memory is tight."
In theory, you're correct. In practice, in certain cases the GC will happily allow your program to run out of memory. Most of the time I've seen it when calling "System.Collections.Generic.Dictionary`2.Resize". This is with a process consuming between 1 and 1.5 GB of memory on a box with 4 GB of RAM, which also has the /3GB option turned on and the app marked "LARGEADDRESSAWARE" (just to avoid as many OutOfMemory exceptions as possible). There is absolutely no reason for it to be throwing OutOfMemory exceptions, but unless we pepper the code with GC.Collect()s, it does. It's not that *WE* think the program is taking up too much memory - the *program* thinks it's taking too much memory.
Now, this is probably a bug in the .NET Framework and not a failing of the GC model. My guess is the "System.Collections.Generic.Dictionary`2.Resize" method doesn't try to reclaim memory before throwing an OutOfMemory exception, for some reason. But when you come across problems like that in C#, what can you do? You're pretty much hosed.
Thanks for the link Jeff.
Incidentally, I am very amused by the commenters above who aver that anyone who has these kinds of problems with C++ probably is not a very experienced or capable C++ developer.
It's _totally_ true. As I said four years ago, I consider myself a six-out-of-ten C++ programmer: http://blogs.msdn.com/ericlippert/archive/2003/12/01/53412.aspx
I have never written a C++ compiler, so how could I possibly be higher than six out of ten? I totally admit it: my understanding of the subtleties of the language is pretty weak.
And as I have only had twelve years full-time experience writing production compilers shipped to hundreds of millions of users in C++, I probably lack the impressive depth of experience that your commenters have.
Reading the last two paragraphs reminded me of a book we were required to read during university, "The Design of Everyday Things". It's a nice book and really brings the onus back onto the designer.
This sentiment is something I've tried to drill into my parents too. Whenever they use an interface that has been poorly designed (be it a website or a DVD remote control), they get frustrated with themselves because they think it's their fault they're having trouble. I keep telling them: it's not your fault, it's just poorly designed.
Whoo Jeff! I wholeheartedly agree. I've been wanting such tools for years. About 7 years ago, before .NET came out, I designed a language named Q (but never implemented it, though I wrote up a spec). It was supposed to sit between C++ and Java, giving you (almost all of) the power and speed of C++ while at the same time being much faster and giving you more control than Java. How did it do this? It let you code using good practices and kept you from shooting yourself in the foot, but every time you wanted to do something potentially dangerous you had to be explicit about it. Basically, it fit the development paradigm that I imagine most developers would like to have. That is: code what you mean, and when bugs come up, be able to track them down quickly, because with that language you _know_ what had to cause them. I know it sounds far-fetched or lofty... but I actually thought it through a lot at the time :)
Anyway, it's still sitting on the back shelf.
But two of the first things I wrote in the spec had to do with the just typing in the language... and they were:
Block comments (among other things) can be nested. I can't see why compiler writers couldn't simply handle /* /* */ */ easily. What if I want to comment out a block of code, and then comment out another block around it? I currently can't do that, and it's such a simple fix -- I don't see why not to put it in.
Later down the spec was:
Case statements break by default. To continue to the next case, type continue. If you want multiple cases together, you would write case a, b, c:
What's more common when a new case statement begins -- continuing into it, or breaking? I don't see why K&R had to make fall-through the default, but it's been that way ever since. Forgetting to put in that "break" can cause nasty bugs just because you FORGOT. If you do want to continue into the next statement, you'll actually explicitly think that thought, and specify it.
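The fall-through trap described above is easy to demonstrate; here is a minimal C++ sketch (the function and values are purely illustrative):

```cpp
// A classic missing-break bug: the author meant case 'A' to yield 4,
// but control falls through into case 'B' and overwrites the result.
int grade_points(char grade) {
    int points = 0;
    switch (grade) {
        case 'A':
            points = 4;  // oops -- no break, falls through
        case 'B':
            points = 3;
            break;
        default:
            points = 0;
            break;
    }
    return points;
}
```

Calling grade_points('A') quietly returns 3 instead of 4 -- exactly the kind of bug that is hard to spot by eye.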
The principle I went by when designing this was "put what developers usually mean as the default, because they know what they want as an exception."
Finally, I'll put one more: semicolons. Forget one and it's like an atomic bomb went off "somewhere over here" in your code. Cryptic errors start popping up. In Q, a line break by default ends a statement -- again, because by far, MOST LINE BREAKS DO END STATEMENTS in code. So this is once again what most programmers mean when they press return after typing a statement. And if they DO want to continue to the next line, they explicitly think this thought, because it is an exception. However, semicolons still end statements; so you can put multiple statements on one line, or \
continue them onto the next.
By the way, if someone wants to find out more about this language, or implement it with me, email me :) gregory @@ gregory.net [i am not 'fraid of spam, hee hee]
Good point -- but such a pity that the original piece quoted is really about the perils of programming in 'C'. Even more a pity that most programmers will indeed write 'C' in any language (well, it is a slight improvement over FORTRAN, I suppose).
The problem, to use the vernacular of the slightly sleazier parts of the 'net, is "Bitches don't know about my RAII." -- or, in general, proper separation of responsibilities, so that objects manage themselves and are not beholden to their consumers to Do the Right Thing. Alas, this is not surprising, given that C++ is usually introduced as C with knobs on.
Dynamic memory allocation may be the most common resource needing to be managed, but I always feel slightly uncomfortable about the way that managed languages (Java, Python, Ruby, everything .Net,...) make it more obtrusive to the consumer (be it using(), try...finally, ResourceReleasesResource,...) to manage everything else (file handles, sockets, graphics contexts, the many and various Win32 handles,...). And even with garbage collection, memory leaks via strong references that ought to be weak, or large graphs held in an inaccessible part of a necessary object, are still not beyond the wit of man to concoct.
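For readers who haven't seen the idiom, here is a minimal illustrative RAII wrapper for a non-memory resource (the class name and error handling are my own invention): the destructor releases the handle on every exit path, with no using() or try...finally needed at the call site.

```cpp
#include <cstdio>
#include <stdexcept>

// Minimal RAII sketch: the object owns the FILE* and the destructor
// closes it whether the scope exits normally or via an exception.
class ScopedFile {
public:
    ScopedFile(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~ScopedFile() { std::fclose(f_); }

    // Copying would lead to a double fclose, so forbid it (C++11).
    ScopedFile(const ScopedFile&) = delete;
    ScopedFile& operator=(const ScopedFile&) = delete;

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};
```

The consumer just declares a ScopedFile in a scope and forgets about cleanup; the compiler guarantees it.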
Much of the contribution to the pit of success from the managed languages comes from the extensive standard APIs that they provide, which has the beneficial effect that there is a reduced temptation for well-intentioned (and we know which Pit that leads to) roll-your-own replacements. Otherwise there is a fine line between automation of repetitive processes (good) and pretending that the complexity really isn't there (bad).
Jeff Atwood wrote:
"a well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things."
That was the goal when designing the Ada language... 25 years ago. And unfortunately, that's also what made it fall slowly into the darkness of computer history.
People prefer a badly designed, inexpressive, macro-assembler "hacker" language like C because otherwise they feel their algorithmic creativity is restrained. The real reason is that they never learnt to write good code, and were only taught the "hacker" part of computing.
People use the right tool for the right job, but when their definition of good is bad, they end up using the bad tool.
"Much of the contribution to the pit of success from the managed [languages] comes from the extensive standard APIs that they provide" - Except for Python, whose libraries are in general terrible :P
So the idea is to let the language hold our hand?
As a one-time C++ programmer, I can honestly say that the ease of falling into the "pit of despair" was astounding. C# has come a long way, but I don't believe that you can fall into a "pit of success" simply by using a language that makes it difficult to walk off an array. If the code is buggy, it's buggy! If the logic is bad, it's bad! C# and similar languages can't save you from that. So while you might avoid the pit of despair, nothing lets you fall into success but yourself (no matter what language you use)!
I must say, I program C++ and if you know how to use it then it is a VERY powerful language. Agreed, you can easily f*** up, but then again, you can easily create a fast, efficient app too.
What happened with the other name for this metaphor, the "falling into the pit of success" thing? It is called "silver bullet", and it solves all your problems for you.
What you just said is pure short-sightedness! I agree the pits are deep, both the success pit and the failure pit. As always, C++ programmers love living on the edge. Quoting: "If you're not living on the edge you're wasting space" ;)
I don't want anyone's applications 'on the edge'. I want to use solid, stable, secure, and preferably easy-to-use and intuitive applications (like, I must say, I've found many a Mac app to be).
I'm not a programmer: I'm just a user. That's what I want.
In exchange for this, Programming Friends, I will give you my money!
I completely agree regarding C#. Normal code is pretty, small, and most of the time even fast. But bad code, like late-bound calls, gets terribly ugly because you have to use Reflection or code emit. I actually like that such code is forced to be ugly, because every time I see Reflection code I know "uh oh, something might easily break here, so better watch out".
But isn't it easy to fall into big bad holes with Ruby and Python BECAUSE they allow too much? Because they don't have a compiler that helps you find your bugs BEFORE running your app? That's the main issue that has kept me away from those languages so far.
Jan hit the nail right on the head.
Every language for its purpose. If you need the speed/memory, use a language close to the hardware. With all its pitfalls of self-managed resources and cleaning them up.
If not you can use other, more managed languages. But don't make it a rule, that they are better. And also not the other way around. Yes, you can get a memory-managment in C++ ... but it costs you and you might want to use another hammer, erm, tool.
Personally I like C++ and think many of these pitfalls are overrated once you know what you're doing. Things like missing break statements in switch statements are so rare an occurrence and so simple to see that I don't think disallowing them outweighs the benefit of allowing them to fall through when required. The nice thing that Java and I think C# have is reflection, allowing simple JUnit testing. I don't rate memory management that highly; once you're working with C++ properly, in a well-structured framework for your app, there is little advantage of garbage collection over deterministic destruction of objects. Plus templates are very nice to use to avoid casting all over the place.
Perhaps you should look into D (http://en.wikipedia.org/wiki/D_programming_language), which seems quite nice. I'm also looking forward to the C++0x release, which looks like it'll be adding some nice features like std::initializer_list.
When programming in C++ I often spent more time worrying about the code than the solution itself.
Comparing C++ to C#, Java et al. and saying it's a terrible language is like wondering why someone invented B&W TV when color TV is obviously so much better.
To be fair to C++, one should point out that C++ is a huge step towards better programs when coming from its C ancestor. For example, the language comes with destructors: a very nice technique for avoiding memory leaks and double frees (and delete safely accepts a null pointer, which is a bonus protection against double frees).
Of course GC is better. Then again, so is color TV.
I agree with Paul's comments in that I didn't really have a hard time with C or C++ with buffer overruns or bad pointers. Sure they did come up once in a while, but I didn't have too many issues with the 'language'. Now there were always run-time (functional bugs), but that isn't necessarily indicative of the language (to an extent).
I think the main problem was the lack of good library support. That is what really makes .NET super-useful. The API design is very friendly. Now you could do something similar with C++--there is nothing really preventing that.
I don't know if Microsoft just had sucky engineers, but most of the C++ libraries/Win32 API functions seemed to be designed by someone who writes device drivers.
There's a certain 'state machine'-like pattern that many of the API/class libraries seem to follow--a requirement for device drivers, but not for sending an email. Take a look at MAPI if you want to see what I am talking about--or any of the zillion #define arguments needed to pass into methods.
I think everyone (that has coded in C on other platforms than Windows) can agree that the Win16/Win32 API was very poorly designed and "was just plain screwy" compared to other offerings of the time. But Windows became the dominant OS and that was that...
I really don't miss C++ as most of the work I do today doesn't require it, but looking back, it was fun to code in--we were solving different problems back then.
Great article, with one quibble -- I'm a Massachusetts native, and while I see where you were trying to go, the Big Dig isn't something on which you want to model your successful API or project. It has massive (cost) overruns, (water) leaks, and (ceiling) crashes...
I'm currently studying Computer Science at university and this year have to program in Visual C++. Handles, pointers, references, values... things very counter-intuitive for a new programmer. I'll admit C++ has a lot of virtues attributed to it, but you definitely need to invest a huge amount of time in learning it well.
I also have to program in Java this semester (and previous semesters) which I've found to be VERY intuitive. It's easy to pick up, learn from scratch, and run with it.. developing (relatively) simple programs in no time at all with very little effort.
Visual C++ on the other hand.. I had to create a few basic classes yesterday to draw a circle and a square, and spent 90% of my time debugging errors which as I see it.. should be picked up by the IDE. Like I said, I need to spend time learning and practicing - but it is still a very difficult language.
Oh yeah, I wanted to make another quick point about Q and semicolons and statements:
If your language ends statements with both line breaks AND semicolons, and uses a continuation character to continue onto the next line, your IDE can protect you from errors related to missing semicolons. That's because when you split a line into two, the IDE can easily check whether the statement is being continued on the next line or not (if there is a binary operator, or a parenthesis is unclosed, or whatever), and it will automatically insert the continuation character and indent your next line. However, with a semicolon-only language, the IDE's job would be to insert semicolons for you, and it won't know, in some cases, exactly where to do that! It would have to second-guess you.
Reasoning like this is what led to a bunch of features in Q. Do you think it's sound, or am I just blabbering away? Honestly :-P
Following on from my previous comment and Webview's: I also really like the distinction that C++ makes between objects, references, and pointers, and the ability to pass them around as const, together with const member functions. This, plus the type safety, makes it possible to write code that simply will not compile if you've written it wrong. This is opposed to Java, where you don't know until you run something whether it'll work or not, because you're casting (because you're forced to, not because you've explicitly chosen to) to the wrong type, or modifying something you shouldn't be because you can't make it const, or a class you're trying to use doesn't exist, etc. In C++ the syntax also shows you in plain sight exactly what you're doing with your types: you know if something is a reference or not, and thus whether using it will affect something else.
Operator overloading is a beautiful invention too; I love the clarity it can give code. The only argument I've ever heard against it is that people will forget to check the operator on a type when trying to track down a bug, thus making the bug harder to find. Which is frankly a silly argument, because if something isn't a built-in type it might have its operators overloaded, so go look.
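As a small illustrative sketch of both points (the type here is made up): the const-qualified operators both document and enforce that the operands are untouched, and the overloads let a value type read like a built-in.

```cpp
// A tiny value type. The const member operators guarantee that
// addition and comparison cannot modify their operands, and the
// overloads make arithmetic on Money look like ordinary code.
struct Money {
    long cents;

    Money operator+(const Money& rhs) const {
        return Money{cents + rhs.cents};
    }
    bool operator==(const Money& rhs) const {
        return cents == rhs.cents;
    }
};
```

With this, Money{150} + Money{75} reads like arithmetic on a built-in type, and the compiler rejects any attempt by the operators to mutate their arguments.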
Header files for function definitions I like too. Other languages seem to rely on the IDE to provide a nice list of methods of the class you're in to the side which I disagree with. Having all the functions in one place listed like in a header file is just nice and isn't a burden to maintain.
The thing I've found with my little exposure to languages like Ruby that favor convention over explicitly stating something, is that you have to remember exactly what the conventions are. Reading the code does not tell you what the conventions are, you have to know them. I find that quite difficult to pick up, I'm not the sort of person that can just accept at face value that something is going to "just work". It's all voodoo to start with.
I seem to remember reading an article on here, or somewhere else, that advocated understanding the boilerplate code that IDEs write for you with class generators, but Ruby et al. go beyond not showing you that code and take it to a whole new level. C++ tells you exactly what it's doing, and I like that.
Adding a massive set of additional libraries like Java and C# have would probably be the single best thing that could be added to the language. This is probably the overriding factor that makes Java so appealing: there's just less to do yourself. But that would be a difficult thing to do because of the differences the language faces moving between platforms. Plus, C++ has never really been aimed at application programming, even though it's favored for a lot of applications for various reasons.
If you don't want to use C++, use Java.
The issues mentioned above happen if you don't code C++ properly. If you use arrays instead of vectors or manually call new and delete instead of using RAII, you will have problems. Your code won't even be exception safe.
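A hedged sketch of the contrast (the helper and values are invented): the raw-array version leaks its buffer if a call throws mid-loop, while the vector version cannot, because destructors run during stack unwinding.

```cpp
#include <stdexcept>
#include <vector>

// Illustrative helper that can throw partway through processing.
int parse(int x) {
    if (x < 0) throw std::invalid_argument("negative input");
    return x * 2;
}

// Manual ownership: if parse() throws, delete[] is never reached
// and the scratch buffer leaks.
int leaky_sum(const int* data, int n) {
    int* scratch = new int[n];
    int total = 0;
    for (int i = 0; i < n; ++i) {
        scratch[i] = parse(data[i]);
        total += scratch[i];
    }
    delete[] scratch;
    return total;
}

// RAII ownership: the vector frees its buffer on every exit path,
// including an exception thrown by parse().
int safe_sum(const std::vector<int>& data) {
    std::vector<int> scratch;
    int total = 0;
    for (int x : data) {
        scratch.push_back(parse(x));
        total += scratch.back();
    }
    return total;
}
```

Both functions compute the same result on good input; only one of them survives bad input without leaking.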
The biggest problem with C++ is that there is so much to learn. It's way too expert-friendly a language.
Regardless, here's my guide to choosing the language for your new project: Which language has the library support you need?
Library support is far more important than language trivialities. With luck, a good library has solved nearly all of your problem for you, and you just need to fill in the details. The same goes for a prior code base or a purchasable code base (don't code it when you can buy it).
It has been proven time and time again that smart programmers can create good products in any language and crappy programmers can create crappy products in any language. Hire good programmers and choose languages based on available libraries and code.
I agree with Tod McKenna's comment. It depends on the person who uses it. For me, C++ is a comfortable language. I started programming 4 years back and I haven't had a chance to work on "complex" implementations, but I have worked on different medium-sized projects with enough complexity.
I never went to the managed world. When I was thinking about MFC, I thought it was really slow. But .NET is even slower, and it depends on such a heavy framework. Microsoft's approach to .NET versioning doesn't seem good enough either. What they're indirectly saying is that you need to buy a new version of Visual Studio to work with each new .NET Framework. From top to bottom, Microsoft's strategy on .NET and its related languages/extensions is not good. What do you think?
The other point is that C++ compilers are now mature enough. I hope Microsoft will make things light and easy. Moreover, a language should be able to satisfy different aspects of programming, from low level to high level.
So, how do you do this? And how do any of the languages/processes you name do this?
There's an easy way to avoid falling into the pit of C/C++ despair. Watch where you're f'n walking! You only need a railing to keep you out if you are reckless, stupid, or blind. Most experienced developers rarely encounter the basic problems listed at the beginning of this article, or if they do, they are able to find the bugs pretty easily. Most bugs come from higher level problems that most programming languages have to some extent or another (logic errors, thread issues, etc.)
C++ is only dangerous to people who have only half learned it.
Why do people expect to be able to write production-quality software in a language they have only half learned?
Why do people WANT to be able to write production-quality software in a language they have only half learned?
Well I agree with you Jeff, Java and C# are nice languages, and so are other high-level languages. So you would think: why would anybody be so silly as to still use C++, when there are nicer and cleaner alternatives around?
BUT the fact remains that C++ is still the most used language, and almost *every* big, serious, successful software product is written in C++. You won't see anybody do Office.NET or Photoshop in Java. Just look at this small list of apps written in C++: http://www.research.att.com/~bs/applications.html
Also in professional game-development, there simply is just nothing else than C++.
So in conclusion, Java and C# are nice, but still viewed as "fancy C++ wrappers" by the industry, and every serious thing will still be made in C++.
That's one of the big reasons I did Win32 programming in Delphi, and many a good commercial app was written in it.
Still remember my C++ friend who was amazed that a program I wrote in it had no memory leaks (it was not due to my skills).
It is amazing how C++ spread so far. Hopefully Delphi can have a revival with native Vista apps...
I come to defend C++ not to bury it.
Quite frankly I'm tired of these complaints. The fact is that every language comes with more than a syntax. It also includes:
* a "programming model"
* idioms, and
* best practices.
You have to learn all three of these, as well as the syntax, to use the language effectively. If you don't know what these are in your chosen language, you shouldn't be asking people to pay you to program in it.
This article - and many others like it - is just a whinge by an author who is insufficiently versed in these elements to survive in a commercial environment using C++. Poor craftspeople blame their tools, good ones don't.
Complaining about pointers, references, inheritance, etc. is just a sign that you don't understand the language enough to be commercially productive.
If you know the C++ programming model, the idioms and the practices you won't have any trouble producing trouble free code as a matter of routine. I know (and have hired) several people who have a proven track record over many years now of producing upwards of 500-1000 lines of working, tested code _into_ _production_. [YES - I've got the stats, and no, you can't have their names].
If you don't know these things you'll be reinventing the wheel and making it up as you go along. That is very definitely the road to blowing your leg off.
Ok - got that out of the way.
With C++, there's one more thing. Are you talking about "windows" C++ (where the Windows API's and libraries are involved) or "unix" C++ (where the Unix/Posix environment is used). Sometimes the terms "server-side" and "client-side" are used, but they essentially refer to the same idea.
Each has a _different_ set of idioms and best practices, and anyone who knows one will find it difficult to operate in the other environment. If you doubt this, talk to a recruiter, or better still a good technical lead who hires developers. It is common for such people to make this distinction and refuse to consider a candidate with a "client side" background for a "server side" position (and vice versa). I know that's my practice, and I know it's also a widespread practice in the market generally.
In my experience a server-side guy who's read Coplien (and the rest of them), and who knows what the hiding rule is, is someone worth hiring. A client-side guy who's read Petzold (and the rest of them) is worth it for the client side. Anyone without these knowledge sets is a very dubious hire.
Bottom line. You either know the language or you don't. If you know it, you're a hire. If not, you write articles like this.
C++ is a fine, very fast, and efficient language that is perfectly safe when used properly by a knowledgeable and skilled developer.
You may give up some performance when you choose to code in C#, Python, or Ruby instead of C++. But what you get in return is a much higher likelihood of avoiding the miserable Pit of Despair
I choose to program in Delphi because it gives me the execution speed AND low-level memory management capabilities of C++. The difference is that with Delphi I can CHOOSE to use low-level memory management; I am not required to:
One language that uses reference counting for garbage collection is Delphi. Delphi is not a completely garbage collected language, in that user-defined types must still be manually allocated and deallocated. It does provide automatic collection, however, for a few built-in types, such as strings, dynamic arrays, and interfaces, for ease of use and to simplify the generic database functionality. It is important to note that it is up to the programmer to decide whether to use the built-in types or not; Delphi programmers have complete access to low-level memory management like in C/C++. So all potential cost of Delphi's reference counting can, if desired, be easily circumvented.
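The opt-in reference counting described there is analogous to what C++ offers via std::shared_ptr (boost::shared_ptr at the time this thread was written): the object is destroyed exactly when the last owner lets go, and you only pay the counting cost where you choose to use it. A small illustrative sketch:

```cpp
#include <memory>

// Sets a flag from its destructor so we can observe exactly when
// the reference count drops to zero.
struct Tracked {
    bool* destroyed;
    explicit Tracked(bool* d) : destroyed(d) {}
    ~Tracked() { *destroyed = true; }
};

bool freed_after_last_owner() {
    bool destroyed = false;
    {
        auto a = std::make_shared<Tracked>(&destroyed);
        std::shared_ptr<Tracked> b = a;  // refcount is now 2
        a.reset();                       // refcount 1: still alive
        if (destroyed) return false;     // would mean a premature free
    }                                    // b goes out of scope: refcount 0
    return destroyed;                    // destructor ran exactly here
}
```

Nothing outside this function pays for the counting; plain pointers and stack objects elsewhere are untouched, which is the same "use it only where you want it" trade-off the commenter describes for Delphi.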
That and I don't need a framework/interpreter installed on the machine.
Oh, and Delphi 2007 still is the only IDE with full native support for Windows Vista.
Or, if you'd like to write programs for Linux and OS X (and Windows too), try Lazarus. It's the open source IDE for Free Pascal (open source compiler) that aims to be Delphi compatible: http://www.lazarus.freepascal.org/
Too bad Delphi couldn't live up to its name and show you the future of unemployability.
I agree with you to a point, but have also found that many languages that try to build a bridge across the pit of despair often times lead you to a new pit. Take C# for example.
I used to code a lot of C++, and only made use of heap allocation when it was necessary. This had the advantage of limiting my chances of forgetting a new, or double deleting. C# solved the problem of memory leaks by putting nearly everything on the heap, and replacing delete with Dispose.
So now instead of deleting memory, I have to remember to dispose of it. Granted, there is using... and the GC will eventually run a finalizer which hopefully calls Dispose for me... but there is a larger problem. Object!
Everything is a freaking object now! So now instead of having a memory leak to deal with, I have tripled my chances of having an invalid cast exception, or a null reference exception because things I might normally declare on the stack are now allocated on the heap. I've traded one problem for another.
I suppose you could argue that this is a case of replace slow failure with immediate failure, but it doesn't seem like encouraging success to me.
"Bottom line. You either know the language or you don't. If you know it, you're a hire. If not, you write articles like this."
Ouch, JM. I seek to find this magical company for which you work where programmers never make errors and bugs never find their way into a software release. I respect what you're saying about how seasoned, solid programmers are less likely to make gross errors, but once code gets big and complicated and good, solid, but-totally-green-to-your-app programmers get introduced to it.. all bets are off. I think that's what Jeff is talking about.
While not yet mainstream, some languages are just a little better at herding their programmers away from disaster.
"BUT the fact remains, that C++ is still the most used language and almost *every* big, serious $ successful Software product is written in C++. You won't see anybody do Office.NET or Photoshop in Java. Just look at this small list of Apps written in C++. "
Most of these applications predate Java and .NET/C#, and rewriting them in Java/C# would require spending millions if not billions of dollars without immediate benefit.
Most new applications are web applications and only very few of them are written in C++, most are written in Java/ASP.NET/PHP/Ruby/...
I'm mostly a C++ coder, but I also have a healthy share of C# work. While there are definitely a lot of parts that I like about C#, I've spent more time cursing C# than anything.
For example, the GC. Yes, it does a good job most of the time, but when it doesn't, it REALLY doesn't. And there is no alternative. If you find that the GC isn't handling memory well enough for you, too bad. Rewriting your app in C++ is your only alternative, but by the time you find you have serious memory problems it's too late for a rewrite. So you pepper the code with GC.Collect()s, it gets even slower, but it "works".
And C# isn't without its own "pit of despair": reflection. Reflection is absolutely great -- unless you use it. It can make debugging and tracing through code a nightmare. But in the right hands, it's an awesome tool.
I've also found that coding in C# is much more like coding against a black box. But that may just be my C++ side complaining. Even coding against Microsoft's MFC you can still step through the MFC source if something crazy is happening. If something funny is happening in the .NET libraries, you can only hope that Google has something to say about it.
I'm gonna have to agree with JM. I'm dismayed at the generation of "programmers" brought up on Java or C# who have no idea how computers work at all.
Stuff like .NET breeds bad programmers.
"If you use arrays instead of vectors or manually call new and delete instead of using RAII, you will have problems. Your code won't even be exception safe."
In reality, every non-trivial application will use several libraries/frameworks with their own mutually incompatible auto_ptr, string classes, collection classes, and exception classes, or OS APIs (written in plain C), and none of these techniques will actually work.
I'm with the commenters on Delphi. Go Delphi and you can be more productive than C++, without the performance hit and installed framework requirement of C#. (Yes, I know you can ngen your C# assemblies, but I can write inline assembler directly next to my Delphi code in the SAME program when I really want speed).
Not to mention that the VCL has been stable and enhanced for over 10 years, while Microsoft has retired VB6 and now Windows Forms in that time. So once WPF becomes old hat in 5 years, everyone can migrate all their code to the *next* greatest thing.
Go Python, Perl, Delphi, or anything else that doesn't lock you in to a vendor that locks you out.
not to derail, but where was that picture taken?
How long do you have to code in C++ before you stop worrying about language and compilation and move on to actually thinking about the problem you're trying to solve? How much of the C++ spec and Stroustrup's book do you have to memorize before you stop getting nailed by "gotchas?"
"Think buffer overruns, memory leaks, double frees, mismatch between allocator and deallocator, using freed memory, umpteen dozen ways to trash the stack or heap"
are problems with C++? I think you meant C.
I program in C++/C all the time. Those problems don't exist in C++.
Tried C# a bit; yeah, it's easier to use and has a few improvements. But do I want to use a language developed by just MS? No way. Got burned by the hideousness of VB too badly, and C# looks too much like VB -- yeah, I know, a lot of baggage; what can I say?
For sure, dealing with old pre-modern libraries in C++ is a major pain. The only time I've ever had to reinterpret_cast a pointer to void* in C++ was to call Fortran libraries. Using Qt, an otherwise wonderful library, requires either a lot of custom wrapper classes or calls to new, which are potential memory leaks or worse in the absence of a C++ garbage collector (that'll be in the next standard, FYI).
But there are standard techniques for dealing with old libraries and with libraries that have their own ways of doing things. Smart pointers, exceptions, and RAII are still best practice even when dealing with crufty libraries.
But all that strengthens my main point: language choice is all about the available libraries. Language differences are trivial in the face of available code reuse.
Old non-standard compliant libraries are a problem in any language that has been around long enough to have a wide assortment of libraries. Perl has the same issues. One day even Ruby will have problems in this regard. It's something that you have to deal with.
I also agree with Tod McKenna. A poor programmer can write bad code in any language. A good programmer can write good code in (almost) any language.
"Safe" code is entirely the author's prerogative. E.g.:
* A program written in C++ overruns an array... and an exception is raised by the O/S which kills the program.
* Same program written in C# overruns the same array... the CLR raises an exception which kills the program.
In both cases the programmer was responsible for the failure. In both cases the programmer had the same (better) choices available (prevent an overrun and/or catch the exception).
But what's the difference to the user?
The user relies on the programmer who wrote the program... if that programmer fails, the user then relies on the programmer who wrote the next level (the host environment)... this chain continues until either a good programmer is encountered or the system itself crashes.
Personally, I love these new-age languages; my salary just keeps going up (currently more than double the average full-time C# developer's). Of course, I don't usually write C#... C# doesn't run on most platforms (and probably never will -- some platforms I target don't even have a C++ compiler, and I have to write in original ANSI C (not even // comments)). And most new programmers can't write C++ (let alone ANSI C). It's fantastic.
the Big Dig isn't something on which you want to model your successful API or project. It has massive (cost) overruns, (water) leaks, and (ceiling) crashes.
That's exactly why I chose it, actually.
You need the intestinal fortitude to *keep* digging until you make it right.
If something funny is happening in the .NET libraries you can only hope that Google has something to say about it.
How so? You can decompile the .NET libraries, which are written in .NET, to see what's going on under the hood. I read blog entries about this all the time.
"Consider your project a Big Dig"
It needs to be ill-thought out, horribly inconvenient to everyone in the area, and go tremendously over budget?
Surely there is a better project to compare your project to?
JM: "Bottom line. You either know the language or you don't. If you know it, you're a hire. If not, you write articles like this. C++ is a fine, very fast and efficient language that is perfectly safe when used properly by a knowledgable and skilled developer."
Typical e-penis-waving comment that comes up whenever C++ is discussed. Are you saying any needlessly complicated technology is perfectly fine as long as a few people can use it efficiently? Perhaps the software you write is just as bad, but it doesn't bother you as long as a few of your users are comfortable with it?
The bottom line is either you produce good software or you don't. You're (i.e., the majority of programmers are) much less likely to do so in a bug-prone language like C++. Do you really think the bottom line for your customers is whether their application was written in C++?
Time and time again, the C++ supporters are saying "C++ is a great language, but one needs to invest the time to learn it properly." And they are basically correct. And that is precisely why C++ is not the best general purpose language choice today.
Java, .NET, and several other languages don't have the learning curve of C++. A junior developer can be productive sooner. A team of junior guys with a smattering of highly experienced developers and a good architect or two can be very productive. The junior guys have opportunities to grow.
On a C++ project, the entire team needs to be senior. It is difficult to generate senior developers if you can't employ junior developers. So you end up hiring junior guys and letting them make mistakes, because you just can't get enough senior guys.
In short, the path of the C++ developer is tackling difficult tasks, making mistakes, and learning not to repeat them. The path of the Java developer is performing well on simpler tasks and progressing to more complex ones.
Frank Wilhoit: "C++ is only dangerous to people who have only half learned it."
What makes it worth learning when the process of becoming an expert will take much longer than for any other programming language? Simply to pat yourself on the back that you can do things the most complicated way?
I haven't met a perfect programmer yet and despite writing code for one kind of computer or another since the early 1980's, I still have bugs crop up from time to time. Doesn't matter if it is C#, C++, C, AppleScript, Perl... bugs happen. Good designs take care of some of the problem, good practices take care of some of it, automated testing can eliminate others, and code reviews can definitely flush out problems.
C# is different than C++ and each has its merits and pitfalls. Two years ago, I'd have picked C++ hands down for most any program. Now, C# is my go-to language, but I'm not married to it. It depends on the problem to solve and the domain.
I'm not all that wowed by JM's "500-1000 lines of production code" comment. That really isn't all that much code, frankly.
But, the ideas and expectations that JM wrote about - I generally agree with them. We *should* know what we are doing. For most/many of us, this is a career, not just a hobby. Folks pay good money for me to write code for them, so their expectations *should* be high. So my skill set *should* be high too. I'm going to make a bug or two - that is life, but I ought to be proficient and invested in my choice of work. Otherwise, I should be doing something else.
I have to agree wholeheartedly with JM. These days it is popular to bash C++, especially by those who do not have any practical experience in the language to speak of. Never mind that the problems C++ is often accused of are easily and readily solved (using smart pointers when allocating on the heap, using std::string instead of char* arrays, using standard containers to manage memory, etc.). The fact of the matter is, C++ doesn't mesh well with the average web developer mentality or make it easy for the toy programmer with little understanding of software engineering or computer science fundamentals to limp on by, and therefore it will continue to be the subject of such posts and articles.
A person complaining about C++ with little experience (or time spent learning the language) is like an American complaining about being unable to hold an intelligent conversation in Chinese after learning only a few basic phrases. One obviously can't expect to be _proficient_, _efficient_, and _effective_ without putting in the effort and time required. Those who don't are usually contented to learn pig latin, and spend the remainder of their days communicating on a lower level, with a smaller vocabulary. Obviously Chinese would take more effort, dedication, and perseverance to fully learn and grasp, but it would open the door to wider and more effective communication. Likewise, C++ gives you the ability to solve MORE problems and gives you more choices on how to do so than other languages, allowing you the programmer to come up with the most elegant solution. Try modifying import address tables for Win32 processes in a language without pointers--it can't be done!
C++ is not the right tool for every situation. For web development, I personally choose python. I don't, however, take this as license to spend my time criticizing C++ for web development--I have better ways to spend my time.
For those who don't need or want C++, fair enough. Please, however, stop ignorantly bashing it so that those of us who know better don't have to listen to it.
I'm with Tod McKenna too. Nice point, Jeff, but what about entropy? There is no such thing as a Pit of Success. Success means hard work, means real thought. Actually, building such a pit, you say yourself, takes hard work. If the user of your API isn't solving their problems on their own, one of the two is lacking, but not necessarily the API. Don't dig deep and wide pits for people who can't see in the dark.
Ironically, one can ask Andrew R the same question: what about entropy? A buffer overrun in C# will only raise an exception, while in C++ it might do no apparent damage for hours and hours, a vermin feeding on tiny mistakes, corrupting your data and building up your worst hangover.
JM put it very well (JM on August 30, 2007 06:32 AM) regarding C++ for technical programmers, but possibly not for business programmers.
What he described is a technical language for professional technical programmers, as contrasted with business programmers. Technical programmers create technical software, e.g. OS's, drivers, libraries, utilities, compilers, middleware, and maybe even sophisticated applications such as the components in an "office" bundle.
Business programmers create line-of-business domain-specific applications. Their main professional competency is knowledge of the business domain.
If one obligation of all programmers is to, "Do no harm," then the expectation for these two groups and the languages they use are different.
It is fair to expect technical programmers to operate at the level of having read and absorbed the books JM described.
However, I think that business programmers can validly prefer languages that are easy to learn and easy to use. I have always valued the characteristic of some languages (and libraries) that lets you start with a minimal amount of knowledge and use more as you learn. APL, compiled QuickBASIC, spreadsheet tools, and many others have this characteristic.
I also appreciate simple syntax. Compiled QuickBASIC meets that characteristic, yet it is almost as powerful as C, except for function pointers. Visual Basic added even more powerful capabilities over the years, reaching a peak with VB 6 but going downhill from there.
For business programming, it is reasonable for programmers to NOT attain full assimilation of the entire syntax and library. With a good IDE, they can exploit the help and online doc for anything they think should be doable, but never learned or don't recall how.
Even for technical programmers, with so many categories of APIs within the operating system, application frameworks, communications, graphics, databases, etc., you cannot expect to know everything you will need on the next project. In a web-services XML world, you cannot expect to know all the schemas and services, even those documented at http://www.oasis-open.org/
It would be interesting to see if there have been any attempts at quantifying the resulting software quality of C# projects vs. C++ projects. Has the C# attempt to reduce 'gotchas' actually resulted in the production of less buggy products?
I've recently had to switch from a computer game project involving a great deal of Python scripting to one that is primarily C++ programming. Since someone will ask, the cause of the switch wasn't performance so much as maturity; we needed a battlefield-tested engine and didn't really have time to write a full toolchain in the engine we'd been using.
As a game developer, I can speak towards the thought that "Everything is written in C++." While it is very true that you're going to find C++ somewhere in the code stack, mature game engines utilize a mixture of low-level and high-level languages to get the work done. A good craftsman has a variety of tools in the belt and is familiar enough with all of them to know which one to pick up for the task at hand. It is as foolish to write a tight-loop rendering algorithm where every ounce of performance must be completely managed by the developer as it is to write a complicated high-level game-state machine in a language that doesn't even have a concept of a string primitive. Ideally, you arrange smooth boundaries between the sections of the program that are written in different languages and do the work with the tool that suits it best.
On the specific issue of C++: The biggest advantages C++ has as a tool are direct memory management, sufficient type safety to squash the most obvious types of mismatch errors, a ton of libraries written with the language in mind, and a relatively sparse feature set that makes it easier to build functionality on top without worrying about how the new functionality interacts with other aspects of the language. The disadvantages include a general sense that the 'correct' codepath is the one that takes more keywords (i.e., when would I ever not want a virtual destructor? How often do I actually want to fall through a case statement? Why do I have to add code to two or three files to add one function?). The last advantage is also possibly the largest disadvantage; with so few low-level features defined in the language, many libraries aren't compatible with each other (the STL should have standardized some of these features, but its implementation leaves much to be desired).
I think there is room for a language such as Q that draws the line in the sand between "language feature" and "library option" closer to the high-level languages; unfortunately, without the library support to offer actual functionality on par with C, new languages face an uphill battle. Allowing for backwards compatibility can make things worse; half the problems I have with C++ I'm convinced are design decisions made to ensure C compatibility. Conclusion? If you want to get work done, know C++... But feel free to use a more specialized language when the problem calls for it.
"For example, the GC. Yes, it does a good job most of the time, but when it doesn't, it REALLY doesn't. And there is no alternative. If you find that the GC isn't handling memory well enough for you, too bad. Rewriting your app in C++ is your only alternative, but by the time you find you have serious memory problems it's too late for a rewrite. So you pepper the code with GC.Collect()s, it gets even slower, but it 'works'."
I recommend you read this: http://msdn.microsoft.com/msdnmag/issues/1100/GCI/
Lots of C++ developers bash the .NET GC simply because they don't understand it. In short: you should *NEVER* pepper your code with GC.Collect() simply because you *THINK* your app is using too much memory. The CLR allocates lots of memory when it's not needed by other apps, to minimize the number of iterations of the GC algorithm -- but releases it when memory is tight.
""Consider your project a Big Dig"
It needs to be ill-thought out, horribly inconvenient to everyone in the area, and go tremendously over budget?"
And kill innocent users.
Worst analogy ever.
"I'm gonna have to agree with JM. I'm dismayed at the generation of 'programmers' brought up on Java or C# who have no idea how computers work at all.
Stuff like .NET breeds bad programmers."
Chris on August 30, 2007 07:16 AM
And attitudes like yours breed bad reputations for ALL programmers.
I don't know C or C++ beyond very basic console programs. I never needed to learn it because I was busy being paid to write things in VB6/VBA/Office. When .NET came out I picked it up because I liked the mix of 'RAD vs. Nuts and Bolts' -- so I'm sorry but I disagree that it "breeds" bad programmers.
What breeds bad programmers are arrogant, snot-ass blowhards giving fellow professionals a hard time because they judge them by what programming language they use.
The pit of despair and the pit of success are one and the same (at least in programming).
Honestly, I have seen bad code in any language you can mention. Yes, depending on which language you are working in, you are likely to end up in a different kind of hell, but if you write bad code, they all lead down some pit of despair.
The belief that you can solve the problem of bad code by designing the perfect language is, IMHO, misguided. Memory management in C++ is not hard if you have a clue about how to program (RAII springs to mind), neither are most of the common problems you encounter in any other language.
The real problem is bad programmers. People who do not understand what they are doing and cannot seem to be bothered spending the time to learn their field and to keep up with it. Unfortunately that seems to be most of the people in the field.
How about 'the Valley of Success'? An easy saunter into a fertile land of abundance...
I direct this at the majority of responders of the "C++ is fine. lrn2code n00b" ilk.
Yes, C++ is a badass language and deserves respect; I don't believe Jeff is debating that. He's highlighting how easy it is to shoot yourself in the foot in so many different ways if you _DON'T KNOW_ what you're doing in C++.
And you're all saying, "Yeah, but it's fine if you _DO KNOW_ what you're doing."
So tell me, how many _years_ did it take you to get from _DON'T KNOW_ to _DO KNOW_ in C++? (And if you say you _mastered_ C++ in 1 year, I'd like to see what else you lied about on your resume.)
I'm just going to ballpark and guess it's an order of magnitude longer to get competent in C++ __safety__ than in C# __safety__. And in that time difference, how many more features*, tests, iterations, and overall quality improvements can the C# folks get _safely_ out the door while you learned not to thrash the living hell outta memory?
And eventually, once the C++ and C# coders are on equal footing skill- and safety-wise, how much are the milliseconds of execution time saved by the C++ guy worth, compared to the amount of troubleshooting and extra "thinking through the plumbing" time incurred?
These comments smack of Jeff's infamous fizzbuzz post:
Where instead of commenting on the sad state that lots of Johnnys can't code at all and what we can do about it, we saw every code monkey and their uncle try to prove that they can in fact code (a ludicrously easy program at that). Grats, people, grats on your swing and a miss.
A very good post nonetheless.
* We're talking everyday Bus Apps here people, not graphics shaders or pacemaker timers.
I think all the people blaming C++ for its "unsafety" are talking about programming in C++ in C style. I mean using manual memory allocation, printf-style functions, pointer arithmetic, longjmps, and maybe even gotos. Actually, you can program in C++ in a very different way. You can use the STL, Boost, the RAII concept, and the full power of strict type checking. It's your choice.
The last program I wrote in C++ (~8 KLOC) passed valgrind's memory checking without complaints, meaning there was no memory misuse.
C++ allows you to write "unsafe" programs. I think there's almost no possible way to write an OS kernel or a driver in a "safe" way. But you're not obliged to write your payroll program the same way an OS kernel is written.
Greg (the guy who designed Q),
Have you looked at the more modern implementations of BASIC? They seem to follow the design concepts you've listed.
I've been programming for a little over 5 years now as a hobby. I tried to start with C and ended up getting lost and confused. A QuickBASIC clone ended up teaching me all I needed to know about good programming practices, and my short time with C kept me from relying on garbage collection. Over time I've grown to appreciate the simple complexity of C; however, I still can't get over having to include libraries and stick a semicolon at the end of every line, so I guess I'm stuck with BASIC forever.
Wow. Not enough traffic, so you start a jihad against C++? Kidding!
I've been programming mostly C++ for about 12 years now, with the occasional bit of VB and Perl. Quite frankly, I've never had problems with memory management. I don't know what others have done that causes them so many headaches, but I've just never had many problems with that.
So. I think a large part of this is a matter of attitude. I don't want a language to protect me from myself. I don't want a language that restrains me from writing bad code; I like writing good code and I don't like writing bad code, so why restrain me from it?
I get the feeling that a lot of people don't care that much about writing code well. I get the feeling that they're really just concerned with getting the code written, not caring how ugly it may be. If you're one of those people, and using Java or C# or whatever helps you write code that does what you need it to do right now, more power to you! But don't rip me for choosing to use C++ just because you can't use it safely and effectively. That's sort of like ripping on someone who can drive a quad-axle dump truck because you can only drive a Toyota Camry with an automatic transmission.
I didn't master C++ in a year. I still haven't mastered C++ after 14 years working with it. BUT I can use it very effectively to write some very good code, and I think that's a testimony to its power and flexibility.
C++, perhaps more than other languages, is like a giant salt shaker with big holes in the top. Sprinkle with care, because sometimes you pour in more than you want or need. Just like too much salt, you can't take it out, so you have to throw it away and start over. Truly a salt pit of despair.
With C++ it's always easy to fail, but living on the edge is quite exhilarating at times. So although I no longer write C++ every day, I still spend a few days a month doing stuff in C++, just for the thrill!
I just wrote something similar about a tiny program I recently wrote (systhread.net/texts/200708etucross.php). I was writing a small piece of software designed to fit into another software system. I had come across a change I needed to make and thought I might have to do some very serious voodoo to do something as simple as determine a file type. It turned out that the software system I was writing for already had the capability (and many more that I have not explored yet). The pit of success here would be learning to *rely on the pre-existing software system*.
The right language for the right job.
C++ = Great for Drivers, low level functions, app isolation
SQL = Good for data retrieval, transformations, storage
Texas Instruments BASIC = good for animating the letter '0' around the screen (as long as it's in green and black).
Choose your battles developers, don't let them choose you...
Your lettering is all greyish and fuzzy -- can you fix that, please?
There are no pits of success. It's one of those second-law-of-thermodynamics things: they just don't exist, anywhere. Not in programming, not in any part of life. All pits are pits of failure. If you think you're falling into a pit of success, that's just because you haven't hit bottom yet.
Matthew Cuba: "I'm not all that wowed by JM's "500-1000 lines of production code" comment. That really isn't all that much code, frankly."
Yeah, sorry -- I accidentally left out "per day". Over sustained periods of time (like 1-year projects), that *is* a substantial amount of code.
And if you keep up with certain open source projects, you'd recognise the names.
Is the Microsoft Common Language Runtime written in C++?
"A program written in C++ overruns an array... and an exception is raised by the O/S which kills the program."
Except if you do what most people do - run without bounds checking. The problem isn't applications getting killed, it is applications corrupting their own memory area and doing bad stuff as a result of that.
"Personally, I love these new-age languages; my salary just keeps going up"
And with the added knowledge I gave you above, your salary should keep going straight to the moon.
1. C# is a nice, sleek car, and can get you where you want as long as you stay on the road. C++ is an off-road 4x4 and it can get you everywhere, even through the woods if you want it to. (Not my metaphor, but it's a good one.)
C++ is what it is. It has been and is being used VERY SUCCESSFULLY on many, many projects. If you want C#, you know where to find it.
2. One thing that Jeff's article claims (and Lippert's essay hints at) is that "the C++ pit of despair" is a doomed place, full of evil hungry bears that will maul your precious project and cause it to fail. The "pit of success" will cause it to succeed.
However, the first and main lesson of Peopleware (based on their actual research, I believe) is that projects fail not because of technological reasons but because of PEOPLE reasons. As long as your programming language choice is good enough, your project can succeed or fail based on other criteria. This has also been my experience.
Is there some research that you can point me to that shows that C++ projects fail more than C# projects? Or succeed more? My bet, based on what research I have heard of, is that no, it doesn't much affect the success or failure of the resulting project (though it does affect the quality of the result).
3. I like C#. I like it a lot. However, it has its own warts. Why is event a keyword? It most definitely could be implemented as part of the framework, as a class.
Why are "empty" events implemented as null? It's like an empty list being implemented as a null object. Why should I have to check if an event is null all the time? Isn't this a place where the ACTUAL LANGUAGE SPECIFICATION makes it easy for me to fail? Wouldn't it be more robust for the compiler/framework/whatever to check that for me, in a thread-safe manner?
I am sure there are good reasons for this. REALLY GOOD REASONS. I give credit to C#'s designers. They are smart people who have done a mostly good job.
What annoys me is that Lippert doesn't give the C++ designers and committee the same benefit of the doubt.
Just because Lippert can't think of a good reason for a certain wart in C++, doesn't mean a good reason for it does not exist. Give some credit to the people who design the language. They are smart. VERY VERY smart. Assume they are at LEAST as smart as you.
4. I'd take an accomplished programmer who has invested a few years in mastering a language (ANY language, though C++ requires more time than I guess some others) than a bunch of junior programmers who think they can write good code because language X has an easy learning curve.
5. I agree that API, tools, and languages should if possible make it easy to do the right thing, harder to do the wrong thing.
I just think that this has to be balanced against other requirements.
When you balance it against, for example, C++'s primary tenet, "you only pay for what you use", it turns out that you have no choice but to allow the user to do the wrong thing.
That's why ALL methods are non-virtual in C++ by default. That's why a destructor has to be explicitly made virtual, even if that means you might get a resource leak if you forget to make it virtual.
I just don't see why the length of time it takes to learn a language is related here.
C++ does take years to master, and I believe about 1-2 years of real effort and experience to be competent in. I don't see a problem with this. It takes years to become an accomplished rock climber. It takes years (beyond actual studying) to become a good doctor. It takes years to become an accomplished plumber, I am sure, instead of the guy that SAYS he's a plumber but can only fix the 5 most common problems and has no idea how plumbing works.
I don't want people who aren't competent to fix my car, my teeth, my plumbing. Would you like to cross a hanging bridge designed by a new architect who just finished school and built by a company that's new in the business (but don't worry, they use a new method of building that takes less time to learn)?
I shall also link to Peter Norvig's (you might have heard of him) article, which bemoans a related phenomenon.
One of the points he makes is that, according to some research, it takes about 10 years to become an expert in anything.
On a different note, does anyone know if Office 2007 is written in C# or C++? (I think we can rule out Java :)
I just wanted to throw in my two cents. I've been programming for about 25 years, 13 of which professionally. C++ is my favorite language; I like the low-levelness of it, the reminders that there is actual hardware underneath it all that is being manipulated. My go-to language for something that needs to be done now (and I mean more in the one-off utility type of programming than an application one might sell and maintain) is C#. Unless of course I need to parse gigs and gigs of text, in which case I reach for Perl. I guess what I'm getting at is, my goal is to use the right tool for the job (as many of your readers have alluded to).
I agree with the 10 years to become an expert. Some tools just aren't for beginners (and I mean beginners to the language, not to the ability to learn one).
Argh, I forgot to say:
I do agree whole-heartedly with the concept of setting yourself up for success, which I think this post was originally about :-)
Mathematically speaking, with a tool that is easier to use and easier to teach, you've got better odds. But notice that this takes the human out of the equation, and the human makes quite a large difference.
It's a little weird that you're comparing an old language with a new one. This is a bit like comparing assembler with C.
Then again, back in the old days I enjoyed assembler coding. There was a purity that seems lacking today. I am such a nerd.
"The problem isn't applications getting killed..." - tcliu
Wow, congratulations on missing my point so perfectly.
If my point was the target, you managed to shoot the instructor behind you (I can use inappropriate metaphors too).
Still, the finest, most elegant, most readable, most maintainable, most Vista-enabled, best-community-supported, least-prone-to-memory-leaks-and-buffer-overruns way to develop unmanaged native Win32 apps at the present time is Delphi (Object Pascal).
How C++ and its ilk ever became popular is incomprehensible. Finding a job in Delphi, however, is another matter. Much excellent software has been written in it, e.g. Skype. If you want to struggle on with **++, that's your choice.
I love the C# pit of success.
First they erect the impenetrable barrier of garbage collection to prevent anyone falling into the pit. Then they write MSDN articles explaining why one algorithm is fast and another algorithm is slow (even though both produce identical output), because one works in harmony with the garbage collector's implementation and the other works against the garbage collector -- thus being less efficient.
And then someone goes and does some experimental work on new garbage collection algorithms and invents a better system -- only the previously referred-to article isn't quite applicable any more.
Abstractions are essential to any successful project (because the alternative is hand-coded machine code, given that even "assembler" is an abstraction, if not a very high-level one) yet no abstraction is a guarantee of eternal bliss and success.
Next some idiot will claim that Java and C# are wonderful because they "don't have pointers" - but I've seen the disasters that people can create in "safe" languages when they don't understand things as simple as the difference between pass-by-reference and pass-by-value.
Ok, it's not the pit of success I love. It's the look on people's faces when they discover those 6-foot thick 18-foot tall concrete and steel safety walls were actually just wallpaper over a frame made from jello.
Yes, C++ has many varied flaws. We know. C# also has many varied flaws. Deal with it.
I've been around for a while and I've used my share of languages and I've found my pit of success. It's AutoIt.
I started programming for fun by learning C++, and I moved to Java and then to C# with .Net 1.0. I really enjoyed the switch to Java and then even more the switch to C#. But with the changes to C# in .Net 2, 3, and 3.5 it started to grow more complicated, with little bits of source code all over the place. So I stopped and had some kids.
For work we needed to automate the data migration from old computers to new computers for 5000 users, and I thought about using C# or VBScript when a coworker mentioned I should look at this scripting language called 'AutoIt' ( www.hiddensoft.com ).
Wow, talk about a pit of success: it's small, powerful, and easy to use, has its own IDE with code completion, can do COM automation if needed, and has a very effective and comprehensive help file. In a week we had the data migration script written, tested, and ready for deployment. That was last year, and now with Vista looming we'll need to do another migration soon, so we dusted off the script and gave it a go; it worked first try! What was going to be a week or two of effort instead turned out to be a day of testing. Awesome!
The memory footprint of AutoIt is quite small as well, so it's appropriate for making small applications that run in the systray. For example, if you are a weewar.com player then you may have seen weewarify, written by Bruce Kroeze; it's great and it works, but it consumes about 63Mb of RAM. 63Mb (30 in RAM and 33 in virtual memory) is a pretty heavy tax for something that just checks a web page every five or six minutes; add to that an apparent memory leak of about 100kb per hour and it's even worse. Weewarify is written in Adobe's AIR, which I think is why it has such a big footprint.
I figured I could do better with C#, and I thought that this would be a great opportunity to get back into the C# saddle and figure out the split source file business. After a few days and about 45kb of source files I gave up, looked at my AutoIt icon on my desktop, and smiled. That same night I had my Weewar notifier working (7kb of source in 199 lines), and it only takes 504kb of RAM and around 8-10mb of virtual memory -- about 1/6 of weewarify, and no memory leak.
Thanks for being my pit of success AutoIt.
I'm always amused by C++ developers that always claim they're "good programmers". I guess the bad C++ developers are sequestered away somewhere without internet access. I've worked with a lot of arrogant C++ developers that claim they're "good" and man, the shit code I've seen from them makes me shudder.
C++ is the way it is because it's very popular, very useful, and old. One day C# will also be old. And it will also contain all sorts of stuff superseded by other stuff, and all sorts of features that supplant other features, and 1000 ways to do something instead of the 500 ways you can do it now, with two of those being the "proper" way and the compiler giving you warnings for the other 998.
I'm always amused by developers that always claim they're "good programmers". I guess the bad developers are sequestered away somewhere without internet access. I've worked with a lot of arrogant developers that claim they're "good" and man, the shit code I've seen from them makes me shudder.
I think that's true regardless of what language you're talking about. Are *you* a good programmer? Who says?
I find that guys that stick with C++ for the long term tend to be better programmers and the guys who can't really hack it, go for the flavor of the month languages. I'm pretty sure that 20 years from now C++ will still be in use. C# I'm not so sure about. It's pretty tightly tied to .Net, right? C++ isn't tied to anything except the general concept of a machine with addressable memory.
I program in C daily and I can't remember the last time I had any of the memory issues you talk about... it's called experiance :) Also the fact that you code a simple memory manager and it detects any would-be problems for you.
There isn't much excuse for poor memory management in C++ these days with STL collections and reference counted pointers.
I'm working on a reasonably large C++ project (around 10,000 lines to date) and there isn't a single raw pointer in it.
Quote Jim: "it's called experiance :)"
Actually, it's called "experience", but I agree with you.
Assembly is a fine, very fast and efficient language that is perfectly safe when used properly by a knowledgable and skilled developer.
And of course people who can't code in assembly are idiots and kid programmers.
I am ashamed that people who don't think about how the processor, interrupts, cache, memory, bus, DMA, etc. work the whole time they code are calling themselves programmers.
Of course I hate any kind of abstraction that simplifies the job that needs to be done.
It is unfortunate that US programmers are somewhat snobbish about using C++: if you don't, you aren't a real programmer. I have run into a couple of Java programmers that feel the same way.
Delphi is without a doubt the most productive native compiled tool on the market. It is unfortunate that Codegear hasn't come up with a way to deploy their code to the web in a robust manner a la Java and ASP.Net.
One person discussed the issue with memory leaks. Delphi is set up with objects having owners. The owner of an object automatically cleans up its own children objects. There are just a few object types that don't support this, so it is very difficult to write a memory-leaky program in Object Pascal. Free Pascal really opens up the world of multi-platform support.
Things are getting back on track for Delphi and Object Pascal.
"One person discussed the issue with memory leaks. Delphi is set up with objects having owners. The owner of an object automatically cleans up its own children objects. There a just a few object types that don't support this, so it is very difficult to write a memory leaky program in object Pascal."
Not quite true from my Win32 Delphi experience. TObject has no ownership concept. Only when you get to TComponent do you get ownership, and then only of other TComponents and inherited classes. And the typical VCL container, TList, is not typesafe, as it holds pointers to TObject. As TComponent is mostly used in the visual side of the VCL, you only get automatic cleanup of form objects.
Ownership, reference counting, and object control are built into the depths of .NET and are far more automatic in preventing memory leaks.
A similar thing can be done in C++ using the Qt framework. QObject can own lists of other QObjects and will get rid of the children when the destructor is called.
As for .NET and associated languages, I'm not impressed. I write high performance C++ applications and made that decision 10 years ago after finding out Delphi just didn't have the speed I needed (and was also very hard to interface to third party libraries, just like C++ Builder). I can fully understand why many using C# need to resort to GC.Collect. The second link at the top of the following page helps explain it. Kind of funny how C# and .NET fanboys will defend it to the death even when it clearly has problems.
Wow...so many topics in one comment chain, where to begin?
I started programming in VB when I was 14, C++ when I was 16, and .NET when I was 21 (I am now 25). Most of my professional experience has been in .NET, but in terms of personal experience I have more in C++ than .NET. Perhaps I am not what you would call a "senior developer" in any language, but I do have a considerable amount of experience in both. My general take is that, yes, it's slightly easier to shoot yourself in the foot with C++ than .NET, but not excessively so. However, .NET definitely has a shorter "0 to productive" learning curve. Regardless, I wholeheartedly agree with Joel Eidsath that "...smart programmers can create good products in any language and crappy programmers can create crappy products in any language."
As far as the concept of the "Pit of Success"... it's a nice concept, and there are some things in C# regarding that concept that I like (having to mark override methods to guard against "brittle base class", for example). In fact, this concept is my biggest argument against dynamic languages, whose "do whatever you want to however you want to" ideology is a recipe for disaster. However, my biggest concern is that the concept can be taken too far, to the point where the attempt by the language to protect me from myself just gets in my way. VB.NET is the worst example of the problem that I have ever seen; and .NET in general seems to suffer from Microsoft's arrogant assumption that they can know more about my situation from the castle in Redmond than I do sitting at my desk.
I also agree with Joel and Webview that the single biggest advantage of any programming language/platform is library support. The .NET BCL is the single biggest reason why I chose to begin my professional career as a .NET programmer instead of a C++ programmer. As old as C++ is, there are a lot of libraries written for it, for just about every conceivable function. But these libraries have almost nothing in common, and knowing one library does almost nothing to help you learn the next one. With .NET, I know that no matter where I go, I have access to the BCL; and I also feel like third party libraries written in .NET are more consistent with each other than C++ libraries (probably because many .NET programmers will intuitively pattern their own libraries after the BCL, so there is a common point of reference). Yes, C++ has the STL, but it is pretty limited in scope, and although it uses classes, it is not really object-oriented, and therefore doesn't fit very well into an object-oriented environment (Stepanov, the co-creator of the STL library, has said "I find OOP technically unsound."; http://www.stlport.org/resources/StepanovUSA.html).
Finally, regarding GC.Collect... James, the link you reference was written by someone who admittedly did not understand how garbage collection works, and most of what he said was based on wrong information and faulty assumptions. The fact is, the VAST MAJORITY of the time, GC.Collect will not help, and can in fact make things worse by disrupting the GC's auto-tuning functionality. However, it exists for a reason, and if you can demonstrate that your app doesn't work without it but works with it, then no one has any right to argue with you. However, if you find yourself in this position, I would encourage you to open a dialogue with Microsoft; I am fairly certain that they would take a significant interest in a situation in which GC.Collect is preventing OutOfMemoryExceptions (contrary to all of their published material on the subject).
I am the maintainer and team leader for a 300k+ loc C++ project. If it works, it's not because of the language; it's because of the tests we run. If it fails, it's not because of the language either; it's because of the tests we haven't run.
I don't believe that a software product will be more reliable just because it's written in a higher-level language. It's reliable and stable because almost all of its functionality has been tried or even stress-tested.
Let's just face it: the "openness" of C++ has made it into a general-purpose, all-features language, adopting multitudes of bad practices, including the above-mentioned glitches.
There are other, better-planned languages out there, such as Delphi.
In fact, I often complain that I can't compile very bad bugs in Delphi!
Oh right, most "gurus"/developers out there think it is a toy language. Well, go on, make life harder for yourself.
Great post Jeff. This certainly makes up for the post on the registry.