April 29, 2005
In a recent post, Scott Koon proposes that to be a really good .NET programmer, you also need to be a really good C++ programmer:
If you've spent all your life working in a GC'ed language, why would you ever need to know how memory management works, let alone virtual memory? Jon also says it doesn't teach you to work with a framework. What's the STL? What about MFC? ATL? Carbon? All of those things use C++ as their base language. Notice I didn't say to take a C/C++ course at a university as I'm not convinced that a CS course will teach you everything you need to know in the real world. I said to learn C/C++ first because if you understand HOW things work, you'll have a better idea of how things DON'T work. How can you identify a memory leak in your managed application if you don't know how memory leaks come about or what a memory leak is?
The problem I have with this position is that it breaks abstraction. The .NET framework abstracts away the details of pointers and memory management by design, to make development easier. But it's also a stretch to say that .NET developers have no idea what memory leaks are-- in the world of managed code, memory management is an optimization, not a requirement. It's an important distinction. You should only care when it makes sense to do so, whereas in C++ you are forced to worry about the minutiae of memory management even for the most trivial of applications. And if you get it wrong, either your app crashes, or you're open to buffer overrun exploits.
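The kind of bookkeeping C++ demands can be shown in a few lines (a minimal sketch; the function names are mine, not from any real codebase):

```cpp
#include <string>
#include <cstddef>

// In C++, a raw heap allocation is the caller's problem from then on.
std::string* make_greeting() {
    return new std::string("hello");
}

// Forget the `delete` below and the bytes are lost for the lifetime of
// the process -- the classic leak. In a GC'ed language, the collector
// would eventually reclaim the unreachable string on its own.
std::size_t use_and_free(std::string* s) {
    std::size_t n = s->size();
    delete s;   // manual memory management, done by hand, every time
    return n;
}
```

Even this trivial pairing of `new` and `delete` is exactly the minutiae the managed framework abstracts away.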
You may also be familiar with Joel's article on the negative effects of leaky abstractions. Bram has a compelling response:
Joel Spolsky says
All non-trivial abstractions, to some degree, are leaky.
This is overly dogmatic - for example, bignum classes are exactly the same regardless of the native integer multiplication. Ignoring that, this statement is essentially true, but rather inane and missing the point. Without abstractions, all our code would be completely interdependent and unmaintainable, and abstractions do a remarkable job of cleaning that up. It is a testament to the power of abstraction and how much we take it for granted that such a statement can be made at all, as if we always expected to be able to write large pieces of software in a maintainable manner.
It's amazing how far down the rabbit hole you can go following the many abstractions that we routinely rely on today. Eric Sink documents the 46 layers of abstraction that his product relies on. And Eric stops before we get to the real iron; Charles Petzold's excellent book Code: The Hidden Language of Computer Hardware and Software goes even deeper. In other words, when Joel says:
Today, to work on CityDesk, I need to know Visual Basic, COM, ATL, C++, InnoSetup, Internet Explorer internals, regular expressions, DOM, HTML, CSS, and XML. All high level tools compared to the old K&R stuff, but I still have to know the K&R stuff or I'm toast.
What he's really saying is without these abstractions, we'd all be toast. While no abstraction is perfect-- you may need to dip your toes into layers below the Framework from time to time-- arguing that you must have detailed knowledge of the layer under the abstraction to be competent is counterproductive. While I don't deny that knowledge of the layers is critical for troubleshooting, we should respect the abstractions and spend most of our efforts fixing the leaks instead of bypassing them.
Posted by Jeff Atwood
I come from a C++ and CS background and I feel that both taught me good skills for dealing with software development in the real world. I see so many (and have interviewed so many) wanna-be-developers who know nothing about good programming paradigms, and only get away with working in the software industry because the MS .NET Framework takes the brunt of the hard coding work.
I'm not saying that all .NET programmers are not gifted (I am one), but it is a sad fact that the market is awash with so many people that really cannot code well. Back in the C++ days, when there was very little to help you when your application blew up because of a stray null pointer dereference, programmers and developers really had to know their trade.
Now that the .NET Framework is here, my job is much easier - I can create an application in a day that used to take weeks in C++. Nevertheless, my C++ days taught me the skills needed for writing good, efficient code, regardless of what language or framework I am developing on.
If I could count the number of developers who write code like:
string a, b, c, d;
// Initialize a, b, c, and d.
string result = a + b + c + d;
Now, see how often you see this in a C++ app.
Isn't "all abstractions leak" something of a corollary to Gödel's incompleteness theorem? I don't have the math skills to prove it, but it sounds right anyway. They leak not because they need to be fixed, but because they are abstractions - it's an unavoidable aspect of their nature.
I love and respect abstractions - they're the very foundation of human reason and knowledge. But there are certain things that can't be done within even the best abstraction - like picking which abstraction to use for a given context.
Knowledge of the abstraction underlying the one you are using, and the one below that, etc. (turtles all the way down) is very valuable not only for troubleshooting, but for choosing which abstractions to use when, or how to make different abstractions work together.
There's a balance between time spent learning your chosen abstractions, and time spent learning underlying, less abstract abstractions - determined by what you need to get to your goal. No one can possibly know all of it, but dogmatic reliance on either only the top layers or only the bottom layers is irrational.
I agree that experience is a blanket positive. A developer with 5-6 years in C++ before moving to C# will, all other things being equal, be superior to a developer with only 3 years of C#. I'd never propose that having a solid CS background is a BAD thing.
However, there are some negative aspects of "peeking under the covers".
1) It takes a lot of time. I feel a developer has to dedicate two solid years to .NET to simply become competent in the framework as a whole. It's massive. And getting 50% larger in 2.0!
2) It can teach you bad habits. As Jon Galloway says, "The point here is that modern programming is moving towards Domain Specific Languages (DSL's) which efficiently communicate programmer intent to CPU cycles. C is not a good prototype for any of these DSL's."
Smart developers will know what to un-learn (COM, anyone?) to make room for the improvements of .NET.
Rob - I'm not sure I completely understand the point you were trying to make with your code fragment above. "string result = a + b + c + d" is a perfectly valid statement in C++ assuming you are using std::string.
As a .NET construct, it may or may not be reasonable to do such a thing given the existence of the StringBuilder class and its superior performance for concatenation.
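For what it's worth, both sides of this exchange can be sketched in C++ (function names are mine): `operator+` compiles fine on `std::string`, but each `+` may allocate a temporary, which is roughly the inefficiency StringBuilder exists to sidestep in .NET.

```cpp
#include <string>

// Rob's fragment, verbatim: perfectly valid C++ with std::string,
// but each `+` may build an intermediate temporary string.
std::string concat_naive(const std::string& a, const std::string& b,
                         const std::string& c, const std::string& d) {
    return a + b + c + d;
}

// The StringBuilder-flavored alternative: reserve the final size once,
// then append in place, avoiding the intermediate copies.
std::string concat_reserved(const std::string& a, const std::string& b,
                            const std::string& c, const std::string& d) {
    std::string result;
    result.reserve(a.size() + b.size() + c.size() + d.size());
    result.append(a).append(b).append(c).append(d);
    return result;
}
```

Both produce the same string; the difference is purely in how many allocations happen along the way.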
I'm currently a full time .NET developer, and I find having a background in C to be rather helpful. I recently went back and "relearned" C since I never really "got" it the first time around, and noticed that the concepts have helped me out while writing C#. Specifically, I have a much better understanding of when to pass objects by value or reference, and delegates. It's probably not absolutely mandatory to understand C, but all of the best C# developers I know do.
Of course, I very much prefer a GC'ed environment to one without, and I would rather poke myself with a hot iron than create a full application in just C.
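The value/reference distinction this commenter mentions is easy to show in C++ (hypothetical functions, a sketch only):

```cpp
#include <string>

// Pass by value: the callee gets its own copy, so the caller's
// string is untouched no matter what the callee does to it.
void shout_by_value(std::string s) {
    s += "!";
}

// Pass by reference: the parameter aliases the caller's object,
// so the caller sees the mutation.
void shout_by_reference(std::string& s) {
    s += "!";
}
```

The same distinction surfaces in C# as value types versus reference types (and the `ref` keyword), which is presumably why the C refresher paid off.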
Well, my point originally was that learning C/C++ as your first language would make it easier to transition to a GC-enabled language like Java or a .NET language. Not that you can't be a competent programmer unless you learn C/C++, but that you end up being a better programmer by learning more about how the abstraction works under the hood. I stand by that.
My reasoning for that point is that none of the current frameworks are perfect. At some point, if you are programming in C#/VB.NET or Java, you will run into a situation where you have a memory leak due to some bug in the framework. Unless you understand how the framework is doing the heavy lifting, you can't really know whether it is failing, or how to refactor your code to work around the framework's failures.
Look at some of the .NET code being written out there. Look at the UrlBuddy at the Code4Fun site. It uses some PInvoke calls to watch the clipboard. That means that not only do you have to have some understanding of C/C++ so you can figure out how to pass the variables in, but you also have to know how to get around in the Win32 API. Same with RSS Bandit. It's not that they want to do things the hard way; it's that the only way they can achieve the functionality they want is by reaching into the API and finding a way. You can write around these limitations in the framework, but you lose some functionality.
Scott Koon needs a history lesson. Let's look at the purposes of C++ and C#.
C++ was invented with what I call a theory mindset. By this I mean it gives developers all the power under the sun: the power and beauty of controlling a machine. You could say that you ought not to use C++ unless you are an expert, for you can destroy your own foot and everyone else's who uses your code.
C# and .NET were invented with what I call a money mindset. The driving force is to get things up fast and working well, and to forget about everything that does not help in meeting those goals. Things like pointers and memory management are of less concern compared to the business need of getting things done.
Given those mindsets, I do not think it a must to learn C++'s intricacies if you really want to understand C#. The purposes of the languages are different, so they are designed differently despite how similar they appear. C++ can really help you appreciate the sweetness of garbage collection. However, C# users are paid to get things done, and that is what C# is all about.
> memory management is an optimization, not a requirement.
Sadly, no. I've just spent a few days on a C# application with a few cyclic references and forgotten event unsubscribes that led to the application grinding to a really slow state that did not quite halt over the course of a few hours. The cause? Somebody didn't get it and just assumed the garbage collector "would get it". Well... it didn't. Not to mention what happens when you leak too many native resources: at some point you have leaked the full set before a GC runs, and your app comes crashing down - in a completely unrelated operation.
.NET is no substitute for resource management - you need to learn it all the more now that your language somewhat allows sloppy work.
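Reference counting in C++ gives a concrete analogue of this failure mode (a sketch, not the commenter's actual code): two `shared_ptr`s that point at each other are never reclaimed, just as an object pinned by a live event subscription stays reachable and is never collected in .NET.

```cpp
#include <memory>

struct Node {
    std::shared_ptr<Node> other;  // strong reference to a peer
    static int alive;             // crude liveness counter for this demo
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

// Returns how many Nodes are still alive after both handles have gone
// out of scope. With a cycle, the counts never reach zero, so the
// answer is 2 -- an "automatic" memory manager that still leaked.
int cycle_demo() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->other = b;
        b->other = a;   // the cycle: each keeps the other's count at >= 1
    }                   // both shared_ptrs destroyed here...
    return Node::alive; // ...yet neither Node was ever freed
}
```

The fix in C++ is to make one leg of the cycle a `std::weak_ptr` - the moral equivalent of remembering to unsubscribe from the event.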
I've been saying for a while that C# is basically C++ minus the template metaprogramming, but with a few much more complex things thrown in (think yield return, IL Emit, reflection, stuff like that). C# is not for people who consider C++ too complex - it's for people who'd like just a bit more mess that is less debuggable.