August 7, 2004
Have you ever noticed that new .NET developers have a tendency to use inheritance for.. well, everything? On some level, this is understandable, since inheritance is used throughout the framework; everything in .NET inherits from a root object. There's one big difference, though: we're writing crappy business logic code, not a language. What is appropriate for a language developer may not be appropriate for simple business code that needs to be maintainable and easy to understand above all else.
Inheritance is a specialized tool, and should only be used for situations that truly warrant a parent-child relationship, and all the "hidden" behavior that entails. I'm sure I have lost some of the OO purists at this point, so lest you think I'm a lunatic who has completely abandoned OO principles, I'd like to point out that I am in good company: Dan Appleman also feels this way. Here's a little excerpt from his excellent (and still relevant) Moving to VB.NET: Strategies, Concepts and Code:
I've been a C++ programmer for longer than I've programmed in Visual Basic-- and I still program actively in both languages. I've been a firm advocate of object-oriented programming since I first understood the concept of a class back in 1977; and I've programmed in frameworks like ATL that use inheritance extensively and successfully.
But in terms of using inheritance in one of my own applications or components, in all of those years, I can think of maybe a half a dozen times, at most, where inheritance was the right choice.
So, yes, .NET uses inheritance-- it's built into the architecture. And yes, the code generated by the various designers will use inheritance to give you the framework on which you'll build your own code.
However, if you really understand inheritance, you may find yourself living the rest of your career without ever creating a single inheritable class or component.
He goes on to say exactly what I would: inheritance is only one of many ways to achieve code reuse. Having a simple object, without all the complex (and mostly hidden) rules of inheritance, is plenty good enough to achieve the most important goal: code reuse.
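To make the reuse-without-inheritance point concrete, here's a minimal sketch in Python (the class names are purely illustrative, not from any framework): the business object reuses the logger by holding one, rather than by inheriting from it, so none of the logger's internals leak into its public surface.

```python
class AuditLog:
    """A small reusable service object."""
    def __init__(self):
        self.entries = []

    def record(self, message):
        self.entries.append(message)


class InvoiceService:
    """Reuses AuditLog through composition -- it has a log,
    it is not a kind of log. Callers see only submit()."""
    def __init__(self):
        self.audit = AuditLog()

    def submit(self, invoice_id):
        self.audit.record(f"submitted {invoice_id}")
        return invoice_id


svc = InvoiceService()
svc.submit(42)
print(svc.audit.entries)  # → ['submitted 42']
```

The same reuse you'd get by inheriting from `AuditLog`, but with no hidden base-class behavior to reason about, and the option to swap the logger out later without touching the class hierarchy.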
As for the argument of, "if the .NET framework is built on inheritance, so we should be too!", my question to you is this: how many of you are writing programming languages? How many of you guys are writing operating system kernels? The reality is, those are extreme and rare circumstances, and shouldn't be used as a model for anything other than writing languages or operating systems.
To me, the added baggage of inheritance-- like all added complexity-- is always guilty until proven innocent. Don't inherit unless you have a very compelling and specific set of reasons to inherit.
Posted by Jeff Atwood
The OO purists did lose you. Because to an “OO purist,” object-oriented analysis and design is about modeling reality, not about some academic exercise. In the real world we have a phylum-order-genus relationship in everything, not just in the academic dog, cat, and animal scene.
The most fundamental discipline that it gives us is the art of abstraction. Looking at something in a domain and factoring it, concretely defining it, and allowing it to be made more specific without losing its core characteristics removes the error-prone, non-reusable tedium of working with the things themselves and instead lets us work with types of things.
But we would be remiss, Jeff, if to OOAD we ascribed only the virtue of reusability—for that is not just what it’s about. A much more immediate benefit can be found in generalization. Look no further than to the copy of (GoF) Design Patterns that must surely occupy a space on your desk—and in your heart.
Look at the Composite pattern. Based on inheritance and polymorphism, the pattern lets you perform operations on a hierarchy of differing node implementations using a single, simple, factored interface. I don’t know how many times I’ve found this pattern to be invaluable. It saves time during implementation, time during maintenance and time during addition of functionality because your system doesn’t need to know about each node. (I presume at this point you’ve inferred the meaning and the burden of “knowledge” as it exists in a type-safe environment such as .NET)
The reason inheritance and polymorphism work in the composite pattern and all of the other patterns in GoF, is because of the simple truth in my first paragraph: abstraction is crucial. If we can see the similarities in things then we can operate on just those similarities—without knowledge or even the desire of knowledge of what makes the thing unique.
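The Composite pattern the commenter is describing can be sketched briefly in Python (names like `Node`, `Label`, and `Panel` are illustrative assumptions, not from any real GUI toolkit): leaves and containers share one abstract interface, so client code operates on the whole tree without knowing which kind of node it holds.

```python
from abc import ABC, abstractmethod


class Node(ABC):
    """The single, factored interface shared by leaves and composites."""
    @abstractmethod
    def render(self, indent=0): ...


class Label(Node):
    """A leaf node: no children."""
    def __init__(self, text):
        self.text = text

    def render(self, indent=0):
        return " " * indent + self.text


class Panel(Node):
    """A composite node: holds other Nodes and treats them uniformly."""
    def __init__(self, *children):
        self.children = list(children)

    def render(self, indent=0):
        lines = [" " * indent + "[panel]"]
        lines += [child.render(indent + 2) for child in self.children]
        return "\n".join(lines)


tree = Panel(Label("title"), Panel(Label("a"), Label("b")))
print(tree.render())
```

The caller renders the whole hierarchy through one call; adding a new node type means writing one subclass, with no changes to any code that consumes the tree.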
When I have this amazing ability and lens with which to view my domain, what would I gain by doing things any other way? Job security, perhaps, when I have to re-factor my entire solution because I have to add something new to those simple, broadly-defined, objects I was using.
Inheritance, polymorphism and encapsulation are more than OO concepts; they are the three pillars--the Holy Trinity upon which my church was built. We've all been exposed to the wide road, that path filled with the temptation and damnation of RAD, VB6, spaghetti code and procedural noise. But in the height of the corruption by the demonic forces of Redmond we were given pardon by the trinity taken flesh… the zen and the vision… light it up, Jeff… just one little hit off of the Booch-pipe and you'll join my church.
(All of this is IMHO, of course)
"Look at the Composite pattern. Based on inheritance and polymorphism, the pattern lets you perform operations on a hierarchy of differing node implementations using a single, simple, factored interface"
See my "Loose Types Sink Ships" entry for an example of how easy this would be to implement in Python. No 10-dollar words like 'polymorphism' are involved, either..
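For contrast, here is roughly what that commenter means: in a duck-typed language the same tree works with no base class, no declared interface, and no inheritance at all -- any object with a `render()` method participates. (The class names are made up for illustration.)

```python
class Text:
    """A leaf: participates simply by having a render() method."""
    def __init__(self, value):
        self.value = value

    def render(self):
        return self.value


class Box:
    """A container: no shared base class with Text, just the same
    method name. Duck typing does the work inheritance did above."""
    def __init__(self, *children):
        self.children = children

    def render(self):
        return "[" + " ".join(child.render() for child in self.children) + "]"


print(Box(Text("a"), Box(Text("b"))).render())  # → [a [b]]
```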
You should definitely read the book "multi-paradigm design for C++" -- it makes some great points about when to use different approaches to best solve the problems they're suited to. Not the easiest book to read though.
When I built Animator.js, I got some flak for suggesting that inheritance is not a Good Thing. Keen to avoid a holy war, I restated my position as 'inheritance is often useful, but more often overused.' Over the last few months I've been trying to figure out exactly when it should be used, and have concluded - at least for the kind of systems GUI developers build - never.