June 22, 2005
Some users commenting on the poor pre-game user interface in EA's Battlefield 2:
Poster #1: They need to stop hiring angry little men and romantically spurned women to design user interfaces.
Poster #2: But doesn't that describe most programmers?
Poster #3: No, that describes all programmers.
It's funny because it's true. Not the romantically spurned part, mind you, but the accusation that most programmers are bad at designing user interfaces. That's partly because UI is hard:
GUI builders make GUI programming look easy. Nearly anybody can whip up a decent-looking GUI in no time at all using a GUI builder. Done.
It is much harder to whip up a quick and dirty EJB system, giving the impression that server-side coding is harder to do. A bad programmer will continue to struggle with EJB, but a good programmer will find ways to automate nearly every aspect of EJB. That's the secret of server-side programming: it is very well-defined and repetitive. Thus, it can be automated.
Take your favorite Model-Driven-Architecture (MDA) tool. They work best when generating server-side code, things like EJBs, database access code, and web services. They might be able to generate a rudimentary GUI, but a really GREAT GUI cannot be automated.
But programmers are partly to blame, too. Most programmers begin by thinking about the code instead of the user interface:
John almost hit on the most important point in all of this. No one else did. When you're working on end-user software, and it doesn't matter if you're working on a web app, adding a feature to an existing application, or working on a plug-in for some other application, you need to design the UI first.
This is hard for a couple of reasons. The first is that most programmers, particularly those who've been trained through University-level computer science courses, learned how to program by first writing code that was intended to be run via the command line (Terminal window for you Mac OS X users). As a consequence, we learned how to implement efficient algorithms for common computer science problems, but we never learned how to design a good UI.
The second problem is that the tools we use to create UI are often good tools for more simple usability issues, but tend to fall well short when it comes to designing UI for a more complex set of user scenarios. Forms designers are great when you're working within the problem domain that forms are intended to solve, but once you step outside those problem domains, the work gets much harder. Use a more flexible tool, like Xcode's nib tool and the Mac OS X HIView object, and you're going to have to write considerably more code just to manage the UI objects.
This is also known as UI First Development, but I can't find many other references.
Posted by Jeff Atwood
I must disagree with the idea that the most important part of a program should be done first. The failure of conventional program design comes not from designing the UI last, but from not designing it at all, or treating it as an unimportant afterthought. In the case of the UI, it is precisely *because* it is so important that it should be done last, *provided* that it is given the lion's share of the development time and effort.
The internals *must* be set and working first, as they comprise the primitives of the 'language' with which the UI is built. Also, by cleanly separating the UI from the actual program, neither the back end nor the front end is overly constrained by the other; this allows greater freedom in the UI design, whereas tangling the logic of front and back ends invariably weakens both.
The internals *must* be set and working first
I still think that's backwards. It's not black and white, but clearly the internals don't matter if the user can't figure out the UI!
Been doing this professionally for 8+ years now or so, and I've heard most of the arguments on either end. But I'm telling you now: the applications whose UI was designed (not completely written, but designed) first have been orders of magnitude more successful than applications that began with data or business object design.
No ifs, ands, or buts.
An app that begins with the data or business object design will always force a UI to structure itself around the way a developer thinks. And in case no one has figured it out yet, developers tend to be a much different breed of animal than the rest of the world... I hate to point at Linux, but there you have it.
At the same time, the UI absolutely cannot be designed nor developed in a vacuum; there needs to be constant communication with whoever is going to be creating the architecture beneath it--or preferably it needs to be designed by someone who can actually code as well. This way you don't end up with a UI that a set of developers have to jump through hoops to make work, and you don't end up with a whole slew of sub-screens that make sense from a relational standpoint but are absolutely useless to Joe Normal.
So I will reiterate, and bear the potential flames...NEVER design an application with a heavy user component bottom up. Preferably from both directions, with guidance coming from the UI team. But never from the data up.
To counter Marty's counter of Jeff's original statement, I think you have to design the UI first. Or maybe it's really this way: don't design the UI last. I too have been on projects where the obsession with how the UI should look got in the way of how the app should work. That's bad. But just as bad is the "slap together a bunch of white boxes and gray buttons at the end of the project" school of UI design that is more prevalent in my experience. VB and other RAD tools have made it easy for us to create usable UIs--usable for developers, people who are typically more tolerant of their own creations. End users are much less tolerant and/or willing to live with inadequate UIs. They typically just want to get their job done, to solve a problem (straight out of Cooper's books).
My theory is that UI-last development was really a covert way for developers to not present the project to the users. Let's face it: most users feel that the UI is the application. When it's done, so is the application, right? So if we build the UI, prototype or otherwise, we're almost done. We all know the fallacy in that. So somebody really smart said, "let's do the application development bottom up, doing the UI last," and that eliminated a tension point in schedules and user expectations... A wacky theory, but there's probably something there.
The bottom line is that in my experience there is far too little thought given to the UI from both the developer's and the business's point of view. Like the quarterback or the goalie, the UI gets all the blame for any failures and all the credit for any successes. More realistic schedules and time should be given to really develop the solution for the user... but that's not going to happen anytime soon. Business applications suffer the most from this.
Rant over. Nice post, Jeff, as usual.
The argument for designing the UI first is that (as Mike points out) as far as the user is concerned, it's the program. If you have that part right, and then design and build a backend to serve it, you're satisfying the user's needs. If you go the other way 'round, aspects of the backend are very likely to determine some features of the UI - the backend will intrude upon the user's illusion, and the user won't be getting quite what she expected. Meeting the users' expectations is why we build software, and UI-first is a better way to get there. That doesn't mean give short shrift to the backend, but that the backend's raison d'être is to serve the UI. Without the UI, you don't know what features the backend needs to provide.
What you definitely want to avoid here is "UI last" development, e.g., Linux. I know everyone always trots out Linux in discussions of bad UI, but it really is the canonical example of UI-last development. And you see where that's gotten us.
The interesting thing about Battlefield 2 is that the in-game interface is actually quite good. It's a distinct improvement over previous games in the series (e.g., the "Command Rose"). But the pre-game front end menus and server browser are every bit as bad as the old versions!
Very nice topic and nice comments.
I think we as programmers have a very bad habit of thinking of ourselves as gods creating wonders in our virtual worlds. A lot of programmers enjoy having to explain to end users how their software works, and feel more intelligent when they tell the users how stupid they are because they don't get it. :) I can't help thinking of the iPod, how simple and easy to use it is, and how popular it is. Nobody reading this blog would design an iPod the way Apple did. :)
Anyway, I was wondering, does anybody know a way to become a better (and hopefully a skilled) UI designer? I am a developer and I have been programming... umm, like forever. :) But I feel that I could become a good UI designer. I know I can read more about it and take some classes, but a job as a UI designer is something that I have only seen posted a couple of times.
I beg your pardon?
When you say "Linux" has a bad UI, which "Linux" do you mean? Or rather, which GUI do you mean:
- Window Maker
- or was it one of many others?
Here's a terminology explanation:
Linux is a kernel
Fedora, Debian, Gentoo, Suse and SlackWare are package distributions (that all use the Linux Kernel)
Gnome, KDE and others are GUIs (that come as part of distributions)
So if you're going to make the claim that "Linux" has a bad UI, it's probably that the one you saw was either bad or not to your liking.
You don't like a GUI? Great, download another one and install it, and then choose it the next time you log in. In fact, you can have as many as you like on the same install, and switching can be done as easily as logging out and back in. I monitor/admin a huge production WAN at work, which is more efficient under Window Maker, and at home I use Gnome for everything else, from coding to leisure.
Linux distributions don't tie you to a GUI, unlike the Windows approach (In fact, a lot of the Microsoft design mentality reminds me of Soviet Communism, which is scary).
I'm of the opinion that GUI should be developed last, but that it should get extreme focus. So you write your data abstraction, work abstraction, and actual work code. Go ahead and stick a command line interface to test all this crap you just wrote.
And then, if you've done a good job with that stuff, you can slap *any* UI you want on it. So now you can focus on the UI.
On a team, that means you can have a dedicated GUI guy, or group of guys, or whatever, carefully designing a good GUI to spec. Then when you're both done, you hook up theirs to yours and ship.
(well, test first)
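The backend-first approach described above can be sketched in code. This is a minimal illustration (all class and function names are hypothetical, not from any commenter's actual project): the "actual work" layer knows nothing about any UI, a throwaway command-line harness exercises it, and a real GUI could later be hooked to the same object.

```python
class TaskStore:
    """The 'actual work' layer: no UI assumptions baked in."""

    def __init__(self):
        self._tasks = []

    def add(self, title):
        self._tasks.append({"title": title, "done": False})

    def complete(self, index):
        self._tasks[index]["done"] = True

    def pending(self):
        return [t["title"] for t in self._tasks if not t["done"]]


def run_cli(store, commands):
    """Throwaway command-line front end, used only to test the core."""
    for line in commands:
        verb, _, arg = line.partition(" ")
        if verb == "add":
            store.add(arg)
        elif verb == "done":
            store.complete(int(arg))
    return store.pending()


if __name__ == "__main__":
    store = TaskStore()
    # The CLI is disposable; a GUI team could bind buttons to the
    # same TaskStore methods without touching this code.
    print(run_cli(store, ["add buy milk", "add ship build", "done 0"]))
```

Because `TaskStore` never imports or references any UI toolkit, the "slap any UI on it" step reduces to wiring widgets to its three methods.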
Jaxx Random Websurfer: you really didn't get it! (No offense!)
The UI is so important (to the user, maybe not to you) -- actually it is not only "so important," it's all of what you "ship" (as far as the user is concerned) -- so it's more than very important, so much so that it has a legitimate right to even impose some choices on the way you build the internals. It's *that* important.
Now you *can* consider it not to be that important.
But real world users won't agree. That's all. Make your choice.
And Linux is THE extreme example. Linux is a kernel. Agreed (actually a wonderful one, and technically very superior to Windows -- sorry Jeff, but I have seen the Win32 API design). So it was designed with no consideration for UIs. And it shows. No matter how hard you try to put a nice UI on top of it, you will very probably fail. Just because UI considerations were not part of it from day 1. It is that critical.
It IS very, very important. Really!
jaxx, how about this to clarify
- Gnome - UI Sucks
- KDE - UI Sucks
- Enlightenment - UI Sucks
- Window Maker - UI Sucks
- xfce - UI Sucks
- ion - UI Sucks
- or was it one of many others? - UI Sucks Infinity
"Linux distributions don't tie you to a GUI, unlike the Windows approach (In fact, a lot of the Microsoft design mentality reminds me of Soviet Communism, which is scary)."
An absurd comment; however, ironically, the Windows UI doesn't suck. Go figure.
How about this for contention:
99% of GUI developers on any given Linux distro are terrible UI designers. The 1% I saved just to be nice.
When you take specs, you interview the user and generate use-cases. Those use-cases then get distilled into whatever the developer needs in order to store the correct data, etc. However, the use-cases (or user scenarios, or whatever phrase you like) are basically the UI in English. I just don't see how one can design a system that meets the specifications of the use-cases (the WORK! the user does) without designing the UI first.
Once you've figured out how the user will be able to interact with the system (hopefully through a series of small usability tests), THEN you can make the system do the work the UI permits the user to do. What WORK are you performing if you're designing the back-end first? Basically, you don't know because there's no way for the user to do it.
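One way to read the argument above in code: write the use-case down as a UI-facing contract first, before any backend exists, then implement the backend to satisfy it. A minimal sketch, with entirely hypothetical names (the use-case here is "the user archives an order and sees it leave the open list"):

```python
from abc import ABC, abstractmethod


class OrderScreen(ABC):
    """What the use-case says the user can do -- written before any backend."""

    @abstractmethod
    def open_orders(self):
        """Return the ids of orders the user still sees as open."""

    @abstractmethod
    def archive(self, order_id):
        """The user archives an order; it leaves the open list."""


class InMemoryOrderScreen(OrderScreen):
    """Backend written afterwards, shaped to satisfy the UI contract."""

    def __init__(self, orders):
        self._orders = dict(orders)   # order_id -> description
        self._archived = set()

    def open_orders(self):
        return sorted(o for o in self._orders if o not in self._archived)

    def archive(self, order_id):
        self._archived.add(order_id)
```

The point of the sketch is the order of authorship: `OrderScreen` encodes the WORK the user does, and whether the eventual storage is in-memory, relational, or anything else is a detail hidden behind it.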
The UI is where the rubber meets the road and I can't count the number of times a concept in a design document didn't hold water once the UI was put on top of the back-end function, simply because it made the UI awkward, difficult, or impossible to use.
Robert is right on this one.
On slapping together two pieces of independent software (GUI and "backend"): well, let us just recollect about half of what Jeff has been posting about here (not counting the .NET propaganda ;) ). You should never, ever build stuff in two (or more) parts and then try to fit the pieces together at the end; it only produces more faults, bugs, and swearing.
You have to build everything at once, but do it in increments. Add features as you go. For that you need a (somewhat) clear goal, some use-cases, and constant feedback (don't forget to update the goals and feature lists) to be able to hand over a nicely done UI with a backend that supports it. The backend is only as good as the user's understanding of the UI.
(And IMHO the Linux discussion here is way off (about Linux being a kernel and not a UI); mostly no one refers to the different distros by their window managers' names, and the discussion wasn't about the kernel -- it was about the common standard of the UIs produced for the Linux/GNU platform.)
Geez, I do manage to write a lot sometimes... :) Oh, and please don't flame on just because I got a bit derogatory about Linux; it's great software, but since I'm kind of lazy I'll stick to Windows for a while.
What exactly does it mean to build the UI first? Can you give an example of the process? When I hear it I think of sitting down with the IDE and creating the form and buttons. I think you mean something more complex than that. I know this is an older post, but thanks.
"When you're working on end-user software, and it doesn't matter if you're working on a web app, adding a feature to an existing application, or working on a plug-in for some other application, you need to design the UI first."
I definitely disagree with this statement. You need to be keeping the UI in mind when designing your back end and middle tiers, but you definitely don't want to design your UI first. You can have a great UI, but if the architecture it's a face for is crap, then what difference does it make?
I used to have a development manager who was a graphics designer, and he always wanted to design the GUI first in any application we developed. This always led to problems as he would be dreaming up the way the application would look, while the actual guys doing the developing would be determining how it would WORK. A great deal of the time, there would be a disconnect between the two.
I do think GUI development is important, but it should be the finishing stage in your application design, not the first.
Mike, your theory is dead on. My VP swings by my cube with a labware prototype I put together in 2 weeks and was seriously looking for someone to install it for a customer. I almost had to tackle him in the hallway. Now if I'd just written a console app as a proof of concept, nobody would want to buy that ;)