October 26, 2006
Although I've been dismissive of build servers in the past, I've increasingly come to believe that the build server is critical-- it's the heart monitor of your software project. It can tell you when your project is healthy, and it can give you advance warning when your project is about to flatline.
You should start out with a simple pulse-- whether or not your project builds, and how often you're building it. The build server can be so much more, though. The Zutubi article The Path to Build Enlightenment provides a great overview of what a build server can do for your project:
- Machine independence
Let's get past "It runs on my machine" first. The build server retrieves everything from source control, and builds on a machine untainted by developer dependencies. It forces an integration point for all the developers working on the project, in a neutral, indifferent way. You can hate your co-workers, but it's irrational to hate the build server.
- Scripted Builds
Your build process is now clearly defined by a script and under source control. You might say it's almost... self-documenting. Isn't that the way it should be?
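A scripted build can start very small. Here's a minimal sketch in portable shell -- the directory names and the echoed compile step are hypothetical placeholders; a real script would invoke your actual compiler where the comment indicates:

```shell
#!/bin/sh
# Minimal scripted build (sketch). SRC_DIR and OUT_DIR are placeholder
# paths; the compile step below is where a real compiler call would go.
set -e                       # stop at the first failing step

SRC_DIR="${1:-./src}"        # where the source checkout lives
OUT_DIR="${2:-./out}"        # where build output goes

mkdir -p "$OUT_DIR"
echo "building from $SRC_DIR into $OUT_DIR"
# A real project would invoke its compiler here, e.g.:
#   msbuild MySolution.sln /p:Configuration=Release
echo "build ok" > "$OUT_DIR/build.log"
```

Because the script lives in source control alongside the code, anyone (or any machine) can reproduce the build the same way.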
- Scripted tests
Sure, maybe all the code compiles. But does the software actually work? The build server is a logical place to integrate some basic tests to see if your product is doing what it's supposed to do. Mere compilation is not enough. The more tests you accrete into the build over time, the better the feedback is from the build, and the more valuable it will be to your project. It's a positive reinforcement cycle.
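The key mechanic is that a failing test fails the whole build. A sketch of that wiring, where `run_tests` is a stand-in for a real runner such as `nunit-console` or `make test`:

```shell
#!/bin/sh
# Sketch: make the build go red when any test fails. "run_tests" is a
# hypothetical stand-in for your real test runner.
set -e

run_tests() {
  # Pretend two trivial checks are the whole suite:
  [ "$((1 + 1))" -eq 2 ] || return 1
  echo hello | grep -q hello || return 1
}

if run_tests; then
  echo "tests passed -- build is green"
else
  echo "tests FAILED -- failing the build" >&2
  exit 1
fi
```

The non-zero exit code is what the build server sees, so "tests failed" and "build broken" become the same signal.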
- Daily and Weekly builds
Once you have the build server set up, you'll establish a rhythm for your project, where you're building regularly. When something breaks, you'll know, and quickly. A solid heartbeat from the build server leads to a confident development team.
- Continuous Integration
This is the holy grail of build server integration-- doing a complete build every time something is checked into source control. Once you've gotten your feet wet with weekly and daily builds, it's the next logical step. It also forces you to keep your test and build times reasonable so things can proceed quickly.
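The trigger logic behind continuous integration is simple: compare the last revision you built against the current head, and build only when they differ. A sketch, where `get_head_rev` and `do_build` are hypothetical stand-ins for your source control query and build script:

```shell
#!/bin/sh
# Sketch of a per-commit build trigger. get_head_rev and do_build are
# placeholders (e.g. "svnlook youngest /repo" and "sh ./build.sh").
set -e

STATE_FILE="./last-built.rev"

get_head_rev() { echo "r42"; }           # would ask source control
do_build()     { echo "building $1"; }   # would run the build script

head=$(get_head_rev)
last=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

if [ "$head" != "$last" ]; then
  do_build "$head"
  echo "$head" > "$STATE_FILE"           # remember what we built
else
  echo "nothing new to build"
fi
```

Run that from a tight polling loop (or a repository hook) and you have the skeleton of continuous integration.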
- Automated releases
The build server automates all the drudge work associated with releasing software. It...
- labels the source code with the build number
- creates a uniquely named drop folder for that particular build
- tags the binaries with the build number in the file metadata
- creates installation packages and installers
- publishes the installs to websites, FTP sites, or file paths
A well-designed, fully-automated build process makes it trivially easy for anyone to get a particular release, or to go back in time to a previous release. And it's less work for you when the build machine does it.
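Those release steps can be sketched in shell. Everything here is illustrative: the build number, the drop-folder layout, and the copy that stands in for a real installer-builder step:

```shell
#!/bin/sh
# Sketch of automated release drudge work: label, drop folder, package.
# BUILD_NUM and all paths are hypothetical.
set -e

BUILD_NUM="${BUILD_NUM:-107}"
DROP_DIR="./drops/build-$BUILD_NUM"      # uniquely named drop folder

mkdir -p "$DROP_DIR"
echo "labeling source with build $BUILD_NUM"   # e.g. a tag/label in SCM
echo "placeholder binary" > "$DROP_DIR/app.bin"
# Stand-in for building a real installation package:
cp "$DROP_DIR/app.bin" "$DROP_DIR/app-$BUILD_NUM.install"
echo "publishing $DROP_DIR"              # e.g. copy to website/FTP/share
```

Because the drop folder is named by build number, "go back in time to a previous release" is just a directory listing.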
- Building in multiple environments
For advanced projects only. If you have to test your code against 10 different languages, or different variants of an operating system, consider integrating those tests into the build process. It's painful, but so is that much ad-hoc testing.
- Static and Dynamic Analysis
There's an entire universe of analysis tools that you can run on your code during the build to produce a wall of metrics: FxCop, NDepend, LibCheck, and so forth. There are lots of metrics, and only you and your team can decide which ones matter to you. But some of these metrics are really clutch. At the very least, you'll want to know how much code churn you have for each build.
If you don't have a build server on your project, what are you waiting for?
Posted by Jeff Atwood
I can recommend FinalBuilder to automate your builds -- much easier than hand-writing script files. Our build machine instant-messages and emails us progress on the build. We kick off a build via a small ASP.NET website which collects info on what version to build and build comments to burn into the version info, then creates a request text file which a Windows service spots to start FinalBuilder running. It interfaces with TeamCoherence source control to get the latest source and label the build.
We are a small shop, though, and don't do CruiseControl or automated testing. The above has saved me many hours of time and makes it trivial to verify that the latest changes didn't break the build.
"dynamic languages (=ruby) there's mostly no compiling"
For interpreted languages, substitute "build" with "parse".
If the language runtime can run/parse your code, then it's the same as building for all intents and purposes.
We use virtual machines for the build and test boxes, and run them on the testers' box overnight (that machine runs lots more virtual machines for GUI-based testing during the day).
I used to have a co-worker who would break my code all the time (too lazy to realize the setter was doing stuff -- no problem, just make the variable public and set it directly). Damn, I wish I had known about tests and continuous integration back then!
The more you hate your coworker, the more you'll love a good build server.
I learned this lesson a long time ago, at a company where everyone was checking stuff into the source repository willy-nilly, breaking each other's stuff, keeping code checked out for weeks, not doing a get-latest to verify that they were building against the most current code, and so on.
To make matters worse, they had one developer who was attempting to make the software build, and he was laboring under the impression that it was his job to fix code that someone else had checked in and broken the build with. It was a nightmare. He didn't understand their code, and often couldn't access their machines to check in the still-checked-out files that made the code work for them.
They couldn't get a weekly or monthly build out. You could forget a daily build.
I had to twist the arm of the project manager to get them to let me take that job over. First thing we did was create a build server. It wasn't anywhere near as advanced as the stuff you're describing here, but it was leaps and bounds ahead of what we had. First thing we did was establish a rule that said that you had to check in working code. If you checked it in and you broke the build, YOU had to fix it. And we built DAILY. And we always built on the build machine.
The build machine didn't have any special junk on it.
Within a week we were getting daily builds out, labeled, and pinned, backed up. Within a month, it was fully scripted, so that it was creating the installer, deploying to the application server, and emailing the release manager when things went wrong on the nightly build. Amazing.
These things are literally life savers on a project.
I'd love to convince my boss here that we need one. Problem is, I'm a one-man software development team. And hardware resources are scarce. I had to fight tooth-and-nail to get a test box. Digging up the money for a build machine is a whole new game.
Of course, that doesn't mean I'm not going to fight for it. :)
I'm convinced! :) Does anybody have any tips or recommendations for what software (and hardware) you need for this (with .NET)?
Mike Hofer writes...
"Digging up the money for a build machine is a whole new game."
The build machine can be old, slow, and obsolete, though... it only has to be fast enough to build overnight. Use the old 400 MHz box you retired last year!
"Does anybody have any tips or recommendations for what software (and hardware) you need..."
You can make a pretty good start with Windows scheduled tasks, BAT files, and a command line emailer (e.g. blat).
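That bare-bones approach can be sketched in a few lines. This is portable shell rather than the BAT file the comment suggests, and the build command, log path, and the commented-out blat mail line are all placeholders:

```shell
#!/bin/sh
# Sketch of a scheduled nightly build wrapper that mails on failure.
# BUILD_CMD is a placeholder; substitute your real build script.
BUILD_CMD="${BUILD_CMD:-true}"
LOG="./nightly.log"

if $BUILD_CMD > "$LOG" 2>&1; then
  echo "nightly build ok" >> "$LOG"
else
  echo "nightly build FAILED" >> "$LOG"
  # On Windows, a command-line mailer would fire here, e.g.:
  #   blat nightly.log -to team@example.com -subject "build broke"
fi
```

Schedule that to run overnight and you have the "simple pulse" the article starts with, before you graduate to a full build-server product.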
A build machine for .NET should be relatively simple.
Of course, it's going to depend on what you're using for your project. If you're using source code control (and I hope you are), you're going to need to make sure that your build machine has the client on it--or a library that makes it easy to get to it.
You'll need the compiler for .NET on it. That's pretty easily done by just installing the .NET SDK on it. Make sure you get the compiler for your language of choice. The base SDK comes with compilers for C#, VB.NET, J#, and managed C++.
If you're creating installers, make sure you can invoke it from the command line, and get it on the build machine.
If you're running automated tests with NUnit, make sure that's on your build machine as well. As part of your daily builds, check out the entire solution, including the tests. Label them in source code control when you make the build. If the build is successful, and the tests pass, then you can pin it in the source code repository. Don't pin it if it's not a successful build. (This is all based on experience with VSS, of course--we work with the tools we're given.)
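The get-latest / compile / test / label-only-if-green flow described above can be sketched as a pipeline, with each step a hypothetical stand-in for the real SCM client, compiler, and NUnit runner:

```shell
#!/bin/sh
# Sketch of the "only label green builds" policy. Each function is a
# placeholder for the real tool (SCM get-latest, compiler, nunit-console).
set -e

get_latest() { echo "got latest sources"; }
compile()    { echo "compiled solution"; }
run_suite()  { echo "all tests passed"; }    # would run nunit-console

get_latest
compile
if run_suite; then
  echo "green: label and pin this build in source control"
else
  echo "red: do not label this build" >&2
  exit 1
fi
```

The point of the ordering is that the label in source control only ever points at a revision that both compiled and passed its tests.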
I started with scripting the build process from a batch file. That's enough to get you started. Eventually, I wrote a Visual Basic program to drive the whole thing. I managed to get it down to a single button-click that drove the whole process. That was about 8 years ago.
These days, however, there are plenty of off-the-shelf products that will do it for you that you won't have to maintain yourself. They'll integrate your source code control tasks, building the software, notifying you of failed builds, and starting the install builder for you. Fairly sophisticated stuff. Check it all out. Google's a wonderful thing. :)
Key thing about a build environment: Do not put anything on it that isn't absolutely necessary to building and testing your software. If your machine comes to you with extra junk, remove it.
Anything on a build machine that isn't critical to the build introduces variables into the build process that can't be duplicated *everywhere* else. You have to be able to build the software predictably. That's the whole point of the build machine. A build environment should be a constant, not a variable.
John's right about the machine itself, too. It doesn't have to be fast, or even powerful. It just has to have enough computing power to be able to build your software overnight.
Heck, I just convinced my boss to recycle one of our old clunky laptops and repurpose it as my new build machine.
Imagine that. :)
Just a thought about the section on Scripted Tests. Is the build server really the best place to do the tests? I think that a test scenario of delivery to another clean machine that has nothing to do with compilation or building is more like "the real world" and a better smoke test. Of course, maybe in your opinion, this is getting too far outside of "building" and more in "testing" which is of course, a huge topic all its own. But I have had good success (and early detection of failures!) with a quick and dirty "smoke test" with several projects.
Is there any reason to *not* use CVS? It's worlds ahead in my limited experience (i.e., VSS, CVS).
There isn't really any compelling reason not to use CVS. I use VSS because it's what my company uses. Still. Despite the existence of far better tools.
Also, the advantage of executing the scripted tests (such as those done with NUnit) on the build machine is that you can execute them *every time* the build is executed. These tests are different tests from those executed by your QA folks, mind you. In my eyes, the NUnit tests are the smoke tests; they just verify that the build works, and whether or not I should go ahead and label it as a working build. Then I package it and deploy it to my build staging area as an installer.
Next it gets deployed to our test environment, where the QA folks hammer the heck out of it and determine whether or not it satisfies the requirements, and whether or not other unanticipated defects were introduced. It's a totally different kind of testing. In an ideal scenario, that kind of testing is done by a fleet of QA testers who know how to break the software and look for edge cases that the software developers didn't account for in the automated tests. And their test environment looks as close as possible to the production environment. It doesn't look like the build environment.
Is there any reason to *not* use CVS?
Any modern source control system will do. I do not consider CVS a modern source control system. You can do worse (VSS), but there are definitely better choices.
If you're using CVS and happy with it, I would still strongly consider migrating to Subversion, aka CVS 2.0, if at all possible.
Good article. We use a combination of CruiseControl.Net, Subversion, and NAnt scripts for our deployment process and it works great.
CCNet handles monitoring our Subversion repository, when new code is checked in it triggers a NAnt script which does a fresh export onto the build machine, compiles, and then deploys to the staging web server.
Well, with dynamic languages (=ruby) there's mostly no compiling, so the only way to have this integrity check are unit tests. Which could be pretty easily run on each commit (per commit hook), I guess.
Any experiences with "heart monitors" in the dynamic-language world?
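One common pattern for interpreted languages is exactly what the comment suggests: a repository commit hook that parses the code and runs the unit tests, since there's no compile step to act as a first filter. A sketch, where the path and the parse check are illustrative (a real Ruby project would use something like `ruby -c` followed by `rake test`):

```shell
#!/bin/sh
# Sketch of a commit-hook check for an interpreted-language project:
# no compiler, so the hook parses the code and then runs the tests.
set -e

CHECK_PATH="${1:-.}"    # hypothetical path to the checked-in code

parse_check() {
  # Stand-in for a real syntax check such as "ruby -c file.rb";
  # emulated here by verifying the path exists so the sketch runs anywhere.
  [ -e "$1" ]
}

if parse_check "$CHECK_PATH"; then
  echo "parse ok -- unit tests (e.g. rake test) would run next"
else
  echo "parse failed for $CHECK_PATH" >&2
  exit 1
fi
```

For dynamic languages the test suite *is* the heart monitor, which is all the more reason to run it on every commit.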
We actually have two streams running: one for development, and the other a "released" build. The released build is the one already installed at various customer sites. Checkins to this stream are highly controlled, which helps us send service packs to customers (they don't have to install the whole build again and re-qualify it).
The dev stream is the regular build, to which we check in on a daily basis.
For both streams, whoever checks in code that either breaks a smoke test or has compile failures is supposed to fix it ASAP. In case of failure, an auto-email is sent to everyone naming the culprit. This, and the possibility of being held up on a Friday evening, keeps everyone on their toes.
You don't mention running automated functional tests once the build is complete. At the least you can check that it installs OK. Ideally you can then check some basic functionality too. Get your test team involved with this.
This can be scripted using VBscript etc, but is easier using a tool from one of the test tool vendors.
For Java we're using Cruise Control and it's great!
If you really want to get fancy, use a new VM instance to build on every night -- sort of a way to ensure everything is absolutely clean every time.
Anyone, such as our host, who feels that a build server is not necessary is not working on a sufficiently complex project. Although I feel even a single-application project benefits from the use of a build server, once you gain complexity (e.g., 4 interdependent COM DLLs, 12 support applications, 4 type libraries... you get the point), a build server becomes an absolute necessity. It is the only way that you can be sure of two things.
A. You have not broken any application or DLL with a change you made, since you may not build every binary affected by a change on your own machine.
B. You have built exactly what you want. Often a release or test build will have different options and compiler directives than one built on a developer machine. A build server ensures you always build with the correct options.
We use FinalBuilder at my company, which I can heartily recommend.
I highly recommend virtualizing your build server.
I run my build server in a VM and use a combination of CruiseControl.NET, NAnt, and VBScript. This allows me to restore the build server to a clean state if something goes terribly wrong (such as the test install screwing up the registry), and also lets me use the host computer for other tasks.