Friday, September 15, 2006

Geek Post: Continuous Integration

I read an article today on Continuous Integration, by Martin Fowler. If you’re in the computer field, you’ve probably come across articles by Mr. Fowler before; he’s a very bright man, and has written many good computer-related articles.

The basic tenet of Continuous Integration is that all of the code for a particular system is integrated on a regular basis, rather than all at once at the end of the project. This helps developers find integration-related bugs much sooner, and bugs that are found sooner are usually easier to fix. As Mr. Fowler describes it:

If a clash occurs between two developers, it is usually caught when the second developer to commit [to source control] builds their updated working copy. If not, the integration build should fail. Either way the error is detected rapidly. At this point the most important task is to fix it, and get the build working properly again. In a Continuous Integration environment you should never have a failed integration build stay failed for long. A good team should have many correct builds a day. Bad builds do occur from time to time, but should be quickly fixed.

The result of doing this is that there is a stable piece of software that works properly and contains few bugs. Everybody develops off that shared stable base and never gets so far away from that base that it takes very long to integrate back with it. Less time is spent trying to find bugs because they show up quickly.

This quote, from the beginning of the article, illustrates why this is important:

I vividly remember one of my first sightings of a large software project. I was taking a summer internship at a large English electronics company. My manager, part of the QA group, gave me a tour of a site and we entered a huge depressing warehouse stacked full with cubes. I was told that this project had been in development for a couple of years and was currently integrating, and had been integrating for several months. My guide told me that nobody really knew how long it would take to finish integrating. From this I learned a common story of software projects: integration is a long and unpredictable process.

But this needn’t be the way. Most projects done by my colleagues at ThoughtWorks, and by many others around the world, treat integration as a non-event. Any individual developer’s work is only a few hours away from a shared project state and can be integrated back into that state in minutes. Any integration errors are found rapidly and can be fixed rapidly.

This contrast isn’t the result of an expensive and complex tool. The essence of it lies in the simple practice of everyone on the team integrating frequently, usually daily, against a controlled source code repository.

When I’ve described this practice to people, I commonly find two reactions: “it can’t work (here)” and “doing it won’t make much difference”. What people find when they try it is that it’s much easier than it sounds, and that it makes a huge difference to development. Thus the third common reaction is “yes we do that—how could you live without it?”

Although Continuous Integration is a practice that requires no particular tooling to deploy, we’ve found that it is useful to use a Continuous Integration server. The best known such server is CruiseControl, an open source tool originally built by several people at ThoughtWorks and now maintained by a wide community. The original CruiseControl is written in Java but is also available for the Microsoft platform as CruiseControl.net.

Here are the key practices of Continuous Integration, from Mr. Fowler’s article:
  • Maintain a single source repository. It’s pretty common wisdom that any development team should have some type of source control system, where all code is maintained.
    The rule of thumb is that anything you need to build the system should be in the repository, and anything that is built should not. If you’re a J2EE developer using some framework, and your code needs that framework’s JAR files, then put those JAR files in source control. But when you’ve finished compiling your EAR file, don’t put the EAR file into source control. (There are exceptions to this rule, but not as many as people might think.) If someone needs the EAR file, for whatever reason, they can get the source files from the source control system and build it themselves.
  • Automate the build. The build process should use an automated “build script”—in whatever technology makes sense for your project—that makes building the application a one-step process. Because of the self-testing nature of Continuous Integration (see next bullet), this should even go as far as putting the database schema into a valid state.
    In the Java world, Ant is the tool of choice for creating these automated builds, and Mr. Fowler indicates that it can even be used in the Microsoft world. Although many—if not most, if not all—IDEs include functionality for performing the build process, it’s better to have an external build process, which can be run outside of the IDE. It’s fine for developers to use their IDE for building on their local machines, but the central build for the project should never rely on the IDE. (In the Java world, the IDE builds are often done in conjunction with the project’s Ant script—which, by the way, should also be checked into source control!) A minimal sketch of such a build script appears after this list. Another quote from the article, to sum up this point:

    A common mistake is not to include everything in the automated build. The build should include getting the database schema out of the repository and firing it up in the execution environment. I’ll elaborate my earlier rule of thumb: anyone should be able to bring in a virgin machine, check the sources out of the repository, issue a single command, and have a running system on their machine.

  • Make the build self-testing. As much as possible, the build process should include automated testing scripts, which test different aspects of the system during the build. Failures in the test cases should cause the build to fail. The XUnit family of tools (JUnit for Java, CppUnit for C++, etc.) is an excellent place to look, if you’re new to this concept.
    The basic idea is that any time you write a new object or method, you should also write a unit of code, executed as part of the build, which tests that object or method; the build sketch after this list shows how such tests can be wired into the build itself. These automated test cases go a long way toward catching bugs quickly, before a human begins running test cases. They are especially useful for regression testing. It’s quite true that these automated tests won’t catch all—maybe not even most—of the bugs in a system. But as Mr. Fowler says: “Testing isn’t perfect, of course, but it can catch a lot of bugs—enough to be useful.” He also says, later on:

    Of course you can’t count on tests to find everything. As it’s often been said: tests don’t prove the absence of bugs. However perfection isn’t the only point at which you get payback for a self-testing build. Imperfect tests, run frequently, are much better than perfect tests that are never written at all.

  • Code should be checked in regularly. (The article actually states this as “Everyone Commits Every Day”.) One of the key aspects of Continuous Integration is that the code is constantly being checked by the build process. If you have your build automated, and you have a suite of automated tests that are run as part of the build process, bugs which are introduced with any piece of code can be found (and fixed) quickly. If the build process runs every two hours, and a build fails because of a bug, then there are only two hours’ worth of changes to check. If the process runs daily, then there is a day’s worth of code to check; if it runs every week, then a week’s worth.
  • Every commit should kick off the build process, on the build machine. (Of course, this also presupposes a “build machine”, which is something else the project should have!) Any time code is checked back into the source control system, this should kick off the build process, so that integration bugs can be detected. This can either be manual—meaning that the developer checks in the code, and then walks over to the build machine and kicks off the process—or automated. With an automated tool, such as CruiseControl, the tool monitors the source control system, and any time code is checked in, it kicks off the build process, usually followed by an email to the appropriate parties to indicate the success or failure of the build. (A rough sketch of a CruiseControl configuration also appears after this list.)
  • Keep the build fast. Because Continuous Integration requires every check-in of code to trigger a build process, it’s important to keep the build as quick as possible. If the build process takes an hour, then every check-in costs the developer an hour of waiting for the build to finish. The main thing Mr. Fowler suggests along this line is to have the build done in stages. The build script should be smart enough to only build what needs building and, if possible, only test what needs testing. (It’s often the automated test cases that take up the most time.)
  • Test in a clone of production. As much as possible, make your test environment like the production environment: same operating system, same libraries, same hardware (if possible), same network configuration, same number of servers in the cluster, etc. The benefit is obvious—environment-related problems will be discovered before you go to production—but it’s not always possible to get exact copies of environments to test with. Do what you can (within your budget).
  • Make it easy to get the latest version of the code. There should always be a copy of the latest version of the application available for demonstrations, walkthroughs with the users, etc. In my opinion, this may or may not be practical; for web-based applications, which are what I’ve mostly been working on for the last several years, it might simply mean occasionally giving people access to the development environment, to see what the system looks like.
  • Everyone can see what’s happening. Mr. Fowler recommends making the state of the build as visible as possible at the end of the build process—red or green lights on the build machine’s screen to indicate success or failure, sounds to indicate success or failure, and so on. I’m not really sold that this is a necessary thing, but I’m sure it’s useful.
  • Automate deployment. Finally, an automated build isn’t much good unless it can also deploy automatically. Mr. Fowler recommends that the deployment to production should also be done using the automated tools; that way, the same method of deployment is used everywhere, and mistakes are reduced. And, of course, if you are going to automate deployment, then rollbacks should be automated as well, in case there are problems. (The build sketch below includes a deliberately simple deploy target.)
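
To make the “automated, self-testing build” idea a little more concrete, here is a minimal sketch of the kind of Ant build script described above. Everything in it (the project name, the directory layout, the deployment directory) is made up for illustration; the point is simply that compiling, testing, packaging, and deploying are all targets in a single script, and that a failing JUnit test fails the whole build. (Packaging is simplified to a plain JAR here, rather than the EAR files mentioned above, and the junit task assumes junit.jar is available to Ant.)

```xml
<?xml version="1.0"?>
<!-- Illustrative Ant build script: the project name, directory layout, and
     deployment directory are assumptions, not taken from any real project. -->
<project name="myapp" default="all" basedir=".">

    <property name="src.dir"    value="src"/>
    <property name="test.dir"   value="test"/>
    <property name="lib.dir"    value="lib"/>
    <property name="build.dir"  value="build"/>
    <property name="deploy.dir" value="/opt/myapp"/>

    <path id="project.classpath">
        <fileset dir="${lib.dir}" includes="*.jar"/>
        <pathelement location="${build.dir}/classes"/>
    </path>

    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>

    <target name="compile">
        <mkdir dir="${build.dir}/classes"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}/classes"
               classpathref="project.classpath" debug="on"/>
    </target>

    <target name="compile-tests" depends="compile">
        <mkdir dir="${build.dir}/test-classes"/>
        <javac srcdir="${test.dir}" destdir="${build.dir}/test-classes"
               classpathref="project.classpath" debug="on"/>
    </target>

    <!-- The build is self-testing: a single failing JUnit test fails the build. -->
    <target name="test" depends="compile-tests">
        <junit printsummary="yes" haltonfailure="yes">
            <classpath>
                <path refid="project.classpath"/>
                <pathelement location="${build.dir}/test-classes"/>
            </classpath>
            <formatter type="plain"/>
            <batchtest todir="${build.dir}">
                <fileset dir="${test.dir}" includes="**/*Test.java"/>
            </batchtest>
        </junit>
    </target>

    <!-- Package only after the tests have passed. -->
    <target name="package" depends="test">
        <jar destfile="${build.dir}/myapp.jar" basedir="${build.dir}/classes"/>
    </target>

    <!-- One command from a fresh checkout to a built, tested artifact. -->
    <target name="all" depends="clean, package"/>

    <!-- A deliberately simple stand-in for automated deployment:
         copy the tested artifact to the (hypothetical) deployment directory. -->
    <target name="deploy" depends="package">
        <copy file="${build.dir}/myapp.jar" todir="${deploy.dir}"/>
    </target>

</project>
```

With something like this in place, Mr. Fowler’s “virgin machine” test becomes easy to apply: check the project out of source control, run ant, and you should end up with a compiled, tested artifact. Keeping the build fast is then largely a matter of making sure the individual targets only rebuild and re-run what they actually need to.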
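
And to illustrate the “every commit kicks off the build” practice, here is a rough sketch of the kind of config.xml that CruiseControl uses to tie the source repository, a polling schedule, and the Ant script together. I’m writing this from memory, so treat the element and attribute names as illustrative assumptions rather than an exact configuration; the shape is the point: watch the repository, run the Ant build whenever something changes, and tell the team how it went.

```xml
<!-- Illustrative CruiseControl configuration; names, paths, and addresses are
     assumptions, so check the details against your CruiseControl version. -->
<cruisecontrol>
    <project name="myapp">

        <!-- Watch the source control system for new check-ins. -->
        <modificationset quietperiod="60">
            <cvs localworkingcopy="checkout/myapp"/>
        </modificationset>

        <!-- Poll on an interval (in seconds); when something has changed,
             run the project's Ant build script. -->
        <schedule interval="300">
            <ant buildfile="checkout/myapp/build.xml" target="all"/>
        </schedule>

        <!-- Email the appropriate parties to indicate success or failure. -->
        <publishers>
            <email mailhost="mail.example.com"
                   returnaddress="builds@example.com"
                   defaultsuffix="@example.com">
                <always address="dev-team"/>
                <failure address="project-manager"/>
            </email>
        </publishers>

    </project>
</cruisecontrol>
```

The manual alternative described in the bullet above needs none of this, of course; the tool just removes the human step and makes the feedback automatic.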
Overall, the main benefit of Continuous Integration is reduced risk. Bugs are found sooner, which makes them easier to fix. Problems with integration (Developer A’s code works fine until Developer B checks in his/her code) can be found right away and, again, fixed quickly. As Mr. Fowler puts it:

Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove. In this respect it’s rather like self-testing code. If you introduce a bug and detect it quickly it’s far easier to get rid of. Since you’ve only changed a small bit of the system, you don’t have far to look. Since that bit of the system is the bit you just worked with, it’s fresh in your memory—again making it easier to find the bug. You can also use diff debugging—comparing the current version of the system to an earlier one that didn’t have the bug.

A lot of development teams shy away from the idea of Continuous Integration because they feel it will steal valuable time away from their development. But in the long run I’ve found the opposite to be true: the time saved on the integration side of things more than offsets the incremental time spent creating test scripts and running the build on a regular basis. If a developer checks in code and then has to spend ten minutes fixing a bug that was caught by the automated tests in the build script, that’s ten minutes well spent; at the end of the project, tracking down and fixing the same bug could have taken hours or days.
