A Little Background on our Continuous Integration Setup
Monday, 28 April 2003
I've been tinkering with a long post on some experiences with continuous integration, but since I'm having trouble finding the time to put it all together, I'll try splitting it up into multiple posts (as seems to be all the rage these days). At the very least this should help me get something posted.
Part 1: a little background.
At the shop where I work we've been running an increasingly continuous integration service for a little over two years. Our first fully automated, detect-a-change build-and-smoke test ran on 15 October 2001. Prior to that we had been running a complete build-and-smoke test at least nightly for several months.
Through fits and starts, this CI process has grown to be pretty comprehensive. It performs a complete build-and-unit-test and, in limited circumstances, a deploy-and-functional-test across nearly 100 modules (roughly 200,000 non-blank, non-comment lines of code) supporting a variety of internal and external applications, from server and web-based to desktop, being developed full time by more than 20 developers. The service is based on a modified version of CruiseControl (1.2.1) driving a common Ant build script, JUnit unit tests, and Latka and jfcUnit functional tests. Build results are reported through email and on our intranet. Clean builds are tagged, and the generated artifacts are placed in a repository that serves as the foundation for both production deployments and sandbox development. (Curiously, we've found CVS to be one of the weakest links in that tool chain, but that's a topic for another day.)
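To give a feel for what "a common Ant build script driving JUnit" means in practice, here is a minimal sketch of such a build file. This is not our actual script; every name, path, and target in it is illustrative, and a real shared script would add classpath management, per-module properties, and report publishing.

```xml
<!-- Minimal sketch of a shared Ant build script (illustrative names only). -->
<project name="common-build" default="test" basedir=".">
  <property name="src.dir"   value="src"/>
  <property name="test.dir"  value="test"/>
  <property name="build.dir" value="build"/>

  <!-- Compile production and test sources into separate directories. -->
  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes"/>
  </target>

  <target name="compile-tests" depends="compile">
    <mkdir dir="${build.dir}/test-classes"/>
    <javac srcdir="${test.dir}" destdir="${build.dir}/test-classes">
      <classpath path="${build.dir}/classes"/>
    </javac>
  </target>

  <!-- Run every *Test class; XML formatter output is what a tool
       like CruiseControl picks up for its build reports. -->
  <target name="test" depends="compile-tests">
    <mkdir dir="${build.dir}/test-reports"/>
    <junit haltonfailure="true">
      <classpath>
        <pathelement path="${build.dir}/classes"/>
        <pathelement path="${build.dir}/test-classes"/>
      </classpath>
      <formatter type="xml"/>
      <batchtest todir="${build.dir}/test-reports">
        <fileset dir="${build.dir}/test-classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>
</project>
```

The idea is that CruiseControl, on detecting a change, simply invokes a well-known target (here `test`) the same way a developer would from the command line, so the automated build and the sandbox build stay identical.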
It has its problems, but all in all, it's an admirable, perhaps even enviable, setup. In later posts I hope to discuss some of those problems and some observations we've made along the way.