
Agitating Java and testing Windows

Combining manual and automated testing

By Tim Anderson, 27 Feb 2006

Test-driven development is only as good as its tests, and therein lies the problem for many developers. Recently, D Richard Hipp, the main author of the SQLite open source database, analysed his source code and calculated that 59 per cent of the code base is devoted to testing, covering 97.4 per cent of the code.

Creating comprehensive tests is hard work, and test-driven development represents a change in programming culture; it is not merely "another useful technique". Developers naturally look for ways to automate some of this work, but automation is inherently difficult. The only definitive description of an application's behaviour is its code, yet the point of unit tests is that this code may be flawed. The ideal moment to create tests is just before the code is written, when the developer knows his precise intentions, and automation cannot do this for you. Manually written tests are also imperfect: test authors may fail to cover all possible scenarios, or may include bugs in the tests themselves. Since neither manual nor automated testing covers all the bases, how about combining the two? Broadly speaking, this is the thinking behind Agitator, which describes itself as "an automated way of exercising your code without the necessity of writing tests first".

Screenshot shows Agitator running within the Eclipse IDE.

Agitator is implemented as an Eclipse plug-in, though it also integrates with IntelliJ IDEA and JBuilder. These IDE integrations simply take you into Eclipse, so it is Eclipse users who are most likely to enjoy Agitator.

The core feature of Agitator is that it "Agitates". In order to Agitate a class, the system first analyses the compiled byte-code in order to generate tests. Since it works on compiled code, it is essential to Build before Agitating, or to use Eclipse's auto-build feature. Next, Agitator generates sample input data and calls the public members of the class, instantiating objects as needed. You had better make sure your application is not hooked up to a live database at this point, as Agitator will not be responsible for the results. Finally, the results are displayed as a list of observations and problems. You can set the Agitation level to Normal, Extended or Aggressive, thereby determining the number of times each method is called and the variety of input values.
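The mechanics of such a pass can be sketched in plain Java. The following is a toy illustration only, not Agitar's implementation: it reflects over a class's public methods, feeds them generated values, and records return values and exceptions as raw observations. The Calculator class and the input ranges are invented for the example.

```java
import java.lang.reflect.Method;
import java.util.Random;

// Toy sketch of what an "agitation" pass does conceptually (not Agitar's
// actual implementation): reflect over a class's methods, feed them
// generated values, and record returns and exceptions as raw observations.
public class MiniAgitator {
    static final Random RAND = new Random(42);

    // Example class under test (invented for this sketch).
    public static class Calculator {
        public int divide(int a, int b) { return a / b; }
        public int square(int a) { return a * a; }
    }

    // Generate a sample value for a parameter type; only int is handled here.
    static Object sampleFor(Class<?> type) {
        if (type == int.class) return RAND.nextInt(21) - 10; // -10..10
        return null;
    }

    public static void main(String[] args) throws Exception {
        Calculator target = new Calculator();
        for (Method m : Calculator.class.getDeclaredMethods()) {
            for (int run = 0; run < 5; run++) {
                Object[] inputs = new Object[m.getParameterCount()];
                for (int i = 0; i < inputs.length; i++)
                    inputs[i] = sampleFor(m.getParameterTypes()[i]);
                try {
                    Object result = m.invoke(target, inputs);
                    System.out.println("observation: " + m.getName()
                            + " returned " + result);
                } catch (Exception e) {
                    // e.g. an ArithmeticException from divide-by-zero would
                    // surface here as a "problem" rather than an observation.
                    System.out.println("problem: " + m.getName()
                            + " threw " + e.getCause());
                }
            }
        }
    }
}
```

A real Agitation, of course, also instantiates the objects a method needs and varies the input strategy with the Agitation level; the sketch only shows the reflective core of the idea.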

So what is an observation? Observations are similar to assertions: statements about the running code that Agitator thinks might be interesting, such as that a certain variable or return value was never null, or always fell within a range of values. Agitator also reports exceptions, failed assertions, and code coverage, showing how much of the code was exercised by the Agitation.

Screenshot showing Agitator’s observations as the starting point for tailoring an Agitation.

Most observations are inconsequential, but as you review these you may see unexpected results, perhaps exposing a bug. You can also add and edit the observations, improving on Agitator's guesses and, in effect, creating unit tests that build on the automated foundation. You will need to become familiar with Agitator's small Expression syntax, learning for example that @PRE means the value of an object before a method call, and @RETURN the return value of the method. Snapshots let you drill down into an observation to inspect the other conditions that generated the result. You will likely spot important observations that should be promoted to actual assertions, which you can do with a couple of clicks. This is Agitation at its most valuable, enabling you to identify the correct assertions to make about the behaviour of your code.

Unfortunately, you will probably still find that the Agitation did not do quite what you wanted. Randomly generated values may be less useful than close simulations of the input your application will receive in the real world. However, you can configure Agitator to supply these by using Factories. Factories let you control how Agitator gets its test values, letting you specify anything from a specific range of a primitive type to custom code for constructing objects used by your application. You can also specify how often Agitator takes values from a custom Factory rather than using its defaults. If all that sounds like too much work, you can automate Factory configuration and other customisations by using Agitator's Experts. Experts are supplied for Hibernate, J2EE, Struts and Log4j, and you can modify these or create your own. Agitator can automatically create mock objects, too.
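The factory idea itself is straightforward. The sketch below is a hypothetical illustration in plain Java, not Agitar's actual Factory API: one factory constrains a primitive type to a realistic range, while another constructs a domain object. The ValueFactory interface and the Customer class are invented for the example.

```java
import java.util.Random;

// Hypothetical illustration of the factory idea (not Agitar's real API):
// instead of fully random values, a factory constrains generated inputs
// to realistic ones.
public class FactoryDemo {
    interface ValueFactory<T> { T next(); }

    // Constrain a primitive to a specific range, rather than any int.
    static class RangeIntFactory implements ValueFactory<Integer> {
        private final Random rand = new Random();
        private final int min, max;
        RangeIntFactory(int min, int max) { this.min = min; this.max = max; }
        public Integer next() { return min + rand.nextInt(max - min + 1); }
    }

    // A domain object the application under test might use.
    static class Customer {
        final String name;
        final int age;
        Customer(String name, int age) { this.name = name; this.age = age; }
    }

    // Custom construction code for that object, composing the range factory.
    static class CustomerFactory implements ValueFactory<Customer> {
        private final ValueFactory<Integer> ages = new RangeIntFactory(18, 90);
        public Customer next() { return new Customer("test-customer", ages.next()); }
    }

    public static void main(String[] args) {
        CustomerFactory factory = new CustomerFactory();
        for (int i = 0; i < 3; i++) {
            Customer c = factory.next();
            System.out.println(c.name + " aged " + c.age);
        }
    }
}
```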

There is no need to abandon conventional unit tests in order to use Agitator. If JUnit tests are present, Agitator will call them in the same way as the JUnit harness, and show the outcome in its Agitation results panel. The two work well together [Ed: as you'd expect, since the people behind Agitator are also the people behind JUnit] and you can use Agitator to improve your JUnit tests. In some cases JUnit tests are an easier and more natural approach to test creation than Agitator.
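For comparison, a conventional JUnit-style test is written by hand against known expected values. The sketch below follows JUnit 3's testXxx naming convention but stands in a local assertEquals helper, so it runs without the JUnit jar on the classpath; the Account class is invented for the example.

```java
// A conventional JUnit 3-style test for comparison. In real JUnit this class
// would extend junit.framework.TestCase and inherit assertEquals; here a
// local helper stands in so the sketch is self-contained.
public class AccountTest {
    // Invented class under test.
    static class Account {
        private int balance;
        void deposit(int amount) { balance += amount; }
        int getBalance() { return balance; }
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    // JUnit 3 discovers test methods by the testXxx naming convention.
    public void testDeposit() {
        Account a = new Account();
        a.deposit(50);
        a.deposit(25);
        assertEquals(75, a.getBalance());
    }

    public static void main(String[] args) {
        new AccountTest().testDeposit();
        System.out.println("OK");
    }
}
```

A test like this encodes the developer's precise intention in a way no generated observation can, which is why the two approaches complement one another.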

Finally, Agitator has a code rules engine for static code analysis. A generous set of rules is supplied, including coding conventions, naming conventions, metrics such as the maximum number of statements in a method, J2EE standards, and likely candidates for bugs. Some of the rules come from Oliver Burn's open source Checkstyle project.

Screenshot showing Agitator’s comprehensive static analysis rules.

A separate extra-cost product, the Agitar Management Dashboard, lets you easily analyse and monitor the health of a project in terms of test failures, coverage and adherence to code rules [Ed: I think this can be an important part of tying unit testing and development back to the project's business sponsors].

Agitator is an intelligent and well-crafted product, the outcome of deep thinking about how to automate unit testing. Static analysis and code coverage analysis come as part of the package. At just under £2,000 per user it is not cheap, but in the right hands it will raise software quality, and it provides an excellent framework for creating more effective tests. That said, it is emphatically not a quick-fix route to test-driven development. At worst it can generate large amounts of not very useful data to plough through, and it is not truly geared to test-first development, because you must have code to Agitate. That is interesting, considering that Kent Beck is involved with Agitar; still, you can do test-first development alongside Agitator, and since testing and coding are iterative processes, this arguably matters little in practice. Creating high quality Agitations takes work, however, and the product will be most useful to developers already skilled with JUnit. There's more information here.

Test-driven development with Visual Studio Team System

It is interesting to contrast the work Agitar is doing with what Microsoft is delivering in Visual Studio Team System. JUnit has long been ported to .NET in the form of NUnit, but Team System does not use this open source product, preferring its own testing framework. Testing is a major focus of the product, influenced by two distinct (and somewhat contradictory) strands of thought. The first, and I think primary, influence is Microsoft's own testing practice: a distinct tester role, dedicated to finding and reporting bugs and monitoring software quality, identified in the process guidance of the Microsoft Solutions Framework (MSF) and embodied in the Visual Studio 2005 Team Edition for Software Testers. The secondary influence comes from eXtreme Programming (XP), which teaches test-first development and argues that testing is an integral part of the developer's task, not something that should be delegated to others or postponed to a later stage. Even so, Team System does provide the necessary infrastructure for test-driven development, and may prove to be the beginning of a quiet revolution towards more disciplined and process-oriented development on Microsoft's platform.

The starting point is the unit testing built into both the Developer and Tester editions of Visual Studio. Test classes and methods are annotated with attributes, used by the test harness to run tests with configurable initialisation and cleanup. You can easily create unit tests using a wizard. A right-click in the code editor offers Create Unit Tests, which opens a dialog allowing you to select which methods to test. Visual Studio then generates skeleton tests for those methods. These are simple skeletons, not smart Agitations, though they are still great time-savers, especially for things like testing private members, which requires reflection, or testing web applications (available in the Tester edition only). By default, they fail with a result of Inconclusive; it is the developer's responsibility to implement the code that will provide meaningful results. You can easily make tests data-driven, iterating through all the values of a linked table to get input. You can also create suites of tests, and configure code coverage, enabling you to see which code was exercised. It is all nicely integrated into Visual Studio, which has a test view where you can browse and run the available tests.
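Team System's tests are written in .NET languages, but the data-driven pattern translates to any language. The rough Java sketch below inlines the data rows where Team System would bind a table through attributes; the add method and the row values are invented for the example.

```java
// Sketch of a data-driven unit test: the harness runs the same test body
// once per row of a data source. Team System binds the rows to a table via
// attributes; here they are simply inlined as an array.
public class DataDrivenSketch {
    // Invented method under test.
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) {
        // Each row: { a, b, expected }.
        int[][] rows = { {1, 2, 3}, {0, 0, 0}, {-5, 5, 0} };
        for (int[] row : rows) {
            int actual = add(row[0], row[1]);
            if (actual != row[2])
                throw new AssertionError("add(" + row[0] + "," + row[1]
                        + ") = " + actual + ", expected " + row[2]);
        }
        System.out.println("all rows passed");
    }
}
```

The appeal is that adding a scenario means adding a row of data, not writing another test method.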

Screenshot showing the test types available in Team System.

Those with the Tester edition or the full suite have additional tests available, including web tests. Load tests simulate multiple method calls or web requests over a set period, and are invaluable for testing performance, scalability and resilience. You can use multiple computers to do the testing, by setting up remote controllers and agents.

Team System has strong support for static code analysis, building on an earlier informal project called FxCop, with hundreds of rules you can apply through project properties.

One interesting feature of Team System, in terms of process, is the ability to set check-in policy, requiring certain standards to be met before it is possible to check in code. It is possible for developers to override this, but doing so raises an administrator alert.

Screenshot showing Team System Check-in policies.

Screenshot showing overriding check-in policy in Team System.

You can set three types of policy. One specifies code analysis rules, one specifies test policy, and the third requires associated work items. The test policy specifies one or more test lists that must succeed, although test lists can only be created with the Tester edition. A major omission in the first release is that you cannot set a rule requiring a certain proportion of code to be covered by tests before check-in; you can only specify pre-existing tests. Team System is highly extensible, so it's likely that Microsoft or third parties will plug this and other gaps in due course.

Team System is a huge step forward for Microsoft, bringing testing into the heart of the development process. That said, I would prefer to see less emphasis on the distinct Tester role and more emphasis on test-driven development. Some features in the Tester edition, like test management and web tests, should also be in the Developer edition. This sort of product partitioning is always frustrating, especially for small teams or solo developers, who will still need the full suite.

The licensing is so complex that Microsoft has a 15-page white paper (available here) to describe it, involving server and client licences for Windows and Office as well as Team Foundation Server and Visual Studio. Basic prices are available here.

Still, it remains true that tool costs are generally only a small part of total development budgets, and Team System in its latest Release Candidate looks very worthwhile. More information can be found here.®

The Register - Independent news and views for the tech community. Part of Situation Publishing