Wednesday, October 7, 2009

Robocode Editing - Testing Tools

Writing a program that compiles is relatively easy. Writing a program that does what it is supposed to and consistently generates the expected results is a much more difficult task. Luckily, there are a variety of tools available to help programmers write test cases that verify results, and to measure how much of the code those tests actually exercise. In the case of my software engineering class, these tools are JUnit and EMMA.

To address the issue of efficiently implementing test cases, I used the JUnit testing framework, which involved writing a number of Java test files. Unlike the test files I have written in the past, which typically involved defining a new object, giving it simple properties, then using print statements to verify that I got what I wanted, these test files do the checking for me. JUnit performs the actions or calculations I specify, compares the results to what I expect, and lets the test pass or fail depending on the outcome of the comparison.
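
To illustrate the difference, here is a minimal sketch of the JUnit 4 pattern. The class name and values are hypothetical, not taken from PacerEvader:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ExampleTest {
      // JUnit runs every method annotated with @Test and reports
      // pass or fail based on the assertions inside it.
      @Test
      public void testAddition() {
        double expected = 2.0;
        double actual = 1.0 + 1.0;
        // Fails the test automatically if actual is not within
        // 0.001 of expected; no print statements to eyeball.
        assertEquals(expected, actual, 0.001);
      }
    }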

Before I begin: if you would like to reference the files I discuss, the distribution file for my newly updated and tested version of PacerEvader can be downloaded here.

There were three basic categories of tests that I had to do for my software engineering class: acceptance, behavioral, and unit.

In the acceptance tests, I verified that my robot could beat two of the sample robots at least 60% of the time. This confirms that my robot is accomplishing its overall task of winning. I found these the easiest tests to implement because I did not have to alter the code for PacerEvader in any way - my working robot was simply placed in a battle against the sample robot I specified. The only check came at the end of the battle, when I ensured that PacerEvader was the overall winner and that it won at least 60% of the time.
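
The setup looks roughly like the sketch below. It assumes the robocode.control API that ships in robocode.jar; the install path, robot names, and round count are placeholders rather than my actual values:

    import static org.junit.Assert.assertTrue;
    import java.io.File;
    import org.junit.Test;
    import robocode.BattleResults;
    import robocode.control.*;
    import robocode.control.events.BattleAdaptor;
    import robocode.control.events.BattleCompletedEvent;

    public class AcceptanceTestSketch {
      private int firsts; // rounds PacerEvader won outright

      @Test
      public void testBeatsSampleRobot() {
        RobocodeEngine engine = new RobocodeEngine(new File("/path/to/robocode"));
        engine.addBattleListener(new BattleAdaptor() {
          @Override
          public void onBattleCompleted(BattleCompletedEvent event) {
            // Pull PacerEvader's results out of the finished battle.
            for (BattleResults results : event.getIndexedResults()) {
              if (results.getTeamLeaderName().startsWith("PacerEvader")) {
                firsts = results.getFirsts();
              }
            }
          }
        });
        RobotSpecification[] robots =
            engine.getLocalRepository("PacerEvader,sample.SittingDuck");
        BattleSpecification battle = new BattleSpecification(
            10, new BattlefieldSpecification(800, 600), robots);
        engine.runBattle(battle, true); // block until the battle finishes
        engine.close();
        assertTrue("PacerEvader should win at least 60% of rounds", firsts >= 6);
      }
    }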

For the behavioral tests, I verified the movement strategies that I wanted my robot to implement: namely, moving to the top of the battlefield to pace back and forth, and moving to another wall after hitting another robot. These tests were more difficult to implement because I had to check my robot's properties after each turn by overriding the onTurnEnded method. This was not only tedious, but I also had trouble figuring out how to use the TurnEndedEvent properties to get details on my robot, as it required calling numerous methods to actually reach an object holding all of PacerEvader's information. Because of the difficulties I had here, I was only able to complete two of the three tests I had wanted to, so I am one test case short of the required six for my software engineering requirement.
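
For anyone stuck on the same chain of calls, the route goes through the turn snapshot. This sketch assumes the robocode.control.snapshot interfaces from robocode.jar, and leaves the position-recording logic as a comment:

    import robocode.control.events.BattleAdaptor;
    import robocode.control.events.TurnEndedEvent;
    import robocode.control.snapshot.IRobotSnapshot;

    public class MovementListener extends BattleAdaptor {
      @Override
      public void onTurnEnded(TurnEndedEvent event) {
        // Each turn ends with a read-only snapshot of every robot.
        for (IRobotSnapshot robot : event.getTurnSnapshot().getRobots()) {
          if (robot.getName().startsWith("PacerEvader")) {
            double x = robot.getX();
            double y = robot.getY();
            // Record (x, y) here so the test can later assert that the
            // robot reached the top wall and paced along it.
          }
        }
      }
    }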

Finally, in the unit tests, I sought to verify that my robot fired with power proportional to its distance from the enemy. This, by far, presented the most problems because I could not find a way to actually access PacerEvader's methods. I had first refactored my code to place the firing-power calculation in a separate method so that the value would be easier to check. However, since I could not access PacerEvader's methods, the only work-around I could find to verify the calculations was to copy my calculateFirePower method from PacerEvader into my JUnit test file and reference it from there.
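The resulting test looks something like the sketch below. The formula here is only a stand-in (PacerEvader's actual calculation differs); the point is that the copied method can be exercised directly with assertions. Robocode clamps bullet power to the range 0.1 to 3.0, which the placeholder respects:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class FirePowerTest {
      // Copied from PacerEvader because the robot's own methods could not
      // be reached from the test. Placeholder formula: power scales with
      // distance, clamped to Robocode's valid range of 0.1 to 3.0.
      private double calculateFirePower(double distance) {
        return Math.max(0.1, Math.min(3.0, distance / 300.0));
      }

      @Test
      public void testFirePowerScalesWithDistance() {
        assertEquals(0.5, calculateFirePower(150.0), 0.001);
        assertEquals(2.0, calculateFirePower(600.0), 0.001);
        // Beyond the cap, power stays at the 3.0 maximum.
        assertEquals(3.0, calculateFirePower(1200.0), 0.001);
      }
    }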

In the process of testing my robot, I actually found a flaw in my implementation of its strategies. I intended to have my robot fire with power proportional to the distance from the enemy, but I left out an expression in the equation, so it was almost always firing at maximum power. I never noticed this while reading through my code on my own; I only double-checked the calculation when my test continually failed. I made the appropriate adjustments, then ran JUnit again to confirm that the change in firing power did not throw off any of the other results, the acceptance tests in particular.

After I completed my test cases, I used EMMA to analyze the coverage that they provided. For the most part, I can feel satisfied with the results, given that this was my first time designing test cases for a program. All of the test files had block coverage above 70% and line coverage over 80%; the exception was my main PacerEvader.java, which is not directly run since it is not a test file. (The fact that PacerEvader was not run significantly brought down the overall coverage for the package, so I am choosing to omit the overall summary.)

Overall, I feel that I can still improve my test cases to truly ensure that my robot is successfully implementing the strategy I designed for it. To do this, I will need to find an easy way to monitor my robot's scanning and firing from within JUnit, because I could not find a way to check either of these. If I can accomplish that, I will feel much more comfortable with the JUnit quality assurance. I am not sure whether I could further redesign my robot to make testing easier, since I do not yet know what the remaining test cases will require. Through this process, though, I did realize that testing is much easier if you design the code toward it from the beginning, rather than trying to bolt testing on in the middle of project development.

The distribution file for my newly updated and tested version of PacerEvader can be downloaded here.

*Minor update to distribution file to correct a spelling error in one of the Java test files. The corrected version can be found here.
