
Testing (basics)

In this unit you will learn the basics of testing: notably the rationale behind testing, which types of tests exist, the basic usage of JUnit, and how to write meaningful tests.

Lecture upshot

Tests are what keeps software alive. Software without tests is not maintainable, and whatever effort went into its creation is likely wasted, for the software will not survive. If you care about your software, write tests, and especially write good tests.

Essentials

Before we look into technical details, let's go over the general potential, limitations, and benefits of software testing.

What can we test

  • Given a SUT (subject under test), e.g. a class, a method, etc.
  • Tests can prove that a SUT currently has certain properties.
  • Tests can show presence of bugs, but not their absence.

Tests are rarely intelligent

Unit tests do not interpret test results, nor do they have any form of cognitive intelligence. We can only test clear, deterministic questions with dichotomous answers (true / false).

Interest for coding

  • Adding functionality: Whatever functionality you add, you can verify it did not interfere with anything existing.
  • Regression testing: Whatever you improved, it did not damage what was already working.
  • Refactoring: Whatever you changed, you did not break anything.

Regression test example

Assume I have a (not very optimal) function to test whether a number is prime:

public class PrimeChecker {
  public boolean isPrime(int number) {
    boolean result = true;
    for (int factor = 2; factor < number; factor++)
      if (number % factor == 0)
        result = false;
    return result;
  }
}

Then I can test with a bunch of test scenarios:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class PrimeCheckerTest {
  private final PrimeChecker checker = new PrimeChecker(); // later: swap in FasterPrimeChecker to re-run the same suite

  /**
   * Tests if the number 23 is correctly identified as a prime number.
   */
  @Test
  public void testIsPrime23() {
    assertTrue(checker.isPrime(23));
  }
}

Is the implementation still correct if I make my prime checker more efficient?

public class FasterPrimeChecker extends PrimeChecker {
  @Override
  public boolean isPrime(int number) {
    boolean result = true;
    for (int factor = 2; factor * factor < number; factor++)
      if (number % factor == 0)
        result = false;
    return result;
  }
}
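
At first glance yes: testIsPrime23 still passes for both variants. But this is exactly where the regression suite earns its keep. A minimal additional test (a sketch, not part of the original suite) exposes a boundary bug in the faster variant, which mishandles perfect squares such as 25 = 5 * 5:

/**
 * Regression test: 25 is not prime. PrimeChecker passes this test, but
 * FasterPrimeChecker wrongly reports 25 as prime, because its loop
 * condition (factor * factor < number) stops before reaching factor 5.
 */
@Test
public void testIsPrime25() {
  assertFalse(checker.isPrime(25));
}

Re-running the suite after the "optimization" immediately flags the regression; the fix is to loop while factor * factor <= number.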

Interest for development

Tests are also a form of documentation:

  • Everything that is tested: Guaranteed requirement, a clear specification of expected program behaviour.
  • Everything not tested: Unknown requirement, no specification of expected program behaviour. Alternatives:
    • Writing documentation
    • Comments in code
    • Variable / method naming
Why then not just rely on other forms of behaviour documentation?

Tests are the only form of documentation that can be verified automatically.

Test types

Main difference between test types: the test horizon (what to test).

Horizon                       | Test type        | Example
Isolated module               | Unit test        | Calling a Java class with input x returns y.
Interplay of multiple modules | Integration test | System sends an email to alert@uqam.ca when a critical condition arises.
The entire system             | System test      | Clicking "accept" finalizes the flight booking and generates a boarding pass PDF.
Non-functional aspects        | Acceptance test  | System reacts sufficiently fast for productive use.

Test means

Test type        | Test means
Unit test        | Unit testing frameworks, e.g. JUnit.
Integration test | Mocking frameworks (more on that later).
System test      | Actually using the system, e.g. via scripts.
Acceptance test  | Actual humans using the system.

In general

In general, things get more difficult (and expensive) to test the greater the horizon. E.g. unit tests are cheap compared to hiring test users who attempt to interact with your system.

Test driven development

Test-driven development (TDD) targets the problem of production code and tests drifting apart: "Production code constantly evolves; how do your tests keep up?"

  • TDD: Do it the other way round!
  • The three laws of TDD:
    1. Whatever functionality you need, first write a failing test.
    2. Do not write more test code than is sufficient to fail.
    3. Do not write more production code than is sufficient to pass all tests.

Ideally, when following TDD you never just "go ahead and code a lot of new functionality". Likewise, you never just "go ahead and write tons of new tests". Both advance at the same pace, as illustrated in the sketch below.
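
A minimal illustration of one such cycle, using a hypothetical Calculator class (a sketch, not the lecture's code):

// Law 1: first write a failing test for functionality that does not exist yet.
@Test
public void testAdd() {
  Assert.assertEquals("2 + 3 should be 5.", 5, new Calculator().add(2, 3));
}

// Law 3: then write just enough production code to make the test pass.
public class Calculator {
  public int add(int a, int b) {
    return a + b;
  }
}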

Unit tests

  • Unit tests assume a strict 1:1 mapping between the SUT (a class) and its tests (a test class).
  • Example: When testing a class with functionality for primality checking (the SUT), we write a corresponding unit test class, as in the PrimeChecker / PrimeCheckerTest pair above.

Preliminaries

When you set up a new Maven project, Maven actually already anticipates that you will probably want to test your project:

  mvn archetype:generate \
  -DgroupId=ca.uqam.info \
  -DartifactId=MavenHelloWorld \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DinteractiveMode=false

Note: Some systems (Windows) cannot handle multi-line commands. Remove the \ characters and place everything on a single line.

Creates the following folder structure:

  MavenHelloWorld
  ├── pom.xml
  └── src
      ├── main
      │   └── java
      │       └── ...
      │           └── App.java
      └── test
          └── java
              └── ...
                  └── AppTest.java

Well duh... you've already got your first SUT, and a corresponding test!

But Maven actually does more! By default, your pom.xml also contains a dependency for JUnit:

<project>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

The version is a bit outdated though; meanwhile we're at JUnit 4 (and even 5). The first thing you want to do is update the JUnit version to 4.13.2.
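
The updated dependency block keeps the same coordinates; only the version changes:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
</dependency>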

Anything unusual with the dependency block?

The dependency shows an additional scope tag, which was not part of the dependencies we've previously seen, e.g. libraries. This is because not all dependencies are relevant in all situations. The test scope indicates that a dependency is only needed for testing, not at runtime. Hence Maven will not package the dependency into the build when creating an executable; your customer will not need it when later using your software product.

Running tests

Either will work:

  • mvn clean package: Compile code, run all tests, create a JAR.
  • mvn clean test: Compile code, run all tests. (A bit faster, but not always what you want.)

More on the syntax and keywords of Maven in an upcoming lecture.

JUnit Syntax

JUnit is (mostly) controlled via annotations. We'll now iterate over the most common annotations, and annotation parameters.

Test

  • @Test defines an atomic unit test. We've already seen an initial example.
    • @Test decorates a method.
    • Method must be public void ...
    • There can be as many @Test annotated methods as you want, per test class.
  • JUnit creates a new instance of the test class for every @Test method! You cannot pass information between tests using class fields!
    • Example:
import org.junit.Test;

public class DemoTest {

  private int internValue = 0;

  @Test
  public void foo() {
    System.out.println(internValue);
    internValue += 3;
  }


  @Test
  public void bar() {
    System.out.println(internValue);
    internValue += 5;
  }
}
What is printed to the console on test execution?

Two times 0. Each test method is executed on a separate DemoTest instance, so the field increment in one test is never visible to the other.

Assertions

Usually you do not just want to invoke functions, but also test for results.

Is there any sense in tests without checking results?

Yes, tests without checking results can still make sense, for example to verify that no runtime exception occurs.

  • In JUnit 4, assertions verify whether a given variable (or method return value) matches an expected result.
  • General syntax of assertEquals:
    1. First argument: A human-readable message, shown in case the value is not what is expected.
    2. Second argument: The expected value, e.g. 42.
    3. Third argument: The actual value, e.g. the result of the tested function foo().
  • So if you had a test for function foo() and the test asserts result 42, you would write:
/**
 * Verifies that calling foo returns 42.
 */
@Test
public void testFoo() {
  Assert.assertEquals("Calling foo did not return expected value 42!", 42, foo());
}

Use assertEquals and provide a message

Semantically you can take the shortcut and simply call assertTrue(foo() == 42). However, if the test fails, it may not be obvious what the issue is. Always provide a human-readable message for your assertions.

Some useful variants of assertEquals are:

  • assertNotEquals(...)
  • assertNull(...)
  • assertNotNull(...)
  • assertArrayEquals(...)
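
A minimal illustration of these variants, with made-up values just to show the signatures:

@Test
public void testAssertionVariants() {
  Assert.assertNotEquals("Values should differ.", 41, 42);
  Assert.assertNotNull("Reference should not be null.", "hello");
  Assert.assertNull("Reference should be null.", null);
  Assert.assertArrayEquals("Arrays should match element-wise.", new int[] {1, 2}, new int[] {1, 2});
}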

Before

Oftentimes, you have duplicated code across your test methods, for example:

  • Opening a connection to a database
  • Ensuring program is in a testable state (e.g. to test a Halma controller, the Model must be initialized)
  • Preparing the file-system
  • ...

Instead of copy-pasting the same code into all test methods (or even starting each test with the same common method call), you can decorate a dedicated initialization method with @Before and initialize class fields there.

import org.junit.Before;
import org.junit.Test;

public class DataBaseTest {

  private DataBase db;

  /**
   * Method called before every test.
   */
  @Before
  public void initializeDatabase() {
    db = connectToDatabase();
    Logger.info("Connection to DB established.");
  }

  @Test
  public void databaseWriteTest() {
    db.callSomethingImportant();       // db has been initialized by @Before
  }

  @Test
  public void databaseReadTest() {
    db.callSomethingElseImportant();   // db has been initialized by @Before
  }
}

Note: In JUnit 5, @Before has been renamed to @BeforeEach, to avoid confusion with @BeforeClass (which executes a method ONCE before all tests are run; that's e.g. useful to create a database connection a single time).
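
A minimal sketch of the JUnit 4 @BeforeClass counterpart, reusing the hypothetical DataBase helper from above (note that the method must be static, since it runs before any test instance exists):

import org.junit.BeforeClass;

public class DataBaseTest {

  private static DataBase db;

  // Runs ONCE before all tests of this class; hence static.
  @BeforeClass
  public static void connectOnce() {
    db = connectToDatabase();
  }
}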

After

  • There are absolutely no guarantees on test order.
    • You cannot assume your @Test annotated methods to be executed in a given order.
    • Any order must lead to the same outcome. If that's not the case, there's an issue with your tests!
  • Sometimes, in order to test an object you have to modify state.
    • When you're working with plain objects, a fresh SUT is created for every @Test method.
    • But if you're working with something persistent, e.g. a database, executing a test may leave a "dirty" state.

Example:

  • first a test to verify database reading:
      @Test
      public void testReadStudent() throws IOException {
        Set<String> students = db.readDataBase();
        Set<String> expectedResult = new LinkedHashSet<>();
        expectedResult.add("Max");
        Assert.assertEquals("DataBase read did not provide expected result.", students, expectedResult);
      }
    
  • then a test to verify database writing:
      @Test
      public void testAddStudent() throws IOException {
        db.addStudent("Ryan");
      }
    
  • Now it's up to the test order whether the first test fails or passes! Not good; we want reliable, deterministic tests.

What to do about it?

  • Suboptimal solution: Clean up state after each test.
    • You can either include "undo" actions at the end of each test, e.g. remove the student you attempted to add:
        @Test
        public void testAddStudent() throws IOException {
          db.addStudent("Ryan");
          db.removeStudent("Ryan"); // <-- We do not want to test this, but we have to, so other tests work.
        }
      
    • But if your test fails (or crashes!), the "undo" actions won't be executed!
  • Better solution:
    • Use an @After annotated method.
    • The method will be called after every @Test method execution.
    • State is deterministic.
Can I have multiple @After annotated methods?

Yes, but you cannot assume they are executed in a deterministic order. That's rarely what you want.
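
A minimal sketch of such a cleanup for the database example above, reusing the hypothetical db field and removeStudent helper:

@After
public void cleanUpDatabase() throws IOException {
  // Runs after EVERY @Test method, even if the test failed or crashed.
  db.removeStudent("Ryan");
}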

Exceptions

  • Defensive programming means throwing exceptions when someone tries to hijack your functions (by mistake or intentionally).
  • It makes perfect sense to write test cases to verify that your production code is sufficiently defensive.

Example:

  • You've already learned that getters should not be exploitable to manipulate object state.
    • (Static code analyzers will actually warn you, if your code has that kind of vulnerability. See SpotBugs.)
  • More precisely, a getter returning a list should protect the list as "unmodifiable":
import java.util.Collections;
import java.util.List;

public class Inf2050 {

  private final List<Student> students;
  // [...] constructor etc...

  /**
   * Getter for a read-only list of all students enrolled in the class.
   *
   * @return unmodifiable list of student objects.
   */
  public List<Student> getStudents() {
    return Collections.unmodifiableList(students);
  }
}
  • But then a corresponding test for the defensive implementation would always fail, because the expected behaviour is an exception!
@Test
public void testHijackGetter() {
  Inf2050 course = new Inf2050();
  List<Student> students = course.getStudents();
  students.add(new Student("Alan Turing")); // <-- Must throw an exception... test will fail.
}
  • Luckily JUnit offers a workaround for this scenario: we can decorate the annotation to expect an exception:
@Test(expected = UnsupportedOperationException.class)
public void testHijackGetter() {
  Inf2050 course = new Inf2050();
  List<Student> students = course.getStudents();
  students.add(new Student("Alan Turing")); // <-- Test will only fail if there is NO exception!
}

Careful with what you wish for

Don't just expect Exception.class. This will match any exception, and your test might pass although a completely different exception was raised (Exception.class is the common superclass of all other exceptions). Always expect as specifically as possible.

Timeouts

  • Sometimes tests are time sensitive, or you simply do not want test runs to take forever.
  • You can "decorate" the @Test annotation with a timeout, given in milliseconds.
  • Your test will be killed once the timeout is reached; exceeding the timeout fails the test.

Example:

/**
 * Testing a really big number for prime could take a moment, so we set a 1 millisecond timeout.
 */
@Test(timeout = 1)
public void testIsPrimeMaxInt_1() {
  assertFalse(checker.isPrime(Integer.MAX_VALUE - 1));
}

Our CPU is fast, but not that fast, so the test will fail:

java.lang.Exception: test timed out after 1 milliseconds
    ...

Coverage

  • Ideally your tests cover all possible execution paths of your program:
    • Every class
    • Every method
    • Every branch of every if statement in your program logic
  • We can execute all tests and mark all lines that have been hit by at least one test.
  • (Line) coverage is then defined as: lines executed by at least one test / total lines of code.
    • Class and method coverage are rarely used in practice, because they can be misleading, e.g. when code modularity is low.

Careful with interpreting coverage percentages

Good coverage does not necessarily mean your program is well tested. In principle you can reach high coverage simply by calling every method, without ever asserting anything. However, while good coverage does not imply good testing, low coverage does imply poor testing.

Coverage reports with IntelliJ

IntelliJ has a built-in test coverage reporter.

  • The best option is to right-click the test package in the project structure explorer.
  • Instead of just running the tests, choose the More Run/Debug -> Run with Coverage option.
  • You'll receive a test report for:
    • Class coverage
    • Method coverage
    • Line coverage

In addition, the code editor gives you visual feedback on the exact lines covered (or not covered) by tests:

  • Green: Line has been executed by at least one test.
  • Red: Line has not been executed.
  • Yellow: Line was executed partially, e.g. only one branch of an if-else statement was visited.

You can also hover over the coloured marking to see the number of execution hits.

Test hacking

Test-driven development (TDD) sometimes leads to developers "hacking" around the tests.

  • That is, they develop production code tailored to the tests rather than to the purpose.
  • Example:
    • Perfect numbers are defined as: "A positive integer that is equal to the sum of its proper divisors"
      • 6 has divisors: 1, 2, 3.
      • 1 + 2 + 3 = 6
      • 6 is a perfect number.
      • Other perfect numbers are: 28, 496, 8128, ... (they quickly become rare)
Are there any odd perfect numbers?

If you find the answer, please let me know. This is an unsolved problem in mathematics.

  • A senior developer went ahead and coded a few simple unit tests, hoping for a TDD implementation of a checker function.
    • The test code, written by the developer:
        @Test
        public void testPerfectNumber3() {
          PerfectNumberChecker checker = new PerfectNumberChecker();
          assertFalse("3 is not a perfect number, but checker mistakenly said it is.", checker.isPerfect(3));
        }
      
        @Test
        public void testPerfectNumber6() {
          PerfectNumberChecker checker = new PerfectNumberChecker();
          assertTrue("6 should be identified as perfect number, but checker did not recognize it.", checker.isPerfect(6));
        }
      
        @Test
        public void testPerfectNumber20() {
          PerfectNumberChecker checker = new PerfectNumberChecker();
          assertFalse("20 is not a perfect number, but checker mistakenly said it is.", checker.isPerfect(20));
        }
      
        @Test
        public void testPerfectNumber28() {
          PerfectNumberChecker checker = new PerfectNumberChecker();
          assertTrue("28 should be identified as perfect number, but checker did not recognize it.", checker.isPerfect(28));
        }
      
  • A new programmer joined the team and was asked to implement the PerfectNumberChecker.
    • After 3 minutes they found a "solution" that passes all tests:
      public boolean isPerfect(int number) {
        if (number == 3)
          return false;
        if (number == 6)
          return true;
        if (number == 20)
          return false;
        else
          return true;
      }
      

Heads up when sharing tests

Clearly the new employee did not understand the purpose of TDD. Their solution is tailored to the tests, but it should be the other way round. In the worst case, programmers will purposefully code around the tests to create an illusion of task completion. That's why I keep some TP tests undisclosed until after submission.

Monkey Tests

Monkey testing means "testing using random inputs".

  • Random inputs are effective against test hacking.
  • However, there are two challenges:
    1. You cannot hard-code expected values for random inputs.
    2. You must be able to reproduce errors.

Good practices:

  • For the first, you can implement a reduced testing logic. For example, if you're testing the prime checker with random numbers, you can easily rule out even numbers, even without re-implementing the entire prime checker in your test class (see the sketch after this list).
  • For the second, you can use a seeded pseudo-random number generator (PRNG): a function that produces values that appear random in terms of distribution, but can be deterministically reproduced.
    • If you suspect test hacking, you can simply change the seed.
    • Java comes with a built-in PRNG: Random
      // For seed 42, will always generate:
      // -1170105035 234785527 -1360544799 205897768 ...
      int seed = 42;
      Random random = new Random(seed);
      for (int i = 0; i < 10; i++) {
        System.out.print(random.nextInt()+ " ");
      }
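
Putting both practices together, a monkey test for the prime checker could look like this (a sketch; the reduced logic only rules out even numbers):

@Test
public void monkeyTestIsPrime() {
  Random random = new Random(42); // fixed seed: any failure is reproducible
  PrimeChecker checker = new PrimeChecker();
  for (int i = 0; i < 1000; i++) {
    // Reduced testing logic: random even numbers >= 4 are never prime.
    int evenNumber = 4 + 2 * random.nextInt(10000);
    assertFalse("Even number " + evenNumber + " was wrongly classified as prime.",
        checker.isPrime(evenNumber));
  }
}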
      

Info

A more extreme form of monkey testing is fuzzing. Fuzzing also bombards the software with generated inputs, but instead of just random numbers it gradually "improves" at finding pathological inputs, e.g. by measuring response times or crashes. New pathological inputs are derived by mutating the previously most pathological inputs.

Literature

Inspiration and further reads for the curious minds: