In a recent Spring Boot project I decided to try a new pattern in my integration tests, and it worked out pretty well. I'm curious whether others have had success with this too; I haven't seen it written about anywhere.

The basic idea: use Kotlin's "by lazy" to build graphs of canned test fixtures, and use those lazy values as default parameter values in the fixture setup functions.

To make up a simple example, say you have "books" and "authors" tables in a database, where every book has to have an author. You might have a couple of functions in your test code to populate those tables and return the IDs of the newly inserted rows so you can use them in tests:

fun insertAuthor(name: String): Long { ... }
fun insertBook(title: String, authorID: Long): Long { ... }

fun testCheckOut() {
    val authorID = insertAuthor("John Steinbeck")
    val bookID = insertBook("Cannery Row", authorID)
    
    library.checkOut(bookID)
    // ...then assert the book is checked out
}
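
The helper bodies aren't important to the pattern, but for concreteness, here's a sketch of what they could look like, assuming a JdbcTemplate is in scope and the database supports a Postgres-style RETURNING clause (the table and column names here are invented for the example):

fun insertAuthor(name: String): Long =
    // Insert the row and return its generated primary key
    jdbcTemplate.queryForObject(
        "INSERT INTO authors (name) VALUES (?) RETURNING id",
        Long::class.java, name
    )!!

fun insertBook(title: String, authorID: Long): Long =
    jdbcTemplate.queryForObject(
        "INSERT INTO books (title, author_id) VALUES (?, ?) RETURNING id",
        Long::class.java, title, authorID
    )!!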

The "by lazy" pattern reduces boilerplate in cases where you just need a book and an author but it's fine for them to be canned values.

val cannedAuthorID: Long by lazy { insertAuthor("John Steinbeck") }
val cannedBookID: Long by lazy { insertBook("Cannery Row") }

// This is the same as before
fun insertAuthor(name: String): Long { ... }

// But this has a default if your test doesn't care who the author is
fun insertBook(title: String, authorID: Long = cannedAuthorID): Long { ... }

// Referencing cannedBookID will insert both the book and the author
fun testCheckOut() {
    library.checkOut(cannedBookID)
    // ...then assert the book is checked out
}

// The canned IDs are inserted exactly once
fun testCheckOutTwoBooks() {
    library.checkOut(cannedBookID)

    // This will use the already-inserted author ID
    val secondBookID = insertBook("Of Mice and Men")

    assertThrows<TooManyBooksException> {
        library.checkOut(secondBookID)
    }
}
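
One wrinkle if you try this: where the lazy properties live matters. JUnit 5 creates a fresh test class instance for each test method by default, so "by lazy" properties declared on the test class itself would re-run their initializers for every test. Declaring them at the top level (as above) or in a shared object keeps the exactly-once behavior. A sketch, with an invented object name:

object CannedFixtures {
    val authorID: Long by lazy { insertAuthor("John Steinbeck") }
    val bookID: Long by lazy { insertBook("Cannery Row", authorID) }
}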

The benefit isn't too big in this simple example, but my real project has a more complex data model with multiple layers of dependencies, and the pattern ended up making my tests considerably less cluttered with incidental setup boilerplate.
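
To give a flavor of how it scales (this is invented, not my actual data model): add a "loans" table that references books, which reference authors. The defaults chain, so a single reference pulls in the whole graph in dependency order:

fun insertLoan(bookID: Long = cannedBookID): Long { ... }
val cannedLoanID: Long by lazy { insertLoan() }

// Referencing cannedLoanID inserts the loan, the book, and the author,
// each exactly once
fun testReturnBook() {
    library.returnBook(cannedLoanID)
    // ...then assert the loan is closed
}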

Of course, another approach to this class of problem is to spin up a fully populated set of test fixtures that gets shared by all the tests: for example, a test database that gets reset to a known set of example data for each test run. That can work well too, and it's a technique I sometimes use, but I prefer to have each test construct the environment it needs.
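
For reference, that shared-fixture style often looks something like this in Spring tests, using @Sql to reset and reseed the database before each test method (the script names are invented):

@SpringBootTest
@Sql("/reset-schema.sql", "/seed-data.sql")
class LibraryIntegrationTest { ... }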

Anyone else used this kind of setup? Are there any additional tricks I'm missing?