Some interesting testing paradigms and approaches I've come across over the past year or so.
Mutation testing
Mutation testing helps you guard the quality of your code coverage. The goal is for your tests to kill all mutants. A mutant is a small change to your code. If you have strictly specified the behavior of your code, any significant change should trigger a failing test. A failing test kills the mutant.
This is a bit abstract, so here’s a pseudo code example:
// Some business logic
public bool Foo(string input)
{
    if (input == "some-condition")
        return true;
    return false;
}
// Test that specifies expected behavior of business logic
[TestMethod]
public void WhenFooGivenInputIsSomeConditionThenResultShouldBeTrue()
{
    var input = "some-condition";
    var result = Foo(input);
    Assert.IsTrue(result);
}
// Mutant in action
public bool Foo(string input)
{
    if (input == "some-condition")
        return false; // return value inverted by the mutant
    return true; // return value inverted by the mutant
}
// The test method above now fails: mutant successfully killed
Changing your code and checking that a test fails is handled by a framework. Unfortunately, I'm only aware of one active mutation testing framework for .NET, and I admit that particular framework doesn't convince me to start using it.
A user-friendly mutation testing framework for .NET would convince me to integrate it into a CI pipeline, perhaps executing it only in a nightly build for critical components.
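To make the framework's job concrete, conceptually a mutation testing tool automates a loop like the one below. This is only a sketch, not a real API: GenerateMutants, ApplyMutation, and RunAllTests are hypothetical helpers, and real tools typically rewrite IL or syntax trees under the hood.

```csharp
// Hypothetical sketch of a mutation testing run.
var survivors = new List<Mutant>();
foreach (var mutant in GenerateMutants(sourceCode)) // e.g. invert conditions, swap operators
{
    var mutatedCode = ApplyMutation(sourceCode, mutant);
    bool allTestsPass = RunAllTests(mutatedCode);
    if (allTestsPass)
        survivors.Add(mutant); // survived: a behavior change no test noticed
}
// A non-empty survivor list points at behavior your tests don't specify.
```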
Scientist.NET
I first heard about Scientist.NET on Pluralsight, and later learned on .NET Rocks that Phil Haack is the author who ported it to .NET.
The idea here is to test code refactorings in a live system with real data. So you make a change, but keep the original code in place. Then you call both the original and the new piece of code via Scientist (the result of the original is used; the result of the new code is recorded for comparison and discarded). This goes into production.
Scientist then collects the results of both code paths, and after a period of time you can verify that the refactored code returned the same results under real use and didn't introduce any unexpected behavioral changes.
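For reference, the usage pattern from the Scientist.NET README looks roughly like this; IsCollaborator and HasAccess are placeholders for the original and refactored implementations:

```csharp
using GitHub; // Scientist.NET NuGet package

public bool CanAccess(IUser user)
{
    return Scientist.Science<bool>("widget-permissions", experiment =>
    {
        experiment.Use(() => IsCollaborator(user)); // old code; its result is returned
        experiment.Try(() => HasAccess(user));      // new code; result recorded, then discarded
    });
}
```

Mismatches between the Use and Try results are published per experiment name, so you can inspect them after the code has run in production for a while.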
I was almost completely sold, until I understood that, of course, you have to ensure there are no unwanted side effects. Since you're executing the same action twice, albeit with different versions, it's up to you not to duplicate the impact.
So this works great for purely CPU-bound calculations, but beyond that you need to pay attention. The thing is, this is typically an approach I'd want to use in legacy code, which is traditionally an entangled mess where database access is spread across long methods.
Property based testing
I had heard of this before, but it didn't stick until I saw a presentation on the subject. The approach here is to describe the expected behavior of your code as properties, instead of writing it out in individual tests.
The advantage is that this lets the computer write the tests: it can generate hundreds or thousands of test cases, all validating that your business logic behaves as you specified in the properties. Kind of reminds me of Pex, which became IntelliTest, I guess.
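As a tiny illustration (assuming the FsCheck NuGet package), here is the classic reverse-twice property in C#; FsCheck generates the random input arrays for you:

```csharp
using System.Linq;
using FsCheck;

// Property: reversing a sequence twice yields the original sequence.
// FsCheck generates random arrays (100 by default) trying to falsify it.
Prop.ForAll<int[]>(xs => xs.Reverse().Reverse().SequenceEqual(xs))
    .QuickCheckThrowOnFailure();
```

If FsCheck finds a counterexample, it also shrinks it to a minimal failing input, which makes debugging much easier than a raw random failure.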
But this post is getting way longer than I intended, so I suggest checking out the presentation for a proper introduction. Of the three approaches to testing covered here, this is the one I'm most interested in pursuing. FsCheck seems like the framework to get started with in .NET, so I'm gonna give that a try soon!
Theme music for this blog post
House Shoes w/guest mix by Kutmah – Magic
(scroll down on the page for the player or download if you prefer, no embed available this time)