This is one in a series of articles about Time Tortoise, a Universal Windows Platform app for planning and tracking your work schedule. For more on the development of this app and the ideas behind it, see my Time Tortoise category page.
A key part of test-driven development (TDD) is writing unit tests first, and then writing the code that makes them pass. The benefits of the test-first approach, as explained in an Extreme Programming article from 2000, are still relevant 17 years later:
- You don’t have to make time to write tests later (assuming you remember to write them later).
- You have to understand the requirements before you write the code.
- You know when you’re done writing each section of code (because the tests pass).
- You’re less likely to build functionality that you don’t need, since writing each test makes you think about the value of the use case that you’re implementing.
These are all real benefits, and test-first is a good approach in many cases. However, there’s a common scenario where it’s difficult to follow test-first strictly. Consider my Time Tortoise project. Since I’m making up my own requirements, it’s easy to come up with use cases and think about what tests I should use to verify them. But some tests need to call low-level methods, so they require a clear understanding of the technology stack. If you’re learning a new technology (as I am with UWP), you may need to experiment with several ideas before you know what kind of code will be required to implement a use case. When you’re in that phase of your solution development, it may not make sense to follow the test-first approach.
Here are some examples from my Time Tortoise work this week.
Using Code Coverage Measurement
Code coverage (what proportion of your code is executed when you run your tests) is a popular way to evaluate how well-tested your project is. If the code coverage tool finds code that isn’t executed by any test, how do you know the code does what it’s supposed to? On the other hand, code coverage isn’t a perfect metric. Just because code is executed doesn’t mean that it is comprehensively tested. But it’s a start.
Code coverage and test-first
Code coverage works hand-in-hand with the test-first approach to development. If you always write exactly enough code to make a new test pass, and no more (as dictated by TDD rules), then 100% of the code you write will be covered by your tests. And even if you unintentionally write a section of code that you don’t have a test for, code coverage measurement will alert you that you missed something.
Code coverage and “test second”
Last week, I implemented basic functionality to create, update, and list Time Tortoise activities. Although I stole some code from my earlier experiment, I’m still in the early stages of figuring out how XAML data binding, Entity Framework Core, SQLite, and xUnit.net all work together. So I took a “test second” approach: I experimented until things started working, and then I wrote most of my tests.
If you’re serious about testing, then code coverage is even more important when you’re using a “test second” approach. Since you already have code before you start writing tests, you don’t know for sure how much of your code is being tested until you see the code coverage numbers. On the other hand, code coverage can provide a false sense of security, since a line of code is marked as covered as soon as it is executed by a test, even if the test isn’t as comprehensive as it should be. That brings us to:
The code coverage game
Once you start watching code coverage numbers, it can start to feel like you’re playing a game whose goal is to get 100% coverage. There are two potential problems with the code coverage game:
- As mentioned earlier, 100% coverage doesn’t mean you’re done with your tests.
- There are diminishing returns. Once your coverage is near 100%, your project may benefit more from other types of testing (maybe just poking around in the UI), or from non-test activities, than from grinding away at those last few percentage points.
However, like other types of gamification, playing the code coverage game can prompt you to learn new things about your platform as you figure out how to get particular sections of code to execute in response to test code.
Acknowledging those pros and cons, one of the benefits of working on a personal project with no real schedule or customers is that I can try things out to see how they work and to learn from the process. So here are a few more code coverage tips that I found while playing the code coverage game.
Excluding classes and methods
Regular .NET has a handy attribute called ExcludeFromCodeCoverage that you can use to indicate that a section of code, for one reason or another, shouldn’t be included in the code coverage calculation.
It appears that .NET Core just recently got this attribute, or maybe will be getting it soon. In any case, the version I’m using doesn’t seem to have it yet (as a public attribute). Fortunately, an alternative approach using .runsettings does work. I added the following sections to my CodeCoverage.runsettings file:
```xml
<Exclude>
  <Source>.*\\Migrations\\.*</Source>
</Exclude>

<Exclude>
  <ModulePath>.*xunit.*</ModulePath>
</Exclude>
```
The first section excludes the Migrations folder, which contains auto-generated Entity Framework code. Since I didn’t write this code, I’m not going to test it, so there’s no need to include it in the code coverage report.
The second section excludes the xUnit.net modules themselves (not my xUnit.net tests). For some reason, these started showing up in my code coverage results when I added the first exclusion. As for actual unit test code, there are conflicting opinions about whether it belongs in code coverage. I chose to include it, since I want to know if some of my tests aren’t being run for some reason.
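For context, here’s roughly where those sections live in a complete CodeCoverage.runsettings file. The surrounding elements follow the Visual Studio code coverage schema; everything outside the two Exclude sections is a sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <!-- Exclude auto-generated EF Migrations code by source path -->
            <Sources>
              <Exclude>
                <Source>.*\\Migrations\\.*</Source>
              </Exclude>
            </Sources>
            <!-- Exclude the xUnit.net framework assemblies by module path -->
            <ModulePaths>
              <Exclude>
                <ModulePath>.*xunit.*</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```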
Getting to 100%
Even after excluding some code and figuring out ways to test the rest, it may still be challenging to get a perfect code coverage score. In my case, this was the one line that I couldn’t get 100% covered:
```csharp
Assert.Throws<InvalidOperationException>(() => mvm.Save());
```
This test code verifies that a particular exception is thrown when I try to save an empty activity list. This shouldn’t be possible from the UI, since the Save button should be disabled when the list is empty. Therefore, the test verifies that an exception is thrown if the code somehow gets into this state.
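In context, the whole test might look roughly like this sketch (`MainViewModel` and its mock repository argument are stand-ins for my actual types):

```csharp
[Fact]
public void Save_WithEmptyActivityList_Throws()
{
    // Arrange: a view model whose activity list is empty
    var mvm = new MainViewModel(new MockActivityRepository());

    // Act/Assert: Save should refuse to run with nothing to save
    Assert.Throws<InvalidOperationException>(() => mvm.Save());
}
```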
Since the test passes, the Save method must have been called. But for some reason the code coverage report marks `mvm.Save()` as “partially executed.” This result is not unusual in cases where exceptions are involved. Strangely, I even got a partial execution result when I excluded this method from code coverage using the .runsettings file (though not when I excluded the whole test class).
One reason to target a 100% code coverage score is to make it easier to see when you have unintentionally added untested code: if your score is less than 100%, then some code you wrote recently is untested. Unfortunately, edge cases like this exception example will probably make it impossible to achieve 100% in most non-trivial projects. So you just have to keep an eye on the numbers after each code coverage run.
Code Changes for This Week
In addition to playing the code coverage game, I made a few code changes this week. Here’s a summary:
I added an integration test project for tests that make actual SQLite database calls, using the in-memory database feature I explained previously.
Although an in-memory database allows tests to run quickly, it’s still best not to rely primarily on integration tests. My approach is to write unit tests first, which forces me to design my view model to be testable. Once I have good unit test coverage, I add a few integration tests to catch remaining scenarios. For example, testing a Save method is more realistic with an integration test, since the mock repository used by unit tests doesn’t actually save anything.
Integration tests are included in code coverage results along with unit tests. So getting to 100% coverage is another reason why integration tests may be necessary, since unit tests might not cover all data access layer code.
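Here’s a sketch of what such an integration test can look like with EF Core’s SQLite provider (`TimeTortoiseContext` and `Activity` are stand-ins for my actual types; note that the in-memory database exists only while the connection stays open):

```csharp
using System.Linq;
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class ActivityIntegrationTests
{
    [Fact]
    public void Add_ThenSave_PersistsActivity()
    {
        // The in-memory database lives only as long as this connection is open
        using (var connection = new SqliteConnection("DataSource=:memory:"))
        {
            connection.Open();

            var options = new DbContextOptionsBuilder<TimeTortoiseContext>()
                .UseSqlite(connection)
                .Options;

            using (var context = new TimeTortoiseContext(options))
            {
                context.Database.EnsureCreated();
                context.Activities.Add(new Activity { Name = "Write tests" });
                context.SaveChanges();
            }

            // A second context on the same open connection sees the saved row
            using (var context = new TimeTortoiseContext(options))
            {
                Assert.Single(context.Activities);
            }
        }
    }
}
```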
More unit tests
I added the unit tests necessary to get to (almost) 100% coverage, and to verify some edge cases. As a result, I found and fixed a few bugs.
I also started testing `PropertyChanged` events. A unit test can subscribe to the events that a view model raises when its bound properties are modified. This allows unit tests to exercise the properties that will be used in XAML binding, but without any UI dependency.
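For example, a test can subscribe to the view model’s `PropertyChanged` event and verify that setting a bound property raises it (`ActivityViewModel` and its `Name` property are stand-ins for my actual types):

```csharp
using System.Collections.Generic;
using Xunit;

public class PropertyChangedTests
{
    [Fact]
    public void SettingName_RaisesPropertyChanged()
    {
        var avm = new ActivityViewModel();
        var changedProperties = new List<string>();

        // Subscribe the same way the XAML binding engine would
        avm.PropertyChanged += (sender, e) => changedProperties.Add(e.PropertyName);

        avm.Name = "New name";

        Assert.Contains("Name", changedProperties);
    }
}
```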
IsEnabled and Delete
I modified the Save button to enable or disable itself based on whether there is anything to save. With XAML binding, this is achieved by binding the button’s `IsEnabled` property to a `bool` view model property that maintains the appropriate value. The binding is one-way, since the source (the view model property) always updates the target (the button property), not the other way around.
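A minimal sketch of that binding, assuming the view model implements INotifyPropertyChanged (`IsSaveEnabled` and the `OnPropertyChanged` helper are placeholder names, not necessarily what’s in the Time Tortoise source):

```xml
<!-- The button enables itself only when there is something to save -->
<Button Content="Save" IsEnabled="{Binding IsSaveEnabled}" Click="Save_Click" />
```

```csharp
private bool _isSaveEnabled;
public bool IsSaveEnabled
{
    get { return _isSaveEnabled; }
    set
    {
        _isSaveEnabled = value;
        // Notify the binding engine so the button's IsEnabled updates
        OnPropertyChanged();
    }
}
```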
I also added a Delete method and corresponding repository logic to delete an activity from the database. With Entity Framework, the delete process is fairly simple, since entities are tracked automatically as they are added and removed.
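In a repository built on Entity Framework Core, the delete logic can be as simple as this sketch (the `_context` field and `Activities` DbSet are assumed names):

```csharp
public void DeleteActivity(Activity activity)
{
    // The context is already tracking the entity, so Remove marks it
    // as Deleted; SaveChanges then issues the actual DELETE statement.
    _context.Activities.Remove(activity);
    _context.SaveChanges();
}
```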
Now that I have basic activity list functionality in place, it’s time to think about how to associate time segments with these activities.