One consistent part of my software career is that I often have to approach a given idea or practice at least 3 times before I start to get comfortable with it. It’s happened with managing websites, making video games (my first Ludum Dare was well beyond the 3rd time I’d tried to make a video game), programming languages (both Nim and Factor were languages I approached once, and then again with more experience under my belt), and software development techniques. I got a lot more comfortable with Git after I’d had a chance to use Fossil for a while.
All this to say: I rarely pick something up completely the first time I go at it, so for any younger colleagues who might have been impressed with my mastery of regex or shell scripting, that mastery was not achieved overnight.
Recently, I’ve started another go-round with unit testing, and this feels like the time it will stick in my habits as well as version control has. And, in the middle of all of this, the reason it seems to be sticking is a convergence of factors, not just a single thing.
For one, some sort of approach to automated testing is being required at work. Were this the only factor, I’d probably pick up a little unit testing practice for the job, but it certainly wouldn’t become a major habit outside of work. The other thing is that I picked up several books at the end of the year that all ended up talking about unit testing and picking the right abstraction.
The first, and possibly most influential, was Mastering Software Technique, and the associated articles, by Noah Gibbs. It doesn’t have anything to do with unit testing specifically, but Gibbs consistently recommends 99 Bottles of OOP (Sandi Metz and Katrina Owen), which does have a decent amount to say about unit testing. I also picked up Working Effectively with Legacy Code, by Michael Feathers, mostly because I was looking for books that address relatively timeless topics, rather than version X of a given framework.
So, I ended up with a lot of programming books about unit testing. I also found, in going through 99 Bottles of OOP, that the unit testing harness available for Ruby is relatively responsive, especially compared to the NUnit Visual Studio Test Explorer. Eventually, some bugs, either in my code or in the Test Explorer, led me to try the NUnit command-line runner. The difference was impressive. It makes me think the NUnit command-line tools get more attention than the Test Explorer, because getting the command-line tools working consistently was a lot easier than keeping the Test Explorer working.
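For the curious, the shape of a test doesn’t change between the two runners; the difference is only in how you run it. Here’s a minimal, self-contained sketch of an NUnit test (BottleNumber is a stand-in class I made up for this example, not code from the book):

```csharp
using NUnit.Framework;

// BottleNumber is a stand-in for whatever class is actually under test.
public class BottleNumber
{
    private readonly int _number;
    public BottleNumber(int number) { _number = number; }
    public string Quantity() => _number == 0 ? "no more" : _number.ToString();
}

[TestFixture]
public class BottleNumberTests
{
    [Test]
    public void Quantity_ReturnsTheNumberAsAString()
    {
        Assert.AreEqual("99", new BottleNumber(99).Quantity());
    }

    [Test]
    public void Quantity_SaysNoMoreForZero()
    {
        Assert.AreEqual("no more", new BottleNumber(0).Quantity());
    }
}
```

Once that’s compiled into a test assembly, running it from the command line is just `nunit3-console.exe path\to\YourTests.dll`, which turned out to be the more dependable workflow for me.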
The last thing that seems to be cementing automated testing in my mind as a worthwhile investment was an article I ran across about Moon Pig. Often, when people tell you to adopt a practice, they don’t do a great job communicating the concrete reasons they adopted it. For me, the story of what was mocked in Moon Pig and how, mocking out the storage and time aspects of the system for testability, felt like a great starting point for how to mock things for tests, and it included good, relatively concrete reasons for mocking things out the way they did.
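To make that concrete, here’s a rough sketch of the idea as I understand it. None of this is code from the Moon Pig article; IClock, ILedgerStore, and BillingService are names invented for illustration:

```csharp
using System;
using System.Collections.Generic;

// The seams: time and storage come in through interfaces, so tests can
// substitute controlled fakes for the real clock and the real database.
public interface IClock
{
    DateTime UtcNow { get; }
}

public interface ILedgerStore
{
    void Save(string customerId, decimal amount, DateTime chargedAt);
}

public class BillingService
{
    private readonly IClock _clock;
    private readonly ILedgerStore _store;

    public BillingService(IClock clock, ILedgerStore store)
    {
        _clock = clock;
        _store = store;
    }

    // Because "now" and "where this gets written" are injected, this method
    // is easy to exercise in a test without a database or a real calendar.
    public void ChargeCustomer(string customerId, decimal amount)
    {
        _store.Save(customerId, amount, _clock.UtcNow);
    }
}

// Test doubles: a frozen clock and an in-memory store.
public class FixedClock : IClock
{
    public FixedClock(DateTime now) { UtcNow = now; }
    public DateTime UtcNow { get; }
}

public class InMemoryLedger : ILedgerStore
{
    public readonly List<(string CustomerId, decimal Amount, DateTime ChargedAt)> Saved =
        new List<(string, decimal, DateTime)>();

    public void Save(string customerId, decimal amount, DateTime chargedAt)
        => Saved.Add((customerId, amount, chargedAt));
}
```

A test can then build a BillingService with a FixedClock pinned to, say, the last day of a billing period and an InMemoryLedger, and assert on exactly what got recorded, without touching a real database or waiting for a real date to roll around.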
So, now that I have a test runner that works with reasonable speed and reliability, I’m taking in the wisdom of Michael Feathers about legacy code, and I have a good story to refer to when I’m reasoning about how to mock things out, I think unit testing as a practice will be stickier this time than it was before.
PostScript:
I should also give a shout-out to Kartik Agaram here. Mu, and how low-level he’s taken a test-driven approach to software, has definitely informed how I write system-type software, as opposed to business software. PISC’s unit tests weren’t a direct result of hearing about Mu, but Mu’s artificial file system and the like definitely got me thinking in that direction.
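As a rough sketch of the sort of thing I mean, invented for illustration rather than lifted from Mu or PISC, the trick is to put a seam in front of the file system so tests can run against an in-memory fake:

```csharp
using System.Collections.Generic;
using System.IO;

// A hypothetical seam for file access, so "system-type" code can be
// exercised in tests against an in-memory fake instead of the real disk.
public interface IFileSystem
{
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

public class RealFileSystem : IFileSystem
{
    public string ReadAllText(string path) => File.ReadAllText(path);
    public void WriteAllText(string path, string contents) => File.WriteAllText(path, contents);
}

public class FakeFileSystem : IFileSystem
{
    private readonly Dictionary<string, string> _files = new Dictionary<string, string>();
    public string ReadAllText(string path) => _files[path];
    public void WriteAllText(string path, string contents) => _files[path] = contents;
}
```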
Published Jan 12, 2020