Musings about software development, Java, OO, agile, life, whatever.
I thank Google one final time for having me speak last Tuesday (June 6). The talk I gave was entitled "Speeding up Development With TDD." The abstract suggested I was giving a brief live coding demo, which I did. It was intended as an introductory talk, and I'm not sure what more might have been expected given the abstract. Nonetheless, I apologize for the miscommunication.
Given a 60-minute time frame, choosing an example that works for demonstrating the technique means choosing a simple class. Unfortunately, choosing a simple example suggests that perhaps this TDD thing is limited to academic examples. I did try to explain this, using a few anecdotes about real companies getting real results from the practice.
I have learned from my public drubbing, given to me by a Google employee in a blog post. I'm always very enthusiastic about TDD, based on real results I've seen, but I still try to temper the message (hence the slide that talks about the "real costs" of doing TDD). Obviously I need to provide more disclaimers in the future.
I agree with the message that the TDD community needs to work harder to provide more "real world" examples. Thank you for suggesting it.
I do have a few messages for Google.
- Tests == specs. OK, I'll concede that this is silly as a literal comparison. I presented this notion as one of my "mindsets," something that helps me think in terms of the goals I should be able to get out of my tests. The word "mindset" was in the title of the slide I presented.
- If it's not testable, it's useless. Once again, this was on the slide called "mindsets." It's a direction to head in. It's intended to get you to think in the opposite direction, instead of making excuses why not to test. I'm sorry, I can't back down much on this statement. The fact that Google can ship code that just happens to work suggests that maybe they're better programmers than the rest of us. But this notion that it's ok to design most code in a way that it can't be tested is ludicrous. Note that I never said that every test had to be a unit test, or even automated. But the more of them that are, the better. Most code can be unit tested if designed well. Most.
- Your code is just like everyone else's code. I'll take the Google challenge, and bet that I could take almost any Google system (except for the few where they're doing TDD well), and pare it down to 2/3 or less of the original amount of code, while at the same time dramatically improving its maintainability.
- Every company thinks their systems are uniquely complex. I'm willing to bet, however, that the bulk of the requirements for Google systems are probably no more complex than systems anywhere else. I heard the excuse at Google that "some things are just too hard to test." Every time I hear similar excuses, I've seen poorly designed systems. Certainly, there will be small amounts of my system for which coding unit tests outweighs the benefits. But if any significant portion of my system is too hard to test, then I'm doing a poor job of managing complexity using basic OO design concepts.
- Someone in the audience suggested that popping off a stack was a good way to test creation of a stack. This is a perfect example of a complete misunderstanding of what TDD is about. I said (it's on the tape) that coding such a test into "testCreate" was inappropriate: it represents different behavior and should be placed in another test. I probably wasn't clear enough on this point. I never dismissed the thought in order to maintain a "script."
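To make the stack point concrete: in TDD, each test should verify one behavior. A test of creation asserts only what is true of a newly created stack; push/pop is different behavior and gets its own test. This sketch is my own illustration of that separation, not code from the talk; the SimpleStack class and test names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal stack, just enough to illustrate the point.
class SimpleStack<T> {
    private final List<T> elements = new ArrayList<>();

    public boolean isEmpty() {
        return elements.isEmpty();
    }

    public void push(T item) {
        elements.add(item);
    }

    public T pop() {
        if (elements.isEmpty()) {
            throw new IllegalStateException("stack is empty");
        }
        return elements.remove(elements.size() - 1);
    }
}

public class StackTest {
    // testCreate verifies only the behavior of a newly created stack.
    // It does NOT pop -- that would be testing different behavior here.
    static void testCreate() {
        SimpleStack<String> stack = new SimpleStack<>();
        assert stack.isEmpty() : "a new stack should be empty";
    }

    // Popping is separate behavior, so it belongs in its own test.
    static void testPushThenPop() {
        SimpleStack<String> stack = new SimpleStack<>();
        stack.push("x");
        assert "x".equals(stack.pop()) : "pop should return the last pushed item";
        assert stack.isEmpty() : "popping the only item should leave the stack empty";
    }

    public static void main(String[] args) {
        testCreate();
        testPushThenPop();
    }
}
```

Each test stands alone: if testPushThenPop breaks, testCreate still tells you whether creation works, which is exactly the diagnostic value that piling both behaviors into one test throws away.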
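On the claim above that most code can be unit tested if designed well: the usual design move is to pass hard-to-control dependencies in, rather than reaching for them directly. The Session example below is hypothetical (not something from the talk or from Google's code); it uses the system clock as the stand-in for "too hard to test," with Java's built-in LongSupplier as the injected dependency.

```java
import java.util.function.LongSupplier;

// Hard to test: reads the system clock directly, so a unit test
// would have to sleep or race against real time.
class HardToTestSession {
    private final long expiresAtMillis;

    HardToTestSession(long expiresAtMillis) {
        this.expiresAtMillis = expiresAtMillis;
    }

    boolean isExpired() {
        return System.currentTimeMillis() > expiresAtMillis;
    }
}

// Testable: "now" is a supplied dependency, so a test controls time.
class Session {
    private final long expiresAtMillis;
    private final LongSupplier clock;

    Session(long expiresAtMillis, LongSupplier clock) {
        this.expiresAtMillis = expiresAtMillis;
        this.clock = clock;
    }

    boolean isExpired() {
        return clock.getAsLong() > expiresAtMillis;
    }
}

public class SessionTest {
    public static void main(String[] args) {
        // Pin "now" at 1000 ms; no sleeping, no flakiness.
        Session live = new Session(2000, () -> 1000L);
        assert !live.isExpired() : "session expiring at 2000 is live at 1000";

        Session expired = new Session(500, () -> 1000L);
        assert expired.isExpired() : "session expiring at 500 is expired at 1000";
    }
}
```

Production code passes `System::currentTimeMillis` as the clock; tests pass a lambda. That one small inversion is typical of how "untestable" code turns out to be a design problem rather than a testing problem.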
I learned a few things from this opportunity. I can only hope Google did too.