I recently had a conversation with a test automation engineer about the virtues of outside-in design vs. inside-out design. I prefer outside-in design while he prefers inside-out design.
I want to tell this story because it clearly demonstrates the current culture war for the hearts and minds of developers with respect to testing and design methodologies.
Our disagreement started when the test engineer brought up automated testing. While we both agreed on the virtues of automated testing, we disagreed on the specifics of how we should approach automated testing.
He uses the traditional TDD approach of red-green-refactor for unit tests before writing integration tests (i.e. bottom-up), while I use the approach of writing behavioral tests first, integration tests with stubs second, and unit tests last (i.e. top-down).
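To make the top-down order concrete, here is a minimal sketch in TypeScript. All of the names (`ReportService`, `SalesSource`) are invented for illustration; the point is that the behavioral test and a stub come first, and the real implementation comes last.

```typescript
// The caller's needs come first: a report service that depends only on an
// interface, not on any concrete data source.
interface SalesSource {
  totalFor(month: string): number;
}

class ReportService {
  constructor(private source: SalesSource) {}
  summary(month: string): string {
    return `${month}: $${this.source.totalFor(month)}`;
  }
}

// Behavioral test written first, with a stub standing in for the real source.
// Only after this passes would a concrete SalesSource be built and unit-tested.
const stub: SalesSource = { totalFor: () => 1200 };
const report = new ReportService(stub).summary("June");
console.log(report); // "June: $1200"
```

The stub lets the outer behavior be specified and verified before any inner module exists.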
Neither of us backed down, so we sparred with different arguments and counter-arguments.
Our disagreement seemed to center around different value systems: the test engineer valued “program correctness”, while I valued “interface design” feedback.
The test engineer argued that you can’t prove program correctness when you start with an integration test. According to him, you have to “prove” the correctness of isolated modules before you can “prove” the correctness of their integration.
His claim of “proving” program correctness shocked me (especially coming from an automation test engineer). I pointed out that you can’t actually “prove” program correctness with integration tests; you can only use them to falsify it. Further, even if you could “prove” program correctness with integration tests, it would be an intractable (NP-hard) problem.
He tried to counter me with several arguments, but I felt they amounted to grasping at straws.
Eventually, I told him that I don’t use outside-in testing to “prove” program correctness, anyway: I use it as a means to design interfaces.
By definition, an interface constrains the design of a class.
I said that you cannot effectively determine the constraints of an interface by working from the inside out, because implementations depend on the interface (not the other way around). Consequently, most people “discover” the constraints of an interface through constant refactoring.
I argued that you can more effectively “discover” the constraints of an interface by working from the outside in. How I want to call the methods of an interface determines the constraints I place on the interface. The design feedback I get on interface design when working from the outside in is incredibly valuable to me.
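A small sketch of what this looks like in practice. The names here (`Notifier`, `ConsoleNotifier`) are hypothetical; the technique is to write the desired call site first and let it dictate the interface.

```typescript
// Step 1: write the code we *wish* we could write at the call site:
//
//   const ok = notifier.send("alice@example.com", "Welcome!");
//   if (!ok) { /* queue a retry */ }
//
// Step 2: that call site dictates the interface's constraints:
interface Notifier {
  // must accept a recipient and a message, and must report failure
  send(recipient: string, message: string): boolean;
}

// Step 3: only now is an implementation written against those constraints:
class ConsoleNotifier implements Notifier {
  send(recipient: string, message: string): boolean {
    console.log(`to ${recipient}: ${message}`);
    return true;
  }
}
```

The method name, parameter list, and return type were all fixed by how the caller wanted to use the interface, before any implementation existed.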
I drew the following diagram to help illustrate this point.
The solid lines represent the source code dependencies, while the dashed line represents the runtime dependency. This illustrates that at compile time both the caller and the callee depend on the interface, but when the program runs the caller depends on the callee.
Object-oriented polymorphism gets its power from this asymmetry between the source code dependency and the runtime dependency. For example, at compile time, the compiler checks that the callee obeys the interface properly. However, when the application actually runs, the runtime dispatches calls made against the interface to the actual concrete class.
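The asymmetry can be shown in a few lines of TypeScript (the names are illustrative). The compiler verifies `PoliteGreeter` against `Greeter`; the caller `welcome` depends only on `Greeter`; yet at runtime the call lands on the concrete class.

```typescript
interface Greeter {
  greet(name: string): string;
}

// Compile time: the compiler checks that this callee obeys the interface.
class PoliteGreeter implements Greeter {
  greet(name: string): string {
    return `Good day, ${name}.`;
  }
}

// The caller's source code depends only on the interface...
function welcome(g: Greeter, name: string): string {
  return g.greet(name);
}

// ...but at runtime the call is dispatched to the concrete class.
console.log(welcome(new PoliteGreeter(), "Ada")); // "Good day, Ada."
```

Swapping in a different `Greeter` implementation changes the runtime behavior without touching the caller, which is exactly the flexibility the diagram describes.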
A draw
In the end, neither of us could convince the other of the justness of our respective causes. We simply agreed to disagree, and we each went our separate ways.
Ultimately, everyone has to make trade-offs between competing concerns. However, given the nature of object orientation, it seems to me that we should value the design of interfaces over implementation details. Consequently, we should choose a methodology that streamlines that process. Otherwise, why even use an OO language?