A Tale of Two Approaches: Inside-Out vs Outside-In Testing

I recently had a conversation with a test automation engineer about the virtues of outside-in versus inside-out design: I prefer outside-in design, while he prefers inside-out design.

I want to tell this story because it clearly demonstrates the current culture war for the hearts and minds of developers with respect to testing and design methodologies.

Our Objective

Our disagreement started when the test engineer brought up automated testing. While we both agreed on its virtues, we disagreed on the specifics of how to approach it.

He uses the traditional TDD approach of red-green-refactor, writing unit tests before integration tests (i.e. bottom-up), while I write behavioral tests first, integration tests with stubs second, and unit tests last (i.e. top-down).
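To make the two orderings concrete, here is a minimal sketch of the outside-in sequence in Java with JUnit 5. Everything in it — ShoppingCart, Item, and the test itself — is a hypothetical name I invented for illustration; the point is only the ordering, where the behavioral test comes first and the production classes appear afterwards in whatever minimal shape makes it pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Outside-in ordering: this behavioral test was written first, before
// ShoppingCart existed; the classes below exist only to make it pass.
class CheckoutBehaviorTest {

    @Test
    void totalReflectsEveryItemInTheCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 10_00));
        cart.add(new Item("pen", 2_50));
        assertEquals(12_50, cart.totalInCents());
    }
}

// Minimal production code, written after (and shaped by) the test above.
record Item(String name, long priceInCents) {}

class ShoppingCart {
    private long total; // running total in cents

    void add(Item item) {
        total += item.priceInCents();
    }

    long totalInCents() {
        return total;
    }
}
```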

Neither of us backed down on this; so, we sparred with different arguments and counter-arguments.

Our Conflict

Our disagreement seemed to center around different value systems: the test engineer valued “program correctness”, while I valued “interface design” feedback.

The test engineer argued that you can’t prove program correctness when you start with an integration test. According to him, you have to “prove” the correctness of isolated modules before you can “prove” the correctness of their integration.

His talk of “proving” program correctness shocked me (especially coming from an automation test engineer). I pointed out that you can’t actually “prove” program correctness with integration tests; you can only use them to falsify it. Further, even if you could “prove” program correctness that way, it would be an NP-hard problem at best.

He tried to counter with several arguments, but to me they amounted to grasping at straws.

Eventually, I told him that I don’t use outside-in testing to “prove” program correctness, anyway: I use it as a means to design interfaces.

By definition, an interface constrains the design of a class.

I said that you cannot effectively determine the constraints of an interface by working from the inside out, because the implementing classes depend on the interface (not the other way around). Consequently, most people “discover” the constraints of an interface through constant refactoring.

I argued that you can “discover” the constraints of an interface far more effectively by working from the outside in. How I want to call the methods of an interface determines the constraints I place on it. The design feedback I get from working outside-in is incredibly valuable to me.
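As an illustration, here is a sketch in Java of how writing the calling code first surfaces the constraints an interface must satisfy. All of the names here (PaymentGateway, CheckoutService, Receipt) are hypothetical, invented purely for this example.

```java
// The caller is written first; the shape of the PaymentGateway
// interface falls out of how the caller wants to use it.
interface PaymentGateway {
    // The caller wants a single call that yields a receipt -- that
    // desire becomes a constraint on the interface.
    Receipt charge(String accountId, long amountInCents);
}

record Receipt(String confirmationCode) {}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    Receipt checkout(String accountId, long totalInCents) {
        // Writing this caller first reveals that the interface needs
        // nothing more than charge(); no implementation detail of any
        // concrete gateway leaks into its design.
        return gateway.charge(accountId, totalInCents);
    }
}
```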

I drew the following diagram to help illustrate this point.

[Diagram: the Dependency Inversion Principle (DIP) — caller and callee both pointing at an interface]

The solid lines represent source code dependencies, while the dashed line represents the runtime dependency. At compile time, both the caller and the callee depend on the interface; when the program runs, the caller effectively depends on the callee.

Object-oriented polymorphism gets its power from this asymmetry between the source code dependency and the runtime dependency. For example, the compiler checks that the callee properly implements the interface; however, when the application actually runs, calls made through the interface are dispatched to the concrete class.
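Carrying the hypothetical PaymentGateway sketch forward, here is what that asymmetry looks like in Java. Both the caller (CheckoutService, above) and the concrete callee compile against nothing but the interface, yet at runtime the call lands on whichever concrete class was wired in; StripeGateway is, again, an invented name.

```java
// Source code dependencies (the solid lines): the callee depends only
// on the PaymentGateway interface from the sketch above, never on the
// caller.
class StripeGateway implements PaymentGateway {
    @Override
    public Receipt charge(String accountId, long amountInCents) {
        // A real implementation would call out to a payment provider.
        return new Receipt("ok-" + accountId);
    }
}

class Main {
    public static void main(String[] args) {
        // Runtime dependency (the dashed line): the caller's source
        // only mentions the interface, yet at runtime its call is
        // dispatched to the concrete StripeGateway instance wired in here.
        CheckoutService checkout = new CheckoutService(new StripeGateway());
        System.out.println(
            checkout.checkout("acct-42", 12_50).confirmationCode());
    }
}
```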

A Draw Decision

In the end, neither of us could convince the other of the justness of our respective causes. We simply agreed to disagree, and we each went our separate ways.

Ultimately, everyone has to make trade-offs between competing concerns. However, given the nature of object orientation, it seems to me that we should value the design of interfaces over implementation details. Consequently, we should choose a methodology that streamlines that process. Otherwise, why even use an OO language?

Data Science and the Answer to the Ultimate Question of Life, the Universe, and Everything

In “The Hitchhiker’s Guide to the Galaxy”, Douglas Adams tells the story of hyper-intelligent pan-dimensional beings who build a computer named Deep Thought to calculate “the Answer to the Ultimate Question of Life, the Universe, and Everything.” After seven and a half million years, Deep Thought outputs an unintelligible answer: 42.

When they probe Deep Thought for more information, it tells them that they cannot understand the answer because they never understood the question.

The moral: make sure you have a good question before you start looking for an answer.

So it is with “data science”.

You can employ the most sophisticated data science techniques with the right data-crunching technologies, but without clear goals you can’t make sense of the numbers.

Based on this principle, I believe that business analysts contribute the most to the success of any “data science” project: they know what to ask, and they know what an answer should look like.

Unfortunately, I’ve seen many organizations invest heavily in machine learning experts and statisticians who don’t understand the business. They are simply building another Deep Thought that will return unactionable results like “42”.

All this could have been avoided if more people just read science fiction.