Vertical Slicing and Product Backlog Management with the Gherkin Syntax

TL;DR: To create a product backlog, “vertically slice” your user stories by grouping similar scenarios. Estimate the work and value of these slices instead of the user story.   


I’ve seen a rise in the demand for “full-stack” developers in the last couple years.

The agile concept of “vertical slicing” made these types of positions very popular.

In a traditional team structure, each person on a team will have knowledge of one layer of an application. When the team attempts to complete some feature, they will have to split the feature into tasks corresponding to layers and then distribute those tasks to the proper people.

We call this “horizontal slicing”.

If you had a team of “full-stack” developers then you could simply assign a feature to a developer and you could expect them to complete the feature end-to-end with little to no help or coordination.

We call this “vertical slicing”.

This works great in theory, but it has a lot of challenges in practice.

One of these challenges is how, exactly, to create a high-quality product backlog out of “vertical slices”.

Enter User Stories

I typically see teams create vertical slices based on “User Stories” and the Gherkin syntax.

The following two code snippets provide examples for two fictitious features: (a) Create Account and (b) Login.
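The sketches below are representative; the step wording is only illustrative, but the scenario names are the ones I refer to throughout this post.

    Feature: Create Account

      Scenario: Valid Account Creation
        When I submit a unique username, a unique email address, and a matching strong password
        Then my account is created

      Scenario: Duplicate Username
        Given an account already exists with my chosen username
        When I submit the registration form
        Then I am told that the username is already taken

      Scenario: Duplicate Email
        Given an account already exists with my email address
        When I submit the registration form
        Then I am told that the email address is already in use

      Scenario: Not a Strong Password
        When I submit the registration form with a weak password
        Then I am told that the password is not strong enough

      Scenario: Passwords Do Not Match
        When I submit the registration form with two different passwords
        Then I am told that the passwords do not match

      Scenario: Long Wait Time
        When my registration takes too long to process
        Then I am shown a "please try again later" message

      Scenario: Internal Server Error
        When the server fails while processing my registration
        Then I am shown a friendly error page

    Feature: Login

      Scenario: Valid Username/Password
        When I log in with a valid username and password
        Then I am taken to my dashboard

      Scenario: Invalid Username/Password
        When I log in with an invalid username or password
        Then I am told that my credentials are incorrect

      Scenario: Too Many Incorrect Attempts
        Given I have failed to log in several times in a row
        When I try to log in again
        Then my account is temporarily locked

      Scenario: Long Wait Time
        When my login takes too long to process
        Then I am shown a "please try again later" message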

Unfortunately, this sometimes creates very large product backlog items that the team can not deliver quickly.

In particular, I’ve seen this happen when the business dictates that teams can only release polished and bug-free features.

For example, it could take a very long time for a team to complete the scenarios “Internal Server Error” and “Long Wait Time” for the feature “Create Account”. Further, those scenarios may not deliver much business value relative to the work they require.

In comparison, it could take a very short time for a team to complete the scenario “Valid Account Creation”, and that scenario might have very high business value.

This illustrates that coupling all scenarios together can impede the early and frequent releases we need to create a tight feedback loop between developers and testers or users.

Slice Your Slices

User Stories are not bad, though. We just need a better way to generate vertical slices for our product backlog.

Notice that each user story has multiple scenarios, and that we can conceptually break up each user story into individual scenarios.

[Figure: a user story broken up into its individual scenario flows]

Based on this principle, we can create vertical slices by grouping scenarios based on business value.

For example, we could slice our features in the following way.

Feature          Vertical Slice               Scenario(s)
Create Account   Basic                        Valid Account Creation
                 Business Rule Violations     Duplicate Username, Duplicate Email
                 User Input Errors            Not a Strong Password, Passwords Do Not Match
                 System Problems              Long Wait Time, Internal Server Error
Login            Basic                        Valid Username/Password
                 Business Rule Violations 1   Invalid Username/Password
                 Business Rule Violations 2   Too Many Incorrect Attempts
                 System Problems              Long Wait Time

Each of these “vertical slices” becomes a product backlog item that we can individually estimate and prioritize.
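If your team manages these scenarios with a Cucumber-style tool, one lightweight way to record the grouping is to tag each scenario with its slice. The tag names here are only illustrative:

    @create-account @slice-basic
    Scenario: Valid Account Creation
      When I submit a unique username, a unique email address, and a matching strong password
      Then my account is created

    @create-account @slice-business-rule-violations
    Scenario: Duplicate Username
      Given an account already exists with my chosen username
      When I submit the registration form
      Then I am told that the username is already taken

Most Gherkin runners can then execute a single slice at a time, for example with cucumber --tags @slice-basic.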

For example, our fictitious product team could prioritize the “vertical slices” in the following way.

  1. Create Account – Basic
  2. Login – Basic
  3. Login – Business Rule Violations 1
  4. Create Account – User Input Errors
  5. Create Account – Business Rule Violations
  6. Login – Business Rule Violations 2
  7. Create Account – System Problems
  8. Login – System Problems

This allows a more granular approach to creating product backlog items.

As an added benefit, you can leave a “user story” largely undefined so long as you already have its highest priority slices within your product backlog.

This allows you to “groom” your user stories in a “just in time” way.

For example, we created 4 “vertical slices” of the feature “Create Account” in the example above. However, as an alternative, we could simply create the first slice “Create Account – Basic” and not bother with further analysis until someone completes that slice. This could have saved everyone from spending unnecessary time in a grooming session.

I am only providing an illustration, though. Ultimately, the end result depends on the situation and the interaction between team members.

 


How Frameworks Shackle You, and How to Break Free

I sometimes hate software frameworks. More precisely, I hate their rigidity and their cookie-cutter systems of mass production.

Personally, when someone tells me about a cool new feature of a framework, what I really hear is “look at the shine on those handcuffs.”

I’ve defended this position on many occasions. However, I really want to put it down on record so that I can simply point people at this blog post the next time they ask me to explain myself.

Why Use a Framework?

Frameworks are awesome things.

I do not dispute that.

Just off the top of my head, I can list the following reasons why you should use a framework:

  1. A framework abstracts low level details.
  2. The community support is invaluable.
  3. The out of the box features enable you not to reinvent the wheel.
  4. A framework usually employs design patterns and “best practices” that enable team development.

I’m sure that you can add even better reasons to this list.

So why would I not be such a big fan?

What Goes Wrong with Frameworks?

VENDOR LOCK-IN.

Just to reinforce the point, let me say that again: VENDOR LOCK-IN.

There is a natural power asymmetry between the designer/creators of a framework and the users of that framework.

Typically, the users of the framework are the slaves to the framework, and the framework designers are the masters. It doesn’t matter if it is designed by some open source community or large corporation.

In fact, this power asymmetry is why vendors and consultants can make tons of money on “free” software: you are really paying them for their specialized knowledge.

Once you enter into a vendor lock-in, the vendor, consulting company, or consultant can charge you any amount of money they want. Further, they can continue to increase their prices at astronomical rates, and you will have no choice but to pay it.

Further, the features of your application become more dependent on the framework as the system grows larger. This has the added effect of increasing the switching cost should you ever want to move to another framework.

I’ll use a simple thought experiment to demonstrate how and why this happens.

Define a module as any kind of organizational grouping of code. It could be a function, class, package, or namespace.

Suppose that you have two modules: module A and module B. Suppose that module B has features that module A needs; so, we make module A use module B.

In this case we can say that module A depends on module B, and we can visualize that dependency with the following picture.

[Figure: module A depends on module B]

Suppose that we introduce another module C, and we make module B use module C. This makes module B depend on module C, and consequently makes module A depend on module C.

[Figure: module A depends on module B, which depends on module C]

Suppose that I created some module D that uses module A. That would make module D depend on module A, module B, and module C.

[Figure: module D depends on module A, which depends on module B, which depends on module C]

This demonstrates that dependencies are transitive, and every new module we add to the system will progressively make the transitive dependencies worse.
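To make the thought experiment concrete, here is a minimal PHP sketch; the class names are just placeholders:

    <?php

    class ModuleC
    {
        public function lowLevelWork()
        {
            return 'work done by C';
        }
    }

    class ModuleB
    {
        private $c;

        public function __construct(ModuleC $c)
        {
            $this->c = $c;
        }

        public function midLevelWork()
        {
            return 'B wraps: ' . $this->c->lowLevelWork();
        }
    }

    class ModuleA
    {
        private $b;

        public function __construct(ModuleB $b)
        {
            $this->b = $b;
        }

        public function feature()
        {
            return 'A builds on: ' . $this->b->midLevelWork();
        }
    }

    // Reusing ModuleA anywhere means dragging ModuleB and ModuleC along with it.
    $a = new ModuleA(new ModuleB(new ModuleC()));
    echo $a->feature();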

“What’s so bad about that?” you might ask.

Suppose that we make a modification to module C. We could inadvertently break module D, module A, and module B since they depend on module C.

Now, you might argue that a good test suite would prevent that … and you would be right.

However, consider the amount of work to maintain your test suite.

If you changed module C then you would likely need to add test cases for it, and since dependencies are transitive you would likely need to alter/add test cases for all the dependent modules.

That is an exponential growth of complexity; so, good luck with that. (See my post “Testing software is (computationally) hard” for an explanation). 

It gets worse, though.

The transitive dependencies make it nearly impossible to reuse your modules.

Suppose you wanted to reuse module A. Well, you would also need to reuse module B and module C since module A depends on them. However, what if module B and module C don’t do anything that you really need, or, even worse, do something that impedes you?

You have forced a situation where you can’t use the valuable parts of the code without also using the worthless parts of the code.

When faced with this situation, it is often easier to create an entirely new module with the same features as module A.

In my experience, this happens more often than not.

There is a module in this chain of dependencies that does not suffer this problem: module C.

Module C does not depend on anything; so, it is very easy to reuse. Since it is so easy to reuse, everyone has the incentive to reuse it. Eventually you will have a system that looks like this.

[Figure: many modules all depending, directly or transitively, on a single central module]

Guess what that center module is: THE FRAMEWORK.

[Figure: the central module everything depends on is the framework]

This is a classic example of code immobility.

Code mobility is a software metric that measures how difficult it is to move code to another context.

Immobile code signals that the architecture of the system doesn’t support decoupling.

You can argue that the framework has so many reusable components that we ought to couple directly to it.

Ultimately, we value code reuse because we want to save time and resources, and to reduce redundancy by taking advantage of existing components.

Isn’t that exactly what the framework gave us?

Well, yes and no.

It depends on what you want to reuse.

Are you trying to reuse infrastructure code? By all means, please use a framework for that.

Are you trying to reuse domain specific business logic? You probably don’t want to couple your business logic directly to a framework.

An example of coupling your business logic directly to a framework is your typical MVC-ORM design. I’ve already explained this in my blog post “MongoDB, MySQL, and ORMS: when and where to use them”; so, I will not elaborate on it, here.

The Best of Both Worlds  

So it seems that we are at an impasse: we want to reuse infrastructure code from a framework, but we also want our business logic to be independent of it.

Can’t we have it both ways?

Actually, we can.

The direction of the arrows in our dependency graph is what makes our code immobile.

For example, if module A depends on module B then we have to use module B every time we use module A.

However, if we inverted the dependencies such that module A and module B both depended on a common module — call it module C — then we could reuse module C since it has no dependencies. This makes module C a reusable component; so, this is where you would place your reusable code.

The Dependency Inversion Principle exists to enable this pattern.
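Here is a minimal sketch of that inversion in code; the Notifier interface and the class names are invented purely for illustration:

    <?php

    // "Module C": a small, dependency-free abstraction that everyone can reuse.
    interface Notifier
    {
        public function notify($recipient, $message);
    }

    // "Module A": business logic that depends only on the abstraction.
    class AccountCreator
    {
        private $notifier;

        public function __construct(Notifier $notifier)
        {
            $this->notifier = $notifier;
        }

        public function createAccount($email)
        {
            // ...domain logic lives here...
            $this->notifier->notify($email, 'Welcome aboard!');
        }
    }

    // "Module B": infrastructure code that also depends only on the abstraction.
    class EmailNotifier implements Notifier
    {
        public function notify($recipient, $message)
        {
            mail($recipient, 'Notification', $message);
        }
    }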

The typical repository pattern is the perfect example of how inverting dependencies can decouple your business logic from a framework. I have already talked about it in my post “Contract Tests: or, How I Learned To Stop Worrying and Love the Liskov Substitution Principle”; so, I won’t elaborate on it, here.

So how would you structure your application to support the inverted dependencies?

Well, there are multiple ways you can legitimately do it, but I use the strategy of placing the bulk of the application’s code in external libraries.

Under this model, the framework’s views and controllers only serve as a delivery mechanism for the application.
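For example, a controller in this style is little more than glue. In the sketch below, FrameworkController, the request/render helpers, RegisterUser, and MySqlAccountRepository all stand in for whatever your framework and your own libraries actually provide:

    <?php

    // Lives inside the framework; knows about HTTP, knows nothing about the domain.
    class AccountController extends FrameworkController
    {
        public function register()
        {
            // RegisterUser lives in a plain PHP library with no framework dependency.
            $useCase = new RegisterUser(new MySqlAccountRepository());

            $result = $useCase->execute(
                $this->request->input('username'),
                $this->request->input('password')
            );

            return $this->render('account/registered', array('result' => $result));
        }
    }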

Consequently, we can easily move our code to another framework because the components do not depend on the framework in any way.

As an added benefit, we can test the bulk of the application independent of the framework. 

In fact, this is your typical Component Oriented Architecture.

For example, standard Component Oriented Architecture will break large C++ applications into multiple dll files, large Java applications into multiple jar files, or large .NET applications into multiple assemblies.

There are rules about how you should structure these packages, however.

You could argue that we would spend too much time on up-front design when building this type of system.

I agree that there is some up-front cost, but that cost is easily offset by the time savings on maintenance, running tests, adding features, and anything else that we want to do to the system in the future.

We should view writing mobile code as an investment that will pay big dividends over time.

I plan to create a very simple demo application that showcases this design in the near future. Stay tuned.

Contract Tests or: How I Learned to Stop Worrying and Love The Liskov Substitution Principle

We software developers often regret past design decisions because we get stuck with their consequences. As an industry, we face this challenge so much that we have a name for it: accidental complexity.

Developers introduce “accidental complexity” when they design interfaces or system routines that unnecessarily impede future development.

For example, I might decide to use a database to persist application state, but later I might realize that using a database introduces scalability problems. However, by the time I realize this, all my business logic depends on the database; so, I can’t easily change the persistence mechanism because I coupled it with the business logic.

In this hypothetical example, I only care about persisting application state, but I don’t necessarily care how I persist it or where I persist it. I could theoretically use any persistence mechanism. However, I “accidentally” coupled myself to a database, and that caused the “accidental complexity”.

I see this frequently happen with applications that use Object Relational Mappers (ORMs). Consider the following code snippet:
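The sketch below is representative; the table name, field names, and annotation details are illustrative rather than copied from a real project:

    <?php

    use Doctrine\ORM\Mapping as ORM;

    /**
     * @ORM\Entity
     * @ORM\Table(name="people")
     */
    class Person
    {
        /**
         * @ORM\Id
         * @ORM\Column(type="integer")
         * @ORM\GeneratedValue
         */
        private $id;

        /** @ORM\Column(type="string") */
        private $name;

        public function __construct($name)
        {
            $this->name = $name;
        }

        public function getId()
        {
            return $this->id;
        }

        public function getName()
        {
            return $this->name;
        }
    }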

The Person class has special annotations from the Doctrine ORM framework. These annotations allow me to “automatically” persist information to a database based on the annotation values. This significantly simplifies the persistence logic.

For example, if I wanted to save a new Person to a database, I could do it quite easily with the following code snippet.
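Something along these lines, assuming a configured Doctrine EntityManager in $entityManager:

    <?php

    $person = new Person('Ada');

    // The EntityManager inspects the annotations and writes the row for us.
    $entityManager->persist($person);
    $entityManager->flush();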

However, as a consequence of our “simplification”, I have also coupled two separate responsibilities: (a) the business logic, and (b) the persistence logic.

Unfortunately, any complex situation will force us to make difficult trade-offs, and sometimes we don’t always have the information we need to make proper decisions. This situation can force us to make early decisions that unnecessarily introduce “accidental complexity”.

Fortunately, we have a tool that can let us defer implementation details: the Abstract Data Type (ADT).

Abstraction to the Rescue

An ADT provides me the means to separate “what” a module does from “how” a module does it — we define a module as some “useful” organization of code.

Object oriented programming languages typically use interfaces and classes to implement the concept of an ADT.

For example, suppose that I defined a Person class in PHP.
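A bare-bones version might look like this:

    <?php

    class Person
    {
        private $id;
        private $name;

        public function __construct($id, $name)
        {
            $this->id = $id;
            $this->name = $name;
        }

        public function getId()
        {
            return $this->id;
        }

        public function getName()
        {
            return $this->name;
        }
    }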

I could use an interface to define a PersonRepository to signal to the developer that this module will (a) return a Person object from persistent storage, and (b) save a Person object to persistent storage. However, this interface only signals the “what”, not the “how”.
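A sketch of that interface; the method names are my own choice, and only the intent matters:

    <?php

    interface PersonRepository
    {
        /**
         * The "what": fetch a Person from persistent storage, or null if none exists.
         */
        public function find($id);

        /**
         * The "what": write a Person to persistent storage.
         */
        public function save(Person $person);
    }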

Suppose that I wanted to use a MySql database to persist information. I could do this with a MySqlPersonRepository class that implements the PersonRepository interface.
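One possible MySql-backed implementation, sketched with PDO; the table and column names are illustrative:

    <?php

    class MySqlPersonRepository implements PersonRepository
    {
        private $pdo;

        public function __construct(PDO $pdo)
        {
            $this->pdo = $pdo;
        }

        public function find($id)
        {
            $statement = $this->pdo->prepare('SELECT id, name FROM people WHERE id = ?');
            $statement->execute(array($id));
            $row = $statement->fetch(PDO::FETCH_ASSOC);

            return $row ? new Person($row['id'], $row['name']) : null;
        }

        public function save(Person $person)
        {
            $statement = $this->pdo->prepare('REPLACE INTO people (id, name) VALUES (?, ?)');
            $statement->execute(array($person->getId(), $person->getName()));
        }
    }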

This class defines (a) “how” to find a Person from a database, and (b) “how” to save a Person to a database.

If I wanted to change the implementation to MongoDb then I could potentially use the following class.
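Here is a MongoDb-flavoured sketch, using the mongodb/mongodb library; the document shape is illustrative:

    <?php

    class MongoDbPersonRepository implements PersonRepository
    {
        private $collection;

        public function __construct(MongoDB\Collection $collection)
        {
            $this->collection = $collection;
        }

        public function find($id)
        {
            $document = $this->collection->findOne(array('_id' => $id));

            return $document ? new Person($document['_id'], $document['name']) : null;
        }

        public function save(Person $person)
        {
            $this->collection->replaceOne(
                array('_id' => $person->getId()),
                array('_id' => $person->getId(), 'name' => $person->getName()),
                array('upsert' => true)
            );
        }
    }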

This would enable us to write code like the following.
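For instance, something like this:

    <?php

    $people = AppFactory::getRepositoryFactory()->getPersonRepository();

    $people->save(new Person(42, 'Grace'));
    $person = $people->find(42);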

Notice how I don’t make any reference to a particular style of persistence in the code above. This “separation of concerns” allows me to switch implementations at runtime by changing the definition of AppFactory::getRepositoryFactory.

To use a MySql database, I could use the following class definition.
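One possible shape for that definition; the connection details are placeholders:

    <?php

    class AppFactory
    {
        public static function getRepositoryFactory()
        {
            return new MySqlRepositoryFactory();
        }
    }

    class MySqlRepositoryFactory
    {
        public function getPersonRepository()
        {
            $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');

            return new MySqlPersonRepository($pdo);
        }
    }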

and if we wanted to use a MongoDb datastore then we could use the following class definition:
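Again, the connection details are placeholders:

    <?php

    class AppFactory
    {
        public static function getRepositoryFactory()
        {
            return new MongoDbRepositoryFactory();
        }
    }

    class MongoDbRepositoryFactory
    {
        public function getPersonRepository()
        {
            $client = new MongoDB\Client('mongodb://localhost:27017');

            return new MongoDbPersonRepository($client->app->people);
        }
    }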

By simply changing one line of code, I can change how the entire application persists information.

I HAVE THE POWER … of the Liskov Substitution Principle

Recall the original thought experiment: I originally used a MySql database to persist application state, but I later needed to switch to MongoDb. However, I could not easily swap out the persistence mechanism because I had coupled the business logic to it (via the ORM).

Mixing the two concerns made it hard to change persistence mechanism because it also required changing the business logic. However, when I separated the business logic from the persistence mechanism, I could make independent design decisions based on my needs.

The power to switch implementations comes from the “Liskov Substitution Principle”.

Informally, the Liskov Substitution Principle states that if I declare a variable X with type T, then I can assign to X an instance of any subtype S of T without breaking the program.

In the example above, I had a type PersonRepository, and two subtypes: (a) MySqlPersonRepository, and (b) MongoDbPersonRepository. The Liskov Substitution Principle states that I should be able to substitute either subtype for a variable of type PersonRepository.

We call this “behavioral subtyping”. It differs from traditional subtyping because behavioral subtyping demands that the runtime behavior of a subtype remains consistent with that of its parent type.

Everybody (and Everything) Lies

Just because a piece of code claims to do something does not mean that it actually does it. When dealing with real implementations of an ADT, we need to consider that our implementations could lie.

For example, I could accidentally forget to save the id of the Person object properly in the MongoDB implementation; so, while I intended to follow the “Liskov Substitution Principle”, my execution failed to implement it properly.

Unfortunately, we cannot rely on the compiler to catch these errors.

We need a way to test the runtime behaviors of classes that implement interfaces. This will verify that we at least have some partial correctness to our application.

We call these “contract tests”.

Trust But Verify

Assume that we wanted to place some behavioral restrictions on the interface PersonRepository. We could design a special class with the responsibility of testing those rules.

Consider the following class:
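A sketch of such a class, written here with PHPUnit; the specific rules being checked are only examples of the kind of behavioral restrictions you might demand:

    <?php

    use PHPUnit\Framework\TestCase;

    abstract class PersonRepositoryContractTest extends TestCase
    {
        /**
         * Each concrete test class supplies the implementation under test.
         *
         * @return PersonRepository
         */
        abstract protected function getPersonRepository();

        public function testItReturnsNullWhenThePersonDoesNotExist()
        {
            $repository = $this->getPersonRepository();

            $this->assertNull($repository->find(999));
        }

        public function testItFindsAPersonThatWasPreviouslySaved()
        {
            $repository = $this->getPersonRepository();
            $repository->save(new Person(1, 'Barbara'));

            $person = $repository->find(1);

            $this->assertSame('Barbara', $person->getName());
        }
    }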

Notice how we use the abstract function “getPersonRepository” in each test. We can defer the implementation of our PersonRepository to some subclass of PersonRepositoryContractTest, and execute our tests on the subclass that implements PersonRepositoryContractTest.

For example, we could test the functionality of a MySql implementation using the following code:
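Something like this, with placeholder connection details:

    <?php

    class MySqlPersonRepositoryTest extends PersonRepositoryContractTest
    {
        protected function getPersonRepository()
        {
            $pdo = new PDO('mysql:host=localhost;dbname=app_test', 'app_user', 'secret');
            $pdo->exec('DELETE FROM people');

            return new MySqlPersonRepository($pdo);
        }
    }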

and if we wanted to test a MongoDB implementation then we could use the following code:
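And similarly, with placeholder connection details:

    <?php

    class MongoDbPersonRepositoryTest extends PersonRepositoryContractTest
    {
        protected function getPersonRepository()
        {
            $client = new MongoDB\Client('mongodb://localhost:27017');
            $collection = $client->app_test->people;
            $collection->deleteMany(array());

            return new MongoDbPersonRepository($collection);
        }
    }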

This shows that we can reuse all the tests we wrote. Now we can easily test an arbitrary number of implementations.

Conclusion

Of course, in practice, there are many different ways of implementing contract tests; so, you may not want to use this particular method. I only want you to take away the fact that not only can you implement contract tests, but that you can do it in a simple and natural way.

A Tale of Two Approaches: Inside-Out vs Outside-In Testing

I recently had a conversation with a test automation engineer about the virtues of outside-in design vs. inside-out design. I prefer outside-in design while he prefers inside-out design.

I want to tell this story because it clearly demonstrates the current culture war for the hearts and minds of developers with respect to testing/design methodologies.

Our Objective

Our disagreement started when the test engineer brought up automated testing. While we both agreed on the virtues of automated testing, we disagreed on the specifics of how we should approach automated testing.

He uses the traditional TDD approach of red-green-refactor for unit tests before writing integration tests (i.e. bottom-up), while I use the approach of writing behavioral tests first, integration tests with stubs second, and unit tests last (i.e. top-down).

Neither of us backed down on this; so, we sparred with different arguments and counter-arguments.

Our Conflict

Our disagreement seemed to center around different value systems: the test engineer valued “program correctness”, while I valued “interface design” feedback.

The test engineer argued that you can’t prove program correctness when you start with an integration test. According to him, you have to “prove” the correctness of isolated modules before you can “prove” the correctness of their integration.

His statement of “proving” program correctness shocked me (especially since it came from an automation test engineer). I mentioned that you can’t actually “prove” program correctness with integration tests, and that you can only use them to falsify program correctness. Further, even if you could “prove” program correctness with integration tests, doing so would be an NP-hard problem.

He tried to counter me with several arguments, but I felt that they equated to “grasping at straws”.

Eventually, I told him that I don’t use outside-in testing to “prove” program correctness, anyway: I use it as a means to design interfaces.

By definition, an interface constrains the design of a class.

I said that you can not effectively determine the constraints of an interface working from the inside-out because subclasses depend on the interface (not the other way around). Consequently, most people “discover” the constraints of an interface by constantly refactoring.

I argued that you can more effectively “discover” the constraints of an interface by working from the outside-in. How I want to call the methods of an interface determines the types of constraints I place on the interface. The design feedback I get on interface design when working from the outside-in is incredibly valuable to me.

I drew the following diagram to help illustrate this point.

[Diagram: the caller and the callee both have source code dependencies on the interface; at runtime, the caller depends on the concrete callee]

The solid lines represent the source code dependencies, while the dashed line represents the runtime dependencies. This illustrates that at compile time both the caller and the callee have a dependency on the interface, but when the program runs the caller has a dependency on the callee.

Object Oriented polymorphism gets its power from this asymmetry between the source code dependency and the runtime dependency. For example, when the compiler runs, it will check that the callee obeys the interface properly. However, when the application actually runs, the runtime engine will swap out the interface for the actual concrete class.
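A tiny PHP sketch of that asymmetry; the names are illustrative:

    <?php

    interface Greeter                          // both sides depend on this at compile time
    {
        public function greet();
    }

    class ConsoleGreeter implements Greeter    // the callee
    {
        public function greet()
        {
            echo "Hello!\n";
        }
    }

    class App                                  // the caller
    {
        public function run(Greeter $greeter)
        {
            // At runtime, $greeter is whatever concrete class was actually passed in.
            $greeter->greet();
        }
    }

    $app = new App();
    $app->run(new ConsoleGreeter());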

A Draw Decision

In the end, neither of us could convince the other of the justness of our respective causes. We simply agreed to disagree, and we each went our separate ways.

Ultimately, everyone has to make trade-offs between competing concerns. However, given the nature of object orientation, it seems to me that we should value the design of interfaces over implementation details. Consequently, we should choose a methodology that streamlines that process. Otherwise, why even use an OO language?

Cargo Cult MVC

Wikipedia describes “cargo cult programming” as “a style of computer programming characterized by the ritual inclusion of code or program structures that serve no real purpose”.

I believe that MVC web frameworks have essentially forced a cargo cult mentality on programmers.

The Problem

Pretty much all MVC web frameworks use the notion of a front controller.

The front controller has the responsibility of handling every web request, and then gives that request to the proper controller.

For example, CakePHP’s default configuration uses naming conventions to choose routes. Suppose I had the following piece of code:
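Roughly like this, in simplified CakePHP 2-era style:

    <?php

    // app/Controller/PostsController.php
    class PostsController extends AppController
    {
        public function index()
        {
            // By convention, this action renders app/View/Posts/index.ctp
            $this->set('posts', $this->Post->find('all'));
        }
    }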

The front controller for CakePHP would automatically associate PostsController::index() with the url /posts/index.

The front controller’s strategy for dispatching work to controllers is pretty arbitrary. However, the most typical strategies are XML file configuration, annotations, naming conventions, or fluent interfaces.

Typically, we try to organize our code around use cases because it is a technique that we know promotes loose coupling and high cohesion. A use case in this context is any single useful thing the system must do. For example, “edit an article” would be a use case for a blogging system.

However, there is no guarantee that a single url will always be associated with a single use case. Consider the case of a web search form. Imagine that we have a url of product/search?q=some_search_term. That web request would then be passed to a controller like the following:
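A sketch of such a controller; the base class, the request/render/redirect helpers, the productSearch collaborator, and the PAGE_SIZE constant all stand in for whatever your framework and domain code provide:

    <?php

    class ProductController extends FrameworkController
    {
        const PAGE_SIZE = 20;

        public function search()
        {
            try {
                $results = $this->productSearch->query($this->request->query('q'));
            } catch (Exception $e) {
                return $this->render('search/error');                                     // "oops"
            }

            if (count($results) === 0) {
                return $this->render('search/no_results');                                // zero results
            }

            if (count($results) === 1) {
                return $this->redirect('/product/' . $results[0]->getId());               // one result
            }

            if (count($results) > self::PAGE_SIZE) {
                return $this->render('search/paginated', array('results' => $results));   // lots of results
            }

            return $this->render('search/list', array('results' => $results));            // many results
        }
    }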

The problem is that the controller actually has 5 use cases:

  • show zero results
  • show one result
  • show many results
  • show lots of results
  • oops

To understand why, consider how an average person would expect the system to respond.

  • Case Zero: If there were no results for a search then I would want to see a dedicated page telling me that there were no results.
  • Case One: If there was only one result then I would want the system to immediately show me the details of that single item.
  • Case Many: If there were many results that could fit on a single page then I would just like to see a listing of those results so that I can choose to drill down into a specific item.
  • Case Lots: If there were too many results to list on a single page then I would want to see some sort of pagination.
  • Case Oops: If something bad happened that prevented the system from responding to my request then I would want the page to tell me that.

Rather than being MVC, we now have MVVVVVC. This isn’t exactly what we want from an engineering perspective, but it is what we were forced to do in order to satisfy the business requirement.

We can work around that situation pretty easily with the proper use of design patterns. For example, we could simplify the controller logic with the strategy pattern. However, in so doing we are no longer using the MVC paradigm. It would be “Model StrategyContext View Controller”.
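A sketch of what that might look like; the interface and class names are invented for illustration:

    <?php

    interface SearchResultStrategy
    {
        public function matches($results);
        public function respond($results);
    }

    class SearchResultContext
    {
        private $strategies;

        public function __construct(array $strategies)
        {
            // One strategy per case: zero, one, many, lots, and "oops".
            $this->strategies = $strategies;
        }

        public function respond($results)
        {
            foreach ($this->strategies as $strategy) {
                if ($strategy->matches($results)) {
                    return $strategy->respond($results);
                }
            }
        }
    }

The controller action then collapses to a single call into the context.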

Not all cases are that simple to work around, though. Consider a system with different roles, where each role is allowed to view different things and perform different work. Now consider that you wanted to expose that kind of functionality through a single url. How painful would that code be?

The Solution

Do not treat MVC like it is a silver bullet solution. There are plenty of alternative architectures out there. Google the following terms to see for yourself:

  • Hexagonal Architecture (aka ports and adaptors)
  • Onion Architecture
  • Clean Architecture
  • Lean Architecture
  • DCI Architecture
  • CQRS
  • Naked Objects

Conclusion

Everything has its time and place. It is your job as a developer to think. Blind adherence to a framework will not save you.

I should also point out that MVC architectures themselves are not bad; how we use them is what’s bad. MVC architectures definitely had their usefulness, and they probably still do.

For example, MVC architectures are really good for very simple data entry systems because a single use case can actually fit into a url.

Consider the following mappings between URLs and use cases:

URL                    Use Case
/article/create        Create
/article/:id:          Read
/article/:id:/edit     Update
/article/:id:/delete   Delete

In this case, each url maps very well to a single use case; so, MVC would probably be a really good choice to power an app like this.

However, in my opinion, most systems will not be this simple. Most systems are now very sophisticated, and those that are not will eventually evolve into something that is. You might as well architect your system with something that you know can help you evolve.