Vertical Slicing and Product Backlog Management with the Gherkin Syntax

TL;DR: To create a product backlog, “vertically slice” your user stories by grouping similar scenarios. Estimate the work and value of these slices instead of the user story.   


I’ve seen a rise in the demand for “full-stack” developers in the last couple of years.

The agile concept of “vertical slicing” made these types of positions very popular.

In a traditional team structure, each person on a team will have knowledge of one layer of an application. When the team attempts to complete a feature, they have to split the feature into tasks corresponding to those layers and then distribute the tasks to the appropriate people.

We call this “horizontal slicing”.

If you had a team of “full-stack” developers, then you could simply assign a feature to a developer and expect them to complete it end-to-end with little to no help or coordination.

We call this “vertical slicing”.

This works great in theory, but it has a lot of challenges in practice.

One of these challenges is working out exactly how to create a high-quality product backlog from “vertical slices”.

Enter User Stories

I typically see teams create vertical slices based on “User Stories” and the Gherkin syntax.

The following two code snippets provide examples for two fictitious features: (a) Create Account and (b) Login.
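
The snippets below are a minimal Gherkin sketch of the two features; the scenario names are the ones used throughout this post, while the step wording (usernames, error messages, lockout behaviour, and so on) is purely illustrative.

```gherkin
Feature: Create Account
  As a visitor
  I want to create an account
  So that I can use the application

  Scenario: Valid Account Creation
    Given I am on the registration page
    When I submit a unique username, a unique email, and a strong, matching password
    Then my account is created

  Scenario: Duplicate Username
    Given an account already exists with the username "alice"
    When I try to register with the username "alice"
    Then I see an error telling me the username is taken

  Scenario: Duplicate Email
    Given an account already exists with the email "alice@example.com"
    When I try to register with the email "alice@example.com"
    Then I see an error telling me the email is already in use

  Scenario: Not a Strong Password
    When I try to register with the password "12345"
    Then I see an error telling me the password is too weak

  Scenario: Passwords Do Not Match
    When I try to register with a password confirmation that does not match my password
    Then I see an error telling me the passwords do not match

  Scenario: Long Wait Time
    Given the server is responding slowly
    When I submit the registration form
    Then I see a message asking me to try again later

  Scenario: Internal Server Error
    Given the server is returning internal errors
    When I submit the registration form
    Then I see a friendly error page
```

And a corresponding sketch for the Login feature:

```gherkin
Feature: Login
  As a registered user
  I want to log in
  So that I can access my account

  Scenario: Valid Username/Password
    Given I am a registered user
    When I log in with a valid username and password
    Then I am taken to my dashboard

  Scenario: Invalid Username/Password
    When I log in with an invalid username or password
    Then I see an error telling me my credentials are invalid

  Scenario: Too Many Incorrect Attempts
    Given I have entered incorrect credentials several times in a row
    When I try to log in again
    Then my account is temporarily locked

  Scenario: Long Wait Time
    Given the server is responding slowly
    When I submit the login form
    Then I see a message asking me to try again later
```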

Unfortunately, this sometimes creates very large product backlog items that the team cannot deliver quickly.

In particular, I’ve seen this happen when the business dictates that teams can only release polished and bug-free features.

For example, it could take a very long time for a team to complete the scenarios “Internal Server Error” and “Long Wait Time” for the feature “Create Account”. Further, those scenarios may not deliver much business value relative to the work they require.

In comparison, it could take a very short time for a team to complete the scenario “Valid Account Creation”, and that scenario might have very high business value.

This illustrates that coupling all scenarios together can impede the early and frequent releases we need to create a tight feedback loop between developers and testers or users.

Slice Your Slices

User Stories are not bad, though. We just need a better way to generate vertical slices for our product backlog.

Notice that each user story has multiple scenarios, and that we can conceptually break up each user story into individual scenarios.

Based on this principle, we can create vertical slices by grouping scenarios according to business value.

For example, we could slice our features in the following way.

| Feature        | Vertical Slice             | Scenario                    |
| -------------- | -------------------------- | --------------------------- |
| Create Account | Basic                      | Valid Account Creation      |
| Create Account | Business Rule Violations   | Duplicate Username          |
| Create Account | Business Rule Violations   | Duplicate Email             |
| Create Account | User Input Errors          | Not a Strong Password       |
| Create Account | User Input Errors          | Passwords Do Not Match      |
| Create Account | System Problems            | Long Wait Time              |
| Create Account | System Problems            | Internal Server Error       |
| Login          | Basic                      | Valid Username/Password     |
| Login          | Business Rule Violations 1 | Invalid Username/Password   |
| Login          | Business Rule Violations 2 | Too Many Incorrect Attempts |
| Login          | System Problems            | Long Wait Time              |

Each of these “vertical slices” becomes a product backlog item that we can individually estimate and prioritize.
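
One lightweight way to record these slices in the feature files themselves is with Gherkin tags. The tag names below are made up for illustration, but most Gherkin runners (Cucumber, Behat, SpecFlow, and so on) let you filter scenarios by tag, so each slice can be tracked and executed on its own:

```gherkin
Feature: Create Account

  @basic
  Scenario: Valid Account Creation
    # steps as above

  @business-rule-violations
  Scenario: Duplicate Username
    # steps as above

  @business-rule-violations
  Scenario: Duplicate Email
    # steps as above

  @system-problems
  Scenario: Internal Server Error
    # steps as above
```

A Cucumber-style runner could then execute a single slice on its own, for example with something like cucumber --tags @basic.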

For example, our fictitious product team could prioritize the “vertical slices” in the following way.

  1. Create Account – Basic
  2. Login – Basic
  3. Login – Business Rule Violations 1
  4. Create Account – User Input Errors
  5. Create Account – Business Rule Violations
  6. Login – Business Rule Violations 2
  7. Create Account – System Problems
  8. Login – System Problems

This allows a more granular approach to creating product backlog items.

As an added benefit, you can leave a “user story” largely undefined so long as its highest-priority slices are already in your product backlog.

This allows you to “groom” your user stories in a “just in time” way.

For example, we created four “vertical slices” of the “Create Account” feature above. As an alternative, we could have created only the first slice, “Create Account – Basic”, and deferred further analysis until someone completed that slice. This could have saved everyone from spending unnecessary time in a grooming session.
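
In Gherkin terms, that just-in-time approach could start with a feature file containing nothing but the highest-value scenario (again, only a sketch):

```gherkin
Feature: Create Account

  # Only the highest-value slice is specified for now; the remaining
  # scenarios will be written when (and if) they are prioritized.
  Scenario: Valid Account Creation
    Given I am on the registration page
    When I submit a unique username, a unique email, and a strong, matching password
    Then my account is created
```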

I am only providing an illustration, though. Ultimately, the end result depends on the situation and the interaction between team members.


How Frameworks Shackle You, and How to Break Free (Part Deux)

In my last post, I talked about how over-reliance on a framework creates immobile code, and how you can use the dependency inversion principle to break the dependency on a framework.

I also mentioned that I would create a demo application to demonstrate how to do this.

However, before I live up to that promise, I have to introduce you to a little theory.

Principles of Package Management

Mobile code does not just happen. You have to design it.

To build mobile code we typically group code into reusable packages that follow proper package design principles.

For this discussion, I will only consider the principles of package cohesion:

  • The Release/Reuse Equivalency Principle
  • The Common Reuse Principle
  • The Common Closure Principle

Robert Martin codified these principles in his book “Agile Software Development: Principles, Patterns, and Practices”. This book is the gold standard for agile software development.

The Release/Reuse Equivalency Principle

The Release/Reuse Equivalency Principle says that “the granule of reuse is the granule of release”.

This principle makes an equivalence between reusability and releasability.

This equivalence has two major implications:

  • You can only release code that is reusable.
  • You can only reuse code that is releasable.

Stated negatively:

  • You cannot release code that is not reusable.
  • You cannot reuse code that is not releasable.

This principle puts a very heavy burden on the maintainer of a package, and that burden forces the package maintainer to have a package release strategy.

Package maintainers generally follow the “semantic versioning” strategy.

Semantic versioning has very strict rules related to “semantic version numbers”.

Semantic version numbers follow the pattern x.y.z, where x, y, and z are non-negative integers.

Each position conveys a particular meaning (hence the name “semantic versioning”).

The first number is the major version.

We usually start a package at major version 0. We should consider a version 0 package unfinished and experimental: it changes heavily, without much care for backwards compatibility.

Starting from major version 1, we consider the published API stable and the package trustworthy from that moment on. Every subsequent increment of the major version marks a release that breaks backwards compatibility.

The second part of the version number is the minor version.

We increment the minor version when we add new functionality to the package or deprecate parts of the public API. A minor release promises your clients that the package will not break backwards compatibility; it only adds new ways of using the package.

The last part of the version number is the patch version.

Starting at 0, we increment it for each patch released for the package. A patch can be either a bug fix or a refactoring of private code.

Further, a package maintainer has the option to append metadata after the version number. Typically they use it to classify a release as being in a particular state: alpha, beta, or rc (release candidate).

For example, these items could be the releases of a package:

  • 2.10.2
  • 3.0.0
  • 3.1.0
  • 3.1.1-alpha
  • 3.1.1-beta
  • 3.1.1-rc.1
  • 3.1.1-rc.2
  • 3.1.1

These numbers communicate the following to a client:

  • Release 2.10.x has two patches. The patches may have been bug fixes or refactors; we would need to look at the changelog or commit log to determine the importance of each patch.
  • After release 2.10.x, the package maintainer decided to break backwards compatibility, and signaled this to clients by creating the 3.0.0 release.
  • At 3.1.0, the package maintainer introduced new features that did not break compatibility with 3.0.0.
  • 3.1.1-alpha signals that the package maintainer started a patch of release 3.1.0 but does not yet want to call the patch stable. Someone may have submitted a bug report against release 3.1.0, and the package maintainer may have started the initial phases of fixing the bug. In this scenario, the package maintainer likely added some testing code to isolate the particular bug, or to validate that the bug is fixed.
  • 3.1.1-beta suggests that the package maintainer completed the fix. Most likely this signals that the package maintainer’s automated tests pass.
  • 3.1.1-rc.1 suggests that the patch passed the package maintainer’s own QA and can potentially be released as a stable version. The package maintainer would likely tell clients to run their integration tests against this release. Manual QA likely happens against this release, also.
  • 3.1.1-rc.2 suggests that the package maintainer found regression errors in 3.1.1-rc.1. It may indicate that an integration test failed for a client. The package maintainer may have fixed issues that a client reported and released the fix as 3.1.1-rc.2.
  • 3.1.1 signals that the package maintainer has successfully patched the 3.1.0 release.

The Common Reuse Principle

The Common Reuse Principle states that “code that is used together should be grouped together”. The reverse is also true: “code that is not used together should not be grouped together.”

A dependency on a package implies a dependency on everything within the package; so, when a package changes, all clients of that package must verify that they work with the new version.

If we group code that is not used together into one package, then we force our clients to go through the process of upgrading and revalidating unnecessarily.

By obeying the Common Reuse Principle, we provide the simple courtesy of not making our clients work harder than necessary.

The Common Closure Principle

The Common Closure Principle says that “code that changes together belongs together”.

A good architect will divide a large project into a network of interrelated packages.  

Ideally, we would like to minimize the number of packages affected by any given change request, because when we minimize the number of affected packages we also minimize the work to manage, test, and release those packages. The more packages that change in any given release, the greater the work to rebuild, test, and deploy the release.

When we obey the Common Closure Principle, we confine a requirements change to the smallest possible number of packages and prevent irrelevant releases.

Automated Tests are Mandatory

While clean code and clean design of your package is important, it’s more important that your package behaves well.

In order to verify the proper behavior of a package, you must have automated tests.

There exist many different opinions on the nature and extent of automated tests, though:

  • How many tests should you write?
  • Do you write the tests first, and the code later?
  • Should you add integration tests, or functional tests?
  • How much code coverage does your package need?

In my opinion, the particulars of how you write tests or the extent of your tests are situational. However, you must have automated tests. This is non-negotiable.

If you don’t have automated tests then you are essentially telling your clients this:

I do not care about you. This works for me today, and I do not care about tomorrow. Use this package at your own peril.

Putting the Principles to Practice

Now that we have the principles, we can start to apply them.

In a future post, I will create a basic application that uses the package design principles that I described. Further, I will compose the components into different frameworks to demonstrate how to migrate the application between frameworks.

Stay tuned.