“The Surgical Team” vs “The Full-Stack Developer”

I recently had a conversation with an architect about the wisdom contained in Fred Brooks's "The Mythical Man-Month". We both agreed that it contains timeless principles and practices.

At some point, we discussed how his “surgical team” concept might apply to modern development, and we both concluded that we should structure teams around the “surgical team” concept.

I want to tell this story because it clearly demonstrates how we can apply timeless principles to any “modern” problems.

In "The Mythical Man-Month", Fred Brooks makes two observations:

  • Communication overhead is the single largest problem in software development: the more developers on the project, the bigger the communication overhead.
  • There is a huge productivity gap among software developers: the good developers are very very good. The bad ones are very very bad.

Given these limitations, Dr. Brooks says that we should organize large software development projects into multiple “surgical teams”.

Each “surgical team” has a single “chief programmer” (i.e. the surgeon) who does most of the delicate work.

Much like a surgeon, the "chief programmer" has a staff of specialists in more mundane roles to support him.

Contrast this to the current fad of “full-stack” development.

In this model, we build teams that can handle everything end-to-end and that can deliver features independently of each other. This is effectively treating people like interchangeable "man-months", which Dr. Brooks debunks as a "myth".

I do not want to berate "full-stack" developers, though. They have their place in the world (probably as support for an architect or chief programmer).

However, a strategy of only hiring full-stack developers does not seem very efficient to me because it does not address the two biggest problems that "The Mythical Man-Month" identifies.


How Frameworks Shackle You, and How to Break Free (Part Deux)

In my last post, I talked about how over-reliance on a framework creates immobile code, and how you can use the dependency inversion principle to break the dependency on a framework.

I also mentioned that I would create a demo application to demonstrate how to do this.

However, before I live up to that promise, I have to introduce you to a little theory.

Principles of Package Management

Mobile code does not just happen. You have to design it.

To build mobile code we typically group code into reusable packages that follow proper package design principles.

For this discussion, I will only consider the principles of package cohesion:

  • The Release/Reuse Equivalency Principle
  • The Common Reuse Principle
  • The Common Closure Principle

Robert Martin codified these principles in his book “Agile Software Development: Principles, Patterns, and Practices”. This book is the gold standard for agile software development.

The Release/Reuse Equivalency Principle

The Release/Reuse Equivalency Principle says that “the granule of reuse is the granule of release”.

This principle makes an equivalence between reusability and releasability.

This equivalence has two major implications:

  • you can only release code that is reusable
  • you can only reuse code that is releasable

The reverse is also true:

  • You cannot release code that is not reusable.
  • You cannot reuse code that is not releasable.

This principle puts a very heavy burden on the maintainer of a package, and that burden forces the package maintainer to have a package release strategy.

Package maintainers generally follow the “semantic versioning” strategy.

Semantic versioning has very strict rules related to “semantic version numbers”.

Semantic version numbers follow the pattern x.y.z, where x, y, and z are non-negative integers.

Each position conveys a particular meaning (hence the name “semantic versioning”).

The first number is the major version.

We usually start a package at version 0. We should consider a version 0 package unfinished and experimental: it changes heavily, with little care for backwards compatibility.

Starting from major version 1, we consider the published API stable, and the package earns a certain trustworthiness from that moment. Every subsequent increment of the major version marks a release that breaks backwards compatibility.

The second part of the version number is the minor version.

We increment the minor version when we add new functionality to the package or deprecate parts of the public API. A minor release promises your clients that the package will not break backwards compatibility; it only adds new ways of using the package.

The last part of the version number is the patch version.

Starting at 0, we increment it for each patch released for the package. A patch can be either a bug fix or some refactored private code.

Further, a package maintainer has the option to add metadata after the release numbers. Typically, they use it to classify a package as being in a particular state: alpha, beta, or rc (release candidate).

For example, these items could be the releases of a package:

  • 2.10.2
  • 3.0.0
  • 3.1.0
  • 3.1.1-alpha
  • 3.1.1-beta
  • 3.1.1-rc.1
  • 3.1.1-rc.2
  • 3.1.1

These numbers communicate the following to a client:

  • release 2.10.x has two patches. The patches may have been bug fixes or refactors. We would need to look at the changelog or commit logs to determine the importance of each patch.
  • After release 2.10.x, the package maintainer decided to break backwards compatibility. The package maintainer signaled to clients that the package breaks backwards compatibility by creating the 3.0.0 release.  
  • At 3.1.0, the package maintainer introduced new features that did not break compatibility with 3.0.0.
  • 3.1.1-alpha signals that the package maintainer started a patch of release 3.1.0. However, the package maintainer does not want to call the patch stable. Someone may have submitted a bug report to the package maintainer for release 3.1.0, and the package maintainer may have started the initial phases of fixing the bug. In this scenario, the package maintainer likely added some testing code to isolate the particular bug, or validate that the bug is fixed.
  • 3.1.1-beta suggests that the package maintainer completed the “feature”. Most likely this signals that the package maintainer’s automated tests pass.
  • 3.1.1-rc.1 suggests that the patch passed the package maintainer's internal QA and that the package maintainer can potentially release it as a stable version. The package maintainer would likely tell clients to run their integration tests against this release, and manual QA likely happened against it as well.
  • 3.1.1-rc.2 suggests that the package maintainer found regression errors in 3.1.1-rc.1. It may indicate that an integration test failed for a client. The package maintainer may have fixed issues that a client reported and released the fix as 3.1.1-rc.2.
  • 3.1.1 signals that the package maintainer has successfully patched the 3.1.0 release.
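These ordering rules can be sketched in a few lines of Python. This is a toy comparator under simplifying assumptions (pre-release tags compare as plain strings, which happens to order alpha < beta < rc but is not the full semver precedence algorithm):

```python
import re

# Matches x.y.z with an optional -prerelease suffix, e.g. "3.1.1-rc.2".
_SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$")

def parse(version):
    """Turn a semantic version string into a sortable tuple."""
    major, minor, patch, pre = _SEMVER.match(version).groups()
    # A pre-release sorts before the plain release it precedes,
    # so stable versions get the flag (1,) and pre-releases (0, tag).
    pre_key = (1,) if pre is None else (0, pre)
    return (int(major), int(minor), int(patch), pre_key)

releases = ["3.1.1-rc.2", "2.10.2", "3.1.1", "3.0.0", "3.1.1-alpha", "3.1.0"]
ordered = sorted(releases, key=parse)
# ordered now runs from the oldest release to the newest.
```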

The Common Reuse Principle

The Common Reuse Principle states that "code that is used together should be grouped together". The reverse is also true: "code that is not used together should not be grouped together."

A dependency on a package implies a dependency on everything within the package; so, when a package changes, all clients of that package must verify that they work with the new version.

If we group code that is not used together, then we force our clients to go through the process of upgrading and revalidating their packages unnecessarily.

By obeying the Common Reuse Principle, we provide the simple courtesy of not making our clients work harder than necessary.

The Common Closure Principle

The Common Closure Principle says that "code that changes together belongs together".

A good architect will divide a large project into a network of interrelated packages.  

Ideally, we would like to minimize the number of packages affected by every change request, because when we minimize the number of affected packages, we also minimize the work to manage, test, and release those packages. The more packages that change in any given release, the greater the work to rebuild, test, and deploy the release.

When we obey the Common Closure Principle, we confine a requirements change to the smallest number of packages possible and prevent irrelevant releases.

Automated Tests are Mandatory

While clean code and a clean design are important, it's more important that your package behaves well.

In order to verify the proper behavior of a package, you must have automated tests.

There exist many different opinions on the nature and extent of automated tests, though:

  • How many tests should you write?
  • Do you write the tests first, and the code later?
  • Should you add integration tests, or functional tests?
  • How much code coverage does your package need?

In my opinion, the particulars of how you write tests or the extent of your tests are situational. However, you must have automated tests. This is non-negotiable.

If you don’t have automated tests then you are essentially telling your clients this:

I do not care about you. This works for me today, and I do not care about tomorrow. Use this package at your own peril.

Putting the Principles to Practice

Now that we have the principles, we can start to apply them.

In a future post, I will create a basic application that uses the package design principles that I described. Further, I will compose the components into different frameworks to demonstrate how to migrate the application between frameworks.

Stay tuned.

How Frameworks Shackle You, and How to Break Free

I sometimes hate software frameworks. More precisely, I hate their rigidity and their cookie-cutter systems of mass production.

Personally, when someone tells me about a cool new feature of a framework, what I really hear is “look at the shine on those handcuffs.”

I've defended this position on many occasions. However, I really want to put it down on record so that I can simply point people at this blog post the next time they ask me to explain myself.

Why Use a Framework?

Frameworks are awesome things.

I do not dispute that.

Just off the top of my head, I can list the following reasons why you should use a framework:

  1. A framework abstracts low-level details.
  2. The community support is invaluable.
  3. The out-of-the-box features save you from reinventing the wheel.
  4. A framework usually employs design patterns and "best practices" that enable team development.

I’m sure that you can add even better reasons to this list.

So why would I not be such a big fan?

What Goes Wrong with Frameworks?


In two words: VENDOR LOCK-IN.

Just to reinforce the point, let me say that again: VENDOR LOCK-IN.

There is a natural power asymmetry between the designer/creators of a framework and the users of that framework.

Typically, the users of the framework are the slaves to the framework, and the framework designers are the masters. It doesn't matter whether some open source community or a large corporation designed it.

In fact, this power asymmetry is why vendors and consultants can make tons of money on “free” software: you are really paying them for their specialized knowledge.

Once you are locked in, the vendor, consulting company, or consultant can charge you any amount of money they want. Further, they can continue to increase their prices at astronomical rates, and you will have no choice but to pay.

Further, the features of your application become more dependent on the framework as the system grows larger. This has the added effect of increasing the switching cost should you ever want to move to another framework.

I’ll use a simple thought experiment to demonstrate how and why this happens.

Define a module as any kind of organizational grouping of code. It could be a function, class, package, or namespace.

Suppose that you have two modules: module A and module B. Suppose that module B has features that module A needs; so, we make module A use module B.

In this case we can say that module A depends on module B, and we can visualize that dependency with the following picture.


Suppose that we introduce another module C, and we make module B use module C. This makes module B depend on module C, and consequently makes module A depend on module C.


Suppose that I created some module D that uses module A. That would make module D depend on module A, module B, and module C.


This demonstrates that dependencies are transitive, and every new module we add to the system will progressively make the transitive dependencies worse.
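To make the transitivity concrete, here is a small Python sketch of the thought experiment (the module names come from the text; the graph encoding is mine):

```python
# Each module maps to the set of modules it uses directly.
DEPENDS_ON = {
    "D": {"A"},
    "A": {"B"},
    "B": {"C"},
    "C": set(),
}

def transitive_dependencies(module, graph):
    """Collect every module reachable from `module` through the graph."""
    seen = set()
    stack = [module]
    while stack:
        for dep in graph[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Module D directly uses only module A, yet it transitively
# depends on modules A, B, and C.
```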

“What’s so bad about that?” you might ask.

Suppose that we make a modification to module C. We could inadvertently break module D, module A, and module B since they depend on module C.

Now, you might argue that a good test suite would prevent that … and you would be right.

However, consider the amount of work to maintain your test suite.

If you changed module C then you would likely need to add test cases for it, and since dependencies are transitive you would likely need to alter/add test cases for all the dependent modules.

That is an exponential growth of complexity; so, good luck with that. (See my post “Testing software is (computationally) hard” for an explanation). 

It gets worse, though.

The transitive dependencies make it nearly impossible to reuse your modules.

Suppose you wanted to reuse module A. Well, you would also need to reuse module B and module C, since module A depends on them. However, what if module B and module C don't do anything that you really need, or, even worse, do something that impedes you?

You have forced a situation where you can’t use the valuable parts of the code without also using the worthless parts of the code.

When faced with this situation, it is often easier to create an entirely new module with the same features of module A. 

In my experience, this happens more often than not.

There is a module in this chain of dependencies that does not suffer this problem: module C.

Module C does not depend on anything; so, it is very easy to reuse. Since it is so easy to reuse, everyone has the incentive to reuse it. Eventually you will have a system that looks like this.


Guess what that center module is: THE FRAMEWORK.


This is a classic example of code immobility.

Code mobility is a software metric that measures the difficulty in moving code. 

Immobile code signals that the architecture of the system doesn’t support decoupling.

You can argue that the framework has so many reusable components that we ought to couple directly to it.

Ultimately, we value code reuse because we want to save time and resources, and reduce redundancy by taking advantage of already existing components.

Isn’t that exactly what the framework gave us?

Well, yes and no.

It depends on what you want to reuse.

Are you trying to reuse infrastructure code? By all means, please use a framework for that.

Are you trying to reuse domain specific business logic? You probably don’t want to couple your business logic directly to a framework.

An example of coupling your business logic directly to a framework is your typical MVC-ORM design. I’ve already explained this in my blog post “MongoDB, MySQL, and ORMS: when and where to use them”; so, I will not elaborate on it, here.

The Best of Both Worlds  

So it seems that we are at an impasse: we want to reuse infrastructure code from a framework, but we also want our business logic to be independent of it.

Can’t we have it both?

Actually, we can.

The direction of the arrow in our dependency graph makes our code immobile.

For example, if module A depends on module B then we have to use module B every time we use module A.

However, if we inverted the dependencies such that module A and module B both depended on a common module — call it module C — then we could reuse module C since it has no dependencies. This makes module C a reusable component; so, this is where you would place your reusable code.

The Dependency Inversion Principle exists to enable this pattern.

The typical repository pattern is the perfect example of how inverting dependencies can decouple your business logic from a framework. I have already talked about it in my post “Contract Tests: or, How I Learned To Stop Worrying and Love the Liskov Substitution Principle”; so, I won’t elaborate on it, here.
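Here is a minimal Python sketch of the inversion, in the spirit of the repository pattern. All of the names (UserRepository, InMemoryUserRepository, greet_user) are illustrative, not from any particular framework:

```python
from abc import ABC, abstractmethod

class UserRepository(ABC):
    """The abstraction both sides depend on (our "module C")."""
    @abstractmethod
    def find_name(self, user_id): ...

def greet_user(repo: UserRepository, user_id) -> str:
    # Business logic: depends only on the abstraction,
    # never on a concrete framework class.
    return f"Hello, {repo.find_name(user_id)}!"

class InMemoryUserRepository(UserRepository):
    # A framework-backed implementation (an ORM, say) would live at
    # the edge of the system; here we fake one with a dict.
    def __init__(self, users):
        self._users = users

    def find_name(self, user_id):
        return self._users[user_id]

repo = InMemoryUserRepository({1: "Ada"})
greeting = greet_user(repo, 1)
```

Because greet_user only knows about UserRepository, we can move it to another framework by writing a new implementation of the interface, without touching the business logic.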

So how would you structure your application to support the inverted dependencies?

Well, there are multiple legitimate ways to do it, but I use the strategy of placing the bulk of the application's code in external libraries.

Under this model, the framework's views and controllers only serve as a delivery mechanism for the application.

Consequently, we can easily move our code to another framework because the components do not depend on the framework in any way.

As an added benefit, we can test the bulk of the application independent of the framework. 

In fact, this is your typical Component Oriented Architecture.

For example, a standard Component Oriented Architecture will break large C++ applications into multiple DLL files, large Java applications into multiple JAR files, or large .NET applications into multiple assemblies.

There exist rules about how to structure these packages, however.

You could argue that we would spend too much time with up front design when designing this type of system.

I agree that there is some up-front cost, but that cost is easily offset by the time savings on maintenance, running tests, adding features, and anything else that we want to do to the system in the future.

We should view writing mobile code as an investment that will pay big dividends over time.

I plan to create a very simple demo application that showcases this design in the near future. Stay tuned.

The Nature and Scope of E-Prime

A friend recently criticized my understanding and use of E-Prime. This resulted in a short conversation about the nature and scope of E-Prime, and a deeper understanding of E-Prime for us.

First let me give you some context.

My friend writes science fiction short stories, and he occasionally will give me a pre-print of something he wants to publish for feedback.

Recently, he started to experiment with E-Prime in his stories.

E-Prime prescribes that you never use the verb “to be” or any of its conjugations and contractions.

For example, E-Prime would not allow me to say “I am Jonathan” because the word “am” is the first person present tense conjugation of the verb “to be”.

In order to express the same idea under E-Prime, I would have to say something like “I call myself Jonathan”, or “you can call me Jonathan”.

Why Communicate in E-Prime?

My friend wanted to constrain his language to be less judgemental, and he argues that E-Prime leads to less judgemental language.

When you write a sentence like “Jonathan is a bad person” the reader might assume that you are passing judgement, and are thus a judgemental person. However, if you rephrase the sentence to “Jonathan seems like a bad person” then the reader can only conclude that you are simply giving an opinion.

The clause "Jonathan is a bad person" is an example of class membership.

In this case, you are saying that Jonathan belongs to the set of bad people. However, some people might argue that Jonathan belongs to the set of good but misunderstood people. After all, who are you to pass judgement on such a person?

Further, it is very hard to control how people will interpret your words. A good writer will find ways to clearly express his ideas in unambiguous ways (assuming that is his intention).  Since the verb “to be” can have many different uses depending on the context, readers might interpret it differently.

This happens because the verb "to be" has many different uses. For example, the Wikipedia page on E-Prime states that the verb "to be" can express (a) identity, (b) class membership, (c) class inclusion, (d) predication, (e) auxiliary, (f) existence, and (g) location.

Hence, E-Prime leads to stronger, less ambiguous writing, which happens to be less judgemental: you can't really express class membership unless you go out of your way to do it.

When Things Get Awkward

I personally believe that E-Prime is a great rule to follow. However, I couldn’t help but notice odd phrasings when I read his story.

The odd phrasings throughout the story made it incredibly difficult to actually enjoy the story because it would interrupt my train of thought.

For example, the clause “she had not stopped eating by the time he arrived” just seems incredibly awkward, and that awkwardness defocused me.

Further, the best way I know how to rewrite this clause is as “she was eating when he arrived”. However, this rewriting involves the third person present progressive conjugation of the verb “to be”.

The Intention of the Law vs. the Letter of the Law

I realized at this point that we actually use the verb "to be" for both semantic and syntactic reasons. Identity, class membership, class inclusion, predication, existence, and location are all semantic uses of the verb "to be", but using a conjugation of "to be" as an auxiliary verb is a syntactic use.

For example, "Jonathan is a bad person" is a semantic use of the verb "to be". In this case, we are expressing class membership. However, "she was eating when he arrived" is a syntactic use of the verb "to be". In this case, we are simply conjugating the verb "to eat", and we are using a conjugation of "to be" to do it.

By disallowing the verb "to be" in all of its uses, we also disallow all forms of the progressive and perfect progressive tenses. This is a very high price to pay, IMO.


E-Prime is great. However, you should use it to constrain your semantics, not your syntax. E-Prime is a means to an end.

I personally like to use it to achieve a more concise and expressive writing style. However, you can only do that if you use it with intention, not by blindly following the rule.

Testing software is (computationally) hard

In my post "A Tale of Two Approaches", I mentioned that integration testing is an NP-Hard problem. This claim seems very intuitive to me, but not to other people; so, it seems appropriate for me to justify it through argumentation.

Why test software?

According to Rice's Theorem, "there exists no automatic method that decides with generality non-trivial questions on the behavior of computer programs". This means that the general problem of knowing whether a program runs correctly is undecidable (it cannot be solved by an algorithm).

This limitation forces developers to either (a) prove their algorithms correct using some formal system or (b) write tests for “functional correctness” by verifying that expected inputs match expected outputs.

Most developers choose the latter because providing exact proofs costs too much time and money.

For example, suppose that I wrote the following Python function to square a number:
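A minimal sketch of such a function (the name square is my choice):

```python
def square(x):
    """Return x multiplied by itself."""
    return x * x
```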

A program to test that function could look like the following:
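A minimal sketch using Python's unittest module. The class name TestSquareMethod and the test name test_2_squared_equals_4 are referenced later in this post; the square function is repeated so that the sample stays self-contained:

```python
import unittest

def square(x):
    """The function under test."""
    return x * x

class TestSquareMethod(unittest.TestCase):
    def test_2_squared_equals_4(self):
        # Arrange: choose the input.
        x = 2
        # Act: run the code under test.
        result = square(x)
        # Assert: verify the expected output.
        self.assertEqual(result, 4)
```

You can run it with python -m unittest.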

Problems with software testing

The process of writing very specific functional tests often results in a large number of tests for even simple behavior.

Unfortunately, to be completely thorough you would have to write tests for all possible inputs and verify that you get the expected output. This is completely unrealistic.

Now, I don’t know of a good way to prove that claim, but the theory of NP-completeness comes very close: if I can prove that functional testing belongs to the class of NP-complete problems then it is just as hard as the hardest problems in computer science.

Let me give a quick explanation of NP-completeness so you have enough background to understand my claim.

What is NP-completeness?

In computer science, we often try to write software that solves mechanical problems as efficiently as possible. However, there exists a certain class of "decision" problems for which we can verify a proposed solution efficiently, yet we know of no algorithm that solves every instance efficiently. We call the hardest of these problems NP-Complete.

We also have a way to identify new NP-Complete problems: "efficiently" reduce a problem already known to be NP-Complete to the problem in question.

Why are functional tests NP-Complete?

Recall the class TestSquareMethod in the code sample above.

That test uses the arrange-act-assert paradigm which organizes tests into 3 logical groups:

  • Arrange all necessary preconditions and inputs
  • Act on the object or method under test
  • Assert that the expected result occurs

This model has the same logical form as the traditional if-then statement from propositional logic: if the properties in the arrange grouping are true then the properties in the assert grouping are true when I run the program.

Using this fact, I claim (but will not prove) that any arrange-act-assert style test is the same as a logical proposition and vice versa.

For example, by using the rule of material implication, I can model the test "test_2_squared_equals_4" with the boolean formula p -> q, which is equivalent to ~(p ^ ~q), where p represents the "preconditions" in the arrange group and q represents the "postconditions" in the assert group. An input that makes the test fail is then exactly an assignment that satisfies p ^ ~q. Conversely, I can represent any boolean formula of this form with such a test.

Suppose that I asked the question: Given a functional testing application written in an arrange-act-assert style, does there exist some input that obeys the preconditions but whose output does not obey the postconditions?

Alternatively, we can phrase the question like this: does there exist some arrangement of inputs that does not satisfy the boolean formulas associated with a collection of functional tests?

Now, there exists an NP-Complete problem called SAT, which asks whether there exists an interpretation that satisfies a boolean formula, and I claim (from intuition) that we can reduce the SAT problem to this problem.
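To make that intuition concrete, here is a toy Python sketch of the reduction (the encoding and names are mine): wrap a CNF formula in a program whose postcondition says the formula is falsified; an input that obeys the trivial preconditions but violates the postcondition is exactly a satisfying assignment.

```python
from itertools import product

# A tiny SAT instance in CNF: (x1 OR NOT x2) AND (x2 OR x3).
# Positive integers are variables; negative integers are negations.
CLAUSES = [[1, -2], [2, 3]]

def satisfies(assignment, clauses):
    """Check whether a truth assignment satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def program_under_test(x1, x2, x3):
    # The postcondition asserts this returns True; it fails
    # exactly on assignments that satisfy the formula.
    return not satisfies({1: x1, 2: x2, 3: x3}, CLAUSES)

# Exhaustive functional testing: hunt for an input whose output
# violates the postcondition. One exists iff the instance is satisfiable.
failing_inputs = [
    combo
    for combo in product([False, True], repeat=3)
    if not program_under_test(*combo)
]
```

Deciding whether failing_inputs is non-empty for an arbitrary formula is the SAT problem in disguise.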



I want to appeal to your intuition rather than provide something very rigorous.

Providing a proper proof would likely involve Turing Machines and Hoare Logic, and I just don’t want to do the necessary work.

It would also be boring.


Some people claim that writing functional tests is easy.

I call bullshit.

Writing functional tests is not just hard; it is NP-Hard.

Installing TypeScript on Ubuntu 14.04

I recently had to use TypeScript on an Ubuntu machine, and I could not get it to work. I eventually found out how to fix it; so, I thought I would use this blog post as an opportunity to document the process in case someone else runs into the same problem.


Ensure that you have Git and Node.js installed
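On Ubuntu 14.04 you can typically get both from the stock repositories (the package names here are my assumption; note that on this release the Node.js binary installs as nodejs, which causes the problem described below):

```shell
sudo apt-get update
sudo apt-get install -y git nodejs npm
```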

Clone a copy of the repo:
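Assuming you want the official Microsoft repository:

```shell
git clone https://github.com/Microsoft/TypeScript.git
```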

Change to the TypeScript directory:
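The clone lands in a directory named TypeScript:

```shell
cd TypeScript
```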

Install Jake and dev dependencies
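Something like the following should work (jake installed globally, then the package's dev dependencies; the exact steps may differ between TypeScript versions):

```shell
sudo npm install -g jake
npm install
```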

Make TypeScript use nodejs instead of node

At this point, you have installed TypeScript successfully. However, you need to make it use nodejs instead of node. I know of two approaches:

  • Edit /usr/local/bin/tsc
  • Change update-alternatives

I suggest you change update-alternatives. However, that will also make all calls to node use nodejs, which may cause you problems. Pick whichever approach makes more sense for you.

Edit /usr/local/bin/tsc

Open /usr/local/bin/tsc and change its first line (the interpreter line) from this:

#!/usr/bin/env node

To this:

#!/usr/bin/env nodejs

Change update-alternatives

You can update your alternatives with the following command line argument:
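One plausible form of the command (the link path and priority here are my assumptions; see the update-alternatives man page for the --install syntax: link, name, path, priority):

```shell
sudo update-alternatives --install /etc/alternatives/node node /usr/bin/nodejs 100
```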

Now, update your PATH to load files from /etc/alternatives first

You can typically do this by adding the following to $HOME/.bashrc:
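For example (assuming a bash shell):

```shell
# Search /etc/alternatives before the rest of the PATH.
export PATH="/etc/alternatives:$PATH"
```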

You can not solve political problems with technology

Politics will kill a project far faster than anything else. I can speak from a position of knowledge since I’m an employee of a giant acquisition company, and I’ve seen my fair share of “system integration” attempts.

Let me justify this statement with a little thought experiment.

Melvin Conway introduced Conway's Law in 1968 to express his observation that "organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations".

Essentially, people will build systems around themselves.

As a consequence, integrating different systems necessarily introduces political problems because now you have forced people to work together, and they may not necessarily like each other.

For example, I've seen situations where different groups within an organization (e.g. Accounting and Marketing) have two separate LDAP authentication servers or two separate Windows domain servers.

Now suppose a CTO demands that these two groups integrate their systems because the CFO of the company has a spreadsheet showing huge cost savings associated with integrating them.

Unfortunately, you can only understand so much about your organization from a spreadsheet.

These groups know about each other's existence, and they also know about each other's IT structure. Yet, they did not try to integrate their systems on their own. This suggests that someone in these departments knows something that the CTO and CFO don't.

In this case, the CTO and CFO don't realize that Marketing and Accounting built their systems this particular way because the two groups have a huge beef with each other.

For illustration's sake, let's suppose we have Ange as the director of Marketing and Bryan as the director of Accounting. Suppose that Ange and Bryan do not get along, and do as much as possible to "protect" their respective groups from each other. That would lead them to create an IT infrastructure that separates them from each other.

This situation creates a type of “cold war” between the groups, but still enables them to work together for the benefit of the larger company.

However, the moment the CTO forces them to integrate their systems, that "cold war" turns into a "hot war". Also, those personal problems don't go away if a third party does the integration on behalf of the two groups. If anything, that third party simply gets caught in the crossfire, or ends up having to act as peacemaker.

Ultimately, you can’t really solve political problems with technology. I’ve never been to business school, but I hear that one of the first things you learn is that most mergers fail through cultural incompatibility between acquirer and acquiree. Having worked with our various subsidiaries, I understand why that is the case.

Personally, I believe that you really have to solve the human problem first before you ever try to integrate systems. If you can’t solve that problem then the only alternative is to simply fire your entire management team and hire completely new people.