How To Build a Prediction API in 10 Minutes with Flask, Swagger, and SciPy

I’ve seen a lot of hype around prediction APIs recently. This is obviously a byproduct of the current data science fad.

As a public service, I’m going to show you how you can build your own prediction API … and I’ll do it by creating a very basic version in 10 minutes.

We will build an API that will determine if we should provide credit to someone based on certain demographic information.

We will use Kaggle’s “Give Me Some Credit” dataset as the basis for this example.

Go to the “Give Me Some Credit” page, and download the files.

You will have 4 files:

  • cs-training.csv
  • cs-test.csv
  • sampleEntry.csv
  • DataDictionary.xls

We will only need the cs-training.csv and DataDictionary.xls files for this project.

Create application folder

Use the following commands to create a directory and move into it.

In the directory create a file called create_credit_classifier.py. We will incrementally build the program in that file.
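As a sketch of those setup commands (the directory name here is my own choice, not from the original post):

```shell
# directory name is illustrative -- use any name you like
mkdir credit_api
cd credit_api

# the file we will incrementally build
touch create_credit_classifier.py
```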

Load Data Set

We need to parse the cs-training.csv file so that we can make sense of the data. The following shows the first 5 lines of data from cs-training.csv.

We can observe the following fields in the file:

  • SeriousDlqin2yrs
  • RevolvingUtilizationOfUnsecuredLines
  • age
  • NumberOfTime30-59DaysPastDueNotWorse
  • DebtRatio
  • MonthlyIncome
  • NumberOfOpenCreditLinesAndLoans
  • NumberOfTimes90DaysLate
  • NumberRealEstateLoansOrLines
  • NumberOfTime60-89DaysPastDueNotWorse
  • NumberOfDependents

DataDictionary.xls contains a description of each column, except for the first. However, the first column is obviously an id column.

For this example, we want to predict if someone is likely to be a credit risk based on past data.

The column “SeriousDlqin2yrs” is our outcome (the value we want to predict), and the rest of the columns are our input features.

We want to create a classifier that, given the input features, can predict the outcome. In order to do that we need to do what is known as “feature extraction”. The following code will do that with pandas.
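As a sketch of that step (the inline CSV below is a stand-in for cs-training.csv; in the actual script you would call pd.read_csv("cs-training.csv")):

```python
import io

import pandas as pd

# A few rows in the shape of cs-training.csv (stand-in for the real file)
csv_data = io.StringIO(
    ",SeriousDlqin2yrs,RevolvingUtilizationOfUnsecuredLines,age,MonthlyIncome\n"
    "1,1,0.77,45,9120\n"
    "2,0,0.96,40,2600\n"
    "3,0,0.66,38,3042\n"
)

df = pd.read_csv(csv_data, index_col=0)  # the first, unnamed column is just an id
df = df.dropna()                         # drop rows with missing values

outcome = df["SeriousDlqin2yrs"].values                # what we want to predict
features = df.drop("SeriousDlqin2yrs", axis=1).values  # everything else

print(features.shape)  # (3, 3)
```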

Generate Training and Testing Set

We now have to separate our data into two disjoint sets: a training set, and a testing set.

We have to do this because we will use “cross-validation” to measure the accuracy of our predictive model.

We will train our classifier on the training set and test its accuracy on the testing set.

Intuitively, our classifier should classify credit risks in the testing set the same way it would in the real world. This makes the testing set a proxy for how it would behave in production.
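A sketch of that split using scikit-learn’s train_test_split (the toy arrays and the 70/30 ratio are stand-ins, not values from the post):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for the real feature matrix and outcome vector
features = np.arange(20).reshape(10, 2)
outcome = np.array([0, 1] * 5)

# Hold out 30% of the rows as the disjoint testing set
X_train, X_test, y_train, y_test = train_test_split(
    features, outcome, test_size=0.3, random_state=0)

print(len(X_train), len(X_test))  # 7 3
```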

Define Classifier Type

Scikit-learn (built on top of SciPy) comes with a bunch of baked-in classifiers. We will use the Gaussian Naive Bayes classifier for this example.

Train Classifier

To train the model we simply have to feed the classifier the input features and the outcome variable.
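Training really is a single call. A minimal sketch with scikit-learn’s GaussianNB on toy data (the clusters below are illustrative, not from the dataset):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy training data: two well-separated clusters stand in for the real features
X_train = np.array([[1.0], [1.1], [0.9], [10.0], [10.1], [9.9]])
y_train = np.array([0, 0, 0, 1, 1, 1])

classifier = GaussianNB()
classifier.fit(X_train, y_train)  # training is this single call

print(classifier.predict([[1.05]]))  # [0]
```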

Validate Classifier

Now that we have our classifier, we verify its predictions against our test set.

The output for this script is the following

The output shows that we have 92% accuracy, with the following breakdown:

  • 55737 true positives
  • 110 true negatives
  • 212 false positives
  • 4076 false negatives
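Counts like these come from a confusion matrix. A sketch with scikit-learn’s metrics on toy labels (note that scikit-learn lays the matrix out as [[tn, fp], [fn, tp]]):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Toy true and predicted labels standing in for the real test-set results
y_test = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(tn, fp, fn, tp)                  # 4 1 1 2
print(accuracy_score(y_test, y_pred))  # 0.75
```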

Save Classifier

With our classifier done, we can save it so that we can use it in a separate program.
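A sketch of saving and restoring a trained classifier with the standard pickle module (scikit-learn models pickle cleanly; the file name is an assumption):

```python
import pickle

import numpy as np
from sklearn.naive_bayes import GaussianNB

# A small trained model standing in for the real classifier
classifier = GaussianNB()
classifier.fit(np.array([[0.0], [1.0], [9.0], [10.0]]), np.array([0, 0, 1, 1]))

# Serialize the trained classifier to disk ...
with open("classifier.pkl", "wb") as f:
    pickle.dump(classifier, f)

# ... and load it back, e.g. from a separate program
with open("classifier.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.predict([[0.5]]))  # [0]
```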

Create Web API

With our model created, we can now create our web service that can decide if we should give credit to someone based on certain demographic information.

Create the file credit_api.py.

Install flask-restplus from the command line

flask-restplus makes creating Flask applications with Swagger documentation much simpler.

The following code sets up the scaffolding for a flask application.

The following code sets up the request parameters for our web service.

This code takes the request parameters, feeds them into the model, and determines the eligibility for extending credit.
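The original post uses flask-restplus for this; as a rough sketch of the same endpoint in plain Flask (the route, parameter names, and the decision rule below are illustrative stand-ins, not the real model):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/credit")
def credit():
    # In the real service these parameters would be fed into the unpickled model
    age = request.args.get("age", type=int)
    monthly_income = request.args.get("monthly_income", type=int)
    if age is None or monthly_income is None:
        return jsonify({"error": "missing parameters"}), 400
    # Stand-in rule in place of classifier.predict(...)
    approved = monthly_income > 2000
    return jsonify({"extend_credit": approved})

# start the development server with: app.run(port=5000)
```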

You can start the flask app from the command line

And you can use the web interface by visiting localhost:5000

credit_api_flask

You can also use curl to get a response from the flask app

Conclusion

So there you have it: a prediction API built in about 10 mins.

I would never actually put this into production. A real production prediction API would need to handle edge cases, and we would need to do model selection.

However, the basic nuts and bolts of a prediction API are pretty straightforward. There really isn’t any magic to building a prediction engine.

Scalable Cross-Domain “Cookie” Authentication/Authorization: An Example with macaroon.js

Recently, cookie authentication has fallen out of favor in the web community for its inability to port across domains and scale.

However, there exists a method to use cookies in a distributed fashion that is both portable across domains and scalable: macaroons (it’s a better kind of cookie).

Google introduced the world to macaroons in 2014 with a white paper. Unfortunately, there is not much documentation or many code examples for them; so, I thought I’d help fill that void with this blog post.

(Not) Introducing Macaroons

I will not talk about the theory of macaroons. You can use the following resources to get the theory:

A Basic Example

Suppose that you had a website that needs to authorize a user and authenticate their access before it can do anything. You probably would have the server send the user a cookie. The user would then send the cookie value to the server with each request, and the server would use that value to determine authentication and authorization details.

This has severe limitations because we cannot easily share that cookie across domains, and most attempts to do so usually introduce network bottlenecks. I will not go into the details of why this happens; you can easily google it.

Macaroons overcome these limitations by allowing you to put arbitrary restrictions on a “cookie”. Further, these “cookies” can include information from third parties.

This allows you to easily share them across domains and distribute them arbitrarily.

Let’s consider a simple example.

Suppose we had an application that we logically divided into (a) the browser, (b) the service provider, and (c) the identity provider.

The browser contains the code that powers user interaction, the service provider contains the code that powers data access and manipulation, and the identity provider powers user management of the system.

Suppose that the user wants to see his stock portfolio on a web page.

In order to satisfy this use case, our system needs to complete the following tasks:

  • Verify the user’s identity
  • Verify the user’s rights
  • Find the data
  • Show the data

The following sequence diagram shows how we can satisfy those tasks with the help of macaroons.

macaroon_authentication

I will use node.js and the macaroon.js module to power this example. I will only provide code snippets that are relevant to the most important elements of the use case.

Request Macaroon

For this example, the browser will kick-off the macaroon process by making a request to sp.domain.com for a macaroon. Let’s suppose that sp.domain.com exposed the url “/api/macaroons” for that purpose.

Generate Macaroon

The code to generate and return a macaroon could look like the following:

This code creates a first party caveat for the macaroon.

A first party caveat is an enforceable restriction on the macaroon created by the minter of the macaroon. In this case, we restrict this macaroon to a particular IP address.

The code also creates a third party caveat for idp.domain.com. A third party caveat is a restriction that we delegate to another party. In this case, we are requiring the recipient of the macaroon to pass the macaroon to the idp to authorize his requests.

The ability to delegate attenuation to third parties is the secret sauce for macaroons. With normal cookies the service provider would have to manage and coordinate all the services associated with authentication/authorization. However, with macaroons, we can simply delegate authority to a trusted third party.

For security reasons, we generate the caveat key on sp.domain.com and encrypt it using the public key for idp.domain.com. We have to pass the encrypted caveat key to browser along with the macaroon.

The following is the code that could handle the response from /api/macaroons.

The browser is responsible for coordinating the authentication for the macaroon. In this case, the callback sends the encrypted caveat key and the macaroon to idp.domain.com along with a username and password.

Verify Third Party Caveats

The idp will verify the username and password of the user. If that passes then we can create a discharge macaroon and apply it to our original macaroon. When we create a discharge macaroon, we are satisfying the delegation request from sp.domain.com. In this case, we also assign a timestamp to the macaroon to enable revocation. We are telling sp.domain.com that this macaroon is only good until the time set.

We expect sp.domain.com to obey that restriction.

idp.domain.com will send the original macaroon and the discharge macaroon to the callback. We need them both because the macaroon HMAC chaining protocol requires both to verify the authenticity of the macaroon.

The above code makes a request to sp.domain.com for the portfolio information. It also sends the root macaroon and discharge macaroon with the request.

Request Data

We now have enough information to verify the authenticity of the macaroon. I know that we have some handwavy helper functions that perform the verification. The particulars are not important, though; the verification details will always change depending on the application.

Conclusion

This is not production ready code. There is no error checking or edge case handling. This is just a bare bones example of how you could use macaroons in a distributed environment.

We can easily scale this process up to an arbitrary number of servers. Hopefully, you’ll be able to use this as a starting point for building your own macaroons.

How to Unit Test Your D3 Applications


TL;DR: you can get the entire code at my repository.


I recently found a way to unit test d3 code, and it has transformed my approach to writing d3 applications.

I want to share this with the d3 community because they typically don’t emphasize unit testing.

Consider the following d3 code.

This code does the following:

  • append a red circle to a canvas
  • change the color of the circle to green when the user clicks on it.

Most of the time, we rely on our eyeballs to verify the correctness of d3 applications.

For example, this is what the canvas looks like when D3 renders it.

pic1

and D3 will render the following when the user clicks on the circle

pic2

In this case, we can manually verify that the code works.

Unfortunately, we can’t always rely on human eyeball testing.

However, with the jsdom module, we can programmatically verify whether the browser updated the DOM properly: we can (a) set expectations for our DOM, (b) write d3 code to fulfill our expectations, and (c) automatically run the tests with gulp.

First let’s setup our project.

Setup Project

Create a directory for our project, and initialize it as an npm package

Create a directory for our source code and our tests

Install the necessary npm modules

Create gulpfile.js

Create the d3 Application

Put the following code under src/Circle.js

The Circle module takes two arguments. The first argument is the dom element that we will append the svg canvas to. The second argument is the id we will give the circle. We will need this id for testing.

Testing the d3 Application

Create the file test/Circle.spec.js with the following contents

So far we have not added any tests to the test suite. This is only boilerplate code that we will need to write our tests.

The function d3UnitTest encapsulates the jsdom environment for us. We will use this function in each test to alter the dom and test it afterwards.

Let’s create our first test.

Append the first test to our test suite.

In this test, we simply check whether the id “circleId” exists within the DOM. We assume that if it exists then the circle exists.

Notice the done parameter in our function. We use it to create an asynchronous test: jsdom runs asynchronously and requires a callback, so we manually call done() to inform jasmine when the test has finished.

Append the second test to our test suite.

Append the third test to our test suite

This code simulates a click on the circle. You have to know a little bit about the internals of d3 to make it work. In this case, we have to pass the circle object itself, and the datum associated with the circle to the click function.

FYI, I couldn’t find a way to get jsdom to simulate user behavior; so, you have to understand the d3 internals if you want to test user behavior. If you know how to use jsdom to simulate user behavior then please let me know how you do it because this method is very error prone. 

Run the Tests

When we run gulp from the command line we should see the following

Conclusion

I understand that I gave a pretty simple and contrived example. However, the principles that I demonstrated apply to more sophisticated situations. I can vouch for this because I have already written tests for very complicated behaviors.

There is a downside to this approach. You have to use node.js and write your d3 code as node modules. Some people might not like to have this dependency. Personally, I don’t find it very inconvenient. In fact, it has forced me to write very modular d3 code.

It is also extremely easy to deploy my d3 code using browserify with this method.

Your mileage may vary with this approach. I’m sure that there are ways you can achieve the same result with browser-based tools. For example, I’m pretty sure that you could do something similar with protractor.

The Nature and Scope of E-Prime

A friend recently criticized my understanding and use of E-Prime. This resulted in a short conversation about the nature and scope of E-Prime, and a deeper understanding of E-Prime for us.

First let me give you some context.

My friend writes science fiction short stories, and he occasionally will give me a pre-print of something he wants to publish for feedback.

Recently, he started to experiment with E-Prime in his stories.

E-Prime prescribes that you never use the verb “to be” or any of its conjugations and contractions.

For example, E-Prime would not allow me to say “I am Jonathan” because the word “am” is the first person present tense conjugation of the verb “to be”.

In order to express the same idea under E-Prime, I would have to say something like “I call myself Jonathan”, or “you can call me Jonathan”.

Why Communicate in E-Prime?

My friend wanted to constrain his language to be less judgemental, and he argues that E-Prime leads to less judgemental language.

When you write a sentence like “Jonathan is a bad person” the reader might assume that you are passing judgement, and are thus a judgemental person. However, if you rephrase the sentence to “Jonathan seems like a bad person” then the reader can only conclude that you are simply giving an opinion.

The clause “Jonathan is a bad person” is an example of a class membership.

In this case, you are saying that Jonathan belongs to the set of bad people. However, some people might argue that Jonathan belongs to the set of good but misunderstood people. After all, who are you to pass judgement on such a person?

Further, it is very hard to control how people will interpret your words. A good writer will find ways to clearly express his ideas in unambiguous ways (assuming that is his intention).  Since the verb “to be” can have many different uses depending on the context, readers might interpret it differently.

This happens because the verb “to be” has many different uses. For example, the E-Prime wikipedia page states that the verb “to be” can be used to express (a) identity, (b) class membership, (c) class inclusion, (d) predication, (e) auxiliary, (f) existence, and (g) location.

Hence, E-Prime leads to stronger, less ambiguous writing which happens to be less judgemental: you can’t really express class membership unless you really go out of your way to do it.

When Things Get Awkward

I personally believe that E-Prime is a great rule to follow. However, I couldn’t help but notice odd phrasings when I read his story.

The odd phrasings throughout the story made it incredibly difficult to actually enjoy the story because it would interrupt my train of thought.

For example, the clause “she had not stopped eating by the time he arrived” just seems incredibly awkward, and that awkwardness defocused me.

Further, the best way I know how to rewrite this clause is as “she was eating when he arrived”. However, this rewriting involves the third person present progressive conjugation of the verb “to be”.

The Intention of the Law vs. the Letter of the Law

I realized at this point that we actually use the verb “to be” for both semantic and syntactic reasons. Identity, class membership, class inclusion, predication, existence, and location are all semantic uses of the verb “to be”, but using a conjugation of “to be” as an auxiliary verb is a syntactic use.

For example, “Jonathan is a bad person” is a semantic use of the verb “to be”. In this case, we are expressing class membership. However, “she was eating when he arrived” is a syntactic use of the verb “to be”. In this case, we are simply conjugating the verb “to eat”, and we are using a conjugate of “to be” to do it.

By disallowing the verb “to be” in all of its uses, we also disallow all forms of the progressive and perfect progressive tenses. This is a very high price to pay, IMO.

Conclusion

E-Prime is great. However, you should use it to constrain your use of semantics, and not syntax. E-Prime is a means to an end.

I personally like to use it to achieve a more concise and expressive writing style. However, you can only do that if you use it with intention, not by blindly following the rule.

How To Use Facebook Login with React and Babel


TL;DR: You can check out the complete application at this Github repository


Update (2015-12-17): This post is out of date. I will update this post for React 0.14, soon.

I have a side project that uses React and Babel in which I needed to use Facebook Login. Unfortunately, I had to figure it out myself because I couldn’t find good documentation.

I’ve created this blog post so those who want to do the same thing don’t have to suffer like I did. Hopefully, Google managed to index this page so that you could find it.

I will go through a step-by-step process. You will likely need to customize this process for your application. This is just a bare bones example.

Let’s start by setting up your Facebook Account.

Setup Your Facebook Application

Login to your Facebook Developer Account

Go to https://developers.facebook.com/products/analytics/, and log in to your account.

You should see something similar to the screenshot below when you click on the “My Apps” link

001_create_facebook_app

Create a New App

When you click on the “Create a New App” button, the website should show a popup similar to the screenshot below.

002_create_a_new_app

Give your project a name and category.

I named my project “React Login POC”, and assigned it to the category “Apps for Pages”.

Move Back to the App Dashboard

After you create your app, you can go to the dashboard to configure it. I have provided a screenshot of what my dashboard looks like.

003_app_dashboard

Choose Platform

When you click on the “Choose a Platform” button on the dashboard, the website will present you with the following options.

004_choose_platform

Choose the website option.

Setup the Site Url

You now have to configure the site url for your app. I chose to host my application at http://localhost:3000. I have provided a screenshot below.

006_site_url

How you setup your site url will determine how you build your gulp workflow later.

Go to the Advanced Settings

Go to the settings page for your application and click on the Advanced tab.

007_advanced_settings

Setup the Redirect URI

Enable “Embedded Browser OAuth Login”, and include the same host information you provided for the site url as one of the options for “Valid OAuth redirect URIs”.

008_redirect_uri

Setup the Project

Now that we have configured our facebook app, we can start to build the react application. Let’s start by setting up the project.

I will assume that you are using linux.

Create a directory for the project and change to the directory:

Initialize the directory as a node project:

Install (a) gulp, (b) jasmine, (c) browserify, (d) eventemitter, (e) run-sequence, (f) vinyl-source-stream, (g) gulp-connect, (h) opn, and (i) react with the following commands

Create the following folder structure

  • src
  • app

with the following commands

We will place our react code in src, and the generated code in app.

Build the Gulp Workflow

I modified the gulp workflow from gulpfile that I made for the project in my last blog post.

This workflow uses gulp-connect and opn to start a webserver and open a web browser after successfully building the application.

We have to do this because Facebook makes us register a site url and redirect URI with them in order to use their Facebook Connect SDK.

If we tried to run the application directly from the file system then Facebook would deny our attempt to authenticate with them.

Build the React Application

Generate the Facebook Login Button

Facebook provides an easy way to generate a Facebook Login Button.

Go to the following webpage to generate a Facebook Login Button: https://developers.facebook.com/docs/facebook-login/web/login-button

009_generate_fb_button

Make sure to enable the logout button because we will use that feature in our application.

When you click on the “Get Code” button, you will see the following popup.

010_javascript_code

We will use and modify the provided code to suit our react application.

Build the Facebook Login Button in React

Facebook provided us with the following code snippet as part of the facebook login button process.

We will use this in our React Component to build our Facebook Button.

Create the file src/FacebookButton.jsx with the following contents:

This component will do several things:

  • It will display a facebook login button based on what we generated above
  • When we login, it will show us our name
  • When we logout, it will remove our name

Notice that we provided a FB module to the constructor, and that we use it to respond to events. We will need to setup the mechanism where we can provide this component with the FB module through dependency injection.

Create the Main React Component

Every application has a main component that bootstraps the application.

The following code serves that purpose.

Create the file src/Main.jsx with the following contents

Create the Index Page

When we generated the login button, Facebook provided us with the following code snippet.

We have to modify this a bit to get our Facebook Button to work properly.

Create app/index.html with the following contents:

I am eagerly loading the javascript sdk instead of dynamically loading it because I need to make sure that the FB module is available when I inject it into FacebookButton.

Execute the Application

We can now run our application with gulp.

Once gulp completes, it should open a browser and display the webpage. It should look something like this:

login_before

When you click on the “Log In” button, the webpage will open a popup for you to enter your username and password.

facebook_login

After you successfully log in, you will have to grant the application permission to access your profile information.

facebook_permission

Once you click “Okay”, you should see the welcome message with your name, and the “Log Out” button.

login_after

Conclusion

This is not a particularly elegant way of doing it. I’m sure that there are much better ways, but it does get the job done. Hopefully, you can use it to build something that suits your needs.

How to Build React Applications with Node.js, Jasmine, Babel, and Gulp


TL;DR: you can find the entire code at my github repository.


Facebook’s React has completely changed the way that I write applications. From my perspective, most javascript frameworks unnecessarily force you into particular programming styles. For example, I often hear people tell me to do things the “angular way” (whatever that means).

This isn’t necessarily a bad thing. Programming standards enable team development. However, I believe that the software architect or the team itself should develop those standards. Framework designers do not know your particular needs and concerns.

In the end, framework designers try to solve their problems. They do not generally care to solve your problems. Many times our problems will overlap with their problems, but it becomes incredibly hard to develop when they do not.

We do not always know what designs work at different stages in our software lifecycle; so, we need to have as much control over the design of our applications as possible.

In my opinion, React has a much better design simply because it makes far fewer assumptions.

Consider Angular 1.x. While it is a great framework, it also has an all or nothing philosophy. It is very hard to use its data-binding feature without also using its controllers, directives, modules, etc …

React only concerns itself with views.

This makes it possible to create a customized architecture around my particular use cases.

I created a simple todo application that demonstrates how you can do this.

I would like to use this blog post to demonstrate, step by step, how to build it.

First let’s describe the use case for the application.

Gather the Requirements

Suppose we have the following requirements for our todo application

Use Case 1.0: Add List Items

We expect to give a list item to the application. The application should store this list item, and provide visual feedback that it has stored it.

Use Case 1.1: Detect Invalid List Items

We consider an empty string as invalid input. If we give invalid input we expect the system to detect it and give some visual feedback to the user about it.

Use Case 2.0: Remove List Items

We expect the application to allow us to remove items. The application should provide some visual of all the list items, and it should expose some way to remove individual list items.

When we remove a list item, the application should provide some visual feedback about removing the list item.

Decide on a Tool Set

In order to build the application, we need to make some initial design decisions.

For this tutorial, I decided to build the application in the following tools:

  • React + Babel for views
  • Node.js for the main application
  • Jasmine for testing
  • Gulp for building the application

Setup the Project

I will assume that you are using linux.

Create a directory for the project

change to the directory

Initialize the directory as a node project

Install (a) gulp, (b) jasmine, (c) browserify, (d) eventemitter, (e) run-sequence, and (f) vinyl-source-stream with the following commands:

Create the following folder structure

  • src/common
  • src/model
  • src/view
  • test
  • app

with the following commands
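The folder structure above can be created in one command:

```shell
# create the source, test, and output folders in one go
mkdir -p src/common src/model src/view test app
```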

We will put (a) our react code in the “src/view” folder, (b) our node code in the “src/model” and “src/common” folders, (c) our jasmine tests in the “test” folder, and (d) our application into the “app” folder.

Build the Model

Let’s encapsulate the algorithms that fulfill these use cases into a single node.js module.

Create the following file: src/model/TodoListModel.js.

TodoListModel references another node file called Guid. By design, we force the caller of TodoListModel.addItem to provide a guid as an argument.

I did this to make unit testing easy. We will illustrate unit testing in the next section.

Create the following file: src/common/Guid.js

Create the Unit Tests

We need to validate that TodoListModel actually implements our use cases correctly. We will create separate jasmine specs for each use case.

Create Tests for Use Case 1.0

We want to verify that the model obeys the following constraints when we add list items:

  • The model adds the list item properly when we add one list item
  • The model adds list items properly when we add two list items
  • The model fires events that inform subscribers that the model has added a list item

Create test/AddListItems.spec.js with the following contents to validate that our model obeys the rules for “Use Case 1.0”.

Create Tests for Use Case 1.1

We want to verify that the model obeys the following constraints when we add list items:

  • Detect when the caller does not provide a valid list item
  • Detect when the caller does not provide a valid guid
  • Verify that the model fires the proper events for an invalid input

Create test/DetectInvalidListItem.spec.js

Create Tests for Use Case 2.0

We want to verify that the model obeys the following constraints when we try to remove items:

  • The model properly removes one item
  • The model properly removes two of three items
  • The model fires the proper events when it removes items

Create test/RemoveListItems.spec.js

Execute Unit Tests

We will use gulp to execute our unit tests.

In the main directory, create gulpfile.js with the following contents

Change the NODE_PATH to include ./src. This will ensure that model/TodoListModel.js and common/Guid.js are discoverable.
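For example:

```shell
# append ./src so that require('model/TodoListModel') and
# require('common/Guid') resolve from the project root
export NODE_PATH=$NODE_PATH:./src
```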

Our gulp file will now execute our jasmine tests by default. Let’s run it and see the result.

Build the Gulp Workflow

Now that we have a working model let’s build the application. I want to have a very simple deployment. I would like the entire application to run using this html.

I want bundle.js to bootstrap the entire application for index.html.

Let’s modify gulpfile.js to support this.

This will browserify all our react code (which we have not created yet) and package it into bundle.js in the app folder. Further, gulp will use babel to transpile our react code into ES5 javascript. We want to use babel with react because it will provide us with a better syntax than standard javascript.

Build the Views

Create src/Main.jsx with the following contents.

This code shows that we have a UI with three general components:

  • A message component
  • An input component
  • A list component

Further, each component takes a TodoListModel instance as an argument to its constructor.

Let’s build out each component one at a time.

Create a MessageArea Component

Create src/view/MessageArea.jsx

Create InputArea Component

Create src/view/InputArea.jsx

Create a TodoList Component

Create src/view/TodoList.jsx

TodoList.jsx creates several ListItem components as children.

Let’s create the component definition.

Create src/view/ListItem.jsx

Build the Application

We can now run gulp.

This gulp build uses babel to transpile the ES6 code into standard javascript and browserifies it into bundle.js in app.

You can open index.html in a browser to execute the application. It should look something like this:

Screenshot - 09162015 - 06:20:49 PM

Conclusion

I know that this project is very artificial. I only want you to take away the principles. You ultimately have to make decisions for your applications. Architecting your projects like this may or may not make sense for you.

How To Automate Jasmine Tests for Typescript with Gulp

I have a gulp workflow that I’d like to share with the javascript community. This workflow automates the compilation of typescript and tests the generated code with Jasmine.

Why Another Workflow?  

I personally like to build my own workflows whenever possible.

I know that advanced editors like Visual Studio have tools that automate a lot. However, I feel that software development is too complicated for a single workflow to automate everything.

IMO, there are simply too many details in software development. At some point, you always need to do something custom. 

Enter Gulp

I feel that gulp has the best approach to creating workflows, currently.

Gulp has a massive collection of plugins that you can reuse for common tasks, and you can stitch various plugins together programmatically.

Creating the Workflow  

I will provide a step-by-step walk-through that shows how to use the linux command line to setup a project. You may not be able to use this process if you use something like Visual Studio to setup your typescript applications.

Create the Folder Structure

Create the following folder structure

  • src/ts
  • src/js
  • spec

with the following commands

We will put our typescript code in src/ts, our generated javascript code into src/js, and our jasmine tests into spec.

Setup the Project

Initialize the directory as a node project.

Install node plugins

Create a custom node module

We will need to take the contents of the generated javascript file that typescript produces and evaluate it in the jasmine files. In order to make this easier, we will create a helper module called __require.js in the root folder.

By design, we will place all generated javascript code under src/js. If we had an alternate location then we would have to reflect that in __require.js.

Alter NODE_PATH

In order to make __require.js discoverable by node, we can change the NODE_PATH environment variable to include the current directory

Create Program and Unit Tests

Create a Set class with Typescript at src/ts/Set.ts

This Set class is a very naive implementation of a basic set for the sake of illustration.
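Since the original Set.ts is not reproduced here, a comparable naive implementation might look like this (the method names are my assumptions; note that the class name shadows the built-in Set):

```typescript
// A deliberately naive set of numbers, in the spirit of the post's src/ts/Set.ts
class Set {
  private items: number[] = [];

  add(item: number): void {
    // only store an item once
    if (!this.contains(item)) {
      this.items.push(item);
    }
  }

  contains(item: number): boolean {
    return this.items.indexOf(item) !== -1;
  }

  remove(item: number): void {
    const index = this.items.indexOf(item);
    if (index !== -1) {
      this.items.splice(index, 1);
    }
  }

  size(): number {
    return this.items.length;
  }
}
```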

Place the following Jasmine code under spec/Set.spec.js

These lines of code in Set.spec.js deserve some explaining:

By design, node modules need to follow the commonjs format. If you “require” a file that does not follow the commonjs format then node will return an empty object.

Since Typescript does not actually generate code that obeys the commonjs format, we need to create a method for us to get the generated code into the scope of a jasmine test.

I created __require.js because it makes it very easy to include the contents of a file under the directory I designated as generated code.

Create Gulp Tasks

We now have everything in place to execute our workflow. Create gulpfile.js under the root directory with the following contents.

This workflow will do the following:

  • compile all typescript code in the src/ts directory
  • place the generated javascript code in the src/js directory
  • run all jasmine tests in the spec directory

runSequence will make sure that your workflow will compile all the Typescript files first before gulp runs the unit tests. If we did not use runSequence then gulp would run the tests concurrently with the compilation.
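A sketch of what such a gulpfile could look like (the plugin choices, gulp-typescript and gulp-jasmine, are my assumptions; the post only names runSequence, and the exact task wiring may differ from the original):

```javascript
// gulpfile.js -- a sketch assuming gulp 3.x, gulp-typescript, gulp-jasmine,
// and run-sequence are installed as dev dependencies
var gulp = require('gulp');
var ts = require('gulp-typescript');
var jasmine = require('gulp-jasmine');
var runSequence = require('run-sequence');

gulp.task('compile', function () {
  // compile everything under src/ts and place the generated javascript in src/js
  var tsResult = gulp.src('src/ts/**/*.ts').pipe(ts());
  return tsResult.js.pipe(gulp.dest('src/js'));
});

gulp.task('test', function () {
  // run every jasmine spec in the spec directory
  return gulp.src('spec/**/*.spec.js').pipe(jasmine());
});

// runSequence guarantees compilation finishes before the tests start
gulp.task('default', function (callback) {
  runSequence('compile', 'test', callback);
});
```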

In order to initiate this workflow we simply have to execute gulp from the directory like so:

Conclusion

In practice, there are many different ways of creating workflows. I don’t claim that this is somehow the best or only way to do it.

I hope that this particular example can help you build a workflow that works best for you.