Setting up PostgreSQL (9.6) on Debian Stretch

I have been trying to set up a home network of connected devices for… let's just say a long time! I have made some progress in recent days and the next step is to add some sort of backing store for the values the devices produce.

While MongoDB works well with the Node.js stack, it isn't well supported on Raspbian: Raspbian is 32-bit, and MongoDB dropped full 32-bit support due to data size restrictions (the biggest DB you can have is 2GB). Raspbian Stretch does provide an updated MongoDB, but it is still a few minor versions short of the shiniest.

However, using Jessie backports I was able to install PostgreSQL 9.6 (the shiniest version). Having installed it, I had no idea what to do next. Here's my journey to get PostgreSQL to a place where I can start using it.

My dev environment's base OS is OSX and I run my DB in a VM hosted by VirtualBox, so I set up a Debian Stretch VM called dev-db before starting the PostgreSQL installation. I am also trying to set up a parallel runtime environment on a Raspberry Pi, so if there are any exceptions to these steps I'll document them for Raspbian as well.

Installing on Debian/Raspbian Stretch

Both the distros have the required repositories in the apt list, out of the box, so all you have to do is

sudo apt-get install postgresql-9.6

This will create a PostgreSQL default user called ‘postgres’.

sudo service postgresql start

Setting up your User Account

PostgreSQL databases are usually tied to a user. Out of the box, PostgreSQL users are not the same as *nix account users; however, if you use a common name between your *nix account and your PostgreSQL account, things become much easier. So let's assume the account I am using to log in to Debian Stretch is called 'dbuser'. It is in the sudoers list and I logged in to the terminal/desktop using it.

The first step is to change user from 'dbuser' to the 'postgres' account that PostgreSQL created during installation.

sudo -i -u postgres

This is the admin account for PostgreSQL; in a production environment it is a good idea to protect it with a password at least (e.g. by running ALTER USER postgres WITH PASSWORD '…'; from psql).

The next step is to tell PostgreSQL that we have a 'dbuser' account that we want to be able to use to create databases.

postgres@dev-db: ~$ createuser dbuser --pwprompt

It will ask for a password. I entered the same password as my *nix account, but that may not be best practice.

Finally, create a database for dbuser. Naming the database after the user keeps things simple, because psql connects to a database named after the current user by default.

postgres@dev-db: ~$ createdb dbuser

Now log out of the postgres account and go back to the dbuser account.

postgres@dev-db: ~$ exit
dbuser@dev-db: ~$

Start the psql client and you should get logged into your database (dbuser here; psql defaults to a database named after the current *nix user).

dbuser@dev-db: ~$ psql
psql (9.6.4)
Type "help" for help.

Enter \q to exit psql.

We have set up PostgreSQL and established local client access.

Enabling remote access

The default PostgreSQL setup enables local client access only. In my dev environment I like to access the VM or Pi using a client on my local desktop/laptop. This needs a few additional steps.

Step 1: Open up firewall on VM or Raspberry Pi hosting the DB Server

sudo iptables -A INPUT -s <client-ip> -p tcp --destination-port 5432 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -d <client-ip> -p tcp --source-port 5432 -m state --state NEW,ESTABLISHED -j ACCEPT

Replace <client-ip> with the IP address of the desktop/laptop you will connect from. (Note the OUTPUT rule matches the replies, so it filters on the destination address and the source port.)

Step 2: Edit PostgreSQL config to allow remote access

Edit the pg_hba.conf file

sudo nano /etc/postgresql/9.6/main/pg_hba.conf

Scroll to the bottom and add the following line

host    all    all    <client-ip>/32    trust

(Replace <client-ip> with the address of the machine you will connect from.)
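For reference, the columns in a pg_hba.conf host line are TYPE, DATABASE, USER, ADDRESS and METHOD. A hedged example that grants a whole home subnet password-based access (the 192.168.1.0/24 subnet is an assumption — substitute your own network, and prefer md5 over trust for anything that isn't a throwaway dev box):

```
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   192.168.1.0/24  md5
```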

Edit the file postgresql.conf

sudo nano /etc/postgresql/9.6/main/postgresql.conf

Find the setting 'listen_addresses' and set its value to the IP address of the VM/Raspberry Pi. The line may be commented out (starts with #), so uncomment it first.

listen_addresses = '<ip-address-of-vm>'
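Putting it together, the relevant postgresql.conf lines end up looking something like this (192.168.1.20 is a stand-in for your VM/Pi's address; '*' would listen on every interface but is less restrictive):

```
listen_addresses = '192.168.1.20'   # address(es) to listen on; '*' = all interfaces
port = 5432                         # default PostgreSQL port
```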

Restart PostgreSQL server

sudo service postgresql restart

You should now be able to access it from a remote client.

I use the standard pgAdmin 4 and the connection settings were as follows

(Screenshot: pgAdmin 4 connection settings.)

And done!

Up next compiling and setting up Redis on the Pi 🙂


Quickbytes: How to connect to MongoDB in a VM, from OSX (bindIP)

I like to keep my base system clean of databases, web servers etc. So when I wanted to play around with MongoDB on my laptop, instead of cluttering it up I set up a little VirtualBox VM running Debian 8.5 and got MongoDB 3.2 onto it in a jiffy using the official docs.
I then installed my favourite MongoDB client, Robomongo, and was all set to connect to the DB in the VM.
But when I installed Robomongo on OSX it just wouldn't connect to the VM. I assumed it was getting blocked by the default OS settings on Debian, so I updated the iptables rules as follows:
sudo iptables -A INPUT -s <host-ip> -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
This enables incoming connections to port 27017, MongoDB's default port.
sudo iptables -A OUTPUT -d <host-ip> -p tcp --source-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
This enables the outgoing replies (note -d and --source-port here: replies go back to the client from port 27017).
Replace <host-ip> with the IP address of your machine/laptop on which the VM is hosted.
I assumed this would be enough, but nope. Robomongo on OSX kept refusing to connect with the error "Network is not reachable". After running up lots of wrong trees I finally found out that MongoDB binds to the loopback IP only by default, as a security measure. The setting is in /etc/mongod.conf, where the default is:
port: 27017
bindIp: 127.0.0.1
I changed bindIp to a list containing 127.0.0.1 and the VM's own IP. The final settings were:
port: 27017
bindIp: 127.0.0.1,<vm-ip>
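For context, MongoDB 3.2's /etc/mongod.conf is YAML, so these settings sit under a net section. A hedged sketch (192.168.56.101 stands in for the VM's address on the VirtualBox network):

```yaml
# /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.56.101   # keep localhost, add the VM's own address
```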
Save the conf, restart the MongoDB service and Bazinga!
Security Note: Setting bindIp to 0.0.0.0 (all interfaces) is the worst move from a security point of view. Do not do it!

How to format a Disk in Debian Jessie

I keep forgetting how to format and label a disk on my Debian system so here’s a quick note to self:
Formatting == Making a File System
Usually adding a new disk means creating a partition first and then making a file system on the partition. I will come back to creating partitions some other day; for now I just need to make a new file system on an existing partition.
We use fdisk to identify the partition.
sudo fdisk -l

Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8eb2d6f9

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 468553727 468551680 223.4G 83 Linux
/dev/sda2       468555774 488396799  19841026   9.5G  5 Extended
/dev/sda5       468555776 488396799  19841024   9.5G 82 Linux swap / Solaris

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xf8b85d91

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953521663 1953519616 931.5G  7 HPFS/NTFS/exFAT

Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9fd1cfdb

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdc1        2048 234441647 234439600 111.8G 83 Linux
As seen above, the disks are /dev/sda, /dev/sdb and /dev/sdc, each with its own partitions.
I want to 'format' the /dev/sdc drive, and since it already has a partition on it, I don't need to create one.
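As a sanity check, the Size column fdisk prints can be recomputed from the Sectors column (512-byte sectors). A quick sketch for /dev/sdc1:

```shell
# /dev/sdc1 spans 234439600 sectors of 512 bytes each; convert to GiB.
SECTORS=234439600
SECTOR_BYTES=512
awk -v s="$SECTORS" -v b="$SECTOR_BYTES" \
    'BEGIN { printf "%.1fG\n", s * b / (1024 * 1024 * 1024) }'
# prints 111.8G, matching fdisk's Size column
```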
To format it, I need to unmount the file system first. I did that by right-clicking it in the Dolphin file manager and clicking Unmount in the context menu. You can use the umount command as well.
Once unmounted, format it using the following:
sudo mkfs.ext4 /dev/sdc1   
The above command gives the following output:
mke2fs 1.42.12 (29-Aug-2014)
/dev/sdc1 contains a ext3 file system labelled 'WinVM'
        last mounted on /media/sumitkm/WinVM on Sun May 22 16:13:34 2016
Proceed anyway? (y,n) y
Discarding device blocks: done
Creating filesystem with 29304950 4k blocks and 7331840 inodes
Filesystem UUID: 6fdb55c8-95f9-4591-a76e-f5b0ab85a606
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Dolphin will auto-mount it, but it will use a big GUID as its label. To fix the label use the e2label command.
First we check for existing label and it comes back blank
sudo e2label /dev/sdc1
Next, we apply the label WinVM
sudo e2label /dev/sdc1 WinVM
Next, we check the label again and confirm it is WinVM
sudo e2label /dev/sdc1

Finally, don't forget to change the owner. Since our mount point was previously defined, it will be picked up automatically as soon as you apply the label; however, ownership reverts to root. Change it back to yourself using the chown command:

sudo chown -R sumitkm /media/sumitkm/WinVM

where sumitkm is the username and the folder is the mount folder.

Taa daa, you are done!
P.S. This happens to be my first post using my custom-built Electron JS based blog editor for Linux and OSX. Check it out at (A how-to article on Electron JS has been in the works for the last 4 months now ;-)… it will see the light of day someday)

Setting up a Makibes (Waveshare) 1024×600 touchscreen with your Raspberry Pi Zero

I have been eyeing a touchscreen to go with one of the Raspberry Pis in my collection (O_o) for a while now. The official Raspberry Pi screen is perpetually out of stock and backordered for months, and the resellers are charging a hefty markup.
The only option left was a third-party screen. After a lot of deliberation I settled on this screen by Waveshare.

My criteria were:

  1. At least 7 inches (I plan to use it as a dashboard at some point)
  2. Capacitive touch (Resistive touch isn’t as responsive, blame iPhones for ruining us ;-)…)
  3. Least number of addon boards to keep things compact
  4. Works with stock Raspbian.

From the looks of it, the WaveShare screen checked all the boxes though the last point is still debatable.

It is available from multiple resellers on Amazon. I bought it via In4dealz, fulfilled by Amazon. They were offering a stand and frame for 6 GBP extra; total cost 46 GBP. Plus, I bought an Anker Astro E1 external battery pack to power the thing. It's 5200 mAh and should be able to power the Pi for a while.
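Back-of-envelope on that battery claim: assuming the Pi Zero plus a backlit 7-inch screen draw somewhere around 1 A combined (a guess, not a measurement — the real figure depends on brightness and load), a 5200 mAh pack works out to roughly five hours:

```shell
CAPACITY_MAH=5200
DRAW_MA=1000   # assumed combined draw of Pi Zero + screen
awk -v c="$CAPACITY_MAH" -v d="$DRAW_MA" \
    'BEGIN { printf "%.1f hours\n", c / d }'
# prints 5.2 hours (before conversion losses, so treat it as an upper bound)
```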


Here are the unboxing images:

As you can see, they come in a neat bundle, well packed but devoid of any instructions whatsoever. Even for the frame and stand you have to use your 'imagination' to put things together, which isn't too bad for the tinkerer in you.

Setting it up to work with the Raspberry Pi Zero

The sequence I am writing here isn’t the same in which I got things working, but things got really simple after I RTFM 😉

Anyway I will not repeat the mistakes here.

Before you start off make sure you have the latest Raspbian Jessie image setup and running for your Pi Zero.

  1. Assuming your Pi Zero is connected to a regular monitor and able to access the internet. Navigate to
  2. The Drivers you need are at the bottom of the page (section 5.6) or
  3. If you are setting up the Pi Zero, get the B/B+ drivers for version 4.1.13x. It's about a 20 MB download.
  4. Once it finishes extract the tar file
    1. sudo tar zxvf filename
  5. Before you run the executable make sure max_usb_current=1 is setup in the /boot/config.txt file.
    1. sudo nano /boot/config.txt
    2. Scroll to the settings and either uncomment the max_usb_current line or add it in a new line at the end of the file.
    3. Exit nano (Ctrl +x)
  6. Now change into the extracted folder (big massive name starting with RPIB+.).
  7. Execute the installer
    1. sudo ./USB_TOUCH_CAP_7.0_RASPBIAN
  8. It takes about 20-30 seconds and reboots automatically.
  9. Let it reboot.
  10. Shut it down. If your regular monitor freezes on reboot, hot unplug the power
  11. Connect the Pi Zero’s HDMI out to the LCDs HDMI in using the Pi Zero’s adapter and the provided flat HDMI cable.
  12. Connect the Pi’s Micro USB to the LCDs Micro USB connector via the provided adapter + cable.
  13. Power up and you are good to go :-)!
  14. If you have an older Raspberry Pi running a version of Raspbian Wheezy, then get the version 3.x drivers. The screen works with them too.
  15. You can install a software keyboard called matchbox-keyboard using apt-get, but I have my doubts about it: it seems to consume a lot of CPU and doesn't release resources (quickly) even after you have closed it. I haven't used it a lot though, so I'll give it another go later.

Assembling the frame/stand

If you bought the additional frame/stand you can assemble it as shown in the slideshow below. The Pi Zero does fit horizontally towards the top of the frame!

And that’s it, you are all set.

I powered it using the Anker Astro E1 battery pack. You can do something similar to keep it portable.

If I ever get to making this into a portable/carryable 'tablet' I shall post updates here :-).

EDIT: As suspected, the 'drivers' provided by the vendor don't work once you update your Raspberry Pi image. I had found a PHP driver that worked initially, but with the latest update of Raspbian that's broken as well. As of now it is a non-touch screen till I am able to tinker with the PHP driver and get it back online.

However, to get it working with any stock Raspbian as a (non-touch) screen, all you have to do is add the following lines at the bottom of config.txt. You can do this right after burning the image to an SD card, before you put the card in the Pi; that way you don't need a separate monitor to get going.

hdmi_cvt 1024 600 60 6 0 0
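The other config.txt lines these 1024×600 HDMI panels commonly need alongside hdmi_cvt are sketched below; treat everything except the hdmi_cvt line itself as an assumption and check your vendor's wiki before relying on it:

```
max_usb_current=1   # allow the USB port to power the touch panel
hdmi_group=2        # DMT (monitor timings)
hdmi_mode=87        # use a custom mode, defined by hdmi_cvt below
hdmi_cvt 1024 600 60 6 0 0
```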





Getting started with NodeJS – Part 1: Fumbling around

I’ve been meaning to try out NodeJS for a while now, and finally got around to doing it over the last few days. I thought I would share my experience as I go along.

Update: Those ‘few days ago’ are actually a couple of months now 😉

I have used NodeJS as a (build) tool to help me 'compile' front-end scripts: taking dev source code and minifying it into cache-busted deployable code. I use gulp for it and it works pretty okay. Fact is, while writing the gulp script I got pretty interested in NodeJS.

Also given the fact that ASP.NET vNext is pretty much going the ‘Node way’, I thought I should know what the real deal is, before I muck around with ASP.NET vNext.

So here is my first go at building 'something' using NodeJS as a platform, as opposed to just a dev/build tool. The article expects you to have heard of NodeJS and NPM (Node Package Manager, something like NuGet but it runs off the command line and is available on both Windows and *nix). If you have never used either of them, that's fine.



My base OS for this series is Debian 8 (Jessie).

My readers using Windows fear not, you can use NodeJS on Windows using nearly the same steps, so if you get stuck just let me know and I’ll try and help.


You can check whether Node and npm are already installed (and their versions) with:

node --version

npm --version


Side note on upgrading Node in Debian: if you read my previous article, I had mentioned that Jessie comes with a Node package by default, but it's a rather old one. I uninstalled that one using

sudo apt-get remove nodejs

Thereafter I followed the step outlined on . Reproduced here

curl -sL | sudo -E bash -

sudo apt-get install -y nodejs

This basically downloads the latest package from the official node repository and installs it.

Windows Users: Just get the latest Node installers from and rock on! NPM is installed as a part of Node.
OSX Users: You guys are all good, just install Node and the rest of the commands should be all the same.


Well, you could use anything you want, from full-on Visual Studio to Visual Studio Code, or any other IDE/editor that suits your fancy. I am using Atom by GitHub. I am new to Atom as well, so there might be some moments when the experienced Atom user in you winces at my noobish-ness.

The Project

Well, I want to figure out what it takes to use QuillJS wrapped in a KO Component and then save the text in component into Azure Blob Storage. Simple right ;-). The project is called ParchmentScroll. Why? Well you use quills to write Scrolls on Parchment paper… 😉 😉 😉

Oh, BTW, QuillJS is a really cool JavaScript library for adding rich text capabilities to your web application. It was open sourced by Salesforce and is available under a permissive BSD license.

So let's get started, but before that let's try wrapping our heads around 'server-side' JavaScript.

JavaScript… umm… TypeScript everywhere (client-side and server-side)

You either love JavaScript or loathe it! I have made my peace with it and I kind of like its dynamic quirkiness. After I started using TypeScript I like JavaScript even better.

Anyway, traditionally we all know how to use JavaScript in the browser. But NodeJS takes JavaScript and runs it through Google's V8 engine on the server, so you can actually write HTTP services in JavaScript. You can have an HTML page hosted on IIS, NGINX, Apache or wherever do an AJAX post to your NodeJS application, written in JavaScript, and send back a response. To put things in contrast with the .NET world: think of writing Web API services, but instead of writing ApiControllers in C# you get to write them in JavaScript… err, TypeScript!

Down to some code… err well… kind of

Since I am using Atom, a lot of the steps I describe here to set up a blank project will look long-drawn compared to Visual Studio's File->New Project->Wizard->Done.

So lets get started.

Open a terminal.

Select/Create your favourite project folder and navigate to it. Mine is at

cd /home/sumitkm/myprojects/demo/parchmentscroll

Packages, their managers and Node JS

The NodeJS ecosystem thrives on a huge repository of third party libraries that are distributed as packages. Packages inherit the idea of Linux packages. They bundle self contained units of code/binaries that can be installed and updated using their respective package managers.

In this project I have used three package managers

1. The Node Package Manager aka npm – This is Node’s default package manager and is used to distribute all node packages, binaries and extensions. Fun fact, you use npm to install other package managers :-). So npm is the alpha dog of package managers in Node and is installed by default with Node. Node packages are mostly used for installing dependencies that you will use on the server side. For client side Script/style dependencies you use the next package manager – Bower.

2. Bower – The front-end package manager. Bower installs front-end dependencies that are mostly distributable versions of the libraries or frameworks that you will use e.g. KnockoutJS, RequireJS, QuillJS etc. To get started with Bower you first need to install it using npm at the global npm repository location as follows.

Please note: if you are not the administrator but have sudo-er rights, you need to prepend sudo to every shell command unless I explicitly say you don't need one.

npm install bower -g

3. TSD – The TypeScript definitions package manager. While the good thing about TypeScript is that it provides better code management through type enforcement at compile time, the flip side is that you need type definitions for existing libraries written in plain JavaScript. DefinitelyTyped is a nice open source repository of type definitions that users have contributed as they have used existing libraries. While creating a type definition is relatively easy, it's good to have a leg up with existing libraries. So we install TSD, a package manager that retrieves type definitions for the libraries you use from the DefinitelyTyped repository:

npm install tsd -g

We start by initializing an NPM ‘project’. This creates a package.json file which has the list of dependencies as well as details like the Project Name, version, Git repository, Author name, License information etc.

npm init

(don’t use sudo here)

This will present you with a series of prompts where you provide the requested details and it will in turn scaffold a package.json file for you. I provided the following details:

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help json` for definitive documentation on these fields
and exactly what they do.
Use `npm install  --save` afterwards to install a package and
save it as a dependency in the package.json file.
Press ^C at any time to quit.
name: (parchmentscroll) 
version: (1.0.0) 
description: A blogging platform built using Node, QuillJS and TypeScript
entry point: (index.js) 
test command: 
git repository:
keywords: QuillJS, NodeJS, TypeScript
author: Sumit Kumar Maitra
license: (ISC) MIT
About to write to /home/sumitkm/myprojects/demo/parchmentscroll/package.json:

{
  "name": "parchmentscroll",
  "version": "1.0.0",
  "description": "A blogging platform built using Node, QuillJS and TypeScript",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": ""
  },
  "keywords": [
    "QuillJS",
    "NodeJS",
    "TypeScript"
  ],
  "author": "Sumit Kumar Maitra",
  "license": "MIT"
}

Is this ok? (yes) yes

If you do ls now, you’ll see that a package.json file exists in the folder.

Here on, you have to decide your project structure. There are lots of sensible defaults, you can look up on the net. I am trying out one that I feel comfortable with. I may change it as we go along and build the project.

Since there are no csproj files or equivalent (package.json is a distant cousin, more like a .sln file than anything else), I am going to create top level folders as projects. So I create two main folders

(no sudo required)

mkdir www

mkdir server

Next we'll initialize the TypeScript configuration file, tsconfig.json.
(no sudo required)

tsc --init

This creates a TypeScript config file that helps TypeScript compiler with location of the ts files in the project and other configuration items. If you open the file in an editor you’ll see the default:

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es3",
    "noImplicitAny": false,
    "outDir": "built",
    "rootDir": ".",
    "sourceMap": false
  },
  "exclude": [
    "node_modules"
  ]
}

The compiler options are same ones available via command line. I tend to remove the outDir attribute completely. This results in the .js files being generated in the same folder as the .ts file. This fits better with my deploy script that we’ll see sometime in the future.

The exclude array tells typescript compiler which folder it shouldn’t look at. Currently only node_modules is excluded.

The final tsconfig.json file we are starting with is


{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es3",
    "noImplicitAny": false,
    "rootDir": ".",
    "sourceMap": true
  },
  "exclude": [
    "node_modules"
  ]
}
This completes our ‘File->New Project’. Here on we’ll get on with some real code.

Application layout in a little more details

In the previous section we created two folder server and www as our two ‘projects’. The server folder will be root folder for all the server side logic and the www folder will hold whatever resources the browser needs to serve up the web page. So folder names basically help us with a mental segregation of what goes where.


NodeJS can open ports and serve content on them if you want it to, but we don't want to go that low-level. Instead we'll get a framework called ExpressJS to do the low-level stuff of opening/listening on ports, parsing requests, sending back responses etc. Basically we'll use ExpressJS to bootstrap the application. The handy bit is that Express can serve up static files as well, so we'll use the same framework to host the front-end and handle backend requests/responses.

Down to some code, finally!

Getting started with ExpressJS

Setting up anything in Node basically means npm install. Express is no different. In the 'parchmentscroll' folder run the following:

npm install express --save 

--save tells npm to update package.json with this particular dependency. So when you fetch the sources into a new folder, all you have to do is run npm install and all the dependencies listed in package.json will be installed for you.
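After the install, package.json gains a dependencies entry along these lines (the version number is whatever was current when you ran the command, so treat 4.x as illustrative):

```json
"dependencies": {
  "express": "^4.13.3"
}
```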

Time to start up Atom. In the parchmentscroll folder enter:

(no sudo required)

atom .

This should launch Atom with the project folder open.

Under the server folder create a folder called app

Add app.ts file under app folder. This is going to be our entry point into the application.

But before we start writing code, we need a little more ‘configuration’ to do.

Back to the console in the parchmentscroll folder we’ll use the tsd package manager to install typescript definitions for node itself
(no sudo required)

tsd query node --action install --save

This tells tsd to look for the node type definition and if found install it and save it to tsd.json

Similarly, we install the type definitions for ExpressJS as well:

tsd query express --action install --save

Next we’ll install a couple of npm modules that Express JS uses for parsing a request body and serving up static files.

npm install body-parser

npm install serve-static

We also need the typescript definition for these two, so invoke tsd again –

tsd query --action install serve-static --save --resolve

tsd query --action install body-parser --save --resolve

Note the --resolve flag that we've used in the above two commands. This tells tsd to resolve sub-dependencies of the library and get their type definitions as well. You'll note both use another dependency called mime, whose definition gets installed automatically.

Back in Atom, paste the following code into app.ts and save the file.

/// <reference path="../../typings/tsd.d.ts"/>
import * as express from "express";

var app = express();
var bodyParser = require('body-parser');
var serveStatic = require('serve-static');

app.use(bodyParser.json()); // for parsing application/json
app.use(bodyParser.urlencoded({ extended: true })); // for parsing application/x-www-form-urlencoded
app.use(serveStatic('www')); // serve static files from the www folder

var server = app.listen(3001, () =>
{
    var host = server.address().address;
    var port = server.address().port;
    console.log('Example app listening at http://%s:%s', host, port);
});

– This code initializes express.
– Initializes an instance of the body-parser module and sets it up to handle HTTP request bodies of type application/json.
– Sets up the bodyParser module to handle parsing of URL-encoded HTTP requests.
– Sets up express to serve static files from the 'www' folder (which is currently empty).
– Finally, it sets up the express instance to listen on port 3001 and, once the server starts, print a console message.

With the code saved, switch back to the terminal and in the parchmentscroll folder run tsc.


The code should compile silently and come back with no messages.

Next we try to run the app using the following command

node .

The . tells Node to use the main entry in package.json to start up. However, you'll get an error at this point:

Error: Cannot find module '/home/sumitkm/myprojects/demo/parchmentscroll'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:289:25)
at Function.Module.runMain (module.js:467:10)
at startup (node.js:136:18)
at node.js:963:3

This is because when we set up package.json we said index.js was our 'main' file, and no such file exists. Problem easily fixed: switch to Atom and open package.json.

Set the “main” attribute to “server/app/app.js” instead of the initial “index.js”.
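The edited fragment of package.json then reads:

```json
"main": "server/app/app.js",
```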

Save the file and flip back to the terminal. Run node . again:

node .

This time you should see a message like the following:

Example app listening at http://:::3001

If you open your browser and go to localhost:3001/ you'll get a message saying "Cannot GET /".

So switch back to Atom and add a file under www called index.html (Express serves index.html for the root path by default).

Add a bit of hello world markup:

<html>
  <body>
    Hello Node JS
  </body>
</html>

Save the file.

Refresh the browser and voila!


Phew! Lot of work for a Hello World!

To sum up…

That may have seemed like a lot of up-front work, but all of it can be automated and scaffolded if we want. Open source tool chains are much lighter weight compared to enterprise tools like Visual Studio. However, they give you a lot more freedom to mix and match, and hey, all of them are actually free, without you signing away your keystrokes in some EULA.

We have not even scratched the surface of NodeJS yet. In the next part, I'll jump straight into more real-life NodeJS concepts like routing and middleware, and show how to build front-end clients as well as HTTP services with it.

To be continued… (oh and wish you all a Happy new 2016)!
