Category Archives: OS

Setting up PostgreSQL (9.6) on Debian Stretch

I have been trying to set up a home network of connected devices for… let's just say a long time! I have made some progress in recent days, and the next step is to add some sort of backing store to persist values.

While MongoDB works well with the Node.js stack, it isn't well supported on Raspbian: Raspbian is 32-bit, and MongoDB dropped full 32-bit support due to data size restrictions (the biggest DB you can have is 2 GB). Raspbian Stretch does provide an updated MongoDB, but it is still a few minor versions short of the shiniest.

However using Jessie Backports I was able to install PostgreSQL 9.6 (the shiniest version). Having installed it, I had no idea what to do next. Here’s my journey to get PostgreSQL to a place where I can start using it.

My dev environment's base OS is OSX and I run my DB in a VM hosted by VirtualBox. So I have set up a Debian Stretch VM called dev-db before starting the PostgreSQL installation. I am also trying to set up a parallel runtime environment on a Raspberry Pi, so if there are any exceptions to these steps I'll document them for Raspbian as well.

Installing on Debian/Raspbian Stretch

Both distros have the required repositories in the apt list out of the box, so all you have to do is

sudo apt-get install postgresql-9.6

This will create a PostgreSQL default user called ‘postgres’.

sudo service postgresql start

Setting up your User Account

PostgreSQL databases are usually tied to a user. Out of the box, PostgreSQL users are not the same as *nix account users; however, if you use a common name between your *nix account and your PostgreSQL account, things become much easier. So let's assume the account I am using to log in to Debian Stretch is called 'dbuser'. It is in the sudoers list and I logged in to the terminal/desktop using it.

The first step is to switch from 'dbuser' to the 'postgres' user that PostgreSQL created during installation.

sudo -i -u postgres

This is the admin account for PostgreSQL. In a production environment, it is a good idea to protect it with a password at least.
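For example, setting that password can be sketched as follows (assumes the server is running; the password shown is obviously just a placeholder):

```shell
# Set a password on the 'postgres' superuser account (illustrative only)
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'choose-a-strong-password';"
```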

The next step is to tell PostgreSQL about the 'dbuser' account that we want to be able to use to create databases.

postgres@dev-db: ~$ createuser dbuser --pwprompt

It will ask for a password. I entered the same password as my *nix account, but that may not be a best practice.

Finally create a db for the dbuser

postgres@dev-db: ~$ createdb dbuser

Now log out of the postgres account and go back to the dbuser account

postgres@dev-db: ~$ exit
dbuser@dev-db: ~$

Start the psql client and you should get logged into the dbuser database using psql

dbuser@dev-db: ~$ psql
psql (9.6.4)
Type "help" for help.

Enter \q to exit psql.

We have set up PostgreSQL and established local client access.
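As a quick sanity check, here's a hypothetical smoke test from the dbuser account — the table name and values are made up, and it assumes the server is running:

```shell
# Create a throwaway table, insert a row, read it back, then clean up
psql -c "CREATE TABLE IF NOT EXISTS smoke_test (id serial PRIMARY KEY, note text);"
psql -c "INSERT INTO smoke_test (note) VALUES ('hello postgres');"
psql -c "SELECT * FROM smoke_test;"
psql -c "DROP TABLE smoke_test;"
```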

Enabling remote access

The default PostgreSQL setup enables local client access only. In a dev environment I like to access my VM or Pi using a client on my local desktop/laptop. This needs a few additional steps.

Step 1: Open up firewall on VM or Raspberry Pi hosting the DB Server

sudo iptables -A INPUT -s <client-ip> -p tcp --destination-port 5432 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -d <client-ip> -p tcp --source-port 5432 -m state --state NEW,ESTABLISHED -j ACCEPT

Replace <client-ip> with the IP address of the machine you will connect from.

Step 2: Edit PostgreSQL config to allow remote access

Edit the pg_hba.conf file

sudo nano /etc/postgresql/9.6/main/pg_hba.conf

Scroll to the bottom and add the following line

host    all    all    <client-ip>/32    trust

Replace <client-ip> with your client machine's IP address. Note that 'trust' skips password checks entirely, so only use it on a private dev network.

Edit the file postgresql.conf

sudo nano /etc/postgresql/9.6/main/postgresql.conf

Find the setting 'listen_addresses' and set its value to the IP address of the VM/Raspberry Pi. This line may be commented out (starts with #), so uncomment it first.

listen_addresses = '<vm-ip>'

Restart PostgreSQL server

sudo service postgresql restart

You should now be able to access it from a remote client.

I use the standard pgAdmin 4 and the connection settings were as follows
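If you prefer the command line, the same connection can be checked with psql from the client machine (a sketch; replace <vm-ip> with your VM's or Pi's address):

```shell
# Connect remotely as dbuser to the dbuser database and run a trivial query
psql -h <vm-ip> -p 5432 -U dbuser -d dbuser -c "SELECT version();"
```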

(Screenshot: pgAdmin 4 connection settings)

And done!

Up next compiling and setting up Redis on the Pi 🙂


Quickbytes: How to connect to MongoDB in a VM, from OSX (bindIP)

I like to keep my base system clean of databases and web servers etc. So when I wanted to play around with MongoDB on my laptop, instead of cluttering it up, I set up a little VirtualBox VM running Debian 8.5 and got MongoDB 3.2 on it in a jiffy using the official docs.
I then installed my favourite MongoDB client, Robomongo, and was all set to connect to the DB in the VM.
But when I installed Robomongo on OSX it just wouldn't connect to the VM. I assumed it was getting blocked by the default OS settings on Debian, so I updated the IP tables as follows
sudo iptables -A INPUT -s <host-ip> -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
This enables incoming connections to port 27017, MongoDB's default port.
sudo iptables -A OUTPUT -d <host-ip> -p tcp --source-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
This enables outgoing connections.
Replace <host-ip> with the IP address of the machine/laptop on which the VM is hosted.
I assumed this would be enough but nope. Robomongo on OSX kept refusing to connect with the error "Network is not reachable". After running up lots of wrong trees I finally found out that MongoDB binds to the local IP only by default as a security measure. This setting is in /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1
I changed bindIp to also include the VM's own IP address. Final settings were:
net:
  port: 27017
  bindIp: [127.0.0.1, <vm-ip>]
Save the conf, restart mongodb service and Bazinga!
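The restart-and-verify step can be sketched as follows (hypothetical; replace <vm-ip> with the VM's address, and run the second command from the OSX host):

```shell
# Restart mongod so the new bindIp takes effect
sudo service mongod restart
# From the host machine, check that the VM now answers on 27017
mongo --host <vm-ip> --port 27017 --eval 'db.runCommand({ ping: 1 })'
```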
Security Note: Setting bindIp to 0.0.0.0 (bind on all interfaces) is the worst move from a security point of view. Do not do it!

How to format a Disk in Debian Jessie

I keep forgetting how to format and label a disk on my Debian system so here’s a quick note to self:
Formatting == Making a File System
Usually adding a new disk means creating a partition first and then making a file system on the partition. I will come back to creating partitions some other day. For now I just need to make a file system on an existing partition.
We use fdisk to identify the partitions.
fdisk -l

Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8eb2d6f9

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 468553727 468551680 223.4G 83 Linux
/dev/sda2       468555774 488396799  19841026   9.5G  5 Extended
/dev/sda5       468555776 488396799  19841024   9.5G 82 Linux swap / Solaris

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xf8b85d91

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953521663 1953519616 931.5G  7 HPFS/NTFS/exFAT

Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9fd1cfdb

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdc1        2048 234441647 234439600 111.8G 83 Linux
As seen above, the disks are /dev/sda, /dev/sdb and /dev/sdc, each with its own partitions.
I want to 'format' the /dev/sdc drive, but it already has a partition on it, so I don't need to create one.
To format, I need to unmount the file system first. I did it by right-clicking it in the Dolphin file manager and clicking the Unmount context menu item. You can use the umount command as well.
Once unmounted, format it using the following
sudo mkfs.ext4 /dev/sdc1
The above command gives the following output
mke2fs 1.42.12 (29-Aug-2014)
/dev/sdc1 contains a ext3 file system labelled 'WinVM'
        last mounted on /media/sumitkm/WinVM on Sun May 22 16:13:34 2016
Proceed anyway? (y,n) y
Discarding device blocks: done
Creating filesystem with 29304950 4k blocks and 7331840 inodes
Filesystem UUID: 6fdb55c8-95f9-4591-a76e-f5b0ab85a606
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Dolphin will auto-mount it, but it will use a big GUID as its label. To fix the label use the e2label command.
First we check for the existing label; it comes back blank
sudo e2label /dev/sdc1
Next, we apply the label WinVM
sudo e2label /dev/sdc1 WinVM
Next, we check the label again and confirm it is WinVM
sudo e2label /dev/sdc1

Finally, don't forget to change the owner. Since our mount point was previously defined, it will be picked up automatically as soon as you apply the label. However, the ownership will be root's. Change ownership back to yourself using the chown command
sudo chown -R sumitkm /media/sumitkm/WinVM

Where sumitkm is the username and the folder is the mount folder.
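To confirm that the label and ownership stuck, a quick check can be sketched like this (device and user names as used above):

```shell
# blkid should show LABEL="WinVM" TYPE="ext4" for the partition
sudo blkid /dev/sdc1
# The mount folder's owner should now be sumitkm, not root
ls -ld /media/sumitkm/WinVM
```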

Taa daa, you are done!
P.S. This happens to be my first post using my custom-built Electron JS based blog editor for Linux and OSX. Check it out at (a how-to article on Electron JS has been in the works for the last 4 months now ;-)… it will see the light of day someday)

Getting started with NodeJS – Part 1: Fumbling around

I’ve been meaning to try out NodeJS for a while now, and finally got around to doing it over the last few days. I thought I would share my experience as I go along.

Update: Those ‘few days ago’ are actually a couple of months now 😉

I have used NodeJS as a (build) tool to help me 'compile' front-end scripts: taking dev source code and minifying it into cache-busted deployable code. I use gulp for it and it works pretty okay. Fact is, while writing the gulp script I got pretty interested in NodeJS.

Also given the fact that ASP.NET vNext is pretty much going the ‘Node way’, I thought I should know what the real deal is, before I muck around with ASP.NET vNext.

So here is my first go at building 'something' using NodeJS as a platform, as opposed to just a dev/build tool. The article expects you to have heard of NodeJS and npm (Node Package Manager, something like NuGet, but it runs off the command line and is available on both Windows and *nix). If you have never used either of them, that's fine.



I am using Debian 8 (Jessie).

My readers using Windows fear not, you can use NodeJS on Windows using nearly the same steps, so if you get stuck just let me know and I’ll try and help.


Check the installed versions:

node --version

npm --version


Side note on upgrading Node in Debian: In my previous article I had mentioned that Jessie comes with a Node package by default, but it's a rather old one. I uninstalled it using

sudo apt-get remove nodejs

Thereafter I followed the officially documented setup steps, reproduced here:

curl -sL | sudo -E bash -

sudo apt-get install -y nodejs

This basically downloads the latest package from the official node repository and installs it.

Windows Users: Just get the latest Node installer and rock on! npm is installed as a part of Node.
OSX Users: You guys are all good, just install Node and the rest of the commands should be all the same.


Well, you could use anything you want, from full-on Visual Studio, to Visual Studio Code, or any other IDE/editor that suits your fancy. I am using Atom by GitHub. I am new to Atom as well, so there might be some moments when the experienced Atom user in you winces at my noobish-ness.

The Project

Well, I want to figure out what it takes to use QuillJS wrapped in a KO component, and then save the text in the component into Azure Blob Storage. Simple, right? ;-) The project is called ParchmentScroll. Why? Well, you use quills to write scrolls on parchment paper… 😉 😉 😉

Oh, BTW, QuillJS is a really cool JavaScript library for adding rich text capabilities to your web application. It was open sourced by Salesforce and is available under a permissive BSD license.

So let's get started, but before that let's try wrapping our heads around 'server-side' JavaScript.

JavaScript… umm… TypeScript everywhere (client-side and server-side)

You either love JavaScript or loathe it! I have made my peace with it and I kind of like its dynamic quirkiness. After I started using TypeScript I like JavaScript even better.

Anyway, traditionally we all know how to use JavaScript in the browser. But NodeJS takes JavaScript and runs it through Google's V8 engine on the server, so you can actually write HTTP services in JavaScript. You can have an HTML page hosted on IIS, NGINX, Apache or wherever do an AJAX post to your NodeJS application, written in JavaScript, and get back a response. To put things in perspective for the .NET world: think of writing Web API services, but instead of writing ApiControllers in C# you get to write them in JavaScript… err, TypeScript!

Down to some code… err well… kind of

Since I am using Atom, a lot of the steps I describe here to set up a blank project will look long-drawn compared to Visual Studio's File → New Project → Wizard → Done.

So let's get started.

Open a terminal.

Select/Create your favourite project folder and navigate to it. Mine is at

cd /home/sumitkm/myprojects/demo/parchmentscroll

Packages, their managers and Node JS

The NodeJS ecosystem thrives on a huge repository of third-party libraries that are distributed as packages. Packages borrow the idea from Linux packages: they bundle self-contained units of code/binaries that can be installed and updated using their respective package managers.

In this project I have used three package managers

1. The Node Package Manager aka npm – This is Node’s default package manager and is used to distribute all node packages, binaries and extensions. Fun fact, you use npm to install other package managers :-). So npm is the alpha dog of package managers in Node and is installed by default with Node. Node packages are mostly used for installing dependencies that you will use on the server side. For client side Script/style dependencies you use the next package manager – Bower.

2. Bower – The front-end package manager. Bower installs front-end dependencies that are mostly distributable versions of the libraries or frameworks that you will use, e.g. KnockoutJS, RequireJS, QuillJS etc. To get started with Bower you first need to install it globally using npm as follows.

Please note: if you are not the administrator but have sudo-er rights, you need to prepend sudo to every shell command, unless I explicitly say you don't need one.

npm install bower -g

3. TSD – The TypeScript definitions package manager. While the good thing about TypeScript is that it provides better code management through type enforcement at compile time, the flip side is that you need type definitions for existing libraries written in plain JavaScript. DefinitelyTyped is a nice open source repository of type definitions that users have contributed as they have used existing libraries. While creating a type definition is relatively easy, it's good to have a leg up for existing libraries. So we install TSD, a package manager that helps you retrieve type definitions for the libraries you will use from the DefinitelyTyped repository

npm install tsd -g

We start by initializing an npm 'project'. This creates a package.json file which holds the list of dependencies as well as details like the project name, version, Git repository, author name, license information etc.

npm init

(don’t use sudo here)

This will present you with a series of prompts where you provide the requested details and it will in turn scaffold a package.json file for you. I provided the following details:

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help json` for definitive documentation on these fields
and exactly what they do.
Use `npm install  --save` afterwards to install a package and
save it as a dependency in the package.json file.
Press ^C at any time to quit.
name: (parchmentscroll) 
version: (1.0.0) 
description: A blogging platform built using Node, QuillJS and TypeScript
entry point: (index.js) 
test command: 
git repository:
keywords: QuillJS, NodeJS, TypeScript
author: Sumit Kumar Maitra
license: (ISC) MIT
About to write to /home/sumitkm/myprojects/demo/parchmentscroll/package.json:

{
  "name": "parchmentscroll",
  "version": "1.0.0",
  "description": "A blogging platform built using Node, QuillJS and TypeScript",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": ""
  },
  "keywords": [
    "QuillJS",
    "NodeJS",
    "TypeScript"
  ],
  "author": "Sumit Kumar Maitra",
  "license": "MIT"
}

Is this ok? (yes) yes

If you do ls now, you’ll see that a package.json file exists in the folder.

Here on, you have to decide your project structure. There are lots of sensible defaults, you can look up on the net. I am trying out one that I feel comfortable with. I may change it as we go along and build the project.

Since there are no csproj files or equivalent (package.json is a distant cousin, more like a .sln file than anything else), I am going to create top-level folders as projects. So I create two main folders

(no sudo required)

mkdir www

mkdir server

Next we'll initialize the TypeScript configuration file, tsconfig.json
(no sudo required)

tsc --init

This creates a TypeScript config file that helps the TypeScript compiler with the location of the .ts files in the project and other configuration items. If you open the file in an editor you'll see the defaults:

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es3",
    "noImplicitAny": false,
    "outDir": "built",
    "rootDir": ".",
    "sourceMap": false
  },
  "exclude": [
    "node_modules"
  ]
}


The compiler options are the same ones available via the command line. I tend to remove the outDir attribute completely. This results in the .js files being generated in the same folder as the .ts files. This fits better with my deploy script that we'll see sometime in the future.

The exclude array tells the TypeScript compiler which folders it shouldn't look at. Currently only node_modules is excluded.

The final tsconfig.json file we are starting with is


{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es3",
    "noImplicitAny": false,
    "rootDir": ".",
    "sourceMap": true
  },
  "exclude": [
    "node_modules"
  ]
}
This completes our ‘File->New Project’. Here on we’ll get on with some real code.

Application layout in a little more detail

In the previous section we created two folders, server and www, as our two 'projects'. The server folder will be the root for all the server-side logic, and the www folder will hold whatever resources the browser needs to serve up the web page. The folder names basically help us with a mental segregation of what goes where.


NodeJS can open ports and serve content on them if you want it to. But we don't want to go that low-level. Instead we'll get help from a framework called ExpressJS to do the low-level stuff of opening/listening to ports, parsing requests, sending back responses etc. Basically we'll use ExpressJS to bootstrap the application. The handy bit is that Express can serve up static files as well, so we'll use the same framework to host the front-end and handle backend requests/responses.

Down to some code, finally!

Getting started with ExpressJS

Setting up anything in Node basically means npm install. Express is no different. In the 'parchmentscroll' folder run the following

npm install express --save 

--save tells npm to update the package.json file with this particular dependency. So when you pull the sources into a new folder, all you have to do is npm install and all dependencies listed in package.json will be installed for you.
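To illustrate what --save records, here is a self-contained sketch that writes a minimal package.json by hand (the version number is illustrative, not what npm will actually pin) and checks for the dependency entry:

```shell
# Simulate the effect of `npm install express --save` on package.json
mkdir -p /tmp/parchmentscroll-demo
cd /tmp/parchmentscroll-demo
cat > package.json <<'EOF'
{
  "name": "parchmentscroll",
  "dependencies": {
    "express": "^4.13.3"
  }
}
EOF
grep '"express"' package.json   # the dependency entry that --save would add
```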

Time to start up Atom: in the parchmentscroll folder enter

(no sudo required)

atom .

This should launch Atom with the project folder loaded.

Under the server folder create a folder called app.

Add an app.ts file under the app folder. This is going to be our entry point into the application.

But before we start writing code, we have a little more 'configuration' to do.

Back in the console, in the parchmentscroll folder, we'll use the tsd package manager to install the TypeScript definitions for Node itself
(no sudo required)

tsd query node --action install --save

This tells tsd to look for the node type definition and if found install it and save it to tsd.json

Similarly, we install the type definitions for ExpressJS as well

tsd query express --action install --save

Next we’ll install a couple of npm modules that Express JS uses for parsing a request body and serving up static files.

npm install body-parser

npm install serve-static

We also need the typescript definition for these two, so invoke tsd again –

tsd query --action install serve-static --save --resolve

tsd query --action install body-parser --save --resolve

Note the --resolve flag that we've used in the above two commands. This tells tsd to resolve sub-dependencies of the library and get their type definitions as well. You'll note both use another dependency called mime, whose definitions get installed automatically.

Back in Atom, paste the following code into app.ts and save the file.

/// <reference path="../../typings/tsd.d.ts"/>
import * as express from "express";
var app = express();
var bodyParser = require('body-parser');
app.use(bodyParser.json()); // for parsing application/json
app.use(bodyParser.urlencoded({ extended: true })); // for parsing application/x-www-form-urlencoded
app.use(express.static('www')); // serve static files from the 'www' folder
var server = app.listen(3001, () => {
    var host = server.address().address;
    var port = server.address().port;
    console.log('Example app listening at http://%s:%s', host, port);
});

– This code initializes express.
– Initializes an instance of the body-parser module and sets it up to handle HTTP request bodies of type application/json.
– Sets up the bodyParser module to handle parsing of URL-encoded HTTP requests.
– Sets up express to serve static files from the 'www' folder (which is currently empty).
– Finally, it sets up the express instance to listen on port 3001 and, once the server starts, print a console message.

With the code in place, switch back to the terminal and, in the parchmentscroll folder, run tsc.


The code should compile silently and come back with no messages.

Next we try to run the app using the following command

node .

The . tells Node to start up using the 'main' entry in package.json. However you'll get an error at this point.

Error: Cannot find module '/home/sumitkm/myprojects/demo/parchmentscroll'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:289:25)
at Function.Module.runMain (module.js:467:10)
at startup (node.js:136:18)
at node.js:963:3

This is because when we set up our package.json we said index.js was our 'main' file. Problem easily fixed: switch to Atom and open package.json.

Set the “main” attribute to “server/app/app.js” instead of the initial “index.js”.

Save the file and flip back to the terminal. Run node . again

node .

This time you should see a message like the following:

Example app listening at http://:::3001

If you open your browser and go to localhost:3001/ you'll get a message saying "Cannot GET /"

So switch back to Atom and add a file called index.html under www (the static middleware serves index.html for / by default)

Add a bit of hello world markup

<h1>Hello Node JS</h1>

Save the file.

Refresh the browser and voila!
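The same check works without a browser, from a second terminal (a sketch, assuming the server from earlier is still listening on 3001):

```shell
# Fetch the static page Express is now serving; should echo the hello-world markup
curl -s http://localhost:3001/
```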


Phew! Lot of work for a Hello World!

To sum up…

That may have seemed like a lot of up-front work, but all of it can be automated and scaffolded if we wanted to. Open source tool chains are much lighter weight compared to enterprise apps like Visual Studio. However, they give you a lot more freedom to mix and match and, hey, all of them are actually free, without you signing away your keystrokes in some EULA.

We have not even scratched the surface of NodeJS yet. In the next part, I'll jump straight into more real-life NodeJS concepts like routing and middleware, and show how to build front-end clients as well as HTTP services with it.

To be continued… (oh and wish you all a Happy new 2016)!


Pulling the plug (on Windows10) and moving to Linux on my desktop

If you have followed my recent posts, you know I am running an experiment to see what it takes to wean myself off non open source OSes. Over the past weekend I decided it was time to walk the talk and pull the plug on Windows 10 on my desktop. To be honest, it is not possible for me to avoid Windows 10 entirely, because my current dev platform at work is heavily dependent on Windows (and Visual Studio), but the experiment is to see if I can avoid using Windows/OSX when not working and for my hobbies. So far there are some glaring gaps in my requirements (gaps that I know have solutions), but I used to skirt them by going back to Windows, so I decided to pull the plug first and then figure out how to solve those issues. So here's how I went about it.

Virtualizing current Windows system

I wanted to convert my current desktop into a VM and then pave the machine to run Debian Jessie. Luckily I have two Disks – a 250 Gig SSD for OS and software and a 1TB HDD. This makes things really easy. If you don’t have a second disk, have a spare external disk with lots of free space.

Using Disk2vhd to create VHDs out of your current Windows system

Turns out Microsoft (Sysinternals) has a fabulous little utility called Disk2vhd.

Download it → run it → point it to the disks that you want to convert to VHDs and provide an output folder. The default output folder is the one you are running the util from.

I planned to create a VirtualBox VM out of my VHDs, but VirtualBox supports VHDX only in read-only mode, so I unchecked the 'Use VHDX' option in Disk2vhd before initiating the creation, so that I could use the VHD directly in VirtualBox on Linux.

Note: If you used the D: drive of your system for storing files, it's worth including it in the VHD. I made the mistake of not including it, and as a result, later when Windows started up, it was iffy about OneDrive, because I had moved all my account folders to D:. There is no data loss, but you'll have to jump through some hoops to get the VM behaving exactly as it did on the desktop.

Once the VHD creation is complete, you are good to go. This is actually a very good way of backing up Windows; I wonder why I didn't do it more often in the past. Anyway, done now.

Installing Debian with KDE Plasma

In the past I have played around with Cinnamon as my desktop of choice, though on the laptop I installed both Cinnamon and KDE. In that setup Cinnamon was the default, but KDE apps were around. This time I wanted to go all-KDE and see their Plasma desktop in action, so I selected only KDE when asked for my choice of desktop.

When I logged in the first time I was very impressed with how smooth the Desktop experience was. It has elements of Windows, OSX and Gnome in it, but it is really really refreshing.

Cinnamon supports all the Windows shortcuts by default, like WinKey to bring up the Start Menu, WinKey+E for the file explorer and so on. KDE seems to need a bit of tweaking to get it to work like Windows; maybe it has OSX keys by default, not sure. Anyway, I was easily able to set up WinKey+Left and WinKey+Right for window alignment and WinKey+R for bringing up the application search. WinKey doesn't bring up anything on its own though (yet).

Though Konqueror is the default browser on KDE, I set up IceWeasel (Firefox for Debian) as my default.

I think I like Plasma, but time will tell if it's just new-fangled-toy syndrome or if it will actually make me more efficient than Windows or OSX.

Restoring Windows and a major SNAFU

Once I got Debian going I set it up using the steps in my previous posts. That went smoothly.

I followed the official Debian documentation to install VirtualBox.

On VirtualBox I created a new VM, selected '64-bit Windows 8.1' and pointed it to the VHD I had created earlier with Disk2vhd. (I did make a copy of the original VHD, knowing it's the only way to get my Windows machine back if anything went south.) I had to give the VM 8 gigs of RAM and select the ICH9 chipset for Windows to recognize the sound card correctly.

However, after Windows 10 re-initialized itself and came back, it refused to activate, saying the ID had been blocked. Now, I didn't put in a key for Windows 10 specifically; I had upgraded from Windows 8.1. But Windows kept refusing to recognize the 8.1 key. So currently I have a working VM that's not activated. This possibly means I'll have to repave the entire machine and go back to Windows 8.1 (which wouldn't be so bad, actually). But that's for another day. This is just a heads up: Microsoft doesn't really want you to move your Windows license anywhere other than where it's installed, even if it happens to be the same machine where you had it initially. This is the type … oh well, I promised myself I wouldn't rant, so stop.

Replacing OneDrive

A cloud-synced storage platform is the only thing I need to do my work efficiently. While I could easily use Dropbox as a paid replacement for OneDrive, I lost a bit of faith in Dropbox with their policy changes in the past.

So I am working on a little project of my own that allows me to maintain a cloud repository of important files (mostly pictures of scenery I capture ;-)…) . It’s tentatively called KalliopeSync and uses Azure (I know about the irony) Blob storage at the moment.

The cost of using Azure Blob Storage for my files is less than the cost of a latte per month. So I tell myself on a random day of the month that today I am paying for my cloud storage, and skip the latte ;-). There you go, free cloud storage 🙂 🙂 :-).

It adds to my long list of personal projects, and yes, that CommitStrip is meant for me, but hey, it's the idea that counts… or maybe not, whatever!

Looking ahead

I am hoping that, by forcing myself out of my comfort zone, I will explore Linux more and expand my toolbox of web development tools, technologies and platforms.

I don't hate Microsoft or anything; it's just that with Windows 10 we've come to an ideological crossroads that forces me to take the path less traveled. If Windows ever came with a "Leave me alone" button that, in a single click, transferred ownership of my computer and data back to me, I would happily pay for it and use it. Till such time…

P.S. I will miss Live Writer… hope it gets open-sourced some day…
