Windows 10, the story of ‘Default settings’

Okay folks, this is a rant. You have been warned!

TL;DR: Windows 10 is a privacy nightmare if you installed it with default settings. I hate those settings! The good news is that, unlike Google’s omnipresent web tracking, you can turn most of it off.

The longer version

I’ve been a Windows 10 Insider for a long time and knowingly gave up my usage history to ‘help make Windows better’ (and of course use the latest and greatest Windows)! I had one VM on my main desktop, and when I got an SSD for my old laptop I installed Windows 10 as its primary OS (boy, was that a mistake, but that’s for later). Having used it full-time for over a month and part-time for another month, I was extremely disappointed with its performance (or lack thereof) on old hardware, but kept deferring judgment till the final release arrived. Well, it arrived this morning and boy did it annoy the hell out of me! It wasn’t about performance (on a year-old desktop you can barely tell the difference); it was about the default settings that came out of the box for an upgrade from a Windows 8.1 that I had purchased legitimately!

Default settings nightmare

As a self-respecting software developer, I never install the Express/Recommended/Default configuration of any software. I like to know what’s getting installed. I guess I have Java and its toolbars to thank for that!

In the past, Windows 8.1 always had two clear options, “Express” and “Custom”. The buttons were equally sized, right next to each other. Express was the default, and it said clearly that it was going to sniff into everything you do on your computer. Selecting “Custom”, however, Windows would turn off all the naughty bits and leave a few for you to toggle yourself. This is what Windows 8.1 looked like:

image

When Windows 10 starts configuration you get this:

image

‘Customize settings’ is now obscured as a link on the left, as if it leads to more text like the ‘Learn more’ link above it.

Before you click on ‘Customize settings’, do you care to read the consequences of “Use Express settings”?

Paragraph 1

In de-jargonized form, the first paragraph says: Windows will send what you say (“personalize your speech”), what you type, your contacts and calendar details, what you view, and what you do to Microsoft (“…along with other associated input data”). What do you get out of it?
– A computer you can talk to that will probably respond correctly 60–70% of the time (Cortana). I am sure someone at Microsoft imagined a floor full of developers in an enterprise IT shop shouting “Hey Cortana” and thought it was a brilliant idea.
– Faster auto-completion when you are typing email addresses?!?! (I mean, come on…)
– Better handwriting recognition on touch devices by ‘teaching Microsoft’s central AI server about handwriting patterns’ (my guess here).

Paragraph 2

The second paragraph says: Microsoft will replace your name with an ID and reveal that ID’s location and location history to whoever they can sell the information to. I don’t know if this ID is tied to your Microsoft account; if it is, there you go – all the advantages of the OS with regard to settings syncing come with the additional baggage of being sold out to advertisers. By the way, the advertising ID existed in Windows 8 as well, so this is not really Windows 10 specific. What do you get in return?
– Better targeted ads, not only in your web browsing but in ALL the FREE apps that Microsoft (and its partners) are going to ‘give you’… (thanks but no thanks, keep your free stuff)!

Paragraph 3

The third paragraph says, in so many words: we’ll send everything you do in the browser to Microsoft. Apparently it’s good for you. All the feature does is vet the internet links you visit, presumably in Edge/IE only. Microsoft keeps a blacklist, and if you visit a blacklisted site you’ll be suitably warned. In exchange, all your browsing history is sent to Microsoft.

Now if you are alarmed by this, your favorite Chrome browser does the same, and is unfortunately owned by an advertising company whose business is to make money off you by selling your info. The very reason I am (was?) a Windows fan is that this did not happen on it. Welcome to Windows 10.

(Yes, phishing is the worst possible online offense and is most commonly used for hacking/installing malware, but detecting phishing is a global thing – crawl faster and deeper; why bother with my browsing history?)

Paragraph 4

‘Automatically connect to suggested open hotspots and shared networks’ is the expanded form of the feature called Wi-Fi Sense. Someone at Microsoft thought it would be brilliant for them to store all the WiFi passwords in the world so that they could make it easier for your Facebook/Skype friends to connect to your WiFi. How crazy is that?
Well, there is a lot of FUD on the internet about what it actually does. My understanding is that the Express settings do not start sharing by default; they just start collecting by default. To start sharing you have to connect Windows to your Facebook account and then share each WiFi network you connect to, one at a time (read Ed Bott’s article on ZDNet for details).
Whatever – even collecting my WiFi passwords by default is unacceptable behavior. Microsoft is doing it because WiFi information tied to other metadata helps them pinpoint your location – to sell targeted ads. They don’t really need the passwords for that as far as I understand! Again, their reasoning seems to be: Google did it and got away with it, so can we!!!

Psssst MS, when I took my router from the US to India and gave it to my sister, Google Maps still thought I was in the US and brought that location up as the default. Not a brilliant strategy, I say!

Paragraph 5

‘Send diagnostic information to Microsoft’ means it sends crash reports to Microsoft. I usually turn this on selectively, depending on what type of computer I am on, what kind of internet bandwidth I have, and so on. I’ve let Visual Studio and its derivatives (SQL Management Studio etc.) run in diagnostics mode silently in the past. However, Microsoft has now made this mandatory. There is no opt-out! There are three levels of information ‘sending’ (maybe more depending on the variant of Windows 10 you have; I have the Pro edition), but you simply cannot tell Windows not to send crash reports. Who pays for the bandwidth on a metered connection, Microsoft? This may not strictly be a privacy issue (though it sits in the Privacy section of the Settings panel), but it is an imposition I have to deal with on what is ironically called a ‘Personal’ Computer.

What now – is there a way out?

Well, kind of, but you have to be very careful right from the start.

After Windows has downloaded and finished its Eye of Sauron installation and file copying, the first thing you will see is the settings screen shown above.

1. Click on the Customize Settings link. I disabled all the Personalization stuff but kept Location on. Keeping it on enables a few ad-related things in browsing that you can turn off later, but to be safe just turn the dang thing off.

image

2. Next, you can keep SmartScreen on and disable the rest. I had kept ‘Send error and diagnostic information to Microsoft’ on, only to find out there isn’t really a way to turn it off ever, so on/off doesn’t matter; I’ll show how to reduce the level in a bit. But do make sure you turn off automatic connection to open hotspots – this is important for laptops and portable devices. Also turn off automatic connection to networks shared by your contacts. Slyly, this is not referred to as Wi-Fi Sense here in the settings.

image

I don’t have any more screen captures, so I am guessing that ends the setup options.

Reviewing your Privacy settings

This is the part I still appreciate about Microsoft: we have some semblance of control.

Once the setup completes, go into the new control panel, a.k.a. the ‘Settings’ app.

To get to the Settings app, press the Windows key and start typing ‘Settings’. When you see the Settings app at the top of the list, hit Enter.

Privacy – General Settings

image

The last setting, ‘Let websites provide locally relevant content by accessing my language list’, might be enabled. Decide what you want to do with it; I’ve disabled it.

Don’t be confused by the ‘Some settings are managed by your organization’ message. I started getting that after I turned off a few more settings under Privacy; I don’t quite remember the exact sequence of events. Yes, it’s a dumb message that makes no sense on a computer sitting at home.

Privacy – Location tab

image

On my desktop I have no need for Location, so I just turned it off. If you turn it off, apps will not be able to use it. Alternatively, you can keep it on and turn off access for individual apps, just like you do on phones.

Privacy – Camera Tab

The following apps had access to my web camera by default. I turned off access for a bunch of them.

image

Privacy – Microphone Tab

This one is rather important, so not only do you have to set it up in the beginning, be sure to come back to it occasionally to review it.

image

Speech, Inking and Typing

While speech is a strict no-no for me, inking would have been okay if I had a touch-screen device I could write on (read: Surface tablets). If you don’t, you are better off telling Windows to buzz off and not bother getting to know you.

image

The ‘Get to know me’ button indicates the feature is disabled. This is because we turned it off on the very first Customize settings screen.

Privacy – Account info

I’ve set my name and picture to be visible, but I have no idea what other account info is being ‘sold off’.

image

Privacy – Contacts

There are two weird apps here trying to access my Contacts. I have no idea what they are and I don’t really want to give them access. So, Windows, if you give a crap about what users think, either explain what those two apps (App connector and Windows Shell Experience) are, or expect them to stay turned off.

image

Privacy – Feedback & diagnostics

This is the screen I was talking about. You cannot set ‘Diagnostic and usage data’ to None or Never; Basic is the lowest you can go.

image

When I moaned about it online (okay, I called the privacy settings ‘vile’), Clemens Vasters, a Microsoftie whom I respect a lot, came back saying that it’s got nothing to do with privacy. This is his exact tweet.

Follow the conversation on twitter, it’s an interesting one.

I am guessing the diagnostic data somehow tells Microsoft whether you are part of a botnet, or maybe more. Either way, just be aware that if your browser crashes while you’re watching things you don’t want others to know about, there is an off chance Windows might send off a screenshot of what you were watching. I wonder what companies using Office products to create competitive and potentially secret documents think about that.

That comes from their ‘Learn more about feedback & diagnostics settings’ link. Give it a read.

Internet Bandwidth at mercy of Windows

That was privacy; now a few things if you are on a metered internet connection like I was in India two months ago. My monthly quota was 4 GB at 2 Mbps; after 4 GB it is unlimited data at 512 Kbps. Calling such a connection ‘broadband’ is laughable, but that’s the reality.

In such a scenario you have to tweak a few more settings to optimize.

Go back to the Settings home and click on ‘Update & Security’.

image

Click on Advanced options.

image

Next click on ‘Choose how updates are delivered’

image

Microsoft has introduced peer-to-peer delivery of updates, but guess what, it wants to use your internet connection to deliver updates to others on the internet. Ahem, why? Using peer transfer on the local network is commendable, but over the internet, and that too by default? What were you thinking, Microsoft? I have changed the setting to ‘PCs on my local network’.

Paranoia Level 5/5 – Banishing Bing

When in India, over my piss-poor internet, every time I tried to use the new ‘Search Windows’ box to find/launch apps, the system would irritatingly freeze while trying to do a Bing search on every keystroke. It got so bad that I edited my hosts file and pointed bing.com to 127.0.0.1. This immediately speeds things up, because the launcher thinks you are not connected to the internet and searches Windows locally only.
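For reference, the hosts file lives at C:\Windows\System32\drivers\etc\hosts, and the entry in question looks like the lines below (adding www.bing.com as well is probably a good idea); this is just an illustration of the trick described above:

127.0.0.1    bing.com
127.0.0.1    www.bing.com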

But earlier today I found out that you could switch off Web searches as follows.

image

That will improve your ‘Search Windows’ performance significantly over slow internet connections, on older hardware, or both (funny that it’s called ‘Search Windows’ but tries to search the internet by default)!

So if you still want to use Bing elsewhere (bwahaa haa haa, sorry, couldn’t help it) and use ‘Search Windows’ for local system searches only, turn off Web Results.

Banishing Bing might reduce data transmission over the internet; by how much, I have no clue.

Ever since I discovered the Windows key shortcut’s use as an app launcher (in Windows 7), that has become my default method of starting apps. Windows 8 and 8.1 caused a bit of disruption but still worked almost the same way. I’m glad Windows 10 has set it back to roughly the Windows 7 behavior, and thanks for providing the switch to turn off Web Results.

So there you go, everything I did to set up Windows 10 and start using it. After this, I used Wireshark to sniff packets and found some data still going to Microsoft servers with no applications ‘visibly’ running in the task bar. Thankfully it was over HTTPS. I also found my machine was somehow talking to Amazon (the music downloader has put something in deep somewhere) and Dropbox. I haven’t looked at it any deeper; my paranoia level is maxed out at this point.

Conclusion

Many years ago I tried to search for a substitute medicine for my son on Bing. It returned some offensive results, resulting in me dropping a lot of F-bombs at Bing (to their credit, the team promptly removed the results, and I apologized for cussing). At the time I blogged about how Microsoft had completely missed the boat on search context because it didn’t care to keep user context.

With Windows 10, Microsoft has done a U-turn on that, but in the process has created privacy-invading defaults that put Google’s tracking to shame. Does Microsoft want to go from the number one enterprise and development platform to the number one botnet owner, or does it want to become a cheap online ad agency dealing in fake ad clicks???

None of the data it is seeking really helps its users. All the claims of improved usability are simply about learning user behavior for the singular goal of selling ads – and remember, Microsoft is selling off/closing down its own online ad business, so literally all it is doing is selling your data to third parties. I won’t be surprised if Google signs up!!!

We are slowly broiled frogs, Windows 10 just finished cooking us

It would be dishonest not to mention how, as users, we have been slowly broiled by services like Facebook, Twitter and Google into becoming their products by giving away personal information. But Microsoft using that as an excuse to set up such defaults in an OS used by ~90% of computer users is still improper! (And I am just an old fart!)

Constructive suggestions?

Clemens, the gentleman he is, asked me to email him a list of constructive suggestions. Well I have a few:

1. Set defaults to what I have described above, to start with.
2. Seek permission when applicable, e.g. if I want to use Cortana, tell me clearly what it entails before enabling it, and enable data gathering for Cortana only (segue into 3 below).
3. Drop the ‘click Yes and all your data is ours’ global behavior. If I allow the Maps application to access my location, I am giving it to the Maps application because I either derive value from it knowing my location and/or I trust the Maps application to use my location information responsibly. Use my location information in the context of the Maps application. You don’t get a free pass to share my location information with every Tom, Dick and Harriet, and neither does ANY OTHER APP. You can keep a global setting, but I need to see more value than ‘better ads’ to ever enable it.
4. If you care about being a platform, the ‘Search Windows’ box should support search providers for Web Search. Windows search can and should remain native.
5. Explain mystery apps better so I know what they are. Enough with the bolt-ons.
6. Tell me who you’ve shared my data with.
7. Last but most important – Give me control over the data you take from me. My computer, my data! I need to know what Windows has taken to its servers, how it has used it and I need to be able to clear it out at the source if required.

My last word about ad-sponsored anything

Google has somehow convinced the world that you can provide literally any service for free as long as you can somehow stick ads on it. This is a misconception beyond reason. Google’s gravy train runs on bot views and its clients’ high hopes that someday it will provide better ROI than anything else. Sooner or later people will realize that spending money on ads is like buying housing debt: there is nothing of real value in it.

For example, remember the good old days of terrestrial television? Ads were few, programs were few but tolerable, and you barely paid anything to watch them (after you had bought the TV and the aerial). Fast forward to now: you see program clips between series of ads, and that is after paying a handsome sum of money to ‘subscribe’ to those channels every month. Most of the time you just get re-runs anyway. So basically you are paying to see ads interspersed with some programs. And what did we do to break free from it? We created services like Netflix that don’t show ads. Life has come full circle. Now tell me one good reason why this won’t be the case in future for ‘free’, ad-supported software??? Think!!!


Controlling Relays from the web using a Pi, OWIN, Mono and SignalR

In my previous article I built a C# library to communicate with the GPIO ports of the Raspberry Pi 2 (and Raspberry Pi B+). We also built a small console application to trigger the ports. While the console application was a nice proof of concept, the real fun is being able to flip switches from remote locations (when not at home, over the phone, etc.). As a first attempt at this, I built an OWIN-based self-hosted web site that communicates with the Pi over HTTP using SignalR.

The Idea

The idea is pretty simple: the GPIOManager is embedded in a console application that is also a SignalR client.

The SignalR client has SwitchOn and SwitchOff methods that take GPIO pin IDs as parameters and call upon the GPIOManager to do the switching.

Any web application can host the SignalR Hub and a SignalR web client. The web client relays the actions to the remote client. Simple!

Before we start

In reality I built the sample app first and then did the refactoring, but since the code is already refactored, I might as well mention it beforehand.

The PiOfThings.net

I got so involved with this whole IoT thing that I decided to go the whole nine yards:

– Bought the domain www.piofthings.net. It is currently the Github.io page for the piofthings account. I’ll eventually move it to docs.piofthings.net and replace the www with something more interesting.
– Created a new Github account http://www.github.com/piofthings.
– Refactored the code and checked it into Github.
– Created the NuGet packages PiOfThings.GpioCore and PiOfThings.GpioUtils and uploaded them to www.nuget.org.
– So now you can install the dependencies using the NuGet Package Manager:

install-package PiOfThings.GpioCore

install-package PiOfThings.GpioUtils

As you can guess from the namespaces, the refactoring involved separating the helper classes into a separate DLL. This is because I wanted to reference the utility enumerations in the web application, but the web application didn’t need the manager. So now you can install the Core package, which has the manager, and it will pull in GpioUtils automatically; or if you only want the utils, just install PiOfThings.GpioUtils directly.

With all the refactoring in place, we are now ready to take the next step.

Creating the SelfHosted Owin Web App

I started with a Console App in Mono and added the following Packages

Microsoft.Owin

Microsoft.Owin.SelfHost

Microsoft.Owin.Host.HttpListener

Microsoft.AspNet.SignalR.Core

Microsoft.AspNet.SignalR.JS

and of course PiOfThings.GpioUtils

These all install certain sub-dependencies so you’ll end up with a bigger list of packages

My final packages.config looks like this

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="jQuery" version="2.1.3" targetFramework="net45" />
  <package id="Microsoft.AspNet.Cors" version="5.2.3" targetFramework="net45" />
  <package id="Microsoft.AspNet.SignalR.Core" version="2.2.0" targetFramework="net45" />
  <package id="Microsoft.AspNet.SignalR.JS" version="2.2.0" targetFramework="net45" />
  <package id="Microsoft.Owin" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Cors" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Diagnostics" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.FileSystems" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Host.HttpListener" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Hosting" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Security" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.SelfHost" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.StaticFiles" version="3.0.1" targetFramework="net45" />
  <package id="Newtonsoft.Json" version="6.0.8" targetFramework="net45" />
  <package id="Owin" version="1.0" targetFramework="net45" />
  <package id="PiOfThings.GpioUtils" version="0.1.0" targetFramework="net45" />
</packages>

Adding a SignalR Hub

I added a new class called IoTHub that inherits from Microsoft.AspNet.SignalR.Hub, and added a method Handshake that simply returns true. This is for the client to check whether the server is there when it starts up.

Next we add two methods, SwitchOn and SwitchOff, both taking a GpioId as an input parameter. All these methods do is call the corresponding switchOn or switchOff function on all other connected clients.

The final Hub class is as follows:

using System;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;
using PiOfThings.GpioUtils;

namespace IoTWeb
{
    [HubName("IoTHub")]
    public class IoTHub : Hub
    {
        public bool Handshake()
        {
            return true;
        }

        public void SwitchOn(GpioId gpioPinId)
        {
            Console.WriteLine("Switching ON - " + gpioPinId.ToString("D"));
            Clients.Others.switchOn(gpioPinId);
        }

        public void SwitchOff(GpioId gpioPinId)
        {
            Console.WriteLine("Switching OFF - " + gpioPinId.ToString("D"));

            Clients.Others.switchOff(gpioPinId);
        }
    }
}

Setting up SignalR in the OWIN pipeline

We add a new class called IoTStartup and decorate it with the OwinStartup attribute.

using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(IoTWeb.IoTStartup))]

namespace IoTWeb
{
	public class IoTStartup
	{
		public IoTStartup ()
		{
		}
	}
}

Next we add a method named Configuration with an input parameter of type IAppBuilder. A method with an IAppBuilder parameter is the standard convention for chaining OWIN components.

We set up SignalR in the OWIN pipeline as follows:

public void Configuration(IAppBuilder app)
{
	app.UseCors (CorsOptions.AllowAll);
	app.MapSignalR ();		
}

Getting it hosted

Now to host SignalR. We go back to Program.cs and update it as follows:

using System;
using Microsoft.Owin.Hosting;

namespace IoTWeb
{
	class MainClass
	{
		public static void Main (string[] args)
		{
			string baseUrl = "http://localhost:5000";
			using (WebApp.Start<IoTStartup>(baseUrl))
			{
				Console.WriteLine("Press Enter to quit.");
				Console.ReadKey();

			}
		}
	}
}

We are using WebApp to host all the other OWIN components. The WebApp is hosted at the provided URL (localhost:5000). We could have used SignalR self-hosting too, but you’ll see in a minute why I used WebApp.

Build the application and run from the console


pi@raspberry ~/projects/IoTLightbulb/IoTWeb/bin/Debug $ sudo mono IoTWeb.exe

Next, launch a browser and go to localhost:5000/SignalR/hubs; you should see the generated hub proxy. To ensure it’s correctly generated, scroll down till you can see your hub’s name (IoTHub) and the three functions you created in your hub (handshake, switchOff and switchOn).

image

Great, we have a SignalR hub that can issue commands. Now we need two things: a UI to issue the commands, and a client connected to this hub to receive those commands. Let’s set these up.

Setting up a simple Static File Host using OWIN

In real life we would build a nice ASP.NET MVC app around the SignalR hub, but today we are only demoing stuff. All we need is an HTML page and some JavaScript, so let’s use simple static file hosting to serve an HTML file and the required JavaScript.

1. Update the jQuery dependency (which you got from installing Microsoft.AspNet.SignalR.JS) from 1.6.4 to the latest, using the update-package feature of MonoDevelop from within the solution.

2. Add a folder called Web in your project and move the Scripts folder under Web. It should look something like this:

image

3. Now right-click on the min.js files one at a time, go to quick properties and check ‘Copy to Output Directory’.

image

Do the same for Index.html. This will ensure the Web folder is created wherever the exe file is generated and contains the HTML and the required JavaScript files.

4. Under Web, add an HTML file – Index.html – and add the following markup:

<!DOCTYPE>
<html>
	<head>
		<title>IoT Web</title>
		<script type="text/javascript" src="/Web/Scripts/jquery-2.1.3.min.js"></script>
		<script type="text/javascript" src="/Web/Scripts/jquery.signalR-2.2.0.min.js"></script>
		<script type="text/javascript" src="/signalr/hubs"></script>
	</head>
	<body>
		<h1>Controlling Relays from the web using SignalR </h1>
		<button id="turnOn1">Turn On 1</button>
		<button id="turnOff1">Turn Off 1</button> 

		<script type="text/javascript">
		$(function() 
		{
			var hub = $.connection.IoTHub;

			$.connection.hub.start().done(function () 
			{
				$('#turnOn1').click(function(){
					hub.server.switchOn(17);
				});
				$('#turnOff1').click(function(){
					hub.server.switchOff(17);
				});
			});
		});
		</script>
	</body>

</html>

We have basically done the following:

– Added reference to jQuery, SignalR client (jQuery.SignalR) and the SignalR hub

– Added two buttons with the ids turnOn1 and turnOff1

– Initialized the hub and once the connection started attached a click handler to each button.

– Each click calls the respective switchOn or switchOff method. The number 17 refers to GPIO 17, which we are trying to turn on and off.

5. Finally, in IoTStartup.cs, add the following lines of code to the Configuration function to initialize static file hosting:

public void Configuration(IAppBuilder app)
{
	app.UseCors (CorsOptions.AllowAll);
	app.MapSignalR ();		
	string exeFolder = System.IO.Path.GetDirectoryName (System.Reflection.Assembly.GetExecutingAssembly ().Location);
	string webFolder = System.IO.Path.Combine (exeFolder, "Web");
	Console.WriteLine ("Hosting Files from : " + webFolder);
	app.UseStaticFiles ("/Web");
}

All set with the web page then. Run it again and navigate to http://localhost:5000/Web/Index.html. You should see something like this:

image

Cool. Now let’s build the client that will communicate with this server.

Creating SignalR client to communicate with GPIO ports

Let’s start with another console application; I have called it RelayControllerService. This service does the following:

It creates a SignalR hub proxy and a GpioManager, and establishes a connection with the SignalR server.

Once set up, it does the handshake and then waits for commands from the server.

When it receives a SwitchOn or SwitchOff command, it uses the GpioManager to write to the appropriate GPIO pin.

Before we get started we add references to the following packages:


install-package Microsoft.Owin
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.SignalR.Client
install-package PiOfThings.GpioCore

This should set up all the other required dependencies. My packages.config is as follows:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Microsoft.AspNet.Cors" version="5.2.3" targetFramework="net45" />
  <package id="Microsoft.AspNet.SignalR.Client" version="2.2.0" targetFramework="net45" />
  <package id="Microsoft.Owin" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Diagnostics" version="3.0.1" targetFramework="net45" />
  <package id="Microsoft.Owin.Security" version="3.0.1" targetFramework="net45" />
  <package id="Newtonsoft.Json" version="6.0.8" targetFramework="net45" />
  <package id="Owin" version="1.0" targetFramework="net45" />
  <package id="PiOfThings.GpioCore" version="0.1.0" targetFramework="net45" />
  <package id="PiOfThings.GpioUtils" version="0.1.0" targetFramework="net45" />
</packages>

Now that we are all set, let’s look at the code for the RelayControllerService.

The Constructor

using System;
using Microsoft.AspNet.SignalR.Client;
using PiOfThings;
using PiOfThings.GpioCore;
using PiOfThings.GpioUtils;

namespace RelayControllerService
{
	public class RelayControllerService
	{
		readonly GpioManager _manager = new GpioManager ();

		private IHubProxy IoTHub { get; set; }

		private HubConnection IoTHubConnection { get; set; }

		public RelayControllerService (string url)
		{
			IoTHubConnection = new HubConnection (url);
			IoTHub = IoTHubConnection.CreateHubProxy ("IoTHub");

			IoTHub.On<GpioId> ("SwitchOn", OnSwitchedOn);

			IoTHub.On<GpioId> ("SwitchOff", OnSwitchedOff);

			Console.Read ();
		}

 

We are setting up our service by initializing the GpioManager, the hub proxy and the hub connection. We also register handlers for the SwitchOn and SwitchOff events, which will be invoked when someone clicks the buttons on our web page.

The Event handlers

The event handlers have the same code as our previous sample console application. Whenever a SwitchOn or SwitchOff is received, we first check whether the _manager’s current pin is the same as the requested pin; if not, it is selected. Once selected, we write Low or High as appropriate.

        private void OnSwitchedOn(GpioId gpioPinId)
        {
            Console.WriteLine("SWITCH ON RECEIVED " + gpioPinId);
            if (_manager.CurrentPin != gpioPinId)
            {
                _manager.SelectPin(gpioPinId);
            }
            _manager.WriteToPin(GpioPinState.Low);
        }

        private void OnSwitchedOff(GpioId gpioPinId)
        {
            Console.WriteLine("SWITCH OFF RECEIVED " + gpioPinId);

            if (_manager.CurrentPin != gpioPinId)
            {
                _manager.SelectPin(gpioPinId);
            }
            _manager.WriteToPin(GpioPinState.High);
        }

Connection Initialization and termination

Our service is set up and its event handlers are ready. All we have to do is start the connection and wait for the server. The user can also stop the connection whenever they wish.

StartConnection creates an async task that attempts to connect to the server. Once connected, it calls our Handshake method and reports that the handshake was successful. In future the handshake will be more involved, with the device sending specific device identifiers to the server so that the server can keep track of which device is connected and address that particular device only.

StopConnection simply calls the GpioManager’s ReleaseAll function to release all the GPIO pins and then calls SignalR’s Stop to gracefully close the connection.

using System;
using Microsoft.AspNet.SignalR.Client;
using PiOfThings;
using PiOfThings.GpioCore;
using PiOfThings.GpioUtils;

namespace RelayControllerService
{
    public class RelayControllerService
    {
        readonly GpioManager _manager = new GpioManager();

        private IHubProxy IoTHub { get; set; }

        private HubConnection IoTHubConnection { get; set; }

        public RelayControllerService(string url)
        {
            IoTHubConnection = new HubConnection(url);
            IoTHub = IoTHubConnection.CreateHubProxy("IoTHub");

            IoTHub.On<GpioId>("SwitchOn", OnSwitchedOn);

            IoTHub.On<GpioId>("SwitchOff", OnSwitchedOff);

            Console.Read();
        }

        private void OnSwitchedOn(GpioId gpioPinId)
        {
            Console.WriteLine("SWITCH ON RECEIVED " + gpioPinId);
            if (_manager.CurrentPin != gpioPinId)
            {
                _manager.SelectPin(gpioPinId);
            }
            _manager.WriteToPin(GpioPinState.Low);
        }

        private void OnSwitchedOff(GpioId gpioPinId)
        {
            Console.WriteLine("SWITCH OFF RECEIVED " + gpioPinId);

            if (_manager.CurrentPin != gpioPinId)
            {
                _manager.SelectPin(gpioPinId);
            }
            _manager.WriteToPin(GpioPinState.High);
        }

        public void StartConnection()
        {
            //Start connection
            IoTHubConnection.Start().ContinueWith(task =>
            {
                if (task.IsFaulted)
                {
                    Console.WriteLine("There was an error opening the connection:{0}",
                        task.Exception.GetBaseException());
                }
                else
                {
                    Console.WriteLine("Connected");

                    IoTHub.Invoke<string>("HandShake").ContinueWith(joinGroupTask =>
                    {
                        if (joinGroupTask.IsFaulted)
                        {
                            Console.WriteLine("There was an error calling send: {0}",
                                joinGroupTask.Exception.GetBaseException());
                        }
                        else
                        {
                            Console.WriteLine("Handshake successful - " + joinGroupTask.Result);
                        }
                    });
                }

            }).Wait();
        }

        public void StopConnection()
        {
            _manager.ReleaseAll();
            IoTHubConnection.Stop();
        }
    }
}

And that’s it, we are done. Compile the project, open a terminal, change directory to the /bin/Debug folder and execute


sudo mono RelayControllerService.exe

Initially you should see a successful handshake

image

Next click on the ‘Turn On 1’ button and give it about a second to see the response on the RelayController window

image

If you had the relay circuitry set up as shown in my previous article, the LED would have gone bright red and the relay would have clicked.

Now click on the Turn Off 1 button and the service should respond accordingly.

image

And we are done for today !!!

Conclusion

We stepped up from connecting to GPIO from the console to connecting using a Web page. So we are one step closer to the ‘Internet’ in ‘Internet of Things’. From here on, how you want to build your service to be able to connect to your Pi is entirely up to you. I’ll keep you posted with my progress! Cheers!

Code is up at the same repository https://github.com/sumitkm/IoTLightbulb


Getting started with Internet of things using a Raspberry Pi 2 and Mono

Some of you may have spotted my previous experiments with the $35 wonder computer that is the Raspberry Pi. I have since added two more Raspberry Pis to my collection. One goes into the amazeballs Diddyborg (by @Pi_Borg) and the other is the latest and greatest Raspberry Pi 2, bought on the day of launch in early February.

The Diddyborg is a nice kit created by PiBorg.org. It showcases their motor controller, which can drive 6–8 motors per controller. It also includes the ‘batt bot’ board, which helps prolong the Diddyborg’s battery life. It comes with a bunch of sample programs, like a ball follower that uses optical image recognition via the Pi Camera module. It was a fun little project that I did with junior over Christmas. I also ended up writing a small keyboard driver for it in Python. The current code is up here (not the best Python ever written, you have been warned). There are lots of experiments planned with it, but that’s for another day.

I have been sitting on the sidelines of the Internet of Things (IoT) buzz for a while now, waiting for the ‘right moment’. Apparently the trigger was the launch of the rather capable Raspberry Pi 2 in early February this year. With the bump in spec to a quad-core 900 MHz processor and 1 GB of RAM, the Pi 2 is about as capable as a mid-market contemporary smartphone. At 25 GBP it’s a steal!!! Along with the Pi I bought a microcontroller-controlled 8-relay board. The idea was to toggle lightbulbs on/off from my phone. There are ready-made kits and ready-made bulbs already on the market, so it’s not ground-breaking, but hey, where’s the fun in that, right?

Anyway, I missed the first lot of Pi deliveries but managed to squeak into the second lot. The microcontroller board arrived quickly enough, but the female-to-female jumper connectors took forever to arrive from Hong Kong. Here’s the entire kit:

1. Raspberry Pi 2 + Power Supply + 16Gig SD card (went big assuming Win IoT would be a massive hog).
2. 5V 10A, 8-relay microcontroller board by Anoder. The relays are not opto-coupled (aka solid-state) relays and are ON when low, meaning the connection is ON without any input. This is good in a way: you can expect your lights to stay on (and in control of the mains switch) if your Pi crashes. However, it also means that while programming it you have to send a 1 to turn a relay off and a 0 to turn it on. Kinda reversed! (See the short sketch after this list.)
3. 40 pin Female-Female jumper cables.
4. Optionally I bought a breadboard and a bunch of breadboard connection wires.
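As a tiny illustration of that reversed logic, here is a sketch using the GPIOPinState enum from the C# library built later in this article; the helper itself is hypothetical and not part of the library:

// Hypothetical helper: for this active-low board, drive the input low to energise a relay, high to release it
static GPIOPinState PinLevelForRelay (bool relayOn)
{
	return relayOn ? GPIOPinState.Low : GPIOPinState.High;
}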

Wiring things up

After untold warnings on the internet about not working with mains power, I decided to heed people’s warnings and play with my LED Christmas lights, which already had a 220V-to-12V step-down. (NOTE: I am not a certified electrical engineer in the UK, but I DO know my way around electrical boards, supplies and wiring, and have had enough ‘shocking’ experiences as a kid not to treat a 240V mains supply lightly.) So this blog will not show you anything beyond driving the relay. What you connect to the other side of the relay is up to you. I used a Christmas light that works off a 5V DC supply provided by a built-in control unit.

To run this code you don’t need anything on the other end of the relays. You can hear the relays click quite distinctly, and there is an indicator LED that also gives you ample hint that the relays are working.

Pin Outs

I connected two of the 8 relays to GPIO 17 and GPIO 22. The connections were like this:

GPIOtoRelay

The board on the left is a not-to-scale representation of the Raspberry Pi 2 as seen from the CPU side. The GPIO ports take up more space on my diagram than in the actual board, but you get the point.

I have only labeled some of the pins to keep the diagram clean (actually to get it done quickly enough). Here is a complete pin out if you need one handy!

Like the Pi board, the relay board diagram is representative and not to scale either. The blue boxes are the relays and the red diode symbols represent the on-board LEDs.

Connections

IO pin 2 powers the relay controller, so it’s connected to the VCC pin of the relay board (brown wire on my connector strip).
IO pin 6 (ground) is connected to the ground (first pin) of the relay board (green wire on my connector strip).
IO pin 11 represents GPIO 17 (blame Broadcom, the SoC makers, for the weirdness in numbering schemes). A 0 or 1 on GPIO 17 drives the first relay, hence it’s connected to In 1 on the relay board using the red wire from the connector strip.
IO pin 15 represents GPIO 22. I’ve connected it to In 2 on the relay board using the blue wire from the connector strip.

That covers the wiring between the Pi and the Relay board.

Switching Relays using Python

To test the controller board, I first tried a sample Python program. The source is here. As you can see, it’s pretty rudimentary, but it did what it was asked to do: switch the lights off for 20 seconds.

But I wanted more. I wanted to drive the GPIO ports over the web. I tried reading up on making web services in Python and realized there wasn’t a quick way out. That’s when I came across Jan Tielens’ excellent series on getting started with .NET on the Raspberry Pi.

This article by Jan will get the Pi set up with Mono + MonoDevelop. Jan starts with a barebones Pi and walks you all the way up to the Mono setup. If you just want to set up Mono, here is the gist:

1. Update the Wheezy package list to the latest

sudo apt-get update

2. Upgrade Wheezy to latest and greatest

sudo apt-get upgrade

3. Retrieve and install the Mono GPG signing key

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

4. Next, add the repository references to apt. This is where the Mono repositories are (two repository references).

echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

echo "deb http://download.mono-project.com/repo/debian wheezy-apache24-compat main" | sudo tee -a /etc/apt/sources.list.d/mono-xamarin.list

5. Repeat steps 1 and 2 to make sure all dependencies are in place.

6. Install Mono (finally)!!!

sudo apt-get install mono-complete

7. Install MonoDevelop (for on Pi development)!

sudo apt-get install monodevelop

A lot of people develop on Visual Studio and deploy on the Pi. But MonoDevelop is a very capable IDE and I wanted to develop on device. So I opted to use MonoDevelop for my Pi development.

Once I had C# running on the Pi, the world was my oyster. I realized how brave a path Miguel de Icaza had blazed when he started the Mono project. I am no OSS fanatic, but I am no shill either. Heartfelt thanks to the Mono team for sticking around against lots and lots of odds. Yes, now I believe you guys love C# more than Microsoft itself does!

Talking to GPIO pins using C#

Once I had C# going, I followed some more of Jan’s tutorials to see how to communicate with the GPIO pins. It turns out the GPIO port is exposed as a folder (like most devices on Linux), and all I had to do was write the appropriate ‘text’ to the appropriate files in that folder. Since this all sounded very easy, I assumed someone had already done the hard work. Indeed, there are two nice libraries on Github. However, when I tried to use RaspberryGPIOManager I found it was locking itself up (on the Raspberry Pi 2). So my friend Raj and I got busy following Jan’s tutorial and writing the code ourselves. Sure enough, Raj had the code going in about 45 minutes and we were able to talk to one of the GPIO ports and do the same thing the Python program was doing.

The next day I sat down and wrote a rudimentary library in line with RaspberryGPIOManager, and now it’s a neat reusable component.

Show me the code

Okay, enough rambling, time to see some code.

I started with a simple Console application on Mono and split up the reusable component into a separate class library. So essentially I have two projects

1. My IoT library (PiOfThings) that for now has the GPIO interaction library (GPIOManager) and

2. Sample code (GpioCs).

GPIO Manager

The GPIO Manager project has two files: GPIOManager.cs, which is the actual driver, and GPIOPinReferences.cs, a bunch of helpers and enumerations.

GPIOPinReferences.cs

This file has two static classes and two enums. The GPIOPinState enum does what it says: it encapsulates the pin states, which are essentially 0 or 1, with 3 indicating an error/unknown condition.

public enum GPIOPinState
{
	Low = 0,
	High = 1,
	Unknown = 3
}

The second enum, called GPIOId, maps directly to the GPIO numbers of the Raspberry Pi 2 (or B+) I/O header. GPIOUnknown is assigned the value -1 for any non-GPIO pins.

public enum GPIOId
{
	GPIOUnknown = -1,
	GPIO02 = 2,
	GPIO03 = 3,
	GPIO04 = 4,
	GPIO07 = 7,
	GPIO08 = 8,
	GPIO09 = 9,
	GPIO10 = 10,
	GPIO11 = 11,
	GPIO14 = 14,
	GPIO15 = 15,
	GPIO17 = 17,
	GPIO18 = 18,
	GPIO22 = 22,
	GPIO23 = 23,
	GPIO24 = 24,
	GPIO25 = 25,
	GPIO27 = 27
}

Next we have a helper class that maintains two dictionaries: GPIO-to-pin-number and pin-number-to-GPIO.

public static class GPIOPinMapping
{
	private static Dictionary<GPIOId, int> GPIOToPin = new Dictionary<GPIOId, int>
	{
		{ GPIOId.GPIO02, 3 },
		{ GPIOId.GPIO03, 5 },
		{ GPIOId.GPIO04, 4 },
		{ GPIOId.GPIO07, 26 },
		{ GPIOId.GPIO08, 24 },
		{ GPIOId.GPIO09, 21 },
		{ GPIOId.GPIO10, 19 },
		{ GPIOId.GPIO11, 23 },
		{ GPIOId.GPIO14, 8 },
		{ GPIOId.GPIO15, 10 },
		{ GPIOId.GPIO17, 11 },
		{ GPIOId.GPIO18, 12 },
		{ GPIOId.GPIO22, 15 },
		{ GPIOId.GPIO23, 16 },
		{ GPIOId.GPIO24, 18 },
		{ GPIOId.GPIO25, 22 },	
		{ GPIOId.GPIO27, 13 }
	};

	private static readonly Dictionary<int, GPIOId> PinToGPIO = new Dictionary<int, GPIOId>
	{
		{ 1, GPIOId.GPIOUnknown },
		{ 2, GPIOId.GPIOUnknown },
		{ 3, GPIOId.GPIO02 },
		{ 4, GPIOId.GPIO04 },
		{ 5, GPIOId.GPIO03 },
		{ 6, GPIOId.GPIOUnknown },
		{ 7, GPIOId.GPIOUnknown },
		{ 8, GPIOId.GPIO14 },
		{ 9, GPIOId.GPIOUnknown },
		{ 10, GPIOId.GPIO15 },
		{ 11, GPIOId.GPIO17 },
		{ 12, GPIOId.GPIO18 },
		{ 13, GPIOId.GPIO27 },
		{ 14, GPIOId.GPIOUnknown },
		{ 15, GPIOId.GPIO22 },
		{ 16, GPIOId.GPIO23 },
		{ 17, GPIOId.GPIOUnknown },
		{ 18, GPIOId.GPIO24 },
		{ 19, GPIOId.GPIO10 },
		{ 20, GPIOId.GPIOUnknown },
		{ 21, GPIOId.GPIO09 },
		{ 22, GPIOId.GPIO25 },
		{ 23, GPIOId.GPIO11 },
		{ 24, GPIOId.GPIO08 },
		{ 25, GPIOId.GPIOUnknown },	
		{ 26, GPIOId.GPIOUnknown },
		{ 27, GPIOId.GPIOUnknown },
		{ 28, GPIOId.GPIOUnknown },
		{ 29, GPIOId.GPIOUnknown },
		{ 30, GPIOId.GPIOUnknown },
		{ 31, GPIOId.GPIOUnknown },
		{ 32, GPIOId.GPIOUnknown },
		{ 33, GPIOId.GPIOUnknown },
		{ 34, GPIOId.GPIOUnknown },
		{ 35, GPIOId.GPIOUnknown },
		{ 36, GPIOId.GPIOUnknown },
		{ 37, GPIOId.GPIOUnknown },
		{ 38, GPIOId.GPIOUnknown },
		{ 39, GPIOId.GPIOUnknown },
		{ 40, GPIOId.GPIOUnknown },
	};


Two static methods help you get the appropriate values out of the dictionaries depending on what you are looking for (pin number or GPIO id).

	public static int GetPinNumber (GPIOId gpioNumber)
	{
		return GPIOToPin [gpioNumber];
	}
	public static GPIOId GetGPIOId(int pin)
	{
		if (pin > 0 && pin <= 40)
		{
			return PinToGPIO [pin];
		}
		else
		{
			throw new ArgumentOutOfRangeException ("pin", string.Format ("Invalid pin {0}. Please enter value between 1 and 40 (both inclusive).", pin));
		}
	}
}

The Driver (GPIOManager.cs)

Overview of communication mechanism

A GPIO pin, like any digital connection, can represent either 0 or 1. ‘Talking’, ‘connecting’, ‘sending a signal’ or ‘communicating’ with a GPIO port simply means you are either reading values (0 or 1) or writing values (0 or 1). Since Linux represents the I/O ports as streams, you can use any file-system-based API and write your output to the appropriate files in specific folders.

For GPIO communication on the Pi, the specific path is as follows:

/sys/class/gpio/gpio{pin id}/value

Here {pin id} is the GPIO pin number as specified in the GPIOId enum above

This destination can be used to write “0” or “1” for sending low or high signals to the pin.

To read, you use the same destination, but instead of writing to it you read from it via the file system.

Before you can communicate with a particular pin you need to make sure no one else is using it, so you first ‘reserve’ the pin and then set the ‘direction’ of communication. Again, all this is done by writing appropriate values to the relevant files.
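Before looking at the library code, here is a minimal, self-contained sketch (illustrative only, not the library itself) of that raw sequence for GPIO 17, assuming the standard /sys/class/gpio interface described above:

using System;
using System.IO;

class RawGpioSketch
{
	const string GpioRoot = "/sys/class/gpio/";

	static void Main ()
	{
		File.WriteAllText (GpioRoot + "export", "17");                // reserve the pin
		File.WriteAllText (GpioRoot + "gpio17/direction", "out");     // set the direction of communication
		File.WriteAllText (GpioRoot + "gpio17/value", "0");           // drive the pin low (write "1" for high)
		string value = File.ReadAllText (GpioRoot + "gpio17/value");  // read the current state back
		Console.WriteLine ("GPIO 17 value: " + value);
		File.WriteAllText (GpioRoot + "unexport", "17");              // release the pin
	}
}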

So the API that we are writing has the following calls:

1. Select (Reserve) a pin

2. Write to pin

3. Read from pin

4. Release pin

The actual code

I’ll break up the actual code into the above mentioned calls. You can refer to the entire thing together on Github.

The GPIOManager has a readonly GPIO_ROOT_DIR field that can be passed in via the constructor if you want to mock the Pi and run the manager when the actual GPIO pins are not available.

The Manager has an internal list of Pins that have been selected and hence are busy. Note the Manager is not a singleton so this list may not be the single source of truth on the Pi.

You call the SelectPin method and provide the GPIOId identifying the pin that needs to be selected.

If successful, the CurrentPin property on GPIOManager is set to the pin you requested; otherwise it throws an exception.

The private ReservePin function is what actually selects the pin. All it does is write the value of the selected pin to the ‘export’ stream.
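As a rough sketch of what that might look like based on the description (the actual implementation is in the GitHub repository and may differ in detail):

public void SelectPin (GPIOId pin)
{
	if (_busyPins.ContainsKey (pin) && _busyPins [pin])
	{
		throw new InvalidOperationException ("Pin is already selected: " + pin.ToString ("D"));
	}
	ReservePin (pin);
	_busyPins [pin] = true;
	CurrentPin = pin;
}

private void ReservePin (GPIOId pin)
{
	// Writing the pin number to the 'export' stream asks the kernel to expose /sys/class/gpio/gpio{pin}
	File.WriteAllText (GPIO_ROOT_DIR + "export", pin.ToString ("D"));
}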

Once you have Selected a pin you can write to the pin using the WriteToPin call.

public bool WriteToPin (GPIOPinState state)
{
	try
	{
		File.WriteAllText (String.Format ("{0}gpio{1}/direction", GPIO_ROOT_DIR, CurrentPin.ToString ("D")), GPIOPinDirection.Out);
		File.WriteAllText (String.Format ("{0}gpio{1}/value", GPIO_ROOT_DIR, CurrentPin.ToString ("D")), state.ToString ("D"));
		return true;
	}
	catch (Exception ex)
	{
		Console.WriteLine ("Failed to WriteToPin: " + CurrentPin.ToString ("D") + " " + ex.Message + "\n" + ex.StackTrace);
	}
	return false;
}

As you can see, the WriteToPin call essentially uses the File.WriteAllText helper, first setting the direction by writing to the pin’s ‘direction’ stream,

and then writing the actual value (0 or 1) to the pin’s ‘value’ stream.

Similarly if you are reading from the GPIOPin, after you have selected the pin you use the ReadFromPin API.

public GPIOPinState ReadFromPin (GPIOId pin)
{
	GPIOPinState currentState = GPIOPinState.Unknown;
	try
	{
		string state = File.ReadAllText (String.Format ("{0}gpio{1}/value", GPIO_ROOT_DIR, pin.ToString ("D")));
		currentState = (state.Trim () == "1" ? GPIOPinState.High : GPIOPinState.Low);
	}
	catch (Exception ex)
	{
		Console.WriteLine ("Failed to ReadFromPin: " + pin.ToString ("D") + " " + ex.Message + "\n" + ex.StackTrace);
	}
	return currentState;
}

The ReadFromPin API again reads from the pin’s ‘value’ stream; the value is either “0” or “1”, and the method returns the corresponding GPIOPinState.

Once the read/write operation is done, you release the pin by calling the ReleasePin API, which needs the GPIOId of the pin you want to release. There is also a helper overload that releases the currently selected pin.

public bool ReleasePin (GPIOId pin)
{
	try
	{
		File.WriteAllText (GPIO_ROOT_DIR + "unexport", pin.ToString ("D"));
		_busyPins [pin] = false;
		CurrentPin = GPIOId.GPIOUnknown;
		return true;
	}
	catch (Exception ex)
	{
		Console.WriteLine ("Failed to ReleasePin: " + pin.ToString ("D") + " " + ex.Message + "\n" + ex.StackTrace);
	}
	return false;
}
public bool ReleasePin ()
{
	return ReleasePin (CurrentPin);
}

That completes our GPIOManager API. Let’s write a sample that uses it.

A Sample Console Application

I added a simple console project to the mix and wrote the following in the Program.cs’ main function.

All it does is send a Low on GPIO pins 17 and 22 and wait for a ‘return’ on the console. Once you hit return, it cleans up by releasing the pins and exits.

namespace GpioCs
{
	class MainClass
	{
		public static void Main (string[] args)
		{
			try
			{
				GPIOManager gpioManager = new GPIOManager();
				gpioManager.SelectPin(GPIOId.GPIO17);
				gpioManager.WriteToPin(GPIOPinState.Low);
				Console.ReadLine();
				GPIOPinState state = gpioManager.ReadFromPin(gpioManager.CurrentPin);
				Console.WriteLine("Current Pin 17: " + state);

				gpioManager.SelectPin(GPIOId.GPIO22);
				gpioManager.WriteToPin(GPIOPinState.Low);
				Console.ReadLine();
				GPIOPinState state22 = gpioManager.ReadFromPin(gpioManager.CurrentPin);
				Console.WriteLine("Current Pin 22: " + state22);


				Console.WriteLine("Press enter to close!");
				Console.ReadLine();
				gpioManager.ReleasePin(GPIOId.GPIO17);
				gpioManager.ReleasePin(GPIOId.GPIO22);
				Console.WriteLine ("Completed without errors");
			}
			catch (Exception ex)
			{
				Console.WriteLine(ex);
			}
		}
	}
}

In my case this flips the first and second relays, so even if you don’t have anything connected to the relays you’ll hear them ‘click’ on and off. The on-board LED indicator for each relay will light up as well.

Conclusion

With this simple code we have opened up a plethora of opportunities. The next step is to bundle the code inside a service and respond to web requests. My idea is to make the service a SignalR client that connects to a CnC (command and control) server on the web/cloud. The CnC server will have a web interface allowing you to switch each relay on or off. Once you connect the relays to the appropriate electrical devices, you are good to control those devices from the web.

Code on Github.


Part 7: Sharing Data in Knockout Components (2/2) – Events and Messages

In my previous articles I’ve shown how to build KO components that start off with a data source of their own and are pretty self-contained with respect to interactions with the rest of the page. But in the first part of this article we built a tree control that I am planning to use as an index page, so an action like clicking a link needs to affect another part of the application/web page.

The beauty (and often the bane) of JavaScript is that there are multiple ways to do this. You could drop down to core JavaScript event handling or jQuery event handling quite easily. However, the beautiful View–ViewModel separation that you have created using KO components would be completely ruined if you had to know the ID of the UI element to hook an event handler in another component, or in the parent page for that matter.

The solution? A lightweight message-passing system that accepts subscriptions and publishes events to everyone who has subscribed.

Message passing whaa…

Did you just wince at the sound of ‘message-passing system’? Fear not, it’s not rocket science; in fact it’s pretty simple. Take the two scenarios below.

Scenario 1

The good ol’ fashioned jQuery way of handling events: you take a document element and attach an event handler function to it. When the event occurs, the attached function is called. You can do this anywhere in your JavaScript code; the only requirement is that the element referred to by the id is available in the HTML DOM.

Here, referencing the element by its id breaks the view–view-model separation, because the view model has explicit knowledge of the view’s elements. This is hard coupling and should be avoided.

Scenario 2

Instead of attaching our event handler code directly, we register our function in a central location – let’s call it a ‘registry’ for now – under a unique string key.

When the particular event happens, we tell the registry that a ‘named’ event has occurred and that all subscribers should be notified.

The registry takes the name and looks up whether any functions were registered against it; if so, those functions are called.

Thus, by using a central location for registering event names and their handlers, we have decoupled the view and the view model. The view raises an event, the view model handles it via KO’s event-handling mechanism, and then tells the central registry to dispatch a named event to all the subscribers.

The subscribers need not be part of this view model; they could be part of any view model on the page. They register to listen for the ‘named event’ by providing an event handler function. This is the key: no ids need to be shared across view models.

Once the named event occurs the event handler functions are called.

Using AmplifyJS

Now that we’ve got an idea of how we can decouple the UI from the view model and still pass data around components, let’s use AmplifyJS as our ‘event broker’ and see what it takes to implement it.

Amplify was created by the team at http://appendto.com/team and is made available under dual MIT and GPL open-source licenses.

Installing AmplifyJS

Installing AmplifyJS is easy. Use the NuGet Package Manager Console and install the package using the following command:

install-package AmplifyJS

This will create a Scripts/amplify folder and put each Amplify component in a separate file there. It will also place amplify.js and amplify.min.js in the Scripts folder. Since we’ll be using the entire library, we’ll delete the separate component files and move the amplify.* files from Scripts to Scripts/amplify.

Since we’ll need Amplify available globally to broker all messages passed to and from components, we’ll set it up in the RequireJS configuration (require.config.js) as highlighted below:

var require = {
    baseUrl: "/",
    paths: {
        "bootstrap": "Scripts/bootstrap/bootstrap",
        "historyjs": "Scripts/history/native.history",
        "crossroads": "Scripts/crossroads/crossroads",
        "jquery": "Scripts/jquery/jquery-1.9.0",
        "knockout": "Scripts/knockout/knockout-3.2.0beta.debug",
        "knockout-projections": "Scripts/knockout/knockout-projections.min",
        "signals": "Scripts/crossroads/signals",
        "text": "Scripts/require/text",
        "app": "app/app",
        "underscore": "Scripts/underscore/underscore",
        "amplify": "Scripts/amplify/amplify.min",
    },
    shim: {
        "bootstrap": {
            deps: ["jquery"]
        }
    }
}

Updating tree-node to handle click event

We update the tree-node view to wrap the text span in an anchor, and attach a click handler

<ul class="nav nav-stacked" style="padding-left:10px">
    <li class="nav list-group-item-heading selected">
        <div class="row">
            <div class="col-sm-1 hidden-xs">
                <!-- ko if: nodes().length > 0 -->
                <!-- ko if: expanded() -->
                <a data-bind="click: changeState" role="button" href="">
                    <i class="glyphicon-minus"></i>
                </a>
                <!-- /ko -->
                <!-- ko ifnot: expanded() -->
                <a data-bind="click: changeState" role="button" href="">
                    <i class="glyphicon-plus"></i>
                </a>
                <!-- /ko -->
                <!-- /ko -->
            </div>
            <div class="col-sm-5">
                <a href="" data-bind="click: nodeClicked">
                    <span data-bind="text: title"></span>
                </a>
            </div>
        </div>
    </li>
    <!-- ko if: nodes().length > 0 && expanded() === true -->
    <!-- ko foreach: nodes -->
    <li>
        <tree-node params="node: $data"></tree-node>
    </li>
    <!-- /ko -->
    <!-- /ko -->
</ul>

In the viewModel we handle the click event and, for now, simply raise an alert.

The following snippet handles the click event:

self.nodeClicked = function (currentNode) {
    currentNode.expanded(!currentNode.expanded());
    alert("hello " + currentNode.title());
}

First up, it simply raises an alert that says hello with the name of the node. We have also set up the handler to flip the node’s expanded flag.

If we run the application now and click a node, we’ll see the popup.

image

So far so good. The node has directly responded to the click event.

Raising Event using Amplify

In tree-node.js’ nodeClicked function we can ‘raise’ an event, that is, pass a message to our broker saying the click has happened. To do this we first add a reference to amplify in our module. Since we have registered it globally, all we have to do is use its alias:

define(["knockout", "text!./tree-node.html", "amplify"], function (ko, treeNodeTemplate) {

Next we update the nodeClicked function as follows:

self.nodeClicked = function (currentNode) {
    currentNode.expanded(!currentNode.expanded());
    var nodeName = currentNode.title();
    amplify.publish("CurrentNode-NodeClicked", nodeName);
}

As you can see, the call to ‘raise’ or ‘publish’ the event via Amplify is pretty simple. All we have done is specify a string key and then pass along the value we want the event receiver to get, which in this case is the node’s title. We could have passed the entire node if we wanted.
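
For example, if the subscriber needed more than the title, we could publish an object instead (a hypothetical payload shape, not what the sample above does):

// Hypothetical richer payload; subscribers receive the whole object
amplify.publish("CurrentNode-NodeClicked", {
    title: currentNode.title(),
    expanded: currentNode.expanded()
});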

Note: I have removed the alert from the nodeClicked handler. Now we need another component to subscribe to this event and handle it when it occurs.

The ‘Content Pane’

The node click is supposed to be handled by a ‘Content Pane’ component that pulls content from another source. For brevity, I’ll ‘re-use’ the greeter component. We update the greeter component such that it handles the click event and updates the greeting.

First we add a reference to Amplify:

define(["knockout", "text!./greeting.html", "amplify"], function (ko, greeterTemplate) {

Next we add a ‘subscription’ for the event and update the greeting text

    function greeterViewModel(params) {
        var self = this;
        self.greeting = ko.observable(params.name);
        self.date = ko.observable(new Date());
        amplify.subscribe("CurrentNode-NodeClicked", function (value) {
            self.greeting(value);
        });
        return self;
    }

As we can see, subscribing to the ‘event’ is as simple as

– Specifying the same key that we published it with,

– Providing a function that will be called when the event happens.

In the function we have changed the greeting property of the viewModel.
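
One caveat worth noting (not part of the original sample): Amplify also exposes amplify.unsubscribe(topic, callback), and Knockout calls a dispose() function on a component’s viewmodel when the component is torn down. If the greeter could ever be destroyed and recreated, a sketch like the following would avoid leaking subscriptions:

function greeterViewModel(params) {
    var self = this;
    self.greeting = ko.observable(params.name);
    self.date = ko.observable(new Date());

    // keep a named reference so we can unsubscribe later
    self.onNodeClicked = function (value) {
        self.greeting(value);
    };
    amplify.subscribe("CurrentNode-NodeClicked", self.onNodeClicked);

    // called by Knockout when the component is disposed
    self.dispose = function () {
        amplify.unsubscribe("CurrentNode-NodeClicked", self.onNodeClicked);
    };
    return self;
}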

Now we add the greeter component to the docs.html page:

<div class="container">
    <h1>SilkThread Documentation</h1>
    <p>
        This page serves as the root for SilkThread documentation.
        It is itself built with SilkThread and we'll see how multiple
        web-components can be put together to create a neat App.
    </p>
    <div class="row">
        <div class="col-lg-3">
            <tree params="data: data"></tree>
        </div>
        <div class="col-lg-9">
            <greeter></greeter>
        </div>
    </div>
</div>

Demo Time

We are all set. When we run the app initially, we see this:

image

When we click on Node 0, the greeting changes to:

image

Also note that the node has collapsed because of the code we wrote to toggle the expanded state.

Conclusion

This lightweight publish-subscribe messaging system is an established pattern, commonly referred to as Pub-Sub. It is very useful for decoupling interactions, and it isn’t limited to JavaScript; the same pattern is used in highly scalable backend systems.

SignalR is another example of a Pub-Sub system: it maintains subscribers on the server and publishes actions to all connected clients, spanning both server and client.

So, as you can see, Pub-Sub is a versatile software development pattern, and today we leveraged it on the client side.

That concludes this sub-series of my overall SilkThread series. Next we will see how we can use Amplify to make HTTP requests, sending data to the server and retrieving data back.

The code, as always, is on GitHub. This specific branch is saved as Part7-2.


Part 7: Sharing data in Knockout Components (1/2) – Building a Tree Component

Continuing my account of using KO Components and the building of the Silkthread SPA, today I am going to build another component – a Tree view component.

Note: All my articles on KO Components are now categorized and you can bookmark this link for easy reference. All future articles will automatically appear in that list.

No one really uses a TreeView on a public-facing website today, but the underlying hierarchical data representation is still very much valid; the TreeView has transformed into folding panels, cascading dropdowns, CSS menus or, in the case of mobile devices, cascading sets of lists.

I came across the need for the TreeView because I wanted to build a documentation index for the SilkThread site. So this is dog-fooding the SilkThread SPA itself. Today we’ll explore the following things:

  1. How to layout the TreeView, creating the Tree and TreeNode components (1/2).
  2. How to pass data into nested Components (we have done this in the previous article also) (1/2)
  3. Styling the TreeView (1/2)
  4. How to raise and handle events using Amplify.js (2/2)

Creating the Tree and the TreeNode components

I am starting from the v6 label of my BuildingSpaWithKO project on GitHub.

Adding a new Page

1. First up, we’ll add a new ‘page component’ called docs.

Adding a new Page Component

a. Add new folder under App/pages called docs
b. Add a docs.js and docs.html file in the docs folder. Remove all the boilerplate code/markup that Visual Studio adds and set up the component as follows:

define(['knockout', 'text!./docs.html'], function (ko, docsTemplate) {
    function docsViewModel(params) {
        var self = this;
        self.title = ko.observable('SilkThread - Yet another SPA Framework');
        self.data ={
            title: 'Documentation Home',
            nodes: []
        };
        return self;
    }
    return { viewModel: docsViewModel, template: docsTemplate };
});

The component is pretty self-explanatory; it has two properties, title and data. The title property has default text but is an observable, so if we bind it to an HTML element it will be updated in the UI should we decide to change the title at runtime.
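
For instance, a binding like this markup sketch (not part of the docs page shown later, just an illustration) would keep a heading in sync with the title observable:

<!-- any change to the title observable updates this heading automatically -->
<h2 data-bind="text: title"></h2>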

The data property’s nodes collection is an empty array at the moment; we will pump mock data into it once our component is ready.

Registering the Docs page

To register our new page we update the app.js by adding the new page definition to it.

app = {
    components: {
        greeter: {
            name: 'greeter',
            template: 'App/components/greeter/greeting'
        },
        tabitem: {
            name: 'tabitem',
            template: 'App/components/tabitem/tabitem'
        },
        tabbedNavigation: {
            name: 'tabbed-navigation',
            template: 'App/components/tabbed-navigation/tabbed-navigation'
        }
    },
    pages: {
        home: {
            name: 'home',
            template: 'App/pages/home/home'
        },
        docs: {
            name: 'docs',
            template: 'App/pages/docs/docs'
        },
        settings: {
            name: 'settings',
            template: 'App/pages/settings/settings'
        }
    }
}

We register the component in startup.js as follows:

ko.components.register(app.pages.docs.name, { require: app.pages.docs.template });

To navigate to the ‘Docs’ page we add a link in the _Layout.cshtml page. As seen below, the href points to the page name we registered above.

                <ul class="nav navbar-nav">
                    <li>
                        <a href="/">Home</a>
                    </li>
                    <li>
                        <a href="settings">Settings</a>
                    </li>
                    <li>
                        <a href="docs">Docs</a>
                    </li>
                </ul>

image

Finally we add the docs route to our router in router.js as follows

    return new Router({
        routes: [
            { url: '/', params: { page: 'home' } },
            { url: 'home', params: { page: 'home' } },
            { url: 'docs', params: { page: 'docs' } },
            { url: 'samples', params: { page: 'samples' } },
            { url: 'settings', params: { page: 'settings' } }
        ]
    });

With the Docs page in place we can get down to the business of creating the actual Tree component.

Building a Tree View using KO Components

A tree component can logically have two sub-parts. First the container and second a Node object that can itself contain a list of Nodes. So we create two components for this purpose:

1. The Tree-Node component
2. The Tree component

The Tree Node Component

We look at the node view model before we look at the container. As is standard with all KO Components, we create a tree-node folder under the App/components folder and add two files: tree-node.html and tree-node.js.

The Node view model (tree-node.js)

define(["knockout", "text!./tree-node.html"], function (ko, treeNodeTemplate) {
    function treeNodeViewModel(params) {
        var self = this;
        self.title = ko.observable('Default');
        self.url = ko.observable('/');
        self.nodes = ko.observableArray();
        self.expanded = ko.observable(true);
        if (params.node) {
            self.title(params.node.title);
            if (params.node.expanded != null) {
                self.expanded(params.node.expanded);
            }
            self.nodes().push.apply(self.nodes(), params.node.nodes);
        }

        self.changeState = function () {
            self.expanded(!self.expanded());
        }
        return self;
    }
    return { viewModel: treeNodeViewModel, template: treeNodeTemplate };
});

The treeNodeViewModel has four fields and a function to store the node’s information and react to actions on it (an example of the node object it expects is shown after the list).

title – This is the text that’s displayed on the Node
url – A relative or absolute URL to navigate to when user clicks on the node.
nodes – A collection of more treeNodeViewModels that are child nodes of the current node. Note that the nodes collection essentially makes the view model recursive.
expanded – A property storing the state of the current node: whether its child nodes are shown or hidden. This is applicable only when the nodes collection has at least one element.
changeState() – This function is attached to the click event of the expand/collapse anchor in the view, which shows a [+] or [-] icon depending on the value of the ‘expanded’ property.
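
For reference, this is the kind of plain object the component expects via params.node (illustrative; it mirrors the dummy data we’ll add later):

// illustrative shape of a node passed in as params.node
var sampleNode = {
    title: "Node 0-1",        // text displayed on the node
    expanded: false,          // start collapsed; treated as true when omitted
    nodes: [                  // child nodes of the same shape (recursive)
        { title: "Node 0-1-0" },
        { title: "Node 0-1-1" }
    ]
};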

The Node View (tree-node.html)

The corresponding markup for the Node view is as follows:

<ul class="nav nav-stacked" style="padding-left:10px">
    <li class="nav list-group-item-heading selected">
        <div class="row">
            <div class="col-sm-1 hidden-xs">
                <!-- ko if: nodes().length > 0 -->
                <!-- ko if: expanded() -->
                <a data-bind="click: changeState" role="button" href="">
                    <i class="glyphicon-minus"></i>
                </a>
                <!-- /ko -->
                <!-- ko ifnot: expanded() -->
                <a data-bind="click: changeState" role="button" href="">
                    <i class="glyphicon-plus"></i>
                </a>
                <!-- /ko -->
                <!-- /ko -->
            </div>
            <div class="col-sm-5">
                <span data-bind="text: title"></span>
            </div>
        </div>
    </li>
    <!-- ko if: nodes().length > 0 && expanded() === true -->
    <!-- ko foreach: nodes -->
    <li>
        <tree-node params="node: $data"></tree-node>
    </li>
    <!-- /ko -->
    <!-- /ko -->
</ul>

 

The tree-node is wrapped in a <ul> just to make use of Bootstrap’s default indentation. If you want you can use any markup. The <div class=”row”> is where things get interesting. We have two columns in the row.

The first column is rendered conditionally. We check if there are elements in the nodes collection. If there is at least one node, we check the expanded() property to see if we should render a [-] or a [+] using an anchor tag. We also attach the anchor’s click event to the changeState function.

The second column is a span that’s bound to the title property of the tree-node viewmodel.

Finally we need to render the child nodes, so again we check whether the nodes collection is empty. If there are one or more nodes we loop through each one and render a tree-node for it, with the current node object provided by KO in the $data variable.

The Tree Component

Now that we have seen what the node looks like, let’s set up the container.

The Tree view model (tree.js)

define([
    "knockout",
    "text!./tree.html"], function (ko, treeTemplate) {
    function treeViewModel(params) {
        var self = this;
        self.title = ko.observable('');
        self.nodes = ko.observableArray([]);

        if (params.data) {
            self.title(params.data.title);
            self.nodes().push.apply(self.nodes(), params.data.nodes);
        }
        return self;
    }
    return { viewModel: treeViewModel, template: treeTemplate };
});

As we can see, the viewModel is pretty simple; it has two properties:
title – A string that can be shown on top of the TreeView
nodes – A collection of node objects.

The Tree view (tree.html)

The markup for the Tree component simply binds the nodes collection from the viewModel into an unordered list.

<ul class="nav nav-stacked">
    <li class="nav list-group-item-heading" data-bind="text: title">
    	<!-- ko if: nodes().length > 0 -->
    		<li>
        	<!-- ko foreach: nodes -->
        		<tree-node params="node: $data"></tree-node>
        	<!-- /ko -->
    	<!-- /ko -->
   	</li>
</ul>

Registering the new components

We go back to the app.js file and add our two new components to its components list.

app = {
    components: {
        greeter: {
            name: 'greeter',
            template: 'App/components/greeter/greeting'
        },
        tabitem: {
            name: 'tabitem',
            template: 'App/components/tabitem/tabitem'
        },
        tabbedNavigation: {
            name: 'tabbed-navigation',
            template: 'App/components/tabbed-navigation/tabbed-navigation'
        },
        treeNode: {
            name: 'tree-node',
            template: 'App/components/tree-node/tree-node'
        },
        tree: {
            name: 'tree',
            template: 'App/components/tree/tree'
        }
    },
    pages: {
        home: {
            name: 'home',
            template: 'App/pages/home/home'
        },
        docs: {
            name: 'docs',
            template: 'App/pages/docs/docs'
        },
        settings: {
            name: 'settings',
            template: 'App/pages/settings/settings'
        }
    }
}
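
The entries in app.js only describe the components; just like the docs page earlier, the two new components also need to be registered with Knockout in startup.js, along these lines (mirroring the registration call shown above):

ko.components.register(app.components.treeNode.name, { require: app.components.treeNode.template });
ko.components.register(app.components.tree.name, { require: app.components.tree.template });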

 

Adding some dummy data

Now that we’ve got the components in place, let’s add some dummy data to bring up the tree. We go back to the docs.js file and update the data source.

self.data = {
    title: "Documentation Home",
    nodes: [
        {
            title: "Node 0",
            nodes: [
                {
                    title: "Node 0-1",
                    expanded: false,
                    nodes: [
                        { title: "Node 0-1-0" },
                        { title: "Node 0-1-1" }
                    ]
                },
                { title: "Node 0-2" }
            ]
        },
        {
            title: "Node 1",
            expanded: false,
            nodes: [
                { title: "Node 1-1" },
                { title: "Node 1-2" }
            ]
        },
        { title: "Node 2" }
    ]
};

Demo Time

Now that we are set up with the new components, let’s run the app and see how it works.

Navigating to the docs page shows us the following:

image

Notice ‘Node 0-1’ and ‘Node 1’ are not expanded by default but ‘Node 0’ is. This is because these nodes have the expanded property explicitly set to false. By default expanded is assumed true.

So we have a functional tree layout. However, clicking on the nodes doesn’t do anything because their hrefs are empty and there is no click handler to respond to them.

Initially I wanted to introduce AmplifyJS in this article itself, but this one is already 1500+ words and is straining everyone’s patience. So I am going to split the article up and introduce AmplifyJS in the next one, where we’ll see how we can use it to handle events and share data across components.
