Thursday, July 7, 2016

Learn Something New! My introduction to Data Science.

There are so many interesting things to learn and not nearly enough time.

While I am an able technologist and experienced software engineer, I am not a data scientist.  I want to learn more about that arena.  Much more.

I also want to explore the world of massive open online courses (MOOCs).

A professor I know who is an expert in data driven analysis (among other things) recommended the Data Science Specialization taught by a team at Johns Hopkins University and offered through Coursera.  That was good enough for me.  I am following the recommended course sequence.  While I am at the front end of the program, so far so good.

iTinker.net is all about exploring technology and implementing small but often complex systems in my home lab as a means to fuel real world work (and not just technology focused work).  The MOOC universe looks like it will become an important tool in my tool kit.

If you are interested in Data Science, check out the Coursera course.  More importantly, if you want to learn more about a topic see if there's a relevant MOOC out there.

Thursday, May 5, 2016

Hardware for Virtualization Home Lab

I started my VMware home lab in late 2012 by building a server using Chris Wahl's blueprint found here.

I'm now in the process of upgrading my servers to machines that allow for more memory.  With 8 cores (16 threads), 4 on-board NICs (2 x 1 GbE and 2 x 10 GbE), a dedicated IPMI port and up to 128 GB of memory in a small, quiet, mini tower case, the Supermicro Superserver 5028D-TN4T is tailor made for VMware home labs.  More details on that system can be found here.  I bought mine from WiredZone - their prices are good and customer service terrific.  I'm partial to their Bundle 2 package with an additional 64 GB of memory tacked on.

If you are looking for a solid home server for your VMware lab but aren't ready to purchase the Supermicro system described above, you might consider that server I built in 2012, which I'm now selling on eBay.

Friday, April 8, 2016

Getting Out Of The Stone Age -- I Finally Discovered WordPress

I am not a web site designer.  That said, I have set up a few simple web sites over the years.  Nothing fancy.  I learned enough HTML to hand code what I needed.  Those old sites are now hopelessly out of date.

Two weeks ago I decided that it was time to rebuild one of my old sites.  I was not in the mood to acquire and then learn Adobe Dreamweaver.  I suspect that if I were a web site design professional that's the tool I'd use.  But I'm not, so I don't.

My old tool of choice, Microsoft FrontPage, was discontinued over a decade ago. While Microsoft Expression is available without charge, it too appears to require more than a little bit of work to master.

I enjoy using Blogger for this site, but I don't get the sense that it's well suited for more general purpose web sites.

I know that WordPress is considered by many to be the go-to tool for blogging sites.  What I didn't realize is that it's also geared for developing web sites that are much more than just blogs.

I'm astonished at how easy it was for me to create a much more modern rendition of my old site.  I can't believe I hadn't figured this out years ago.

Oh, if you want to see what I created, take a look at http://www.usml.net

I have lots of ideas for things I will do with that site going forward.

Apologies in advance to any professional web site designers out there who will likely scoff at that little site.  If you have any suggestions for ways to make that site better, don't hesitate to comment.

Perhaps I will migrate this blog to WordPress too. . .

Thursday, March 10, 2016

Network Level Threat Protection For Home Networks

Not too long ago an elderly relative asked me about an email message she received. Even though the message purported to be from Apple, she was leery about clicking on the link and reluctant to supply whatever information the sender sought.  She said she had forwarded the message to me for my review. I hadn’t received anything at all.  She attempted to send it again.  I received nothing.

While that email message purported to be from Apple, it was not.  It was a thinly veiled phishing attempt.  Kudos to my relative for not falling for the trick.

We’ve had “that talk” more than once.

So why didn't the forwarded message make its way to me?  It's because I have deployed a variety of network level threat management tools – and one of them blocked it.  In this case, one that leverages technology to identify likely phishing attempts.

I realized a long time ago that there was no way I'd be able to take precautions to protect each device that might connect to my network.  So I supplement reasonable device-specific antivirus tools and firewalls with network level intrusion prevention, phish blocking, antivirus and content filtering tools.

I’ve used these so-called "unified threat management systems" on my home network for many years – starting with when my children were very young. While they are not typically deployed in home systems, there’s no reason why that needs to be the case.  There are good choices for home use.

My current favorite is Untangle.  Untangle can be deployed on a small, silent, inexpensive appliance with a variety of free and licensed modules.  You can put the software on your own hardware or purchase a purpose built appliance from Untangle or other vendors.  I bought mine from Nexgen Appliances.  Right now both Untangle and Nexgen offer appliances that are ideal for home networks.  I will not hesitate to do business with Untangle or Nexgen.  It all comes down to what offering makes the most sense at the time of purchase.  Untangle employees actively participate in online forums and the user community is very supportive. And I can't say enough great things about my experience as a Nexgen Appliances customer.  When I've had questions, Nexgen has responded in the most helpful way I can think of.  It's an embarrassment of riches.

The Untangle free configuration is very nice.  Nevertheless, the licensed modules are a step up.  Untangle recently started to offer a home use license for $5 a month, with discounts for longer subscriptions.  Home users can get the benefit of the full suite of modules offered by Untangle for that very low fee.

So now, my elderly relative will be getting an Untangle.  And with any luck, so will other members of my family.

Is this overkill for a home network?  Not in my book.

Sunday, March 6, 2016

How To Geo-Tag All Your Digital Photos

Last year I discovered how nice it is to have digital pictures tagged with GPS coordinates.  I was experimenting with an iPhone camera and realized that the images were tagged with locations – something I was aware of but had never really thought about.  I liked it.  

If you have an iPhone and want to learn more, just read up on Apple’s geo-tagging feature -- and if this sort of thing bothers you, learn how to disable geo-tagging.

Following the iPhone camera experiments I wanted to be able to get GPS coordinates added to digital pictures shot with cameras that don’t do so automatically.  

While I am happy to use a smart phone for a quick snapshot, when I’m serious about my photos I’m more likely to reach for one of my digital SLRs or perhaps a smaller enthusiast oriented digital camera.  Don’t get me wrong, you can produce wonderful images with smartphone cameras, but with fast moving subjects, or in low light, or when you really want to fiddle with exposure or focus settings to get a particular image with just the right depth of field, you need better tools.

It turns out that relatively few cameras offer the ability to capture GPS coordinates automatically.  In some cases, you can tack on that feature with an accessory built for your camera.  There are also some really cool Bluetooth add-ons for a few higher end DSLRs that let you tether your camera to a GPS tracking device so that a stream of GPS information is fed to your camera as you shoot and the images get tagged that way.  Even though either of those approaches would work for my DSLRs, neither was going to help me with my other digital cameras. Moreover, I wanted a cheap way to experiment and wasn't inclined to spend what either of these alternatives would cost.

Then I discovered a very inexpensive and flexible way to solve the problem for any digital camera: it’s called gps4cam.  I’m hooked.

While you are shooting pictures you also run a GPS tracking app on your smart phone (iPhone and Android versions are available).  Then, when you are done shooting, you grab the tracking information from the phone and run your pictures through a desktop application that uses the tracking information from your phone to tag your photos.
  
One really nifty part of their system is the way you synchronize your camera to the gps4cam app.  You take a picture of an image displayed on your phone by the app and their software uses the embedded information to figure out how far off your camera clock is from the clock used to generate the GPS information. There are a number of different ways to use the app, all of which are very simple.
  
For the price of a few dollars to buy a smartphone app you can add GPS tagging to all your digital photos. If this is something you are curious about there’s no reason to avoid experimenting.

Saturday, March 5, 2016

The iTinker Network

I call my home network the iTinker Network.  I will take some time soon to discuss its evolution and future, as well as some of the things I've built with it.  I'm not particularly good at creating network drawings and recently stumbled on a tool (yEd) that allowed me to quickly create a picture without too much of a fuss.  Here's a current snapshot of the network.

Sunday, February 21, 2016

Untangle VPN Part 2 -- Amazon Web Services Software VPN Connection to an Untangle Firewall Using OpenVPN

I recently managed to get an Amazon Web Services (AWS) hardware VPN connection running between a Virtual Private Cloud (VPC) and a home lab with an Untangle firewall via the Untangle IPSec module.  I described the necessary steps here.

After getting the connection running I decided that I wanted to try a lower cost alternative, a software connection between an instance I’d deploy in a VPC and my existing physical network.  I don’t require the extra bandwidth or higher availability that the AWS hardware VPN connection affords out of the box.

While the IPSec connection I had configured was working well in general, there was one problem I struggled to solve.  I use OpenVPN to permit remote access to my network.  The Untangle OpenVPN module makes using OpenVPN for the so-called “road warrior” scenario very easy.

I found that OpenVPN clients were unable to traverse the IPSec tunnel to connect with hosts on the remote end of the network. I believe that this was nothing more than a routing or firewall problem between the relevant networks, however, it was one I struggled to solve.

My limited review of IPSec vs. OpenVPN discussions left me with the sense that OpenVPN is considered more secure and, at least by some, more efficient than IPSec, whereas IPSec is more established and better supported, generally speaking.

Several people had told me that it would be challenging to implement an OpenVPN site-to-site connection between the Untangle firewall and some other OpenVPN implementation.  As I thought about how easy it was to implement the OpenVPN point-to-site connections it occurred to me that a network-to-network connection shouldn’t be that tough.  After all, a point-to-site connection can become a site-to-site connection with not much more than the addition of a static route on one side.

I assumed that if I could limit myself to the Untangle OpenVPN module on the physical network I’d stand a better chance of having my remote clients able to traverse the tunnel to get to the other side of the site-to-site connection.  As for the AWS side, I considered extending one of the special purpose AWS Linux NAT instances by adding an OpenVPN client or using the OpenVPN implementation already included as part of a VyOS instance.  As I describe here, I recently chose to deploy a VyOS instance to provide NAT between the public and private subnets that reside in my VPC.  Unfortunately, the documentation for VyOS is somewhat lacking and I struggled to find the kind of reference material that made me confident I’d configure the VyOS OpenVPN components properly without undue difficulty.  For that reason, I elected to deploy an Amazon Linux NAT instance for the OpenVPN client.
  
I could have deployed a full OpenVPN server in the VPC but since I already had a perfectly good OpenVPN server running on the Untangle firewall I didn’t see a need to deploy yet another server.  I chose the AWS NAT instance because I knew it was already slimmed down to provide nothing more than NAT, which meant that port forwarding and the few other things you’d like to see in a firewall/router were already in place.  I’d only need to add the OpenVPN client. It wouldn’t have been too difficult to start from virtually any standard linux distribution.

Step 1 – Create A Remote Network Entry in the Untangle OpenVPN Module

The first task is to create a remote network client entry in the Untangle OpenVPN module.  (I assume that you have a working knowledge of the Untangle firewall and that you are also familiar with the OpenVPN module and how to use it to create a connection with a remote host or mobile device.  If you aren’t, there are ample descriptions available.)

Go to the Untangle OpenVPN module Server tab and, if you’ve not already done so, enter a site name for your VPN.  

Check the box to enable the server.  

The OpenVPN server allocates addresses in its own space that’s separate from your other network spaces.  Make sure that the address space indicated in the box doesn’t conflict with an address space you are using.  

You will also need to decide if you want to NAT the LAN-bound OpenVPN traffic to a local address.  Your implementation will be simpler if you check the box.

Here’s what that tab looks like on my system after having added the entry for the AWS-VPC.

Press the button to add a new remote client.  Choose to add a Network rather than an Individual Client.  Pick a name for the entry and add the CIDR specification for the remote network.

Click the “Done” button and then the “Apply” button.  Click on the “Download Client” button for the client you just created.  The system will generate a few files that you can use depending on what you will be using to connect to the Untangle server.  In this case, you should select the link to download the configuration zip file for other OSs.

Hang on to the zip file.  You will need it to configure the OpenVPN client.

Step 2 – Export Networks

The next task is to identify the networks that your OpenVPN clients should be able to access.  In my case, I’ve got the local LAN attached to the Untangle appliance, the AWS VPC LAN, and the LAN that consists of the various other remote clients that may be connected to the OpenVPN server at any given time. Set up your list of exported networks accordingly and click the Apply button.

Step 3 – Deploy the Linux Instance and Add the OpenVPN Client

Deploy a linux instance into your VPC in any way that suits you.  

I chose one of the special purpose linux NAT instances supplied by Amazon.  By doing so I knew that I was getting an instance with port forwarding enabled, which is important.  The instructions for deploying a NAT instance are found here.  Do not forget to disable source/destination checking as described in those instructions.

You will want the instance to have a public IP address so make sure to assign an Elastic IP too.
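
If you prefer the command line to the console, both chores can be handled with the AWS CLI.  This is only a sketch; the instance ID and allocation ID below are placeholders for your own values.

$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0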

Update the instance software and install the openvpn client with the following commands:

$ sudo yum update
$ sudo yum upgrade
$ sudo yum install openvpn

Step 4 – Extract and Place the Configuration Files

Use your favorite zip file extraction tool to extract the files from the zip archive you downloaded from the Untangle OpenVPN server and copy them to the /etc/openvpn directory on the instance you created on AWS.
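
From a shell on the instance, the process might look something like this (the zip and directory names are placeholders for whatever your download is actually called, and you may need to install unzip with yum first):

$ unzip openvpn-aws-vpc-config.zip
$ sudo cp aws-vpc/* /etc/openvpn/
$ ls /etc/openvpn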

When you are done, the /etc/openvpn directory should contain the client configuration file and its associated certificates and keys (with the file names reflecting whatever you named the client).

Step 5 – Modify VPC Route Tables

Add static routes to the route table for your private AWS subnet to route traffic for the remote networks through your VPN tunnel.  In my case, I added routes for my local LAN and the OpenVPN client subnet.
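
You can add these routes from the console or with the AWS CLI.  Here is a sketch of the CLI version; the route table ID, destination CIDR block and instance ID are placeholders for your own values, with the target being the instance running the OpenVPN client:

$ aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 192.168.1.0/24 --instance-id i-0123456789abcdef0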

Step 6 – Start OpenVPN

The standard openvpn distribution includes scripts to start, stop and reload the openvpn service.

In the following screen capture you can see that initially openvpn is not running and that accordingly there are no tunnel devices.  Then we use the openvpn start command to initiate the openvpn client, at which point a tunnel device (tun0) is created.
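
On an Amazon Linux instance the sequence looks something like this (check the status, start the service, then confirm that the tun0 device exists):

$ sudo service openvpn status
$ sudo service openvpn start
$ ip addr show tun0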

Once you get to this point you should be able to ping between hosts on the two private networks that you have now connected.
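
For example, from a host on my local LAN I can ping an instance in the private AWS subnet (the address below is a placeholder for one of your own private instances):

$ ping -c 3 10.0.1.25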

Step 7 – Start The Remote Client Automatically

Use the chkconfig command to cause the openvpn client to start whenever you boot the AWS instance.
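
On an Amazon Linux instance that amounts to the following, with the second command simply confirming the runlevel settings:

$ sudo chkconfig openvpn on
$ chkconfig --list openvpn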

Please let me know if you find any mistakes in this posting.  If you do, drop me a line and I will update the description.

Thursday, February 18, 2016

Setting Up An Amazon Web Services Hardware VPN Connection to an Untangle Firewall

I have configured an Amazon Web Services hardware VPN connection to the IPSec module of an Untangle firewall.  

Although I am very comfortable with the Untangle firewall in general and have used the OpenVPN module for point to site connections, I’m new to IPsec and Amazon Web Services.  

I was unable to find any place where someone had documented the steps they took to establish this particular connection so I will fill that void now.

Amazon provides a lot of documentation, with varying degrees of granularity, for a number of tasks.  Usually, all of the necessary steps are provided with very clear guidance.  In this case I got tripped up by relying on less than complete, higher level summaries.

I followed the procedures to manually set up the VPN connection contained here in the Amazon Virtual Private Cloud User Guide. To set up a VPN connection, you need to complete the following steps:
  1. Create a Customer Gateway
  2. Create a Virtual Private Gateway
  3. Enable Route Propagation in Your Route Table
  4. Update Your Security Group to Enable Inbound SSH, RDP and ICMP Access
  5. Create a VPN Connection and Configure the Customer Gateway
  6. Launch an Instance Into Your Subnet

These procedures assume that you have a VPC with one or more subnets, and that you have the required network information (see What You Need for a VPN Connection).

As noted in the Amazon documents, you can use the VPC wizard to complete many of these steps.  I opted to do things manually to remove the mystery and to put me in a better position to be able to refine and troubleshoot things rather than needing to routinely resort to a wizard to start from scratch.

As you work through the process you will see that there are two broad kinds of Amazon hardware VPN connections, those that make use of Border Gateway Protocol (BGP) and those that don't.  The Untangle firewall does not include Border Gateway Protocol, so when a particular instruction varies based on whether or not you have BGP, choose the alternative that does not rely on BGP.
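
If you create the VPN connection with the AWS CLI rather than the console, the no-BGP choice shows up as the static routing option.  Here is a sketch, with placeholder gateway IDs:

$ aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0123456789abcdef0 --vpn-gateway-id vgw-0123456789abcdef0 --options StaticRoutesOnly=true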

It's easy to overlook the third step above, enabling route propagation.  I made that mistake, which caused no end of headaches.  Take your time and follow the steps, especially that one.
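
For what it's worth, route propagation can also be enabled from the AWS CLI (again with placeholder IDs):

$ aws ec2 enable-vgw-route-propagation --route-table-id rtb-0123456789abcdef0 --gateway-id vgw-0123456789abcdef0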

The fifth step is where you will get the information you need to configure the Untangle IPsec module.  After you have created your VPN Connection you will need to navigate to that VPN connection and download configuration details. The Untangle firewall is not one of the devices for which specific configuration information is prepared.  You should choose the Generic / Vendor Agnostic alternative as shown below.

Now we shift to the Untangle side of the house.

Go to the IPsec module.

Select the IPsec Options tab and decide if you want the Untangle firewall to process or bypass all IPsec traffic.  

When this checkbox is enabled, traffic from IPsec tunnels will bypass all applications and services on the Untangle server. If you disable the checkbox, traffic from IPsec tunnels will be filtered through all active applications and services.

Select the IPsec Tunnels tab and add a pair of tunnels.  You will add a pair of tunnels because the Amazon hardware VPN provides a pair of tunnels.  The configuration file you downloaded earlier contains all of the configuration information you will need.

Here is what a tunnel configuration page looks like initially:

Here is what my first tunnel configuration page looked like when I completed the configuration:

The second tunnel page is identical except that it references a different public address for the remote IPsec gateway and it has a different shared secret.

Again, the information you need to configure the IPsec Tunnels on Untangle will come from the configuration instructions you downloaded.

If you haven't already done so, enable the IPsec VPN module.  The button should be green and not grey.

If everything is working, one of your tunnels will be active.  Amazon provides a pair of tunnels but you should not expect to see both active in Untangle at any given time.  Your IPsec status tab should look like this:

Now that you have the Untangle configured, you should be able to work with instances launched into your private subnet.  The Tunnel Details tab of your Amazon VPN Connection should look something like this:

And the Static Routes tab should include the remote networks you are connecting to your Amazon VPC network.  In the example below I've got a pair of those networks.

You should have at least one tunnel that's up, and with any luck you've now got a working IPsec connection.

If you spot any obvious mistakes in this summary please let me know and I will revise accordingly.

Deploying a VyOS AWS VPC NAT Instance

The easiest way to provision NAT internet access for a private AWS VPC subnet is to create a NAT gateway.  If you choose to create your VPC with the AWS VPC creation wizard, you can select a scenario that creates one for you automatically.  On the other hand, you can create a NAT gateway from the AWS console too.  The Amazon Virtual Private Cloud User Guide has a section covering NAT gateways with everything you need to know, including a comparison of NAT gateways and NAT instances.

I am going to deploy a NAT instance.  A NAT instance provides more configuration flexibility and for my use case will be less expensive.  On the other hand, it requires manual maintenance, has less bandwidth for the instance type I will use, and, without additional work, doesn't provide for high availability.  Finally, I want to experiment with VyOS so I will use it as the platform for my NAT instance.  I hope to make use of the same instance for IPSec VPN too.

The Amazon Virtual Private Cloud User Guide also has a section covering NAT instances that explains how to deploy a NAT instance using an Amazon Linux Machine Image (AMI) configured to run as a NAT instance.  Even though Amazon has specifically prepared their AMI for use as a NAT instance, I want to use VyOS.  So I followed the Amazon NAT instance instructions but substituted a VyOS AMI for the Amazon Linux pre-configured AMI.

VyOS is a Linux-based operating system for routers and firewalls that allows you to configure all network functions via a single unified command line interface just like in classic hardware routers. It includes static and dynamic routing (OSPF, RIP, and BGP), stateful firewall and NAT, various VPN options (IPsec, OpenVPN, PPTP, L2TP), DHCP server and more. The system is fully open source and extensible.  To learn more about VyOS, visit the VyOS web site.  The VyOS AWS AMI can be found here.

I am not going to replicate the steps outlined in the Amazon VPC User Guide other than to stress three things that are covered there.  
  1. You must disable source/destination checking for your instances as described in the instructions.
  2. If you did not assign a public IP address to your NAT instance during launch, you need to associate an Elastic IP address with it.
  3. Update the main route table to send traffic to the NAT instance.

I had assumed that I needed to build the NAT instance with two network interfaces, one for incoming traffic and another for outgoing traffic.  That's typical.  I couldn't understand why the VyOS device only included a single network interface.  I assumed I'd just tack on a second network interface, attach it to the private subnet as eth1, and then route traffic to the public subnet via eth0.  I took a look at the Amazon pre-configured NAT instance and noticed that it only had one network interface as well.

At that point it dawned on me that since my private and public subnets were both part of the same VPC network, I really only needed one network interface.  In my case, the VPC network space is 10.0.0.0/16, with a public subnet at 10.0.0.0/24 and a private subnet at 10.0.1.0/24.  Given those networks, a single interface can address both incoming and outgoing traffic.  I had never thought about a one network interface NAT.  Usually the inside and outside networks are truly separate networks, so two interfaces are required.  Not here.

In the AWS VPC world you use security groups, which exist outside of instances, and which handle typical firewall concerns.  You use route tables to handle routing concerns.  Given those two facts, the only thing that the VyOS NAT instance needs to do is handle the NAT function itself.

From the VyOS command line:

$ configure
# set nat source rule 100 outbound-interface 'eth0'
# set nat source rule 100 source address '10.0.1.0/24'
# set nat source rule 100 translation address 'masquerade'
# commit
# save
Saving configuration to '/config/config.boot'...
Done
# exit
$

You will need to substitute your private subnet CIDR specification for 10.0.1.0/24 in the example above.

In my case I have been thinking about using 10.0.0.0/16 instead to allow for the possibility that I might create other private subnets down the road, in which case specifying the entire 10.0.0.0/16 network would allow me to use the NAT as configured without making additional changes.  I am not too concerned about the fact that the public subnet at 10.0.0.0/24 would be covered by that rule since I wouldn't be routing through the NAT instance from that subnet in any event.  For now though, I'm going to leave things as set forth above.
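
Should I change my mind, it would be a one line change from VyOS configuration mode, something like this:

# set nat source rule 100 source address '10.0.0.0/16'
# commit
# save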

This is a somewhat lengthy post for what amounts to three lines of configuration code plus one command to get into configuration mode and three to make the changes permanent and then exit configuration mode.

This was a very good learning exercise for me and a splendid introduction to VyOS.

My Introduction to Amazon Web Services

I'm learning more about the nuts and bolts of cloud computing by exploring Amazon Web Services (AWS) and making increasing use of the AWS Free Tier.

My first goal was to use Amazon Glacier to provide off-site backup of all of the data I'm already backing up locally.  The iTinker network includes three Synology network attached storage devices, one dedicated to shared data including a multimedia library, one that is my primary VMware datastore, and a third dedicated to backup.  Synology makes it very easy to back up data to cloud storage providers, including Amazon Glacier.  I've been very happy with this particular backup strategy and my introduction to AWS.

When I first looked at the AWS console I was overwhelmed by the range of services offered.  I still am.

I had been considering upgrading the iTinker.net VMware resources.  I thought about getting more powerful host hardware that would support more datacenter memory and networking capacity in particular.  Then I decided that instead of adding more physical infrastructure I would supplement my network in the cloud.

Amazon's documentation is wonderful.  After reviewing Getting Started with AWS I was off to the races.  AWS virtual computers are called "instances" and the web service that provides the computing capacity is known as the Amazon Elastic Compute Cloud or, more commonly, "EC2".  Instances live in an Amazon Virtual Private Cloud, usually just called a "VPC".

Amazon's documentation and online wizards make it easy to start building a network that can include both public and private computers.  In less than thirty minutes I had built a public facing web server with a blogging tool on a linux platform and then, just to complete the exercise, I did the same thing on a Windows Server 2012 platform.  All of this was well within the constraints of the free tier.

After learning about Amazon security groups (which serve as a way to implement traditional firewall services) and route tables (which identify routing rules for networks and sub-networks independent of any particular computer), I decided to deploy a VPC with the following characteristics.  
  1. The VPC will have two sub-networks, one public facing and one private.
  2. The private subnet will make use of Network Address Translation to allow private instances to access internet resources without being exposed to the internet more generally.
  3. The public subnet will include a device to allow me to administer the private instances, a so-called bastion.
  4. The VPC will include a site-to-site virtual private network connection to the current iTinker.net network so that computers on the current network and in the AWS VPC can connect without additional software.  And yes, I realize that once this element is in place I don't need the bastion, but it's a learning exercise.

Amazon provides many wizards to automatically create a great deal (nearly all) of what I seek.  For example, the VPC creation wizard will create a VPC with public and private subnets, a NAT gateway and hardware VPN where my only task would be to attach the VPC VPN to my existing network.  Nevertheless, in order to understand things better, I will build by hand the elements that I find particularly interesting.

Finally, I note that Amazon typically provides several ways to solve any given problem, with various pros and cons to each approach and some guidance to help make good choices.  I will tend to build things to minimize cost, trading off ease of implementation, maintenance, capacity or high availability.  In a production environment I'd certainly rethink many of these decisions.

My next post will discuss the NAT instance I built.  After that I will get into virtual private networking issues.

Monday, February 8, 2016

Did You Know???? (For all of you music fans out there)

Did you know that a current generation Mac or Windows computer is capable of producing great sound without a sound card?

You do not need to put a high-end sound card into a computer to get really solid sound out.  All you need is a USB cable and a digital to analog converter (a DAC) or an audio device that includes one.  These are not hard to find and run the gamut from surprisingly cheap to outrageously expensive.

I usually end up somewhere in the middle on these things.

In my study, where I spend a lot of time, I feed the USB output of my computer to a Music Hall digital to analog converter.  The Music Hall DAC also has a digital optical input connected to a Sonos player.  

The Sonos system is marvelous.  I've got mine connected to a music collection stored on a networked file server.  I alternate between music fed by my computer or coming from the Sonos system by selecting the input I need on the DAC.  The DAC converts the digitized music into good old fashioned analog sound waves, either driving some headphones or a set of Dynaudio speakers including a modest subwoofer.

I have placed Sonos devices throughout my house so that the music accessible to me in my study can be played anywhere I want.  The Sonos devices connect to each other through their own wireless network.  The setup is really simple.

I use a pair of Sonos speakers in my basement to create a no frills sound system.  I use more elaborate gear with a different DAC to feed one of the inputs to my traditional stereo system.  The gear is very flexible.

The Sonos devices are controlled from a computer or with an app on your tablet or smart phone.

And it all starts with a simple USB output from a computer and little else . . .

P.S. 

Just as I was prepared to push the publish button on this note, I decided to see what the good people at The Wire Cutter had to say on this topic.  I am not surprised that they raved about the Sonos system too.  Here's their review.

Follow Up On P2V Project

Vladan Seget has graciously posted the follow up regarding the completion of my project to do a P2V conversion of an outdated but still working Windows NT production system.  Thank you Vladan.

Friday, February 5, 2016

This Cloud Thing Just Might Catch On :-)

I am fascinated by virtualization technology in general and have invested a lot of my time learning about the VMware ecosystem in particular.  I am very proud of my home lab.  And now I'm starting to wonder if I should get rid of all of it and focus on cloud based infrastructure.

I want to supplement the processing, storage and network capacity of my home lab, but I'm not eager to make additional investments in hardware.  I started to look at Amazon Web Services and was blown away.  The array of resources is dazzling and the pricing structure makes sense.

I have been using Amazon Glacier to archive all of my data.  I'm ready to start learning about cloud servers and networking using Amazon Web Services EC2 and VPC technology.

I was starting to read the relevant AWS documentation and was really impressed.  Just as I put those documents down, this article came to my attention.

Oh my.

My next series of projects will be about extending my home lab into the cloud.

Thursday, February 4, 2016

The cloud, autonomous vehicles and Winston Churchill

As I listen to people argue that we should avoid putting data and IT infrastructure in the cloud because of security concerns or that autonomous vehicles are a bad idea because there will be situations in which they fail, I am reminded of the Winston Churchill quote: "Democracy is the worst form of government, except for all the others."

When evaluating whether to put data and IT infrastructure in the cloud, shouldn't we be thinking about how that arrangement compares to the alternatives as opposed to whether the cloud is somehow ideal?  Who is better at securing data: Amazon, Google, or Microsoft on the one hand, or the technology staff at Company X, which is not in the business of providing secure IT solutions, on the other?  While one can argue that Company X may be less of a target, it's not clear to me that that's a consistent winning argument either.

Similarly, when it comes to whether autonomous vehicles are a good idea, I think about all of the distracted or really bad drivers I see on the road every day, and the sleep or substance impaired drivers that are out there too.  Who would you rather see behind the wheel?

Monday, February 1, 2016

Redeploying An Old System As A Virtual Computer

I have spent a great deal of my spare time the last few years learning about virtual computing and building a virtual computing lab at home.  This has become my favorite part of the iTinker network – which I will talk about a bit more in the days ahead.

A few weeks ago my dear friend Mike asked if we could build a virtual computer to host a Windows NT 4 server that one of his clients uses.  While this is a project that’s still in process, we recently reached a major milestone.  We took a fresh install of Windows NT 4 running on some old hardware and turned it into a stable and very fast virtual machine.  The virtual system is meaningfully faster than the physical system.  In retrospect that’s not surprising, but it’s really good to see. 

I shared the results with Vladan Seget, a virtualization expert with a great web site and twitter feed.  I’ve relied on his work for many things.  Today he posted my work on his site together with some kind words on twitter.  Thank you, thank you, thank you.

Before talking about this task I want to set the stage and pay homage to my best friend.

I met Mike 40 years ago while in high school.  Our high school had a Digital Equipment Corporation PDP-8/e.  We programmed it in Basic and, on occasion, in assembly code.  While there was a teletype device with a tape reader in the computer room, most student programming was done on hand-marked Hollerith cards and processed in batches each afternoon.  Mike helped run that computer.  He saw my programs and commented on the print-outs.  From his comments I could tell that he was very smart and more than a little bit of a wise-ass.  My kind of guy.  We met and have been close friends ever since.  That’s the start of my journey with computers and how I met my best friend.  While Mike and I share many interests we are also polar opposites in many ways.  He has been a balancing force in my life and helps me to keep perspective on things.

Mike is a jack of all trades.  He’s a can-do guy.  One of the things he does is provide all manner of technology consulting and support to small businesses.  He has patiently listened to me in recent years drone on about linux and open source software as well as virtualization technology.  I have been convinced that while not working with big data centers and server farms, his business and clients could benefit from virtualization.  He agreed in concept.

Now, back to the story.

Mike explained that his client’s server runs custom software that could not be readily ported to a newer operating system and, due to the passing of time, even reinstalling Windows NT on fresh hardware would be a challenge.  Finding spare parts for the computer has also become a challenge.  I pressed him to consider porting the software to a newer operating system or even doing a clean install on new hardware.  Those approaches were simply non-starters. 

Most of my virtualization work has been with virtual machines running current operating systems and built from the ground up as virtual machines.  I had toyed around with the VMware vCenter Converter Standalone and converted a few Windows 7 computers to virtual machines, but nothing more than that.  I was game to help Mike deploy a virtualization environment and to tackle the NT P2V challenge.

We have met with considerable success so far.  The specific steps are described on Vladan’s site.  We expect to complete our work on the production server very soon and then move on to a few other older systems – but none as old as the NT server.

There is a lot of useful information to be found on the web from when these specific P2V conversions were routine.  Trying to accomplish the conversion today into a current VMware environment presents some different challenges as the tools used before are not readily available and they wouldn’t necessarily work well with current virtualization environments.

I’m pleased with what we have done so far and hope that by writing everything down and sharing it publicly we can make things at least a little easier for others who choose to go down the same road.  It’s proving to be a great way to learn more about virtualization environments and also to test out some performance monitoring tools (Cacti in particular) that I finally got up and running on one of my virtual machines.