Integrating Trendnet IP Cameras with my homelab

What do you get when you take an IT Systems Engineer with more time on his hands than usual and an unfinished home project list that isn’t getting any shorter?

You get this:

Daytime
My home automation/Internet of Things ‘play’

That’s right. I’ve stood-up some IP surveillance infrastructure at my home, not because I’m a creepy Big Brother type with a God-Complex, rather:

  1. Once my 2.5-year-old son figured out how to unlock the patio door and bolt outside, well, game over, boys and girls… I needed some ‘insight’ and ‘visibility’ into the Child Partition’s whereabouts pronto, and chasing him while he giggles is fun for only so long
  2. My home is exposed on three sides to suburban streets, and it’s nice to be able to see what’s going on outside
  3. I have creepy Big Brother tendencies and/or a God complex

I had rather simple rules for my home surveillance project:

  • IP cameras: ain’t no CCTV/600 lines of resolution here, I wanted IP so I could tie it into my enterprise home lab
  • Virtual DVR, not physical: I already have plenty of hardware at home, with 16 cores, 128GB of RAM, and about 16TB of storage.
  • No Wifi, Ethernet only: Wifi from the camera itself was a non-starter for me because 1) while it makes getting video from the cameras easier, it limits where I can place them, both from a power & signal-strength perspective, and 2) spectrum & bandwidth are limited & noisy at distance-friendly 2.4GHz, and wide open at 5GHz, but 5GHz has half the range of 2.4GHz. For those reasons, I went old-school: Cat5e, the Reliable Choice of Professionals Everywhere
  • Active PoE: 802.3af as I already own about four PoE injectors and I’ve already run Cat5e all over the house
  • Endpoint agnostic: In the IP camera space, it’s tough to find an agnostic camera system that will work on any end-device with as little friction as possible. ONVIF is, I suppose, the closest "standard" to that, and I don’t even know what it entails. But I know what I have: a Samsung GS6, an iPhone 6, a Windows Tiered Storage box, four Hyper-V hosts, System Center, an Xbox One, and a 100 megabit internet connection.
  • Directional, no omni-PTZ required: I could have saved money on at least one corner of my house by buying a domed, movable PTZ camera rather than using two directionals, but this needed to work on any endpoint, and PTZ controls often don’t.

And so, over the course of a few months, I picked up four of these babies:


Trendnet TV-IP310PI

Design

I liked these cameras from the start. They’re housed in a nice, heavyweight steel enclosure, have a hood to shade the lens and just feel solid and sturdy. Trendnet markets them as outdoor cameras, and I found no reason to dispute that.

My one complaint about these cameras is the rather finicky mount. The camera can rotate and pivot within the mount’s attachment system, but you need to be careful here as an ethernet cable (inside of a shroud) runs through the mount. Twist & rotate your camera too much, and you may tear your cable apart.

And while the mount itself is steel and needs only three screws to attach, the interior mechanism that allows you to move the camera once mounted is cheaper. It’s hard to describe and I didn’t take any pictures as I was cursing up a storm when I realized I almost snapped the cable, so just know this: be cognizant that you should be gentle with this thing as you mount it and then as you adjust it. You only have to do that once, so take your time.

Imaging and Performance:

Nighttime

Trendnet says the camera’s sensor & processing is capable of pushing out 1080p at 30 frames per second, but once you get into one of these systems, you’ll notice it can also do 2560×1440, or QHD resolutions. Most of the time, images and video off the camera are buttery smooth, and it’s great.

I’m not sharp enough on video and sensors to comment on color quality, whether F 1.2 on a camera like this means the same as it would on a still DSLR, or understand IR Lux, so let me just say this: These cameras produce really sharp, detailed and wide-enough (70 degrees) images for me, day or night. Color seems right too; my lawn is various hues of brown & green thanks to the heat and California drought, and my son’s colorful playthings that are scattered all over do indeed remind me of a clown’s vomit. And at night, I can see far enough thanks to ambient light. Trendnet claims 100 foot IR-assisted viewing at night. I see no reason to dispute that.

Let the camera geeks geek out on the camera; this is an enterprise tech blog, and I’ve already talked about the hardware, so let’s dig into the software-defined & networking bits that make this expensive project worthwhile.

Power & Networking

These cameras couldn’t be easier to connect and configure, once you’ve got the power & cabling sorted out. The camera features a 10/100 ethernet port; on all four of my cameras, that port connects to one of Trendnet’s own PoE injectors. All the PoE injectors are inside my home; I’d rather extend ethernet with power than put a fragile PoE device outside. The longest cable run is approximately 75′, well within the spec. Not much more to say here other than Trendnet claims the cameras will use 5 watts maximum, and that’s probably at night when the IR sensors are on.

From each injector, a data cable connects to a switch. In my lab, I’ve got two enterprise-level switches.

One camera, the garage/driveway camera, is plugged into a trunked port (native VLAN 410) on my 2960S in the garage.

The other switch is a small Cisco SG-300 10-port. The three other cameras connect to it. The SG-300 serves as my access-layer switch and has a 3x1GbE port-channel back to the 2960S. This switch wasn’t getting used enough in my living room, so I moved it to my home office, where all of its ports are now in use. Here’s my home lab environment, updated with cameras:

The Homelab as it stands today

Like any other IP cam, the Trendnet will obtain an IP off your DHCP server. Trendnet includes software with the camera that will help you find/provision the camera on your network, but I just saved a few minutes and looked in my DHCP table. As expected, the cameras all received a routable IP, DNS, NTP and other values from my DHCP.

Once I had the IP, it was off to the races:

  • Set a DHCP reservation
  • Verify that an A record was created in DNS so I could refer to the cameras by name rather than IP (a quick PowerShell sketch of these two steps follows this list)
  • Log in, configure a new password, update firmware, rename the camera, turn off UPnP, turn off telnet
  • Adjust camera views
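For the first two bullets, PowerShell beats clicking through the DHCP and DNS consoles. A minimal sketch, assuming the Windows Server DHCP/DNS cmdlets are available and using made-up values for the scope, address, MAC, and camera name (my DHCP/DNS roles live on core.daisettalabs.net):

    # Pin the camera's lease to a fixed address (scope, IP, MAC and name are examples)
    Add-DhcpServerv4Reservation -ComputerName core.daisettalabs.net -ScopeId 192.168.10.0 `
        -IPAddress 192.168.10.21 -ClientId "00-11-22-33-44-55" -Name "cam-driveway" `
        -Description "Trendnet TV-IP310PI, garage/driveway"

    # Confirm the A record exists so the camera can be reached by name
    Resolve-DnsName cam-driveway.daisettalabs.net -Type A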

Software bits – Server Side

Trendnet is nice enough to include a fairly robust, rebadged version of Luxriot camera software, which has two primary components: Trendnet View Pro (a fat client & server app) and VMS Broadcast Server, an HTTP server. Trendnet View Pro is a server-like application that you can install on your PC to view, control, and edit all your cameras. I say server-like because this is the free version of the software, and it has the following limits:

  • Cannot run as a Windows Service
  • An account must be logged in to ‘keep it running’
  • You can install View Pro on as many PCs as you like, but only one is licensed to receive streaming video at a time

Upgrading the free software to a version that supports more simultaneous viewers is steep: $315 to be exact.

Smoking the airwaves with my beater kiosk PC in the kitchen. This is the TrendNet View client, limited to one viewer at a time

Naturally, I went looking for an alternative, but after dicking around with Zoneminder & VLC for a while (both of which work but aren’t viewable on the Xbox), I settled on VMS Broadcast Server, the HTTP component of the free software.

Just like View Pro, VMS Broadcast won’t run as a service, but, well, sysinternals!
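I leaned on Sysinternals to keep a session alive for it, but if you’d rather script around the no-service limitation, a startup scheduled task gets you most of the way there. A rough sketch, with a hypothetical install path and service account; whether the Trendnet binary behaves when launched this way is something you’d want to test:

    # Launch the VMS Broadcast server at boot (path and account are guesses)
    $action  = New-ScheduledTaskAction -Execute "C:\Program Files\TRENDnet\VMS Broadcast Server\vmsbroadcast.exe"
    $trigger = New-ScheduledTaskTrigger -AtStartup
    Register-ScheduledTask -TaskName "VMS Broadcast" -Action $action -Trigger $trigger `
        -User "DAISETTALABS\svc-cameras" -Password "NotMyRealPassword" -RunLevel Highest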

So after deliberating a bit, I said screw it, and stood-up a Windows 8.1 Pro VM on a node in the garage. The VM is Domain-joined, which the Trendnet software ignored or didn’t flag, and I’ve provisioned 2 cores & 2GB of RAM to serve, compress, and redistribute the streams using the Trendnet fat client server piece as well as the VMS web server.

Client Side

On that same Windows 8.1 VM, I’ve enabled DLNA-sharing on VLAN 410, which is my trusted wireless & wired internal network. The thinking here was that I could redistribute via DLNA the four camera feeds into something the XBox One would be able to show on our family’s single 48″ LCD TV in the living room via the Media App. So far, no luck getting that to work, though IE on the XBox One will view and play all four feeds from the Trendnet web server, which for the purposes of this project, was good enough for me.

Additionally, I have a junker Lenovo laptop (Ideapad, 11″) that I’ve essentially built into a Kiosk PC for the kitchen/dining area, the busiest part of the house. This PC automatically logs in, opens the fat client and loads the file to view the four live feeds. And it does this all over wifi, giving instant home intel to my wife, mother-in-law, and myself as we go about our day.
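There’s no magic to the kiosk behavior, just the old Winlogon auto-logon registry values plus a shortcut in the common Startup folder. A sketch of the idea (the account name and paths are examples, and the password lands in the registry in clear text, so use a throwaway local account):

    # Classic Winlogon auto-logon values
    $wl = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
    Set-ItemProperty $wl -Name AutoAdminLogon  -Value "1"
    Set-ItemProperty $wl -Name DefaultUserName -Value "kiosk"
    Set-ItemProperty $wl -Name DefaultPassword -Value "NotMyRealPassword"

    # Launch the Trendnet fat client with its saved four-camera layout at logon
    Copy-Item "C:\Kiosk\TrendnetView.lnk" "$env:ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\"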

Finally, both the iOS & Android devices in my house can successfully view the camera streams, not from the server, but directly (and annoyingly) from the cameras themselves.

The Impact of RTSP 1080p/30fps x 4 on Home Lab 

I knew going into this that streaming live video from four quality cameras 24×7 would require some serious horsepower from my homelab, but I didn’t realize how much.

From the compute side of things, it was indeed a lot. The Windows 8.1 VM is currently on Node2, a Xeon E3-1241v3 with 32GB of RAM.

Typically Node2’s physical CPU hovers around 8% utilization as it hosts about six VMs in total.

With the 8.1 VM serving up the streams as well as compressing them with a variable bit rate, the tax for this DIY home surveillance project was steep: Node2’s CPU now averages 16% utilized, and I’ve seen it hit 30%. The VM itself sits above 90% utilization.
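If you want to watch the damage yourself, the Hyper-V hypervisor counters tell the story better than Task Manager inside the guest. A quick sketch using standard Hyper-V counter paths against my node name:

    # Sample host and guest CPU pressure every 5 seconds, ten times
    Get-Counter -ComputerName node2 -SampleInterval 5 -MaxSamples 10 -Counter @(
        "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
        "\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time"
    )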

More utilization = more worries about thermal as Node2 sits in the garage. In southern California. In the summertime.

Ambient air temperature in my garage over the last three weeks.

Node2’s average CPU temperature varies between 22C and 36C on any given warm day in the garage (ambient air is 21C – 36C). But with the 8.1 VM, Node2 has hit as high as 48C. Good thing I used some primo thermal paste!


All your Part 15 FCC Spectrum are belong to me, on channel 10 at least

From the network side, results have been interesting. First, my Meraki is a champ. The humble MR-18 802.11n access point doesn’t break a sweat streaming the broadcast feed from the VM to the Lenovo kiosk laptop in the kitchen. Indeed, it sustains north of 21Mb/s as this graph shows, without interrupting my mother-in-law’s consumption of TV broadcasts over wifi (separate SSID & VLAN, from the SiliconDust TV tuner), nor my wife’s Facebooking & Instagramming needs, nor my own tests with the Trendnet application, which interfaces with the cameras directly.

Meraki’s analysis says that this makes the 2.4ghz spectrum in my area over 50% utilized, which probably frustrates my neighbors. Someday perhaps I’ll upgrade the laptop to a 5ghz radio.

vSwitch, the name of my converged SCVMM switch, is showing anywhere from 2 to 20 megabits of Tx/Rx for the server VM. Pretty impressive performance for a software switch!

Storage-wise, I love that the Trendnets can mount an SMB share, and I’ve been saving snapshots of movement to one of the SMB shares on my Windows SAN box.
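Carving out that snapshot share on the storage box is a couple of lines. A sketch, with the path and the account the cameras authenticate as standing in as placeholders:

    # Create the folder and publish it over SMB for the cameras' snapshot uploads
    New-Item -ItemType Directory -Path "D:\Shares\CamSnaps" -Force
    New-SmbShare -Name "CamSnaps" -Path "D:\Shares\CamSnaps" -ChangeAccess "DAISETTALABS\svc-cameras"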

I am also using Trendnet’s email alerting feature to take snapshots and email them to me whenever there’s motion in a given area. Which is happening a lot now as my 2 year old walks up to the cameras, smiles and says “Say cheeeese!”

All in all, a tidy & fun sub-$1000 project!

The Trouble with Cipher Suites

So I was tooling around one day in the lab, reading Ivan Ristic’s book on SSL/TLS, when I came across his advice on securing Windows-based Infrastructures from offering up the use of out-of-date/obsolete or otherwise insecure cipher suites to hosts on the other end of an https connection.

I read Ristic’s chapter a couple of times, reviewed TechNet, and selected a set of cipher suites in Group Policy in the order I wanted them used, based largely on Ristic’s text, but with a few others I knew I’d need after the policy went live. Then I pushed out the new policy, named “Strong Crypto,” to all physical, virtual and laptops in my home lab.
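Under the hood, that GPO setting boils down to a single comma-separated registry value holding the suites in preference order. A minimal sketch of the equivalent change in PowerShell; the three suites shown are illustrative rather than my full "Strong Crypto" list, and older Schannel builds name some of them with curve suffixes like _P256:

    # The policy-backed cipher suite order lives in one REG_SZ value named Functions
    $key = "HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002"
    $suites = @(
        "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
        "TLS_RSA_WITH_AES_256_CBC_SHA256"
    ) -join ","
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name "Functions" -Value $suites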

A few gpupdates later, I was pleased to see that nothing was broken. Schannel wasn’t showing any errors, user & computer accounts were authenticating and getting Kerberos tickets, and pleasantly, my Outlook fat client didn’t even hiccup; it was happily using TLS 1.2 cipher suites to talk with my Office 365 Exchange instance.

Happy dance.

And then, two days later, I noticed it. OneDrive for Business was busted, had gone Pear Shaped, and was now totally t***-up as my English friends would say.

A couple hundred gigabytes of files no longer syncing to my Sharepoint Online site, as evidenced by these Microsoft Icons of Distress:

[screenshot: OneDrive for Business sync error icons]

So, what’d I break?

I’ll get to that in a moment, but first: why would you bother with something as obscure as cipher suites and their order? I mean beyond the fact that toggling the cipher suite sounds cool?

Why Cipher Suites are Important

Cipher suites are a critical part of your AD infrastructure: they represent a sort of baseline set of standards that client & server negotiate over during the complicated and very important tête-à-tête that is the TLS/SSL handshake.

You can and should read more about TLS handshakes in this RFC, but the bottom line is this: client & server are supposed to negotiate with each other, find the most secure cipher suite they have in common, and use it during the secured session.

If client & server can’t find at least one common cipher suite, you have a busted TLS connection. And that’s no bueno, unless it was your intent.

In Microsoft-land, the default set of cipher suites is pretty good. Who am I kidding, it’s an acronym-rich playground of security paradigms, as evidenced by the Group Policy editor:

Holy Acronym soup, batman!

Don’t be intimidated by all the crypto terms on this screen. What you see is the list of cipher suites -and the order in which they are presented to a host- by default.

The way to read one of these cipher suites is by breaking it down into its constituent parts:

[diagram: a cipher suite string broken down into its constituent parts]

So, the cipher suite above uses TLS as its protocol (vs SSL), can exchange keys via the Elliptic Curve Diffie-Hellman ephemeral mechanism, accepts an RSA x.509 certificate, and is willing to encrypt the session via the AES 256-bit block cipher. The last bit, we’ll get to in a moment.
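To make that concrete, here’s the same anatomy applied to a suite you’ll find near the top of most modern lists (chosen for illustration; it isn’t necessarily one from my GPO):

    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
    ├─ TLS          → the protocol (as opposed to SSL)
    ├─ ECDHE        → key exchange: Elliptic Curve Diffie-Hellman, ephemeral
    ├─ RSA          → authentication: the server presents an RSA x.509 certificate
    ├─ AES_256_CBC  → bulk encryption: AES with a 256-bit key in CBC mode
    └─ SHA384       → the hash/MAC that protects message integrity (the "last bit")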

Be cautious when modifying

Since I was doing this in my lab, I had no concern about legacy applications, but in a production environment, you’ll want to tread lightly and deliberately here. Consider:

If you’re in a typical Microsoft IT shop, you probably have a few legacy applications hanging around that may rely on old cipher suites, or, vice-versa, an application server that can’t use the newer cipher suites that come built into your desktops & laptops.

Take Windows Server 2003, for example. The base OS doesn’t support Elliptic Curve Diffie-Hellman for key exchange, so right off the bat, if you’ve got 2003 hosts serving up https Sharepoint or Exchange in-house, your clients & servers will never utilize TLS_ECDHE, as that suite is not common to both of them. The reverse is also true; your Windows 8.1 laptop isn’t going to support the oldest suites that your 2003 server does; TLS_RSA_WITH_DES_CBC_SHA is never going to be the cipher suite watering hole your clients/servers meet around ((thank Goodness!!)) unless you go out of your way to make it happen.

The lesson here is that old cipher suites never die; dependency on them just fades away as you modernize/replace your legacy in-house applications with modern, streamlined, and properly TLS-secured ones. So be cautious, lest you break a legacy application.

You might be thinking I’m full of great advice, yet I still managed to wreck my OneDrive for Business sync app. And you’d be right!

So what happened?

Essentially, I broke my little OneDrive for Business sync app because I didn’t include SHA1 as a possible hash algorithm in any of the cipher suites I selected.

By leaving SHA1 out of my cipher suites, OneDrive for Business couldn’t find common ground with Sharepoint Online, which broke my OneDrive Sync.

And SHA1 is used by Microsoft IT ((as a side note, it’s really awesome to see Microsoft IT’s PKI, built out as it should be. Here’s a PKI serving not just Microsoft internal employees -all 100k of them- but millions of customers. If Microsoft IT can build a PKI to that scale, surely you and I can build one for the users dependent on us!)) in at least two places: as the Signature Hash algorithm on the root certificate of my Sharepoint site, and as the hashing mechanism for the Thumbprint on *.sharepoint.com certificate.

Had I visited my Sharepoint site in IE, I would likely have seen an error message in my browser; but I normally use Opera, and Opera -like Chrome & Firefox- has its own cipher suites apart from Windows’, so I never saw an error.

Adding the strongest cipher suite that included SHA1 fixed the error right away. ((Interesting aside: Google, and many security researchers, consider SHA1 to be end-of-life as it is now, or will be very soon now, computationally feasible to crack it, if that’s the right word. Google wants to sunset SHA1 in its browser this year; Ivan Ristic’s site will give https sites that use SHA1 a D- rating by the end of 2015. Microsoft IT, meanwhile, still uses it in production, but plans to deprecate it at the end of 2016. What gives? You could say there’s a pissing match between these leviathans of technology, or that one is trying to screw the other. But in essence, all parties agree SHA1 should fade away, they just differ on how aggressive deprecation efforts should be.))
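If you’re curious what a given endpoint actually negotiates against your Schannel configuration, you don’t need a packet capture; .NET will tell you from PowerShell. A small sketch (the hostname is a placeholder, and this only reports what the machine you run it from negotiates):

    # Open a TLS connection and report the negotiated protocol, cipher and hash
    $target = "yourtenant.sharepoint.com"   # placeholder hostname
    $tcp = New-Object System.Net.Sockets.TcpClient($target, 443)
    $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream())
    $ssl.AuthenticateAsClient($target)
    $ssl | Select-Object SslProtocol, KeyExchangeAlgorithm, CipherAlgorithm, CipherStrength, HashAlgorithm
    $tcp.Close()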

Fixed Wireless is the WAN builder’s best friend

This is Joe. He’s an American hero.

Just how hard is it in 2015  to order & deploy a cheap commodity internet circuit to connect a remote office/branch office (ROBO) to the rest of your corporate WAN via the internet? ((Commodity = business class internet, something less reliable but orders of magnitude less expensive than a traditional private line, T1, or managed MPLS circuit. Commodity also means fat, dumb internet pipe, a product that cable internet companies consider an existential threat))

Pretty damned hard.

Why so difficult Jeff?!? you’re thinking. I stand-up tunnels and tear them down all day long, I route/switch in my sleep and verily I say unto you that my packets always find their way home, tags intact, whether on the WAN, between switch closets in the campus, or between nodes in the datacenter!

Verily they do indeed, and I salute you, you herder of stray packets!

It’s not that the technology connecting core to branch is hard or difficult, no, what I’m bitching about today is connecting the branch site to the internet in the first place.

It’s layer 1, stupid.

Truly, ordering internet service for a small or even medium-sized branch office is one of the most painful exercises in modern IT.

Here, let me show you:

  1. You Bing/Google various iterations of "Lake Winnepesaukah ISPs," "Punxatawney Packet Delivery," "Broadband Service in Topeka," "Ethernet over Copper + Albuquerque," "Business Cable Internet – Pompano Beach, FL" and such. Dismissing the spam URL results on pages 1-12, you eventually arrive at Comcast, Time Warner, or Charter née Spectrum Business, or whatever little coax fiefdom has carved out a franchise at the edge of your business. You visit their website, click "Business" and fight your way through pop-ups and interstitials to a page that says it can verify service at your branch office’s address.
  2. Right, you think, I’ll just Tab-tab my way through this form, input my branch office address here, punch that green submit button there, and get these nasty Layer 1 bits out of the way. But this isn’t the old days of 2009 when you could order a circuit online or at least verify service…oh no, no sir, this is the future…this is 2015. In 2015, you see, the Cable providers demand audience with you, so that they can add value.
  3. Pay the Last Mile Toll:  So you surrender your digits and wait for a phone call. When it rings 36-72 hours later, you’re determined to keep it short. What you want is a simple yes/no on service at your ROBO, or an install date, but what you get is a salesperson who can’t spell TCP/IP and wants to sell you substandard VoIP & TV. “Will you be uploading or downloading with this internet connection?” is just one of the questions you’ll suffer through to mollify the last mile gatekeepers standing between you and #PacketGlory on the WAN.
  4. At long last, install day arrives: You’ve drop-shipped the edge router/overlay device, you’ve coordinated with the L-con, and the CableCo tech is on site at your ROBO to install your circuit. Hallelujah, you think, as you wait for the tunnel to come up. But it never does, because between your awesome zero-touch edge device & your datacenter lies some crazy bespoke 2Wire gateway device that NATs or offers up a free wifi connection to the public on your dime. Another phone call, another fight to get those things turned off.

Nuts to all that, I say.

This is America jack, and the great thing about America is choice. Even when you don’t have choice (and you don’t in the case of cable franchises & municipalities), all you may need is line of sight to one of these things:

Mmmm. Microwaves.

That’s right. Fixed wireless, baby. I’m hot on fixed wireless in 2015. It’s everything CableCo isn’t. It’s:

  • Friction free: In place of the coax fiefdoms and gatekeepers, the 1-800 numbers, and the aggressive salespeople, there’s just Joe, a real engineer at a local fixed wireless ISP. Joe’s great because Joe’s local, and Joe takes your order, gives you his mobile, installs the antenna at your branch, and hands you a blue wire with three static IPs.
  • Super-fast to deploy. You want internet at your ROBO? Well guess what? It’s already there, you just need the equipment to catch it.
  • More reliable than it used to be: Now of course this all depends on the application you’re trying to deliver to your ROBO, but I’ll say this: Fixed Wireless has improved. You don’t need to fear (as much) a freak snowstorm, a confused flock of Canada Geese, or rain. For a small ROBO, a fixed wireless connection might be enough to serve as the primary WAN link. For larger ROBOs, I think the technology is mature enough to serve as a secondary WAN link, or even your primary Internet circuit. ((Routing business traffic over the expensive wired link and internet over the cheap fixed wireless link is a recipe I’d recommend all day long and twice on Sundays ))
  • As Secure as Anything Else These Days: How difficult would it be to perform a man in the middle attack via interception of a fixed wireless connection? I’m not sure, to be honest, but if you aren’t encrypting your data before it leaves your datacenter, you have a whole lot more to worry about than a blackhat with a laptop, a stick, and a microwave antenna.
  • Cost competitive: I’ve deployed a couple of fixed wireless connections and I find the cost to be very competitive with traditional cable company offerings. Typically you’ll pay about $200 for the antenna install, but unlike the fee Comcast would charge you to install their modem, I think this is justified as it involves real labor and a certain amount of risk.
  • Regional/Hyper-local but still innovative: For whatever reason, fixed wireless ISPs have proven resistant to the same market forces that killed off your local dial-up/DSL ISP. Yet this isn’t a stagnant industry; quite the opposite in fact, with players like Ubiquiti Networks releasing new products.

I’ve been working on the WAN a lot lately and I’ve deployed two fixed wireless circuits at ROBOs. If you’ve got similar ROBO WAN pains, you should have a look at fixed wireless, you might be surprised!

Find Office problems before they find you with Telemetry server

I’ve not always had a bromance with Microsoft’s Office suite. I cut my word processing teeth on WordPerfect 5.1, did most of my undergrad papers in BeOS’ one productivity suite ((GoBe Productive, still the best Office suite name)) , and touch-typed my way to graduating cum laude in grad school with countless Turabian-style Google Docs papers.

Office?

That was for corporate suits, man. Rich corporate suits.

But all that’s ancient history. Or maybe I’ve become a suit. Either way, I’m loving Office today.

In 2015, Office has transformed into the ultimate agnostic git ‘r done productivity package. It’s free to use in many cases, but if you want to ‘own’ it, you can subscribe to it, just like HBO ((For the IT Pro, this is a huge advantage, as a cheap E-class sub gives you access to your own Exchange instance, your own Sharepoint server, and your own Office tenant. It’s awesome!)) . It’s also available on just about any device or computing system you can think of, works just as well inside a browser as Google Docs does, and has an enormous install base.

From the Office Telemetry PDF guide, linked below

Office has become so impressive and so ubiquitous that it’s truly a platform unto itself, consumed a la carte or as part of a well-balanced Microsoft meal. I’m bullish on Windows but if Office’s former partner ever sunsets, I’m convinced my kid and his kid will still grow up in an Office world.

All of that makes Office really important for IT, so important that you as an IT Guy should consider standing-up some easy instrumentation around it.

Enter Office Telemetry, a super-simple package that flows your Office data to a SQL collector, mashes it up, and gives you important insight into how your users are using Office. It also surfaces the problems in Office -or Office documents- before your users do, and it’s free.

Oh, did I mention it’s called Office Telemetry? This thing makes you feel like an astronaut when you’re using it!

Here’s how you deploy it. Total time: about an hour.

  1. Download the Office 2013 ADMX/ADML files for Group Policy and deploy them to your Domain Controllers.
  2. Spin-up a 2008 R2 or 2012 VM, or find a modestly-equipped physical box that at least has Windows Management Framework 3.0/Powershell 3.0 on it. If it has a SQL 2012 instance on it that you can use, even better. If not, don’t stress and proceed to the next step.
  3. Set aside a folder on a separate volume (ideally) for the telemetry data. If you’re going to flow data from hundreds of Office users, plan for 5-25 megabytes per user, at a minimum.
    • If your users are on the WAN, plan accordingly. Telemetry data is pretty lightweight (50k chunks for older Office clients, 64k chunks for Office 2013)
  4. Install Office ProPlus 2013 or 365 on the VM. You do not need to use an Office 365 license for it to run.
  5. Download the Deploy Office Telemetry powershell script package from TechNet or via Script Browser in Powershell ISE.
  6. Because it’s a script, you’ll need to temporarily change your server’s execution policy, self-sign it, or configure Group Policy as appropriate to run it. TechNet has instructions.
  7. Run the script; it will download SQL 2012 express and install it for you if you don’t have SQL. It will also set proper SMB read/modify permissions on that folder you set up earlier.
  8. As if that wasn’t enough, the script will give you a single registry keyfile you can use to deploy to your users’ machines (a hedged sketch of the values it sets follows this list).
  9. But I prefer the Group Policy/SCCM route. Remember the ADMX files you deployed? Flip the switches as appropriate under User Configuration>Administrative Templates>Microsoft Office 2013> Telemetry Dashboard.
  10. Sit back, and watch the data flow in, and pat yourself on the back because you’re being a proactive IT Pro!
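For reference, whether you use the generated keyfile or the Group Policy route, the agent settings end up under the same per-user key. A hedged sketch of the Office 2013 values (the share path and tag are examples; the TechNet deployment guide has the authoritative list):

    # Point the Office 2013 Telemetry Agent at the collector share and turn it on
    $osm = "HKCU:\Software\Policies\Microsoft\Office\15.0\osm"
    New-Item -Path $osm -Force | Out-Null
    Set-ItemProperty $osm -Name CommonFileShare -Value "\\telemetry01\TDShared"   # the folder from step 3
    Set-ItemProperty $osm -Name EnableLogging   -Value 1
    Set-ItemProperty $osm -Name EnableUpload    -Value 1
    Set-ItemProperty $osm -Name Tag1            -Value "HQ-Users"                 # free-form label for filtering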

As I’ve deployed this solution, I’ve found broken documents, expensive add-ons that delay Office, and multiple other issues that were easy to resolve but difficult to surface. It’s totally worth your time to install it.

Office Telemetry PDF

It’s been awhile since I posted about my home lab, Daisettalabs.net, but rest assured, though I’ve been largely radio silent on it, I’ve been busy.

If 2013 saw the birth of Daisetta Labs.net, 2014 was akin to the terrible twos, with some joy & victories mixed together with teething pains and bruising.

So what’s 2015 shaping up to be?

Well, if I had to characterize it, I’d say it’s #LabGlory, through and through. Honestly. Why?

I’ve assembled a home lab that’s capable of simulating just about anything I run into in the ‘wild’ as a professional. And that’s always been the goal with my lab: practicing technology at home so that I can excel at work.

Let’s have a look at the state of the lab, shall we?

Hardware & Software

Daisetta Labs.net 2015 is comprised of the following:

  • Five (5) physical servers
  • 136 GB RAM
  • Sixteen (16) non-HT Cores
  • One (1) wireless access point
  • One (1) zone-based Firewall
  • Two (2) multilayer gigabit switches
  • One (1) Cable modem in bridge mode
  • Two (2) Public IPs (DHCP)
  • One (1) Silicon Dust HD
  • Ten (10) VLANs
  • Thirteen (13) VMs
  • Five (5) Port-Channels
  • One (1) Windows Media Center PC

That’s quite a bit of kit, as a former British colleague used to say. What’s it all do? Let’s dive in:

Physical Layout

The bulk of my lab gear is in my garage on a wooden workbench.

Nodes 2-4, the core switch, my Zywall edge device, modem, TV tuner, Silicon Dust device and Ooma phone all reside in a secured 12U, two post rack I picked up on ebay about two years ago for $40. One other server, core.daisettalabs.net, sits inside a mid-tower case stuffed with nine 2TB Hitachi HDDs and five 256GB SSDs below the rack.

Placing my lab in the garage has a few benefits, chief among them: I don’t hear (as many) complaints from the family cluster about noise. Also, because it’s largely in the garage, it’s isolated & out of reach of the Child Partition’s curious fingers, which, as every parent knows, are attracted to buttons of all types.

Power & Thermal

Of course you can’t build a lab at home without reliable power, so I’ve got one rack-mounted APC UPS, and one consumer-grade Cyberpower UPS for core.daisettalabs.net and all the internet gear.

On average, the lab gear in the garage consumes about 346 watts, or about 3 amps. That’s significant, no doubt, costing me about $38/month to power, or about 2/3rds the cost of a subscription to IT Pro TV or Pluralsight. 🙂
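If you want to sanity-check that against your own utility bill, the arithmetic is simple (the ~$0.15/kWh rate is just what my $38 figure implies, not a quote from the bill):

    346 W × 24 h × 30 days ≈ 249 kWh per month
    249 kWh × ~$0.15 per kWh ≈ $38 per month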

Thermals are a big challenge. My house was built in 1967, has decent insulation and holds temperature fairly well in the habitable parts of the space. But none of that is true about the garage, where my USB lab thermometer has recorded temps as low as 3C last winter and as high as 39C in Summer 2014. That’s air temperature at the top of the rack, mind you, not at the CPU.

One of my goals for this year is to automate the shutdown/powerup of all node servers in the Garage based on the temperature reading of the USB thermometer. The $25 thermometer is something I picked up on Amazon awhile ago; it outputs to .csv but I haven’t figured out how to automate its software interface with powershell….yet.
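The automation half doesn’t really need the vendor’s software to cooperate; as long as that .csv keeps growing, something like the following, run from a scheduled task, would do the job. The column name and threshold are guesses, and the node names are mine:

    # Read the newest sample from the thermometer's CSV and shut the garage nodes down if it's too hot
    $last  = Import-Csv "C:\TempLogs\garage.csv" | Select-Object -Last 1
    $tempC = [double]$last.TemperatureC          # column name is a guess
    if ($tempC -ge 38) {
        Get-VM -ComputerName node2, node3, node4 | Stop-VM -Force     # save the guests first
        Stop-Computer -ComputerName node2, node3, node4 -Force
    }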

Anyway, here’s my stack, all stickered up and ready for review:

[photo: the garage rack, stickered up and ready for review]

Beyond the garage, the Daisetta Lab extends to my home’s main hallway, the living room, and of course, my home office.

Here’s the layout:

[diagram: the 2015 home lab physical layout]

Compute

On the compute side of things, it’s almost all Haswell with the exception of core and node3:

Server | Architecture | CPU | Cores | RAM | Function | OS | Motherboard
Core | AMD A-series | A8-5500 | 2 | 8GB | Tiered Storage Spaces & DC/DHCP/DNS | Server 2012 R2 | Gigabyte D4
Node1 | Haswell | i7-4770k | 4 | 32GB | Main PC/Office/VM host/storage | 2012 R2 | Supermicro X10SAT
Node2 | Haswell | Xeon E3-1241 | 4 | 32GB | Cluster node | 2012 R2 Core | Supermicro X10SAF
Node3 | Ivy Bridge | i7-2600 | 4 | 32GB | Cluster node | 2012 R2 Core | Biostar
Node4 | Haswell | i5-4670 | 4 | 32GB | Cluster node/storage | 2012 R2 Core | Asus

I love Haswell for its speed, thermal properties and affordability, but damn! That’s a lot of boxes, isn’t it? Unfortunately, you just can’t get very VM dense when 32GB is the max amount of RAM Haswell E3/i7 chipsets support. I love dynamic RAM on a VM as much as the next guy, but even with Windows core, it’s been hard to squeeze more than 8-10 VMs on a single host. With Hyper-V Containers coming, who knows, maybe that will change?
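Most of that squeezing is dynamic memory tuning, guest by guest. A typical sketch of how I’d ratchet a lab VM down (the VM name and values are examples):

    # Let a lab VM float between 512MB and 4GB instead of holding a static allocation
    Set-VMMemory -VMName "dc02" -DynamicMemoryEnabled $true `
        -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB -Buffer 20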

Node1, the pride of the fleet and my main productivity machine, boasting 2×850 Pro SSDs in RAID 0, an AMD FirePro, and Tiered Storage Spaces

While I included it in the diagram, TVPC3 is not really a lab machine. It’s a cheap Ivy Bridge Pentium with 8GB of RAM and 3TB of local storage. Its sole function in life is to decrypt the HD stream it receives from the Silicon Dust tuner and display HGTV for my mother-in-law with as little friction as possible. Running Windows 8.1 with Media Center, it’s the only PC in the house without battery backup.

Physical Network

About 18 months ago, I poured gallons of sweat equity into cabling my house. I ran at least a dozen Cat5e cables from the garage to my home office, bedrooms, living room and to some external parts of the house for video surveillance. I don’t regret it in the least; nothing like having a reliable, physical backbone to connect up your home network/lab environment!

Meet my underlay

At the core of the physical network lies my venerable Cisco 2960S-48TS-L switch. Switch1 may be a humble access-layer switch, but in my lab, the 2960S bundles 17 ports into five port-channels, serves as my default gateway, routes with some rudimentary Layer 3 functions ((Up to 16 static routes; no dynamic routing features are available)) and segments 9 VLANs and one port-security VLAN, a feature that’s akin to PVLAN.

Switch2 is a 10 port Cisco Small Business SG-300 running at Layer 3 and connected to Switch1 via a 2-port port-channel. I use a few ports on switch2 for the TV and an IP cam.

On the edge is redzed.daisettalabs.net, the Zyxel USG-50, which I wrote about last month.

Connecting this kit up to the internet is my Motorola Surfboard router/modem/switch/AP, which I run in bridge mode. The great thing about this device and my cable service is that for some reason, up to two LAN ports can be active at any given time. This means that CableCo gives me two public DHCP addresses, simultaneously. One of these goes into a WAN port on the Zyxel, and the other goes into a downed switchport.

Love Meraki’s RF Spectrum chart!

Lastly, there’s my Meraki MR-16, an access point a friend and Ubiquiti Networks fan gave me. Though it’s a bit underpowered for my tastes, I love this device. The MR-16 is trunked to switch1 and connects via an 802.3af power injector. I announce two SSIDs off the Meraki, both secured with WPA2 Personal ((WPA2 Enterprise is on the agenda this year)). Depending on which SSID you connect to, you’ll end up on the Device or VM VLANs.

Virtual Network

The virtual network was built entirely in System Center VMM 2012 R2. Nothing too fancy here, with multiple Gigabit adapters per physical host, one converged logical vSwitch and a separate NIC on each host fronting for the DMZ network:

Nodes 1, 2 & 4 are all Haswell, and are clustered. Node3 is standalone.

Thanks to VMM, building this out is largely a breeze, once you’ve settled on an architecture. I like to run the cmdlets to build the virtual & logical networks myself, but there’s also a great script available that will build a converged network for you.
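The gist of running the cmdlets yourself looks something like this in the VMM shell, compressed down to one logical network with my camera VLAN as the example. The names and subnet are illustrative, and the real build layers uplink port profiles and the logical switch on top of this:

    # Define a logical network, pin VLAN 410 to it, and expose it as a VM network
    $ln = New-SCLogicalNetwork -Name "Lab-Trusted"
    $sv = New-SCSubnetVLan -Subnet "10.10.41.0/24" -VLanID 410
    New-SCLogicalNetworkDefinition -Name "Lab-Trusted_0" -LogicalNetwork $ln `
        -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $sv
    New-SCVMNetwork -Name "VM-Trusted" -LogicalNetwork $ln -IsolationType "NoIsolation"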

A physical host typically looks like this (I say typically because I don’t have an equal number of adapters in all hosts):

I trust VLANs and VMM’s segmentation abilities, but chose to build what is in effect an air-gapped vSwitch for the DMZ/DIA networks

We’re already several levels deep in my personal abstraction cave, why stop here? Here’s the layout of VM Networks, which are distinguished from but related to logical networks in VMM:

[diagram: VM Networks as defined in VMM]

I get a lot of questions on this blog about jumbo frames and Hyper-V switching, and I just want to reiterate that it’s not that hard to do, and look, here’s proof:

[screenshot: jumbo frames at work on the converged switch]

Good stuff!
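For anyone who wants to replicate it, the pieces that matter are the physical uplinks and the host vNICs hanging off the converged switch (guests get their own setting inside the VM). A minimal sketch; the adapter names and test address are examples, and 9014 is the usual Intel-flavored value for the *JumboPacket keyword:

    # Bump MTU on a physical uplink and on a host vNIC attached to the converged vSwitch
    Set-NetAdapterAdvancedProperty -Name "PhysNIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
    Set-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

    # Verify end to end with a do-not-fragment ping just under the MTU
    ping 10.10.40.12 -f -l 8500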

Storage

And last, but certainly most interestingly, we arrive at Daisetta Lab’s storage resources.

My lab journey began with storage testing, in particular ZFS via NexentaCore (Illumos), NAS4Free and Solaris 11. But that’s ancient history; since last summer, I’ve been all Windows, all the time in my lab, starting with SAN.Daisettalabs.net ((cf #StorageGlory : 30 Days on a Windows SAN)).

Now?

Well, I had so much fun -and importantly so few failures/pains- with Microsoft’s Tiered Storage Spaces that I’ve decided to deploy not one, or even two, but three Tiered Storage Spaces. Here’s the layout:

Server | #HDD | #SSD | StoragePool Capacity | StoragePool Free | #vDisks | Function
Core | 9 | 6 | 16.7TB | 12.7TB | 6 | SMB3/iSCSI target for the entire lab (so far)
Node1 | 2 | 2 | 2.05TB | 1.15TB | 2 | SMB3 target for Hyper-V replication
Node4 | 3 | 1 | 2.86TB | 1.97TB | 2 | SMB3 target for Hyper-V replication

I have to say, I continue to be very impressed with Tiered Storage Spaces. It’s super-flexible, the cmdlets are well-documented, and Microsoft is iterating on it rapidly. More on the performance of Tiered Storage Spaces in a subsequent post.
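If you’ve never built one, the whole tiered stack is about five cmdlets. A condensed sketch of roughly what one of these pools looks like; the names, sizes and resiliency setting are illustrative, not a dump of my actual configuration:

    # Pool every blank disk, define SSD & HDD tiers, then cut a tiered virtual disk from them
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    $ssd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "vDisk01" `
        -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB `
        -ResiliencySettingName Mirror -WriteCacheSize 5GB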

Thanks for reading!

Sign of the Times or just the best PKI book ever?

Like a lot of IT Pros, I’ve been studying up on security topics lately, both as a reaction to the increasing amount of breach news (Who got breached this week, Alex?) and because I felt weak in this area.

So, I went shopping for some books. My goals were simply to get a baseline understanding of crypto systems and best-practice guidance on setting up Microsoft Public Key Infrastructures, which I’ve done in the past but without much confidence in the end result.

Well, it turns out there’s not a whole lot of literature on Microsoft PKI systems. It seems the best of the genre is Windows Server 2008 PKI & Certificate Security, a Microsoft Press book published in 2008 and authored by Brian Komar:


This 3.2lb, 800 page book has a 4.9 out of 5 star rating on Amazon, with reviewers calling it the best Microsoft PKI guide out there.

Great! I thought, as I prepared to shell out about $80 and One Click my way to PKI knowledge.

That’s when I noticed that the book is out of print. There are digital versions available from O’Reilly, but it appears most don’t know that.

For the physical book itself, the least expensive used one on Amazon is $749.99. You read that right. $750!

If you want a new copy, there’s one available on Amazon, and it’s $1000.

I immediately jumped over to Camelcamelcamel.com to check the history of this book, thinking there must have been a run on Mr. Komar’s tome as Target, Home Depot, JP Morgan, and Sony Pictures fell.

Result:

[chart: Camelcamelcamel price history for the book]

 

The price of this book has spiked recently, but Peak PKI was a full three years ago.

I looked up security breaches/events of early 2012. Now correlation != causation, but it’s interesting nonetheless. Hopefully this means there’s a lot of solid Microsoft PKI systems being built out there!

Rather than shell out $750 for the physical book, I decided to get Ivan Ristic’s fantastic Bulletproof SSL and TLS, which I highly recommend. It’s got a chapter on securing Windows infrastructure, but is mostly focused on crypto theory & practical OpenSSL. I’ll buy Komar’s as a digital version next, or wait for his forthcoming 2012 R2 revision.

Big Data for Server Guys : Azure OpsInsight Review

Maybe it’s just my IT scars that bias me, but when I hear a vendor push a “monitoring” solution,  I visualize an IT guy sitting in front of his screen, passively watching his monitors & counters, essentially waiting for that green thing over there to turn red.

He’s waiting for failure, Godot-style.

That’s not a recipe for success in my view. I don’t wait upon failure to visit, I seek it out, kick its ass, and prevent it from ever establishing a beachhead in my infrastructure. The problem is that I, just like that IT Guy waiting around for failure, am human, and I’m prone to failure myself.

Enter machine learning or Big Data for Server Guys as I like to think of it.

Big Data for Server Guys is a bit like flow monitoring on your switch. The idea here is to actively flow all your server events into some sort of a collector, which crunches them, finds patterns, and surfaces the signal from the noise.

Big Data for Server Guys is all about letting the computer do what the computer’s good at doing: sifting data, finding patterns, and letting you do what you  are good at doing: empowering your organization for tech success.

But we Windows guys have a lot of noise to deal with: Windows instruments just about everything imaginable in the Microsoft kingdom, and the Microsoft kingdom is vast.

So how do we borrow flow-monitoring techniques from the Cisco jockeys and apply it to Windows?

Splunk is one option, and it’s great: it’s agnostic and will hoover events from Windows, logs from your Cisco’s syslog, and can sift through your Apache/IIS logs too. It’s got a thriving community and loads of sexy, AJAX-licious dashboards, and you can issue powerful searches and queries that can help you find problems before problems find you.

It’s also pretty costly, and I’d argue not the best-in-class solution for Hoovering Windows infrastructure.

Fortunately, Microsoft’s been busy in the last few years. Microsoft shops have had SCOM, and MOM before that, but now there’s a new kid in town ((He’s been working out and looks nothing like the old kid, System Center Advisor)): Azure Operational Insights, and OpsInsight functions a lot like a good flow collector.


And I just put the finishing touches on my second Big Data for Server Guys/OpsInsight deployment. Here’s a mini-review:

The Good:

  • It watches your events and finds useful data, which saves you time: OpsInsight is like a giant Hoover in the sky, sucking up on average about 36MB/day of Windows events from my fleet of nearly 150 VMs in a VMware infrastructure. Getting data on this fleet via Powershell is trivial, but building logic that gives insight into that data is not trivial. OpsInsight is wonderful in this regard; it saves you from spending time in SSRS or Excel, or from diving through the Event Viewer haystack in the MMC or via get-event, looking for a nugget of truth.
  • It has a decent config recommendation engine: If you’re an IT Generalist/Converged IT Guy like me, you touch every element in your Infrastructure stack, from the app on down to the storage array’s rotating rust. And that’s hard work because you can’t be an expert in everything. One great thing about OpsInsight is that it saves you from searching Bing/Google (at worst) or thumbing through your well-worn AD Cookbook (at best) and offers Best practice advice and KB articles in the same tab in your browser. Awesome!
  • Query your data rather than surfing the fail tree: Querying your data is infinitely better than walking the Fail Tree that is the Windows Event Viewer looking for errors (thanks, OpsInsight, for keeping me out of that thing). OpsInsight has a powerful query engine that’s not difficult to learn or manipulate, and for me, that’s a huge win over the old school method of Event Viewer Subscriptions.

  • Dashboards you can throw in front of an executive: I can’t overstate how great it is to have automagically configured dashboards via OpsInsight. As an IT Pro, the less time I spend in SSRS trying to build a pretty report the better. OpsInsight delivers decent dashboards I’m proud to show off. SCOM 2012 R2’s dashboards are great, but SCOM’s fat client works better than its IIS pages. Though it’s Silverlight-powered, OpsInsight wins the award for friction-free dashboarding.
  • Flexible Architecture: Do you like SCOM? Well then OpsInsight is a natural fit for you. I really appreciate how the System Center team re-structured OpsInsight late last year: you can deploy it at the tail end of your SCOM build, or you can forego SCOM altogether and attach agents directly to your servers. The latter offers you speed in deployment, the former allows you to essentially proxy events from your fleet, through your Management Group, and thence onto Azure. I chose the latter in both of my deployments. Let OpsInsight gate through SCOM, and let both do what they are good at doing.
  • It’s secure: The architecture for OpsInsight is Azure, so if you’re comfortable doing work in Azure Storage blobs, you should be comfortable with this. That + encrypted uploads of events, SCOM data and other data means less friction with the security/compliance guy on your team.

The Bad:

  • It’s silverlight, which makes me feel like I’m flowing my server events to Steve Ballmer: I’m sure this will be changed out at some point. I used to love Silverlight -and maybe there’s still room in my cold black heart for it- but it’s kind of an orphan media/web child at the moment.
  • There’s no app for iOS or Android…yet: I had to dig out my 2014 Lumia Icon just to try out the OpsInsight app for Windows phone. It’s decent, just what I’d like to see on my 2015 Droid Turbo. Alas there is no app for Android or IOS yet, but it’s the #1 and #2 most requested feature at the OpsInsight feedback page (add your vote, I did!)
  • It’s only Windows at the moment: I love what Microsoft is doing with Big Data crunching: Machine Learning, Stream Analytics and OpsInsight. But while you can point just about any flow or data at AzureML or Stream Analytics, OpsInsight only accepts Windows, IIS, SQL, SharePoint, and Exchange. Which is great, don’t get me wrong, but limited. SCOM at least can monitor SNMP traps, interface with Unix/Linux and such, but that is not available in OpsInsight. However, it’s still in Preview, so I’ll be patient.
  • It’s really only Windows/IIS/SQL/Exchange at the moment: Sadface for the lack of Office 365/Azure intelligence packs for OpsInsight, but SCOM will do for now.
  • Pricing forecast is definitely…cloudy: Every link I find takes me to the general Azure pricing page. On the plus side, you can strip this bad boy down to the bare essentials if you have cost pressures.

The Ugly:

  • Where are my cmdlets? My interface of choice with the world of IT these days is Powershell ISE. But when I typed get-help *opsinsight, only errors resulted. How’d this get past Snover’s desk? All kidding aside, SCOM cmdlets work well enough if you deploy OpsInsight following SCOM, and I’m sure it’s coming. I can wait.

All in all, this is shaping up to be a great service for your on-prem Windows infrastructure, which, let’s face it, is probably neglected.

System Center MVP Stanislav Zhelyazkov has a great 9-part deep dive on OpsInsight if you want to learn more.