Meet my new Storage Array

So the three of you who read this blog might be wondering why I haven’t been posting much lately.

“Where’s Jeff, the cloud praxis guy & Hyper-V fanboy, who says IT pros should practice their cloud skills?” you might have asked.

Well, I’ll tell you where I’ve been. One, I’ve been working my tail off at my new job where Cloud Praxis is Cloud Game Time, and two, the Child Partition, as adorable and fun as he is, is now 19 months old, and when he’s not gone down for a maintenance cycle in the crib, he’s running Parent Partition and Supervisor Module spouse ragged, consuming all CPU resources in the cluster. Wow that kid has some energy!

Yet despite that (or perhaps because of that), I found some time to re-think my storage strategy for the Daisetta Lab.

Recall that for months I’ve been running a ZFS array atop a simple NAS4Free instance, using the AMD-powered box as a multi-path iSCSI target for Cluster Shared Volumes. But continuing kernel-on-iscsi-target-service homicides, a desire to combine all my spare drives & resources into a new array, and a vacation-time cash-infusion following my exit from the last job led me to build this for only about $600 all-in:

Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. There’s some serious storage utopia up in the Daisetta Lab

Here are some superlatives and other interesting curios about this new box:

  • It was born on the 4th of July, just like ‘Merica, and is as big, loud, ostentatious and overbearing as ‘Merica itself
  • I would name it ‘Merica.daisettalabs.net if the OS would accept it
  • It’s a real server. With a real Supermicro X10SAT server/workstation board. No more hacking Intel .inf files to get server-quality drivers
  • It has a real server SAS card, an LSI 9218i something or other with SAS-SATA breakout cables
  • It doesn’t make me choose between file or block storage, and is object-storage curious. It can even do NFS or SMB 3…at the same time.
  • It does ex post facto dedupe (the old model) rather than the new hot model of inline dedupe and/or compression, which makes me resent it, but only a little bit
  • It’s combining three storage chipsets (the LSI card, the Supermicro’s Intel C226, and an ASMedia 1061) into one software-defined logical system. It’s abstracting all that hardware away using pools, similar to ZFS, but in a different, more sublime & elegant way.
  • It doesn’t have the ARC (ie RAM AS STORAGE), which makes me really resent it, but on the plus side, I’m only giving it 12GB of RAM and now have 16GB left for other uses.
  • It has 16 data disks: 12 rotational drives (6x1TB 5400 RPM & 6x2TB 7200 RPM) and four SSDs (3x256GB Samsung 840 EVO & 1x128GB Samsung 830), plus one boot drive (1x32GB SanDisk ReadyCache drive re-purposed as a general SSD)
  • Total capacity RAW: nearly 19TB. Usable? I’ll let you know. Asking
    “Do I need that much?” is like asking “Does ‘Merica need to stretch from Sea to Shining Sea?” No I don’t, but yes ‘Merica does. But I had these drives in stock, as it were, so why not?
  • It uses so much energy & power that it has, in just a few days, erased any greenhouse gas savings I’ve made driving a hybrid for one year. Sorry Mother Earth, looks like I’m in your debt again
  • But seriously, under load, it’s hitting about 310 watts. At idle, 150w. Not bad all things considered. Haswell + full C states & PCIe power management work.
  • It’s built as a veritable wind-tunnel because it lives in the garage. In Southern California. And it’s summer. Under load, the CPU is hitting about 65C and the southbridge flirts with 80C, but it’s stable.
  • It has six, yes, six, 1GbE Intel NICs. Two are on the motherboard, and I’m using a 4 port PCIe 2 card. And of course, I’ve enabled Jumbo Frames. I mean do you have to even ask at this point?
  • It uses virtual disks. Into which you can put other virtual disks. And even more virtual disks inside those virtual disks. It’s like Christopher Nolan designed this storage archetype while he wrote Inception…virtual disk within virtual disk within virtual disk. Sounds dangerous, but in the Daisetta Lab, Who Dares Wins!
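
Since I threw a capacity number around up there, here’s a quick back-of-the-envelope in Python that sums the drive list (decimal/marketing terabytes, before any pool, parity, or formatting overhead eats into it):

```python
# Sanity check on the "nearly 19TB raw" claim, summing the drives
# listed above (decimal TB, the way drive vendors count).
drives_tb = (
    [1.0] * 6      # 6x 1TB 5400 RPM rotational
    + [2.0] * 6    # 6x 2TB 7200 RPM rotational
    + [0.256] * 3  # 3x 256GB Samsung 840 EVO
    + [0.128]      # 1x 128GB Samsung 830
    + [0.032]      # 1x 32GB SanDisk ReadyCache boot drive
)
raw_tb = sum(drives_tb)
print(f"{len(drives_tb)} drives, {raw_tb:.3f} TB raw")  # 17 drives, 18.928 TB raw
```

So “nearly 19TB” checks out; usable is another story entirely once mirrors and parity get their cut.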

So yeah. That’s what I’ve been up to. Geeking out a little bit like a gamer, but simultaneously taking the next step in my understanding, mastery & skilled manipulation of a critical next-gen storage technology I’ll be using at work soon.

Can you guess what that is?

Stay tuned. Full reveal & some benchmarks/thoughts tomorrow.

All the WANs are a stage

All the WANs are a Stage,

and all the packets and flows are players. 

They have their ingress and egress

from a vm here, through an F5 there, out the traffic shaper and then to the next hop

The Great Unknown, the Slash 8

Truly one packet in its time plays many routes

alas,  aggregate, balance or seek diverse routes

the packets do not

Into oblivion go the flows

when the WAN LED no longer glows

Let’s take a step together into a place unfamiliar and dark. A place that is, by all rights, strange and bewildering. A little place I like to think of as just one order of magnitude less rational than the Twilight Zone…a place few understand, and even fewer have mastered. A place just beyond my gateway, a place I really don’t care about except when I do, a place I like to call, the Wide Area Network.

That’s right. Let’s talk about the next hop. The land of BGP and OSPF and NAT and VPNs and QoS and CoS and DSCP and the “Goddamn ASA” and static routes and the “Goddamn firewall”: all these words, phrases and acronyms you heard once, but dismissed as just so much babble out of the networking guy’s mouth, the one guy on your team who seems to age faster than all the others.

Hell, if it were up to you, Mr. Storage Networking Engineer, you’d do some LACP trunks or hook MPIO up to that WAN and call it a day, amiright? I mean what’s so complicated here? Of course links go down, that’s why teams (and virtual teams-of-teams!) are so cool!

But alas, all the world’s not a storage array, and all links to it are not teamed GigE interfaces with sub-millisecond latency.

And your business WAN, particularly the links to/from the remote sites that comprise the RFC 1918’d, encapsulated, virtual private wide-area network your typical mid-sized business with a large footprint depends on, fails far too often.

Or at least they have for me when I look back and survey the glories & wreckage of my 15 year IT career.

Verily I say unto you, the WAN is my White Whale, and I am an IT Ahab.

Here are some of the tools & techniques networking firms, engineers, architects and people way smarter than I have come up with to deal with the multiple pains of the WAN, followed by my snarky, yet honest, hurt, yet hopeful, lust-filled yet realistic view of them:

  • Multiprotocol Label Switching (MPLS): The go-to solution for WAN pain, particularly for businesses that can’t/won’t employ a networking wonk equal to Mr. Ivan Pepelnjak. MPLS is a god-send for some firms, but it’s very costly. To really get value out of an MPLS strategy, you almost have to couple it with a session virtualization or in-datacenter-computing model (XenApp, RDS, VDI etc). Why? While MPLS makes the WAN as reliable and as accessible as your LAN, it doesn’t defeat latency. And latency is a hard thing to explain. Go on. Try it. On your spouse or significant other.
  • MPLS part two: And just so that I can get it off my chest…when the primary link at a branch site does go down, why do MPLS providers have such a hard time failing over to a secondary? I mean, for real, guys? Just keep the secondary WAN/VPN link up, or do something fancy with VRRP or VARP or something. Without a failover link, a downed MPLS circuit is worth less than a regular commodity internet circuit.
  • MPLS part three: In previous roles, I worried that maintenance of the MPLS became an end unto itself. I can see how this would happen, and I’ve been guilty of it myself; sometimes IT guys think in IP addresses when they should have an eye to the future and think in FQDNs, as the former are, and shall forever be, not routable, while the latter are the future. Underlining this point is the argument (well-supported in 2014, I think) that MPLS is, at best, a transitional technology. Build your business on it if you have to, but don’t tie anything to it, in other words. Sure it’s cloud-compatible, but so is dial up.
  • Inline Compression/dedupe: As a storage networking nerd, I Heart me some Riverbed and SilverPeak. But those are tools on the WAN that, in my experience, are just one CapEx ask too much. I’ve never actually used one of them. Love the idea, can never justify the cost. Open source alternatives? There’s really none (Except for this brave guy), speaking, perhaps, to how sophisticated and well-engineered these devices are, which justifies their cost but also makes them unobtainable for SMB shops.
  • Pertino and the like: I’ve been a fan of Pertino since I first started using this “Cloud VPN” product, which I likened more to a Layer 2 switch in the sky than a traditional VPN service. It’s some great tech; not clear that it can scale to 100s and 100s of users though. But very promising nonetheless, especially for really small but geographically-diverse environments.

    It’s just like Least Queue depth, you see, only ON YOUR WAN

  • Link aggregation + VPN all in one device: If you’re going to go hub & spoke because MPLS costs too much, or you can’t quite do full-cloud yet, this is a promising strategy, and one I’ll soon be testing out. I know I’m not alone in the WAN-is-my-white-whale meme because companies like Peplink, Talari Networks, and even Cisco are still building products that address WAN problems. I have used Peplink before; was impressed, would use again, want one in my home with a second internet line, A+++++. The only thing that scuttled wider adoption in my last role was voice, a particularly difficult problem to sort out when you slap some good ol’ LACP-style magic onto your WAN ills. These devices, ranging from a few hundred bucks to several thousand, are almost too good to be true, as they tell the IT Pro that yes, he can have his cheap but rapidly deployable commodity internet circuits aggregate into one high-speed, fault-tolerant link, and yes, that “unbreakable VPN” (as Peplink dubs it) can connect back to the HQ. Doesn’t defeat latency, true, but it sure makes the ASA look old-hat, doesn’t it?
  • Cloud: The default winner, of course. But OpEx is hard to quantify. Sure, I guess I could up and move my datacenter assets to a CDN and let the network take care of the rest, or I could stand up a VM in a datacenter close to my users. But replication to on-prem assets/sources can be difficult, and, in some ways, in a really wide WAN, don’t we start worrying about version control, that what the New York branch is looking at is the same as the Seattle branch? Even so, I’m down with it, just need to fully comprehend it first.
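
About that recurring “doesn’t defeat latency” refrain: the ceiling on a single TCP flow is roughly window size divided by round-trip time. A toy Python calculation makes the point; the RTT figures below are illustrative, not measurements from any particular circuit, and I’m assuming the classic 64KB window with no window scaling:

```python
# Rough single-stream TCP throughput ceiling: window / RTT.
# Link labels and RTT values are hypothetical, for illustration only.
WINDOW_BYTES = 64 * 1024  # classic 64KB receive window, no scaling

def max_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on one TCP flow's throughput, in megabits/sec."""
    rtt_s = rtt_ms / 1000.0
    return (WINDOW_BYTES * 8) / rtt_s / 1_000_000

for label, rtt in [("LAN", 0.5), ("metro WAN", 10), ("cross-country MPLS", 80)]:
    print(f"{label:>20}: {rtt:>5} ms RTT -> {max_throughput_mbps(rtt):8.1f} Mbps ceiling")
```

Which is why your 100meg MPLS circuit can feel like dial up to a chatty application: no amount of provider SLA buys back the speed of light.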

What’s worked for you?

In defense of pizza boxes

Lately on the Twitters there has been much praise among my friends and colleagues for what I like to think of as datacenters on dollies: Cisco’s UCS, FlexPod, Dell’s vStart etc…You know what these are, as I’m sure you’ve come across them: pre-configured, pre-engineered datacenters you can roll out to the datacenter floor, align carefully and then (put your back into it, lads!) drop onto the elevated tiles. Then you grab that bulky L14-30P and jack the stack into your 220v 30 amp circuit that has A/B power and bam! #InfrastructureGlory achieved.

Support’s not a concern because the storage vendor, the compute vendor, and the network vendor are simpatico under the terms of an MOU…you see, the vendors engineered it out so you don’t have to download and memorize the mezzanine architecture PDF. All you have to do now is turn it on and build some VMs in vSphere or VMM or what-have-you.

Where’s the fun in that?

Don’t get me wrong, I think UCS is awesome. I kind of want an old one in my lab.

But in my career, it’s always been pizza boxes. Standard 2U, 30″ deep enclosures housing drives & fans up front, two or four CPU sockets in the middle surrounded by gobs of RAM, and NICs…lots and lots of NICs guarding the rear.

mmmmm….pizza

And I wonder why that is. Maybe it’s just the market & space I tend to find employment in, but it seems to me that most IT organizations aren’t purchasing infrastructure in a strategic way…they don’t sit down at a table and say, “Right. Let’s buy some compute, storage, and network, let’s make it last five years, and then, this time five years from now, we’ll buy another stack. Hop to it, lads!”

A good IT strategic planner would do that, but that’s not the reality in many organizations.

So I’ve come to love pizza boxes because they are almost infinitely configurable. Like so:

  • Say you buy five pizza boxes in year 1 but in year 2, a branch office opens and it’s suddenly very critical to get some local infrastructure on-prem. Simple: strip a node out of your handsome 10U compute cluster and drop-ship it to the branch office. Even better: you contemplated this branch when you bought the pizza boxes and pre-built a few of them with offlined but sufficiently large direct attached storage.
  • You buy a single pizza box with four sockets but only two are populated in year 1. By year three, headcount is surging and demand on the server -for whatever reason- is extraordinary. What do you do hotshot, what do you do? Easy: source some second-hand Xeons & heatsinks, drop them into the server and watch your cpu queue lengths fall (not quite in half, but still). But check your SQL licensing arrangements first and be prepared to consolidate and reduce your per-socket VMs!
  • Or maybe you need to reduce your footprint in the datacenter. If you bought pizza boxes in a strategic way, you just dump the CPUs and memory out of node 1 into node 2, node 3 into node 4 and so on. You won’t achieve the same level of VM density but maybe you don’t need to.
  • Or maybe you don’t want or need 10GbE this year; that would require new switching. But in year 2? Break a node out and drop in some PCIe SFP+ cards and Bob’s your uncle.
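
On that second scenario, the queueing math backs up the move, with a caveat: in an idealized M/M/c model (the offered load below is a made-up illustration, not a real workload), doubling the populated sockets at a fixed load collapses the run queue far more dramatically than in real life, where memory bandwidth, I/O, and licensing keep the gains more modest:

```python
from math import factorial

def erlang_c(servers: int, offered_load: float) -> float:
    """Erlang C: probability an arriving job has to wait in queue."""
    rho = offered_load / servers
    assert rho < 1.0, "offered load must be less than server count"
    top = offered_load ** servers / factorial(servers)
    series = sum(offered_load ** k / factorial(k) for k in range(servers))
    return top / ((1.0 - rho) * series + top)

def mean_queue_length(servers: int, offered_load: float) -> float:
    """Average number of jobs waiting (not running) in an M/M/c queue."""
    rho = offered_load / servers
    return erlang_c(servers, offered_load) * rho / (1.0 - rho)

# Hypothetical box with a steady 1.6 CPUs' worth of demand:
for sockets in (2, 4):
    print(f"{sockets} sockets populated -> mean queue {mean_queue_length(sockets, 1.6):.2f}")
```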

I guess the thing about Pizza boxes I like the most is that they are, in reality, just big, standardized PCs. They are whatever architecture you decide you want them to be in whatever circumstances you find yourself in.

A FlexPod or vStart, in contrast, feels more constricting, even if you can break an element or two out and use it in another way. I know I’d be hesitant to break apart the UCS fabric.

You’d think a FlexPod would be perfect for small to medium enterprises, and in many cases, it is. Just not in the ones I’ve worked at, where costs are tight, strategic planning rare, and the business’ need for agility outstrips my need for convenience.

Also, isn’t it interesting that when you compute at “Google-scale” (love that term, is it still en-vogue with VARs?) or if you’re Facebook, you pick a simple & flexible architecture (in-house x86/64 pizza boxes) with very little or no shared storage at all. You pick the seemingly more primitive architecture over the highly-evolved pod architecture.

30 Days hands-on with VMTurbo’s OpsMan #VFD3

Stick figure man wants his application to run faster. #WhiteboardGlory courtesy of VM Turbo’s Yuri Rabover

So you may recall that back in March, yours truly, Parent Partition, was invited as a delegate to a Tech Field Day event, specifically Virtualization Field Day #3, put on by the excellent team at Gestalt IT especially for the guys who like V.

And you may recall further that as I diligently blogged the news and views to you, that by day 3, I was getting tired and grumpy. Wear leveling algorithms intended to prevent failure could no longer cope with all this random tech field day IO, hot spots were beginning to show in the parent partition and the resource exhaustion section of the Windows event viewer, well, she was blinking red.

And so, into this pity-party I was throwing for myself walked a Russian named Yuri, a Dr. named Schmuel and a product called a “VMTurbo” as well as a Macbook that like all Mac products, wouldn’t play nice with the projector.

You can and should read all about what happened next because 1) VMTurbo is an interesting product and I worked hard on the piece, and 2) it’s one of the most popular posts on my little blog.

Now the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation wasn’t just that it played into my fevered fantasies of being a virtualization economics czar (though it did), or that it promised to bridge the divide via reporting between Infrastructure guys like me and the CFO & corner office finance people (though it can), or that it had lots of cool graphs, sliders, knobs and other GUI candy (though it does).

No, the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation was that they said it would work with that other great Type 1 Hypervisor, a Type-1 Hypervisor I’m rather fond of: Microsoft’s Hyper-V.

I didn’t even make screenshots for this review, so suffer through the annotated .pngs from VMTurbo’s website and imagine it’s my stack

And so in the last four or five weeks of my employment with Previous Employer (PE), I had the opportunity to test these claims, not in a lab environment, but against the stack I had built, cared for, upgraded, and worried about for four years.

That’s right baby. I put VMTurbo’s economics engine up against my six node Hyper-V cluster in PE’s primary datacenter, a rationalized but aging cluster with two iSCSI storage arrays, a 6509E, and 70+ virtual machines.

Who’s the better engineer? Me, or the Boston appliance designed by a Russian named Yuri and a Dr. named Schmuel? 

Here’s what I found.

The Good

  • Thinking economically isn’t just part of the pitch: VMTurbo’s sales reps, sales engineers and product managers, several of whom I spoke with during the implementation, really believe this stuff. Just about everyone I worked with stood up to my barrage of excited-but-serious questioning and could speak literately to VMTurbo’s producer/consumer model, this resource-buys-from-that-resource idea, the virtualized datacenter as a market analogy. The company even sends out Adam Smith-themed emails (Famous economist…wrote the Wealth of Nations if you’re not aware). If your infrastructure and budget are similar to what mine were at PE, if you stress over managing virtualization infrastructure, if you fold paper again and again like I did, VMTurbo gets you.
  • Installation of the appliance was easy: Download a zipped .vhd (not .vhdx), either deploy it via VMM template or put the VHD into a CSV and import it, connect it to your VM network, and start it up. The appliance was hassle-free as a VM; it’s running Suse Linux, and quite a bit of java code from what I could tell, but for you, it’s packaged up into a nice http:// site, and all you have to do is pop in the 30 day license XML key.
  • It was insightful, peering into the stack from top to nearly the bottom and delivering solid APM: After I got the product working, I immediately made the VMTurbo guys help me designate a total of about 10 virtual machines, two executables, the SQL instances supporting those .exes and more resources as Mission Critical. The applications & the terminal services VMs they run on are pounded 24 hours a day, six days a week by 200-300 users. Telling VMTurbo to adjust its recommendations in light of this application infrastructure wasn’t simple, but it wasn’t very difficult either. That I finally got something to view the stack in this way put a bounce in my step and a feather in my cap in the closing days of my time with PE. With VMTurbo, my former colleagues on the help desk could answer “Why is it slow?!?!” and I think that’s great.
  • Like mom, it points out flaws, records your mistakes and even puts a $$ on them, which was embarrassing yet illuminating: I was measured by this appliance and found wanting. VMTurbo, after watching the stack for a good two weeks, surprisingly told me I had overprovisioned -by two- virtual CPUs on a secondary SQL server. It recommended I turn off that SQL box (yes, yes, we in Hyper-V land can’t hot-unplug vCPU yet, Save it VMware fans!) and subtract two virtual CPUs. It even (and I didn’t have time to figure out how it calculated this) said my over-provisioning cost about $1200. Yikes.
  • It’s agent-less: And the Windows guys reading this just breathed a sigh of relief. But hold your golf clap…there’s color around this from a Hyper-V perspective I’ll get into below. For now, know this: VMTurbo knocked my socks off with its superb grasp & use of WMI. I love Windows Management Instrumentation, but VMTurbo takes WMI to a level I hadn’t thought of, querying the stack frequently, aggregating and massaging the results, and spitting out its models. This thing takes WMI and does real math against the results, math and pivots even an Excel jockey could appreciate. One of the VMTurbo product managers I worked with told me that they’d like to use PowerShell, but PowerShell queries were still too slow, whereas WMI could be queried rapidly.
  • It produces great reports I could never quite build in SCOM: By the end of day two, I had PDFs on CPU, Storage & network bandwidth consumption, top consumers, projections, and a good sense of current state vs desired state. Of course you can automate report creation and deliver via email etc. In the old days it was hard to get simple reports on CSV space free/space used; VMTurbo needed no special configuration to see how much space was left in a CSV.
    vFeng Shui for your virtual datacenter

  • Integrates with AD: Expected. No surprises.

  • It’s low impact: I gave the VM 3 CPU and 16GB of RAM. The .vhd was about 30 gigabytes. Unlike SCOM, no worries here about the Observer Effect (always loved it when SCOM & its disk-intensive SQL back-end would report high load on a LUN that, you guessed it, was attached to the SCOM VM).
  • A Eureka! style moment: A software developer I showed the product to immediately got the concept. Viewing infrastructure as a supply chain, the heat map showing current state and desired state, these were things immediately familiar to him, and as he builds software products for PE, I considered that good insight. VMTurbo may not be your traditional operations manager, but it can assist you in translating your infrastructure into terms & concepts the business understands intuitively.
  • I was comfortable with its recommendations: During #VFD3, there was some animated discussion around flipping the VMTurbo switch from a “Hey! Virtualization engineer, you should do this,” to a “VMTurbo Optimize Automagically!” mode. But after watching it for a few weeks, after putting the APM together, I watched its recommendations closely. Didn’t flip the switch but it’s there. And that’s cool.
  • You can set it against your employer’s month end schedule: Didn’t catch a lot of how to do this, but you can give VMTurbo context. If it’s the end of the month, maybe you’ll see increased utilization of your finance systems. You can model peaks and troughs in the business cycle and (I think) it will adjust recommendations accordingly ahead of time.
  • Cost: Getting sensitive here but I will say this: it wasn’t outrageous. It hit the budget we had. Cost is by socket. It was a doable figure. Purchase is up to my PE, but I think VMTurbo worked well for PE’s particular infrastructure and circumstances.
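
If the producer/consumer pitch sounds abstract, here’s roughly the shape of it in a toy Python sketch. To be clear: the price curve and the host names are my invention for illustration, not VMTurbo’s actual (unpublished) algorithm; the idea is just that a resource’s price climbs steeply as it nears saturation, so workloads shop for idler sellers:

```python
# Toy market model in the VMTurbo spirit: each host "sells" CPU,
# priced so that cost explodes as utilization approaches 100%.
# A VM "buys" from the cheapest seller. Illustrative only.
def price(utilization: float, base_cost: float = 1.0) -> float:
    """Queueing-flavored price curve: cheap when idle, ruinous near saturation."""
    if utilization >= 1.0:
        return float("inf")
    return base_cost / (1.0 - utilization) ** 2

hosts = {"hv-node1": 0.85, "hv-node2": 0.40, "hv-node3": 0.60}  # hypothetical names
cheapest = min(hosts, key=lambda h: price(hosts[h]))
for h, u in sorted(hosts.items()):
    print(f"{h}: {u:.0%} busy -> CPU price {price(u):6.2f}")
print("placement recommendation:", cheapest)  # hv-node2
```

Run the market continuously across every resource in the stack and you have, in caricature, the appliance’s worldview.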

The Bad:

  • No sugar coating it here, this thing’s built for VMware: All vendors please take note. If VMware, nomenclature is “vCPU, vMem, vNIC, Datastore, vMotion” If Hyper-V, nomenclature is “VM CPU, VM Mem, VMNic, Cluster Shared Volume (or CSV), Live Migration.” Should be simple enough to change or give us 29%ers a toggle. Still works, but annoying to see Datastore everywhere.
  • Interface is all Flash: It’s like Adobe barfed all over the user interface. Mostly hassle-free, but occasionally a change you expected to register on screen took a manual refresh to become visible. Minor complaint.
  • Doesn’t speak SMB 3.0 yet: A conversation with one product engineer more or less took the route it usually takes. “SMB 3? You mean CIFS?” Sigh. But not enough to scuttle the product for Hyper-V shops…yet. If they still don’t know what SMB 3 is in two years…well I do declare I’d be highly offended. For now, if they want to take Hyper-V seriously as their website says they do, VMTurbo should focus some dev efforts on SMB 3 as it’s a transformative file storage tech, a few steps beyond what NFS can do. EMC called it the future of storage!
  • Didn’t talk to my storage: There is visibility down to the platter from an APM perspective, but this wasn’t in scope for the trial we engaged in. Our filer had direct support; our Nimble, as a newer storage platform, did not. So IOPS weren’t part of the APM calculations, though free/used space was.

The Ugly:

  • Trusted Install & taking ownership of reg keys is required: So remember how I said VMTurbo was agent-less, using WMI in an ingenious way to gather its data from VMs and hosts alike? Well, yeah, about that. For Hyper-V and Windows shops who are at all current (2012 or R2, as well as 2008 R2), this means provisioning a service account with sufficient permissions, taking ownership of two reg keys away from Trusted Installer (a very important ‘user’) in HKLM\CLSID and one further down in WOW64, and assigning full control permissions to the service account on the reg keys. This was painful for me, no doubt, and I hesitated for a good week. In the end, Trusted Installer still keeps full control, so it’s a benign change, and I think the payoff is worth it. A senior VMTurbo product engineer told me VMTurbo is working with Microsoft to query WMI without making the customer modify the registry, but as of now, this is required. And the Group Policy I built to do this for me didn’t work entirely. On 2008 R2 VMs, you only have to modify the one CLSID key.

Soup to nuts, I left PE pretty impressed with VMTurbo. I’m not joking when I say it probably could optimize my former virtualized environment better than I could. And it can do it around the clock, unlike me, even when I’m jacked up on 5 Hour Energy or a triple-shot espresso with house music on in the background.

Stepping back, thinking of the concept here and divesting myself from the pain of install in a Hyper-V context: products like this are the future of IT. VMTurbo is awesome and unique in an on-prem context as it bridges the gap between cost & operations, but it’s also kind of a window into our future as IT pros.

That’s because if your employer is cloud-focused at all, the infrastructure-as-market-economy model is going to be in your future, like it or not. Cloud compute/storage/network, to a large extent, is all about supply, demand, consumption, production and bursting of resources against your OpEx budget.

What’s neat about VMTurbo is not just that it’s going to help you get the most out of the CapEx you spent on your gear, but also that it helps you shift your thinking a bit, away from up/down, latency, and login times to a rationalized economic model you’ll need in the years ahead.

Hyper-V at 29% of Hypervisors Shipped and Second Place Never Felt So Good

Click!!

I couldn’t help but cheer and raise a few virtual fist bumps to the Microsoft Server 2012 and 2012 R2 team as I read the latest report out of some industry group or other. Hyper-V 3.0, you see, is cracking along with just a tick under 1/3rd of the hypervisor market.

Meanwhile, VMware -founder of the genre, much respect for the Pater v-Familias- is running about 2/3rds of virtualized datacenters.

And that’s just fine with me. 

Hyper-V is still in a distant second place. But second place never felt so good as it does right now. And we got some vMomentum on our side, even if we don’t have feature parity, as I’ve acknowledged before.

Hyper-V is up in your datacenter and it deserves some V.R.E.S.P.E.C.T.

Testify IDC, testify:

A growing number of shops like UMC Health System are moving more business-critical workloads to Hyper-V. In 2013, VMware accounted for 53 percent of hypervisors deployed last year, according to data released in April by IT market researcher IDC. While VMware still shipped a majority, Hyper-V accounted for 29 percent of hypervisors shipped.

The Redmond Magazine report doesn’t get into it beyond some lame analyst comments, but let me break it down for you from a practitioner point of view.

Why is Hyper-V growing in marketshare, stealing some of the vMomentum from the sharp guys at VMware?

Four reasons from a guy who’s worked it:

  • The Networking Stack: It’s not that Windows Server 2012 & 2012 R2 and, as a result, Hyper-V 3.0, have a better network stack than VMware does. It’s that the Windows Server team rebuilt the entire stack between 2008 R2 & Server 2012. And it’s OMG SO MUCH BETTER than the last version. Native support for teaming. Extensible VM switching. Superb layer 3 and layer 2 cmdlets. You can even do BGP routing with it. It’s built to work, with minimal hassle, and it’s solid on a large number of NICs. I say that as someone who ran 2008 R2 Hyper-V clusters then upgraded the cluster to 2012 in the space of about two weekends. Trust me, if you played around with Windows Server 2008 R2 and Hyper-V and broke down in hysterics, it’s time for another look.
  • SMB 3.0 & Storage Spaces/SOFS…don’t call it CIFS and also, it’s our NFS: There’s a reason beyond the obvious why guys like Aidan Finn, the Hyper-Dutchman and DidierV are constantly praising Server Message Block Three dot Zero. It kicks ass. Out of the box, multi-channel is enabled on SMB 3.0, meaning that anytime you create a \\Hyper-V-Kicks-Ass\ file share on a server with at least two distinct IP addresses, you’re going to get two distinct channels to your share. And that scales. On Storage Spaces and its HA (and fault tolerant?) big brother Scale-Out File Server: what Microsoft gave us was a method by which we could abstract our rotational & SSD disks and tier them. It’s a storage virtualization system that’s quite nifty. It’s not quite VSAN, except that both Storage Spaces/SOFS & VSAN seem to share common cause: killing your SAN.

    "Turn me on!" Hyper-V says to the curious

    “Turn me on!” Hyper-V says to the curious

  • Only half the Licensing headaches of VMware: I Do Not Sign the Checks, but there’s something to be said for the fact that the features I mention above are not SKUs. They are part & parcel of Server 2012 R2 Standard. You can implement them without paying more, without getting sign-off from Accounts Payable or going back to the well for more spend. Hyper-V just asks that you spend some time on TechNet but doesn’t ask for more $$$ as you build a converged virtual switch.
  • It’s approachable: This has always been one of Microsoft’s strengths and now, with Hyper-V 3.0, it’s really true. My own dad (radio engineer, computer hobbyist, the original TRS-80 fan) is testing versions of radio control system software within Windows 7 32 bit & 64 bit VMs right from his Windows 8.1 Professional desktop. On the IT side: if you’re a generalist with a Windows server background, some desire to learn & challenge yourself, and, most importantly, you want to Win #InfrastructureGlory, Hyper-V is a tier-one hypervisor that’s approachable & forgiving if you’re just starting out in IT.

It’s also pretty damn agnostic. You can now run *BSD on it, several flavors of Linux and more. And we know it scales: Hyper-V, or some variant of it, powers the Xbox One (A Hypervisor in Every Living Room achieved), it can power your datacenter, and it’s what’s underneath Azure.
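If you want to see the SMB 3.0 multichannel behavior I praised above with your own eyes, a few stock cmdlets will do it. A minimal sketch: the share name, path and security group here are my own hypothetical examples, but the cmdlets themselves ship with Server 2012 R2 and Windows 8.1.

```powershell
# Multichannel is enabled out of the box; confirm it on the file server
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# Which NICs SMB will consider for extra channels (link speed, RSS, RDMA)
Get-SmbClientNetworkInterface

# Stand up a share; with two IPs on each side, SMB negotiates the extra channels on its own
New-SmbShare -Name 'Hyper-V-Kicks-Ass' -Path 'D:\Shares\VMs' -FullAccess 'DAISETTA\Hyper-V-Admins'

# From a client with an open session, watch the channels it built
Get-SmbMultichannelConnection
```

No LACP, no MPIO-style configuration ceremony: the protocol discovers the paths itself, which is half the reason the SMB 3.0 crowd is so smug about it.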

Turning the page

Today (Thursday) I voluntarily concluded my employment with a well-known Southern California company where I’ve worked as Sr. Systems Engineer for the last four years. On Monday, I open a new page in my IT career with another firm, and I’m very excited to start.

But tonight, I’m in a mood to reminisce and reflect.

I know it’s cliche, but truly, when I consider where I was four years ago this night compared with where I am professionally & personally tonight, this was the job opportunity of a lifetime. It literally lifted me out of the IT ghetto and put me on a track on which I could, if I executed properly, end up in the IT Hall of Fame, clutching my #InfrastructureGlory trophy as if it were the Stanley Cup.

And I capitalized on it in just about every way I knew how, both for myself, and for the infrastructure I fretted over constantly.

Parting is always bittersweet, but I’m resting tonight knowing that I -thanks to some IT strategery from the IT management guys who hired me- have left my former employer a higher-performing, more durable, and more cost-effective infrastructure stack than I had when I started.

Some superlatives & memories from my time with this company for the enjoyment of other engineers like me:

  • Proudest Engineering feat: Planning, wargaming and executing -in concert with my former boss- an overnight virtual datacenter relocation involving two Dell R810s running Windows Server 2008 R2 & Hyper-V in Denver and four 2008 R2 nodes in Los Angeles over a 100meg Layer 2 VPWS circuit, with two NetApp DoT 7.3.x filers at each end doing SnapMirrors of CSVs & RDMs by the hour, then the half-hour, then by the minute during Go-Live week. Sixty+ VMs, countless direct-mapped iSCSI LUNs, 8 vFilers & the entire /24 subnet moved in the space of about four hours in spring 2012, with minimal consultant help, in a plan I nicknamed the “Double Trident” (don’t ask). And yeah. This was in Hyper-V 2.0 days, when there was nothing awesome about Hyper-V switching.
  • Most humbling defeat: Missing a key “but….” in a Technet article about Exchange 2010 to 2013 migration. And no, it didn’t involve the basics. And yes, I’m sorry I didn’t spot the queues filling up sooner.
  • If I could make a bumper sticker from my time here: “Virtualization Engineers Find ‘em Physical and Leave ‘em Virtual,” or “Give me spindles or give me death,” or “Oh me, oh my NUMA Nodes” or, of course, “I Heart LACP”
  • Funnest project: Storage refresh & bakeoff. Picked the best array under the circumstances and achieved #StorageGlory. No regrets, and I like that Nimble is as hungry for glory & success as I am.
  • The Work/Blog effect: After storage bakeoff post, got noticed by the GestaltIT crew and invited to Virtualization Field Day #3. Sat among some incredibly sharp VMware-certified & OpenStack-familiar engineers and architects in the heart of Silicon Valley where we, in the best traditions of agnostic computing, challenged vendors on the products they try to sell guys like you and me (well, mostly guys like you if you’re VMware). And yes, we made fun of each other’s stacks. #PurpleScreenofDeath
  • Racked Gear I’ll miss the most: My old, power-hungry 6509E and its twin WS-6748-GETX blades, onto which I mapped out Hyper-V 3.0’s awesome converged switching architecture. Sure, it may not be a distributed vSwitch, but I made it purr like a kitten, and I extended iSCSI to the limit. Also, Wargaming Live Storage Migration is one of my most popular posts, so I suppose it’s a somewhat famous 6509E.
  • The 3am call that woke me up the most: Session virtualization (RDS/XenApp)
  • Dipped into dev on: .NET, Visual Studio & ClickOnce architecture. Also SOAP & REST, which aren’t so dev anymore and are actually quite critical for operations guys
  • Engineering focus: Value.
  • Started With/Ended With Pairs: ESXi 4.5/Hyper-V 3.0, Motorola Droid/Lumia Icon, TDM & Analog Circuits/Cloud-hosted VoIP, 100Mbit Cisco/Gigabit Dell
  • Worst mobile phone I used for work: Toss-up. Windows Phone 7 (HTC Trophy) or Palm Pixi. But they had ActiveSync so there you are.
  • Most Favoritiest Visualization I created: A 24-hour clock arrayed against Netflow egress data on my 6509E, filtered by iSCSI & Live Migration VLANs, with flags representing the regions as they put load on the infrastructure. Average Gb/s & GB/hr calculated with Excel pivot tables via a spider chart tool & 30 days of data, averaged out hour-by-hour. Netflow v7 & ManageEngine. Wish I hadn’t left the image on the work laptop.

Those are some of my fondest memories from this employer, but of course, above & beyond the technology, the hardware, the underlay and the storage are the people. I’m leaving friends, colleagues and fellow veterans behind, and it’s hard… I can’t believe how thoughtful they were at my going away lunch. The photo at top is of my nameplate + one they made for me. Hashtag Sickburn was something I ripped from The Vergecast and used liberally in our wild technology debates.

Most of all I’m thankful for this awesome time in my professional life and I wish my friends, colleagues and former colleagues the best.

On Monday I start a new chapter. I’m not sure where that leaves this blog, but I at least want to finish up my Cloud Praxis series, post a hands-on review of VMTurbo, and more, so look for that over the days ahead.

Cloud Praxis lifehax: Disrupting the consumer cloud with my #Office365 E1 sub

E1, just like its big brothers E3 & E4, gives you real Microsoft Exchange 2013, just like the one at work. There’s all sorts of great things you can do with your own Exchange instance:

  • Practice your Powershell remoting skills
  • Get familiar with how Office 365 measures and applies storage settings among the different products
  • Run some decent reporting against device, browser and fat client usage

But the greatest of these is Exchange public-facing, closed-membership distribution groups.

Whazzat, you ask?

Well, it’s a distribution group. With you in it. And it’s public facing. Meaning you can create your own SMTP addresses that others can send to. And then you can create Exchange-based rules that drop those emails into a folder, delete them after a certain time, run scripts against them, all sorts of cool stuff before anything hits your device or Outlook.
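For the curious, here’s roughly what that looks like from the Exchange Online side, via the remote PowerShell session your E1 sub gives you. Hedged sketch: the admin account, addresses, group name and folder are my own examples, and the connection URI is the 2014-era endpoint.

```powershell
# Remote into your E1 tenant's Exchange instance (2014-era endpoint)
$cred = Get-Credential 'admin@daisettalabs.net'
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# A public-facing, closed-membership distribution group: outsiders can send to it,
# but only the members you add actually receive the mail
New-DistributionGroup -Name 'General' -PrimarySmtpAddress 'general@daisettalabs.net' -Members 'jeff@daisettalabs.net'
Set-DistributionGroup -Identity 'General' -RequireSenderAuthenticationEnabled $false

# Server-side rule: file anything sent to that address into its own folder
New-InboxRule -Name 'File general@' -SentTo 'general@daisettalabs.net' -MoveToFolder 'jeff:\General'

Remove-PSSession $session
```

The `-RequireSenderAuthenticationEnabled $false` bit is the magic: that’s what makes the group public-facing instead of internal-only.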

All this for my Enterprise of One, Daisetta Labs.net. For $8/month.

You might think it’s overkill to have a mighty Exchange instance for yourself, but the ability to create a public-facing distribution group is a killer app that can help you rationalize some of your cloud hassles at home and take charge & ownership of your email, which, I argue, is akin to your birth certificate in the online services world.

My public facing distribution groups, por ejemplo:

distrogroups

 

There are others, like career@, blog@ and such.

The only free service that offers something akin to this powerful feature is Microsoft’s own Outlook.com. If the prefixed email address is available @outlook.com, you can create aliases that are public-facing and use them in a similar way as I do.

But that’s a big if. @outlook.com names must be running low.

Another, perhaps even better use of these public-facing distribution groups: exploiting cloud offerings that aren’t dependent on a native email service like Gmail. You can use your public-facing distribution groups to register and rationalize the family cluster’s cloud stack!

app

It doesn’t solve everything, true, but it goes a long way. In my case, the problem was a tough one to crack. You see, ever since the child partition emerged out of dev, into the hands of a skilled QA technician, and thence, under extreme protest, into production, I’ve struggled to capture, save & properly preserve the amazing pictures & videos stored on the Supervisor Module’s iPhone 5.

Until recently, Supe had the best camera phone in the cluster (My Lumia Icon outclasses it now). She, of course, uses Gmail so her pics are backed up in G+, but 1) I can’t access them or view them, 2) they’re downsized in the upload and 3) AutoAwesome’s gone from being cool & nifty to a bit creepy while iCloud’s a joke (though they smartly announced family sharing yesterday, I understand).

She has the same problems accessing the pictures I take of Child Partition on the Icon. She wants them all, and I don’t share much to the social media sites.

And neither one of us wants to switch email providers.

So….

Consumer OneDrive via Microsoft account registered with general@mydomain.com with MFA. Checks all the Boxes. I even got 100GB just for using Bing for a month

Available on iPhone, Windows phone, desktop, etc? Check.

Easy to use, beautifully designed even? Check

Can use a public-facing distribution group SMTP address for account creation? Check

All tied into my E1 Exchange instance!

It works so well I’m using general@mydomain.com to sync Windows 8.1 between home, work & the lab. Only thing left is to convince the Supe to use OneNote rather than Evernote.

I do the same thing with Amazon (caveat_emptor@), finance stuff, Pandora (general@), some Apple-focused accounts, basically anything that doesn’t require a native email account, I’ll re-register with an O365 public-facing distribution group.

Then I share the account credentials among the cluster, and put the service on the cluster’s devices. Now the Supe’s iPhone 5 uploads to OneDrive, which all of us can access.

So yeah. E1 & public-facing distribution groups can help soothe your personal cloud woes at home, while giving you the tools & exposure to Office 365 for #InfrastructureGlory at work.

Good stuff!

vSympathy under vDuress

An engineer in a VMware shop that’s using VMware’s new VSAN converged storage/compute tech had a near 12 hour outage this week. He reports in vivid detail at Reddit, making me feel like I’m right there with him:

At 10:30am, all hell broke loose. I received almost 1000 alert emails in just a couple minutes, as every one of the 77 VM’s in the cluster began to die – high CPU, unresponsive, applications or websites not working. All of the ESXi hosts started emitting a myriad of warnings, mostly for high CPU. DRS attempted to start migrating VM’s but all of the tasks sat “In progress”. After a few minutes, two of the ESXi hosts became “disconnected” from vCenter, but the machines were still running.

Everything appeared to be dead or dying – the VM’s that didn’t immediately stop pinging or otherwise crash had huge loads as their IO requests sat and spun. Trying to perform any action on any of the hosts or VM’s was totally unresponsive and vCenter quickly filled up with “In progress” tasks, including my request to turn off DRS in an attempt to stop it from making things worse.

I’m a Hyper-V guy and (admittedly) barely comprehend what DRS is, but wow. I’ve got 77 VMs in my 6-node cluster too. And I’ve been in that same position, when something unexpected…rare…almost impossible to wargame…happens and the whole cluster falls apart. For me it was an ARP storm in the physical switch, thanks in part to an immature understanding of 2008 R2’s virtual switching.

I’m not ashamed to say that in such situations intuition plays a part. Logs are an incomprehensible firehose, not useful and maybe even a distraction from the real problem. Your ops manager VM, if stored within the cluster (cf. observer effect), is useless, and so, what do you have?

You have what lots of us have, no matter the platform. A support contract. You spend valuable minutes explaining your situation to a guy on the phone who handles many such calls per day. Minutes, then a half hour, then a full hour tick by. The business is getting restless & voices are being raised. If your IT group has an SLA, you’re now violating it. Your pulse is rising, you’re sweating now.

So you escalate. Engage the sales team who sold you the product…you’re desperate. This guy got a vExpert on the phone. At times, I’ve had MVPs helping me. Yet with some problems, there are no obvious answers, even for the diligent & extraordinary.

But if you’re good, you’ve a keen sense of what you know versus what you don’t know (cf. Donald Rumsfeld for the win), and you know when to abandon one path in favor of another. This engineer knew exactly the timing of his outage…what he did, when he finished the work he did, and when the outage started. Maybe he didn’t have it down in a spreadsheet, and proving it empirically in court would never work, but he knew: he was thinking about what he knew during his outage, and he was putting all his knowns and unknowns together and building a model of the outage in his head.

I feel simpatico with this guy…and I’m not too proud to say that sometimes, when nothing’s left, you’ve got to run to the server room (if it’s near, which it’s not in my case, or in this engineer’s case I think) and check the blinky lights on the hard drives on each of your virtualization nodes. Are they going nuts? Does it look odd? The CPUs are redlined and the putty session on the switch is slow…why’s that?

Is this signal, or is this noise?

Observe the data, no matter how you come by it. Humans are good at pattern recognition. Observe all you can, and then deduce.

Bravo to this chap for doing just that and feeling -yes feeling at times- his way through the outage, even if he couldn’t solve it.

High five from a Hyper-V guy.

Cloud Praxis #4 : Syncing our Dir to Office 365

praxis4dirsync

The Apollo-Soyuz metaphor is too rich to resist. With apologies to NASA, astronauts & cosmonauts everywhere

Right. So if you’ve been following me through Cloud Praxis #1-3 and took my advice, you now have a simple Active Directory lab on your premises (wherever that may be) and perhaps you did the right thing and purchased a domain name, then bought an Office 365 Enterprise E1 subscription for yourself. Because reading about contoso.com isn’t enough.

What am I talking about, “if”? I know you did just what I recommended you do. I know because you’re with me here, working through the Cloud Praxis program because you, like me, are an IT Infrastructurist who likes to win! You are a fellow seeker of #InfrastructureGlory, and you will pursue that ideal wherever it is: on-prem, hybrid, in the cloud, buried in a signed cmdlet, on your hybrid iSCSI array or deep inside an NVGRE-encapsulated packet, somewhere up in the Overlay.

Right. Right?

Someone tell me I’m not alone here.

You get there through this thing.

So DirSync. Or Directory Synchronization. In the grand Microsoft tradition of product names, DirSync has about the least sexy name possible. Imagine yourself as a poor Microsoft technology reseller; you’ve just done the elevator pitch for the Glories that are to be had in Office 365 Enterprise & Azure, and your mark is interested and so he asks:

Mark: “How do I get there?”

Sales guy: “DirSync”

Mark: “Pardon me?”

Sales Guy: “DirSync.”

Mark: “Are you ok? Your voice is spasming or something. Is there someone I can call?”

DirSync has been around for a long, long time. I hadn’t even heard of it or considered the possibility of using it until 2012 or 2013, but while prepping the Daisetta Lab, I realized this goes back to 2008 & Microsoft Online Services.

But today, in 2014, it’s officially called Windows Azure Active Directory Sync, and though I can’t wait to GifCam you some cool powershell cmdlets that show it in action, we’ve got some prep work to do first.

Lab Prep for DirSync

As I said in Cloud Praxis #3, to really simulate your workplace, I recommend you build your on prem lab AD with a fully-routable domain name, then purchase that same name from a registrar on the internet. I said in Cloud Praxis #2 that you should have a lab computer with 16GB of RAM and you should expect to build at least two or three VMs using Client Hyper-V at the minimum.

Now’s the time to firm this all up and prep our lab. I know you’re itching to get deep into some O365, but hang on and do your due diligence, just like you would at work.

  • Lab DHCP: What do you have as your DHCP server? If it’s a consumer-level wifi router that won’t let you assign an FQDN to your devices, consider ditching it for DHCP and standing up a DHCP instance on your Lab Domain Controller. Your wife will never know the difference, and you can ensure 1) that your VMs (whether 1 or 2 or several) get the proper FQDN suffix assigned, and 2) that you can disable NetBIOS via MS DHCP
  • Get your on-prem DNS in order: This is the time to really focus on your lab DNS. I want you to test everything; make some A records, ensure your PTRs are created automatically. Create some CNAMEs and test forwarding. Download a tool like Steve Gibson’s DNS Benchmark to see which public name servers are the closest to you and answer the quickest. For me, it’s Level 3. Set your forwarders appropriately. Enable logging & automatic testing
  • Build a second DC: Not strictly required, but best practice & wisdom dictate you do this ahead of DirSync. Do what I did; go with a Server Core VM for your second DC. That VM will only need 768MB of RAM or so, and a 15GB .vhdx. But with it, you will have a healthier domain on-prem
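On Server 2012 R2, the bullet points above boil down to a handful of cmdlets. A sketch under my own assumptions: the zone name is mine, and the hostnames, IP addresses and scope ranges are hypothetical lab values; swap in yours.

```powershell
# Second DC on Server Core: promote it into the existing lab domain
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName 'daisettalabs.net' -InstallDns -Credential (Get-Credential)

# DNS hygiene on the first DC: an A record (with its PTR), then forwarders per your benchmark
Add-DnsServerResourceRecordA -ZoneName 'daisettalabs.net' -Name 'nas01' -IPv4Address '192.168.1.50' -CreatePtr
Set-DnsServerForwarder -IPAddress 209.244.0.3, 209.244.0.4   # Level 3, the winner in my DNS Benchmark run

# DHCP on the DC instead of the consumer wifi router
Install-WindowsFeature DHCP -IncludeManagementTools
Add-DhcpServerv4Scope -Name 'Lab' -StartRange 192.168.1.100 -EndRange 192.168.1.199 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -DnsDomain 'daisettalabs.net' -DnsServer 192.168.1.10
```

Do it all from PowerShell once and you’ll have a repeatable lab build script as a bonus.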

Now over to O365 Enterprise portal. Read the official O365 Induction Process as I did, then take a look at the steps/suggestions below. I went through this in April; it’s easy, but the official guides leave out some color.

Office 365 Prep & Domain Port ahead of DirSync

  • Go to your registrar and verify to Microsoft that you own the domain via a TXT record: Process here
  • Pick from the following options for DNS and read this:
    • Easy but not realistic: Just handover DNS to O365. I took the easy way admittedly. Daisetta Labs.net DNS is hosted by O365. It’s decent as DNS hosting goes, but I wouldn’t have chosen this option for my workplace as I use an Anycast DNS service that has fast CDN Propagation globally
    • More realistic: Create the required A records, CNAMEs, TXT and SRV records at your registrar or DNS host and point them where Microsoft says to point them
    • Balls of Steel Option: Put your Lab VM in your DMZ, harden it up, point the registrar at it and host your own DNS via Windows baby. Probably not advisable from a residential internet connection.
  • Keep your .onmicrosoft.com account for a week or two: Whether you’re starting out in O365 at work or just learning the system like I did, you’ll need your first O365 account for a few days, as porting your domain name takes 24-36 hours. Don’t assign your E1 licenses to your @domain.com account just yet.
  • I wouldn’t engage MFA just yet…let things settle before you turn on Multifactor authentication. Also be sure your backup email account (The oh shit account Microsoft wants you to use that’s not associated with O365) is accessible and secure.
  • Fresh start cause I couldn't build out an Exchange lab :sadface:

    If you are simulating Exchange on-prem to hybrid for this exercise, you’ll have more steps than I did. Sadly, I had to take the easy way out and selected “Fresh Start” in the process.

  • Proceed with the standard O365 wizard setups, but halt at OnRamp: I’m happy to see the Wizard configuration method is surviving in the cloud. Setting all this up won’t take long; the whole portal is pretty easy & obvious until you get to Sharepoint stuff.

Total work here is a couple of hours. I can’t stress enough how important your lab DNS & AD health are. You need to be rock solid in replication between your DCs, your DNS should be fast & reliably return accurate results, and you should have a good handle on your lab replication topology, a proper Sites & Services setup, and a dialed-in Group Policy and OU structure.

Daisetta Labs.net looks like this:

daisettalabsad

 

and dcdiag /e & repadmin show no errors.
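“No errors” is worth verifying explicitly before you point DirSync at your domain. The commands I lean on are stock Windows tools, nothing exotic:

```powershell
# All tests, every DC, quiet mode: only failures print
dcdiag /e /q

# Replication health rollup for the whole domain
repadmin /replsummary

# Any replication partner with a failure count worth worrying about?
repadmin /showrepl * /csv | ConvertFrom-Csv |
    Where-Object { [int]$_.'Number of Failures' -gt 0 }
```

If the last command returns nothing, your replication topology is as clean as mine.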

Final Steps before DirSync Blastoff

  • With a healthy domain on-prem, you now need to create some A records, CNAMEs and TXT records so Lync, Outlook, and all your other fat clients dependent on Exchange, Sharepoint and such know where to go. This is quite important; at work, you’ll run into this exact same situation. Getting this right is why we chose to use a routable domain; it’s a big chunk of the reason why we’re doing this whole Cloud Praxis thing in the first place. It’s so our users have an enjoyable and hassle-free transition to O365
  • Follow the directions here. Not as hard as it sounds. For me it went very smoothly. In fact, the O365 Enterprise portal gives you everything you need in the Domain panel, provided you’ve waited about 36 hours after porting your domain. Here’s what mine looks like on-prem after manually creating the records.

dns
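In zone-file terms, the 2014-era Exchange Online & Lync Online records look roughly like the fragment below. Treat it as a sketch, not gospel: copy the exact targets and values from your own O365 domain panel, since Microsoft adjusts them from time to time, and your MX target is derived from your own domain name.

```
; Exchange Online
autodiscover            CNAME  autodiscover.outlook.com.
@                       MX     0 daisettalabs-net.mail.protection.outlook.com.
@                       TXT    "v=spf1 include:spf.protection.outlook.com -all"

; Lync Online
lyncdiscover            CNAME  webdir.online.lync.com.
sip                     CNAME  sipdir.online.lync.com.
_sip._tls               SRV    100 1 443 sipdir.online.lync.com.
_sipfederationtls._tcp  SRV    100 1 5061 sipfed.online.lync.com.
```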

And that’s it. We’re ready to sync our Dirs to O365’s Dirs, to get a little closer to #InfrastructureGlory. On one side: your on-prem AD stack, on the launch pad, in your lab, ready for liftoff.

Sure, it’s a little harebrained, admittedly, but if you’re like me, this is how you learn. And I’m learning. Aren’t you?

On the other launch pad, Office 365. Superbly architected by some Microsoft engineers, no longer joke-worthy like it was in the BPOS days, a place your infrastructure is heading to whether you like it or not.

I want you to be there ahead of all the other guys, and that’s what Cloud Praxis is all about: staying sharp on this cloud stack so we can keep our jobs and find #InfrastructureGlory.

DirSync is the first step here, and I’ll show you it on the next Cloud Praxis. Thanks for reading!

Cloud Praxis #3 : Office 365, Email and the best value in tech

Email.

What’s the first thought that comes to your mind when you read that word?

Exchange_2013-logo

I seek #ExchangeGlory !! said no IT blogger ever

If you’re in IT in the Microsoft space, maybe you think of huge mailbox stores, Exchange, Outlook, legal discovery requirements, spam headaches and the pressure & demand that stack places on your infrastructure. Terabytes and terabytes of the stuff, going back years. All up in your stack, DAG on your spindles, CAS on your edge, all load balanced at Layer 4/7 behind a physical or virtual device & wrapped up in a nice legitimate, widely-recognized CA-issued SSL cert. The stuff is everywhere.

I almost forgot. You have to back all that stuff up too. To tape in my case.

Oh, and perhaps you also recall the cold chills & instant sense of dread & fear you’ve felt just about every time an end user has asked (sometimes via email no less) “Is our email down?” I know the feeling.

Like a lot of Microsoft IT pros, I have my share of email war stories. I think email is one of those things in technology that lends itself to a sort of dualism, a Devil on this shoulder, Angel on that shoulder. You can’t say something positive about email without adding a “but…” at the end, and that’s ok. Cognitive dissonance is allowed here; you can believe contrary ideas about email at the same time.

I know I do:

I love email because… | I hate email because…
SMTP is the last great agnostic open communication protocol | SMTP is too open and prone to abuse
Email is democratic and foundational to the internet | Email is fundamentally broken
Email will be around in some form forever | There's no Tread Left on this Tire
Email is your online identity | Messaging applications are all the rage and so much richer
It's how businesses communicate and thrive | One man's business communication is another man's spam
It's always there | It goes down sometimes
Spam fighters and blacklists | Spam fighters and blacklists
It justifies Infrastructure Spend | It uses so much of my stack
Exchange is awesome and flexible | I broke Exchange once and fear it

Whatever your thoughts on email are, one thing is clear: for Microsoft Infrastructure guys pondering the Microsoft cloud, the path to #InfrastructureGlory clearly travels through Exchange Country. In fact, it’s like the first step we’re supposed to take via Office 365.

I don’t know about you, but I worry about the bandits in Exchange Country. Bandits that may break mail flow, or allow the tidal wave of spam in, prompt my users excessively for passwords, engage in various SSL hijinks, or otherwise change any of the finely-tuned ingredients in the delicate recipe that is my Exchange 2010 stack.

And yet, I bet if you polled Microsoft IT guys like me, you would find that of all the things they want to stick up in the Microsoft cloud, Exchange & the email stack is probably at the top of the list. Just take it off our plate, Microsoft. Exchange and email are in a sort of weird place in IT: it’s mission-critical and extremely important to have a durable Exchange infrastructure, yet raise your hand if you think Exchange Administration/Engineering is a good career path to take in 2014.

Didn’t think so.

So how do we get there?

I don’t have all the answers yet, but I at least have a good picture of the project, some hands-on experience, and some optimism, all of which means I’m one step closer to #InfrastructureGlory in the cloud.

Hard to build a realistic Exchange Lab 

First of all, recognize this. While it’s easy to build out a lab infrastructure (Cloud Praxis #2) for Active Directory, it’s quite another thing to build out an Exchange lab, as I found out. You can’t do SMTP from home anymore (the spammers ruined that), which means you need resources at work, which might or might not be available. They aren’t in my case, so I struggled for a while.

Maybe you have some resources at work (a few extra public IPs, a walled-off virtual network, some storage) with which you can build out an Exchange lab. If so, evaluate whether that’s going to benefit you and your organization. It might be a black hole of wasted time; it might pay off in a huge way as you wargame your way from on-prem to hybrid then to cloud and finally #InfrastructureGlory

Office 365 Praxis with the E1 Plan

For me and Daisetta Labs.net, I decided I couldn’t adequately simulate my workplace Exchange. So I did the next best thing.

I bought an Office 365 Enterprise E1 subscription.

That’s right baby. Daisetta Labs.net is on the O365 Enterprise E1 plan. It’s an Enterprise of 1 (me!) but an Enterprise-scaled O365 account nonetheless.

And it’s fantastically cheap & easy to do, less than $100 a year for all this:

o365e1

For that measly amount, you can be an Enterprise of one in O365 and get all this:

  • A real Office 365 Enterprise account with Exchange 2013 and all of its incredibly rich features & options, including Powershell remoting, which you’ll need in your real O365 migration
  • That’s private email too…no ad bots gathering data against your profile. Up to you, but I moved my personal stack to O365 (more on that later)
  • Lync 2013. Forget Skype and all the other messengers. You get Lync service! Which interfaces with Skype and many others and makes you look like a real pro. Also useful if you have on-prem Lync, though I’m sad to report to you that, as of this month, Lync 2013 in O365 can’t kill your PBX off…yet.
  • Sharepoint & OneDrive for Business: I’ll admit it, I’ve done my fair share of Sharepoint hating, but IT Infrastructurists need to realize Sharepoint is the gateway drug to many things businesses are interested in, like Business Intelligence & SQL, data visualizations and more. Besides, Sharepoint 2013 is not your daddy’s Sharepoint; it can do some neat stuff (not that I can show you, yet).
  • OneDrive for Business, again: If you’re in a Microsoft shop that’s still mostly on-prem, you probably experience Dropbox creep, where your users share documents via Dropbox or other personal online storage solutions. With E1, you can get familiar with OneDrive for Business, within the context of Sharepoint & O365 management, DirSync, and all the rest.
  • One Terabyte of OneDrive for Business Storage. Outstanding. This was a recent announcement. It tickles me to think that my data is being deduped by a Windows storage spaces VM somewhere, just like I do on my storage at work.
  • Office Online : full on WAC server baby, with Excel in your Chrome or IE browser. Better, and better looking, than Google Docs.
  • With this plan, you can really test out Office for the iPad. You’ll get read and write to your O365 documents via an iPad, which can help you at work with that one C-level who loves his iPad as much as he loves Excel.
  • DirSync: The very directory synchronization tool you have stressed over at work is available to you with this simple, cheap E1 subscription. And it’s working. I’ve done it. Daisetta Labs.net is dirsynced to O365 from my home lab and I have SSO between my on-prem AD & Office 365. I had deliberately kept my passwords separate between the two, but now they are in sync.

Any way you cut it, O365 E1 is an amazingly affordable and very effective way to confront your cloud angst and get comfortable with Office 365. Even if you can’t fully simulate your workplace Exchange stack, you should consider doing this; you will use these same tools (particularly Powershell remoting, the wizards in O365 & DirSync) at some point; best to get familiar with them now.

I could have hosted my Daisetta Labs.net domain anywhere; but I have zero regrets putting it in O365 on the E1 plan and committing for 12 months. If you’re an IT pro like me trying to get your infrastructure to the Microsoft cloud, you’d be well-served by doing the same thing I did. You may even want to ditch your personal email account and just go full Office 365…to eat the same dog food we’re going to serve to our users soon.

More to come on this tomorrow, suffice it to say, DaisettaLabs.net is dirsyncing as I write this. I’ll have screenshots, wizard processes and more to show.

Cloud Praxis #2 : Let’s build some on-prem Infrastructure

daisettalabs large logo

If you haven’t seen the opening post in this series, I encourage you to read it. The TL;DR version is this: I’m an IT guy in Infrastructure supporting the Microsoft stack, specializing mostly in virtualization & storage. I’ve been in IT for almost 15 years now and I love what I do, but what I do is being disrupted out of existence by the Microsoft cloud.

Cloud Praxis is my response. It’s a response addressed to my situation and to other IT guys like me. It’s a response & a method that’s repeatable, something you can practice, hone, and master. That’s how I learn- hands-on experimentation. Like the man Cicero said at the top of the mission patch above, “Constant practice devoted to one subject often outdoes both intelligence and skill.” 

Above all, Cloud Praxis is a recognition that the 1) The Microsoft cloud is real & it’s here to stay, 2) my skills are entirely based in the old on-prem model, 3) I better adapt to the new regime, lest I find myself irrelevant, 4) it’s urgent that I tackle this weakness in my portfolio; I can’t wait on my workplace to adopt the cloud, I need some puffy cloud stuff in my CV post-haste, not next year or in two years. 

This is how I did it.

It may not be the Technet way, or the only way, but it was my way. And I’m sharing it with you because maybe you’re like me; a mid-career IT generalist with a child partition at home, perhaps a little nervous about this cloud thing yet determined to stay competitive, employable and sharp. Or maybe you are just a fellow seeker of #InfrastructureGlory.

If that’s the case, join me; I’ll walk you through the steps I took to get a handle on this thing.

Oh, it’s also a lot of fun. Join me!

PRAXIS #2 : BUILD THEE SOME INFRASTRUCTURE - Infrastructure Requirements
Item Type | Suggested Config | Cost | License? | Notes
Compute/Storage | A PC with at least 16GB RAM & ethernet | Depends | No | Needs to be virtualization capable
Compute/Hypervisor | Windows 8.1 Pro or Ent | $200 or free 90-day eval | Yes | 2012 R2 eval works too
VM OS | Windows 2008 R2-2012 R2 Standard | $0 with 90-day eval | Yes | Timer starts the day you install
Network | Always-on high-speed internet at home/work | $0 | No | Obviously

The very first step on our path to #InfrastructureGlory in the Microsoft Cloud is this: we need to build ourselves some on-prem infrastructure of sufficient size & scope to simulate our workplace infrastructure.

The good news is that the very same technology that revolutionized Infrastructure 4-5 years ago (virtualization) is now available downmarket, so downmarket in fact that you can build an inexpensive yet capable virtualization lab on a cheap consumer-level PC your family can use at home.

And you don’t even need a server OS on your parent partition to do this. As remarkable as it sounds, you can build a simple virtualization lab on consumer hardware running (at minimum) Windows 8.1 Pro, as long as your PC 1) is virtualization capable, 2) has sufficient memory (16GB RAM, though I suppose you could get by with as little as 8GB), 3) has some storage to spare, and 4) has a NIC (the one on your motherboard will likely work fine; just connect it to your home router).
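
Not sure whether your box qualifies on point 1? Here’s a quick, hedged way to check from PowerShell (the WMI properties below exist on Windows 8 and later; plain old systeminfo works too):

```powershell
# Does the CPU have the virtualization bits Client Hyper-V needs?
# (VT-x/AMD-V enabled in firmware, plus SLAT)
Get-WmiObject Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions

# Or scroll to the "Hyper-V Requirements" section at the bottom of:
systeminfo
```

If both properties come back True, you’re in business.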

If you don’t have this installed on your Windows 8 machine, you’re missing out.

How’s this possible?

Client Hyper-V baby. It’s the first and only feature you need to build a modest virtualization lab at home on your road to #InfrastructureGlory in the cloud. Client Hyper-V has about 60% of the features server Hyper-V has and uses a common management snap-in and cmdlets. You can’t build a converged fabric switch in Client Hyper-V nor play around with LACP and Live Migration, but at this scale, you don’t need to. You just need a place to park two or three VMs, an ethernet adapter on top of which you’ll build a virtual switch, and a bit of storage space for your VMs.
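
Here’s roughly what that looks like from an elevated PowerShell prompt on Windows 8.1 Pro. The switch, adapter, path and VM names below are my own placeholders; adjust for your box:

```powershell
# Turn on Client Hyper-V (a reboot is required afterward)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

# Build a virtual switch on top of your physical NIC
# ("Ethernet" is the usual default adapter name; confirm with Get-NetAdapter)
New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Park a VM on it; point -NewVHDPath wherever you have storage to spare
New-VM -Name "DC01" -MemoryStartupBytes 2GB -SwitchName "LabSwitch" `
    -NewVHDPath "C:\VMs\DC01.vhdx" -NewVHDSizeBytes 60GB
```

Three cmdlets and you’ve got the bones of a lab.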

And some focus & intensity. I’m a virtualization guy and anything prefixed with a “v” gets me excited. It’s easy to get distracted in lab work, but my advice is to keep it simple and keep your focus where it belongs: Azure & Office 365. As much as I love virtualization, it’s just a bit player now.

Now What? 

Broadly outlined, here’s what you need to do once you’ve got your lab infrastructure ready.

  1. Make with the ISO downloads!: Check with your IT management and ask about your organization’s licensing relationship with Microsoft and/or the reseller your group works with. You might be surprised by what you find; though Microsoft has stopped selling Technet subscriptions to individuals, if your IT group has an Enterprise Agreement, Software Assurance, or MSDN subscriptions, you may be able to get access to those Server products under those licensing schemes. See how far your boss lets you take this; some licenses, for instance, give you $100 worth of credit in Azure, something I’m taking advantage of right now. I am not a licensing expert though, so read the fine print, get sign-off from your boss before you do anything with licensed products and understand the limitations.
  2. Consider your workplace Domain Functional Level: If you’re at the 2008 functional level at work, try to get the Server 2008 iso. If you’re at (gasp!) 2003, get that iso if you can and start reading up on Domain Functional Levels & dirsync requirements. I see some Powershell in your future. At my work, we’re relatively clean & up to date in AD: the Forest functional level is at 2012, limited only by Exchange 2010 at this point (haven’t done the latest roll-up that supports 2012 R2). The idea here is to simulate, to the greatest degree possible, your workplace-to-cloud path.
  3. Build at least two VMs: You can follow the process as outlined here on Technet. VM1 is going to be your domain controller, so if you’re at the 2008 functional level at work, build a 2008 VM. Your second VM will host dirsync and other cloud utilities. Technet says it can run 2012 R2, so you can use that. In my lab, I stood up a 2008 R2 server for this purpose.
  4. Decide on a domain name: Now for some fun. You need to think of a routable domain name for your Windows domain, unless your workplace is on a .local or other non-routable domain. My workplace’s domain is routable, so I built a routable domain in the lab, then took the optional next step: I purchased the domain name from a registrar ($15), as this most closely simulates my workplace (on-prem domain matching internet domain). You should do the same unless you’re really confident in yourself; this step is very important for the next stage as we start to think about User Principal Name attributes and synchronizing our directory with Office 365 via Windows Azure.
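
Not sure what functional level you’re actually at (step 2 above)? If you have the AD PowerShell module from RSAT handy, this little sketch will tell you:

```powershell
# Query your workplace domain & forest for their functional levels
Import-Module ActiveDirectory
(Get-ADDomain).DomainMode   # e.g. Windows2008Domain
(Get-ADForest).ForestMode   # e.g. Windows2012Forest
```

Whatever comes back is the level you want to replicate in the lab.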

And that, friends, is how Daisetta Labs.net* was born. I needed a domain name. Agnostic Computing.com was taken by some jerk blogger and I wanted something fast.

In retrospect, it might have been better to use Agnostic Computing.com as my lab domain because that’s a more realistic scenario; in the real world, I gots me some internet infrastructure tied to my routable domain and a 3rd party DNS host. I also gots me some on-prem Windows domain infrastructure tied to a routable domain name tied to my Windows DNS infrastructure on-prem. If you’re rusty on DNS, this is your chance to get up to speed as it’s everything in Microsoft-land.

On the domain name itself: pick a domain name and have some fun with it, but maintain a veneer of professionalism and respectability. I want you to be able to put this on your resume, which means someone might ask you about it someday. If you take the next step and buy a domain from a registrar, you’ll want a domain name you’re not ashamed to have as an SMTP address.

In Cloud Praxis #3, we’re going to take some baby steps into Office 365. Hope you check back tomorrow!

*Some readers have asked me what a Daisetta is, and why it’s a lab. Not sure how to answer that. Maybe Daisetta is the name of my first love; or my first pet dog. Perhaps it’s a street name or a place in Texas, or maybe it’s the spirit of innovation & excitement that propels me forward, that compels me to build a crazy home lab. Or maybe it’s a fugazi, fogazzi, it’s a wazzi, it’s a woozie..it’s fairy dust.


 

Cloud Praxis #1: Advice for Microsoft IT Pros w/ Cloud angst

It’s been a tough year for those of us in IT who engineer, deploy, support & maintain Microsoft technology products.

First, Windows 8 happened, which, as I’ve written about before, sent me into a downward spiral of confusion and despair. Shortly after that but before Windows 8.1, Microsoft killed off Technet subscriptions in the summer of 2013, telling Technet fans they should get used to the idea of MSDN subscriptions. As the fall arrived, Windows 8.1 and 2012 R2 cured my Chrome fever just as Ballmer & Crew were heading out the door.

Next, Microsoft took Satya Nadella out of his office in the Azure-plex and sat him behind the big mahogany CEO desk at One Microsoft Way. I like Nadella, but his selection spelled more gloom for Microsoft Infrastructure IT guys; remember it was Nadella who told the New York Times that Microsoft’s on-prem infrastructure products are old & tired and don’t make money for Microsoft anymore.

And then, this spring…first at BUILD, then TechEd, Microsoft did the unthinkable. They invited the Linux & open source guys into the tent, sat them in the front row next to the developers and handed them drinks and party favors, while more or less making us on-prem Infrastructure guys feel like we were crashing the party.

No new products announced for us at BUILD or TechEd, ostensibly the event built for us. Instead, the TechEdders got Azured on until they were blue in the face, leading Ars’ @DrPizza to observe:

We think it feels pretty shitty Dr. Pizza, that’s how. It feels like we’re about to be made obsolete, that we in the infrastructure side of the IT house are about to be disrupted out of existence by Jeffrey Snover’s cmdlets, Satya’s business sense and something menacingly named the Azure Pack.

And the guys who will replace us are all insufferable devs, Visual Studio jockeys who couldn’t tell you the difference between a spindle and a port-channel, even when threatened with a C#.

Which makes it hurt even more Dr. Pizza, if that is your real name.

But it also feels like a wake-up call and a challenge. A call to end the cynicism and embrace this cloud thing because it’s not going away. In fact, it’s only getting bigger, encroaching more and more each day into the DMZ and onto the LAN, forcing us to reckon with it.

The writing’s on the wall fellow Microsofties. BPOS uptime jokes were funny in 2011 and Azure doesn’t go down anymore because of expired certs. The stack is mature, scalable, and actually pretty awesome (even if they’re still using .vhd for VMs, which is crazy). It’s time we step up, adopt the language & manners of the dev, embrace the cloud vision, and take charge & ownership of our own futures.

I’d argue that learning Microsoft’s cloud is so urgent you should be exploring it and getting experienced with it even if your employer is cloud-shy and can’t commit. Don’t wait on them if that’s the case, do it yourself!

Because, if you don’t, you’ll get left behind. Think of the cloud as an operating system or technology platform and now imagine your resume in two, five, or seven years without any Office 365 or Azure experience on it. Now think of yourself actually scoring an interview, sitting down before the guy you want to work for in 2017 or 2018, and awkwardly telling him you have zero or very little experience in the cloud.

Would you hire that guy? I wouldn’t.

That guy will end up where all failed IT Pros end up: at Geek Squad, repairing consumer laptops & wifi routers and up-selling anti-virus subscriptions until he dies, sad, lonely & wondering where he went wrong.

Don’t be that guy. Aim for #InfrastructureGlory on-prem, hybrid, or in the cloud.

Over the coming days, I’ll show you how I did this on my own in a series of posts titled Cloud Praxis.

  • Cloud Praxis #2 (on-prem): general guidance on building an AD lab to get started
  • Cloud Praxis #3 (cloud): wherein I think about on-prem email and purchase an O365 E1 sub
  • Cloud Praxis #4 (forthcoming): likely dirsync-focused
  • Cloud Praxis #5 (hybrid): got 24 days & $100 in Azure credits + a wildcard SSL cert. Floor it!

 

Been iterating so fast, log file can’t keep up

Sorry for the lack of content lately; between the hyperactive child partition redlining my CPU and hogging all my spare bandwidth for himself and some interesting developments at work (Why hello there spare MSDN sub and $100/month in Azure credits! and testing out a certain OpsMan package I raved about in March), Agnostic blogging has virtually ground to a standstill. Fail.

I promise some good stuff tomorrow and in the days that follow, including:

  • How to Win the Cloud Wars @ Home and Join the Battle for them at Work
  • So long iSCSI & block storage, I found a new love: SMB 3 Multichannel

 

Azure RemoteApp Announced at #MSTechEd

Application delivery….at the end of the day, it’s really what we do in IT, isn’t it? It’s what all the complexity, all the cost, and all the headaches are for: delivering applications to our users.

Sure it’s cool to talk endlessly of hypervisors and spindles and hybrid storage arrays and network virtualization, but we’re not paid to just have fun with racks of gear. We’re paid to make sure the applications our users need are accessible when they need it, wherever they may be.

Yeah. Let’s talk about Layer 7 baby. User space. C:\Users\AppData, the registry hive, all that good stuff.

An executive once came to my IT department and said what he wants out of IT is for it to function like an electrical utility. When the user toggles the light switch, it should just work; the light should turn on, he said. Our job as infrastructure engineers, he continued, was to watch all the turbines and generators and power lines in the background to ensure the reliable delivery of just-in-time electrical capacity for the moment the user toggles that switch.

I wish I could talk with that executive again, because I think that analogy is useful, and what’s more, it’s kind of the model public cloud providers like Google Compute Engine, Amazon, and Azure are selling. They want to become your company’s computing utility provider.

I’m down with the “cloud” and excited by some of its potential, but to go back to that executive’s analogy, I don’t see the whole picture here. Sure, take my infrastructure, cloudify it, put some Azure or AWS way up into it…have at it. I get that.

But what’s the light switch look like? Does it operate the same as the one my users are familiar with on-prem? Does it toggle vertically, or does Cloud Provider.com require horizontally-toggled light switches for some obscure reason? Is it a radically different light switch, operating on Direct Current rather than the familiar but inefficient Alternating Current? What other surprises regarding the light switches are there?

Because I hate surprises. Especially on high-visibility & fundamental things like light switches that my users need to do their jobs.

  • On Google Compute Engine’s cloud, app delivery, from what I can gather, is some mix of HTML 5, ChromeOS or Android apps, or perhaps VMware View + ChromeBooks. Lots of Linuxy stuff. For an on-prem Windows environment with some in-house .NET-coded business applications, the path to the Google cloud is murky and probably involves quite a bit of dev work.
  • Amazon offers cloud VDI...they say they can virtualize your company’s desktop PC and park it in the cloud. Which is like taking my on-prem light switch and just putting it in the cloud. Cool! But the Windows tech they’re using, as Aidan Finn points out, is still Server 2008 R2. And my line-of-business applications are all on Windows 2012 & SQL 2012.

So Microsoft, you’re at bat: how do I deliver my apps to my users in Azure? What’s your light switch look like?

Well today they took a big step forward by announcing something I’m intimately familiar with: RemoteApp for Azure.

RemoteApp, if you’re not familiar with it, is old-school session virtualization, a sort of “first base” in the virtualization story. It’s how we got more out of our hardware before Hypervisors came along (Second Base, in my Virtualization is Like Baseball Bases theory, a diagram of which you can see to the left). It’s the bit of tech that made Citrix into an amazing software company and a valuable Microsoft partner.

RemoteApp is user session virtualization and it’s still around as part of Microsoft’s Remote Desktop Services suite. And it’s how many folks deliver rich Windows apps to their end users (XenApp is king in this space of course) on an increasingly large number of diverse platforms.

And now RemoteApp is in Azure, in preview form, but still. This means the light switch in Azure is the same light switch my users are used to. It’s a little less friction in my path to the cloud, both for me, and my users.

That said, session virtualization can be a royal pain in the ass. So from an engineering standpoint, I’d love to see if Microsoft, acting as a computing utility provider, can fix the top three problems I have with session virtualization technologies:

  • The Group Policy Blender: Session virtualization is tricky at scale because a lot of the management aspects for RemoteApp are Group Policy based. This was really true in Server 2008 R2; 2012 offers better control, but still, much can go wrong. If you use RDS/RemoteApp at scale, with multiple child domains logging into an RDS farm, you have to spend considerable time researching & perfecting Group Policy because you’ll be blending User & Computer group policies from multiple sources (and multiple domains) into that session. Guess what? A lot can, and does at times, go wrong when you build a computer that is logged into by multiple people simultaneously; this alone makes session virtualization almost as tough a nut to crack as VDI. Azure has the scale to just build out VMs to address that complexity; I don’t. Hoping there’s some new logic in place that may trickle down to me or justify me offloading this to Azure completely.
  • Localization: Here’s hoping Azure RemoteApp has something more elegant and less hackish than what I have on-prem to localize sessions. My RDS server is in North America. My user is in Australia. Make the session reflect the Aussie user’s time & date format and the goofy way they use commas instead of periods in monetary units. Oh and when the French user logs into the same RDS box, apply the…je ne sais quoi…qualities the French userbase demands. You know how I do this now? A simple vbs script is triggered upon login; if an LDAP lookup of the user matches certain criteria, the French regional settings .reg file is applied to the registry hive. I want desperately to Powershell this; I wonder how Azure does it…maybe the fix is to park the session in the Azure datacenter closest to Paris & Sydney, something I can’t do. In that case, awesome!
  • Printing: Whole companies have been founded to optimize printing from session virtualization instances to the HP Laserjet on your desk. To say that printing can be a headache in session virtualization is a bit like saying a fire at a gas station can be a reason to call the fire department.
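
On that localization point: since I said I want desperately to Powershell my vbs hack, here’s a rough sketch of what the replacement login script might look like. The OU name and culture below are placeholder assumptions for my directory, and Set-Culture needs Windows 8/2012 or later:

```powershell
# Look the current user up in AD; if they sit in the Sydney OU,
# apply Australian regional settings for this session.
# (OU path and culture are assumptions; adjust for your directory.)
$searcher = [adsisearcher]"(&(objectClass=user)(sAMAccountName=$env:USERNAME))"
$result = $searcher.FindOne()
if ($result -and $result.Properties['distinguishedname'] -match 'OU=Sydney') {
    Set-Culture en-AU   # date, time & currency formats for the current user
}
```

Beats regedit-by-vbs, anyway.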

If Azure can solve these things, or at least make them operate reliably, securely, and speedily (in the case of printing), they can really put themselves at the head of the pack when it comes to cloud adoption in organizations like mine*. As long as the cost for Azure RemoteApp is the same as or cheaper than on-prem RDS licenses, I don’t see why anyone would want to keep RemoteApp on prem.

Now, about App-V….

* I do not speak for my organization even though I just did

Folding Paper Experiment

Have you ever been in a position in IT where you’re asked to do what is, by rational standards, impossible?

As virtualization engineers, we operate under a kind of value-charter in my view. Our primary job is to continuously improve things with the same set of resources, thereby increasing the value of our gear & ourselves.

Looked at economically, our job isn’t so much different than what some people view as the great benefit of a free market economy: we are supposed to be efficiency multipliers, just like entrepreneurs are in the market. We take a set of raw resources, manipulate & reshape them, and extract more value out of them.

I hate to go all tech-crunch on you, but we disrupt. In our own way. And it’s something you should be proud of.

Maybe you never thought of yourself like that, but you should…and you should never sell yourself short.

For guys and gals like us, compute, storage & network are raw resources at our disposal. Anything capable of being virtualized or abstracted has potential value to us, as there are so many variables we can fine-tune and manipulate.

That old Dell PowerEdge 2950 with some DDR2 RAM that shipped to you in 2007? Sure it’s old and slow, but it’s got the virtualization bits in its guts that can, in the right hands, multiply & extend its value. Sure it’s not ideal, but raise your hand if you’re an engineer who gets the Platonic Ideal all the time.

I sure don’t. Even when I think it’s inescapably rational & completely reasonable.

Old switches with limited backplane bandwidth & shallow buffers? They’re junk compared to a modern Arista 10GbE switch, but when push comes to shove, you, as a virtualization engineer, can make them perform in service to your employer.

This is what we do. Or I should say, it’s what some of us are forced to do.

We are, as a group, folding paper again and again, defying the rules & getting more & more value out of our gear.

It can be stressful and thankless. No one sees it or appreciates it, but we are engineers. Many have gone before us, and many will come after us. Resources are always going to be limited for people like us, and it’s our job to manage them well and extract as much as we can out of them.

This post written as much as a pep-talk for myself as for others!


Because you can't spell emotion without involving IT.

Oh you want me to describe how it's life-changing? Come, emote with me:

Imagine you've been stuck dialing up to the internet with a trusty USRobotics 56k modem while everyone around you streams high-def movies to their 1080p phones over 4G LTE or 802.11ac or Facetimes with loved ones like it's just a normal thing normal people do, and not something that's out of the Jetsons. What's more, you're sharing that 56k dial up line with your teenage daughter, and she's just hit Peak Popularity as a sophomore in high school and thus needs landline access pretty much around the clock, which means you have dial-up resource contention/exhaustion to deal with too.

"Sorry my dear users, there's just no IOPS left to give, I need to schedule the dedupes & backups for Saturday 12am... oh wait, did that already. Sorry my dear, I need the dial-up to the information superhighway to download a PDF form, print it, sign it, then mail it for your mother," you beg of her as the modem makes its funny sounds (you know what to listen for when you hit the 56k sweet spot or when you're going to slum it at 14.4k). But your daughter doesn't care; all she sees is her stock falling among her peers minute by minute. She starts to loathe you, you start to drink more, your wife leaves you, and then you die sad and lonely, all because of that damned old dial-up line, er, storage array.

But wait! Let's interrupt that hellish dystopian technology nightmare and pretend some amazing & generous person -an angel really- has taken pity on you and given you a 768kb/s DSL connection, some of those old wall jack DSL filters, a funny looking "WiFi" blue router thing, and the freedom to do two, maybe even three things at once. Suddenly you're elated and overjoyed as you stream a 360p YouTube video and only have to wait 15 seconds on the buffer, and your daughter, talking excitedly to her girlfriend about the day's events, smiles and embraces you once again.

It's like that. That's the feeling I'm having right now.