I was excited to hear on the In Tech We Trust podcast this week that the godfather of all the hyperconverged things, Nutanix, may release a community edition of its infrastructure software this year.
That. Would. Be. Amazing.
I’ve crossed paths with Nutanix a few times in my career, but they’ve always remained just a bit out of reach in my various infrastructure projects. Getting some hands-on experience with the Google-inspired infrastructure system in my lab at home would be most excellent, not just for me, but for them, as I like to recommend product stacks I’ve touched above ones I haven’t.
Take Nexenta as an example. As Hans D. pointed out on the show, aside from downloading & running Oracle Solaris 12, Nexenta’s just about the only way one can experience a mature & enterprise-focused implementation of ZFS. I had a blast testing Nexenta out in my lab in 2014, and though I can’t say my posts on ZFS helped them move copies of NexentaStor, it surely didn’t hurt in my view.
VEEAM is also big in the community space, and though I’ve not tested their various products, I have used their awesome stencil collection.
Lest you think storage & hyperconvergence vendors are the only ones thinking ‘community,’ today my favorite yellow load balancer Kemp announced, in effect, a community edition of their L4/L7 LoadMaster vAppliance. Kemp holds a special place in the hearts of Hyper-V guys; for as long as I can remember, yes, even back in the dark days of 2008 R2, they’ve always released a LoadMaster that’s just about on par with what they offer VMware shops. In 2015 that support is paying off, I think; Kemp’s best-in-class for Microsoft shops running Hyper-V or building out Azure, and with this announcement you can now stress a Kemp at home in your lab or in Azure with your MSDN sub. Excellent.
Speaking of Microsoft, I’d be remiss if I didn’t mention Visual Studio 2013, which got a community edition last fall.
I’d love to see more community editions, namely:
Nimble Storage: I’ve had a lot of success in the last 18 months racking/stacking Nimble arrays in environments with older, riskier storage. I must not be the only one; the company recently celebrated its 5,000th customer. Yet Nimble’s rapid evolution from storage startup with potential to serious storage player is somewhat bittersweet for me, as I no longer work at the places where I’ve installed Nimble arrays and can’t tinker with their rapidly-evolving features & support. Come on guys, just give me the CASL caching system in download form and let me evaluate your Fibre Channel support and test out your support for System Center.
NetApp: A community release of clustered Data ONTAP 8.2x would accomplish something few NetApp products have accomplished in the last few years: create some genuine excitement about the big blocky blue N. I’m certain they’ve got a software-only release in-house, as they’ve already got an appliance for vSphere, and I’ve heard rumors about this from channel sources for years. So what are you waiting for, NetApp? Let us build out, support, and get excited about cDOT community-style, since it’s been too hard to see past the 7-Mode-to-clustered-mode transition pain in production.
On his Greybeards on Storage podcast, Howard Marks once reminisced about his time testing real enterprise technology products in a magazine’s tech lab. His observations became a column, printed on paper in an old-school pulp magazine that was shipped to readers. It was a beneficial relationship for all.
Those days may be gone but thanks to scalable software infrastructure systems, the agnostic properties of x86, bloggers & community edition software, perhaps they’re back!
Of all the lazy, outdated constructs still hanging around in computing, SMB shares mapped as drive letters on client PCs have to be the worst.
Microsoft Windows is the only operating system that still employs these stubborn, vestigial organs of 1980s computing. Why?
Search me. Backwards compatibility perhaps, but really? It’s not like you can install programs to shares mapped as drive letters, block-storage style.
If you work in Microsoft-powered shops like me, then you’re all too familiar with lettered drive pains. Let’s review:
Lettered drives are paradigms from another era: Back in the dial-up and 300 baud modem days, you got in your car and drove to Babbage’s to purchase a big box on a shelf. The box contained floppy diskettes, which contained the program you wanted to use. You put the floppy in your computer and you knew instinctively to type a: on your PC. Several hours later, after installing the full program to your C: drive, you took the floppy out of its drive and A: ceased to exist. If this sounds archaic to you (it is), then welcome to IT’s version of Back to the Future, wherein we deploy, manage, and try to secure systems tied to this model.
Lettered drives are dangerous: The Crypto* malware viruses of the last two years have proven that lettered drives = file server attack vector. I have friends dealing with Gen 3 of this problem today; a drive mapped from one server to all client PCs must be a Russian crypto-criminal’s dream come true.
Your Users Don’t Understand Absolute/Relative Paths: When users want to share a cat video from the internet, they copy + paste the URL into an email, press send, and joyous hilarity ensues. But anger, confusion, despair & Help Desk tickets result when those same users paste a relative path of G:\FridayFun\DebsFunnyCatVids into an email and press send. Guess what, Deb? Not everyone in the world has a G: drive. This is frustrating for IT, and Deb doesn’t understand why they’re so mad when she opens a ticket.
Lettered drives spawn bad practice offspring: Many IT guys believe that lettered drives suck, but they end up making more of them out of laziness, fear, or uncertainty. For instance: say the P:\HR_Benefits folder is mapped to every PC via Group Policy, and everyone is happy. Then one day someone in HR decides to put something on the P: drive that users in a certain department shouldn’t see. IT hears about this and figures, “Well! Isn’t this a pickle. I think, good sir, that the only way out of this storm of bad design is to go through it!” and either stands up a new share on a new letter (\\fs\SecretHRStuff maps to Q:) or puts an NTFS Deny ACL on the sub-folder rather than disabling inheritance. More Help Desk tickets result, twice as many if the drive mapping spans AD Sites and is dependent on Group Policy.
Lettered drives don’t scale: Good on your company for surviving and thriving throughout the 90s, 2000s, and into the roaring teens, but it’s time for a heart-to-heart. That M:\Deals thing you stood up in 1997 isn’t the best way to share documents and information in 2015, when the company you helped scale from one small site to a global enterprise needs access to its files 24/7 from the nearest egress point.
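If you want to size up the lettered-drive problem in your own shop, a few lines of PowerShell (v3+) will list each mapped drive next to the UNC path it hides. A quick sketch, nothing authoritative; output obviously depends on the PC you run it on:

```powershell
# Inventory mapped drives on this PC: the letter users see, and the
# universal \\server\share path (DisplayRoot) it actually points to.
Get-PSDrive -PSProvider FileSystem |
    Where-Object { $_.DisplayRoot -like '\\*' } |
    Select-Object Name, DisplayRoot
```

Run it in a logon script or via remoting across a department and you have a rough map of every letter you will eventually need to kill.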
I wish Microsoft would just tear the band-aid off and prevent drive mapping of SMB shares altogether. Barring that, they should kill it by subterfuge & pain ((Make it painful, like disabling signed drivers or something))
But at the end of the day, we the consumers of the Microsoft stack bear responsibility for how we use it. And unfortunately, there is no easy way to kill the lettered drive, but I’ll give you some alternatives. It’s up to you to sell them in your organization:
OneDrive for Business: Good on Microsoft for putting advanced and updated OneDrive clients everywhere. This is about as close to a panacea as we get in IT. OneDrive should be your goal for files, and your project plan should go a little something like this: 1) classify your on-prem file shares, 2) upload those files & classification metadata to OneDrive for Business, 3) install OneDrive for Business on every PC, device, and mobile phone in your enterprise, and 4) unceremoniously kill your lettered-drive shares.
What’s wrong with wack-wack? Barring OneDrive, it’s trivial to map a \\share\folder to a user’s Library so that it appears in Windows Explorer in a universal fashion, just like a mapped drive would.
DFS: DFS is getting old, but it’s still really useful tech, and it’s on by default in an AD domain. Don’t believe me? Type \\yourdomain and see DFS in action via your NETLOGON & SYSVOL shares. You can build out a file server infrastructure -for free- using Distributed File System tech, the same kit Microsoft uses for Active Directory. Say goodbye to mapping \\server\sharename to Site1 via Group Policy; say hello to automatically putting data close to the user via Group Policy.
Alternatives: If killing off the F: drive is too much of an ask for your organization, make locking those drives down a top priority with tools like SMB signing, access-based enumeration, and the other security bits available in Server 2012 and 2012 R2.
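As a hedged sketch of the DFS alternative: on a Server 2012+ box with the DFS Namespaces role (and its DFSN PowerShell module) installed, standing up a domain-based namespace takes only a couple of cmdlets. Every name below (FS01, corp.example.com, HR) is hypothetical:

```powershell
# Create a domain-based (DomainV2) namespace so users get one stable
# path, \\corp.example.com\Files, no matter which server holds the bits.
New-DfsnRoot -Path '\\corp.example.com\Files' -TargetPath '\\FS01\Files' -Type DomainV2

# Publish an existing share into the namespace. Add a second target
# later and DFS hands out the closest one based on AD site costing.
New-DfsnFolder -Path '\\corp.example.com\Files\HR' -TargetPath '\\FS01\HR'
```

No more re-doing Group Policy drive maps per site; the namespace path never changes even when the file servers behind it do.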
The In Tech We Trust podcast has quickly become my favorite enterprise technology podcast since it debuted late last year. If you haven’t tuned in yet, I advise you to get the RSS feed into your podcast player of choice ASAP.
The five gents ((Nigel Poulton, Linux trainer at Pluralsight; Hans De Leenheer, datacenter/storage guy and one of my secret crushes; Gabe Chapman; Marc Farley; and Rick Vanover)) putting on the podcast are among the sharpest guys in infrastructure technology, have great on-air chemistry with each other, and consistently deliver an organized & smart format that hits my player on time, as expected, every week. Oh, and they’ve equalized the Skype audio feeds too!
And yet… I can’t let the analysis in the two most recent shows slip by without comment. Indeed, it’s time for some tough love for my favorite podcast.
Guys, you totally missed the mark discussing hyperconvergence & Microsoft over the last two shows!
For my readers who haven’t listened, here’s the compressed & deduped rundown of 50+ minutes of good stimulating conversation on hyperconvergence:
There’s little doubt in 2015 that hyperconverged infrastructure (HCI) is a durable & real thing in enterprise technology, and that it’s changing the industry. HCI is not a fad, and customers are adopting it.
But if HCI is real, it’s also different things to different people: for Hans, it’s about scale-out, node-based architecture; for others on the show, it’s more or less the industry definition: unified compute & storage with automation & management APIs and a GUI framework over the top.
But that loose definition is also evolving, as Rick Vanover sharply pointed out that EMC’s new offering, VSPEX Blue, offers something more than what we’d traditionally (like, two weeks ago) think of as hyperconvergence.
Good stuff and good discussion.
And then the conversation turned to Microsoft. And it all went downhill. A summary of the guys’ views:
Microsoft doesn’t have a hyperconverged pony in the race, except perhaps Storage Spaces, which few like/adopt/bet on/understand
MS has ceded this battlefield to VMware
None of the cool & popular hyperconverged kids, save for Nutanix and Gridstore, want to play with Microsoft
Microsoft has totally blown this opportunity to remain relevant, and Hyper-V is too hard. Marc Farley in particular emphasized how badly Microsoft has blown hyperconvergence.
I was, you might say, frustrated as I listened to this sentiment on the drive into my office today. My two cents below:
The appeal of Hyperconvergence is a two-sided coin. On the one side are all the familiar technical & operational benefits that are making it a successful and interesting part of the market.
It’s an appliance: Technical complexity and (hopefully) dysfunction are ironed out by the vendor so that storage/compute/network just work
It’s Easy: Simple to deploy, maintain, manage
It’s software-based and it’s evolving to offer more: As the guys on the show noted, newer HCI systems are offering more than ones released 6 months or a year ago.
The other side of that coin is less talked about, but no less powerful. HCI systems are rational cost centers, and the success of HCI marks a subtle but important shift in IT & in the market.
It’s a predictable check cut to fewer vendors: Hyperconvergence is also about vendor consolidation in IT shops that are under pressure to make costs predictable and smoother (not just lower).
It’s something other than best-of-breed: The success of HCI systems also suggests that IT shops may be shying away from best-of-breed purchasing habits and warming up to a more strategic one-throat-to-choke approach ((EMC & VMware, for instance, are titans in the industry, with best-in-class products in storage & virtualization, yet I can’t help but feel there’s more going on than the chattering classes realize. Step back and think of all the new stuff in vSphere 6, and couple it with all the old stuff that’s been rebranded as new in the last year or so by VMware. Of all that ‘stuff’, how much is best of breed, and how much of it is decent enough that a VMware customer can plausibly buy it and offset spend elsewhere?))
It’s some hybrid of all of the above: HCI in this scenario allows IT to have its cake and eat it too, maybe through vendor consolidation, or cost-offsets. Hard to gauge but the effect is real I think.
((As Vanover noted, EMC’s value-adds on the VSPEX Blue architecture are potentially huge: if you buy VSPEX Blue, you get backup & replication, which means you don’t have to talk to, or cut yearly checks to, Commvault, Symantec, or Veeam. I’ve scored touchdowns using that exact same play, embracing less-than-best Microsoft products that do the same thing as best-in-class SAN licenses))
And that’s where Microsoft enters the picture as the original -and ultimate- Hyperconverged play.
Like any solid HCI offering, Microsoft makes your hardware less important by abstracting it, but where Microsoft is different is that it scopes supported solutions to x86 at large. VMware, in contrast, only hands out EVO:RAIL stickers to hardware vendors who dress x86 up and call it an appliance, which is more or less the Barracuda Networks model. ((I’m sorry. I know that was a cheap shot, but I couldn’t resist))
With your vanilla, Plain Jane whitebox x86 hardware, you can then use Microsoft’s Hyperconverged software system (or what I think of as Windows Server) to virtualize & abstract all the things from network (solid NFV & evolving overlay/SDN controller) to compute to storage, which features tiering, fault-tolerance, scale-out and other features usually found in traditional SAN systems.
But it doesn’t stop there. That same software powers services in an enormous IaaS/PaaS cloud, which works hand-in-hand with a federated productivity cloud that handles identity, messaging, data-mining, mail and more. The IaaS cloud, by the way, offers DR capabilities today, and you can connect to it via routing & ipsec, or you can extend your datacenter’s layer 2 broadcast domain to it if you like.
On the management/automation side, I understand & sympathize with the ignorance of non-’softies. Microsoft fans enthuse about PowerShell so much because it is -today- a unified management system across a big chunk of the MS stack, either masked by GUI systems like System Center & Azure Pack or exposed as naked cmdlets. PowerShell alone isn’t cool, though; PowerShell & Windows Server aligned with truly open management frameworks like CIM, SMI-S, and WBEM is very cool, especially in contrast to feature-packed but closed APIs.
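For the non-’softies, here’s a tiny, hedged taste of those verb-noun cmdlets riding on the open CIM plumbing (PowerShell 3+; the remote example assumes a hypothetical ‘server01’ with WinRM enabled):

```powershell
# Query the standards-based CIM repository on the local box: same
# verb-noun grammar you'd use against storage, network, or Hyper-V.
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime

# The same call reaches remote machines over WS-Man; 'server01' is a
# placeholder for one of your own hosts.
# Get-CimInstance -ComputerName server01 -ClassName Win32_LogicalDisk
```

The point isn’t the one query; it’s that the identical pattern spans the whole stack, GUI optional.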
On the cost side, there’s even more to the MS hyperconverged story: customers can buy what is in effect a single SKU (the Enterprise Agreement) and get access to most, if not all, of the MS stack.
Usually, organizations pay for the EA in small, easier-to-digest bites over a three-year span, which the CFO likes because it’s predictable & smooth. ((Now, of course, I’m drastically simplifying Microsoft’s licensing regime and the process of buying an EA, as you can’t add an EA to your cart & check out; it’s a friggin’ negotiation. And yes, I know everyone hates the true-up. And I grant that an EA only answers the software piece; organizations will still need the hardware, but I’d argue that de-coupling software from hardware makes purchasing the latter much, much easier, and how much hardware do you really need if you have Azure IaaS to fill in the gaps?))
Are all these Microsoft things you’ve bought best of breed? No, of course not. But you knew that ahead of time, if you did your homework.
Are they good enough in a lot of circumstances?
I’ll let you judge that for yourself, but, speaking from experience here, IT shops that go down the MS/EA route strategically do end up in the same magical, end-of-the-rainbow fairy-tale place that buyers of HCI systems are seeking.
That place is pretty great, let me tell you. It’s a place where the spend & costs are more predictable and bigger checks are cut to fewer vendors. It’s a place where there are fewer debutante hardware systems fighting each other and demanding special attention & annual maintenance/support renewals in the datacenter. It’s also a place where you can manage things by learning verb-noun pairs in Powershell.
If that’s not the ultimate form of hyperconvergence, what is?
My last two posts on Microsoft were filled with angst and despair at Microsoft’s announcement that the next gen versions of Server & System Center would be delayed until sometime in 2016. Why, I cried out, why the delay on Server, and what’s to become of my System Center, I wondered?
Well, I was wrong on all that, or perhaps I was only a little bit right.
There was a shakeup, but it wasn’t Nadella who had angrily overturned a gigantic redwood table at System Center HQ, spilling Visio shapes & System Center management packs as he did so. Rather, it was Mr. Windows himself, the Most Distinguished of Distinguished Technical Fellows, Dr. Jeffrey Snover, who had shaken things up.
Yes. The Padre of PowerShell himself filled in the gaps for me on why System Center & Windows Server were delayed, during a TechDays Online session one day after my last post.
During that talk, he announced that the Windows Server Team has been meshed with the System Center Team and, even better, the Azure team. Hot dog.
[Snover] explained that the System Center team and the Windows Server team are now “a single organization,” with common planning and scheduling. He said that the integration of the two formerly separate organizations isn’t 100 percent, but it’s better than it’s been in the past. The team also takes advantage of joint development efforts with the Microsoft Azure team, he added.
That’s outstanding news in my view.
Microsoft’s private|hybrid|public cloud story is second to none as far as I’m concerned. No one else offers deep integration between a cutting-edge public cloud (Azure) and your on-prem legacy infrastructure stack.
Yet that deep integration (not speaking of AAD Sync & ADFS 3 here) was becoming confused and muddled, with overlap between the older tools (System Center) and newer tools like Desired State Configuration, mixed in with Azure Pack, an on-prem/cloud management engine.
It sounds to me like Snover’s going to put together a coherent strategy using all the tools, and I can’t think of a better guy to do the job.
But what of Windows server?
It’s getting Snovered too, but in a way that’s not as clear to me. Again, Redmond mag:
The next Windows Server product will be deeply refactored for cloud scenarios. It will have just the components for that and nothing else, Snover explained. Next, on top of that, Microsoft plans to build a server that will be the same as the Windows Servers that organizations currently use. This server will have two application profiles. One of the application profiles will target the existing APIs for Windows Server, while the other will target the subsets of the APIs that are cloud optimized, Snover explained. On top of the server, it will be possible to install a client, he added. This redesign is happening to better support automation, he explained.
I watched most of Snover’s talk, took a few days to think about it, and still have no idea what to make of the high-level architecture slide below that flashed on screen briefly:
Some thoughts that ran through my head: is the cloud-optimized server akin to CoreOS, with active/passive boot partitions, something that will finally make Patch Tuesday obsolete? One could hope that with further abstraction, we’ll get something like that in Windows Server vNext.
In some sense, we already have parts of this: if you enable the Hyper-V feature on a bare-metal computer, you emerge, after a few reboots, running a Windows virtual machine atop a Type-1 Hypervisor.
Big deal, right? Well, Snover’s slide seems to indicate this will be the default state for the next generation of Windows Server, but more than that, it seems to indicate that what we think of as the Type-1 hypervisor is getting a bunch of new features, like container support.
We knew Docker support was coming, but at this level, and almost indistinguishable from the hypervisor itself?
That’s potentially all kinds of awesome.
Interestingly, Server Roles & Features look like they’re being recast into a “Client” level that operates above a Windows Server.
Which, if we continue down the rabbit hole, means we have to ask the question: If my AD Domain Controller or my RemoteApp session host farm servers are now clients, what are they running on? It certainly doesn’t seem to be a Windows server anymore, but rather a kind of agnostic compute fabric, made up of virtual “Servers” and/or “Containers” operating atop a cloud-optimized server running on bare metal…an agnostic computing ((Damn straight, had to work that in there)) fabric that stretches across my old on-prem Dells all the way up to the Azure cloud…right?!?
I’m like four levels deep into Jeffrey Snover’s subconscious so I’ll stop, but suffice it to say, the delay of Windows Server & System Center appears to be justified and I can’t wait to start testing it in 2016.
I couldn’t help but cheer and raise a few virtual fist bumps to the Windows Server 2012 and 2012 R2 team as I read the latest report out of some industry group or other. Hyper-V 3.0, you see, is cracking along with just a tick under one-third of the hypervisor market.
Meanwhile, VMware -founder of the genre, much respect for the Pater v-Familias- is running about two-thirds of virtualized datacenters.
And that’s just fine with me.
Hyper-V is still in a distant second place. But second place never felt as good as it does right now. And we’ve got some vMomentum on our side, even if we don’t have feature parity, as I’ve acknowledged before.
Hyper-V is up in your datacenter and it deserves some V.R.E.S.P.E.C.T.
Testify IDC, testify:
A growing number of shops like UMC Health System are moving more business-critical workloads to Hyper-V. In 2013, VMware accounted for 53 percent of hypervisors deployed last year, according to data released in April by IT market researcher IDC. While VMware still shipped a majority, Hyper-V accounted for 29 percent of hypervisors shipped.
The Redmond Magazine report doesn’t get into it beyond some lame analyst comments, but let me break it down for you from a practitioner point of view.
Why is Hyper-V growing in marketshare, stealing some of the vMomentum from the sharp guys at VMware?
Four reasons from a guy who’s worked it:
The Networking Stack: It’s not just that Windows Server 2012 & 2012 R2 and, as a result, Hyper-V 3.0, have a better network stack than VMware does. It’s that the Windows Server team rebuilt the entire stack between 2008 R2 & Server 2012. And it’s OMG SO MUCH BETTER than the last version. Native support for teaming. Extensible VM switching. Superb layer 3 and layer 2 cmdlets. You can even do BGP routing with it. It’s built to work, with minimal hassle, and it’s solid on a wide range of NICs. I say that as someone who ran 2008 R2 Hyper-V clusters, then upgraded the cluster to 2012 in the space of about two weekends. Trust me, if you played around with Windows Server 2008 R2 and Hyper-V and broke down in hysterics, it’s time for another look.
SMB 3.0 & Storage Spaces/SOFS…don’t call it CIFS, and also, it’s our NFS: There’s a reason beyond the obvious why guys like Aidan Finn, the Hyper-Dutchman, and DidierV are constantly praising Server Message Block Three dot Zero. It kicks ass. Out of the box, multichannel is enabled on SMB 3.0, meaning that anytime you create a Hyper-V-Kicks-Ass file share on a server with at least two distinct IP addresses, you’re going to get two distinct channels to your share. And that scales. On Storage Spaces and its HA (and fault-tolerant?) big brother, Scale-Out File Server: what Microsoft gave us was a method by which we could abstract our rotational & SSD disks and tier them. It’s a storage virtualization system that’s quite nifty. It’s not quite VSAN, except that both Storage Spaces/SOFS & VSAN seem to share a common cause: killing your SAN.
Only half the licensing headaches of VMware: I Do Not Sign the Checks, but there’s something to be said for the fact that the features I mention above are not separate SKUs. They are part & parcel of Server 2012 R2 Standard. You can implement them without paying more, without getting sign-off from Accounts Payable or going back to the well for more spend. Hyper-V just asks that you spend some time on TechNet, but doesn’t ask for more $$$ as you build a converged virtual switch.
It’s approachable: This has always been one of Microsoft’s strengths, and now, with Hyper-V 3.0, it’s really true. My own dad -radio engineer, computer hobbyist, the original TRS-80 fan- is testing versions of radio control system software within Windows 7 32-bit & 64-bit VMs right from his Windows 8.1 Professional desktop. On the IT side: if you’re a generalist with a Windows server background, some desire to learn & challenge yourself, and, most importantly, you want to Win #InfrastructureGlory, Hyper-V is a tier-one hypervisor that’s approachable & forgiving if you’re just starting out in IT.
It’s also pretty damn agnostic. You can now run *BSD on it, several flavors of Linux, and more. And we know it scales: Hyper-V, or some variant of it, powers the Xbox One (A Hypervisor in Every Living Room achieved), it can power your datacenter, and it’s what’s in Azure.
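To put a little flesh on the networking & SMB points above, here’s a rough, hedged sketch of the converged-switch pattern on a hypothetical two-NIC Server 2012 R2 host. All the names (‘NIC1’, ‘ConvergedTeam’, and so on) are made up; run it elevated, in a lab, and not over the NIC you’re currently RDP’d in on:

```powershell
# Team the two physical NICs; Dynamic load balancing is 2012 R2's default.
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# One external VM switch on top of the team, shared with the parent OS,
# with QoS by relative weight rather than absolute bandwidth.
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'ConvergedTeam' `
    -AllowManagementOS $true -MinimumBandwidthMode Weight

# Carve a host vNIC for live migration traffic off the same switch.
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'ConvergedSwitch'

# And on a file share host, watch SMB 3.0 multichannel light up both paths:
Get-SmbMultichannelConnection
```

None of that is a paid SKU; it’s all in-box in Server 2012 R2 Standard, which is rather the point.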
An engineer in a VMware shop that’s using VMware’s new VSAN converged storage/compute tech had a nearly 12-hour outage this week. He reports it in vivid detail on Reddit, making me feel like I’m right there with him:
At 10:30am, all hell broke loose. I received almost 1000 alert emails in just a couple minutes, as every one of the 77 VM’s in the cluster began to die – high CPU, unresponsive, applications or websites not working. All of the ESXi hosts started emitting a myriad of warnings, mostly for high CPU. DRS attempted to start migrating VM’s but all of the tasks sat “In progress”. After a few minutes, two of the ESXi hosts became “disconnected” from vCenter, but the machines were still running.
Everything appeared to be dead or dying – the VM’s that didn’t immediately stop pinging or otherwise crash had huge loads as their IO requests sat and spun. Trying to perform any action on any of the hosts or VM’s was totally unresponsive and vCenter quickly filled up with “In progress” tasks, including my request to turn off DRS in an attempt to stop it from making things worse.
I’m a Hyper-V guy and (admittedly) barely comprehend what DRS is, but wow. I’ve got 77 VMs in my 6-node cluster too. And I’ve been in that same position, when something unexpected…rare…almost impossible to wargame…happens and the whole cluster falls apart. For me it was an ARP storm in the physical switch, thanks in part to an immature understanding of 2008 R2’s virtual switching.
I’m not ashamed to say that in such situations intuition plays a part. Logs are an incomprehensible firehose, not useful, and may even distract you from the real problem. Your ops manager VM, if stored within the cluster (cf. the observer effect), is useless. And so, what do you have?
You have what lots of us have, no matter the platform. A support contract. You spend valuable minutes explaining your situation to a guy on the phone who handles many such calls per day. Minutes, then a half hour, then a full hour tick by. The business is getting restless & voices are being raised. If your IT group has an SLA, you’re now violating it. Your pulse is rising, you’re sweating now.
So you escalate. You engage the sales team who sold you the product…you’re desperate. This guy got a vExpert on the phone. At times, I’ve had MVPs helping me. Yet with some problems, there are no obvious answers, even for the diligent & extraordinary.
But if you’re good, you’ve got a keen sense of what you know versus what you don’t know (cf. Donald Rumsfeld for the win), and you know when to abandon one path in favor of another. This engineer knew exactly the timing of his outage…what he did, when he finished the work he did, and when the outage started. Maybe he didn’t have it down in a spreadsheet, and proving it empirically in court would never work, but he knew: he was thinking about what he knew during his outage, and he was putting all his knowns and unknowns together, building a model of the outage in his head.
I feel simpatico with this guy…and I’m not too proud to say that sometimes, when nothing’s left, you’ve got to run to the server room (if it’s near, which it’s not in my case or in this engineer’s case, I think) and check the blinky lights on the hard drives of each of your virtualization nodes. Are they going nuts? Does anything look odd? The CPUs are redlined and the PuTTY session on the switch is slow…why’s that?
Is this signal, or is this noise?
Observe the data, no matter how you come by it. Humans are good at pattern recognition. Observe all you can, and then deduce.
Bravo to this chap for doing just that and feeling -yes feeling at times- his way through the outage, even if he couldn’t solve it.
If you haven’t seen the opening post in this series, I encourage you to read it. The TL;DR version is this: I’m an IT guy in Infrastructure supporting the Microsoft stack, specializing mostly in virtualization & storage. I’ve been in IT for almost 15 years now and I love what I do, but what I do is being disrupted out of existence by the Microsoft cloud.
Cloud Praxis is my response. It’s a response addressed to my situation and to other IT guys like me. It’s a response & a method that’s repeatable, something you can practice, hone, and master. That’s how I learn: hands-on experimentation. Like the man Cicero said at the top of the mission patch above, “Constant practice devoted to one subject often outdoes both intelligence and skill.”
Above all, Cloud Praxis is a recognition that 1) the Microsoft cloud is real & it’s here to stay, 2) my skills are entirely based in the old on-prem model, 3) I’d better adapt to the new regime, lest I find myself irrelevant, and 4) it’s urgent that I tackle this weakness in my portfolio; I can’t wait for my workplace to adopt the cloud, I need some puffy cloud stuff in my CV post-haste, not next year or in two years.
This is how I did it.
It may not be the TechNet way, or the only way, but it was my way. And I’m sharing it with you because maybe you’re like me: a mid-career IT generalist with a child partition at home, perhaps a little nervous about this cloud thing yet determined to stay competitive, employable, and sharp. Or maybe you are just a fellow seeker of #InfrastructureGlory.
If that’s the case, join me; I’ll walk you through the steps I took to get a handle on this thing.
Component | What you need | Cost | Free eval? | Notes
Compute/Storage | A PC with at least 16GB RAM & ethernet | Depends | No | Needs to be virtualization capable
Compute/Hypervisor | Windows 8.1 Pro or Ent | $200 or free 90-day eval | Yes | 2012 R2 eval works too
VM OS | Windows 2008 R2-2012 R2 Standard | $0 with 90-day eval | Yes | Timer starts the day you install
Network | Always-on high-speed internet at home/work | - | No | Obviously
The very first step on our path to #InfrastructureGlory in the Microsoft Cloud is this: we need to build ourselves some on-prem infrastructure of sufficient size & scope to simulate our workplace infrastructure.
The good news is that the very same technology that revolutionized Infrastructure 4-5 years ago (virtualization) is now available downmarket, so downmarket in fact that you can build an inexpensive yet capable virtualization lab on a cheap consumer-level PC your family can use at home.
And you don’t even need a server OS on your parent partition to do this. As remarkable as it sounds, you can build a simple virtualization lab on consumer hardware running (at minimum) Windows 8.1 Pro, as long as your PC 1) is virtualization capable, 2) has sufficient memory (16GB RAM, though I suppose you could get by with as little as 8GB), 3) has some storage to spare, and 4) has a NIC (the one on your motherboard will likely work fine; just connect it to your home router).
How’s this possible?
Client Hyper-V baby. It’s the first and only feature you need to build a modest virtualization lab at home on your road to #InfrastructureGlory in the cloud. Client Hyper-V has about 60% of the features server Hyper-V has and uses a common management snap-in and cmdlets. You can’t build a converged fabric switch in Client Hyper-V nor play around with LACP and Live Migration, but at this scale, you don’t need to. You just need a place to park two or three VMs, an ethernet adapter on top of which you’ll build a virtual switch, and a bit of storage space for your VMs.
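If you’re starting from a bare Windows 8.1 Pro install, enabling Client Hyper-V and building that virtual switch takes just a couple of cmdlets from an elevated PowerShell prompt. A minimal sketch (the switch name is mine and the adapter name is an example; run Get-NetAdapter to find yours):

```powershell
# Enable Client Hyper-V (reboot required afterward)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# After the reboot: find your wired NIC, then build an external
# virtual switch on top of it so your VMs can reach your home router
Get-NetAdapter
New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```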
And some focus & intensity. I’m a virtualization guy and anything prefixed with a “v” gets me excited. It’s easy to get distracted in lab work, but my advice is to keep it simple and keep your focus where it belongs: Azure & Office 365. As much as I love virtualization, it’s just a bit player now.
Broadly outlined, here’s what you need to do once you’ve got your lab infrastructure ready.
Make with the ISO downloads!: Check with your IT management and ask about your organization’s licensing relationship with Microsoft and/or the reseller your group works with. You might be surprised by what you find; though Microsoft has stopped selling Technet subscriptions to individuals, if your IT group has an Enterprise Agreement, Software Assurance, or MSDN subscriptions, you may be able to get access to those Server products under those licensing schemes. See how far your boss lets you take this; some licenses, for instance, give you $100 worth of credit in Azure, something I’m taking advantage of right now. I am not a licensing expert though, so read the fine print, get sign-off from your boss before you do anything with licensed products and understand the limitations.
Consider your workplace Domain Functional level: If you are at 2008 functional level at work, try to get the Server 2008 ISO. If you’re at (gasp!) 2003, get that ISO if you can and start reading up on Domain Functional Levels & dirsync requirements. I see some Powershell in your future. At my work, we’re relatively clean & up to date in AD: Forest functional level is at 2012, limited only by Exchange 2010 at this point (haven’t done the latest roll-up that supports 2012 R2). The idea here is to simulate, to the greatest degree possible, your workplace-to-cloud path.
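Not sure what your functional levels are at work? Assuming you have the ActiveDirectory PowerShell module (RSAT) on a domain-joined box, a quick check looks like this:

```powershell
# Requires the ActiveDirectory module (RSAT or a domain controller)
Import-Module ActiveDirectory

# DomainMode & ForestMode tell you which Server ISOs to chase down
(Get-ADDomain).DomainMode
(Get-ADForest).ForestMode
```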
Build at least two VMs: You can follow the process as outlined here on Technet. VM1 is going to be your domain controller, so if you’re at 2008 functional level at work, build a 2008 VM. Your second VM will host dirsync and other cloud utilities. Technet says it can run 2012 R2, so you can use that. In my lab, I stood up a 2008 R2 server for this purpose.
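You can click those two VMs together in Hyper-V Manager, or sketch them out in PowerShell. The names, paths, and sizes below are just examples; adjust to whatever your hardware allows:

```powershell
# VM1: the future domain controller
New-VM -Name "LAB-DC01" -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\LAB-DC01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "LabSwitch"

# VM2: dirsync & other cloud utilities
New-VM -Name "LAB-SYNC01" -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\LAB-SYNC01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "LabSwitch"
```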
Decide on a domain name: Now for some fun. You need to think of a routable domain name for your Windows domain, unless your workplace is on a .local or other non-routable domain. My workplace’s domain is routable, so I built a routable domain in the lab, then took the optional next step: I purchased the domain name from a registrar ($15), as this most closely simulates my workplace (on-prem domain matching internet domain). You should do the same unless you’re really confident in yourself; this step is very important for the next stage as we start to think about User Principal Name attributes and synchronizing our directory with Office 365 via Windows Azure.
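Once you’ve settled on a name, promoting VM1 into a new forest on Server 2012/2012 R2 is a two-liner (on 2008/2008 R2 you’d run dcpromo instead). The domain name here is a placeholder; substitute your own:

```powershell
# Server 2012+: install the AD DS role, then stand up a new forest
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "lab.example.com" -InstallDns
```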
And that, friends, is how Daisetta Labs.net* was born. I needed a domain name. Agnostic Computing.com was taken by some jerk blogger and I wanted something fast.
In retrospect, it might have been better to use Agnostic Computing.com as my lab domain because that’s a more realistic scenario; in the real world, I gots me some internet infrastructure tied to my routable domain and a 3rd party DNS host. I also gots me some on-prem Windows domain infrastructure tied to a routable domain name tied to my Windows DNS infrastructure on-prem. If you’re rusty on DNS, this is your chance to get up to speed as it’s everything in Microsoft-land.
On the domain name itself; pick a domain name and have some fun with it but maintain a veneer of professionalism and respectability. I want you to be able to put this on your resume, which means someone might ask you about it someday. If you take the next step and buy a domain from a registrar, you’ll want a domain name you’re not ashamed to have as an SMTP address.
In Cloud Praxis #3, we’re going to take some baby steps into Office 365. Hope you check back tomorrow!
*Some readers have asked me what a Daisetta is, and why it’s a lab. Not sure how to answer that. Maybe Daisetta is the name of my first love; or my first pet dog. Perhaps it’s a street name or a place in Texas, or maybe it’s the spirit of innovation & excitement that propels me forward, that compels me to build a crazy home lab. Or maybe it’s a fugazi, fogazzi, it’s a wazzi, it’s a woozie… it’s fairy dust.