Enterprise Secrets + Privileged Account Management | CyberArk at #XFD1

Managing Enterprise Secrets & Privileged accounts has to be one of the most difficult jobs in Information Technology today, and one of the least transparent to the business. Bad guys have painted a target on admins’ backs, regulators are chomping at the bit as more consumer data is lost online, and Compliance officers are scrambling to understand the landscape and adapt to new rules from overseas. And yet the business may not even realize that unsung heroes in IT are still managing a stack of hardware & software designed to fulfill 1990s-era security models.

Take it from me: I know this pain well. Even if you do have an internal identity system, say Active Directory, it can be difficult to get all the bits from your Storage, Network, Compute & cloud systems to run a proper AAA model against your AD Forest. Even more difficult: figuring out how to audit the records of Active Directory (or NPS/RADIUS or ADFS or OAuth2/SAML glues) to present to your Compliance officers.
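
If you’ve ever tried to do that auditing by hand, you know the legwork involved. Here’s a minimal sketch of the do-it-yourself version, assuming logon auditing is enabled and you have rights on a domain controller; the DC name is a hypothetical stand-in:

```powershell
# A hand-rolled sketch, not any vendor's method: pull recent logon
# events (4624 = success, 4625 = failure) from a DC's Security log.
$dc = 'DC01'   # hypothetical domain controller name
Get-WinEvent -ComputerName $dc -FilterHashtable @{
    LogName = 'Security'
    Id      = 4624, 4625
} -MaxEvents 100 |
    Select-Object TimeCreated, Id,
        @{ n = 'Account'; e = { $_.Properties[5].Value } }  # TargetUserName field for 4624/4625
```

Now multiply that by every system in your stack that doesn’t speak Windows eventing, and you can see why the Compliance officers keep frowning.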

Yet in the background churns a constant stream of news that only raises the pessimism bar higher: Target. Anthem. Maersk. Equifax. Facebook. Marriott. The goddamned CIA and the f****** National Security Agency. I made a Visio Timeline because I was having difficulty tracking all the breaches, and I’ve run out of room! And let’s not forget that the business and your user colleagues need secrets too, as consumer technology continues to eat away at the Enterprise and as more of the economy is digitized. By 5pm most days, IT admins are just hoping to make it to retirement in 10 years without their orgs getting popped by a black hat.

Enter CyberArk. The company was founded in 1999, which is impressive to me. It’s not often you’ll find a company that’s been selling a product that handles Enterprise secrets + PAM for 20 years, at least a decade longer by my count than the popular consumer password management companies that are now sashaying their way into your Enterprise, as if they understand the challenge you’re facing. At Security Field Day 1 (#XFD1), CyberArk’s maturity & comprehension of the challenge of securing the enterprise really showed.

CyberArk’s Privileged Access Security Suite is a mature & fully-featured secrets + PAM tool. I was super-impressed with the demo their Global Director of Systems Engineering, Brandon Traffanstedt, gave us back in December 2018 in sunny San Jose. I came prepared to endure a boring password management demo; I left impressed at what I had seen, with only a single caveat.

Not only was CyberArk’s product comprehensive, it was bad-ass, with one exception. I saw:

  • An SSH session opened to a network device’s command line, with a second-factor prompt before access was granted
  • Full auditing + screen recordings of a Privileged Account accessing a protected server, just the kind of thing that reassures the business that you, as an admin, have nothing to hide, are not an ‘insider threat’ and are 100% transparent in your work.
  • Deep integration into Windows’ Win32 API, hooking into parts of the OS I’d not seen before outside of Microsoft products, including Credential Management
  • Full integration & support for MacOS
  • OAUTH2/SAML support and full support for your ADFS infrastructure
  • Cloud secrets & PAM management across AWS and (soon) Azure
  • Full support for your RADIUS infrastructure & 802.1X, whether via Microsoft’s NPS or some other solution
  • Automated credential rotation, so that you don’t have to scramble when a fellow admin changes jobs, is fired for negligence, or joins Edward Snowden in Moscow (see the sketch after this list)
  • Secure sharing of secrets among your privileged IT colleagues
  • An offline, secured, and high-entropy password in a sealed envelope you can hand to the business for peace of mind
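
That last trio of bullets is where the real value hides. For flavor, here’s a minimal sketch of a single credential rotation, assuming plain on-prem AD and the RSAT ActiveDirectory module; CyberArk’s product wraps this sort of thing in vaulting, scheduling, and a full audit trail, which is exactly what my hand-rolled version lacks:

```powershell
# A hand-rolled sketch of one rotation, not CyberArk's implementation.
Import-Module ActiveDirectory

$account = 'svc-backup'   # hypothetical privileged service account
# Build a 24-character, high-entropy password from printable ASCII
$newPassword = -join (Get-Random -InputObject ([char[]](33..126)) -Count 24)

Set-ADAccountPassword -Identity $account -Reset `
    -NewPassword (ConvertTo-SecureString $newPassword -AsPlainText -Force)
```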

I’ve been working in IT for about as long as CyberArk’s been pounding the pavement and trying to convince IT Teams to invest in Enterprise Secrets & PAM software. I was impressed…particularly because CyberArk scratches an itch that many IT Teams don’t know they have: the security costs & technical debt left by a legacy of tactical, rather than strategic, investments, the kind that leave an org in arrears in 2019’s security landscape.

For example: say you’re a mid-market IT shop in the healthcare sector that’s experienced a lot of turnover among its IT admin staff through the years. If you’re the business, you’ve watched as IT Admins come and go, and listened as they’ve pitched tactical solutions to various challenges facing the business. You’ve invested in a few, and most work well enough, but gluing them all together into a comprehensive, strategic, and business-enabling solution has been a challenge.

While your solutions are working, you’re paying a cost whether you know it or not, because more than likely the technical legwork needed to glue those solutions together into a comprehensive & auditable security framework hasn’t been done. Meanwhile, the regulators are knocking at your door, the pace of breaches quickens, and Brian Krebs’ pen is waiting to write about your company.

CyberArk is a good fit there. No, check that. It’s a *great* fit in that scenario. The product addresses threats to your business from both the inside and the outside. It protects Enterprise secrets -the very thing your admins are targeted for- while shining a bright light on your employees’ Privileged Accounts and how they are used.

It’s a product that’s far beyond anything the consumer password management companies are offering…trust me, I’ve looked at them all. It’s a true Enterprise solution. However….

I will say that one area where CyberArk felt a bit less than polished was in how they’ve architected the sharing & use of secrets with non-admin users working in the business. If we return to the healthcare example, think of a person in your business who needs credentials to log in to a state Medicaid site in order to bill the payor for a medical product.

In fairness, this is a complicated problem…while it’s in the business’ interest to control/maintain/audit all secrets, including those for third-party sites & services outside of IT’s domain, the mix of devices/browsers here is a difficult puzzle to solve. Yet it’s here that CyberArk’s product left me perplexed. They propose intercepting TLS traffic on your users’ endpoints & injecting credentials into your business users’ browsers, whatever they may be.

This seemed to me -at the ass-end of 2018- to be a poor solution. For starters, we’ll soon see TLS 1.3 across more and more websites. TLS 1.3, as my fellow Delegate Jerry Gamblin pointed out, is not something you can quietly intercept, decrypt, and inject credentials into. Indeed, other vendors in the security space seem to be steering Enterprise customers away from the expectation that we’ll be able to intercept/inspect/fiddle with TLS 1.3 connections. At best, we’ll be able to refuse TLS 1.3 connections in favor of the more Enterprise-friendly TLS 1.2, but even here, the Enterprise’s political power & ability to influence the market & standards bodies is lacking, and Google, for better & worse, rules the roost. Even Microsoft is playing second fiddle here: it announced in late 2018 that it would ditch Edge’s EdgeHTML rendering engine in favor of open-source Chromium.
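
If you want to see the wall middleboxes are about to hit, probe what a site will negotiate with your endpoint. A minimal sketch, assuming a Windows box whose .NET/Schannel stack is new enough to speak TLS 1.3; if the answer comes back Tls13, a credential-injecting intermediary has a problem:

```powershell
# Sketch: ask a site which TLS protocol it negotiates with this box.
$targetHost = 'www.example.com'   # hypothetical site
$client = New-Object System.Net.Sockets.TcpClient($targetHost, 443)
$ssl = New-Object System.Net.Security.SslStream($client.GetStream())
try {
    $ssl.AuthenticateAsClient($targetHost)
    "{0} negotiated {1}" -f $targetHost, $ssl.SslProtocol
}
finally {
    $ssl.Dispose()
    $client.Dispose()
}
```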

Secondly, CyberArk’s solution even here feels archaic. They propose that you put a middlebox in front of your users to accomplish this. This is definitely old-school, calling to mind the many nights/weekends I spent configuring & troubleshooting BlueCoat devices in server rooms across many Southern California businesses. If you’re going to tackle a problem like TLS interception, you need to think 21st century and go with a cloud interception service that follows your users around on the internet. Middleboxes often make your security posture worse, not better.

In my day job, I intercept/inspect TLS connections across several continents and on several thousand endpoints; it’s a tricky science and one that’s filled with compliance & policy questions above my paygrade. Microsoft’s move in the browser arena fills me with questions, and that’s before we consider mobile devices; so too should it fill you with questions if you are looking at CyberArk with an eye towards sharing secrets with non-admin users.

So, caveat emptor on this narrow point, friends: a significant selling point of CyberArk’s featured product (injecting secrets into an HTTPS session) may not work a year or two from now. We raised this issue at #XFD1, and CyberArk says they have a plan for it, but eyes open!

Other than that though, I was really impressed. CyberArk gets the challenge facing Enterprise IT in this Wild West era. It intuitively understands the complexities of Enterprise secrets, PAM, insider vs. outsider threats, and auditing/compliance requirements. The only place it seems to fall short is in sharing credentials from the ‘Vault’ with non-privileged users.

Check it out if:

  • You’ve got a heterogeneous stack of best-of-breed IT hardware & software and you’ve neglected integrating AAA security across that stack
  • You’re in an environment requiring heavy compliance & auditable proof across your stack against both insider & outsider threats
  • You want 2FA/MFA on old network switches, Macs, and Windows Servers
  • You want screen captures of your admin’s work on devices, servers, and services that you consider privileged
  • You’ve got cloud/SaaS management challenges even as you’ve centralized identity in on-prem Active Directory or other system

Ignore it if:

  • You’ve only ever bought Microsoft, only have Windows PCs & servers and Microsoft applications, and you have an MCSE on staff who understands Kerberos, Active Directory, NPS, RADIUS, ADFS, OAUTH2/SAML, and has configured your AD environment to comply with various regulatory statutes and compliance regimes

Disclosures
This blog post was written by me, Jeff Wilson, for publication on my blog, wilson.tech. I was not compensated by CyberArk to compose this blog post, and CyberArk did not see it prior to its publication. I learned about the CyberArk products during Security Field Day 1 (#XFD1) an event for IT, Security, and Enterprise influencers that was held in December 2018 in & around Silicon Valley, California. The Gestalt IT group paid for my airfare, accommodations, and meals during the time I was in greater San Jose, CA area. CyberArk and other sponsors paid Gestalt IT to bring Delegate influencers like me to #XFD1. 
I received no monetary compensation otherwise, save for the swag listed below.
CyberArk swag I took home:
  • A ballpoint pen
About Me: My name is Jeff Wilson. I am a 20-year IT Professional with a security focus. I hold a GSEC from the SANS Institute, as well as a Bachelor’s Degree in History & a Master’s in Public Administration, both from Cal State. I live & work in Southern California. You can reach me on Twitter @jeffwilsontech or via email at blog@wilson.tech.

Cloud Field Day 3 | Morpheus Data | #CFD3

Morpheus Data was our first sponsor at #CFD3 and, as is my custom before Tech Field Day events, I had done zero prep work on Morpheus. I had never heard of the firm, and as first-at-bat sponsors for #CFD3, they were facing 12 delegates full of energy and with decades of Information Technology experience between them. So how’d they do? I came away impressed. Let me tell you why: they have a heart for operations, and I’m an operations guy.

Morpheus Data – Background

I found Morpheus Data’s story pretty compelling when I read up on it later. The company started off more or less as an internal product inside a cost center of Bertram Capital, a private equity firm in the Bay Area. Now every company has a founding mythology, but Morpheus’s rang true to me. Here, I’ll quote from their site:

Bertram Labs is a world-class team of software developers and ops professionals whose sole purpose is to rapidly implement IT solutions to fuel the growth of the Bertram portfolio. In 2010, that team needed a 100% infrastructure agnostic cloud management platform which would integrate with the DevOps tools they were using to develop and deploy applications for a range of customers on an unpredictable mix of heterogeneous infrastructure. Such a tool didn’t exist so Bertram Labs created their own solution…

Just that phrase right there -an unpredictable mix of heterogeneous infrastructure- captures the je ne sais quoi of my success as an 18-year IT Pro. Using ratified standards sent down from on high by the greyhairs in the IETF & IEEE ivory towers, a competent IT Pro like myself can string together disparate hardware systems into something rational, because most vendors sometimes follow those standards.

But it’s very hard work.  It’s not cheap either. And that act -that integration of a Cisco PoE switch with an Aruba access point or an iSCSI storage array with a bunch of Dell servers- isn’t bringing much value to the business. Perhaps it would be different if IT Shops could just start over with a rational greenfield infrastructure design, but that’s rare in my experience because the needs of IT aren’t necessarily aligned with the needs of the business.

Morpheus Data says they grew out of that exact scenario, which is immediately familiar to me as an ops guy. I find that story pretty encouraging; an internal DevOps team working for a private equity firm was able to productize their in-house scripts & techniques and are now a separate company. Damn near inspiring!

So what are they selling?

It’s Glue, basically. But well-articulated & rational glue

Morpheus’ pitch is that their suite of products can take the pain out of managing & provisioning services from your stack of heterogeneous stuff, whether it’s on-premises, in one cloud, or in several clouds. And by taking the pain out, you can move faster and bring more value to the business.

I’m not going to get into each product because frankly, I think they’re poorly named and not very exciting (Sharepoint-esque in a way: Analytics, Governance, Automation, Evolution, Integrations). But don’t let the naming confuse or dissuade you; it’s an exciting product and the pricing model is simple to understand.

Instead, let me describe to you what I saw during Morpheus’ demo at #CFD3:

  • Performance data from on-premises virtualization servers running Hyper-V, VMware, and even Citrix’s XenServer, all in one part of the Morpheus web-based portal
  • You can drill down from each host to look at VM performance data too. Morpheus says they’re able to hook into both Hyper-V’s and VMware’s performance counters; that’s pretty awesome for a heterogeneous shop (see the sketch after this list)
  • Performance & controls over IaaS & PaaS instances in both Azure & AWS, again in the same screen
  • Menu-driven wizards that let you instantly provision a new virtual machine pre-configured for whatever service you want to run on it. Again, this could be done in the same tool, and you can pick where you want it to go
  • Cost data from each public cloud
  • Rich RBAC controls, which are very important to me from a security & integrity standpoint
  • A composable role-based interface. For example, you can let your dev team log in to Morpheus without worrying about anyone offlining a .vhdx on a Hyper-V server
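
And here’s the sketch I promised: to appreciate what Morpheus is abstracting away, consider the do-it-yourself version of just the Hyper-V piece, a minimal sketch using two well-known hypervisor counters, run as an admin on the host itself:

```powershell
# Sketch: sample Hyper-V performance counters straight from a host.
Get-Counter -Counter @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Hyper-V Dynamic Memory Balancer(*)\Available Memory'
) -SampleInterval 5 -MaxSamples 3
```

Now do the same for VMware, Azure, and AWS, normalize the results, and put a web UI on it, and you begin to see what you’re paying Morpheus for.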

This chart from their website sums up their offering nicely in comparison with other vendors in this space.

[Chart from Morpheus’ website comparing their offering with other vendors in this space]

Concluding Thoughts

I’ve worked in IT environments where purchasing has been less than rational. Indeed, I’ve worked at places where we had the very best equipment from multiple vendors, but nobody had the time or talent to integrate it all into a smooth & functional machine in service to the business.

Stepping back, the very nature of the integration puzzle has changed. I mentioned above that a competent IT Pro could stitch together infrastructure that used IETF, IEEE, W3C and other standards-based technologies. Indeed, that’s been the story of my career.

But in 2018, the world’s moved on from that, for better and worse. The world’s moved on to proprietary Application Programming Interfaces (APIs), and so I’ve moved with it, creating my own Powershell functions and Python scripts to interact with cloud-based APIs. You can do this too, given enough time & study.
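
Here’s the flavor of glue I mean, a minimal sketch against the Azure Resource Manager REST API; the subscription ID is a placeholder and I’m assuming you’ve already acquired an OAuth2 bearer token into $token:

```powershell
# Sketch: list the VMs in a subscription via the ARM REST API.
$subId = '00000000-0000-0000-0000-000000000000'   # hypothetical subscription
$uri = "https://management.azure.com/subscriptions/$subId" +
       "/providers/Microsoft.Compute/virtualMachines?api-version=2018-06-01"

$vms = Invoke-RestMethod -Uri $uri -Method Get `
    -Headers @{ Authorization = "Bearer $token" }
$vms.value | Select-Object name, location
```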

But let’s be honest: it’s hard enough to manage & integrate a heterogeneous stack of best-of-breed stuff on-premises. Now your boss comes to you and wants you to add some Azure services & Office 365. And then someone on the business side orders up some Lambdas in AWS, surprise! Or perhaps a distant IT group at your company just went and bought Cloudflare or Rackspace. If you’re still trying to solve the standards-based puzzles of yesteryear while learning how to develop scripts & tools for a world of proprietary APIs, you’re probably not bringing much value to the business.

And that’s where Morpheus sees itself slotting in nicely…they’ve done the hard work of integrating with both your legacy on-premises standards-based systems and the API-driven cloud ones, and they release new integrations ‘every two or three weeks.’ They even take requests, so if you’ve got a bespoke stack of stuff that doesn’t surface SNMP properly, you can propose Morpheus build an integration for it.

Sidenote: One of the more dev-focused delegates at #CFD3 criticized the product as too ops-friendly (“nobody cares to see all that stuff!” he said), but I had to push back on him, because details are important for ops teams, and Morpheus can surface an interface that’s safe for devs to use. And that’s why I say they’ve got a heart for operations teams.

On pricing: the products, which again have somewhat confusing names, at least offer simplified pricing. To get workloads & ‘core features’ running on a VM in your datacenter, you’ll need to spend $25k to start. That seems high to me, but you’re essentially buying a DevOps integrator & engineer who works 24/7 and doesn’t need health insurance or take vacation, which is pretty cool, and which helps you bring value to the business.

Disclosures
This blog post was written by me, Jeff Wilson, for publication on my blog, wilson.tech. I was not compensated by Morpheus Data to compose this blog post, and Morpheus did not see it prior to its publication. I learned about the Morpheus Data products during Cloud Field Day 3, an event for IT & Enterprise influencers that was held in April 2018 in Santa Clara, California. The Gestalt IT group paid for my airfare, accommodations, and meals during the time I was in Santa Clara. Morpheus and other sponsors paid Gestalt IT to bring Delegate influencers like me to #CFD3.
Morpheus Data swag I took home:
  • Cool stickers
  • A t-shirt

Storage Networking excellence in an easy-to-digest .mp3

Can’t recommend the latest Packet Pushers podcast enough. I mean, normally Packet Pushers (Where Too Much Networking Would Never Be Enough) is great, but their Storage Networking episode this week was excellent.

Whether you’re a small-to-medium enterprise with a limited budget whose only dream is getting your Jumbo frames to work end-to-end on your 1GigE network (“a 10-year-old design,” one of the panelists snarked), or you’re a die-hard Fibre Channel guy and will be until you die, the episode has something for you.
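
(And if you are that small-to-medium shop, the end-to-end jumbo frame test is at least cheap. A minimal sketch from any Windows box, with a hypothetical storage target: 8972 data bytes plus 28 bytes of IP/ICMP headers makes exactly one 9000-byte frame, and -f sets Don’t Fragment so a mis-configured hop fails loudly.)

```powershell
# Sketch: verify 9000-byte jumbo frames survive the whole path.
ping.exe -f -l 8972 192.168.10.50   # hypothetical iSCSI target
```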

Rock star line-up too: Chris Wahl, Greg Ferro and J. Metz, a Cisco PhD.

The only guy missing is Andrew Warfield of Coho Data, who blew my mind and achieved Philosopher King of Storage status during his awesome whiteboarding session at #VFD3.

Check it out.

#VFD3 Day 2 : Kicking it with Coho


So it’s a big day for the NFS and VMware guys here at #VFD3; they can’t stop talking about the VSAN announcement and the #VFD3 awesomeness that was the last two and a half hours at Coho Data with some of Silicon Valley’s great Storage Philosopher Kings.

For your Hyper-V blogger, it’s time to put on a brave face, and soldier on. Coho’s gotta launch their array (“get to startup escape velocity” as someone on twitter put it) and that means focusing on NFS first. And that’s ok; my delegate friends here seem really interested and excited by this product, and when any virtualization engineer is excited for some new tech, I’m excited with them, even if I have to return home to my tired CSVs.

So what is Coho Data? Aside from having the greatest vendor schwag present ever (I kid!) and the actual best vendor schwag present so far (Chrome bike bag with the Coho logo, seriously a nice bag, thanks!), Coho is a startup with a unique storage product.

And I mean unique. Not sure I even understand it fully.

The Coho Storage architecture, borrowed from another blogger below, looks like any other storage solution, except that it’s completely and totally different. First, it involves a software-defined switch; more or less a switching model in which you let the Coho controller push your storage packets around so that your storage is closer to your hypervisor.

It’s real software-defined switching here; even Tom Hollingsworth was tweeting his approval for the messaging around these switches. For virtualization admins who touch on and worry about storage, compute, and network, it was refreshing for me to hear that Coho’s really putting some thought & interesting tech into the switch, even if I’m wary of letting go of my precious ASICs and my show fabric utilization.

[Diagram: Coho Data rack layout reference architecture]

On the storage side, Coho sparked my interest for a few reasons: cheap, rebranded Supermicro arrays; SATA spinners; and -unlike anyone we’ve talked to at #VFD3 thus far- PCIe SSD, not SATA/SAS SSD.

Coho’s performance model isn’t RAM-enabled like the Atlantis & Pure models we saw yesterday. Nor is it a ZFS-derived model; it’s seemingly been grown organically in response to two things: the difficulty of managing and correctly using SSD, and the flexibility of cloud storage models. Coho has thought hard about maximizing SSD performance, about “not leaving any SSD performance on the table,” as the CTO put it, and in response to cloud flexibility, Coho’s model is designed to scale the same way.

Hearing my delegate colleagues talk about Coho, I’ve realized they’ve got something unique and potentially game-changing here. It’s all we could talk about on the VMBus following, and I want to congratulate Coho on the general availability of their new product, something they savvily used #VFD3 to announce today.

Ping them if: IO blending problems send you into a cold sweat, or you hate the ASIC on your switch

Set Outlook Reminder for when: They get Hyper-V SMB 3.0 support or iSCSI or OpenStack

Send them to /dev/null if: You aren’t brave enough to challenge storage paradigms

Not my finest hour in contingency planning, but it works

As Southern California is the center of the universe as far as I’m concerned, I know you’re all worried sick about me, this website, and other Southern Californians as we endure a frightening precipitation event of some kind and scale. The Live MegaDoppler 7000 StormSageRadar XXTreme v 2.0 Beta can’t tell us yet if the great California Dampening of 2014 is Noahic in nature or a $deity-punishment ripped straight from the Book of Revelation, but one thing is for certain: the water is everywhere.

Yes, it’s so wet out there that even the mighty Los Angeles River is flowing once again. It may even be navigable.*

But fret not! Let me calm your nerves. Your blogger Jeff, @agnostic_node1 on the twitters, is ok. So is the Child Partition and the Supervisor Module spouse. We’re all safe, our flood pants all fit, and we’ve got buckets and bags of some sort of pre-silicon material at the ready.

The Converged Fabric Agile DevOps ITIL Waterfall Software-Defined Lab @ Home, however, has me worried.

See I built it in the garage. The Supervisor Module would never permit such equipment inside the living spaces.

Not only is it in the garage, but it’s close to the garage door, bolted down properly to the wooden workbench.

It sits just a few inches away from being directly under the tracks that lift what’s now become a very wet garage door.

*Gulp*

I may be able to push 4,000 or 5,000 IOPS to the home-built ZFS array sitting in the garage. I’m quite confident in my ability to take my home lab, and my skillset too, to new heights. I can spin up a dozen VMs on this handsome 12U stack at once…no problem at all. I can build a lab that’s agnostic and welcoming to all type 1 and type 2 hypervisors, no discrimination here!

What I can’t do is anticipate inclement weather that seems to come at me sideways sometimes.

So what’s an Agile guy to do?

Pull out his IT MacGyver manual: bungee cord, two sticks, and some plastic sheeting:

[Photo: the rig. Agility Defined. Waterfall no more.]

Like I said, not my finest hour in contingency planning, but it’s working. So far. I won’t be putting this lab work on my resume, however.

And yes, it’s okay to laugh.

* By kayak

Free disk space > 15% = Wasted money

Your enterprise’s mileage may vary, but at every place I’ve ever worked, I’ve taken a pretty dogmatic approach to disk space utilization on VMs, especially ones hosting specialty workloads, such as Engineering or financial applications.

And that dogma is: No workload is special enough that it needs greater than 15% free disk space on its attached, non-boot volume.

This causes no end of consternation and panic among technicians who deploy & support software products.

“Don’t fence me in!” they shout via email. “Oh, give me space lots of space on your stack of spindles, don’t fence me in. Let me write my .isos and .baks till the free space dwindles! Please, don’t fence me in,” they cry.

“I can’t stand your fences. Let my IO wander over yonder,” they bleat, escalating now to their manager and mine.

Look, I get it. Seeing that the D: drive is down to 18% free space makes such techs feel a bit claustrophobic. And I mean no disrespect to my IT colleagues who deploy/support these applications. I know these applications are finicky, moody things, usually ported from a *nix world into Windows. I get it. You are, in a sense, advocating for your customer (the Engineering department, or Finance) and you think I’m getting in your way, making your job harder and your deployment less than optimal.

But from my seat, if you’ve got more than 15% free space on your attached volume in production, you’re wasting my business’ money. I know disk space is cheap, but if I gave all the specialty software vendors what they asked for when deploying their products in my stack, my enterprise would:

  • Still have a bunch of physical servers doing one workload apiece, consuming electricity and generating heat instead of being hyper-rationalized onto a few powerful hosts
  • Waste lots of RAM & disk: 400GB free on this one, 500GB free on that one, and pretty soon we’re talking about real storage space

One of the great things about the success of virtualization is that it killed off the sacred cows in your 42U rack. It gave those of us on the Infrastructure side of the house the ability to economize, to study the inputs to our stack and adjust the outputs based not on what the vendor wanted, or even what we in IT wanted, but on what the business required of us.

And so, as we enter an age in which virtualization is the standard (indeed, some would argue we passed that mark a year or two ago), we’ve seen various software vendors remove the “must be a physical server” requirement from their product literature. Which is a great thing, because I got tired of fighting that battle.

But they still ask for too much space. If you need more than 15% free on any of the attached, MPIO-based, highly-available, high performing LUNs I’ve given you, you didn’t plan something correctly. Here’s a hint: in modern IT, discrimination is not only allowed, but encouraged. I’m not going to provision you space on the best disk I have for backups, for instance. That workload will get a secondary LUN on my slow array!
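
If you want to find the offenders before your vendors do, the audit is nearly a one-liner. A minimal sketch, assuming Windows Server 2012 or later, where the Storage module’s Get-Volume ships in-box:

```powershell
# Sketch: flag non-boot volumes sitting on more than 15% free space.
Get-Volume |
    Where-Object { $_.Size -gt 0 -and $_.DriveLetter -and $_.DriveLetter -ne 'C' } |
    Where-Object { ($_.SizeRemaining / $_.Size) -gt 0.15 } |
    Select-Object DriveLetter, FileSystemLabel,
        @{ n = 'FreePct'; e = { [math]::Round(100 * $_.SizeRemaining / $_.Size, 1) } }
```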

#VFD3 or bust!

Agnostic Computing is brand new as tech blogs go, rolled out on a whim in August 2013 just to vent some angst and wax philosophical about some high-technology magic (would you believe my first post was about Sharepoint 2013? Uhhh, yeah).

My thinking in starting the site was simple: I wanted to write a blog that was as fun and as passionate as the tech debates my friends & colleagues and I enjoyed at work for years. These are debates that start innocently enough (“Check out my new 1080p Android phone”….or “Do you really buy music from the iTunes store?”) but soon escalate into a 45 minute verbal fisticuffs, where low blows & sucker punches are not only permitted, but encouraged.

The geekier the reference, the harder the punch: “That’s a user interface only the mother of Microsoft Bob could love.” “You’re just a sad and broken man because both BeOS & WebOS died and you were the only one who noticed.” “We can’t trust someone who buys music off iTunes to be able to program a switch.” “You’re acting pretty confident for a guy who broke Exchange just last month.”

Good times. I love those debates, and not just ’cause the normals don’t get them. They’re genuinely fun, so I set out to capture a bit of that spirit on this blog and, I hoped, to post some genuinely interesting stuff: a storage bakeoff between bitter rivals, a sincere, screenshot- or gifcam-heavy how-to sent from my virtualized stack to yours, and more.

And so it goes for bloggers, who, like chefs, try a little of this, test a little of that, mix it all up and then taste what’s in the pot. And most of the time, it’s forgettable at best, shame-inducing at worst.

Which makes it all the more surprising for me because apparently I’m doing something right.

You see, I’ve been invited as a delegate to Virtualization Field Day #3, in the Disneyland of High Tech, Silicon Valley, where the combined brainpower is bound to rub off on me. I mean, how can it not?

That’s right. Me. Agnostic Computing guy. Going to Enterprise Tech’s Woodstock.

If you don’t know what #VFD is, then you haven’t been paying close enough attention. From all the interviews I’ve heard with delegates from past Tech Field Days (Storage, Network, Wireless Network…it’s spreading into all our sacred sub-disciplines and dark arts; surely the ERP & SQL guys will be next), going to a TFD as a delegate puts you face to face with the companies and, more importantly, the engineers who designed the stuff you deploy, support, break, fix, and depend on to keep your enterprise running.

Notice I said engineers. Not sales people. Or not just sales people at any rate.

Deep dives, white papers, new horizons opened, the potential to leave behind painful memories of broken processes and old ways of doing things by meeting the other delegates, some of whom I’ve been reading for years…these are the things I’m looking forward to as a #VFD delegate.

Oh, and challenging vendors and discerning which product is the right one for the business, which is among the most important jobs we have as IT pros.

As a former boss of mine put it memorably: “We’re only as good as our vendors.” And he was right: whether the device in your rack is amazing and incredible, or prone to failure, or the service you’ve contracted is game-changing or more trouble than its worth, managing “the stack” and interfacing with the stack builders and stack sellers is important to your success, and the business’ success.

Two of the sponsor firms at this year’s #VFD already have me excited. I just finished buying a Nimble array at work (gamechanger! no regrets!), but I won’t lie: I’m Coho-Curious. And Atlantis Computing: sharp guys, A++++ on the blogs, would read again, eager to hear about the products.

Thanks to the Gestalt IT group (add them to your RSS feed, stat!) for the invite, and be sure to check back here -as well as at the other delegates’ blogs- for some #VFD thoughts in the weeks ahead.