Broadcom CEO Hock Tan’s line from VMware Explore 2024, “we are a business, and you should be too!”, will set the tone for the savvy I&O leader for the foreseeable future, at least as long as the VMware question remains unresolved in the minds of enterprise CIOs. As Broadcom’s purchase of VMware reminds us, it ultimately all comes down to cost, regardless of whatever aspirations we may have about connecting the global community or changing the world of IT.
The $64,000 question on everyone’s mind is: Broadcom bought VMware, so should I stay or should I go? Hock Tan and Broadcom promoted a narrative that enterprises were moving away from public cloud providers due to cost inefficiency, and that his private cloud was the best way to address this problem. Let’s attempt a cost-based analysis of that question (with a rough back-of-envelope sketch after the list):
- Platform: As distasteful as it has been to some, VMware’s move from perpetual licensing to a subscription model should have surprised nobody. In fact, the subscription party has been going on in this industry for a long time: predictable annual revenue is much more attractive to shareholders than the sometimes seemingly random rise and fall of CapEx-based sales models. The old perpetual model had an obvious customer perk, in that many customers needed fewer licenses as hardware became more powerful. Still, as beneficial as the subscription shift is to Broadcom’s bottom line, OpEx has benefits for customers too; not least, it may make it easier for enterprise customers to compare costs between VMware and its competitors.
- Hosting environment: If you’re going to run a private cloud, then it’s up to you to bring the required pieces to the party. Another benefit of this new go-to-market model is the decoupling of VMware’s software price from hosting or OEM vendors. In fact, your VCF costs will be based only on the particulars of your VCF bill of materials. The types of hosting environments are as follows:
- DIY: This means buying your own compute, storage, and networking solutions and finding somewhere safe, air-conditioned, powered, connected, and redundant to put it all. Generally purchased on a significant CapEx budget, with a flexible refresh timeline and OpEx implications that have historically been hard to quantify, this option also leaves maintenance, updates, and periodic refresh projects on your plate.
- MSP: Between OVH Cloud, Rackspace, Aptum, and more, there are dozens if not hundreds of very competent managed service providers (MSPs) who can offer capacity elasticity while letting you retain title to bare metal. Many of these MSPs will take care of everything up to the hypervisor layer, leaving you and your team free to focus on higher-value activities. This is the most complete and predictable OpEx model.
- CSP: Google Cloud, AWS, and Azure all allow you to park and auto-scale your VCF instances onto their bare metal. This is OpEx through and through, with only a little imprecision depending on how you manage and scale your VCF estate.
- Staff: The 1980s were a time when any enthusiast could crack open a manual and become an expert through trial and error. Unfortunately, those days are long behind us, and more complicated platforms that run ever more critical workloads require operators with specialized training. These operators should have the knowledge not only for operational efficiency, but also to help address very real security concerns. Therefore, hard dollar costs for courses and soft dollar costs for technician time must be budgeted and accounted for. As a small boon to customers, VMware has made extensive training and professional services available for free with your subscription.
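To make the framework above concrete, here’s the promised back-of-envelope sketch, a minimal three-year TCO comparison in Python. Every figure in it is a hypothetical placeholder, not vendor pricing; swap in your own core counts, quotes, and fully loaded staff costs before drawing any conclusions.

```python
# Back-of-envelope, three-year TCO comparison of the hosting options above.
# Every number is a hypothetical placeholder, not vendor pricing.

YEARS = 3
CORES = 512  # hypothetical size of the VCF estate

subscription_per_core = 350          # platform: $/core/year (placeholder)
hosting = {                          # annual hosting cost by model (placeholders)
    "DIY": 250_000,                  # space, power, maintenance, refresh amortization
    "MSP": 320_000,                  # managed bare metal up to the hypervisor
    "CSP": 400_000,                  # hyperscaler bare-metal capacity
}
staff = {                            # FTEs needed to run each model (placeholders)
    "DIY": 3.0,
    "MSP": 1.5,
    "CSP": 1.0,
}
fte_cost = 150_000                   # fully loaded $/FTE/year (placeholder)

for model in ("DIY", "MSP", "CSP"):
    platform = subscription_per_core * CORES
    annual = platform + hosting[model] + staff[model] * fte_cost
    print(f"{model}: ${annual:,.0f}/year, ${annual * YEARS:,.0f} over {YEARS} years")
```

Even toy numbers like these make the trade-off visible: DIY trades lower hosting bills for more FTEs, while the MSP and CSP models shift that labor into the hosting line.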
This gives us a perfectly fine high-level model to compare the costs of running VMware in the IT business. You’ll even see this type of TCO analysis in the moralizing of many analyst and vendor PR firms. That said, we’ve left out one key element in our framework: what is the opportunity cost of staying with or leaving VMware?
Before you click away from this article since I’ve introduced such a rabbit hole, trust me when I say I have no intention of revisiting the server consolidation or PC refresh calculations that chipmakers made popular in the 1990s. I can’t recall a single follow-up case study where a fleet of 5,000 new PCs reduced FTE headcount in the enterprise. Did anyone at Intel actually think that spending $2,000 on a new PC for each employee would save them four hours a week and increase overall productivity by 10%? Well, in narrow, task-specific use cases, sure! And it’s exactly that level of specificity I’m encouraging you to explore here. There are aspects of the choice of your next-generation infrastructure platform that have, if not unique, then very contextually specific implications for your enterprise. Consider, but don’t limit yourself to, the following elements:
- Operating model: Form follows function, and the function of VCF is the private cloud. However, is the cloud the best format for your enterprise IT operating model? Many organizations have blunted their DevOps implementations due to an appetite for strong central oversight. In fact, on-demand resourcing is what led to what Hock described as public cloud “PTSD” and egregious overspending. Does your organization have the business need to push new code into production once a day?
- Ecosystem: Isn’t it funny how application development moved toward discrete and independent microservices while infrastructure moved toward all-encompassing platforms? Platforms are sticky, and you can be sure that vendors will use this to your disadvantage. Single-point-of-failure and vendor management considerations aside, how is the reasonable IT leader to make a strategic decision between Serverless, .NET, BigQuery, and PrivateAI? Standardizing on a single technology stack has a host of advantages, but can you make a bold strategy like that work without being second-guessed every three months?
- Transition: Even asking and properly answering the question of whether to switch from VMware will come at a cost, between research time, consulting services, and proofs of concept (PoCs). Assuming you do choose to switch, you then have the additional one-time costs of migrating your data and virtual machines (VMs) from one platform to the other (see the quick payback sketch after this list). It all comes down to the question of how your enterprise quantifies and evaluates risk.
- Control: There is a reason why we don’t outsource everything. Contractual controls for compliance objectives are materially different than operational or technical ones. What is the value you get from sharing day-to-day task ownership? Are you even allowed to share access to these assets? Is your enterprise looking to follow a suite of industry-standard best practices, or does your team have the insight to build a unique operational framework? Do your backup and monitoring solutions differentiate you from your competitors and give you a competitive advantage?
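Transition costs in particular lend themselves to a quick sanity check. Here’s the payback sketch I promised: a minimal calculation, with purely hypothetical inputs, of how long the one-time cost of switching takes to pay for itself out of any run-rate savings.

```python
# Payback-period sketch for switching platforms.
# All inputs are hypothetical; plug in your own estimates.

one_time_costs = {
    "research and PoCs": 120_000,
    "consulting services": 200_000,
    "VM and data migration": 350_000,
    "staff retraining": 80_000,
}

current_annual_run_rate = 1_400_000  # staying put (placeholder)
target_annual_run_rate = 1_100_000   # after the switch (placeholder)

switch_cost = sum(one_time_costs.values())
annual_savings = current_annual_run_rate - target_annual_run_rate

if annual_savings <= 0:
    print("No run-rate savings: the switch never pays for itself.")
else:
    print(f"One-time switch cost: ${switch_cost:,}")
    print(f"Annual savings:       ${annual_savings:,}")
    print(f"Payback period:       {switch_cost / annual_savings:.1f} years")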
During VMware Explore 2024, VMware by Broadcom claimed that with VCF you can achieve:
- 61% faster workload deployment
- 34% lower infrastructure costs
- 66% quicker data loss incident resolution
- 564% ROI in three years
I guess it’s time to go ask for that raise! I eagerly look forward to a retrospective case study with auditable data and conclusions, or forward-looking quarterly guidance from a publicly traded company that cites these figures. But for now, IT leaders do at least have a way to dissect this question and push back on wild, headline-grabbing vendor assertions.
Analysts and vendors often rely on general truths because they create products meant for a broad audience, which means those products are not optimized for any specific reader. Enterprise and corporate IT leaders are not bound by these limitations. Hock Tan suggests that IT should behave more like a business, so let’s do that. The first rule of negotiation is to counter, refute, or debunk the other side’s arguments, and you possess intimate knowledge of the specific details of your own organization. While a vendor might claim that a $1 million spend will yield a $5 million return, would any of us agree with that ROI figure?
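The arithmetic is worth spelling out. If ROI is defined the conventional way, (benefits − costs) / costs, then the 564% three-year figure above implies roughly $5.64 million in net benefit on a $1 million spend, in the same ballpark as that $5 million return. Here’s a minimal sketch for stress-testing a claim like that against numbers you can actually defend; the estimates in it are hypothetical.

```python
# Stress-test a vendor ROI claim against your own estimates.
# ROI = (total benefits - total costs) / total costs

def roi(total_benefits: float, total_costs: float) -> float:
    return (total_benefits - total_costs) / total_costs

spend = 1_000_000                     # the hypothetical $1M spend
claimed_roi = 5.64                    # the 564% three-year figure

print(f"Claimed net benefit: ${claimed_roi * spend:,.0f} over three years")

# Now the same formula with benefits you could defend to your CFO
# (a purely hypothetical estimate):
our_benefits = 1_600_000
print(f"ROI on our own numbers: {roi(our_benefits, spend):.0%}")
```

If the defensible number comes out at a fraction of the claimed one, you’ve just found your negotiating leverage.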
The second rule is to deny the premise of your opponent’s argument. Hock Tan wants you to believe the best way to achieve data protection is with a big VMware wrapper around all your workloads. I’m suggesting that data protection and portability go hand in hand. Organizations should focus on their ability to achieve data resilience for critical workloads, regardless of where those workloads live. An organization’s technical ecosystem will evolve continuously over the business’s lifespan, particularly in this age of exponential IT. Don’t let your data protection strategy become the way a vendor locks you into their platform.
General truths have their place but should never overshadow your unique understanding of your organization. Would you truly replace your informed perspective with a vendor’s opinion and jeopardize your team’s trust? I doubt it.
Veeam gives customers the data portability they need to move their workloads cross-platform or cross-hypervisor. This is true whether you’re moving to or from any hypervisor, to or from any of the three major cloud providers, or bringing a cloud workload back on-premises.
For more information to help with your evaluation, visit our migration resources page.