
Every server manufacturer has a server TCO tool, of sorts. Why do I say “of sorts”? Because rather than a straightforward approach to TCO, some tools color the input parameters with fixed pre-sets, conditions and assumptions. Certainly every tool has some assumptions and pre-sets; they just need to be applied equitably across all scenarios and all vendors. If not, you get results that “…would strain credulity…,” in the immortal words of Captain Barbossa [Pirates of the Caribbean – At World’s End; my daughters love these movies. OK, me too.].

TCO analyses are an important part of making strategic planning decisions for data centers, and it is critical that those analyses are credible, thorough, and fair. Results from tools that aren’t are worse than useless. They deliver inaccurate information and analyses to decision makers, and as a result can put businesses at a competitive disadvantage: squandering funds and wasting time. BAD (Best Available Data) decisions originate with bad data from bad analyses.

A server TCO tool should not be limited to server cost modeling. Rather, it should:

1) Accurately represent server and related architecture cost

2) Incorporate feature and functionality modeling – this is all about how compute, bandwidth and management scale at the individual chassis, single domain and extended domain levels; and,

3) Do a fair and reasonable analysis.

“Fair and reasonable” pretty much begs for a definition. To do a meaningful analysis of a TCO tool, we need to understand what fair means. It is not complicated, nor should it be. If being fair is a complicated process, then there are likely monsters under the bed.

Below is what I view as fair and meaningful.

  • Any assumptions and pre-sets should be applied equally across all vendors and scenarios. For Cisco TCO tools, the only pre-sets are tables supplied by Intel which estimate the additional ‘cost per server’ for growth in an existing environment. This provides a baseline for “old vs. new” scenarios. These tables are supplied by Intel to all server vendors upon request, and they do not apply to the cost of new servers or to a new environment. Since this existing-environment baseline is used for comparison against all new servers and new environments, regardless of vendor, it meets the “fair and reasonable” criterion.
  • All pricing, not just servers, should be compared retail to retail (or MSRP, manufacturer’s suggested retail price) first, and the tool should then allow you to apply the appropriate discounts to all vendors. Preset, fixed discounting in TCO tools, and the selective use of anything other than retail or MSRP as the baseline for comparison, is dishonest at best.
  • TCO tools should be able to model Servers, Networking and Management. This is important because servers are not stand-alone entities. They are part of a system that provides a product: data processing and access. Think of it like buying a car. Would you only look at the motor, or would you include the electrical system, drive train, steering, etc.? Driver interaction and control (management) is a critical part of a car, so make sure it is included here as well.
  • CapEx accounts for only about 20% (frequently less) of an IT department’s Total Cost of Ownership for its servers [for more on this, see Blade Server TCO and Architecture – You Cannot Separate Them]. For this reason, it is critical that a TCO modeling tool be able to capture OpEx variables in the analysis: particularly, the impact of systems management features on the operational efficiency of IT staff. A simple sketch of that CapEx/OpEx split follows this list.
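
To make that CapEx/OpEx point concrete, here is a minimal sketch of a multi-year server TCO calculation in Python. Every figure and cost category is a made-up placeholder, not output from any vendor tool; the takeaway is simply that once you model several years of operation, the operational line items dwarf the purchase price.

    # Minimal, illustrative server TCO sketch -- every figure is a made-up placeholder.
    YEARS = 5
    SERVERS = 100

    # CapEx (one-time, per server): the server itself plus attributed network gear.
    capex_per_server = 8_000 + 1_500

    # OpEx (annual, per server): power/cooling, space, support/licensing, admin labor.
    power_cooling = 1_000
    space = 400
    support_licensing = 1_600
    admin_hours = 60
    hourly_rate = 80
    opex_per_server_per_year = power_cooling + space + support_licensing + admin_hours * hourly_rate

    capex_total = SERVERS * capex_per_server
    opex_total = SERVERS * opex_per_server_per_year * YEARS
    tco = capex_total + opex_total

    print(f"CapEx: ${capex_total:>12,}  ({capex_total / tco:.0%} of TCO)")
    print(f"OpEx:  ${opex_total:>12,}  ({opex_total / tco:.0%} of TCO over {YEARS} years)")
    print(f"TCO:   ${tco:>12,}")

With these placeholder numbers, CapEx lands at roughly 20% of the five-year total, which is why a tool that models only acquisition cost misses most of the picture.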

If all servers had an identical architecture and feature set, you wouldn’t be doing an analysis. Because there are architectural differences between vendors, an analysis should allow you to take the factors above into account. These differences can include server and network architecture and capabilities, and management (including management interfaces, software, and the effect of server configuration portability and policy management).

To do a reasonable TCO analysis, you should be able to specify inputs that accurately reflect both the existing and the future state of your data center. What you don’t want is input restrictions dictated to you by tool limitations or pre-defined assumptions. Supplying your own data points definitely makes an analysis more complex, but it gives you valid results, eliminating the “smoke and mirrors” effect with which many server TCO tools are burdened.

Cisco has several UCS TCO tools that meet the fair and reasonable requirement. The Cisco UCS TCO/ROI Advisor does a good job of modeling the potential savings that UCS can deliver versus your existing architecture, for both rack and blade servers. If you just want a quick look at results with minimal inputs, you can get that. Or you can dig into the details via the “Assumptions” tab and get quite granular in modeling your data center: servers, networking, and management tasks.

When you want to really get into the details of an analysis, the Cisco UCS TCO Advanced Advisor tool can do that, with your data center information. The tool provides separate inputs for your existing environment, as well as for multiple vendor alternatives for the new solution. It has over 300 “knobs” for inputs, including up to two layers of switching in addition to blade chassis I/O modules, with varying I/O protocols if you wish (1GE, 10GE, FCoE and Fibre Channel), as well as the management network. This is in addition to blade and rack server modeling: server configuration, user-specified server consolidation and virtualization rates, and server network ports and types. Variables on the OpEx side include data center power and space costs, staff costs, time spent on various tasks, and many other inputs, including multiple user-defined OpEx and CapEx costs. The tool uses the same baseline data discussed earlier to compare the existing environment against future multi-vendor architectures. The UCS TCO Advanced Advisor tool requires the assistance of your Cisco or Cisco Partner team. Please reach out to them today for a customized TCO/ROI analysis.
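
As a rough illustration of the kind of modeling such inputs enable, here is a hypothetical sketch comparing two new-environment alternatives against the same existing footprint. The vendor names, consolidation ratios, prices, discounts, and labor figures are all invented for illustration; they are not defaults or outputs of the Cisco tool.

    # Hypothetical comparison of new-environment alternatives -- all values are invented.
    from dataclasses import dataclass
    import math

    @dataclass
    class Alternative:
        name: str
        consolidation_ratio: float   # existing servers replaced per new (virtualized) server
        server_price: float          # new server CapEx at retail/MSRP, before discount
        discount: float              # user-applied discount, same treatment for every vendor
        network_per_server: float    # attributed switching / I/O module cost
        admin_hours_per_year: float  # annual management effort per new server

    EXISTING_SERVERS = 400
    YEARS = 5
    HOURLY_RATE = 80

    alternatives = [
        Alternative("Vendor A blades", 8, 12_000, 0.25, 2_000, 40),
        Alternative("Vendor B racks",  6,  9_000, 0.25, 3_000, 55),
    ]

    for alt in alternatives:
        new_servers = math.ceil(EXISTING_SERVERS / alt.consolidation_ratio)
        capex = new_servers * (alt.server_price * (1 - alt.discount) + alt.network_per_server)
        opex = new_servers * alt.admin_hours_per_year * HOURLY_RATE * YEARS
        print(f"{alt.name}: {new_servers} servers, CapEx ${capex:,.0f}, "
              f"{YEARS}-yr OpEx ${opex:,.0f}, TCO ${capex + opex:,.0f}")

Note that the discount is a user-supplied knob applied identically to both vendors, and the consolidation ratio, not just the sticker price, drives both the CapEx and the ongoing management cost; that is exactly the kind of interaction a server-only calculator cannot capture.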

Caveat emptor as it relates to TCO tools. Be sure to understand any assumptions or presets the vendor has baked into their tool and how they will affect your results. While it may take more time, using a tool with rich modeling capability is often the only way to get numbers you can be confident accurately reflect the reality of your data center. More so than anywhere else in data center planning, this is a place where you ‘get out of it what you put into it’.