Return on investment has been around for ages, but the meaning of ROI is changing in today’s business world. Companies are no longer purchasing technology upgrades in isolation; they are investing in better business processes as a whole. In return, they can achieve positive cash flows.
Concentra, a national healthcare company, provides a perfect example. With an outdated data center, the company had exhausted its power and cooling resources and needed to rebuild.
Concentra did some research and discovered that, by making a significant investment in revamping its IT infrastructure, it could not only dramatically improve efficiency and performance, but also create a positive cash flow for the company.
Furthermore, implementation doesn’t have to be risky. Concentra’s Senior Vice President and CIO, Suzanne Kosub, says, “With the right planning and financial analysis, we were able to show exactly how much the project would cost, how long it would take to pay for itself, and what the company would gain moving forward.”
Reducing the complexity of deploying and managing services, accelerating new service introduction, and cutting capital and operational expenditure overhead are key priorities for network operators today. These priorities are driven in part by the need to generate more revenue per user. But competitive pressures and increasing consumer demand are also pushing operators to experiment with new and innovative services. These services may require unique capabilities specific to a given network operator, and may also require the ability to tailor service characteristics on a per-consumer basis. This evolved service delivery paradigm demands that the network operator be able to integrate policy enforcement alongside the deployment of services, applications, and content, while maintaining optimal use of available network capacity and resources.
Back in March we announced the third generation of UCS, with significant expansions to the I/O and systems management capabilities of the platform as well as a new lineup of servers. This month we’re continuing to expand the UCS server lineup with the addition of four new models. The latest batch of M3 systems comprises three Intel Xeon “EN” class machines (E5-2400 series processors) as well as a four-socket “EP” blade server. Specifically: the UCS B22 and B420 M3 blades and the C22 and C24 M3 rack servers. These new servers round out the UCS portfolio with an even stronger set of products optimized for scale-out and light general-purpose computing, as well as a new price/performance four-socket category in the mid-range.
If you prefer watching to reading, here is a nice conversation between Intel’s Boyd Davis, VP & GM, Data Center Infrastructure Group; Cisco’s Jim McHugh, VP of UCS Marketing; and Scott Ciccone, Sr. Product Marketing Manager, highlighting the key benefits of these new models.
To figure out how these fit in, let’s step back and consider the broader evolution of server technology in play here:
1) Cisco has made server I/O more powerful and much simpler.
One of the key differentiators of UCS is the way high-capacity server network access has been aggregated through Cisco Virtual Interface Cards (VICs) and infused with built-in, high-performance virtual networking capabilities. In “pre-UCS” server system architectures, one of the main design considerations was the type and quantity of physical network adapters required. Networking, along with compute (sockets/cores/frequency/cache), system memory, and local disk, has historically been among the primary resources weighed in the balancing act of cost, physical space, and power consumption, all of which manifest in the various permutations of server designs required to cover the myriad workloads most efficiently. Think of these as your four server subsystem food groups. Architecture purists will remind us that everything outside the processors and their cache falls into the category of “I/O,” but let’s not get pedantic, because that will mess up my food group analogy.

In UCS, I/O is effectively taken off the table as a design worry, because every server gets its full USRDA of networking through the VIC: generous portions of bandwidth, rich with Fabric Extender technology vitamins that yield hundreds of Ethernet and FC adapters through one physical device. Gone are the days of hemming and hawing over how many mezzanine card slots your blade has, or how many cards you’re going to need to feed that hungry stack of VMs on your rack server. This simplification takes a lot of complication out of the equation.
If I ever become the hiring manager for a Data Center team, I’m asking candidates whether they have Tetris skills. Anyone who can neatly fill a space with odd-shaped blocks falling at ever-increasing speed can oversee the rack-and-stack activities in my Data Centers.
I talked in my last two posts – on preparing for and then executing a Data Center move – about planning where you want to place your Data Center hardware. That’s a good idea even if you’re not moving your server environment, because how you deploy your equipment affects how efficiently rack space is used, how air flows through the room, and more.
Ah, moving day. You’ve spent weeks packing your valuables into boxes and are now fervently hoping your movers treat them like priceless artifacts rather than testing their bounce factor. Sure, said movers are either complete strangers you’ve hired or friends you’ve enticed with beer and pizza, but what could possibly go wrong?