Recently at our Cisco Live user event, I had the opportunity to talk to many IT organizations about managing and maintaining their data center environment, and the common theme I heard time and again was “how do I get the most value for my existing data center investments?”
It became evident that many IT organizations didn’t build their data centers around an end-to-end hardware and software strategy. Instead, they purchased numerous point products along the way and are now trying to manage their environments as efficiently as possible on top of heterogeneous hardware and multiple element managers.
The good news is that Cisco’s UCS Director can manage their end-to-end infrastructure including multiple element managers and heterogeneous systems – across compute, storage, network, and hypervisor – from a single pane of glass. And if they’re deploying integrated infrastructure systems like Vblock Systems, FlexPod, or VSPEX, the out-of-the-box support provided by Cisco UCS Director ensures faster provisioning processes, greater operational efficiency, and lower costs.
As evidenced by some of our recent case studies below, UCS Director is delivering major financial and time savings through unified infrastructure automation. Here is what three customers are saying about the benefits they’ve experienced with UCS Director:
“The effects of Cisco UCS Director have been enormous. Teams spend half as much time deploying environments. NESIC looked at other options for automated management tools for virtual environments, but only Cisco UCS Director could manage both virtual and physical environments.”
Head of the cloud architecture department
NEC Networks & System Integration Corporation (NESIC)
Tags: Cisco UCS Director, converged infrastructure management, FlexPod, Infrastructure Management, ucs director, unified management, Vblock, vspex
I recently did a project involving several moving parts: Splunk, VMware vSphere, Cisco UCS servers, EMC XtremSF cards, ScaleIO, and Isilon. The goal was to verify the functionality and performance of EMC storage together with Splunk. The results apply to a basic physical installation of Splunk, and I added VMware virtualization and scale-out storage to make sure we covered all bases. I’d now like to share the project results with you, my dear readers.
Splunk is a great engine for collecting, indexing, analyzing, and visualizing data. What kind of data, you ask? Pretty much anything you can think of, including machine data, logs, billing records, click streams, and performance metrics. It’s easy to add your own metrics: all it takes is a file or a stream of data fed into your Splunk indexers. Once all that data has been indexed (which happens very rapidly, as seen in my earlier blog post), it becomes searchable and useful to you and your organization.
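Getting a file into Splunk really is that simple: point a monitor input at the file or directory. A minimal sketch of an inputs.conf stanza on a forwarder or indexer (the path, index name, and sourcetype here are illustrative assumptions, not from the project):

```ini
# inputs.conf -- minimal monitor input sketch (hypothetical path and names)
[monitor:///var/log/myapp]
# Route events to a dedicated index (assumes the index already exists)
index = myapp
# Tag the data so searches and field extractions know how to parse it
sourcetype = myapp:log
disabled = false
```

Once the stanza is in place and Splunk is restarted (or the input is added via the CLI), events from the monitored path become searchable almost immediately.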
Tags: #ciscochampion, Cisco UCS, EMC, Splunk, VMware vSphere
According to GigaOM, the use of cloud-based resources is what’s “next” for IT, as the company prepares an in-depth look at the infrastructure that will drive the next decade of application development.
At the recent Structure event, GigaOM tapped into the minds of cloud-technology industry leaders, seeking insight into the “Top 5 Questions for the Titans of Cloud.”
In this post, Gee Rittenhouse, Vice President/General Manager, Cloud and Virtualization Group at Cisco, provides answers and insight on cloud infrastructure, exchange, data security and more.
Top Cloud Question #1: “When will all the major clouds support the same set of APIs?”
Today, there is a three-horse race between two proprietary APIs (Amazon Web Services and VMware’s vCloud API) and one open API (OpenStack). For now, the two proprietary APIs will continue to be the dominant players, leveraging their large public cloud (in the case of AWS) and private cloud (in the case of VMware) deployments.
But, as an increasing number of service providers and enterprises adopt and deploy OpenStack cloud solutions across both public and private models, the balance will shift, more than likely over the next two to four years.
Cisco’s approach is different from other, more infrastructure-centric public cloud offerings. We believe that the open API model of OpenStack will eventually be the dominant cloud API model and will ultimately become the de facto standard.
Looking to the future beyond just a hybrid cloud conversation, the Intercloud, an interconnected global cloud of clouds, built with a commitment to open standards and based on OpenStack, will feature APIs to connect any cloud or hypervisor to any other cloud or hypervisor.
Tags: API, Cisco, cisco intercloud, CiscoCloud, cloud, Cloud Computing, cloudquestions, data center, Gee Rittenhouse, Gigaom, Hybrid Cloud, IaaS, InterCloud, openshift, OpenStack, paas, private cloud, Public Cloud, SaaS, XaaS
As storage area networking (SAN) evolves to meet new demands, customers are planning strategies to migrate transparently from heterogeneous environments. Cisco is committed to making this simple and efficient by ensuring smooth interoperability with all industry-standard solutions. Cisco SAN switches have set a new standard, providing the interoperability, flexibility, and functionality within MDS switches needed to meet the demands of today’s changing SANs.
Migrating SANs from one vendor to another requires a specific plan covering design, configuration, and implementation, along with post-migration analysis. This webinar helps you evaluate the appropriate options for converting third-party SANs to Cisco SANs using the Cisco MDS 9000 Family.
Register Now: Live Online Event, Wednesday, June 25, 2014; 8–9 a.m. Pacific Time
When migrating to a Cisco MDS 9000 Family SAN, you can choose among three migration methods: rip and replace, cap and grow, and interoperate. The choice of migration method is determined by several criteria, including whether you want a single-vendor or mixed-vendor operation, risk-mitigation needs, migration timeline, connectivity requirements and overall fabric capacity during the migration process.
Tags: Cisco MDS 9700, Migrate to Cisco MDS, SAN migration webinar
Two years back, I disparaged hybrid clouds in my blog post “Why Hybrid Clouds Look Like my Grandma’s Network”. Since then, both the pain and the necessity of multiple clouds in business environments have become acute. I see a great similarity between hybrid clouds and the Bring Your Own Device (BYOD) phenomenon that has become well accepted in today’s organizations. IT tried to resist it initially, but the consumer movement proliferated into the workplace and was hard to control, so IT had no choice but to follow along.
A similar movement is emerging in Cloud. After Amazon Web Services (AWS) made it simple for application developers to swipe credit cards to buy compute and get up and running in a jiffy, the addiction has been hard to stop. Enterprise stakeholders are consuming cloud infrastructure by the hour and in the process running up total costs for their organizations and leaving gaping holes in security and compliance. But this time around, IT has an opportunity to get ahead of the phenomenon.
Challenges with existing hybrid cloud approaches:
Vendor lock-in: It is hard to argue against the flexibility offered by public clouds. However, few realize that this flexibility comes at the cost of vendor lock-in. Public cloud APIs are typically proprietary, and moving workloads back out is almost impossible.
Skyrocketing costs: Granted that public cloud vendors have been driving down costs. However, using public cloud for regular application deployments is like using a rental car for long-term use. If you need a car temporarily, say during a vacation, it makes sense to rent it by the day. However, when you are back at home and need a car for everyday commute, using a rental car will run up costs. This is what enterprises are running into when public cloud charges for resources and bandwidth start to add up. However, it is hard to get out once you are locked into operational practices and workload customization in your favorite cloud.
Security & compliance holes: Security, what security? When you don’t even know what workloads are running in public clouds, and you have no control over who accesses them and how, the size of the security and compliance hole speaks for itself.
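The vendor lock-in problem above is easiest to see in code. Here is a toy Python sketch, where both cloud classes and their methods are invented for illustration rather than taken from any real SDK, showing why every proprietary API shape forces IT to write and maintain per-vendor shim code:

```python
# Toy illustration of API lock-in: each vendor exposes a different,
# incompatible call and payload shape for the same concept ("list my servers").
# ProprietaryCloudA and ProprietaryCloudB are hypothetical stand-ins.

class ProprietaryCloudA:
    def describe_instances(self):
        # Vendor A nests results several levels deep
        return {"Reservations": [{"Instances": [{"InstanceId": "i-123"}]}]}

class ProprietaryCloudB:
    def list_vms(self):
        # Vendor B uses a flat list with different field names
        return [{"vm_id": "vm-9", "state": "running"}]

def server_ids(cloud):
    """The portability shim IT must write, and extend, for every new vendor."""
    if isinstance(cloud, ProprietaryCloudA):
        return [i["InstanceId"]
                for r in cloud.describe_instances()["Reservations"]
                for i in r["Instances"]]
    if isinstance(cloud, ProprietaryCloudB):
        return [vm["vm_id"] for vm in cloud.list_vms()]
    raise NotImplementedError("every new cloud means new shim code")

print(server_ids(ProprietaryCloudA()))  # ['i-123']
print(server_ids(ProprietaryCloudB()))  # ['vm-9']
```

An open, shared API model removes the need for this shim entirely, which is exactly the appeal of standardizing the layer between enterprise and cloud.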
The Solution: Embrace Bring Your Own Cloud (BYOC), build hybrid clouds with Intercloud Fabric
Now that we agree there’s no way around folks bringing their own clouds, IT needs to give users choice while driving consistency, control, and compliance for its own sake. Here’s how Intercloud Fabric makes this possible:
Choice: Intercloud Fabric enables IT to support a number of clouds, from the giant public clouds (Amazon, Azure) to a favorite cloud provider, including Cisco Powered clouds.
Consistency: Although users get a choice of clouds, IT can maintain consistency in networking, security, and operations. This is made possible by seamless workload portability across clouds, say from vSphere to AWS, while maintaining enterprise IP addressing and security profiles.
Compliance: Since public clouds appear as an extension of the enterprise data center, current compliance requirements such as logging, change control, and access restrictions continue to be enforced.
Control: IT controls the cloud, in a good way. IT doesn’t have to say “No” to end users who want to consume diverse clouds, yet can still manage those clouds from a single console and move workloads back and forth.
Seem too good to be true?
See how cloud providers and business customers are getting ready: watch the replay of our recent webcast, Securely Moving Workloads Between Clouds with Cisco InterCloud Fabric.
Also, if you are at GigaOM Structure in San Francisco this week, you can see the solution in action and get further insights at our workshop on Intercloud Fabric.
Tags: AWS, Azure, Cisco cloud, Cisco Data Center, Cisco Powered, cloud, Hybrid Cloud, InterCloud, intercloud fabric