Cisco Blogs



Next-Gen ACI-Ready VCE Vblock Systems Accelerate Journey to SDN and Cloud

 

It is nearly five years since Cisco, EMC and VMware came together to set up VCE and introduce one of the world’s best integrated infrastructure solutions with the Vblocks. The promise was to deliver “dramatic efficiencies”: significant reductions in capital and operating expenses, along with greater flexibility and choice for customers. Customers appreciated the operational simplicity of the model, and Vblock sales took off, reaching multi-billion-dollar annual run rates.

Much has changed in the industry since then. The social-mobile-cloud-big data revolution has accelerated, posing new requirements for IT and increasing the relevance of data centers and private cloud deployments. SDN has moved beyond being just a buzzword, with several real use cases now in play. Server virtualization has continued to drive efficiencies, and hybrid clouds have become the new norm. Amidst all this, customers continue to crave operational simplicity and consumable infrastructure for their data center and private cloud deployments, making the VCE approach as relevant as ever.

So, today, we’re very happy to share the success and celebrate the joint innovations as VCE rolls out its next generation Vblock systems that drive new levels of convergence. With Cisco continuing to refresh its portfolio with new Nexus products and industry leading SDN with the Cisco Application Centric Infrastructure (ACI) approach, and with Cisco UCS introducing next-generation products, it is natural that these innovations be reflected in the VCE Vblock integrated solutions.

Cisco is bringing new innovations to the party. The Nexus 9000 forms a key element, with a very compelling form factor and industry-leading price-performance. For customers interested in venturing into Software Defined Networking (SDN) and making their infrastructure application centric, the Application Policy Infrastructure Controller (APIC) provides a central point of management and policy application. The result is a simplified operational model and lower TCO across a variety of deployment scenarios.

As VCE introduces Vblock Systems 240, 540 and 740 today, they provide the flexibility of consuming the network elements either as standalone switches or as SDN deployments in ACI mode. Vblocks can therefore operate in a standalone mode with current automation mechanisms, or in an ACI-ready mode subscribing to the APIC policy-driven model. Customers adopting the new Vblock Systems get the operational flexibility to choose.
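To give a concrete feel for what subscribing to the APIC policy-driven model means, here is a minimal sketch of pushing a simple policy object (a tenant) to an APIC controller through its REST API. The controller address, credentials, and tenant name are placeholders for illustration, not part of any Vblock configuration.

```python
# Minimal sketch: creating a tenant on an APIC controller via its REST API.
# The controller address and credentials below are placeholders.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()

# Authenticate; the APIC returns a session cookie used for subsequent calls.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Push a simple policy object: a tenant named "vblock-demo".
tenant = {"fvTenant": {"attributes": {"name": "vblock-demo"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()
print(resp.json())
```

The same session could go on to define application profiles and endpoint groups; the point is that the network policy becomes an object you post to a single controller rather than per-switch configuration.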



VCE Refreshes Vblock Portfolio with New Cisco UCS servers

Five years ago, VCE was created with the goal of providing a simple, efficient solution to deploy and run IT infrastructure. VCE’s Vblock Systems have enabled customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. It would be an understatement to say VCE has been successful. Last year, Vblock Systems, built on Cisco UCS integrated infrastructure, surpassed their 2013 goal of $1 billion in annual sales and were recognized as a leader in the integrated infrastructure market. In fact, in Gartner’s inaugural Magic Quadrant for integrated systems, VCE Vblock Systems are rated in the Leaders Quadrant, based on the tight integration of industry- and market-leading technologies from Cisco and EMC.

Today, VCE announced a major update and expansion of their Vblock Systems portfolio using the latest Cisco UCS servers and Cisco ACI-ready switches. The new Cisco M4 model servers recently celebrated four world-record benchmarks, offering performance improvements of up to 145 percent over the previous processor generation. Customers can be confident that Cisco UCS servers will deliver outstanding application performance as part of a Vblock System. IT leaders want to accelerate infrastructure and application deployment, and these new ACI-ready Vblock Systems are an extension of Cisco’s application-centric data center strategy. We feel our application-centric approach, in which IT infrastructure is automatically configured in sync with the needs of the application, is essential to keeping pace with today’s dynamic business priorities.

VCE also announced a cloud management solution with Cisco UCS Director.  VCE’s Integrated Solution for Cloud Management with Cisco pre-integrates UCS Director with a Vblock System, providing the capability to quickly instantiate an initial private cloud foundation for customer environments. UCS Director enables the automation and provisioning of compute, network, and storage resources, both physical and virtual. This automation of integrated infrastructure can further expedite the deployment of application-ready infrastructure.
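As a rough illustration of what driving that automation programmatically can look like, the sketch below queries a UCS Director instance over its REST API for the service catalogs exposed to self-service users. The host, API key, and operation name follow the API’s general conventions but are placeholders here, not details taken from this announcement.

```python
# Rough sketch: querying a UCS Director instance over its REST API.
# Host, API key, and operation name are illustrative placeholders.
import requests

UCSD_HOST = "https://ucsd.example.com"        # hypothetical UCS Director address
API_KEY = "REPLACE_WITH_USER_API_KEY"          # per-user key generated in UCS Director

headers = {"X-Cloupia-Request-Key": API_KEY}   # API-key header expected by UCS Director
params = {
    "formatType": "json",
    "opName": "userAPIGetAllCatalogs",         # list catalogs available for self-service
    "opData": "{}",
}
resp = requests.get(f"{UCSD_HOST}/app/api/rest", headers=headers,
                    params=params, verify=False)
resp.raise_for_status()
print(resp.json())
```

In practice, the same API can submit service requests against those catalogs, which is how compute, network, and storage provisioning on a Vblock System gets automated end to end.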

Cisco is excited that our new products and technologies have been integrated into the Vblock portfolio, and we congratulate the VCE team on today’s announcement. We believe these new Vblock Systems and solutions will make it easier for customers to deliver the performance, agility, and availability that the most demanding applications require.


High Performance at a Compelling Value – Learn more about Cisco MDS 9148S Fabric Switch

Are you looking for a reasonably priced, yet powerful, flexible SAN solution?

The Cisco MDS 9148S Multilayer Fabric Switch is a new 16G Fibre Channel SAN solution for small to medium businesses. This switch is powerful and flexible, with up to 48 autosensing line-rate 16G Fibre Channel ports and comprehensive enterprise-class features in a compact one-rack-unit form factor. Plus, with an affordable price, the Cisco MDS 9148S brings the power of 16G Fibre Channel to a new level of value.

Join our next webcast (8-Oct-2014, 08:00 AM PST) to learn more about the technical capabilities, design considerations, and best practices for implementing small SANs. You will also learn how to grow your SAN transparently, and see use cases including small-fabric design, core-edge design, and migration from 8G to 16G.

Watch this video as our experts demonstrate the plug-and-play features and simple setup of MDS Fabric Switches with Device Manager.

Subscribe to our YouTube channel for more videos: https://www.youtube.com/user/ciscomds9000


Q&A: Cisco IT’s Lessons Learned In Assessing the Risk of New Cloud Services

2,000+

That is the approximate number of cloud services that Ken Hankoff, Manager of Cisco IT Risk Management’s Cloud and Application Service Provider Remediation (CASPR) Program, believes Cisco’s 70,000 employees use. For the last 14 years, this program has assessed and remediated the risks associated with using cloud-hosted services.

An assessment process for new cloud services is a vital step toward reducing the risk of using externally hosted services. Many customers I speak with struggle to rapidly assess cloud services and integrate them into their IT organization. As part of my blog series on governing cloud service adoption, I asked Ken to share some of his ‘lessons learned’ in assessing the risks of cloud services and bringing them into Cisco IT’s fold.

How do you ensure that teams wanting to use new cloud services work with your team?

Our team is not in the business of sourcing cloud vendors. That responsibility lies with the individual business units and their architecture teams who are seeking to use the service, often in partnership with IT. Once a vendor is selected, there are two primary ways in which my team gets engaged: first, through the Global Contracts team, which has made Cloud Service Provider assessment a part of the contracting process; and second, when a new service is being integrated within IT.

How do you evaluate whether a new cloud service is risky to the business?

We look at seven risk factors to create a formula for risk—business criticality, financial viability, security, resiliency, architectural alignment, regulatory compliance, and assessment status.

We establish the business criticality of the service to determine how Cisco would be impacted or disrupted if the capability provided by the vendor went away, and whether we could react or compensate.

We then look at the financial viability of the vendor to give us confidence that they will remain in business; to evaluate vendors we leverage Dun & Bradstreet’s Predictive Scores & Ratings. We rely heavily on Cisco’s Information Security (InfoSec) organization to provide us with a Security Composite Risk score. Depending on the parameters of the cloud provider engagement, InfoSec will look at the vendor’s application development process, infrastructure, data handling security, system-to-system interoperability, and other areas. For resiliency, we focus on how they meet our standards for business continuity and disaster recovery, to ensure that our business data will be there when needed, regardless of what happens.

We also need to ensure that we stay compliant with regulations. A vendor that has to comply with HIPAA, SOX, or other regulatory/privacy requirements poses a higher risk than one that doesn’t. For this reason, we look into whether regulatory compliance is a factor and, if so, ensure that it is addressed appropriately.

Finally, we also assess whether the vendor aligns to the broader architecture that Cisco IT is investing in to support the business. Vendors are deemed a higher investment risk if they do not align to the business and operational roadmap that Cisco is pursuing.

We reassess vendors on a periodic basis according to their overall risk score. If a service is overdue for a reassessment, that in itself increases the risk of doing business with the provider, so we factor it in.
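To make the idea of a formula for risk concrete, here is a minimal sketch of a weighted composite score across the seven factors. The weights and the 0-10 scoring scale are purely illustrative assumptions; the actual CASPR weighting is not described in this post.

```python
# Illustrative only: the seven factors come from the interview above, but the
# weights and 0-10 scale are placeholder assumptions, not the CASPR formula.
FACTORS = {
    "business_criticality": 0.20,
    "financial_viability": 0.15,
    "security": 0.20,
    "resiliency": 0.15,
    "architectural_alignment": 0.10,
    "regulatory_compliance": 0.15,
    "assessment_status": 0.05,   # an overdue reassessment raises this score
}

def composite_risk(scores: dict[str, float]) -> float:
    """Combine per-factor scores (0 = low risk, 10 = high risk) into one number."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

example = {name: 5.0 for name in FACTORS}
example["assessment_status"] = 9.0   # this vendor's reassessment is overdue
print(f"Composite risk: {composite_risk(example):.2f}")
```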

In your opinion, what are the three most important things for managing the business risks of cloud services?

First, I would suggest establishing ownership and governance of cloud services via a centralized PMO at the enterprise level, not just within IT. This ownership needs to go beyond assessing vendors for security risk and focus on establishing company-wide policies for overseeing cloud services.

Second, provide visibility into existing services and how they are being used. This enables a catalog of assessed and approved vendors for people to access. If fewer vendors are in use, you reduce your risk.

Third, continually monitor services across the board to know what risks you might be facing, and ensure that the service providers are meeting their SLAs. This also helps ensure that investments aren’t being wasted. There is a natural CSP application lifecycle of selection, implementation, adoption, and eventual decline; without a lifecycle approach to phasing out services, you may end up supporting something that has very few users.

What is your biggest lesson learned in assessing new cloud services?

I wish the program had collected more metrics earlier. What we are finding is that a significant number of services are being contracted all over the company. Had we collected better metrics, we might have been more effective in showing executives what services are being used, who is using them, and how. We are making good progress on this now, but I wish we had started earlier.

How are you monitoring cloud services and gathering this intelligence?

Our professional services team has helped us a great deal. With Cisco Cloud Consumption Services, we have begun to capture an enterprise view of which cloud services are being used and by whom, and we now have a great dashboard of metrics we can use to inform Cisco executives. Before we were using the software, I never imagined that we had nearly 2,000 cloud services in use, but with Cisco Cloud Consumption we now know and can monitor activity.

Learn more about how Cisco can help monitor and manage cloud providers at http://www.cisco.com/go/cloudconsumption.

 


Enable Automated Big Data Workloads with Cisco Tidal Enterprise Scheduler

In our previous big data blogs, a number of my Cisco associates have talked about the right infrastructure, the right sizing, the right integrated infrastructure management, and the right provisioning and orchestration for your clusters. But to gain the benefits of pervasive big data use, you’ll need to accelerate your big data deployments and seamlessly pivot your “back of the data center” science experiment into standard data center operational processes, so you can speed delivery of the value of these new analytics workloads.

If you are using a “free” (hint: nothing’s free) or open-source workload scheduler, or even a solution that can manage day-to-day batch jobs, you may run into problems right off the bat. Limitations may come in the form of dependency management, calendaring, error recovery, role-based access control, and SLA management.
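To illustrate what dependency management and error recovery mean in practice, here is a small, purely conceptual sketch of dependency-ordered job execution with retries. It is not the Tidal Enterprise Scheduler API, just a picture of the bookkeeping an enterprise scheduler handles for you at scale.

```python
# Conceptual sketch only: dependency-ordered execution with a simple retry loop.
# This is NOT the Tidal Enterprise Scheduler API; it only illustrates the
# problem an enterprise-grade scheduler solves for big data pipelines.
from graphlib import TopologicalSorter

def run_with_retry(name: str, action, retries: int = 2) -> None:
    """Run a job, retrying on failure before giving up."""
    for attempt in range(retries + 1):
        try:
            action()
            print(f"{name}: succeeded on attempt {attempt + 1}")
            return
        except Exception as exc:
            print(f"{name}: attempt {attempt + 1} failed ({exc})")
    raise RuntimeError(f"{name}: exhausted retries")

# Jobs and their upstream dependencies (ingest before transform before report).
jobs = {
    "ingest": set(),
    "transform": {"ingest"},
    "report": {"transform"},
}
actions = {name: (lambda n=name: print(f"running {n}")) for name in jobs}

# Execute jobs in dependency order.
for job in TopologicalSorter(jobs).static_order():
    run_with_retry(job, actions[job])
```

Add calendaring, role-based access control, and SLA alerting on top of this, across hundreds of interdependent Hadoop jobs, and the case for a purpose-built scheduler becomes clear.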

And really, this is just the start of your needs for full-scale, enterprise-grade workload automation for Big Data environments! As the number of your mission-critical big data workloads increases, predictable execution and performance will become essential.

Lucky for you, Cisco has exactly what you need!
