Wishing you all a very Happy New Year, and thank you all for considering ACI as your SDN solution to simplify your data center operations and automation. This is a follow-on to my last blog, providing details on how we have delivered a true disruption in networking and opened the gate to agility for your applications, wherever they are.
ACI’s policy model allows a consistent way of managing your infrastructure. For people new to ACI, let me step back and describe what I mean by “policy model.”
With traditional networks, application teams hand requirements to their infrastructure teams, who then translate them into networking constructs like VLANs, subnets, ports, and routes, oftentimes using spreadsheets. The following picture depicts a very simple case for a three-tier application. As you can see, the workflow by which application requirements get translated can be slow, labor-intensive, and vulnerable to manual errors.
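A policy model turns that manual translation into a declarative description of the application itself. As a rough illustration only, the sketch below builds a three-tier application policy as a Python dictionary. The class names (fvTenant, fvAp, fvAEPg, vzBrCP) follow ACI's object model, but the attributes are heavily simplified and the payload is illustrative, not a verbatim APIC request.

```python
import json

# A simplified sketch of an ACI-style policy payload: one tenant, an
# application profile with three EPGs (web/app/db), and a contract.
# Class names (fvTenant, fvAp, fvAEPg, vzBrCP) follow the APIC object
# model, but attributes are trimmed for illustration.
def three_tier_policy(tenant="Acme"):
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                {"vzBrCP": {"attributes": {"name": "web-to-app"}}},
                {"fvAp": {
                    "attributes": {"name": "e-commerce"},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": tier}}}
                        for tier in ("web", "app", "db")
                    ],
                }},
            ],
        }
    }

print(json.dumps(three_tier_policy(), indent=2))
```

In practice, an object like this would be submitted to the controller's REST API rather than hand-translated into VLANs and spreadsheets; the point is that the application's tiers and relationships are stated once, declaratively.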
Cisco Systems Application Centric Infrastructure (ACI) is the industry-leading SDN platform according to Gartner, outpacing NSX by a factor of 2:1. ACI continues to accelerate past NSX by enabling micro-segmentation and endpoint granularity. In real-world data centers, there are many simultaneous application delivery endpoints, including VMs from multiple hypervisors, bare-metal hosts, Linux containers, and Layer 4-7 appliances, both physical and virtual.
VMware recently published articles regarding this announcement and appears confused, inaccurately stating ACI’s capabilities. Juan Lage, a Principal Engineer at Cisco Systems, provides an accurate and detailed description of our capabilities and addresses VMware’s obvious misunderstanding in his article below my introduction.
After reading Juan’s article below, the only thing left to say to VMware NSX is: welcome to the “real world.”
When we announced the 1.2 release of ACI last month (http://newsroom.cisco.com/press-release-content?articleId=1732204), we knew that we were bringing a lot of value to our customers, but we also knew that, as a consequence, we were making things more complicated for competing offerings, and that there would be reactions to our announcement.
This is why VMware’s blog “VMware NSX and Split and Smear Micro-Segmentation” (https://blogs.vmware.com/networkvirtualization/2016/01/vmware-nsx-and-split-and-smear-micro-segmentation.html) did not come as a surprise.
The author of the blog attempts to prove that only VMware NSX can provide micro-segmentation. The author also appears to suggest that you are not protected from the “bad guys” if you don’t have VMware’s micro-segmentation.
It is an interesting post, but it contains several inaccurate statements, along with a few recurring ideas and exaggerations from NSX’s marketing that we certainly disagree with.
Being fast is important this time of year.
X-Wing Fighters in “Star Wars: The Force Awakens” are fast.
Avoiding that overly excited, lightsaber-wielding fan in line requires you to be fast.
Holiday shoppers are snatching up deals fast.
Retailers with transaction spikes need to add infrastructure capacity fast.
Your customers want their IT Infrastructure services fast…and Application Centric Infrastructure (ACI) helps deliver that speed.
This IDC report shows how Pulsant – a UK-based IT infrastructure services provider – delivers services fast with ACI. It also quantifies the returns on that speed, along with other benefits. In some ways, their story is like that of many customers: they need to deliver IT services faster and do more with less…you know the drill. And if you are using ACI, you also know how to address those issues. If not, take a couple of minutes and check out the report. In it, Martin Lipka, Head of Connectivity Architecture at Pulsant, addresses a number of interesting issues, and IDC helps to quantify them. Check out how Pulsant is:
- Onboarding customers faster with the “simplified automation” ACI provides
- Growing its customer base without needing to add a commensurate number of network engineers
- Reducing the frequency of misconfigurations and improving the security of its services
In the report, Martin explains how “automation and repeatable processes enabled by Cisco ACI have benefited his company by reducing the time needed to provision network resources and speeding up deployment cycles.” For example, “Pulsant needed an average of 7–14 days before moving to Cisco ACI to deliver a bespoke cloud service to a customer, whereas it now needs only 2–3 days.” At the back end, when those services are no longer needed, “the network process of decommissioning a customer and cleansing the configuration has gone from taking hours to seconds thanks to Cisco ACI’s built-in automation.”
ACI helps Pulsant deliver services fast. ACI also delivered a return fast: the ROI analysis showed a payback period of under seven months.
In summary, if you are looking to deploy services fast, tear them down fast, get a return fast – check out the report and check out ACI.
And, oh yeah, as a public safety message, please let’s not swing those light sabers too fast tonight. May the force be with you…
Photo courtesy of commons.wikimedia.org
Tags: ACI, Agile IT, cloud, Cloud Computing, data center, devops, Fast IT
Announced today, TPC-DS V2 is the industry’s first standard for benchmarking SQL-based big data systems.
Over the last two years, the Transaction Processing Performance Council (TPC) has reinvented itself in several ways, with new standard developments in Big Data, virtualization, and the Internet of Things.
Foreseeing the demand for standards for characterizing big data systems, in August 2014 the TPC announced the TPC Express Benchmark HS (TPCx-HS), the industry’s first standard for benchmarking big data systems. TPCx-HS was designed to evaluate a broad range of system topologies and implementation methodologies related to big data. The workload is based on a ‘simple’ application that is highly relevant to Big Data, especially for Hadoop-based systems. ‘Simple’ is great: historically, end users have adopted simple workloads and easy-to-understand metrics. (Look at TPC-C, one of the most successful industry standards, with over a thousand publications demonstrating the progress of application performance in line with Moore’s law for over a quarter century. Its metric is transactions per minute; can we think of anything simpler than that?) TPCx-HS has done well so far as a standard, providing verifiable performance and TCO, with over a dozen benchmark publications covering products from more than six vendors, a record pace for TPC standards since TPC-H in 1999.
That said, there is an important role for ‘complex’ workloads, especially in developer and researcher circles. One such example is TPC-DS, originally developed to evaluate complex decision support systems based on relational database systems. TPC-DS has a long and interesting history; it took the TPC over ten years to develop this standard. Though there have been several research papers and case studies, there have been no official result submissions since it became a standard in 2011. There are several technical and non-technical reasons, chief among them: (i) the complexity of the workload, with 99 query templates and concurrent data maintenance; and (ii) complexity means uncertainty, and vendors are concerned about “overexposure” of their technologies and products in terms of performance and price-performance. So it is a successful benchmark in terms of serving the academic and research community, but a failure in terms of serving customers (purchase decision makers).
Interestingly, in the last two years the Hadoop community has adopted the TPC-DS workload for performance characterization, mainly because of the richness and broad applicability of its schema, data generation, and some aspects of the workload, and its lack of bias toward relational systems. And, not surprisingly, there have been several claims that are not verifiable or reproducible by end users, obviously in violation of the TPC’s fair use policies. To put an end to this in a positive way, the TPC stepped up and created a work stream to extend support for non-relational (Hadoop, etc.) systems, resulting in the creation of TPC-DS 2.0. If you go through the specification, you will see well-thought-out changes to make it Hadoop-friendly in terms of ACID compliance, data maintenance, and the metric.
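To give a flavor of the workload, the toy sketch below uses SQLite (purely for illustration; TPC-DS itself targets large-scale systems) to run a simplified star-schema query in the spirit of TPC-DS. The table and column names echo the TPC-DS schema (store_sales, date_dim, item), but the data and the query are invented for this example and are not taken from the 99 official templates.

```python
import sqlite3

# Toy star schema in the spirit of TPC-DS: a store_sales fact table joined
# to date_dim and item dimensions, aggregated by brand. Names echo the
# TPC-DS schema; the data is invented for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE date_dim (d_date_sk INTEGER PRIMARY KEY, d_year INTEGER);
CREATE TABLE item (i_item_sk INTEGER PRIMARY KEY, i_brand TEXT);
CREATE TABLE store_sales (ss_sold_date_sk INTEGER, ss_item_sk INTEGER,
                          ss_ext_sales_price REAL);
INSERT INTO date_dim VALUES (1, 2015), (2, 2016);
INSERT INTO item VALUES (10, 'brandA'), (11, 'brandB');
INSERT INTO store_sales VALUES (1, 10, 100.0), (1, 11, 50.0),
                               (2, 10, 75.0), (1, 10, 25.0);
""")

# Sum 2015 sales by brand: the classic fact-to-dimension join pattern
# that the 99 TPC-DS query templates elaborate on at far greater scale.
rows = cur.execute("""
SELECT i.i_brand, SUM(ss.ss_ext_sales_price) AS total
FROM store_sales ss
JOIN date_dim d ON ss.ss_sold_date_sk = d.d_date_sk
JOIN item i     ON ss.ss_item_sk = i.i_item_sk
WHERE d.d_year = 2015
GROUP BY i.i_brand
ORDER BY total DESC
""").fetchall()
print(rows)  # [('brandA', 125.0), ('brandB', 50.0)]
```

Whether such a query runs on a relational engine or a Hadoop SQL engine, the shape is the same, which is exactly why the community gravitated toward TPC-DS for cross-system comparison.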
I am most excited about its use in comparing SQL-based systems, traditional relational versus non-relational, in terms of performance and TCO: something that is top of mind for many.
The TPC is not stopping here. We are developing another benchmark, TPC Express Benchmark BB (TPCx-BB), which shares several aspects of TPC-DS and will be offered as an easy-to-run kit. TPCx-BB is currently available for public review. The TPC encourages interested parties to provide their reviews by January 4, 2016 by clicking here: TPCx-BB. And if benchmarking IoT is of interest to you, please join the IoT working group.
Significant contributors to the development of TPC-DS include Susanne Englert, Mary Meredith, Sreenivas Gukal, Doug Johnson, Lubor Kollar, Murali Krishna, Bob Lane, Larry Lutz, Juergen Mueller, Bob Murphy, Doug Nelson, Ernie Ostic, Raghunath Nambiar, Meikel Poess (chairman), Haider Rizvi, Bryan Smith, Eric Speed, Cadambi Sriram, Jack Stephens, John Susag, Tricia Thomas, Dave Walrath, Shirley Wang, Guogen Zhang, Torsten Grabs, Charles Levine, Mike Nikolaiev, Alain Crolotte, Francois Raab, Yeye He, Margaret McCarthy, Indira Patel, Daniel Pol, John Galloway, Jerry Lohr, Jerry Buggert, Michael Brey, Nicholas Wakou, Vince Carbone, Wayne Smith, Dave Steinhoff, Dave Rorke, Dileep Kumar, Yanpei Chen, John Poelman, and Seetha Lakshmi.
TPC-DS V2 Specification
TPC Press Release
Vendor-Neutral Benchmarks Drive Tech Innovation
The making of TPC-DS
Transaction performance vs. Moore’s law: a trend analysis
My goodness… were we ever busy in 2015! Our Cisco Big Data & Analytics teams executed and delivered a tremendous body of work, with several key accomplishments these past 12 months. All of our activities, across all of our teams, were focused on delivering leading innovation, industry-leading performance and scalability, and flexibility via a variety of Big Data choices. Of course, all of it is based on Cisco UCS, Nexus, and ACI. Let’s take a look at some of the highlights:
Throughout 2015, we introduced various versions of our 3rd-generation Big Data architecture. The solution, Cisco’s UCS Integrated Infrastructure for Big Data, integrates our industry-leading computing, network, and management capabilities into a unified fabric-based architecture. Packaged as Cisco Validated Designs (CVDs), our architecture supports the leading Hadoop distributions: Cloudera, Hortonworks, IBM, and MapR. Our Big Data CVDs provide you peace of mind, as they are tested, validated, and supported. Take a peek at our Big Data CVDs here and see how they can expedite your Hadoop projects and drive operational efficiency.
Performance
Tags: ACI, Big Data, Cisco, Cisco Nexus, Cisco UCS, Cloudera, Hortonworks, IBM, MapR, Splunk