[Note: This is part 3 in a three part series of blogs discussing how Cisco ACI stands alone in the market. Part 1 | Part 2]
In part 1 we talked about how Cisco ACI simplifies diagnosis and enables a DevOps model compared to competing network virtualization solutions.
In part 2 we talked about how Cisco ACI enables organizations to proactively assure SLAs and supports efficient and scalable architecture for demanding applications.
In part 3 we’ll look at a couple of scenarios impacting security and cloud IT teams. Again, we’ll review each from an ACI perspective and compare it to other network virtualization solutions.
1) ACI Secures Bare Metal and Virtual Applications
Security and compliance are top of mind for most organizations, especially those in the healthcare and financial industries. The challenge for these organizations is multi-fold: ensuring security rules are applied correctly and consistently across the entire infrastructure, responding quickly to security breaches and threats, enforcing compliance, and more.
Let’s zoom in on a common scenario customers face today: managing physical and virtual firewalls to secure both bare metal and virtual apps in a consistent fashion. Applying these policies consistently becomes even more critical as organizations add virtual firewalls to secure East–West traffic alongside physical firewalls. With Cisco ACI, all security management occurs in a single place, the Application Policy Infrastructure Controller (APIC). Security admins can apply whatever policies are required for bare metal and virtual applications without worrying about underlying network settings. That means fewer errors that lead to downtime, and faster service deployment to meet business velocity.
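To make the single-point-of-policy idea concrete: ACI expresses security policy as an object tree (tenants, contracts, subjects, filters) pushed to the APIC over its REST API. The Python sketch below builds a minimal contract payload of that shape. The tenant, contract, and filter names ("Prod", "web-to-db", "tcp-80") are hypothetical examples, not names from any real fabric, and a real deployment would attach the contract to endpoint groups as well.

```python
# Minimal sketch of an ACI security contract as an APIC REST payload.
# All names used below ("Prod", "web-to-db", etc.) are illustrative.

def build_contract_payload(tenant, contract, subject, filter_name):
    """Return a JSON body of the shape APIC expects when posting a
    contract (vzBrCP) with one subject (vzSubj) referencing a filter."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "vzBrCP": {
                    "attributes": {"name": contract},
                    "children": [{
                        "vzSubj": {
                            "attributes": {"name": subject},
                            "children": [{
                                "vzRsSubjFiltAtt": {
                                    "attributes": {"tnVzFilterName": filter_name}
                                }
                            }],
                        }
                    }],
                }
            }],
        }
    }

payload = build_contract_payload("Prod", "web-to-db", "allow-web", "tcp-80")
# A client would POST this to https://<apic>/api/mo/uni/tn-Prod.json
```

Because the same payload governs both bare metal and virtual endpoints, the admin never touches hypervisor- or switch-specific settings directly.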
The other advantage of the ACI approach is the ability to seamlessly scale the infrastructure without compromising security.
Network virtualization solutions, by contrast, are limited to virtual firewalls and a specific hypervisor. That means inconsistent policy management across physical and virtual environments, which can compromise overall security and compliance.
2) ACI Automates Cloud Infrastructure For Any App And Environment
Surveys have shown that the majority of customers deploy a multi-hypervisor strategy for various reasons. As such, organizations have to manage workloads on different virtualization stacks and are building a cloud strategy to ensure seamless operation and management.
So a true multi-hypervisor approach is required, one that brings the same level of service to all virtualization options and emerging cloud stacks.
See Joe Onisick here talking about a specific scenario where customers want to automate and orchestrate multi-hypervisor and bare metal server environments in an open fashion. ACI is hypervisor agnostic and provides open RESTful APIs that allow customers to automate and orchestrate through a system of their choice.
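As a sketch of what "open RESTful APIs" means in practice: an orchestration system authenticates against the APIC and then queries or configures fabric objects over HTTP. The helpers below compose the login body and a class-level query URL; the hostname and credentials are placeholders, and this is a minimal illustration rather than a full client.

```python
# Sketch of driving APIC's REST API from an external orchestration tool.
# "apic.example.com" and the credentials are placeholder values.

def login_body(username, password):
    """Body for POST /api/aaaLogin.json, which returns a session token."""
    return {"aaaUser": {"attributes": {"name": username, "pwd": password}}}

def class_query_url(apic_host, mo_class):
    """URL for a class-level query, e.g. listing all tenants (fvTenant)."""
    return f"https://{apic_host}/api/class/{mo_class}.json"

url = class_query_url("apic.example.com", "fvTenant")
# With an HTTP client, the flow would be roughly:
#   1. POST login_body(...) to https://<apic>/api/aaaLogin.json
#   2. GET url with the returned session cookie
```

Because the API is plain HTTP and JSON, any automation system (Python, Ansible, a CMP, and so on) can drive it without caring which hypervisors sit underneath.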
When you look at network virtualization solutions, you’re limited to a single hypervisor; if you want to go multi-hypervisor, you end up with multiple control systems.
With Richard Jacobick, Cisco and CommVault have teamed up on a solution aimed squarely at contemporary data protection challenges. Data is the lifeblood of the enterprise, yet the playbook for how you preserve, protect, and provide access to data may have been assembled years ago, and a lot has changed in those years. Consider the transformations around compute, networking, storage, virtualization, and cloud that have occurred over the last decade.
A data protection policy is similar to auto, home and life insurance because the ultimate goal is to mitigate risk by investing in an instrument that keeps the things you value most protected and safe. What would happen to the business if an unplanned event triggered a loss of data access today because of an outdated plan? There is a very good reason why we review our insurance policies on an annual basis and your data protection policy should go through the same periodic review.
A recent survey conducted by market research firm Vanson Bourne found that data loss and downtime have cost enterprises nearly $1.7 trillion over the past 12 months. In cases where data loss or downtime had a significant financial cost to the business, the root cause was most often the lack of a well-defined data protection process and a comprehensive disaster recovery (DR) plan.
Next in our series of Why I Love Big Data is Bruce from MapR. Together, Cisco and MapR are working on a very cool solution for keeping data local while accessing it very quickly. Also, come by the Connected Banking stand in the Cisco Live World of Solutions and DevNet area to see a demo of the distributed system. You will see how Cisco and MapR provide solutions that help prevent theft of customers’ personal data and financial information.
Bruce Penn, Principal Solution Architect, MapR Technologies
Bruce is a Principal Solution Architect with MapR Technologies. He has over 22 years of Information Technology experience that includes Data Warehousing, Business Intelligence, Enterprise Architecture, Systems Design, Project Management and Application Programming. Prior to MapR, Bruce spent 8.5 years at Oracle and was instrumental in helping grow the Oracle Exadata Database Machine business through extensive collaboration with several large enterprise customers. Bruce was the first Solution Architect to join MapR’s Sales Engineering team and has been solely focused on the MapR Distribution for Hadoop and associated Apache Hadoop ecosystem technologies ever since. Bruce holds a Bachelor’s Degree in Electrical Engineering from Michigan State University.
Cisco and MapR have long been partners in the big data market, and with enterprises embracing the Internet of Everything (IoE) and moving towards a truly distributed data center environment, the combination of UCS and MapR provides unique capabilities to simplify this architecture.
Cisco UCS servers provide a powerful foundation for running distributed big data/Hadoop MapR clusters with unparalleled performance, availability, and manageability at the hardware level. The MapR Distribution including Apache Hadoop provides similar robustness at the software level, creating a rock-solid distributed platform for many flavors of IoE applications.
With the advent of IoE applications, data often originates at the “edge” of a system’s network: devices such as routers and switches in one data center generate log data locally, while devices in other data centers do the same, creating silos of log data. For applications built around this log data to react in real time, they need to access it as quickly as possible, and often they will want to aggregate the data across data centers to make decisions quickly while keeping the data local to the originating data center. Keeping data local can be important for legal and regulatory reasons, as well as for efficient local queries. With Cisco UCS servers, MapR Data Placement Control, and Apache Drill, this becomes a simple task.
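To make the data-locality pattern concrete, the sketch below composes (but does not run) a `maprcli` command that pins a volume to one data center’s topology, plus an Apache Drill query that aggregates log data across sites using Drill’s `dir0` directory pseudo-column. The volume names, topology paths, and file layout are illustrative assumptions, not a reference configuration.

```python
# Sketch: keep log data local with MapR volume topology, then aggregate
# across sites with a single Drill query. All names/paths are illustrative.

def volume_create_cmd(name, topology, mount_path):
    """Compose (without executing) a maprcli command creating a volume
    whose data stays on nodes under the given topology, e.g. one DC."""
    return ["maprcli", "volume", "create",
            "-name", name, "-topology", topology, "-path", mount_path]

# One volume per data center keeps each site's logs on local nodes.
cmd_dc1 = volume_create_cmd("logs_dc1", "/data/dc1", "/logs/dc1")
cmd_dc2 = volume_create_cmd("logs_dc2", "/data/dc2", "/logs/dc2")

# Drill can then aggregate across the per-site directories in one ANSI
# SQL query, reading each site's files where they live; dir0 exposes the
# first subdirectory name (here, the site) as a column.
drill_query = """
SELECT dir0 AS site, COUNT(*) AS events
FROM dfs.`/logs`
GROUP BY dir0
"""
```

The design choice here is that placement (the volume topology) and analysis (the Drill query) are decoupled: the query never needs to know which nodes hold which site’s data.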
Digital transformation hinges on the performance of the data center.
Cisco would like to share how digital transformation is turning traditional business models on their heads, enabling new innovative customer experiences. This is creating new business dynamics where speed is vital for organizations to stay competitive.
IT infrastructures are increasingly complex and include a broad range of technologies and platforms hosted in physical, virtual, and cloud environments. Cisco UCS has become a world-leading server platform in large part because the unique UCS architecture enables organizations to harness the power of virtualization and dramatically simplify infrastructure management.
Splunk is a great complement to Cisco UCS because Splunk also helps organizations deal with the complexity of vast multi-vendor, multi-product, and multi-site environments. Splunk is a platform for real-time big data analytics which enables end-to-end, cross-tier visibility across applications, physical, virtual, and cloud infrastructure.
Do you need insights into your UCS server performance? Would it be valuable to troubleshoot application issues across server, storage, networking, and other domains? Are you already using Splunk to “… make machine data accessible, usable and valuable…”? Then you need to be using the just-updated Splunk Add-on for Cisco UCS.
As Splunk’s first (and only) out-of-the-box integration for server environments, the Splunk integration with Cisco UCS provides real-time operational visibility not just across Cisco UCS domains but across multiple application and infrastructure tiers. This enables organizations to identify and resolve problems faster, proactively monitor systems and infrastructure, track key performance indicators, and understand trends and patterns of activity and behavior.
If you are thinking about adding Splunk insights into your environment, this is even more of a reason to do it on Cisco UCS servers. Last November, Ragu Nambiar blogged about a joint reference architecture with Splunk that improves performance up to 25x over the Splunk reference hardware. Cisco also published a Solution Brief. Look for updates on the reference architecture from Ragu soon.
Will you be at Cisco Live next week? Be sure to go to Splunk’s booth (#2319) to see the UCS app in action (or a number of the other integrations) or join the Big Data Analytics Demonstrations Booth Tours and find Splunk in Cisco’s Connected Transportation IoT, Security Solutions and Enterprise Networks Pavilions.