Interest in Software Defined Networking (SDN) continues to grow because of its ability to make networks more programmable, flexible, and agile. SDN accomplishes this by accelerating application deployment and management, simplifying and automating network operations, and creating a more responsive IT model.
Cisco is extending its leadership in SDN and Data Center Automation solutions with the announcement today of Cisco Virtual Topology System (VTS), which improves IT automation and optimizes cloud networks across the entire Nexus switching portfolio. Cisco VTS focuses on the management and automation of VXLAN-based overlay networks, a critical foundation for both enterprise private clouds and service providers. The announcement of the VTS overlay management system follows on Cisco’s announcement earlier this year supporting the EVPN VXLAN standard, which underlies the VTS solution.
Cisco VTS extends the Cisco SDN strategy and portfolio, which includes Cisco Application Centric Infrastructure (ACI) as well as Cisco’s programmable NX-OS platforms, to a broader market and to additional use cases. These include our massive installed base of Nexus 2000-7000 products, and customers whose primary SDN challenge is the automation, management, and ongoing optimization of their virtual overlay infrastructure. With support for the EVPN VXLAN standard, VTS furthers Cisco’s commitment to open SDN standards and increases interoperability in heterogeneous switching environments, with third-party controllers, and with cloud automation tools that sit on top of the open northbound APIs of the VTS controller.
Jeff Aboud, Sr. Solutions Marketing Manager, Security Markets, Splunk

Jeff Aboud has more than a dozen years in various areas of the security industry, spanning from the desktop to the cloud, including desktop AV, gateway hardware and software, encryption technologies, and how to securely embrace the Internet of Things. His primary focus today is to help business and security professionals understand how to visualize, analyze, and alert across a broad range of data sources in real time to maximize their security posture.
It’s no secret that advanced threats and malicious insiders present increasing security challenges to organizations of all sizes. Security professionals know that it’s not a question of if, but when, an attack will successfully breach their network. Visibility is often what makes the difference between a breach and a major security incident, and it enables a proactive security posture throughout the attack continuum – before, during, and after the attack. It’s also essential to understand that the fingerprints of an advanced threat are often found in “non-security” data, so effectively detecting and investigating these threats, before your data is stolen, requires both security and non-security data.
So what does all this really mean, and how can you use it to dramatically improve your security posture?
You need to integrate and correlate the data from your firewalls, intrusion prevention, anti-malware, and other security-specific solutions along with your “non-security” data such as the logs and packet information from your servers, switches, and routers. This is no easy task with the large number of different security solutions present in most enterprise networks. But having all your data at your fingertips will help you improve your detection capabilities and automate the remediation of advanced threats.
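As a minimal sketch of what this correlation looks like in practice, the snippet below cross-references firewall deny events with server-side failed logins to surface source IPs that appear in both. The event fields and sample data are hypothetical; real deployments would parse these from firewall syslog and server auth logs.

```python
from collections import namedtuple

# Hypothetical sample events; in practice these come from parsed
# firewall syslog ("security" data) and server auth logs ("non-security" data).
firewall_events = [
    {"src_ip": "10.0.0.5", "action": "deny",  "ts": 100},
    {"src_ip": "10.0.0.9", "action": "allow", "ts": 101},
    {"src_ip": "10.0.0.5", "action": "deny",  "ts": 105},
]
server_events = [
    {"src_ip": "10.0.0.5", "event": "failed_login", "ts": 103},
    {"src_ip": "10.0.0.7", "event": "failed_login", "ts": 104},
]

def correlate(firewall, servers):
    """Flag source IPs seen in both firewall denies and server-side
    failed logins -- a simple cross-source indicator of probing."""
    denied = {e["src_ip"] for e in firewall if e["action"] == "deny"}
    failed = {e["src_ip"] for e in servers if e["event"] == "failed_login"}
    return sorted(denied & failed)

suspects = correlate(firewall_events, server_events)
print(suspects)  # ['10.0.0.5']
```

The point is not the toy logic but the join itself: neither data source alone flags 10.0.0.5, while the intersection does.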
But how can you do this, since Security Information and Event Management (SIEM) systems only look at traditional security sources? The partnership between Splunk and Cisco is the answer. Splunk is integrated across Cisco security platforms, as well as other places throughout the network including various Cisco switches, routers and Cisco Unified Computing Systems (UCS) to deliver broad visibility across your environment.
Together, Splunk and Cisco provide security and incident response teams the tools they need to quickly identify advanced threats, visualize them in real time across potentially thousands of data sources, and take automated remediation action on Cisco firewalls and intrusion prevention systems.
[Note: This is part 3 in a three part series of blogs discussing how Cisco ACI stands alone in the market. Part 1 | Part 2]
In part 1 we talked about how Cisco ACI simplifies diagnosis and enables a DevOps model compared to competing network virtualization solutions.
In part 2 we talked about how Cisco ACI enables organizations to proactively assure SLAs and supports efficient and scalable architecture for demanding applications.
In part 3 we’ll look at a couple of scenarios impacting security and cloud IT teams. Again, we’ll review them from an ACI perspective and compare that to other network virtualization solutions.
1) ACI Secures Bare Metal and Virtual Applications
Security and compliance are always top of mind for most organizations, especially those in the healthcare and financial industries. The challenge for these organizations is multi-fold: ensuring security rules are applied correctly and consistently across the entire infrastructure, responding quickly to security breaches and threats, enforcing compliance, and more.
Let’s zoom in on a common scenario customers face today: managing physical and virtual firewalls to secure both bare-metal and virtual apps in a consistent fashion. The need to apply these policies consistently becomes more critical as organizations add virtual firewalls to secure East–West traffic in addition to physical firewalls. With Cisco ACI, all security management occurs from a single place, the Application Policy Infrastructure Controller (APIC). Security admins can apply whatever policies are required for bare-metal and virtual applications without worrying about network settings. This means fewer errors that lead to downtime, and faster service deployment to meet business velocity.
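To make the single-point policy model concrete, here is a hedged sketch of the kind of JSON payload an admin or script would send to the APIC REST interface to define a contract under a tenant. The class names (fvTenant, vzBrCP, vzSubj, vzRsSubjFiltAtt) follow the ACI object model; the tenant, contract, and filter names are placeholders, and authentication and the actual POST are omitted.

```python
import json

def build_contract_payload(tenant, contract, filter_name):
    """Build an APIC-style REST payload defining a contract under a
    tenant.  The same policy JSON applies whether the consuming and
    providing endpoint groups are bare-metal or virtual -- which is the
    point of managing security from a single place."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "vzBrCP": {  # the contract itself
                    "attributes": {"name": contract},
                    "children": [{
                        "vzSubj": {  # subject grouping the filters
                            "attributes": {"name": contract + "-subj"},
                            "children": [{
                                "vzRsSubjFiltAtt": {  # relation to a filter
                                    "attributes": {"tnVzFilterName": filter_name}
                                }
                            }],
                        }
                    }],
                }
            }],
        }
    }

payload = build_contract_payload("finance", "web-to-db", "allow-sql")
print(json.dumps(payload, indent=2))
# In a live fabric this would be POSTed to https://<apic>/api/mo/uni.json
# after authenticating; here we only construct the payload.
```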
The other advantage with an ACI approach is the ability to seamlessly scale the infrastructure without compromise on security.
The approach taken by network virtualization solutions is limited to virtual firewalls and a specific hypervisor. This means inconsistent policy management across physical and virtual environments, which can compromise overall security and compliance.
2) ACI Automates Cloud Infrastructure For Any App And Environment
Surveys have shown that the majority of customers deploy a multi-hypervisor strategy for various reasons. As such, organizations have to manage workloads on different virtualization stacks and are building a cloud strategy to ensure seamless operation and management.
So a true multi-hypervisor approach is required, one that can bring the same level of service to all virtualization options and emerging cloud stacks.
See Joe Onisick here talking about a specific scenario where customers want to automate and orchestrate multiple hypervisors and bare-metal server environments in an open fashion. With ACI, we’re hypervisor agnostic and provide open RESTful APIs that allow customers to automate and orchestrate through a system of their choice.
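A small sketch of what "hypervisor agnostic" means at the API level: associating an endpoint group with a VMM domain (VMware, Hyper-V, KVM) or with a physical domain for bare metal uses the same relation object in the payload. The class names (fvAEPg, fvRsDomAtt) follow the ACI object model; the EPG and domain names are illustrative placeholders.

```python
def attach_domain(epg_name, domain_dn):
    """Associate an endpoint group (EPG) with a domain.  VMM domains
    for different hypervisors and physical domains for bare metal all
    use the same fvRsDomAtt relation, so an orchestration system needs
    only one code path regardless of workload type."""
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [{
                "fvRsDomAtt": {"attributes": {"tDn": domain_dn}}
            }],
        }
    }

# The same call covers a VMware VMM domain ...
vmm = attach_domain("web", "uni/vmmp-VMware/dom-prod-vc")
# ... and a bare-metal physical domain.
phys = attach_domain("web", "uni/phys-baremetal")
```

An orchestrator of the customer's choice would POST either payload to the controller's open northbound REST API; only the target domain DN differs.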
When you look at network virtualization solutions, you’re limited to a single hypervisor; if you want to go with multiple hypervisors, you end up with multiple control systems.
With Richard Jacobick

Cisco and CommVault have teamed up on a solution aimed squarely at contemporary data protection challenges. Data is the lifeblood of the enterprise, yet the playbook for how you preserve, protect and provide access to data may have been assembled years ago… and a lot has changed in those years. Consider the transformations around compute, networking, storage virtualization and cloud that have occurred over the last decade.
A data protection policy is similar to auto, home and life insurance because the ultimate goal is to mitigate risk by investing in an instrument that keeps the things you value most protected and safe. What would happen to the business if an unplanned event triggered a loss of data access today because of an outdated plan? There is a very good reason why we review our insurance policies on an annual basis and your data protection policy should go through the same periodic review.
A recent survey conducted by market research firm Vanson Bourne outlines how data loss and downtime have cost enterprises nearly $1.7 trillion over the past 12 months. The lack of a well-defined data protection process and a comprehensive Disaster Recovery (DR) plan is most often the root cause in cases where data loss or downtime had a significant financial cost to the business.
Next in our series of Why I Love Big Data is Bruce from MapR. Together, Cisco and MapR are working on a very cool solution for keeping data local while accessing it very quickly. Also, come by the Connected Banking stand in the Cisco Live World of Solutions and DevNet area to see a demo of the distributed system. You will see how Cisco and MapR can provide security solutions that prevent theft of customers’ personal data and financial information.
Bruce Penn, Principal Solution Architect, MapR Technologies
Bruce is a Principal Solution Architect with MapR Technologies. He has over 22 years of Information Technology experience that includes Data Warehousing, Business Intelligence, Enterprise Architecture, Systems Design, Project Management and Application Programming. Prior to MapR, Bruce spent 8.5 years at Oracle and was instrumental in helping grow the Oracle Exadata Database Machine business through extensive collaboration with several large enterprise customers. Bruce was the first Solution Architect to join MapR’s Sales Engineering team and has been solely focused on the MapR Distribution for Hadoop and associated Apache Hadoop ecosystem technologies ever since. Bruce holds a Bachelor’s Degree in Electrical Engineering from Michigan State University.
Cisco and MapR have long been partners in the big data market, and with enterprises embracing the Internet of Everything (IoE) and moving towards a truly distributed data center environment, the combination of UCS and MapR provide unique capabilities to simplify this architecture.
Cisco UCS servers provide a powerful foundation for running distributed big data/Hadoop MapR clusters with unparalleled performance, availability, and manageability at the hardware level. The MapR Distribution including Apache Hadoop provides similar robustness at the software level, creating a rock-solid distributed platform for many flavors of IoE applications.
With the advent of IoE applications, data often originates at the “edge” of a system’s network: devices such as routers and switches in one data center generate log data locally, while devices in other data centers do the same, creating silos of log data. For applications built around this log data to react in real time, they need to access that data as quickly as possible. Often those applications also want to aggregate the data across data centers in order to make decisions quickly, while keeping the data local to the originating data center. It may be important to keep the data local for legal and regulatory reasons, as well as for efficient local queries. With Cisco UCS servers, MapR Data Placement Control, and Apache Drill, this becomes a simple task.
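As a hedged illustration of the aggregate-globally, store-locally pattern, the sketch below composes a single Drill SQL statement that reads router logs from two sites through separate storage-plugin workspaces and wraps it in the payload shape Drill's REST endpoint (POST /query.json) expects. The plugin names (dfs_dc1, dfs_dc2) and paths are placeholders, not part of any referenced product configuration.

```python
import json

# One SQL statement spans both sites; each site's data stays on its own
# cluster (e.g. pinned there via MapR data-placement policies) and is
# addressed through a site-specific storage plugin/workspace.
sql = """
SELECT site, COUNT(*) AS events
FROM (
  SELECT 'dc1' AS site FROM dfs_dc1.logs.`router/*.json`
  UNION ALL
  SELECT 'dc2' AS site FROM dfs_dc2.logs.`router/*.json`
) t
GROUP BY site
"""

def drill_query_payload(sql_text):
    """Payload shape accepted by Drill's REST query endpoint."""
    return {"queryType": "SQL", "query": sql_text.strip()}

payload = drill_query_payload(sql)
print(json.dumps(payload)[:80])
# In a live deployment this JSON would be POSTed to
# http://<drillbit>:8047/query.json; here we only build the request.
```

The decision logic sees one aggregated result set, while each data center's raw logs never leave it.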