On January 13, 2015, Cisco will celebrate a year of industry adoption of Application Centric Infrastructure (ACI), a groundbreaking SDN architecture. The celebration will include a public webcast with ACI customers and ecosystem partners describing a range of new solutions that dramatically simplify data center and cloud deployments. One of the inaugural partners was Red Hat, the leading provider of open source solutions for enterprise IT. Since the ACI launch, Cisco and Red Hat have been working on extending the application policy model at the heart of Application Centric Infrastructure to OpenStack. Here is a preview of the Red Hat solution.
Cloud deployments of new mobile, social, and big data applications need a dynamic infrastructure that can handle higher demand peaks, more distributed users, varying performance needs, 24×7 global usage, and evolving security threats. These applications need a mix of virtualized and dedicated “bare-metal” resources to run economically at scale with performance and availability.
To meet these needs, Cisco, Red Hat, and other companies have jointly developed Group Based Policy: a common open policy language that expresses the intent of business and application teams separately from the language of the infrastructure. Group Based Policy offers continuous policy governance while applications are deployed, scaled, recovered, and managed for threats. It is ideal for rapidly deploying elastic, secure applications such as CRM, eCommerce, big data, financial reporting, and corporate e-mail through OpenStack.
IT organizations can get several benefits:
- Dramatically accelerate deployment of business applications and services through OpenStack.
- Maintain enforcement of business and application policies during frequent changes to scale, tenants, and the infrastructure.
- Simplify DevOps release automation, the process of moving application changes to production.
- Preserve user intent and business policies across different infrastructures, making it ideal for hybrid cloud.
- Prevent shadow IT by empowering internal IT to match the agility of the public cloud while complying with corporate controls.
Network administrators can get additional benefits when Group Based Policy is combined with the full capabilities of Cisco Application Centric Infrastructure, including seamless management of heterogeneous infrastructure, policy based network automation, real-time troubleshooting and performance optimization.
Group Based Policy (GBP) is implemented through a new APIC Group Based Policy plug-in for OpenStack Neutron, the networking service. Since networking connects all compute and storage endpoints in the data center, it is possible to define groups of endpoints through Neutron that share the same application requirements, regardless of how they are connected. In addition, GBP:
- Captures dependencies between applications, tiers and infrastructure so that respective teams can evolve underlying capabilities independently.
- Works with multiple SDN controllers and is extensible to multi-hypervisor infrastructures.
- Brings application policy-based provisioning to existing networking plug-ins.
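To make the model concrete, here is a minimal conceptual sketch of how endpoint groups and contracts relate in a group-based policy. This is purely illustrative: the class names (`PolicyGroup`, `Contract`, `Classifier`) are hypothetical and do not reflect the actual Neutron GBP plug-in API.

```python
# Conceptual sketch of the Group Based Policy model (illustrative only;
# these class names are hypothetical, not the Neutron GBP plug-in API).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Classifier:
    """Describes the kind of traffic a rule applies to."""
    protocol: str
    port: int

@dataclass
class Contract:
    """A named set of permitted traffic between groups."""
    name: str
    classifiers: List[Classifier] = field(default_factory=list)

@dataclass
class PolicyGroup:
    """A group of endpoints sharing the same application requirements,
    independent of how they are physically connected."""
    name: str
    provides: List[Contract] = field(default_factory=list)
    consumes: List[Contract] = field(default_factory=list)

def allowed(src: PolicyGroup, dst: PolicyGroup, proto: str, port: int) -> bool:
    """Traffic is permitted only if the destination provides a contract
    that the source consumes, matching the protocol and port."""
    for contract in dst.provides:
        if contract in src.consumes:
            for c in contract.classifiers:
                if c.protocol == proto and c.port == port:
                    return True
    return False

# Example: a web tier consuming a database-access contract.
db_contract = Contract("db-access", [Classifier("tcp", 3306)])
web = PolicyGroup("web-tier", consumes=[db_contract])
db = PolicyGroup("db-tier", provides=[db_contract])

print(allowed(web, db, "tcp", 3306))  # True: the contract permits this traffic
print(allowed(db, web, "tcp", 3306))  # False: web provides no contract
```

The point of the abstraction is that membership in a group, not network location, determines what traffic is allowed, so policies survive scaling and workload mobility unchanged.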
Group Based Policy will be available and supported in the upcoming release of Red Hat Enterprise Linux OpenStack Platform 6. Learn more about Group Based Policy here. And register for Cisco’s webcast on January 13th.
Tags: ACI, Application policy, group-based policy, OpenStack, red hat, Red Hat Enterprise Linux OpenStack Platform
Cisco is adding another important strategic partner to its list of ACI-compliant vendors with the addition of the Check Point Next Generation Security Gateway to the ecosystem. A couple of months ago I wrote about the inherent security architecture in ACI (Security for an Application Centric World), and now the Check Point solutions fit right into that framework as an alternative to Cisco security solutions. Essentially, this means that the ACI controller, APIC, can now configure the application network to include the insertion and provisioning of Check Point virtual and physical security gateways, just as it does other Layer 4-7 application services and security appliances. The availability of the Check Point solutions will offer customers greater choice and flexibility while underscoring the open, multi-vendor approach of ACI.
[Note: Check Point will be participating in our upcoming ACI Webcast event: “Is Your Data Center Ready for the Application Economy”, January 13, 2015, 9 AM PT, Noon ET, featuring ACI customers and several other key ACI technology partners. Register here.]
In scalable, multitenant cloud environments with flexible resource placement, almost every workload must be secured from every other workload, with detailed security policies enabled between workloads in an application network: a concept called micro-segmentation. This level of security policy detail can become tedious to manage on an application-by-application basis. It also can potentially restrict workload mobility and the ways that applications can be deployed in the cloud.
Cisco ACI policies abstract the network, devices, and services into a hierarchical, logical object model. In this model, administrators specify the Layer 4 through Layer 7 services (firewalls, load balancers, etc.) that are applied, the kind of traffic to which they are applied, and the traffic that is permitted. These services can be chained together and are presented to application developers as a single object with simple input and output. Connection of application-tier objects and server objects creates an application network profile (ANP). When this ANP is applied to the network, the devices are told to configure themselves to support it. Tier objects can be groups of hundreds of servers, or just one device; the same policies are applied to all the objects in a single configuration step (see below).
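The hierarchy described above can be sketched as a toy object model: tiers of servers, a chain of Layer 4-7 services, and a profile that renders configuration for every member in one step. All names here (`Tier`, `AppNetworkProfile`, the directive format) are hypothetical illustrations, not the APIC object model or API.

```python
# Toy sketch of an application network profile (ANP); names and the
# directive format are hypothetical, not the actual APIC object model.
from dataclasses import dataclass
from typing import List

@dataclass
class Service:
    """A Layer 4-7 service (firewall, load balancer, ...) in a chain."""
    kind: str

@dataclass
class Tier:
    """An application tier: one device or hundreds of servers; the same
    policy is applied to every member in a single step."""
    name: str
    members: List[str]

@dataclass
class AppNetworkProfile:
    """Connects tiers through a chain of services, presented to the
    application developer as a single object."""
    name: str
    tiers: List[Tier]
    service_chain: List[Service]

    def apply(self) -> List[str]:
        """Render one configuration directive per device: each device is
        'told to configure itself' to support the profile."""
        chain = " -> ".join(s.kind for s in self.service_chain)
        return [
            f"{member}: enforce [{chain}] for {self.name}"
            for tier in self.tiers
            for member in tier.members
        ]

anp = AppNetworkProfile(
    name="ecommerce",
    tiers=[Tier("web", ["web-01", "web-02"]), Tier("db", ["db-01"])],
    service_chain=[Service("firewall"), Service("load-balancer")],
)
for directive in anp.apply():
    print(directive)
```

Note how adding a hundred servers to a tier changes nothing about the policy definition; only the membership list grows, which is what makes the single-step configuration scale.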
The Application Profile Defines Security and Application Policies for Application Networks, and Cisco APIC Manages and Provisions Security Resources in the Fabric, Such as a Check Point Firewall, with the Right Policies for Each Application, at the Right Location
The integration with Check Point Next Generation Security Gateway provides automated security provisioning and a full range of security protections and threat-prevention capabilities in a highly dynamic and agile Cisco ACI environment. Check Point Security Gateways can be deployed as physical or virtual solutions and address today’s ever-changing threat landscape with a modular and dynamic security architecture.
Tags: APIC, application centric infrastructure, Check Point, Cisco ACI, IPS, Nexus 9000, security
Welcome to 2015!
If you are like me, the New Year is a great opportunity to assess where I am and where I am going. So let’s do that for Cisco Data Virtualization.
2014 – A Year of Exciting New Products
Before looking ahead at 2015, let’s first take a look at 2014 highlights.
2014 was an incredible year for new Cisco Data Virtualization products:
- In 2014, we shipped Cisco Information Server (CIS) 7.0, a major release of our flagship data virtualization offering. CIS 7.0 extended data virtualization to new audiences, enabled larger, more-complex deployments and integrated more data sources so our customers can run their businesses more effectively by leveraging all of their data.
- On the big data front, we announced Cisco Big Data Warehouse Expansion, a new offering that combines hardware, software and services to help customers control the costs of their ever-expanding data warehouses by offloading infrequently used data to low-cost big data stores. Analytics are enriched as more data is retained and all data remains accessible.
- And with our December 11, 2014, Connected Analytics Portfolio announcement, Cisco added a rich suite of analytics solutions that help organizations capture insights that create new opportunities, simplify business operations, enhance the customer experience, and resolve potential threats.
2014 Adoption Success
2014 was full of amazing Customer Adoption successes as well. The individuals who drove a number of these successes were recognized with Data Virtualization Leadership Awards at the fifth annual Data Virtualization Day on October 1st at New York’s Waldorf Astoria.
- Paul Dzacko, Lead Architect, Risk Systems, BMO and James Evans, Architect & Project Manager, Client Portal, HSBC were awarded Data Virtualization Champion Awards in recognition of their leadership in consistently achieving and promoting data virtualization’s value across their organizations and the broader data integration market.
- Victor Campbell, Principal Architect, Long Island Power Authority (PSEG) received the High Impact Award in recognition of data virtualization leadership in an environment where the result was high impact and critical to the business.
- Pratima Botcha, Sr. Technical Architect, Information Technology, AT&T Services was given the High Impact Award for her work in enhancing business agility through use of data virtualization technology and methods, rapidly establishing a path for high value across the organization.
2015 Will Be Bigger Than Ever
In 2015 the pace of change across the enterprise data landscape will continue to accelerate, disrupting how organizations compete.
The biggest driver of change is massive, messy data everywhere, spanning many sources (cloud, data warehouses, devices) and formats (video, voice, text, and images). This distributed data landscape increasingly relies on data virtualization to bring order to the chaos.
To meet these needs in 2015 and beyond, Cisco Data Virtualization’s strategy is multi-faceted, including:
- Simplify Use And Adoption – To provide agile data access to today’s self-service business users, Cisco will expand beyond Business Directory, which was data virtualization’s first offering developed exclusively for business users.
- Expand Data Virtualization’s Core – To address more sources, higher volumes, and more, Cisco will continue to broaden our platform to scale reliably for the largest workloads and most complex requirements.
- Leverage Cisco Technology – Fortunate to be part of Cisco, we will take advantage of a broad range of Cisco offerings including our Nexus interconnect capabilities, UCS servers, software such as Tidal Enterprise Scheduler and more.
- Bring Data Virtualization’s Benefits to Big Data – Cisco Data Virtualization will continue to strengthen big data deployments with significant data abstraction, federation, directory, delivery, security and governance functions.
- Enable Intercloud and Internet of Everything (IoE) – As Cisco pioneers new Intercloud and IoE solutions, Cisco Data Virtualization capabilities will expand to meet these new challenges.
Beyond capability advancements, Cisco will greatly expand our coverage, partnerships, Customer Advisory Community and more as we go to market globally. Stay tuned to this blog throughout 2015 as we make formal announcements.
Happy New Year!
Join the Conversation
Follow us @CiscoDataVirt.
Learn More from My Colleagues
Check out the blogs of Mala Anand, Mike Flannagan and Nicola Villa to learn more.
Tags: big data warehouse extension, connected analytics, data virtualization, InterCloud, Internet of Everything
TDI and Converged Infrastructure for SAP: The New Frontier
Join a Google Hangout on January 14th to discover how TDI and integrated infrastructure have simplified installation and support for SAP deployments. The link to register is below. Hope to see you there!
SAP announced Tailored Data Center Integration with storage for SAP HANA one year ago. This year, just prior to SAP TechEd in Las Vegas, SAP released Tailored Data Center Integration for networking. What does this mean for a customer who might be installing SAP HANA today?
In Part 1 of this blog series, I talked about how data integration provides a critical foundation for capturing actionable insights that generate improved outcomes. Now, in Part 2, I’ll focus on the two other challenges that must be met to extract value from data: 1) automating the collection of data, and 2) analyzing the data to effectively identify business-relevant, actionable insights. This is where things, data, processes, and people come together.
Let’s start with automation.
After IoT data is captured and integrated, organizations must get the data to the right place at the right time (and to the right people) so it can be analyzed. This includes automatically assessing the data to determine whether it needs to be moved to the “center” (a data center or the cloud) or analyzed where it is, at the “edge” of the network (“moving the analytics to the data”).
The edge of the network is essentially the place where data is captured; it could effectively be anywhere, such as on a manufacturing plant floor, in a retail store, or on a moving vehicle. The “center” of the network, by contrast, refers to offsite locations such as the cloud and remote data centers — places where data is transmitted for offsite storage and processing, usually for traditional reporting purposes.
In “edge computing,” therefore, applications, data, and services are pushed to the logical extremes of a network — away from the center — to enable analytics knowledge generation and immediate decision-making at the source of the data.
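The edge-versus-center decision described above can be sketched as a simple routing function. The thresholds and parameter names here are hypothetical, chosen only to illustrate the trade-off, not drawn from any Cisco product.

```python
# Toy sketch of the edge-vs-center analytics decision (thresholds and
# parameter names are hypothetical, for illustration only).

def route_reading(latency_budget_ms: float, payload_bytes: int) -> str:
    """Analyze at the edge when a decision is needed immediately or the
    data is too bulky to ship; otherwise send it to the center for
    storage and traditional reporting."""
    EDGE_LATENCY_MS = 100          # decisions faster than this stay local
    MAX_UPLINK_BYTES = 1_000_000   # larger payloads are summarized at the edge
    if latency_budget_ms < EDGE_LATENCY_MS or payload_bytes > MAX_UPLINK_BYTES:
        return "edge"
    return "center"

# A safety shutoff on a plant floor cannot wait for a round trip:
print(route_reading(latency_budget_ms=20, payload_bytes=512))     # edge
# A nightly sensor summary for reporting can travel to the center:
print(route_reading(latency_budget_ms=5000, payload_bytes=2048))  # center
```

In practice the assessment is richer (connectivity, cost, privacy), but the shape of the decision, "move the analytics to the data" when latency or volume demands it, is the same.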
Tags: analytics, connected analytics, data, data analytics, edge, edge analytics, edge computing, future workforce, Internet of Everything, internet of things, IoE, IoT