Unified Network Services (UNS) is one of the three architectural pillars of Cisco’s Data Center Fabric, along with Unified Fabric and Unified Computing Services (UCS). UNS represents our portfolio of Layer 4-7 application services, including security, WAN optimization, application controllers, network monitoring and orchestration. This TechWise TV episode is a great overview of the vision behind UNS and the benefits of pulling this all together, especially for virtualized and cloud environments.
The Unified Network Services (UNS) portfolio of Layer 4-7 services (such as ACE and WAAS) also includes Cisco’s data center security solutions. A critical part of that security portfolio is our virtualization-aware firewall solution, Virtual Security Gateway (VSG). In a series of upcoming blog posts, I’ll be sharing a few use case scenarios that our customers are implementing with VSG.
For those of you new to VSG, I’ll point out that VSG’s role is to act as a virtual firewall between zones of virtual machines. Isolating traffic between VM zones was very challenging prior to VSG because: 1) security policies must be enforced between VMs running on the same server or the same virtual switch, where there is no place to insert a physical firewall; 2) VMs move around the network, and the security policies enforced in the firewall must follow each VM; and 3) segregation of duties between the security and application server teams must be maintained for compliance purposes, even when security is enforced inside the virtual server.
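To make the zone idea concrete, here is a minimal sketch of zone-based policy evaluation. It assumes a VSG-style model in which rules key on a VM's zone membership rather than on IP addresses; the zone names, VM names, and rule structure are hypothetical illustrations, not VSG's actual policy language.

```python
# Hypothetical zone membership table. In a real deployment this binding
# would come from the virtualization layer, not a hard-coded dict.
zone_of_vm = {
    "vm-web-1": "web",
    "vm-db-1": "database",
}

# Ordered rules as (source zone, destination zone, action) tuples.
rules = [
    ("web", "database", "permit"),
    ("web", "web", "permit"),
]

def evaluate(src_vm, dst_vm, default="deny"):
    """Return the action for traffic between two VMs, deny by default.

    Because the lookup keys on zone membership rather than IP address,
    the policy follows the VM even after a live migration changes its
    host or addressing -- the property described above.
    """
    pair = (zone_of_vm[src_vm], zone_of_vm[dst_vm])
    for src_zone, dst_zone, action in rules:
        if (src_zone, dst_zone) == pair:
            return action
    return default
```

Note the default-deny stance: traffic between zones with no matching rule (database back to web, for example) is dropped, which is the conservative choice for inter-zone isolation.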
If you are talking Microsoft SharePoint 2010, then chances are you have discussed load balancing at some point. Well, let’s just start with the basics. If we have more than one WFE (Web Front End) server, we are going to need a way to balance requests.
In its simplest form, load balancing is a methodology for distributing workload across multiple compute, network, or storage resources. We recently published a SharePoint 2010 on FlexPod for VMware Cisco Validated Design, which includes a hardware load balancer, the Application Control Engine (ACE30) module.
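As a minimal illustration of the idea, here is a sketch of the simplest common distribution method, round-robin, spreading requests across a pool of WFE servers. The server names are illustrative placeholders, not from the Cisco Validated Design; a hardware load balancer like ACE also supports more sophisticated predictors (least connections, response time, and so on).

```python
from itertools import cycle

# Hypothetical WFE pool; names are illustrative only.
wfe_servers = ["wfe-01", "wfe-02", "wfe-03"]

def make_round_robin(servers):
    """Return a function that hands out servers in strict rotation."""
    rotation = cycle(servers)
    return lambda: next(rotation)

next_server = make_round_robin(wfe_servers)

# Six incoming requests land evenly across the three WFEs.
assignments = [next_server() for _ in range(6)]
```

Round-robin assumes all servers are equally capable and every request costs about the same; when that doesn't hold, weighted or load-aware predictors do better.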
With an ever-growing mobile and distributed workforce, application developers are being tasked with building applications that this global workforce can also access remotely. Developers, often with only a basic understanding of networking, tend to assume the network has no boundaries and that applications perform optimally regardless of the mode of access. At the same time, cloud computing is enabling applications to be consolidated into centralized, virtualized data centers, further increasing the distance between applications and the people accessing them. Network architects are likewise being challenged by current network designs for this application deployment and delivery model. Available bandwidth is being taxed as an ever-growing application portfolio competes for network resources, all while users expect a satisfying experience across the network without boundaries. This application delivery model also demands better visibility and control, WAN optimization, and the agility to rapidly deploy and manage enterprise applications.
The Cisco Application Velocity solution addresses the challenges associated with delivering and consuming enterprise applications over the network without boundaries. It is one of the five services in Cisco’s Borderless Network Architecture and is composed of innovative Cisco technologies that help IT professionals meet or exceed business SLAs, maximize the user experience, optimize resource utilization, and increase reliability.
Tags: ACE, Cisco, Cisco ISR G2 Services-Ready Engine, Cisco Nexus, Cisco UCS, Cisco WAAS, cloud, Exchange 2010, NBAR, netflow, Oracle E-Business Suite, PfR, QoS, Sharepoint, SQL Server, UCS Express, virtualization, VMware, waas, WAAS Express
Today I want to bring up a DCI use case that I’ve been thinking about: capacity expansion. As you know, the purpose of DCI is to connect two or more data centers together so that they can share resources and deliver services. The capacity expansion use case covers temporary traffic bursts (cloud bursts), whether planned or unplanned, maintenance windows, migrations, or really any temporary event that requires additional service capacity.
To start addressing the challenge of meeting these planned and unplanned cloud-burst and capacity expansion requirements, check out the recently announced ACE + OTV feature called Dynamic Workload Scaling (DWS).
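The decision logic behind this kind of bursting can be sketched simply: while the local VM pool has headroom, keep traffic local for locality; once local utilization crosses a threshold, start steering new connections to VMs in the remote data center as well, reached over the DCI link. The threshold, field names, and function below are illustrative assumptions for explanation, not the actual ACE/OTV implementation or API.

```python
# Start bursting to the remote DC above 80% average local load
# (hypothetical threshold, chosen for illustration).
BURST_THRESHOLD = 0.80

def eligible_pool(local_vms, remote_vms, threshold=BURST_THRESHOLD):
    """Return the names of VMs that should receive new connections.

    local_vms / remote_vms: lists of (name, utilization) tuples,
    where utilization is a 0.0-1.0 load measure.
    """
    avg_local = sum(u for _, u in local_vms) / len(local_vms)
    if avg_local > threshold:
        # Burst: include remote-DC VMs reached over the DCI link.
        return [name for name, _ in local_vms + remote_vms]
    # Normal operation: keep traffic local.
    return [name for name, _ in local_vms]
```

When the burst subsides and local utilization drops back under the threshold, new connections again stay local, so the remote capacity is only borrowed for the duration of the event.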
Tags: ACE, Burst, Capacity Expansion, Cisco, cloud, Cloud Burst, data center, Data Center Interconnect, DC, DCI, DWS, Dynamic Workload Scaling, locality, Nexus 7000, OTV, SASU, Systems Architecture and Strategy Unit, virtual machine, VM, VM Locality