What if you could give everyone in your organization the flexibility and freedom to work securely anywhere in the world and on any device? What productivity gains would your company see as a result? What efficiencies or cost savings might your IT department realize by moving desktops to your data center and managing these virtual workstations through a single pane of glass?
Our Cisco UCS team is excited to present the new Maxwell-generation NVIDIA Tesla M6 GPU for the Cisco UCS B200M4 Blade and the NVIDIA Tesla M60 GPU for Cisco 2U rack servers. Cisco and NVIDIA have joined forces to deliver this new graphics solution. Combining the security, reliability and manageability of Cisco UCS with NVIDIA’s GRID technology, we’re able to deliver the performance and speed needed to run high-end applications on virtual desktops. Better still, you have two form factor options to fit your organization’s data center footprint.
The M60 rack GPU is supported with UCS Manager 3.1(1) and later and Cisco Integrated Management Controller (CIMC) 2.0(9) and later; the M6 blade GPU is supported with UCS Manager 2.2(7) and 3.1(1) and later.
With Cisco UCS and NVIDIA GRID, you can now expand your virtualization footprint without compromising performance or user experience, while also increasing security. This means you can empower your workforce to create from any location in the world, on any device, with ease and flexibility.
The new Cisco and NVIDIA M6 blade GPU solution is fully integrated with the flagship B200M4 server, supports all CPU configurations, and delivers performance on par with the NVIDIA GRID K2 GPU at less than half the power profile! With two M60 GPUs in a Cisco UCS C240M4, you can now enable high-density NVIDIA Tesla compute and GRID 2.0 VDI user consolidation: over 8,000 CUDA cores and 32 GB of GPU memory for up to 64 GPU-accelerated virtual desktop users.
This is especially exciting for organizations in the oil and gas, manufacturing and design industries since, historically, this type of work demanded that high-end applications like those from Esri, AutoCAD, Petrel and Siemens be run on-site on dedicated workstations. But now, these types of applications can be delivered virtually from your data center to any device.
Take an airplane manufacturer, for example. With a follow-the-sun working model, this organization can now empower its employees to design from anywhere in the world on the device they prefer, all while working on the same application in real time. This helps the company save time and money, and fuels rapid innovation. Last but not least, this is all done while maintaining security for the organization.
Empower your employees to create flexibly, securely, from any device and any location in the world. Now possible with Cisco UCS and NVIDIA GRID.
Productivity, mobility, security, and flexibility for all. This changes everything.
Learn how others are transforming their data centers with Cisco UCS.
Tags: AutoCad, B200M4, Cisco UCS, Cloud Computing, GPU, innovation, mobility, NVIDIA, NVIDIA GRID cards, partner, technology, Tesla, UCS, UCS Manager, vdi, virtualization
Being fast is important this time of year.
X-Wing fighters in “Star Wars: The Force Awakens” are fast.
Avoiding that overly excited lightsaber-wielding fan in line requires you to be fast.
Holiday shoppers are snatching up deals fast.
Retailers with transaction spikes need to add infrastructure capacity fast.
Your customers want their IT Infrastructure services fast…and Application Centric Infrastructure (ACI) helps deliver that speed.
This IDC report shows how Pulsant – a UK-based IT infrastructure services provider – delivers services fast with ACI. It also quantifies the returns on that speed and other benefits. In some ways, their story is like that of many customers – they need to deliver IT services faster, and they need to do more with less…you know the drill. And if you are using ACI, you also know how to address those issues. If not, take a couple of minutes and check out the report. In it, Martin Lipka, Head of Connectivity Architecture at Pulsant, addresses a number of interesting issues, and IDC helps to quantify them. Check out how Pulsant is:
- Onboarding customers faster with the “simplified automation” ACI provides
- Growing its customer base without needing to add a commensurate number of network engineers
- Reducing the frequency of misconfigurations and improving the security of its services
In the report, Martin explains how “automation and repeatable processes enabled by Cisco ACI have benefited his company by reducing the time needed to provision network resources and speeding up deployment cycles.” For example, “Pulsant needed an average of 7–14 days before moving to Cisco ACI to deliver a bespoke cloud service to a customer, whereas it now needs only 2–3 days.” At the back end, when those services are no longer needed, “the network process of decommissioning a customer and cleansing the configuration has gone from taking hours to seconds thanks to Cisco ACI’s built-in automation.”
ACI helps Pulsant deliver services fast. ACI also delivered a return fast – IDC’s ROI analysis showed a payback period of under seven months.
In summary, if you are looking to deploy services fast, tear them down fast, get a return fast – check out the report and check out ACI.
And, oh yeah, as a public safety message, please let’s not swing those light sabers too fast tonight. May the force be with you…
Photo courtesy of commons.wikimedia.org
Tags: ACI, Agile IT, cloud, Cloud Computing, data center, devops, Fast IT
Yesterday, Cisco announced a new software release for ACI. If you are looking to automate IT, or build out your cloud environment, and want to do so in an open fashion that provides a lot of flexibility – then you’ll probably be interested.
Why? The new ACI release:
- Makes managing and securing your cloud environment easier;
- Provides openness, expanding customer choice; and
- Delivers operational flexibility
OK, so what does this actually mean?
- Makes managing and securing your cloud environment easier
Three of the most popular cloud management tools are Microsoft Azure Pack, OpenStack and VMware vRealize. Earlier this year, we announced Microsoft Azure Pack ACI integration. With this new ACI release, we integrate ACI with OpenStack and vRealize as well. (More details are here.) This means that if you need to, say, provision a virtual workload in vCenter, ACI automagically orchestrates things to match computing resources and networking infrastructure. So you can enjoy the policy-based automation and all the other benefits of ACI regardless of which of these tools you use to manage your cloud environment.
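To make the policy-based automation above a bit more concrete, here is a minimal sketch of the kind of JSON payloads a script could send to the APIC REST API to create a tenant and an application EPG. The APIC hostname, tenant, application profile and EPG names are hypothetical placeholders, and the actual POST (including authentication via /api/aaaLogin.json) is left as a comment; consult the APIC REST API guide for the full workflow.

```python
import json

# Hypothetical APIC address -- replace with your controller.
APIC_URL = "https://apic.example.com"


def tenant_payload(tenant: str) -> dict:
    """Payload for POST {APIC_URL}/api/mo/uni.json that creates a tenant."""
    return {"fvTenant": {"attributes": {"name": tenant}}}


def epg_payload(app_profile: str, epg: str) -> dict:
    """Payload for an application profile containing one application EPG."""
    return {
        "fvAp": {
            "attributes": {"name": app_profile},
            "children": [
                {"fvAEPg": {"attributes": {"name": epg}}}
            ],
        }
    }


if __name__ == "__main__":
    # A real client would first authenticate (POST /api/aaaLogin.json)
    # and then POST these payloads to the APIC; here we just print them.
    print(json.dumps(tenant_payload("Cloud-Tenant"), indent=2))
    print(json.dumps(epg_payload("WebApp", "web-servers"), indent=2))
```

Whichever cloud management tool sits on top, it is ultimately driving this same object model, which is why the policy follows the workload regardless of the front end.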
This also means OpenStack users can now create and manage their own virtual networks, extending ACI policy directly into the hypervisor with a hardware-accelerated, fully distributed OpenStack networking solution – the only one available that integrates both physical and virtual environments.
To more easily and completely secure these environments, the new release provides micro-segmentation support for VMware VDS, Microsoft Hyper-V virtual switch, and bare-metal endpoints. Essentially, this means more granular enforcement of security policies. These can be based on numerous criteria tied to attributes of the network (e.g., IP address) or of the virtual machine (e.g., VM identifier or name). There are additional capabilities that can, for example, disable communication between devices within a policy group (intra-EPG, for those more familiar with ACI) – useful in thwarting lateral expansion of attacks.
- Provides openness, expanding customer choice
Piggybacking off some comments above, it’s worth noting that since ACI’s inception, one of its differentiators has been the ability to integrate physical servers as well as virtual machines, and to apply policy consistently across them. Well, now there’s a new kid on the block, as the industry observes an increasingly popular trend to use containers as another way of operating applications. As part of this announcement, we are extending ACI support to include Docker containers, in addition to VMs and bare-metal servers. This is done through Project Contiv, an open source project whose Docker network plugin allows, among other things, automatic configuration of Docker hosts to integrate with ACI. Check out the details in this video and/or this white paper. Network Computing commented here that:
“Given all the hubbub in the industry over Docker, ACI’s new Docker container support is noteworthy.”
Another way this new release is driving openness and providing more choice for customers is around L4-7 services. ACI now supports service insertion and chaining for any service device. So, customers can leverage their existing model of deploying and operating their L4-L7 device, while automating the network connectivity. This is in addition to, not instead of, the device package model, which provides for more comprehensive ‘soup to nuts’ automation. Speaking of which, as part of this announcement, several new partners also joined the ACI Ecosystem. This video provides some insight into how some of them automate your applications.
- Delivers operational flexibility
The new release has a number of tools that create more flexible operating environments. A quick rundown includes the multi-site app, which enables policy-driven automation across multiple data centers, providing enhanced application mobility and disaster recovery. In short, this means you can run ACI in two different data centers and extend the policy across them. Other tools provide the ability to do configuration rollback, as well as an NX-OS-style CLI – for the CLI junkie who wants to run the entire ACI fabric as a single switch. There are some other cool nuggets in here as well, like a heat map that provides real-time visibility into system health.
Clayton Weise, Director of Cloud Services at KeyInfo, summed it up best when he said:
“ACI is the direction we’re going to go because it gives us the best flexibility.” (Read the entire Network World story here.)
In summary, this new release adds capabilities that will help you more effectively manage and secure your cloud environment, as well as leverage the benefits of both openness and operational flexibility.
Tags: #CiscoACI, #ciscodatacenter, ACI, API, cloud, Cloud Computing, containers, data center, docker, L4-7 Services, Linux Containers, Open, SDN, security
ITD and RISE are now part of the CCIE Data Center exam:
Intelligent Traffic Director (ITD) is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering and services-insertion solution on the Nexus 5k/6k/7k/9k series of switches.
The exam blueprint domains, weighted separately for the written and lab exams, include:

- 1.0 Cisco Data Center L2/L3 Technologies
- 2.0 Cisco Data Center Network Services
  - 2.1 Design, Implement and Troubleshoot Service Insertion and Redirection
    - 2.1.a Design, Implement and Troubleshoot Service Insertion and Redirection, for example LB, vPath, ITD, RISE
  - 2.2 Design, Implement and Troubleshoot Network Services
    - 2.2.a Design, Implement and Troubleshoot Network Services, for example policy-driven L4-L7 services
- 3.0 Data Center Storage Networking and Compute
- 4.0 Data Center Automation and Orchestration
- 5.0 Data Center Fabric Infrastructure
- 6.0 Evolving Technologies
To learn about RISE (Remote Integrated Services Engine), please see: http://www.cisco.com/go/rise
To learn about ITD (Intelligent Traffic Director), please see: http://www.cisco.com/go/itd
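For a feel of what ITD looks like in practice, here is a minimal configuration sketch for a Nexus switch: a device group of server nodes with an ICMP probe, and an ITD service that load-balances ingress traffic across them by source IP. The group and service names, node addresses and interface are placeholders, and exact syntax varies by platform and release, so verify against the ITD configuration guide for your switch.

```
feature itd

itd device-group WEB-SERVERS
  probe icmp
  node ip 10.10.10.11
  node ip 10.10.10.12

itd WEB-SERVICE
  device-group WEB-SERVERS
  ingress interface Ethernet1/1
  load-balance method src ip
  no shutdown
```

Because the steering is done in the switching ASICs, this scales with the fabric rather than with a dedicated appliance – which is the point of the "multi-terabit" claim above.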
Tags: #BestofInterop, #CiscoITD, #CiscoLive2015, #CLUS, ACE, ACI, ASA, ASA 1000V Cloud Firewall, best of interop, Best of Interop 2015, Best of Interop Finalist, Big Data, cache engines, CCIE, Cisco, Cisco Nexus, Cisco Nexus 5600, Cisco Nexus 7000, Cisco Nexus 9000, Cisco Nexus Switches, Cisco Prime NAM, Cisco WAAS, ciscolive, citrix, cloud, Cloud Computing, container, data center, Data Center container, F5, FirePOWER, Imperva, Imperva SecureSphere WAF, innovation, interop, IPS, ITD, load balancer, Load Balancing, nexus, Nexus 7000, NFV, SDN, security, server load balancer, Service Provider, Sourcefire, video, Web Application Firewall
If you come to Cisco’s corporate headquarters, chances are good (especially if you’re traveling internationally) that you will fly into SFO, the airport code for San Francisco International Airport. This point has virtually nothing to do with the rest of what you’re about to read…other than the fact that those same three letters – SFO – represent three key takeaways from an outstanding InfoWorld product review of Application Centric Infrastructure (ACI). When you think about ACI, think about SFO:
Simple. Fast. Open.
I won’t spend much space on this, as I’d much rather you go and read Paul Venezia’s comprehensive and detailed look at ACI. But I do want to highlight a few brief comments on how ACI is Simple, Fast and Open.
“Implementing ACI is surprisingly simple, even in the case of large-scale buildouts.”
“Assuming the cabling is complete, the entire process of standing up an ACI fabric might take only a few minutes from start to finish.”
“Not only is ACI an extremely open architecture…”
“Cisco is actively supporting a community gathering around ACI, and the community is already reaping the rewards of Cisco’s open stance.”
“This is only one example of ACI’s openness and easy scriptability. The upshot is it will be straightforward to integrate ACI into custom automation and management solutions, such as centralized admin tools and self-service portals.”
“This should be made abundantly clear: This isn’t an API bolted onto the supplied administration tools, or running alongside the solution. The API is the administration tool.”
Simple. Fast. Open.
Whether you’re traveling to Northern California or not, if you’re considering a better way to do networking, think about SFO and ACI.
Photo courtesy of wikimedia.org
Tags: ACI, cloud, Cloud Computing, data center, Digital transformation, SDN, virtualization