
Summary – Fast IT: Sourcing Disruptive Innovation

The explosion of network connections among people, process, data, and things, now called the Internet of Everything (IoE), is the driver behind much of the disruption and change we see in all industries. It is making innovation more accessible and affordable, while presenting enormous opportunities.

At the same time, IT organizations are contending with significant challenges. Operational costs are rising as budgets fall. Pervasive mobility and an explosion in connected devices are intensifying complexity. Business users are bypassing IT to access cloud-based services while new security threats arise daily. These conditions can stand in the way of greater innovation and agility, and prevent companies from capturing the opportunities in the IoE economy.

Fast IT addresses the following core areas across IT:

  • Simplifying the infrastructure across silos and driving automation to reduce operational costs
  • Using strategically automated policy to build agility and intelligence to fuel growth and respond to changing conditions
  • Connecting the right people to the right information and process at the right time
  • Evolving security to defend against attacks before and while they happen, and to run analysis after they end

Read the full article Fast IT: Sourcing Disruptive Innovation to learn more. Full study findings can be found here.


UCS M-series Modular Servers – Because wastage just plain hurts!

Last week we announced the UCS M-series Modular Servers. The launch represented the culmination of an exciting journey for us that started two years ago.

In mid-2012, just as UCS B-series blade servers were taking off in a big way, we noticed a group of our customers using our core technology very differently from customers in our primary market, enterprise IT. In our primary market, customers loved UCS's stateless computing model, virtualization benefits, and the converged offerings with our partners EMC and NetApp. In this other category, customers did not consider those same benefits nearly as important. However, UCS Manager's powerful policy engine got them really excited: it gave them a programmatic interface to manage thousands of nodes across dozens of sites globally.
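To make that more concrete, here is a minimal sketch of what managing UCS domains programmatically can look like, using the UCS Manager XML API (aaaLogin / configResolveClass) to pull blade inventory across a domain. The hostname, credentials, and the choice of a simple Python script are illustrative assumptions, not any particular customer's tooling.

```python
# Illustrative sketch only: query blade inventory through the UCS Manager
# XML API. The address and credentials are placeholders; error handling is
# omitted for brevity.
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager endpoint

# Authenticate and obtain a session cookie.
login = requests.post(UCSM, data='<aaaLogin inName="admin" inPassword="secret"/>', verify=False)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# Resolve every managed blade in the domain with a single class query.
query = f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false"/>'
resp = requests.post(UCSM, data=query, verify=False)

for blade in ET.fromstring(resp.text).iter("computeBlade"):
    print(blade.get("dn"), blade.get("operState"))

# Close the session.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}"/>', verify=False)
```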

Curious, I started to visit some of these customers. During one such visit, I was walking through the aisles of their data center and noticed something I had never seen in any of our enterprise IT customers' data centers: every UCS chassis was single-homed to a single Fabric Interconnect. I stopped in my tracks. Really? Isn't that kind of dangerous? What happens if there's a failure? Or if you have to upgrade? The customer explained how a combination of their application architecture and their application instance placement strategy ensured that outages at the rack level could be handled without service disruption. Wow. We had engineered all kinds of resiliency: dual-ported adapters, dual IOMs, dual chassis controllers, clustered Fabric Interconnects... lots and lots of hard engineering work to make our product robust and resilient, and this customer had thrown it all away with one toss. That really hurt. :( Read More »
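As a side note, the essence of that placement strategy can be sketched in a few lines: spread each application's instances across failure domains so that losing any single rack, even one hanging off a single Fabric Interconnect, cannot take a whole service down. The services, racks, and inventory format below are entirely hypothetical; this is just to illustrate the idea.

```python
# Toy illustration (not the customer's actual system): verify that every
# service keeps instances in at least two failure domains (racks), so that
# losing one rack cannot disrupt the service.
from collections import defaultdict

# Hypothetical inventory: (service, instance) -> rack
placements = {
    ("checkout", "i-01"): "rack-a",
    ("checkout", "i-02"): "rack-b",
    ("search",   "i-11"): "rack-a",
    ("search",   "i-12"): "rack-a",   # deliberately violates the policy
}

racks_per_service = defaultdict(set)
for (service, _instance), rack in placements.items():
    racks_per_service[service].add(rack)

for service, racks in racks_per_service.items():
    status = "OK" if len(racks) >= 2 else "AT RISK: single failure domain"
    print(f"{service}: {sorted(racks)} -> {status}")
```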


#EngineersUnplugged S6|Ep8: #UCSGrandSlam Edition!

September 10, 2014 at 9:34 am PST

In this week’s episode of Engineers Unplugged, Cisco’s CTO, Padmasree Warrior (@padmasree), and Satinder Sethi (VP, UCS Product Management and Data Center Solutions) whiteboard the UCS Grand Slam announcement and what it means for customers and for the modern data center. Don’t miss this one!

It wouldn’t be Engineers Unplugged without a unicorn challenge, and Padma and Satinder delivered!


This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


Data Vault and Data Virtualization: Double Agility

Rick van der Lans is data virtualization’s leading independent analyst. So when he writes a new white paper, any enterprise that is struggling to connect all its data (which is pretty much every enterprise) would be wise to check it out.

Rick’s latest is Data Vault and Data Virtualization: Double Agility. In a nutshell, the paper addresses how enterprises can craftily combine the Data Vault approach to modeling enterprise data warehouses with the data virtualization approach for connecting and delivering data.  The result is what Rick calls double agility as each approach accelerates time to solution in complex data environments.

Data Vault Pros and Cons

Adding new data sources such as big data and cloud to an existing data warehouse is difficult. The Data Vault approach provides the extensibility required. This is the first agility.

Unfortunately, from a query and reporting point of view, developing reports directly against a Data Vault-based data warehouse results in complex SQL statements that almost always lead to poor reporting performance. The reason is that Data Vault models distribute data over a large number of tables.

Losing Agility Due to Data Mart Proliferation

To solve the performance problems with Data Vault, many enterprises have built physical data marts that reorganize the data for faster queries.

Unfortunately, valuable time must be spent on designing, optimizing, loading, and managing all these data marts. And any new extensions to the enterprise data warehouse must be re-implemented across the impacted marts.

Data Virtualization Returns the Agility

To avoid the data mart workload, yet retain agile warehouse extensibility, Rick has worked with Netherlands-based system integrator Centennium and Cisco to provide a better, double-agility alternative.

In this new solution, Cisco Data Virtualization, together with a Centennium-defined data modeling technique called SuperNova, replaces all the physical data marts. So no valuable time has to be spent on designing, optimizing, loading, managing, and updating these derived data marts. Data warehouse extensibility is retained, and because reporting is based on virtual rather than physical models, those models are very easy to create and maintain.
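For readers who like to see the pattern in miniature, here is a small, self-contained sketch using SQLite and made-up hub and satellite tables. It shows how a virtual "dimension" defined as a view over a Data Vault model can stand in for a physically designed and loaded data mart; it is only an illustration of the concept, not Cisco Data Virtualization or SuperNova itself.

```python
# Illustrative sketch: a tiny Data Vault (hub + satellite) in SQLite, with a
# virtual "dimension" defined as a view so report queries never touch the
# raw vault tables. All table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Hypothetical Data Vault tables: business keys in the hub,
-- descriptive attributes in the satellite.
CREATE TABLE hub_customer (customer_hk INTEGER PRIMARY KEY, customer_id TEXT);
CREATE TABLE sat_customer (customer_hk INTEGER, load_dt TEXT, name TEXT, city TEXT);

INSERT INTO hub_customer VALUES (1, 'C-100'), (2, 'C-200');
INSERT INTO sat_customer VALUES
  (1, '2014-01-01', 'Acme BV',   'Utrecht'),
  (1, '2014-06-01', 'Acme BV',   'Amsterdam'),  -- newer record wins
  (2, '2014-03-01', 'Globex NV', 'Rotterdam');

-- The "virtual data mart": a view that resolves the latest satellite row
-- per hub key, replacing a physical, separately loaded dimension table.
CREATE VIEW dim_customer AS
SELECT h.customer_id, s.name, s.city
FROM hub_customer h
JOIN sat_customer s ON s.customer_hk = h.customer_hk
WHERE s.load_dt = (SELECT MAX(load_dt) FROM sat_customer
                   WHERE customer_hk = h.customer_hk);
""")

# Reports query the simple virtual model, not the distributed vault tables.
for row in con.execute("SELECT * FROM dim_customer ORDER BY customer_id"):
    print(row)
```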

Meet Rick van der Lans at Data Virtualization Day

To learn more about this innovative solution, as well as data virtualization in general, come to Data Virtualization Day 2014 in New York City on October 1. Rick, along with the equally sharp Barry Devlin, will join me on stage for the Analyst Roundtable. I hope to see you there.

 

Learn More

To learn more about Cisco Data Virtualization, check out our page.

Join the Conversation

Follow us @CiscoDataVirt #DVDNYC


Cisco UCS M4 Compute Platforms: Performance That Matters

Part of last week’s UCS Grand Slam launch event in NYC was the announcement of three new compute platforms: the Cisco UCS C220 M4, C240 M4, and B200 M4. Today, Intel announced the new Intel Xeon E5 v3 family of CPUs that will power these new UCS platforms. Now that the confetti has settled from our brand-new, groundbreaking products, like the M-Series with System Link Technology, UCS Mini with the new UCS 6324 Fabric Interconnect, and the capacity-optimized C3160 Rack Server, this week we can highlight some updates to our core compute platforms.

It’s easy to get caught up in the new platforms; they are the new vehicles that bring the benefits of UCS to new markets, and at a scale that was previously impractical. But it’s important to remember that the UCS two-socket blade and rack servers were the original foundational platforms that brought the benefits of UCS to the data center. In fact, the predecessors to these products propelled UCS to some amazing accomplishments.


So, let’s pull back the covers a bit more on these very capable foundational compute platforms that make up many of the building blocks for the enterprise datacenter.

First, when we began to design the latest version of each product, we set out to follow a few simple rules. Principal among these was to understand what makes each platform so popular and to enhance those elements. For example, the B200 M3 is the best-selling blade server in the product line, mainly due to its amazing versatility and uncompromised feature set, all while maintaining a half-width form factor. Well, the UCS B200 M4 Blade Server is more of the same, and then some. It still delivers uncompromised features like the highest-speed, highest-core-count, and highest-TDP CPUs, maximum memory with 24 DIMM sockets, and industry-leading I/O with support for both 2nd- and 3rd-generation Cisco Virtual Interface Cards (VICs) at up to 80 Gb/s of bandwidth per blade. And all of that can be done simultaneously, with no compromise.

UCS B200 M4 Blade Server

That was easy enough to deliver, but in order to enhance this platform, we looked at the use cases it serves and found that flexibility was the next pivot point for the B200 M4. The addition of Cisco Flexible Storage to the B200 M4 means that customers can now truly scale the storage subsystem to match their needs. Today, many UCS customers take full advantage of true stateless computing and do not use local storage on the blade. For those use cases, it may be appropriate to have no local disks, no local RAID controller, and even no local disk bays. Why pay for, power, and cool what you do not use? For still other applications, not only is local storage critical, but high-performance SSDs with an equally high-performing RAID controller with flash-backed write cache are needed. This is where the Flexible Storage subsystem shines. Either extreme, and the points in between, are covered, with no compromises made elsewhere.

Another tenet of our design philosophy was to focus innovation where the use cases could take advantage of it. Take the UCS C240 M4 Rack Server, for example: its M3 predecessor has found a home in many enterprise workloads, but its main differentiation is its optimization around local storage and I/O. To that end, the C240 M4 has enhanced storage flexibility features that include a modular RAID controller with optional flash-backed write cache, options for up to 24 SFF or 12 LFF front-accessible hot-plug HDDs/SSDs, two additional SFF internal boot drives, and even support for two 2.5” PCIe flash devices in the front drive bays.

UCS C240 M4 Rack Server

The I/O capabilities are also significantly improved, with up to six PCIe slots that can house up to two NVIDIA Kepler GPUs. We also added an mLOM slot, optimized for VIC or 3rd-party network cards, that adds to the embedded GbE NIC ports without using one of the six PCIe slots. The C240 M4 is the ideal platform for I/O- and storage-intensive enterprise bare-metal and virtualized workloads.

The new UCS C220 M4 Rack Server shares a similar compute engine with the ability to also support up to 18 cores per socket and 1.5TB of memory, but it offers an optimization around density without compromising the enterprise performance and feature-set.  So you get the same enterprise class power and versatility in a compact footprint.

UCS C220 M4 Rack Server

These new platforms have already begun leaving their mark, with four new world-record benchmarks aimed at showing real benefit for the workloads that bring tangible value to our customers, on the platforms that customers demand. This is how UCS servers deliver performance that matters.

Look for more information on UCS innovation in this space in the future.