
Cisco Announces Intent to Acquire Memoir Systems

Today, I am pleased to announce Cisco’s intent to acquire privately held Memoir Systems, a company that develops semiconductor memory intellectual property (IP) and tools that enable ASIC vendors to build programmable network switches at ever-increasing speeds. This acquisition will enable the proliferation of affordable, fast memory for existing Cisco switch ASICs and will help advance the ASIC innovation necessary to meet next-generation IT requirements.

Currently in the data center switching market, denser infrastructure and data-intensive workloads are driving demand for higher port density (feeds) and greater bitrates (speeds). At the same time, the accelerating growth of scale-out (non-virtualized) Big Data applications like Hadoop is driving increasing East-West data traffic, furthering the need for greater data center network density. Unfortunately, the physical memory in typical switch ASICs cannot keep pace with these design requirements and, as a result, can become the bottleneck that limits the density and performance of future data center switches.

To help solve the ASIC memory issue, Memoir currently licenses soft-logic IP that speeds up memory access by up to 10 times and reduces the footprint that memory occupies in a typical switch ASIC. As a result, this technology allows the development of switch and router ASICs with speeds, feeds, and costs not typically possible with traditional physical memory design techniques. This differentiation is critically important as port densities and port speeds move from 10G to 40/100G.
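
How can soft logic alone make memory faster? Memoir’s actual designs are proprietary, so what follows is only a toy illustration of the general “algorithmic memory” idea from the literature: emulate two reads per cycle from single-ported SRAM banks by keeping one extra XOR parity bank, so that when two reads collide on the same bank, the second value can be reconstructed from the parity bank and the remaining banks. The Python model below is a sketch under those assumptions, not a description of Memoir’s IP:

# Toy model of "algorithmic memory": two reads per cycle from
# single-ported banks, using one extra XOR parity bank.
# Illustrative only; this is not Memoir's actual design.

class XorCodedMemory:
    def __init__(self, num_banks=4, bank_size=1024):
        self.num_banks = num_banks
        self.bank_size = bank_size
        self.banks = [[0] * bank_size for _ in range(num_banks)]
        # parity[i] holds the XOR of banks[b][i] across all banks b
        self.parity = [0] * bank_size

    def write(self, addr, value):
        b, i = divmod(addr, self.bank_size)
        self.parity[i] ^= self.banks[b][i] ^ value  # keep parity in sync
        self.banks[b][i] = value

    def read2(self, addr_a, addr_b):
        """Serve two reads in one 'cycle', touching each bank at most once."""
        ba, ia = divmod(addr_a, self.bank_size)
        bb, ib = divmod(addr_b, self.bank_size)
        val_a = self.banks[ba][ia]
        if bb != ba:
            # No conflict: the second read hits a different bank.
            val_b = self.banks[bb][ib]
        else:
            # Conflict: bank ba is busy serving read A, so rebuild the
            # second value from the parity bank plus all the other banks.
            val_b = self.parity[ib]
            for b in range(self.num_banks):
                if b != bb:
                    val_b ^= self.banks[b][ib]
        return val_a, val_b

mem = XorCodedMemory()
mem.write(3, 42)
mem.write(7, 99)
print(mem.read2(3, 7))   # both reads hit bank 0: prints (42, 99) via parity

In hardware, the reconstruction is a few XOR gates of soft logic wrapped around commodity single-port SRAM: a small logic and capacity overhead traded for a multiple of the memory bandwidth, which is the spirit of the approach.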

The acquisition of Memoir Systems is expected to close in the first quarter of Cisco’s fiscal year 2015. The Memoir team will report into Cisco’s Insieme Business Unit under Senior Vice President Mario Mazzola.

I look forward to seeing Memoir’s technology used across Cisco’s future ASIC projects. Memoir’s technology and strong team will allow Cisco to continue to innovate at the chip level and advance our ASIC and overall networking strategies.


Cisco UCS: Powering Applications at Every Scale

September 8, 2014 at 3:04 pm PST

If you follow the news in the data center world, you probably noticed a small announcement from Cisco last week regarding the UCS portfolio… :)


To net it out in a simple way, I’ve been telling people that the trail of innovation Cisco has been blazing with UCS just got a lot wider. That’s because this rollout is all about three key vectors that our customers have guided us to expand on.

Here’s a short recap of the event. If you missed it, the replay is available here.

Padma Warrior and Joe Inzerillo discuss how technology is transforming the #MLB fan experience.

We had a stellar lineup at the event in New York. Our CTO, Padma Warrior, headlined and did a fantastic job setting the context for this wave of innovation in the frame of IoE and Fast IT. Paul Perez followed, explaining the sea change occurring in the application landscape and the customer imperatives guiding development of the UCS platform. Finally, Satinder Sethi stepped us through all the new technology we’ve added to the portfolio. Frank Palumbo hosted the event for us in New York, and I think it’s no coincidence he was rewarded later in the day by a thrilling walk-off win by the Yankees. Note that my last link there is to MLB.com, whose CTO, Joe Inzerillo, joined our event to share all the cool fan-experience technology they’re developing.

I’d like to thank our #CiscoChampions for joining us at the event and bringing their unique and (trust me) unfiltered perspective to the news. Another highlight for me was the opportunity to tour the MLB Advanced Media Center with Matt Eastwood of IDC, who joined us in New York to moderate a panel on scale-out computing. Matt, so sorry about the results of the Yankees/Red Sox game…it’s tough to overcome Palumbo-level karma. Having several of our customers and partners at the event really rounded it out, making it a special day for everyone who joined us in New York and in the streaming sessions.

Jim Leach (L) and Tech Field Day panel of Cisco Champions.

To hit on all the details, the team has taken a divide-and-conquer approach here on the blog, as well as on YouTube and our other social media venues. In addition to the links above, here are some of the pieces you can check out to learn more. Scanning the #UCSGrandSlam hashtag on Twitter is another good way to take in the news and reactions.

Padma with panelists discussing Big Data in the IoE.


Why ISVs Must Transform in the SMAC Environment

In today’s era of SMAC (Social, Mobile, Analytics, and Cloud) solutions, pay-per-use licensing, and DevOps software development, independent software vendors (ISVs) are facing major challenges on many fronts. ISVs strive to differentiate themselves from their competitors and gain new customers, as well as retain existing customers and generate additional revenue from them. This shift is happening throughout the software developer market and has surfaced both technological and business changes for ISVs.


Next Generation Applications and Data Analytics

I was speaking with a customer today at VMworld and, unlike many discussions, which are focused on the infrastructure (servers, storage, networking), this one turned primarily on the application. This person was describing to me his need to match the server to a new set of applications he is being asked to support, and then what to do with all the data being generated. With much of the conversation at the show focusing on virtualization of resources, he made the point that consideration of the architecture itself (how servers, storage, and networking are leveraged) was still critical to mapping the requirements of the application back to what that application lives on.

This is a trend we’re seeing more and more. A new breed of applications, and the increasing density of data, is driving a new way of thinking about the underlying infrastructure. Often, these applications are developed internally, leveraging many of the toolkits available on the market today, and delivered through a private or public cloud. These applications can be run from …


Paradigm Shift with Edge Intelligence

In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.

I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how the new computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, messing with meanings entrenched in our natural language.

We all laughed and then the conversation grew deeper:

  • Big data is very difficult to move around: it takes energy, time, and bandwidth, and is hence expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
  • As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge; I coined this the Moore-Nielsen paradigm (described in my talk as well): data gets accumulated at the edges faster than the network can push it into the core (see the back-of-the-envelope sketch after this list).
  • Therefore, big data accumulated at the edge will attract applications (little data or procedural code), so apps will move to data, not the other way around, behaving as if data has “gravity”.
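
To make the impedance mismatch concrete, here is a back-of-the-envelope sketch. The growth rates are illustrative assumptions (edge data roughly doubling every 18 months; bandwidth growing about 50% per year per Nielsen’s law), not measurements:

# Back-of-the-envelope: edge data vs. network capacity, normalized to
# 1.0 in year zero. The growth rates are illustrative assumptions.

DATA_GROWTH = 2 ** (12 / 18)   # data doubles every ~18 months (~1.59x/yr)
NET_GROWTH = 1.5               # Nielsen's law: ~50% more bandwidth/yr

data = net = 1.0
for year in range(11):
    print(f"year {year:2d}: data {data:6.1f}x  network {net:6.1f}x  "
          f"backlog ratio {data / net:4.2f}")
    data *= DATA_GROWTH
    net *= NET_GROWTH

The gap looks small in any single year, but it never closes and it compounds, and that is before counting the tens of billions of new devices that multiply the data term again.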

Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices is pitted against both the laws of physics and Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe’s decentralization to client-server, which in turn centralized into the cloud): the move to a highly decentralized compute model, where the intelligence is shifting to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
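
The economics behind “apps move to data” are easy to sanity-check. With made-up but plausible numbers (10 TB accumulated at an edge site, a 50 MB application bundle, a 100 Mb/s uplink), shipping the code wins by five orders of magnitude:

# Moving data to a central cloud vs. shipping code to the edge.
# All figures are illustrative assumptions, not measurements.

EDGE_DATA_BYTES = 10e12            # 10 TB sitting at one edge site
APP_BUNDLE_BYTES = 50e6            # 50 MB of application code
UPLINK_BYTES_PER_SEC = 100e6 / 8   # 100 Mb/s uplink

haul_data_hours = EDGE_DATA_BYTES / UPLINK_BYTES_PER_SEC / 3600
ship_code_secs = APP_BUNDLE_BYTES / UPLINK_BYTES_PER_SEC

print(f"haul data to core: {haul_data_hours:,.0f} hours")
print(f"ship code to edge: {ship_code_secs:.0f} seconds")

That asymmetry is the “gravity”: the data stays put, and the computation orbits it.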

The age-old dilemma, whether to go vertical (domain specific) or horizontal (application development or management platform), pops up again. The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.

Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.

The so-called “wheel of computer science” has completed one revolution, just as the socio-economic observation behind it predicted; the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will it be first…?
