At Cisco we’re passionate about networking, and we have a ton of respect for those who are pushing the boundaries in this realm—even when they don’t work for our company.
Case in point: Co-founder and CTO of PLUMgrid, Pere Monclus. He’s actually a former Cisco employee, and his depth of knowledge when it comes to networking, particularly networking as it applies to OpenStack, is formidable. What’s more, he’s got a talent for taking difficult-to-understand concepts and making them easily digestible. In our interview last week he provided a great explanation of how SDN came about and what problems it is trying to solve. He also explained why network virtualization is so complicated compared to server virtualization, why, contrary to popular opinion, OpenStack Neutron is not broken, and why the cloud has forced the rise of global IT infrastructure companies.
Want to take your networking knowledge to the next level? Settle back and listen in. You’re in for a treat.
To see who we’re interviewing next, or to sign up for the OpenStack Podcast, check out the show schedule! Interested in participating? Tweet us at @nextcast and @nikiacosta.
Niki Acosta: Thanks for tuning in and watching today. We have an awesome guest with us. I’m Niki Acosta with Cisco.
Jeff Dickey: I’m Jeff Dickey with Redapt.
Niki Acosta: Pere Monclus, introduce yourself.
Pere Monclus: I’m Pere Monclus here at PLUMgrid.
Niki Acosta: PLUMgrid. We were just prepping before the call. Really cool stuff to talk about today. We’re talking about how there’s this networking bubble within OpenStack. There are people who know all about the basic computing and storage functions, and networking at a basic level, and then there are the network people. You, obviously, are one of these network people. Tell us about your journey into tech and your time at Cisco, and how that led to your role now here at PLUMgrid.
Pere Monclus: Perfect, thank you, Niki. As you’re saying, here at PLUMgrid what we essentially do is merge the knowledge that we carry from our past in networking with trying to understand the needs of the future in terms of private and public clouds: the implications for storage and compute, and how the network needs to adjust.
Just a little bit of the journey, how we got there. I started many, many years ago at Cisco, on a platform called the Catalyst 6500. That was the beginning of the move away from simple networking, in the sense that before, networking was a fairly simple thing: you had switches and routers, and packets would move from a source computer to a destination. Then all of that became much more complex. This was towards the end of the ’90s and the beginning of the 2000s, when networking went from providing connectivity to starting to think about what kinds of problems we had to solve in order to enable a new economy.
This is where e-commerce was starting. There were a lot of security concerns. Businesses were not yet running on top of the Internet; maybe some websites and some portals, but not in the mainstream.
Cisco took the approach of saying, “We need to render the network secure and scalable. It’s not about connectivity anymore. It’s about enabling businesses to run on top of a network.” This was the beginning of essentially the notion of firewalls, load balancers, intrusion prevention, VPNs, anomaly detection, antivirus, and you name it … the things you can do on top of the network [continuous 00:02:25].
On one side, we could argue that the whole thing succeeded, in the sense that nowadays the economy that we have lives and breathes on top of the Internet. Except for some well-known security issues, it kind of works.
The problem, or the downside, is that complexity grew to a level where only these black-magic practitioners who understand how to configure networks can maintain this infrastructure. This is where networking went in one direction, where people would have to have their CCIEs and their 20 years of networking experience in order to troubleshoot and configure networks.
Everybody else would understand networking from what you call the end-to-end principle, which essentially is like: a computer has an IP address, you want to talk to somebody else, you put the name of the service that you want to talk to, www.google.com, and magically the network works.
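To make that end-host view concrete, here is a minimal Python sketch of how an application experiences the network: resolve a name, open a connection, and everything in between stays invisible. The host name and port are just illustrative, not anything from the conversation.

```python
import socket

# The end-to-end view: an application names a destination and a service;
# DNS resolution, routing, switching, and any middleboxes stay invisible.
host, port = "www.google.com", 80

# Resolve the name to one or more addresses (A/AAAA records).
for family, _, _, _, sockaddr in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
    print(f"{host} resolves to {sockaddr[0]}")

# Open a TCP connection; from this side, the network "just works."
with socket.create_connection((host, port), timeout=5) as conn:
    print("connected via", conn.getsockname(), "->", conn.getpeername())
```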
There’s a difference between how end systems, computers, or humans perceive networking and how the networking practitioners perceive it, the ones who maintain all the complexity that has accumulated over 20 years. In my journey through Cisco, I went through some of these stages. I started with switching and routing on this kind of enterprise platform or system.
Then I moved to being the architect of a high-performance firewall, and I got into networking services. That would be firewalls, load balancers, intrusion prevention. Then I transitioned to a data center group. In this group, we started to look at the notion of how the business would work together in a different way.
Now you don’t think about networking just from a connectivity point of view; you are dealing with customers that are running complex applications. When the application doesn’t work, you have to understand why. Are you dropping packets and it’s affecting storage? Is your compute not properly balanced against the networking and storage subsystems that you have?
You start having a little bit more of a system understanding. The world shifts from “It’s about maintaining and guaranteeing networking performance” to “How do the applications do on top of this network?” This is where the whole journey from data centers to cloud started.
Internally at Cisco, I moved from this data center group to a thing that was called Advanced Development. There we started to look at the new trends that were coming. This was around 2005. The notion of cloud was in full swing. The ability to essentially move workloads into public clouds was starting to be fashionable, but still in the early stages.
Everybody started to think, “If the world moves in that direction, how is my infrastructure going to look? How is my networking design going to look? Going back to the discussion from before, what’s going to be my security solution? Am I going to still rely on firewalls and security policies, or am I going to have some new model?”
I spent a lot of time working on this converged infrastructure: how compute, networking, and storage would come together, and what kind of fundamentally new networking designs would be required.
From there, essentially, I started PLUMgrid with the other two co-founders. We started the journey by saying, “Networking cannot be seen in the traditional way, because something is happening to the infrastructure. This siloed way of multiple departments, each with their specific perspective, is not going to work anymore.”
This takes us to OpenStack. It’s not that OpenStack is the cloud management system that invented cloud or that started this journey of virtualization. There are well-known vendors that have other solutions looking after that.
Then, towards the end of the 2000s, there was still this notion that infrastructure companies were working together, in the sense that you could see how the Ciscos and HPs and IBMs and VMwares and EMCs would all try to work together to deliver these complex IT infrastructure systems for enterprises.
That was the status quo for almost 10 years, where everybody would help each other and own its own respective area of expertise, until cloud started to change the dynamics. If you think about it, as cloud computing started to emerge and get traction, what was happening was that everybody started seeing that IT spending would shift from being in-house towards the cloud.
When you have, fundamentally, a market dynamic where your market is shrinking, because the money spent goes somewhere else, then you have to start doing something new. This is why, if you look at the last 6, 7, 8 years in the industry, there’s this collision course where Cisco is not a networking company anymore, EMC is not a storage company anymore, VMware is not a hypervisor company anymore. IBM the same; you name it: Oracle, Microsoft, everybody.
Everybody starts to think, “We have to become global IT infrastructure companies, including all the components: compute, storage, networking.” As you know, there was a company that was dominating the hypervisor market. Suddenly the world needed a common infrastructure that you could rely on, so that people would understand how to configure systems, how to troubleshoot systems, how to organize these cloud projects.
This is where, at the beginning, there were these discussions between CloudStack, OpenStack, Eucalyptus, and different frameworks that would try to do that. Eventually, as the market grew and decided what the winning strategy was, OpenStack started to emerge as a valid alternative.
As you know, the rest is history. We’ve lived through these last two years, where slowly we’ve seen other cloud management systems restructure around OpenStack [inaudible 00:08:35], and OpenStack essentially start to be considered a valid alternative for enterprise IT departments to manage their complete infrastructure, as well as private and public clouds.
Going back to the network, as we were discussing, now you have this open framework with a community behind it, where everybody develops towards achieving a common goal. Different areas are perceived in different ways. Networking is one of these areas, and it has gone through a transition from nova-network to, later on, Neutron.
We can talk about this a little bit later: what’s going on, why networking is still perceived as complicated … working, not working, broken. Those are some of the things that we can cover today.
Jeff Dickey: Yeah, that would be great. If you could, could we do an SDN 101 overview? I just know there’s a big mindset change that needs to happen. I’d love to go over what that is and what it looks like.
Pere Monclus: Yes, SDN you could argue that it’s a marketing term, in the sense-
Niki Acosta: Like cloud?
Pere Monclus: Yes. SDN, if you think … Let’s say there’s networking. Networking has been done in different ways, originally based on hardware boxes, and now maybe you can do it in different ways. As you’re saying, like cloud: before you were buying a server, then you were buying virtualized servers. Now you’re buying virtual machines in the cloud.
The name keeps changing because it reflects some of the new business models, efficiencies, and operational characteristics of one infrastructure versus the next, but the fundamental principles stay. If you think about physical servers, virtual machines, and cloud, you have a complete abstraction, in this case a VM, that moves from physical to virtual to cloud, and goes from on-premises to off-premises.
Fundamentally, the technology principles didn’t change. The business models, who runs it, whether we rely on vendors or build it internally, all these things have changed. Now let’s go back to SDN, the SDN 101 you were asking about. Originally in networking, you would go to a networking vendor and you would buy a box. That box would be a switch or a router or a firewall.
You would connect it, and you would provide a networking infrastructure with certain SLAs or guarantees that it gives to the servers or the applications, in terms of what it’s supposed to do.
It’s supposed to, essentially, reliably send packets from point A to point B, and maybe provide scalability through load balancers, or security through some middleboxes like [paroods 00:11:28]. That was how networking was delivered.
What happened was that the moment we went from physical servers to virtual machines, something changed. Networking had always been configured a priori, in the sense that you would have a Cisco CCIE who would do a network design for you. The requester would tell you what they were trying to accomplish; based on that, you would create the plan, and then go and configure these individual boxes in a way that would accomplish what was expected from the network. That process-
Niki Acosta: Real quick on that. When you say they would configure the network, essentially these were people who understood or tried to gain an understanding of the application needs, and then actually log in to a GUI or something, right, and configure the network that way?
Pere Monclus: A GUI? You’re very advanced.
Well, they would go into a console and type commands directly through a CLI; that’s how they would accomplish it. GUIs were not mainstream from a networking point of view. Mainly, network engineers were using CLI commands, and they would log into every single box to make those changes.
Then, of course, multi-device managers and GUIs kicked in, but networking stayed for a long time with CLI commands, typing against different sets of switches or routers. Now, because of that, because of the human aspect and the operational consequences it had, what was happening is that the network was very static.
The network would not change constantly. You would configure it, and unless you had a valid reason to change it, you would leave it as it is. This was a very fundamental principle. When you have a data center with, let’s say, 1,000 computers, and a computer fails, you say, “Who cares? One of 1,000 computers failed. Maybe my computer with a database fails, and as such, I need a database backup or a disaster recovery solution, and I’m willing to spend some money.”
Compute had this notion of bounded failure domains, in the sense that out of 1,000 servers, you have 1 server that is running your stock trading application and making $5 million a minute … or a year, doesn’t matter. At that point, you can quantify the money that you would lose if that server were down. You could create a disaster recovery plan: “Do I need to buy another server? Do I need to replicate the storage? Do I need to put it in another location?” You would be able to quantify it.
When you go to the network, it’s an interesting element, because if you misconfigure a networking element … networking has a set of dynamic protocols, routing protocols and [inaudible 00:14:10] protocols, so the changes that you put into a box propagate through the network, because the network has to synchronize its state.
If you make a mistake, potentially the whole network could go down. If the network goes down, out of the 1,000 servers, not just 1 server fails; all 1,000 servers fail. The risk assessment that you have to do when you configure a physical network is much more complex than for configuring a single server.
That’s why networking engineers traditionally plan things ahead. They are very scared of changes. They think through changes twice, because when they make them, those changes may have effects over the whole data center, not just on a single machine.
That was all good, because servers don’t move around. You buy a server, you have to ship it to the dock, you have to unpack it, you have to put it in a rack, wire it, and put the screws in. When you configure the network, that model would match. Who cares if the network is slow to configure? Installing a physical server is just as slow, so no problem.
Now virtualization kicks in, and now you have these nice tools that you can create virtual machines on demand. Now suddenly you say, “Before a server would take weeks to provision. Now it takes seconds or minutes. Now can you do changes in your network every single minute?”
That becomes a little bit more complex. This is where people started to say “Oh, networks have issues because my virtual machine cannot move into another location, because now I need networking changes,” or “I need some configurations that will not follow my compute, because now finally my compute is not attached to physical elements any more.”
At this point, networking started to change. A lot of people started to realize that the more complexity you put into the physical network, the more difficult it is to have a virtualized environment where your compute can spin up and down constantly. Now you need to, essentially, configure your firewall rules, your accounting policies, your switching infrastructure, and so on in a way that provides the proper network connectivity to those virtual machines as they come up and down dynamically.
Here is where SDN comes in, if you think of it as a logical evolution of networking. Let’s, for a second, not talk about the new business models around SDN, which can potentially be very disruptive.
It’s this notion that networking has to evolve to satisfy the needs of virtualized data centers and private and public clouds. In the same way that you can provision on Amazon with just a credit card and a few clicks, and in the same way that on Amazon you can create a virtual private cloud from a networking point of view and configure it, enterprises and private and public clouds beyond the ones that are already available should be able to do exactly the same. They should not rely on manual intervention in the network, because it’s too error-prone and too slow.
The moment you start with this requirement, saying the network should be self-provisioned, on demand, elastic, all the things that we’ve been saying about compute for the last maybe 7-8 years, the network should achieve exactly the same capabilities.
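As a concrete illustration of that “credit card and a few clicks” model, here is a minimal sketch of self-provisioning a virtual private cloud programmatically with the AWS boto3 SDK, which is one way to drive the Amazon example Pere mentions; the region and CIDR blocks are illustrative assumptions, not anything prescribed in the conversation.

```python
import boto3

# Self-service networking: the same API call works regardless of which
# physical switches and routers the provider runs underneath.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

# Create an isolated virtual network (VPC) on demand.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a subnet inside it, again with no box-by-box manual configuration.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

print("provisioned", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])
```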
This is what SDN is trying to solve. How? There’s multiple ways to solve it, but the notion is how to modernize networking to fulfill and satisfy the requirements that your applications and your private and public clouds have. That will be the mission statement of an SDN company, in the data center of course.
SDN applies not only to the data center; it applies to other places. But if we stay with the data center, now you say, “SDN, how?” In the sense that, once we define what has to be accomplished, the question is how. Different vendors will take different strategies. Here is what you have: vendors that have huge businesses selling hardware are trying to put programmability and APIs into their hardware in order to fulfill that requirement.
There’s a new set of vendors, like PLUMgrid and others, that have the mindset that networking should move into software, into the perimeter of the network inside the hypervisor, to fulfill the networking needs. Those will again provide programmability and APIs and SDKs in order to fulfill all of that.
The philosophical divide is the following: that if you do it based on physical assets, networking boxes, the question is how portable is your infrastructure? In the sense that, when you can move a VM from your private cloud to a public cloud, or from your data center A to your data center B, the question is are you going to have exactly the same Dell servers? Are you going to have exactly the same Cisco switches and routers? Are you going to have exactly the same EMC storage, in all the data centers?
If the answer is yes, then you have a pretty homogeneous IT infrastructure, which is almost impossible to achieve, because as data centers age out, you’re going to bring new data centers online, and different build-outs are going to use different infrastructure vendors.
As such, achieving a common hardware layer across all the data centers is almost impossible. If you think like that, you start thinking, “Well, networking should be in this software layer that can span across data centers, that unifies and provides a common networking behavior to your clouds.”
If you think about it, this already happened in the compute space, where you can buy, let’s say, a Dell server, an HP server, and an IBM server, but then you have a hypervisor, like Linux KVM or VMware ESX, that normalizes the hardware. You don’t have to worry that the hardware is different, because once you create a VM, the VM is fully abstracted and isolated from the details of the specific hardware.
SDN companies based on a software overlay are trying to achieve the same [inaudible 00:20:37]: create this software abstraction so that all the networking functions you need are essentially provided and abstracted from the implementation you have in the physical network.
This is pretty powerful, because when you have workloads going from your private cloud to, let’s say, Amazon or Google Cloud, things like that, the portability of your application and your networking policies and your security policies and so on cannot depend on any specific hardware asset. This notion of portable clouds has to work across heterogeneous infrastructures.
That’s the level, the space, that these players are trying to address. Technology-wise, there are many debates and nuances in terms of why one approach is better or worse than another. At the end of the day, the question is: how do we get back to a standard system that people understand, and how do we operate and manage these different networking vendors in a consistent way?
This is where, if we go back to your question of what OpenStack is: it’s the notion of saying that regardless of what vendor you have, physical or virtual, SDN, non-SDN, whatever, the way an application or a cloud management system provisions networking should not change based on the vendor you have underneath. This is the abstraction that OpenStack Neutron tries to provide.
Niki Acosta: Consistent APIs against multiple … no matter what’s underneath, it’s pluggable underneath, right? You can take the same API call, and in theory, you should be able to replace, if you like, what’s underneath that abstraction layer, and be able to continue to operate with very few tweaks, if any at all. That’s the promise, right?
Pere Monclus: Exactly.
Niki Acosta: Jeff, you had a question.
Jeff Dickey: Yeah, let’s tie it back to OpenStack. What does this mean for Neutron and how does that all play together?
Pere Monclus: As Niki was saying, the promise is exactly this. Neutron is like, “I’m going to create this API.” We could argue about whether the API is perfect or not perfect; those are [reports 00:23:00], but let’s say the promise is: I’m going to have this API that, regardless of what vendor you put underneath, will just work in a consistent way. That’s the promise.
Some vendors are probably closer to that promise than others. The reason is: what kind of thinking do you have underneath? In the sense that, imagine you are a, let’s say, switching vendor, and you go to Neutron and look at the API. There you have definitions for a switch, a router, a firewall, a load balancer, a security policy, DNS, DHCP. I just mentioned 5 or 6 elements that, in the traditional world, would come from 5 different vendors. Now everybody says, “Neutron is broken.” But Neutron is not broken. Neutron is just an abstraction.
If you think about it, it’s just an API talking to a MySQL database with a schema, and a set of plugins underneath that are supposed to fulfill the requests of Neutron. So what’s broken? It’s the implementations underneath. You could look at it that way.
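To make the “one API, pluggable backends” point concrete, here is a minimal sketch using the openstacksdk Python library, which is one common way to drive the Neutron API; the cloud name and the network/subnet names are illustrative assumptions, and the calls are meant to behave the same whichever vendor plugin sits underneath.

```python
import openstack

# Connect using credentials from clouds.yaml; "mycloud" is an illustrative name.
conn = openstack.connect(cloud="mycloud")

# These Neutron API calls look identical whether the backend is OVS,
# a hardware vendor's plugin, or an SDN overlay such as PLUMgrid's.
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    ip_version=4,
    cidr="192.168.10.0/24",
    name="demo-subnet",
)

print("created", network.id, subnet.cidr)
```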
This is where there’s a distinction between saying “OpenStack networking is broken” and “Neutron is broken.” Let’s start with: OpenStack networking has problems. The reference OpenStack networking implementation is based on this notion of creating network nodes running on standard x86 servers, and making the traffic hairpin through these network nodes for advanced capabilities.
Simple features, like a distributed network switch or a distributed network router, may go from server to server through OVS, for example. That notion that some features go from server to server and some features need to hairpin through network nodes has implications in terms of performance, high availability, scalability, and all these things.
Having said that, everything is abstracted by Neutron. You could replace this network node model, which creates bottlenecks and issues, with a fully distributed network infrastructure; for example, [inaudible 00:25:11] do that, but that’s [inaudible 00:25:13] of doing it, and eliminate some of the bottlenecks and issues that the first implementation I mentioned had, while still providing the same network capabilities. That would be the way to approach it.
Going back to the problem I was mentioning before, the reason why it’s so difficult … This was the area Niki was describing at the beginning, where network practitioners come with these discussions and concerns that somehow nobody else understands. It’s because of this understanding of who delivers the routing box, who delivers the DNS, who delivers the switch or the firewall; in the current industry, this complexity, with everybody having to play together, cannot easily be established.
You translate this complexity to the Neutron layer, and who is the entity that is going to test interoperability between all these vendors against the Neutron API? That’s a tough problem. Essentially, if I’m vendor A, I’m going to make sure that the plug-in for my router works. If I’m vendor B, I’m going to make sure that the plug-in for my load balancer works. If I’m vendor C, I’m going to make sure that my DNS implementation works. If I’m a customer, I say, “I don’t care about all of you guys. I just want to have a cloud running.”
This is a fundamental change, because when networking vendors would go to networking departments, the customer and the vendor would align in knowledge. If I’m selling you a router and you are a networking expert, you know that I’m not going to deliver a DNS solution. You know that you have to work with somebody else, and you know that you have to configure something between the multiple network elements to tie them together.
Now you go to a cloud guy. A cloud guy will say, “What’s wrong with networking people? I can stand up a cloud in a couple of hours, and I can bring OpenStack running on tens or hundreds of servers in a few hours. Now you’re telling me that I have to bring in 5 or 6 networking vendors, set up a professional services project, and pay money so that in 6 months they’ll deliver me a networking solution that will run OpenStack? That’s insane!”
What just happened is that the buyer of networking technologies has shifted very far, from being a networking expert to being a cloud expert. A lot of people in the networking industry still approach them as if they were supposed to be networking experts and understand all of this artificial complexity that has been created over 20 years, which doesn’t make a whole lot of sense when you move into cloud thinking.
This is where, as you asked before about the SDN 101 … what are we trying to achieve from an SDN point of view? There’s a fundamentally different delivery vehicle than in the discussion we were having before [inaudible 00:28:05], which is hard work. Who is going to make the effort to try to understand what the cloud guy thinks? Who is going to, essentially, bridge this knowledge gap between the old way of doing networking, with multiple vendors, professional services, and months of integration, and this new form of networking that maybe doesn’t have to have the same multiple vendors, but has to be delivered in almost hours: a few hours to install a networking layer that provides all the things Neutron requires, with high availability, scalability, proper support, and things like that? This is the challenge.
Niki Acosta: How big… just look at AWS. They have 3 networking products, right? On the opposite end of the spectrum… I was at the internal Cisco OpenStack Summit last week, and there’s just sort of this… to me, it felt like this gross complication of networking. Why is that happening? Are people just trying to cling on to their jobs? Are vendors trying to make sure that they’re still relevant by forcing this issue of having a grossly complicated networking model that requires a networking specialist?
Pere Monclus: That’s a tricky question. You have a little bit of both, in the sense that when you create Amazon, you say, “I have no legacy.” I’m starting a new project. I’m starting a new way of understanding computer infrastructure. I’m going to create it.
Enough people come to me that I just need to understand and support and maintain a single way of doing networking. That’s the Amazon model. It’s a fresh start, a new start. I don’t have to think in terms of legacy. People will not put traditional, COBOL-based enterprise banking applications into Amazon. It doesn’t make sense.
Niki Acosta: Right.
Pere Monclus: You create new types of applications, web-type applications, to create the next wave of Zyngas and Facebooks and whatever [inaudible 00:30:23]. In a sense, they don’t have to think in terms of legacy. They can have a fresh start.
Contrast that with when you are a company like Cisco. As you were saying, Cisco has 20, 25 years of customers behind it, with multi-million dollar revenue on networking boxes and a lot of existing applications relying on Cisco networks. At that point, what do you do? Do you say, “Guys, forget about 20 years of networking, because this is messed up and too complex; let’s do it a new way,” and then lose all your customers in the process? Or do you juggle multiple strategies, where you have to have a product portfolio for almost every type of use case that you can possibly envision?
For a company like PLUMgrid, it’s saying, “Look. We are going to do the right thing for our customers. We are going to take this kind of [inaudible 00:31:14] problems head on.” It’s much easier, because we don’t have legacy thinking and revenue.
For a big company with years of customers behind, that’s a difficult proposition because you have to carry the people with you. You have to evolve what you have into the new way of thinking. That’s why, probably, you are seeing all these discussions and all this complexity.
Niki Acosta: Is PLUMgrid’s target customer the enterprise? If so, how are you bridging that gap?
Pere Monclus: Our standard customer is what we call public and private clouds, which is definitely a major component in the enterprise, but we also have some service providers that are trying to create clouds. We focus on public and private clouds. It’s a different definition from the old world, where you classified service providers and enterprises, and they would have completely different use cases. Now, when you go into clouds … between a private cloud in an enterprise and a public cloud at a service provider, there are significant differences in terms of security, scale, whatever. But the application that we have running, the notion of an OpenStack cloud, is not that different in one place or the other.
In most cases, the question they have on top is: how am I going to provide an infrastructure-as-a-service cloud, or a platform-as-a-service cloud, or a technology-efficient cloud? The users are going to be very different, but the technology underneath is the same. We focus on this: private and public clouds. And we have customers in both enterprise and service provider clouds.
Did I miss your question? I think maybe you were asking something else.
Niki Acosta: No, it definitely helps. You’re right. Coming from Rackspace and seeing how Rackspace approached the network, then Metacloud, and now Cisco, it just seems like networking is that one piece that can be handled so differently, especially when, like you mentioned, people have an opportunity to start with cloud and don’t have this legacy stuff to deal with.
On the opposite spectrum, you have people that have tens of thousands, maybe hundreds of thousands of applications that are set up in this old way of networking. It’s kind of hard to wipe that slate clean. It’s impossible, maybe.
Pere Monclus: Yes. This is what we are doing here at PLUMgrid. The idea was exactly that: saying, “If we had the ability to start a cloud and design a proper networking platform for cloud, what would it look like?” We started with this premise. Of course, we may have to provide ways to deliver networking to legacy applications and enterprise applications, because that’s a significant market. At the same time, it’s a new way of understanding networking, with the ability to think from scratch. This is how we started the company.
This is why we created this networking solution, completely in software, that sits [inaudible 00:34:21] hypervisors and is controlled by our [inaudible 00:34:26] OpenStack and things like that. The idea was, if I can deliver this on top of any infrastructure, I’m going to ensure things like cloud proficiency and cloud portability, and of course, hardware independence. All the complexity of networking gets absorbed by this overlay, so that it can sit on top of a clean and simple physical networking design, [inaudible 00:34:52] and so on. Basically, the focus was, “How do we create a networking infrastructure that is simple to operate, easy, cheap…” Cheap in terms of operational cost and the complexities associated with it, and rich in functionality.
Here we had an interesting decision to make. Traditionally, networking is approached in terms of individual components, with vendors trying to be the best at each component. The fight for who has the best [inaudible 00:35:27] in the market, the fight for who has the best [loot 00:35:29] box in the market, essentially creates this race where complexity kicks in: one vendor introduces 20 features, another introduces hundreds or thousands of features, and the complexity of managing and interconnecting them keeps going up. When we started thinking about it, we said there are two types of market. One is cloud applications, where, by definition, you don’t want complexity, because complexity implies operational cost, failures, things like that.
You may be good enough with a set of basic infrastructure. An example would be something like the Amazon cloud. You have the VPC concept, with addressing, VMs, [inaudible 00:36:12], Elastic IPs, things like that. Essentially, they have the minimal networking set that you need to create a proper cloud.
Now, is it as advanced or as good as Citrix or F5 or somebody else? Definitely not. But do you need those things for a cloud audience? Maybe the answer is no.
Now, a completely different scenario: we go to an enterprise private cloud. They say, “We have been using this specific load balancing vendor for years, and we have been using all these fancy features that really help my business and my applications.” That’s the other side of the spectrum, where the simple definition of an elastic load balancer is not good enough; I need a proper load balancer, an enterprise-type load balancer.
We think we live in between these two extremes. If you take the notion of how to make the cloud infrastructure easy, the basic fundamental need is what we call a networking suite: the ability to install something into a cloud and have the functions up and running in no time, fulfilling all the networking needs that an Amazon-style cloud would have. This is what we started out saying.
We see two approaches in how people think. One is the cloud guy, where it just has to work, and they need this Neutron-based functionality, nothing else. Then you have the other type of clouds, the enterprise clouds, that say, “It’s not enough for it just to work. I need this specific vendor for load balancing and this specific vendor for firewalling and so on.”
The difference between one and the other is how you want to buy this infrastructure. If you have to resell it, like a cloud provider, you want something that is easy to operate and that gives enough value to create these self-provisioned services. If you’re an enterprise, on the other hand, you have applications that tie to your enterprise, that were born and grew up within a set of infrastructure elements that have to continue to be there to maintain those applications. Both models have to be provided. When we started defining what our infrastructure would be, we defined these two models. One is how to work with our networking partners for enterprise clouds. The other is how to deliver to cloud providers a networking [inaudible 00:38:31] that works and scales properly and so on. This is the component that we built. This is what we call a networking suite, and that’s what we deliver to our customers. You can extend it with an ecosystem of networking partners on one side, or it just works completely on its own, without much complexity, on the other. That’s the business model we have.
Niki Acosta: So are you guys contributing a ton of code back, in terms of what you’re writing that isn’t a plug-in or an extension?
Pere Monclus: Yes. We’re contributing to … in this case, it would be OpenStack. In OpenStack, it’s mainly in the Neutron area. As they say, this would be the notion of plug-ins and drivers, and also some of the functionality that we have. The other aspect that we felt we had to contribute, and that was good for the community and for us internally, was in the Linux kernel. If you think about what you have today in the Linux kernel, it’s a set of functionality for how to forward packets, how to enforce security policies, and so on, that has some capabilities but also some limitations.
From there, over the years a new community was created, Open vSwitch, that tries to address some of these issues, and it’s been driven by some of these companies. On top of that community, what we were feeling is that something else was needed, which was the notion of an extensible data plane, a programmable data plane. That’s what we call IO Visor, in marketing terminology.
Inside the Linux kernel there is a component called eBPF, the extended Berkeley Packet Filter, that gives us this ability to enhance and extend networking for cloud use cases. This is what we contributed back to the Linux kernel. So we have these two contributions: one is contributions to the Linux kernel, the other is contributions to OpenStack. In OpenStack we didn’t start from the beginning; we started to contribute maybe a year or two in, and we are more on the Neutron side of things. Now we are starting to look at automation frameworks and deployment and things like that, and how we can extend our contributions beyond just networking.
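For readers who want to see what a “programmable data plane” looks like in practice, here is a minimal eBPF sketch using the bcc toolkit, which compiles a small C program and attaches it inside the kernel; the interface name and the simple packet counter are illustrative assumptions, not PLUMgrid’s actual data plane.

```python
from bcc import BPF  # bcc: BPF Compiler Collection (requires root)
import ctypes, time

# A tiny eBPF/XDP program: count packets arriving on one interface.
# This is only a toy illustration of an extensible data plane;
# "eth0" is an assumed interface name.
prog = r"""
#include <uapi/linux/bpf.h>

BPF_ARRAY(pkt_count, u64, 1);

int count_packets(struct xdp_md *ctx) {
    int key = 0;
    u64 *value = pkt_count.lookup(&key);
    if (value)
        __sync_fetch_and_add(value, 1);
    return XDP_PASS;   // let every packet continue up the stack
}
"""

b = BPF(text=prog)
fn = b.load_func("count_packets", BPF.XDP)
b.attach_xdp("eth0", fn, 0)

try:
    while True:
        time.sleep(1)
        print("packets seen:", b["pkt_count"][ctypes.c_int(0)].value)
finally:
    b.remove_xdp("eth0", 0)
```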
Jeff Dickey: Where are people reaching out to you guys? What problems are they facing, or at what scale? Are you getting in the POC area, or are you getting called for issues as folks are scaling? Where in that process are you solving problems for folks?
Pere Monclus: Six months ago, we had maybe a few production environments, but mainly we were doing POCs. Suddenly, in the last six months, everything has changed. We went from having a few POCs a month to a lot. What we are seeing is that the pace of OpenStack adoption is changing. Originally, two years ago, it was mainly marketing. A lot of people were talking about it. The Summits were great. We all were doing POCs, but POCs were not translating into production environments. Last year, we started seeing POCs turn into production environments, and at the end of 2014 things started to change, and change drastically. Even the size of the POCs is changing. Originally, POCs were 5 or 10 servers, and then POCs were moving to 1,900 servers. In production environments, people were talking about hundreds and hundreds of servers. We are finding customers with 2,000 servers, 4,000 servers. Definitely, there is a change in the market that I would attribute to this notion that people are starting to accept that OpenStack is very viable, and the pace is accelerating quite dramatically.
In terms of what spaces, we are seeing a lot of retail and, of course, some platform-as-a-service clouds. That’s the kind of use case where private and public clouds are going. Infrastructure as a service a little bit less, in the sense that the market is extremely competitive and people are realizing that business models to compete in the IaaS space are hard; there are those few well-established players. We are definitely seeing a big shift in behavior from our customers in the last six months.
Niki Acosta: That’s amazing. I can imagine the value that you can bring to a conversation with your history of being at Cisco and now embracing this cloud world. I assume, based on the number of people that go to Cisco Live, that there are a lot of people who aren’t quite where you are now.
Let’s say I’m a CCIE. Let’s say I’ve been doing this 5, 10 years. What advice would you have for me to help me wrap my head around this new model? What skills do I need to be successful in this new model?
Pere Monclus: The first thing is to understand that the network doesn’t end at the top of the rack. The network continues inside the hypervisor. Some people swear by this [inaudible 00:43:58] in the enterprise, but somehow the networking engineers never managed to control or configure or manage the networking layer inside the hypervisor. That was seen as a sysadmin responsibility, not a networking responsibility.
When you go to clouds, everything blends together. The network doesn’t end at the top of the rack; it continues. You have all the Linux networking stack, all the UX networking stack, all the new SDN layers at the edge. As the network continues, you cannot deliver an end-to-end solution without mastering all these elements. When you move from a set of [inaudible 00:44:30] to the end hosts, the way of thinking, the way of configuring, changes significantly. You can’t use traditional network management tools anymore. You start to use the DevOps tools, like Puppet, Chef, Ansible: different ways of generating configuration files that are not just CLIs. It’s this notion that the end hosts are not out of scope for the network engineers anymore.
As soon as you make this mental transition, everything else follows: the fact that now you have to write scripts, that you have to automate everything you do, that you have to be very strict about keeping copies of your scripts and your configuration files, because now you have to exchange those with customers. The automation aspects. Everything else will follow as soon as you embrace this idea of the network continuing into the host. From a mindset point of view, that’s the most important thing. Everything else will follow.
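As a small illustration of the “network configuration as versioned, automated artifacts” mindset Pere describes, here is a hedged Python sketch that renders per-host interface settings from data and writes them to files you could keep in version control and push with whatever DevOps tool you use; the host names, template, and file paths are all illustrative assumptions.

```python
from pathlib import Path

# Desired state lives in data (and in version control), not in ad-hoc CLI sessions.
hosts = {
    "compute-01": {"mgmt_ip": "10.10.0.11/24", "vlan": 100},
    "compute-02": {"mgmt_ip": "10.10.0.12/24", "vlan": 100},
}

TEMPLATE = """# generated file -- do not hand-edit
auto eth1
iface eth1 inet static
    address {mgmt_ip}
    # tenant traffic rides VLAN {vlan} on the host, not a hand-configured switch port
"""

out_dir = Path("rendered-configs")
out_dir.mkdir(exist_ok=True)

for name, vars_ in hosts.items():
    # One rendered file per host; a tool like Ansible, Puppet, or Chef
    # (mentioned above) would then push these to the end hosts.
    (out_dir / f"{name}-eth1.cfg").write_text(TEMPLATE.format(**vars_))
    print("rendered config for", name)
```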
Niki Acosta: That’s great advice. Definitely great advice. Are you saying that the networking folks, just in general, are they moving into a better overall relationship with application developers, in terms of specifics around scaling and having to process big data, things like that? Are they working better together now, or is there still a divide?
Pere Monclus: More than working better or not working together, I think it’s that internal organizations are shifting. This separation of networking architects, sysadmins, and storage experts is fading away into this new trend that we are seeing a lot, where enterprises have this notion of, “I need a cloud engineer, or a VP of cloud, or whatever.”
As soon as you see the cloud title, that, for us, is an indication that the enterprise in front of us has made the mental transition, that they cannot think [inaudible 00:46:34] anymore.
As soon as you have that, the dynamics of the organization change significantly. The alignment is not to deliver the best networking, or the best storage, or the best compute. The alignment is behind the private cloud infrastructure: we have to deliver the best private cloud. At this point, the incentives change and the need to work together kicks in.
Having said that, if you have a strong networking guy, his expertise will cross a little bit of the bridge towards the other side, and the traditional guy will come a little bit over to the networking side. There will be more middle ground, a common shared goal that will make the projects go a little bit smoother. Still, the networking guy has to move more towards understanding the implications on the (something) side.
Niki Acosta: So we have Devops, and now we have this concept of Netops?
Pere Monclus: Yes.
Jeff Dickey: Trademark.
Niki Acosta: I coined it! I said it first!
Pere Monclus: Eventually, things have to come together. There are nice concepts, like some of these trends bringing Linux into the networking boxes, that try to bridge things the other way. If I could manage my networking boxes as if they were Linux machines, then maybe the DevOps expertise can cross into the network, or vice versa …
You see it everywhere. The blending is starting to happen. It’s just a matter of time. What are you going to call this: NetOps, DevOps, converged infrastructure, cloud engineering? It doesn’t matter. That will come with time. What we are seeing is that the direction is pretty well established, and it’s happening.
Niki Acosta: By the way, that would be an excellent talk for the OpenStack Summit. I’m just saying. What we just discussed, I think there’s a lot of people who are just like, “Man.” Compute going from virtualization to cloud was a shift, but it wasn’t a massive shift, I feel like. At the end of the day, you still have a server, right? It might be virtual now or it might be cloud-based, but it still behaves the same way. I feel like, in networking, that’s not really the case. It’s such a drastic shift.
I have a good question that you may or may not want to answer. It’s one that I’ve struggled with, because for a long time, I traveled around the world telling people that hardware doesn’t matter anymore and software is the new hardware. Now I work at Cisco. To Cisco’s credit, to my new company’s credit, there are definitely situations where having cloud-optimized hardware and SLAs that can guarantee the whole setup is going to work is great. Does the hardware matter anymore?
Pere Monclus: The physical hardware always matters, in the sense that software doesn’t [inaudible 00:49:28] in isolation. Software always runs on top of hardware. It’s like asking, “Does it matter what my server is running on [inaudible 00:49:37]?” Of course there’s a difference. Same system software, but the hardware makes a difference.
From a networking point of view, you have to start thinking the same way. There is a component where the physical network matters: “What’s the bandwidth? What’s the SLA? What are the [inaudible 00:49:54] capabilities? What’s the QoS model that you have?”
For everything related to SLAs, QoS, and connectivity, the hardware still matters. What we are trying to say is that the information model and the advanced capabilities are becoming more and more distributed through the perimeter of the network, so that complexity is being taken away and a simpler design is emerging in the new clouds.
But still, you need … there’s no amount of software that will take a 1 Gbps link and transform it into a 100 Gbps link. There are clearly other elements in the mix.
Jeff Dickey: We’ve got PLUMgrid here, and what I see with the networking folks is they run through it and it’s very much, “This is very complicated. This is a lot of work. I don’t understand this. The complexity is too much.” And then there’s this big, “Oh wow, this is awesome.” It clicks, but it takes a little bit to click. When is the tipping point going to happen, kind of like we had with ESX 2.0 up until it went mainstream? When is that tipping point going to come, where that aha moment happens for the networking folks on network virtualization?
Pere Monclus: [inaudible 00:51:26] As you were saying, when is network virtualization going to go mainstream, right? Arguably it’s starting to, in the sense that at least in the data centers and the clouds that we are seeing, most of them are not there yet, but I would say almost all of the ones we are seeing are planning an overlay and an underlay. This notion of network virtualization is kind of becoming a best practice from a private and public cloud point of view; all the customers that we talk to are going this way, regardless of [inaudible 00:51:59]. The overlay-and-underlay philosophy is starting to happen.
Now, the complexity is still different, in the sense that, as you were saying, when you are virtualizing a server, everything is self-contained; you are virtualizing that one server. When you are virtualizing the network, you are connecting multiple servers in a behavior that feels and looks like a single network or [inaudible 00:52:25] technology. So the level of expertise for these SDN solutions is a little bit higher than the level of expertise for managing a single ESX server or a single KVM server.
Then you add the complexity of which [inaudible 00:52:45] cloud management system, and [inaudible 00:52:48] the release cycle, and you have trunk OpenStack, and then you have the distros, like Canonical, Mirantis, Piston, and so on. So what’s happening is that there’s a lot of complexity in how to deploy that network infrastructure in a seamless way, and usually people cannot just try an SDN layer. They first have to try the cloud and then bring an SDN layer on top.
That’s one of the differences from the VM transition in compute, where you just had to [supplement 00:53:23] a hypervisor and that’s it, you’re up and running. Now you have to have a hypervisor, then the cloud management system, and then you can bring in the SDN layer. I think it’s a little bit more tied to the full infrastructure. But to answer your question, from a private and public cloud point of view, I think the overlay and underlay, in all the new projects we are seeing, has been [inaudible 00:53:44].
Jeff Dickey: That’s great.
Pere Monclus: From a marketing point of view, of course, we have marketing competition between VMware and Cisco around overlays vs. underlays. That’s helping a lot from our point of view, in the sense that it is educating the customers. I think it’s been established that it’s a combination of an SDN layer in the overlay and [inaudible] in the underlay, and how both work together to deliver the solution.
Jeff Dickey: Okay. [crosstalk 00:54:11] Go ahead, Niki.
Niki Acosta: So with OpenStack being at the heart of what you do, what’s on your OpenStack wish list? What do you need from the community, or what does the community need to do to help drive this forward? Which, by the way, would also be a great talk for the OpenStack Summit. Just saying.
Pere Monclus: To me, if there was a common deployment mechanism across distributions, that would be awesome. Maybe [inaudible], maybe something else, but… right now, for a company like us, having to support multiple different installation methodologies is complex. Then the other thing that OpenStack is not addressing that well is this notion of, “How do I go from a single data center or a single OpenStack region or availability zone, call it whatever you want, to multiple?” “How do I federate Keystone across locations, and then work out the networking definitions that I need across locations?”
That is completely uncharted territory, or mainly uncharted territory, and people should remember that when you create clouds, you don’t create just one, because if it fails, you have serious problems, right? So you have to have at least multiple locations, and that’s where you start bringing in all these complex discussions about application movement, image management, identity management, and of course, network connectivity between these areas. That part of OpenStack is not advanced enough yet.
Niki Acosta: We have a couple minutes left. Jeff, do you have any more questions that you want to ask before we ask the question that we always ask when we end these things?
Jeff Dickey: Sure. How do people get started? What’s kind of the next step with PLUMgrid? How could they kick the tires or get started or learn? What’s the best next step?
Pere Monclus: So the best thing [inaudible 00:56:01] is to go to PLUMgrid.com and hit contact or info. We will reach out to you, and there are different models we have. We’re starting a program where you can deploy PLUMgrid in a hosted environment, actually [inaudible 00:56:13] and things like that that we are preparing. We have software that we can install on premises for POCs. Then of course, we have demos and discussions we can do, so just pitch it to us and we’ll work on it.
Jeff Dickey: Awesome.
Niki Acosta: I bet you’re in a lot of sales conversations, huh?
Pere Monclus: Sorry?
Niki Acosta: I bet you’re in a lot of sales conversations.
Pere Monclus: Yeah, kind of.
Niki Acosta: Yeah? Life at a small company, huh?
Pere Monclus: You do what has to be done, right?
Niki Acosta: Totally. Do what it takes, right?
Pere Monclus: Yes.
Niki Acosta: So [inaudible 00:56:53]. I’m not sure if you looked at the awesome list of people we’ve got on the show in just the short time that we’ve been doing this, but if there was a podcast that you wanted to see around cloud and OpenStack, who would be your people that you would want us to interview?
Pere Monclus: About cloud and OpenStack? It would be interesting to see the third wave of OpenStack players, like what you are doing with Cisco Intercloud, or HP Helion.
What’s their perception of how the OpenStack ecosystem is going? It would be interesting to see how the big companies approach it. Are they going to approach OpenStack from an open point of view? Are they going to [inaudible 00:57:39] the market? What do they have in mind? Those types of podcasts, to try to understand what’s going on inside the traditional enterprise vendor companies. How do they think OpenStack is going to work? What’s on their mind?
Niki Acosta: How are they going to monetize from open source, right?
Pere Monclus: Yes, exactly.
Niki Acosta: How are they going to learn how to play nice?
Pere Monclus: Yes.
Niki Acosta: Yeah, that’s always a good question. So someone from Oracle, you would say?
Pere Monclus: Oracle, HP, Cisco. Yeah?
Niki Acosta: Yeah. Okay. Cool. Any specific people in mind? You know some people over there?
Pere Monclus: Sure. I mean, you probably know everybody at Cisco by now.
Niki Acosta: Yeah. Got that one.
Pere Monclus: At HP you have Marten Mickos and his team, and then at Oracle [inaudible 00:58:27], some of the people driving the OpenStack open source projects are at Oracle, so those would be the types of people to have on, [no problem 00:58:32].
Niki Acosta: Awesome.
Jeff Dickey: Yeah, great job.
Niki Acosta: Man, you are crazy brilliant and really smart. You broke this down in a way that made some light bulbs go off in my head today. I really appreciate you taking the time to just break it down. It’s always good to take a step backwards to move seven steps forward. You did that in a very short amount of time today. Thanks for being an awesome guest.
Jeff Dickey: Yeah. Yeah, thank you. Thank you for everything you’ve done for Internet 2.0 and for SDN. You’re just a pioneer, so thank you.
Niki Acosta: You’re a pioneer and you look like a badass, I’m going to have to say. The hair, the beard, the cool accent… Awesome. You’re like the George Clooney of cloud over there.
Pere Monclus: Thank you, Niki. Thank you, Jeff. It was nice to talk to you.
Niki Acosta: Awesome.
Jeff Dickey: All right. Talk to you soon.
Niki Acosta: Take care, everyone. Who do we have next week, Jeff?
Jeff Dickey: That’s a great question. I always have to look that up.
Niki Acosta: Is it Jessica?
Jeff Dickey: No, it’s Shamail from EMC.
Niki Acosta: Oh, EMC, that’s right.
Jeff Dickey: He’s over there at EMC, and we’re going to hear from him. He’s the cloud architect in the office of the global CTO. It’s going to be a great show.
Niki Acosta: Ex-rapper too, right?
Jeff Dickey: Yeah. The records are everywhere. Thank you everyone for the downloads. It’s kind of surprising to look at the numbers. You guys are awesome. Keep up the great feedback. Send us any questions or any feedback or tweet us @OpenStackPod.
Niki Acosta: We are doing transcripts of every podcast that we do. They have been posted on the metacloud.com website. Here in the next few months, we’re going to try and transition those to the Open @ Cisco blog, if anyone is interested in catching some of the actual words that are being said here. It’s always fun to go back and read what the transcribers put in those chats.
Jeff Dickey: Yep. I’ll tweet that out. We’ll have this up. We’ll get that tweeted out.
Niki Acosta: Give us a week and come back and read what this guy said, because Pere is very brilliant.
Jeff Dickey: Yep. I can’t wait to listen to this again.
Niki Acosta: Yay! Well, goodbye!
Pere Monclus: Thank you guys again.
Jeff Dickey: Talk to you later. Bye.