Sometimes we spend so much time involved in the inner workings of something (“inside the sausage factory”) that it’s valuable to occasionally come up for air and get a fresh perspective. I had one of those moments this week during a conversation with a Sr. Engineer at one of our customers. After a long whiteboard session about networking within their Data Center, he asked me if it was useful (YES!) and then he said he wasn’t sure how that had anything to do with Cloud Computing. The rest of the conversation went something like this:
ME: That was great because it highlighted many design considerations for building massively scalable data center networks.
HIM: Glad it was helpful, but please don’t tell me this is Cloud Computing. This is just the evolution of Data Centers because now VMs and Applications can be mobile.
ME: OK, what do you think Cloud Computing is?
HIM: Cloud Computing is the stuff on the Internet, you know, like Amazon AWS or Google. All the on-demand, self-service, *aaS stuff that marketing people talk about.
ME: OK, fair enough. Does your company (Enterprise -- Financial Services) use any Cloud Computing?
Unless you have been living under a rock for the past few years – and perhaps even then – you have undoubtedly heard someone touting the merits of virtualization and cloud computing. Chief among the advantages are reduced costs and the capability to do more with fewer resources.
Although the terms are often used interchangeably, cloud and virtualization aren’t the same. Here’s a brief discussion of each.
This past weekend, the social media channels were ablaze with discussions about the Cloud Computing events of last week. Many of the discussions centered on the idea that customers of public cloud services had over-estimated what would actually be delivered, especially in the areas of High Availability and Disaster Recovery. Some people argued that it was the providers’ fault, while others argued that the customers should have known better and designed their applications accordingly.
Initial deployment costs often came up during discussions, especially as it related to start-ups and growing businesses that required (or preferred) the pay-as-you-go consumption model to one that was more CapEx focused. Sometime during the discussion, I received a tweet that said “Not every startup can afford to buy redundant vBlocks”.
I’m not sure if this was directed at me, Cisco or VCE. Either way, it was probably directed at the most visible integrated offering from technology companies that have chosen to supply best-of-breed infrastructure for public (and private) cloud builders, not “be the cloud” for companies.
My initial reaction was, “huh, when did the discussion move back to small companies buying their own infrastructure?”. This isn’t the late 1990s, when every start-up in Silicon Valley bought huge quantities of servers, storage and networks, which required them to raise large amounts of capital to fund the infrastructure before they could even begin growing their business. We understand that VCs give start-ups less these days because they don’t want to pay for the business risk + infrastructure assets. Too many start-ups fail or don’t have a viable business model, so it makes sense to move the infrastructure costs to the commodity public clouds.
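The CapEx vs. pay-as-you-go trade-off is easy to sketch with a simple break-even calculation. The figures below are purely illustrative assumptions (not real vBlock or cloud pricing): an upfront infrastructure purchase with low ongoing costs versus a higher flat monthly cloud bill.

```python
# Hypothetical cost comparison: upfront CapEx vs. pay-as-you-go cloud.
# All dollar figures are illustrative assumptions, not real pricing.

def capex_cost(months, upfront=500_000, monthly_ops=5_000):
    """Total spend after `months` when buying infrastructure upfront."""
    return upfront + monthly_ops * months

def payg_cost(months, monthly_rate=15_000):
    """Total spend after `months` on a pay-as-you-go cloud model."""
    return monthly_rate * months

# Find the break-even month, where owning stops being more expensive.
month = 1
while payg_cost(month) < capex_cost(month):
    month += 1
print(f"Pay-as-you-go stays cheaper until month {month}")
```

With these made-up numbers the cloud model wins for roughly the first four years, which is the start-up’s whole planning horizon; the point isn’t the exact break-even month but that the capital is freed up during the period when survival is least certain.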
If you were paying attention to the Intertubes or Twitterverse today, you probably heard about an issue at one of the well-known Cloud Computing providers. Needless to say, fingers were being pointed left and right, and all the “experts” came out to explain their 20/20 hindsight into causes (still unknown) and avoidance.
I purposefully avoided any comments about these events because sometimes in life systems go down. If you’ve been in the technology industry long enough, and actually worked in support or operations, you know that even the best designs can have issues. And I’m not ashamed to say that I’ve been the cause of some (temporary) issues with large customer systems. When it happens, it’s not a good day for anyone involved -- the operators, their customers, the fat-finger typer or wrong-cable puller, etc.
What dawned on me throughout the day were all the people labeling this #FAIL. This is the Internet’s new meme anytime something goes differently than planned.
Today is Earth Day, and that has me thinking green. As I discussed this afternoon at GigaOm’s Green: Net conference, the world is changing around us in many ways, including becoming more urbanized. Over the next five years, some 500 million people will be added to the world’s cities. As we think about how to manage the energy and environmental challenges that will accompany these trends, what role will the network play in helping us be more efficient and more sustainable? And what benefits will that bring to utilities and to consumers, to governments and communities at large?
Cities consume 75 percent of the world’s energy and are responsible for 80 percent of greenhouse gas emissions. Utilities and the energy infrastructure are at the heart of city planning. If we are to better manage this impact, we must transform our electrical grid into a modern and more sustainable platform for the 21st century. Technology is the only way we can achieve balanced and sustainable growth.
Lessons in how to make our electric grids more reliable, more secure and more scalable can be gleaned from our experience in vastly revamping the telecommunications infrastructure in the ‘90s. Here too we had somewhat proprietary, siloed networks that didn’t talk to one another. Here too we had an industry that was highly regulated and needed to cautiously implement change. And here too we had an emerging field of companies champing at the bit to capitalize on making the new telecom infrastructure everything it could be.
The lessons we learned from this transition are important: architect the infrastructure on open, standards-based technology; build in security from the beginning; and establish public-private partnerships to align policy with infrastructure investment needs.
This transformation will rely on new technologies but also on leveraging existing technologies such as routing and switching for a utility environment. Data centers, cloud computing and security have a role to play in managing and protecting the vast influx of usage data so that we can make better educated decisions about energy consumption. Energy management of businesses and homes will leverage existing networks to extend their reach and impact. And given that the entire grid is the world’s largest infrastructure, integrating energy infrastructure with information technology will require a disciplined, architectural approach that we can only begin to foresee.
This transition has great implications, especially in our largest cities, where the need is most apparent. Examples are cropping up around the world of this vision in action. The Envision Charlotte initiative has set a goal of reducing energy use by up to 20 percent within its perimeter through greater education of citizens and use of information technology. BC Hydro in Vancouver just announced that it will roll out 1.8 million smart meters based on Itron’s OpenWay technology, powered by Cisco, to enable a more efficient grid and foster the use of renewable energy. And the city of Incheon, Korea is building in sustainability from the ground up.
These are but a few of the examples of how cities are changing, based on their energy and environmental goals. As I look around today, I see a smarter, more connected world emerging with a more intelligent and efficient energy infrastructure, supporting millions of customers, and billions of watts, with one network at the core.