As discussed in my previous post, application developers and data analysts are demanding fast access to ever larger data sets so they can not only reduce or even eliminate sampling errors in their queries (query the entire raw data set!) but also begin to ask new questions that were either not conceivable or not practical with traditional software and infrastructure. Hadoop emerged in this data arms race as a favored alternative to the RDBMS and SAN/NAS storage model. In this second half of the post, I’ll discuss how Hadoop was specifically designed to address these limitations.
Hadoop’s origins derive from two seminal Google white papers from 2003-4: the first describing the Google File System (GFS) for persistent, massively scalable, reliable storage, and the second the MapReduce framework for distributed data processing. Google used both to ingest and crunch the vast amounts of web data needed to provide timely and relevant search results. These papers laid the groundwork for Apache Hadoop’s implementation of MapReduce running on top of the Hadoop Distributed File System (HDFS). Hadoop gained an early, dedicated following from companies like Yahoo!, Facebook, and Twitter, and has since found its way into enterprises of all types due to its unconventional approach to data and distributed computing. Hadoop tackles the problems discussed in Part 1 in the following ways:
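To make the MapReduce idea concrete, here is a minimal single-process sketch of the programming model from those papers: a map function emits key/value pairs, the framework shuffles them by key, and a reduce function aggregates each group. This is an illustration of the model only, written in plain Python, not Hadoop’s actual Java API.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every input document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group intermediate values by key, as the framework does
    between the map and reduce phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # "the" appears twice, every other word once
```

In a real cluster, the map and reduce calls run in parallel across many machines, with HDFS holding the input and output data; the division into these three phases is what lets the framework distribute the work.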
Tags: Big Data, Cisco, data center, Hadoop, NoSQL
Data Center Deconstructed reader Eric Chou writes: Good to see the knowledge sharing, Doug. I read your book on building a Data Center a few years back, and it was informative on the physical infrastructure piece. I think it would also be informative if you could share some experiences or creative ways to increase efficiency when there are macro environment limitations. I mean, outside of a select few companies (Google, Amazon, Facebook), most companies are not able to build a Data Center from the ground up, buy the cheapest land near a lake, or negotiate a jaw-dropping electricity rate with the local government. What can we do when we need to house half a floor of servers in an 80-year-old peering exchange that was designed assuming 2 kVA per rack?
That’s a great question. As I often tell other Data Center managers, we can make any upgrades to our server environments we want, as long as there’s no downtime or cost. I’m joking with that comment – mostly – but it is a common scenario. Fortunately, there are several things that can be done in a legacy Data Center to improve its efficiency and reduce the likelihood of downtime without spending much money or disrupting the environment.
Here, then, are eight simple rules for improving a Data Center.
Tags: 8 simple rules, Cisco, coc-data-center, color coding, data center, datacenterdeconstructed, energy efficiency, idle hardware, labeling, operational improvements, virtualization
“Terrific! Fantastic! Cisco has delivered yet more innovation in data center switching and unified fabric leadership. All these new features and capabilities!” … BUT … (and you may be asking this question) … “How can we exploit these new features quickly? I’m just too busy!”
Let’s face it -- these new data center capabilities are not much use sitting on your loading dock or drawing board. You’ll know this better than me -- it’s one thing to have new features available; it’s another challenge to exploit them by designing them into your architecture -- that is where the real skill comes in. But you’re busy, undoubtedly overloaded, and wondering how you can get it all done.
So how can we help you translate these excellent new features into your production data center? How can you get access to our Cisco experts who have (each!) architected literally dozens of unified fabric designs? Perhaps you don’t need or want a full multi-week design exercise -- what other options are available to you? What can Cisco do that is small, quick and provides you with the key expertise to guide your direction on unified fabric adoption and evolution?
Let me continue the theme of my previous blogs and show how Cisco Services can help you exploit these latest market-leading innovations. I’m going to focus on how Design Reviews with our data center experts, through our Cisco Data Center Optimization Service (discussed previously here), could really take the pressure off you and your team and deliver real business benefits at the same time.
Cisco Data Center Optimization Service
Tags: cisco_services, data center, optimization, Unified Fabric
On Thursday, I had the pleasure of attending a kind of roadshow of Cisco switching technical experts: From Campus to Data Center: Cisco Switching Deep Dive
This is a day-long seminar inviting customers and potential customers to learn about and discuss the technical specifics, capabilities, and applications of Cisco’s entire switching portfolio. The seminars are being held all over the US from the end of October through the beginning of December. Here’s a little video I put together from the day, along with some notes on interesting things I saw. [Sorry for the shaky video!]
Tags: catalyst, data center, nexus, switching
With this week’s announcement, Cisco continues its innovation and leadership by bringing unmatched architectural flexibility and revolutionary scale to meet the diverse requirements of massively scalable data centers, big data environments, cloud-based architectures, and bare-metal deployments – with one evolutionary network: Unified Fabric.
These next-generation solutions lay to rest the myth of “the good enough network”. When a modest five-year TCO is calculated (including CAPEX plus the cost of maintenance, labor, bandwidth, and energy consumption), the Cisco advantage is clear. And when the value of Cisco’s unique Intelligent Automation capabilities is added, like implementing the newly launched Network Operations Automation Service, Cisco has a very compelling economic argument indeed.
To drive the point home, the real economics of networking reveal that for many organizations approximately 70% of network TCO is incurred after the initial equipment purchase. So why is this important?
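The arithmetic behind that 70% figure is easy to see with a hypothetical example; the dollar amounts below are invented purely for illustration and are not Cisco figures.

```python
# Hypothetical five-year network TCO breakdown.
# All figures are assumptions chosen to illustrate the ~70% point.
capex = 300_000          # initial equipment purchase (assumed)
opex_per_year = 140_000  # maintenance, labor, bandwidth, energy (assumed)
years = 5

tco = capex + opex_per_year * years
post_purchase_share = (opex_per_year * years) / tco

print(f"Five-year TCO: ${tco:,}")
print(f"Share incurred after the initial purchase: {post_purchase_share:.0%}")
```

With these assumed numbers, five years of operating costs come to $700,000 against a $1,000,000 total, i.e. 70% of TCO lands after the purchase order is signed -- which is why lifecycle costs, not list price, drive the real economics.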
Tags: data center, intelligent automation, Network Operations Automation Service, tco, Unified Fabric