A number of forces are changing how we work, live, and innovate: pervasive technologies, distributed ways of working, “space rather than place” as a work ethos, new methods and modes of work, access to shared services, open versus closed innovation, a new generation of workers, environmental concerns, and macro socioeconomic shifts.
Given a choice, people will demand freedom to work, live, and innovate in ways that meet their individual lifestyles, unfettered by place. Meanwhile, pressures to reduce costs and seek new approaches to innovation are causing many private and public organizations to rethink how work gets done.
Read More »
Tags: Big Data, Cisco, cloud services, device proliferation, future of work, IBSG, infrastructure, network, S+CC, security, smart applications, Smart+Connected Communities, urban services, urban sustainability, work-life
So what makes Cloud, Virtualization, and Big Data so interesting? It’s a familiar question for every data center manager and network engineer who designs and manages data center networking infrastructure. Like most IT departments, yours has been challenged to control capital outlays and operational expenditures; but now that you have done everything possible to cut costs, how can you continue to operate and innovate on a reduced budget? To me, it is all about efficiently utilizing your data center resources while still evolving and scaling your data center architecture to meet users’ expectations of accessing consumer and business applications from any place, on any device, at any time.
Cisco continues its innovation leadership with the Nexus portfolio, bringing unmatched architectural flexibility and revolutionary scale with enhanced virtual security. But how do you leverage these dramatic improvements in technology to address your business needs? How can you drive greater IT capability and optimize a data center switching platform such as Nexus while delivering a comprehensive range of world-class, innovative data, communications, and entertainment services to your users?
Read More »
Tags: Big Data, desktop virtualization, nexus
Big Data’s move into the enterprise has generated a lot of buzz: why big data, what are its components, and how do you integrate it? The “why” was covered in a two-part blog (Part 1 | Part 2) by Sean McKeown last week. To help answer the remaining questions, I presented Hadoop Network and Architecture Considerations last week at the sold-out Hadoop World event in New York. The goal was to examine the considerations involved in integrating Hadoop into enterprise architectures by demystifying what happens on the network and identifying the key network characteristics that affect Hadoop clusters.
The presentation includes results from an in-depth testing effort to examine what Hadoop means to the network. We went through many rounds of testing spanning several months (special thanks to Cloudera for their guidance).
Read More »
Tags: Big Data, Cisco, Cloudera, data center, Hadoop
There’s been some activity inside Cisco around big data, particularly with regard to Hadoop running on Cisco’s Nexus switches and UCS servers. A little of that work is starting to surface here and there, so I thought it would be a good time to write a short post to aggregate it.
If you’re interested in what else Cisco is up to in the exploding world of big data, check out the new page we put up to pull it all together: cisco.com/go/bigdata.
UPDATE: You can catch Jacob Rapp speaking with the folks from Wikibon live at 1:15 p.m. on Wednesday, November 9, on siliconANGLE.tv
Tags: Big Data, data center, networking, nexus, UCS
As discussed in my previous post, application developers and data analysts are demanding fast access to ever larger data sets so they can not only reduce or even eliminate sampling errors in their queries (by querying the entire raw data set!) but also begin to ask new questions that were either inconceivable or impractical with traditional software and infrastructure. Hadoop emerged in this data arms race as a favored alternative to the RDBMS and SAN/NAS storage model. In this second half of the post, I’ll discuss how Hadoop was specifically designed to address these limitations.
Hadoop’s origins derive from two seminal Google white papers from 2003 and 2004: the first describing the Google File System (GFS) for persistent, massively scalable, reliable storage, and the second describing the MapReduce framework for distributed data processing. Google used both to ingest and crunch the vast amounts of web data needed to provide timely and relevant search results. These papers laid the groundwork for Apache Hadoop’s implementation of MapReduce running on top of the Hadoop Distributed File System (HDFS). Hadoop gained an early, dedicated following from companies like Yahoo!, Facebook, and Twitter, and has since found its way into enterprises of all types due to its unconventional approach to data and distributed computing. Hadoop tackles the problems discussed in Part 1 in the following ways:
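As a rough illustration of the MapReduce programming model those papers describe, here is a minimal, single-machine sketch of the classic word-count job in Python. This is not Hadoop code; the function names (`map_phase`, `shuffle`, `reduce_phase`) are hypothetical stand-ins for what the framework does across many nodes, with the shuffle step handled automatically between the map and reduce phases.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit an intermediate (word, 1) pair for each word in the input.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group intermediate values by key, as the framework
    # would when routing mapper output to reducers.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the values for each key (here, sum the counts).
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"])  # the word "the" appears twice
```

In a real Hadoop cluster, the mappers and reducers run in parallel on the nodes where the HDFS blocks actually reside, which is what lets the model scale to data sets far larger than any single machine could hold.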
Read More »
Tags: Big Data, Cisco, data center, Hadoop, NoSQL