This week I had the privilege of speaking at Cisco Live 2013 about the coming explosion in connectivity among people, processes, data, and things, which Cisco calls the Internet of Everything (IoE).
This massive technological and societal shift promises to transform and accelerate our lives in profound ways as the number of connected objects soars from 10 billion today to 50 billion (and rising) by 2020.
Yet even before I left for Orlando or gave my first Cisco Live presentation, I saw ample evidence that IoE is not just a vision of the future. Increasingly, it is the Internet of today—and evolving rapidly all around us.
IoE represents the orchestration of a bevy of emerging technologies, including Big Data analytics, video, mobility, cloud, and machine-to-machine (M2M) communications. And it will ultimately infuse almost everything—roads, jet-engine parts, shoes, refrigerators, soil, supermarket shelves, you name it—with cheap, tiny sensors that will generate terabytes of data to be sifted for key insights.
My previous blogs have turned into an “in a world” series introducing readers to the versatility of the Cisco Unified Computing System. We are no strangers to the fact that data collection and data records are exploding. The Internet of Things (IoT) promises to add a lot more data to our treasure trove: as more objects are embedded with sensors and gain the ability to communicate, even more data will be collected and stored. Here at Cisco, we see the Internet of Everything (IoE), which goes beyond IoT by adding people, processes, and information to the mix. Cisco defines IoE as bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries. Check out http://blogs.cisco.com/ioe/how-the-internet-of-everything-will-change-the-world-for-the-better-infographic/
Clearly the Internet of Everything (IoE) will affect the data center in many ways. In this video, Cisco VP Satinder Sethi gives us a perspective on some of the challenges and on how Cisco is partnering with other IT companies to solve them.
Organizations can transform, mine, and analyze the data they collect to create new business models, improve business processes, and reduce costs and risks. Even the recent NSA controversy over tracking phone records shows how such data can be used to improve physical security.
One of the hottest topics in the data center lately is big data and the actual dollar value that businesses are deriving from making sense of tons of unstructured data. Virtually every field is turning to gathering big data, with mobile sensor networks, cameras everywhere, and information archives. New techniques are being developed that can mine vast stores of data to inform decision making in ways that were previously unimagined. The fact that we can derive more knowledge by recognizing correlations can inform and enrich numerous aspects of everyday life.
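The kind of correlation mining described above can be sketched in a few lines of Python. The sensor names, readings, and the 0.7 threshold below are invented purely for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly readings from two store sensors
foot_traffic   = [120, 95, 140, 180, 210, 160, 130]
shelf_restocks = [3, 2, 4, 6, 7, 5, 4]

r = pearson(foot_traffic, shelf_restocks)
print(f"Pearson r = {r:.3f}")
if r > 0.7:
    print("Strong correlation: restocking tracks foot traffic.")
```

At big data scale the same idea runs across terabytes of sensor readings in a cluster rather than two short lists, but the underlying statistic is identical.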
Cisco is partnering with leading software providers to offer a comprehensive infrastructure and management solution, based on the Cisco Unified Computing System (UCS), to support our customers’ big data initiatives. By taking advantage of UCS’s fabric-based infrastructure, Cisco brings significant advantages to big data workloads.
There are many advantages to hosting big data applications on Cisco UCS infrastructure. With UCS, Cisco offers a balance of performance, management, and scale that sets it apart from other industry solutions. Although we’ll be discussing the benefits in more detail at Cisco Live next week, here is a sneak peek of what you can expect:
Reason #1 to deploy Cisco UCS for your big data analytics: Form factor independence and administrative parity.
Cisco UCS provides a single point of management for the overall infrastructure—whether it’s blade architecture on the enterprise application side or rack architecture on the big data side, including troubleshooting, monitoring, and alerting capabilities. Customers can proactively monitor the system and keep operational costs down.
In other words, Cisco UCS Rack Servers can be managed the same way as UCS Blade servers with full workload mobility across both blades and racks. This simplifies the management construct and eliminates the need for additional management silos in the data center. This form factor independence is made possible by Cisco Unified Fabric with single wire management and Cisco Unified Management that includes UCS Manager with Service Profiles.
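The stateless-server idea behind Service Profiles can be sketched in plain Python. To be clear, this is not the UCS Manager API; the class names and identity values are hypothetical and exist only to illustrate how identities live in the profile rather than in the hardware:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    """Server identity abstracted from the hardware (illustrative only)."""
    name: str
    mac: str          # NIC identity lives in the profile...
    wwpn: str         # ...as does the HBA identity
    boot_order: tuple = ("san", "lan")

@dataclass
class Server:
    serial: str
    form_factor: str                          # "blade" or "rack"
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, server: Server) -> Server:
    """Bind a profile to any server, regardless of form factor."""
    server.profile = profile
    return server

web = ServiceProfile("web-01", "00:25:B5:00:00:01", "20:00:00:25:B5:00:00:01")
blade = associate(web, Server("SN-0001", "blade"))
# Workload mobility: the same identity moves unchanged to a rack server.
rack = associate(web, Server("SN-0002", "rack"))
print(rack.profile.mac)
```

Because the identity travels with the profile, a workload can move from a blade to a rack server (or back) without touching switch or storage configuration, which is the point of form factor independence.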
Recent results clearly reinforce the growing understanding that Cisco has unleashed a more highly evolved and effective solution into the computing ecosystem. While the principles outlined by Charles Darwin in On the Origin of Species can stir controversy, I find them to be an accurate model for technology evolution and quite useful for describing how we’ve arrived at this latest watershed in the x86 server market.
Our first observation would be the extremely rapid rate of customer adoption for Cisco’s Unified Computing System (UCS). Darwin would tell us that there must be significant advantages in “fitness to purpose” inherent to UCS that have driven this velocity. This is certainly true. Looking back at where we’ve been and how we’re positioned to go forward, here are the key factors I see at play that create these advantages for UCS adopters:
Primitive incumbents in the server industry attempted converged infrastructure by choosing to combine compute and storage first. Cisco chose to converge compute and fabric first. This is a critical threshold event because it turns out that most optimizations for virtualization and cloud are fabric-oriented. With our Virtual Interface Cards we made server NICs and HBAs part of the fabric, not part of the server, a significant mutation in computing design. Further, Cisco abstracted every single identity and configuration element for servers, network access and storage into a programmable software model -- inventing fabric computing with stateless servers. Simple. Flexible. Resilient. Advantage: UCS
On June 20th, Cisco and MapR will join with Forrester Research Big Data analyst Mike Gualtieri to discuss “productionizing” Hadoop. But what does it mean?
Mike has developed a list of 7 architectural best practices that will help your enterprise quickly and easily develop or move your Hadoop environment into standard data center processes. Following his guidelines, you can get your Hadoop environment up and running in no time, heading off the headaches and pitfalls that are unique to Big Data environments.
Joining Mike will be MapR CMO Jack Norris, discussing MapR’s best practices and how they line up with the Big 7 from Forrester.
Finally, Cisco IT will showcase a MapR production environment and how they have streamlined complex Big Data workloads, automatically moving data into their Hadoop environment and running analytics out of it.
Keeping the Hadoop production environment up and running smoothly is the name of the game here. In the face of resource constraints, Cisco IT has standardized on Cisco Tidal Enterprise Scheduler, whose seamless integrations with MapR, Hive, and Sqoop give your enterprise the ability to “productionize” complex workloads from any data source.
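The dependency chain a scheduler enforces, pulling data in before analytics run over it, can be sketched in plain Python. The job names, data, and steps below are hypothetical stand-ins; a real Tidal/Sqoop/Hive integration adds calendars, retries, and cluster-scale data movement:

```python
# Minimal sketch of a dependency-ordered batch workflow, in the spirit of
# a Sqoop import followed by a Hive query. All names and data are invented.
def ingest(state):
    """Stand-in for a Sqoop-style pull from a relational source."""
    state["raw_rows"] = [("2013-06-20", 42), ("2013-06-21", 57)]
    return state

def analyze(state):
    """Stand-in for a Hive-style aggregation over the ingested rows."""
    state["total"] = sum(value for _, value in state["raw_rows"])
    return state

# Each job runs only after its predecessor succeeds.
PIPELINE = [("import", ingest), ("analytics", analyze)]

def run(pipeline):
    state = {}
    for name, step in pipeline:
        state = step(state)
        print(f"job {name}: ok")
    return state

result = run(PIPELINE)
print("total:", result["total"])
```

A scheduler earns its keep by doing exactly this ordering (plus failure handling and restarts) across hundreds of jobs and data sources, which is what keeping a Hadoop environment "production" means in practice.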