Last month, we proudly announced Connected Analytics for the Internet of Everything (IoE), a set of easy-to-deploy software packages that bring analytics to data regardless of its location. This is part of our continued commitment to delivering on our vision for fog computing, also called edge computing, a model that does not require moving data back to a centralized location for processing. If you’ve been reading my blog, you’ve seen me write about this as ‘Analytics 3.0’: the ability to do analytics in a widely distributed manner, at the edge of the network and on streaming data. This capability is unique to Cisco and critical for deriving real-time insights in the IoE era.
In the traditional computing model, data is aggregated, moved, and stored in a central repository, such as a data lake or enterprise data warehouse, before it can be analyzed for insight. In the IoE, data is massive, messy, and everywhere – spanning centralized repositories across multiple clouds and data warehouses. Increasingly, data is also being created in massive volume in a very distributed way…from sensors on offshore oil rigs, ships at sea, airplanes in flight, and machines on factory floors. In this new world, the traditional method runs into problems: not only is it expensive and time consuming to move all of this data to a central place, but critical data can also lose its real-time value in the process. In fact, many companies have stopped moving all of their data into a central repository and accepted that data will live in multiple places.
Analytics 3.0 offers a more appropriate model, combining traditional centralized data storage and analysis with data management and analytics that happen at the edge of the network…much closer to where the huge volume of new data is being created. Analytics involves complicated statistical models and software, but the concept is simple: using software to look for patterns in data so you can make better decisions. It makes sense, then, to put this software close to where data is created, so you can find those patterns more quickly…and that is the key idea behind Analytics 3.0. Once data is analyzed at the edge, we can make more intelligent decisions about what should be stored, moved, or discarded. This model gets us to the ‘interesting data’ more quickly and avoids the cost of storing and moving the ‘non-interesting data.’
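To make the store/move/discard decision concrete, here is a minimal sketch of an edge filter. It is entirely hypothetical (not Cisco’s implementation): it scores each new sensor reading against a recent window and only forwards the ‘interesting’ readings upstream.

```python
# Hypothetical edge-filtering sketch: score readings locally and
# forward only anomalous ("interesting") ones to the central store.
from statistics import mean, stdev

def classify(readings, new_value, threshold=3.0):
    """Return 'forward' if new_value deviates strongly from the
    recent window of readings, otherwise 'discard'."""
    if len(readings) < 2:
        return "forward"  # not enough history; be conservative
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return "forward" if new_value != mu else "discard"
    z = abs(new_value - mu) / sigma
    return "forward" if z > threshold else "discard"

window = [20.1, 19.8, 20.3, 20.0, 19.9]   # recent temperature readings
print(classify(window, 20.2))  # typical value -> 'discard'
print(classify(window, 35.0))  # anomaly -> 'forward'
```

In a real deployment the scoring model would be far richer, but the shape is the same: cheap local computation decides what is worth the cost of moving and storing.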
Analytics 3.0 is not about replacing big data analytics, cloud analytics, and other centralized analytics. Those elements are all part of Analytics 3.0, but they are not sufficient to handle the volume of massively distributed data created in the IoE, so they must be augmented with the ability to process and analyze data closer to where it is created. By combining centralized data sources with streaming data at the edge, you can uncover new patterns in your data. Those patterns will help you make better decisions about growing your business, optimizing your operations, or better serving your customers…and that is the power of Analytics for the IoE.
Join the Conversation
Follow @MikeFlannagan and @CiscoAnalytics.
Learn More from My Colleagues
Check out the blogs of Mala Anand, Bob Eve and Nicola Villa to learn more.
Tags: analytics, Big Data, cloud, connected analytics, data, Internet of Everything, IoE
In today’s era of ever-increasing connectivity, data is being generated in vast volumes, and it is important to be able to derive insights from it quickly and act on them. Gone are the days when one would move data into a data warehouse and extract insights from it to act on at a later date. Here are four scenarios that illustrate why.
Scenario 1: Cloud and Social
If a discussion around a brand is trending positively or negatively, the organization needs to act immediately; it cannot wait. It might want to capitalize on positive sentiment and amplify it, or take action to remedy a trending negative sentiment. Both Twitter and Facebook provide several real-time analytics capabilities leveraging big data technologies that they themselves pioneered. These analytics run within their cloud environments and provide users real-time insights.
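The idea of acting on a trend as it forms can be sketched in a few lines. This is an illustrative toy, not any platform’s actual API: it watches a stream of sentiment scores (assumed to range from -1.0, very negative, to +1.0, very positive) and raises an alert when the rolling average crosses a threshold.

```python
# Hypothetical sketch of trend detection over a stream of sentiment
# scores. The scores, window size, and threshold are illustrative.
from collections import deque

def trend_alert(scores, window=5, threshold=0.5):
    """Yield an alert whenever the rolling mean crosses +/- threshold."""
    recent = deque(maxlen=window)
    for score in scores:
        recent.append(score)
        if len(recent) == window:
            avg = sum(recent) / window
            if avg >= threshold:
                yield ("positive", avg)   # amplify the conversation
            elif avg <= -threshold:
                yield ("negative", avg)   # trigger remediation

stream = [0.1, 0.2, -0.8, -0.9, -0.7, -0.6, -0.8]
for kind, avg in trend_alert(stream):
    print(kind, round(avg, 2))  # prints two 'negative' alerts
```

Production systems do this at enormous scale, but the core loop is the same: evaluate the trend on arrival, not after a batch load into a warehouse.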
Tags: Big Data, cloud, InterCloud, IoE, IoT
The move to cloud can be daunting. In their blog post, Overcoming the Organizational Challenges of Moving to Cloud, Presidio describes one of the barriers organizations face when transitioning to the cloud. Building a hybrid cloud – one based on private on-premises resources extended through the public cloud – requires many shifts in thinking for IT.
In short, IT is becoming the cloud services broker within its own company. This means IT is less about building out infrastructure than about brokering cloud-based services and applications. To do this, IT needs to provide services tailored to its user base, which in turn means IT needs access to flexible services designed to meet those specific requirements.
Another important shift in thinking for IT is to realize that the cloud is a doorway to more than just virtual servers. It is a portal to new applications and new ways of doing business so you can act upon emerging opportunities quickly. It is an assurance that you can have the performance you need to follow through on these opportunities. And it is a direct connection to ongoing innovation, enabling your organization to seamlessly access leading technology without extensive capital investment.
Cloud providers like Presidio understand this dual role of cloud, to help manage costs today while enabling unrestricted expansion into the future. And, with their Cisco Powered services, Presidio offers a cloud that is also built for reliability, security, and scalability.
Learn more about how Presidio’s Hybrid Cloud and Cisco Powered cloud and managed services can transform your business.
Tags: Cisco Powered, cloud, presidio
Let’s start this blog with a hypothetical scenario. Suppose you’re the CIO, and you’ve committed to your CEO and the Board of Directors that your company will execute an innovative new strategy: delighting your customers directly through their mobile devices and leveraging the cloud to serve them at scale. Development is 90% complete, and you just need to finish final beta and production readiness testing before rolling out to production. Then you learn that your leading competitor will beat you to market with their own application next month.
Is your cloud infrastructure really as agile as you believe? Will the physical infrastructure scale with the load? Will you be able to secure your application and data from threats? Can you rapidly deploy a production application across servers, networks, and storage infrastructure securely?
During Cisco’s Data Center webcast on Jan. 13, we’ll walk through the hypothetical use case above and then see how Day 1 operations for configuration and deployment can be addressed in real life by customers and enabling vendors.
Episode 1: An Impossible Deadline
We’ll look at Day 2 operations and learn how important application visibility across physical and virtual infrastructure is to meeting the most stringent uptime requirements.
Episode 2: The Needle in the Haystack
Finally, we’ll examine the challenges of decommissioning applications securely while maintaining compliance.
Episode 3: Operation Clean Up
Join us during the webcast to hear from ACI customers, who will share their production experiences with ACI and how it impacts their Day 1 and Day 2 operations. You’ll also hear from ACI ecosystem partners, who will share how they collaborate through ACI’s open policy model to simplify application delivery and security and to orchestrate open clouds.
Mark your calendar for January 13th at 9 AM PST / 12 PM EST and see Is your Data Center Ready for the Application Economy? (Register Here!).
The video on demand will be accessible through this same link.
If you are traveling to Cisco Live in Milan, Italy, please come to my session PSODCT-2455, “Simplify Day 0, 1, and 2 Operations in Application Centric Data Centers,” on Jan 29th from 1:15 PM to 2:15 PM to learn how operations like tenant on-boarding, creating application containers and self-service catalogs, and application monitoring and troubleshooting can all be simplified with application-policy-driven automation.
Tags: ACI, applications centric infrastructure, SDN
This is my first blog post in the Data Center and Cloud technology area. I recently joined the OpenStack@Cisco team under Lew Tucker as a Cloud Architect, focusing on advanced OpenStack systems research. As part of this role I performed a gap analysis on the functionality (or lack thereof) of multicast within an OpenStack-based private cloud. Coming from Advanced Services, I have seen multicast as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a cloud environment this requirement is at least as critical as it is for enterprise data centers.
This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on top of Red Hat 7 with OVS and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work laying the foundation for this effort (preliminary tests on Ubuntu and Havana).
I used a virtual traffic generator called TeraVM to generate multicast video traffic and calculate a Mean Opinion Score. The Mean Opinion Score, or MOS, is a calculated value that expresses the quality of video traffic based on latency, jitter, out-of-order packets, and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word ‘opinion’; since then it has evolved into an industry-standardized way of measuring the quality of video and audio in networks. It is therefore a good way to objectively measure the performance of multicast on an OpenStack-based cloud. The MOS value ranges from 1 (very poor) to 5 (excellent); anything above ~4.2 is typically considered acceptable for service-provider-grade video transmission.
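TeraVM derives its MOS from the measured network statistics; its exact algorithm is its own. As an illustration of what such a mapping looks like, here is the R-factor-to-MOS conversion from the ITU-T G.107 E-model (which underlies many voice/video MOS calculations), together with the ~4.2 acceptance check mentioned above:

```python
# Illustrative only: the ITU-T G.107 E-model's R-factor-to-MOS mapping.
# This is not TeraVM's actual computation; it just shows the shape of
# such a mapping and the ~4.2 service-provider acceptance check.

def r_to_mos(r):
    """Map an E-model R-factor (0-100) to a MOS on the 1-5 scale."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def acceptable(mos, floor=4.2):
    """Service-provider-grade video typically needs MOS above ~4.2."""
    return mos > floor

mos = r_to_mos(90)      # a healthy network path
print(round(mos, 2))    # ~4.34
print(acceptable(mos))  # True
```

The impairments the post lists (latency, jitter, out-of-order packets) all subtract from the R-factor before this final conversion, which is why a congested multicast path shows up directly as a lower MOS.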
I performed the multicast testing on a basic controller/compute-node OpenStack environment, with Neutron handling network traffic. In this blog I focus solely on the open-source components of OpenStack; Cisco products (CSR and N1K) will be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X serves as the top-of-rack switch providing physical connectivity between the compute nodes, and the nodes are UCS C-Series servers.
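To see what the guest side of such a test exercises, here is a minimal multicast receiver of the kind a test VM might run (the group address and port are examples of mine, not values from the test setup). Joining the group sends an IGMP membership report, which is exactly the signaling the OVS/VLAN path has to handle correctly:

```python
# Minimal multicast receiver sketch; group/port are illustrative.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # example group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface
# address (0.0.0.0 lets the kernel choose). This triggers the IGMP join.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

sock.settimeout(1.0)
try:
    data, addr = sock.recvfrom(2048)  # wait briefly for a packet
    print("received", len(data), "bytes from", addr)
except socket.timeout:
    print("no multicast traffic within 1s")  # expected with no sender
```

Whether those IGMP joins and the resulting group traffic actually make it across the virtual switch and the VLAN trunk is precisely what the gap analysis in this series examines.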
Tags: Cisco, multicast, OpenStack