
One category of digital tools, those that make thinking visible, can give students the confidence to ask questions when they need help, or to share their opinions and ideas with the rest of the class — leading to more thoughtful discussions. These same tools help educators plan much more meaningful lessons.

In the 1980s coming-of-age movie Ferris Bueller’s Day Off, one of the funniest scenes I remember (perhaps because I was a social studies teacher at the time) is the high school economics class with Ben Stein as the teacher. His famous monotone, repetitive question to the entire class of “Anyone?” yields zero interest in response. He tries to explain the Hawley-Smoot Tariff Bill, “which, anyone? Raised or lowered tariffs? Did it work? Anyone? Anyone know the effects?”

Of course, the comedic effect is that not one student is willing to acknowledge what Stein’s teacher is conveying about the Hawley-Smoot Tariff Bill. He seems to accept the complete lack of student response without any concern or worry. Undeterred by blank, glassy-eyed student stares, he moves seamlessly on with his endless, unenthusiastic nod toward class participation: “Anyone?” One student drools on his desk while sleeping through Stein’s monotone. Stein’s character remains oblivious.

While that scene is an extreme caricature of a classroom, a friend of mine who was teaching physics at an Ivy League school in the 1980s could engage only about 10% of the students in his lecture classes of more than 150 when he asked for their input. Similar to the movie, he used to see his job as conveying his knowledge to his students as he tried to hold their attention. Today, he is still teaching — but in a very different culture supported by digital tools and new processes. It is not an exaggeration to say that all of his students are engaged almost all of the time. Students actively defend their views on the application of physics, write their own application questions, and explain concepts to peers in a new model of assessment.

Throughout all of this energy and excitement, this professor is learning more about how his students think and reason and learn. He has greater insights into their misconceptions and their ability to use their imaginations to extend beyond his expectations. He talks less and listens more. Acquisition of knowledge has improved. Enjoyment for student and educator is up. Use of time, the most precious commodity of learning and teaching, is much more efficient.

What the country needs is a high-tech version of Ferris Bueller’s Day Off to show the power of how technology can transform the culture of learning to be the exact opposite of the boring impact of Stein’s response to “Anyone?” In the original movie, the teacher owned the learning. In the new high-tech version, there has been a shift to students owning their learning. That is the revolution — a change in the culture and control of information in the classroom. The new high-tech version of the movie will not be a comedy. It will be an adventure story of how exciting learning is. The pathetic Ben Stein character will be replaced by a romantic hero who seamlessly ignites students’ passion for learning.

In reality, transforming the culture of the classroom can be complicated, hard work and can take many steps. Pioneering schools and universities are moving to support a vision that students can tap their native creativity and curiosity, and their proclivity for social engagement with peers, to manage more of their own learning. One of the most difficult aspects of creating a culture of high-performing engagement is managing the shift of control from the educator to the learner. Another source of potential resistance is the move to a team-based classroom culture: from individual students working for their own grades to teams of students working to help one another.

Frankly, letting go of control can be very scary for professors and teachers who traditionally have been highly valued for their conductor-like ability to control the flow of knowledge in a classroom where every student receives the same content at the same time. The trick is to strike a balance between empowering students to own more of their own learning and having the teacher direct the flow of learning. In this new balance, the role of the educator is more important than ever. Not all educators agree that technology has made their lives easier. What we need is to give them tools that ease the workload while improving results.

Where to begin? I have selected two powerful digital tools that are very easy to use. Each reveals a different pattern of student thinking. Both are free — and both can support student engagement while giving educators better insight into how their students think.

The first example is Prism, developed by a team of students at the University of Virginia. When you see Prism in action, it becomes immediately clear why it is so effective at inviting students to engage in debate. Very little technical staff development is needed; all you have to know is how to cut and paste text and fill out a form. It is based on a design concept of digitally overlaying an entire class’s interpretation of a text onto three screens. Each screen renders the text in a different color, and each color represents a concept the teacher has chosen for interpretation. It is formative assessment at its very best, leading to a deeper understanding of how a whole class is thinking. The feedback is immediate.

Let me tell you about the first time I watched a class come alive because of the creative use of Prism by an English teacher. The teacher was giving a lesson on Shakespeare. Before she introduced her students to a Prism exercise, she asked the class questions about the play they were reading, such as which parts they thought were hardest to understand, most insightful, or most open to interpretation. Only a few students raised their hands to answer — and it was always the same few students who did.

Then, the teacher had her students break into groups of two. Each pair was asked to read the same section of the play, which had been uploaded to Prism. Within Prism, the students had to reach agreement about which passages to highlight with the three digital colored highlighters, according to the code the teacher had defined: red for the most difficult words, green for passages open to interpretation, and blue for major insights. (The teacher could have chosen any concept to code to a color, such as use of evidence or best use of inference.) As students began to highlight, the pattern of the whole class’s thinking was revealed: the font size of each word in the text changed as a function of how many teams had highlighted that section. It was like watching a faded, blurry map come into clear, sharp focus! When the class finished reading and highlighting, the teacher simply clicked on one digital color at a time to reveal the patterns. It was easy, and the impact on student engagement was immediate.

The whole class could see the pattern of thinking of their peers. Then the teacher asked the class again: “Who would like to explain which passages they thought were the most insightful, and why?” This time, hands shot up everywhere. The difference in the students’ response was like night and day.

At that point, nearly every student was engaged in the lesson, and there was a high degree of enthusiasm. It was fascinating to watch. The bell rang, and the students didn’t want to leave. They were still debating with each other — and these were the same students who, moments earlier, wouldn’t talk or raise their hand.

What had changed in such a short amount of time to create the kind of rich discussion and engaged learning environment that many teachers only dream about?

When I asked the students to explain why they had become so much more engaged, one girl noted that at the beginning of class she was reluctant to raise her hand, because she didn’t know what the other students were thinking. She didn’t feel safe responding, because she might be mocked for saying something stupid — or something really smart. But once the class used Prism, she knew what other students were thinking, and she could see that she wasn’t alone in thinking the way she did — and that made it safe for her to participate in the class discussion. I should mention that all of the students’ highlights are anonymous. It is this anonymity that gives students the confidence to take risks.

The teacher was listening to this debriefing, and she was nodding. She understood the power of a tool like Prism to transform her class into a much more engaging, risk-taking, and intellectually curious environment.

We often think of making students’ thinking visible as a strategy to help teachers: When teachers have more insight into what their students know (or don’t know), they can adjust their lessons to make sure everyone understands the material. But making the thinking visible also helps students. When students can see how their ideas fit in with the rest of the group, they feel more comfortable in sharing those ideas — which leads to better and more open conversations. As one student commented, “For the first time, I realized that I was not the only one who had difficulty understanding one aspect of the reading. That gave me the confidence to ask the teacher for help.”

Teachers also can use Prism as a self-assessment tool. Students can upload their writing to the platform and highlight certain elements the teacher might request, such as inferences, supporting evidence, or places they could use help. The teacher benefits from seeing whether students understand these concepts, and students benefit from reflecting on the quality of their work before they turn it in.

Research clearly supports the value of self-assessment, because it helps students become independent learners. For instance, researcher John Hattie has pored over nearly 1,200 educational studies from around the world to identify the factors that most strongly correlate with student success. Of the 195 independent variables he has identified, self-assessment ranks third on his list in terms of importance—and it’s the single most effective learning strategy that students can use for themselves.

Prism is just one example of a category of tools for making thinking visible that can help teachers, professors, and students understand patterns of understanding that would not be possible in a world limited to paper. In the hands of a thoughtful educator, the patterns revealed by these tools can lead to richer debate and a deeper understanding of concepts. Educators can immediately see where there is a complete absence of highlights, and these tools can inform educators in much finer detail about what to cover next.

Another tool, Verso, is not limited to text; it lets the educator send out any content: a photo, a video, or a text. It helps teachers encourage their students to think more deeply by asking open-ended, thought-provoking questions that students can answer either during class time or on their own. Only the teacher can see who left each comment, and this anonymity allows students to feel comfortable responding freely to the teacher’s questions and to their peers’ responses.

One of the challenges with online discussion boards is eliciting original thinking from all students. If someone is the 10th student adding to the discussion, it’s hard to know how much he or she has been influenced by the first nine commenters. Or, students might be discouraged from giving an authentic response by what they’ve read from the first nine. With Verso, students don’t see each other’s responses until they submit their own — which solves this problem nicely.

Here’s how it works: Teachers create an activity by linking to a video or a document they want students to reflect on, then ask them a question about it — something that will provoke a good discussion. There’s a space in the assignment for teachers to model the kinds of responses they’d like to see from students, to make sure students understand the depth of thought that is required of them. Then, teachers assign the activity to their class.

Students can reply to and build on each other’s ideas, and they can reward others’ responses with “likes.” Teachers also can group students based on their responses and encourage them to probe each other’s ideas further, thereby taking the learning deeper.

In a traditional class discussion, pursuing a line of inquiry with one student means you have to ignore the other students. You can only get to one or two levels of discussion. But with Verso, you can put kids in groups and then have them respond to each other — which can be quite powerful. For example, you can challenge a subgroup of three students to try to convince the other two that their initial response is the most logical.

Verso is a great tool for helping students learn to ask more insightful questions and deal with more difficult material. For instance, you could have history students read a primary source, such as a letter to President Kennedy in which the author expresses deep disappointment in the new president’s position on civil rights. Without telling students anything else about the letter, you could ask them to submit whatever questions they have about it through Verso.

Many students start out by asking basic factual questions, such as: “Who wrote the letter?” But those are closed questions; they don’t lead to any deeper thinking or debate. As the teacher encourages the class to ask hypothetical or divergent questions, you can see an explosion of creativity as the questions become more complex. As a teacher, you can use Verso to teach students to develop a deeper line of inquiry. The anonymity provides the safety for students to take a risk, and the teacher mode allows the teacher to see what each student is thinking. It is the very best of both worlds.

These learning tools represent a small part of what can happen when powerful technologies are put in the hands of skilled teachers and used to transform instruction. However, the flow of critical information these tools generate does not depend on educators alone. The learning tools and technologies used to engage students are also extremely dependent on a robust network. These web- and application-based tools are used by students who bring their own devices onto campus. As more educators realize how easy it is to tap the power of apps that make thinking visible, to save time, and to add value to academic achievement, demand for access points and bandwidth will increase.

Authors

Alan C November

Senior Partner November Learning


Software engineering and developer communities are driving the market for cloud consumption and leading each industry into a new era of software-defined disruption. Elastic, flexible, agile development is no longer in question as the way for businesses to innovate and reduce time to market. Open source software plays a key role in the digital transformation to cloud native, and understanding how your business strategy needs to address this next disruption in software development is crucial to the success of your business.

Cloud native applications are a combination of existing and new software development patterns. The existing patterns are software automation (infrastructure and systems), API integrations, and service-oriented architectures. The new cloud native pattern consists of microservices architecture, containerized services, and distributed management and orchestration. The journey towards cloud native has started, and many organizations are already experimenting with this new pattern. To develop cloud native applications successfully, it is important to prepare for the journey and understand the impact on your infrastructure.

Preparing for the Journey

The first step in a successful business transformation is setting the vision and the goal for the organization. Many organizations struggle with transformation because they start with the technology first.

Technology is exciting and challenging, but lessons learned from the industry are not to start there, but with your business mission, vision, and your people.

At the outset of the transformation, it is critical to get your leadership, partners, and customers on board with your plans, set clear expectations, and gather feedback often. It is important to over-communicate at this initial stage and ensure that buy-in is strong. As you progress on the cloud native journey, you will need to get the support and cover from your leadership.

The next step is to assemble the right team and break down the vision for the journey into phases or steps that are further decomposed into development actions or sprints. It is critical to evaluate team members’ strengths and weaknesses against the organizational goals and to invest upfront in training. Most good operations staff and engineers are interested in furthering their careers with training.

Lastly, evaluate technology choices and plan for technology integration with your existing back office, support, and IT systems including existing processes you have in place as an organization. You will need to work with other organizations across the business to identify skill sets needed and support the training or staff augmentation requirements of the other organizations.

We created a video to help you get into the cloud native mindset here.

Infrastructure Impact

@Cisco has always been focused on being a partner to our customers on their transformations. This has primarily been on the hardware front, focused on technology refreshes and technology advances. It is still very critical for cloud native applications, because your software can only differentiate you so much. If everyone is using the cloud and everyone is using the same services, your service will perform, fail, and be as secure as your competitors’. Perhaps you’re going for “good enough” service offerings, but businesses need differentiation, and you only get differentiation by taking advantage of both the cloud native software patterns and the latest technology advances in the underlying hardware infrastructure.

When evaluating the impact on infrastructure, it is important to start with the end goal of software defined, automated, and integrated in mind. “Software defined” is often an industry buzzword, but it is very important to look at your existing network, compute, virtualization, storage, and security solutions as a set of software abstractions that can be programmatically configured (set) and consumed (get). Software defined infrastructure comprises the infrastructure services (network, compute, storage, and security) that can be configured for business services to be deployed into; alternatively, the business can deploy a set of services leveraging these abstraction layers, often called blueprints.
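To make the set/get idea concrete, here is a minimal sketch in Python of what a software-defined abstraction layer and a blueprint might look like. All class, method, and service names here are hypothetical, chosen only for illustration; a real implementation would sit on top of the infrastructure's own APIs.

```python
# Minimal sketch of a software-defined abstraction layer: infrastructure
# state is configured (set) and consumed (get) programmatically rather
# than through device-by-device manual changes.

class NetworkAbstraction:
    """Holds desired configuration for infrastructure services as plain data."""

    def __init__(self):
        self._config = {}

    def set_config(self, service, **attrs):
        """Programmatically configure (set) a service's attributes."""
        self._config.setdefault(service, {}).update(attrs)

    def get_config(self, service):
        """Programmatically consume (get) a service's configuration."""
        return dict(self._config.get(service, {}))


# A "blueprint" is then just a reusable, named set of such configurations.
def apply_blueprint(layer, blueprint):
    for service, attrs in blueprint.items():
        layer.set_config(service, **attrs)

net = NetworkAbstraction()
apply_blueprint(net, {"web-vlan": {"vlan_id": 100, "qos": "gold"}})
print(net.get_config("web-vlan"))
```

The point of the sketch is the shape of the interface: once every service is reachable through set/get calls, blueprints become data that can be versioned and reapplied.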

Automated infrastructure is the ability to leverage APIs to provision a set of services and configuration primitives: pre-defined sets of configuration are deployed, and the installation is then validated for readiness before the application is deployed. Automation is key for cloud native, as applications need to be able to configure and re-configure in real time based on a number of inputs. These inputs range from failure states to user demand to application performance.
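The provision-then-validate loop can be sketched as follows. This is an illustrative simulation only: the backend is a plain dictionary standing in for real infrastructure APIs, and all function and service names are hypothetical.

```python
# Sketch of the provision-then-validate automation pattern: an API-driven
# step deploys a pre-defined configuration, then a validation step confirms
# readiness before the application is deployed on top of it.

def provision(service, config, backend):
    """Deploy a pre-defined configuration for a service."""
    backend[service] = {"config": config, "state": "running"}

def validate(service, backend):
    """Validate installation and readiness for application deployment."""
    entry = backend.get(service)
    return entry is not None and entry["state"] == "running"

def ensure_ready(service, config, backend):
    """Configure (or re-configure) until the service validates as ready."""
    if not validate(service, backend):
        provision(service, config, backend)
    return validate(service, backend)

infra = {}                                    # stands in for real infrastructure
assert ensure_ready("db-tier", {"cores": 4}, infra)
```

The same `ensure_ready` shape is what lets automation react to the inputs mentioned above: a failure state simply makes validation fail, which triggers re-provisioning.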

Integrated infrastructure considerations are often an afterthought; however, they should be part of your initial planning. Many applications have dependencies that are internal to the IT infrastructure (i.e., behind the firewall). These dependencies can be database-related, IT compliance-related, OSS-related, or BSS-related. Often, multiple dependencies are discovered as part of the application composition exercise. When evaluating your existing infrastructure, it is important to look at the integration points that your application’s dependent services have with the cloud native architecture. Many of these integration points can be abstracted as a set of services and API endpoints.

@Ciscodevnet focuses on the power of this combination and offers learning labs and training on developing with an API-first mentality, the importance of CI/CD and DevOps, and open source investment considerations. Please join the community at developer.cisco.com to begin to learn, code, inspire, and innovate.

Defining Business Services

Once you have prepared your organization for the journey and addressed the infrastructure impact (existing IT, back office, and cloud native technologies), it is time to define business services for the application(s) you are developing. In general, the best way to define business services is to understand the four subsystems of a cloud native architecture:

  • Application Composition
  • Policy and Event Framework
  • Application Delivery
  • Common Control and Ops


Cloud Native Architecture Sub-systems

There are several ways to decompose your application, and in general there is no single right or wrong way. The best guideline from experience is to start by looking at the composition of your application as a set of services. Application composition is as much an art as a science. My recommendation is to treat the application composition as a black box with three components:

Application Composition

  1. Internals of the black box, which consist of the application design leveraging the functions that comprise the application logic.
  2. Northbound interfaces from the black box, which consist of external API interfaces to the customer and to external services (external to your firewall).
  3. Southbound interfaces from the black box, which consist of internal API interfaces to internal services, including OSS and BSS.

In a cloud native design methodology, the application design (the internals of the black box) should be thought of as a set of function calls and dependencies. Each function should be independent of the others and operate independently as much as possible. State and scalability should be completely independent.

Each function needs a defined set of policies and events to enable the scalability and resiliency of the service functionality, as well as an independent set of common control and operational primitives so that each function’s service health and control can be managed independently of the other application components. The cloud native application composition can be codified into a Cloud Native Application Blueprint as shown below:

Cloud Native Application Composition Blueprint
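As a rough illustration of this per-function independence, the sketch below gives each function service its own scaling policy and its own health primitive, so one function can scale or be inspected without touching its peers. All names and thresholds are hypothetical, for illustration only.

```python
# Sketch: each function service carries its own policy (a scaling trigger)
# and its own control primitives (health reporting), independent of the
# other functions that make up the same application.

class FunctionService:
    def __init__(self, name, max_load=10):
        self.name = name
        self.max_load = max_load   # policy: scale out above this load/instance
        self.healthy = True
        self.instances = 1

    def health(self):
        """Control primitive: per-function health, independent of peers."""
        return {"service": self.name, "healthy": self.healthy,
                "instances": self.instances}

    def handle_load(self, load):
        """Policy/event: scale out when load exceeds this function's policy."""
        while load > self.max_load * self.instances:
            self.instances += 1

# Two functions of the same application scale and report independently.
auth, billing = FunctionService("auth"), FunctionService("billing")
auth.handle_load(25)   # auth scales out; billing is untouched
print(auth.health()["instances"], billing.health()["instances"])   # 3 1
```

In a real deployment this role is played by an orchestrator acting on per-service policies, but the principle is the same: no function's scaling or health logic reaches into another's.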

As mentioned above, the most complicated aspect of application composition consists of the external and internal services the application depends on. The business models that leverage the OSS and BSS components need to be evaluated in light of business process and function. Some of these services can be containerized, while others cannot. How to integrate a cloud native application with the OSS and BSS services should not be overlooked.

In addition, external services, especially from cloud providers, are very common. The top concern in cloud native deployments is the latency between the business application services and the external services they consume. This latency often manifests as timeouts, packet loss, and delays that cause user experience issues. To address this aspect of your cloud native design, you need to understand the networking impact of your external interfaces and leverage DNS and SDN controllers to optimize the routing within the application services.
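In application code, these latency and timeout issues are commonly contained with bounded retries and exponential backoff around each external call, so a slow dependency degrades gracefully instead of stalling the user. The sketch below simulates a flaky external service; it is a generic resilience pattern, not specific to any SDN or DNS product.

```python
# Bounded retry with exponential backoff around an external service call.

import time

def call_with_retries(func, retries=3, backoff=0.01):
    """Call an external service, retrying on timeout with backoff."""
    for attempt in range(retries):
        try:
            return func()
        except TimeoutError:
            if attempt == retries - 1:
                raise                      # give up after the last attempt
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Simulated flaky external service: times out twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("external service timed out")
    return "ok"

assert call_with_retries(flaky_service) == "ok"
```

The bound on retries matters: unbounded retries against a slow external service amplify exactly the latency problem described above.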

Deploy Business Services

Application deployment must be kept separate from application composition. Application portability is a key business requirement, and one of the more reliable methods to achieve it is to decouple the application code from the underlying deployment target.

The following guidelines are based on best practices for application portability:

  • Deploying the application into different environments (dev, test, production), each of which can consist of different targets (laptop, server, bare metal, private cloud, or public cloud)
  • Deploying to different locations (data center(s), availability zones, geo-location constraints)
  • Continuous integration and continuous delivery of the application services across environments, locations, and hybrid models, to ensure that the application and its services are continuously refreshed
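One common way to realize this decoupling is to read every environment-specific value from the deployment environment instead of hard-coding it, so the same code runs unchanged on a laptop, in test, or in a public cloud. The variable names below are illustrative, not a standard.

```python
# Environment-decoupled configuration: the code never hard-codes a
# deployment target; the environment supplies endpoints and locations.

import os

def load_config(env=os.environ):
    """Build runtime configuration from the deployment environment."""
    return {
        "db_url": env.get("APP_DB_URL", "sqlite:///local.db"),  # dev default
        "region": env.get("APP_REGION", "local"),
        "log_level": env.get("APP_LOG_LEVEL", "INFO"),
    }

# The same code, pointed at a hypothetical production environment:
prod = load_config({"APP_DB_URL": "postgres://db.prod/app",
                    "APP_REGION": "eu-central-1"})
print(prod["region"])   # eu-central-1
```

Promoting a build from dev to test to production then changes only the environment, never the artifact, which is what makes the CI/CD flow in the last bullet safe.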

Continually improve and optimize performance and scale. Once the basics and some of these underlying issues of the technology are understood, the team can then focus on improvements from a process and technology aspect. Sprints should incorporate user stories to address performance and stability issues. Then deployments can reflect these improvements along with enhancements to the underlying infrastructure, BSS, and OSS aspects.

Join the Community

Join me in Berlin this month at Cisco Live! #ciscoliveemea in the @ciscodevnet zone. I am presenting this topic and lessons learned in DEVNET-1065 Session – register here.

For a full description of DevNet and everything we have going on in the @ciscoliveemea #DevNet zone, check out our activity guide.

Authors

Kenneth Owens

Chief Technical Officer, Cloud Infrastructure Services


Enterprise software licensing is complex, costly, and easy to lose track of, especially in these days of servers on demand via infrastructure as a service (IaaS) and the rapid setup and movement of virtual machines. Many organizations are finding that vendor software audits uncover under-licensing, resulting in (sometimes significant) license fee increases.

For example: did you know that many enterprise-class software applications are licensed per processor core? Not per core that you use, but per processor core in your server, and in some cases per processor core in the cluster, regardless of how many cores you actually use for your app! The video will show you more details on this:

 

Exploiting Core Placement to Reduce Software License Spend
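To see why per-core licensing matters, consider a hypothetical application licensed per processor core on a server whose cores are mostly unused by that app. The price and core counts below are made up purely for illustration.

```python
# Illustrative per-core licensing arithmetic: you pay for every core in
# the server, not just the cores your application actually uses.

price_per_core = 2000              # hypothetical license fee per core
sockets, cores_per_socket = 2, 16  # a typical 2-socket server
cores_used_by_app = 8              # what the app actually needs

total_cores = sockets * cores_per_socket        # 32 licensable cores
license_cost = total_cores * price_per_core     # billed on ALL cores
cost_if_usage_based = cores_used_by_app * price_per_core

print(license_cost)           # 64000
print(cost_if_usage_based)    # 16000
```

In this made-up example the gap is 4x, which is why core placement and rightsizing servers can directly reduce software spend.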

Let’s look at further cost saving ideas that Cisco Services could help you identify.

Do you know how many old servers you have running software apps that could be replaced by a modern Cisco UCS server, reducing your processor core count and freeing up software licenses you could re-purpose for another project or business unit? (You could be a hero if you initiate such a “save to invest” program!)

Are you aware of applications that your team may have installed in the past which are now sitting idle and unused, but for which you are still paying license (and support!) fees?

And finally, have you investigated how many servers (physical and/or virtual) you have that are idle and haven’t been used in anger for, say, 90 days or more? (Industry studies suggest 20–30% of a typical data center estate is idle! I blogged on this a while back.)

How Cisco Services Can Help You Save on Software Costs!

Many IT shops are challenged by enterprise software costs. With the complexity of enterprise “all you can eat” software license agreements, it is often hard, specialist work to ascertain a comprehensive licensing position. This complexity is compounded by virtualization: it is all too easy for an engineer to spin up additional instances of software apps, and all too easy to forget to shut them down when projects finish.

Cisco Services can help you cost rationalise your enterprise software applications.  The savings could be significant!

How do we do this? As part of our Cisco Data Center Cost Rationalization Services, we use advanced software discovery and asset analysis software from our partner iQuate. We not only identify areas of savings, we can also identify issues with software license compliance. We can also help you remediate these issues with Cisco Application Migration Services, helping you adopt the most cost-effective licensing position while maintaining your architectural requirements for resiliency, for example. The video above will help illustrate our approach.

Please, then, do get in touch if any of these savings appeal to you!

PS: For more information, please refer to some of my earlier blogs:

 

Authors

Stephen Speirs

No Longer at Cisco


Rising from a devastating fire two years ago, Spanish food manufacturer Campofrio Food Group rebuilt its La Bureba factory from the ground up to be a state-of-the-art, digital food processing facility. The executive team at Campofrio Food Group knew that this factory could become a true showcase of Spanish food manufacturing processes, quality, and automation as well as a source of pride for its workers and the residents of the Burgos province. As part of its “Factory 4.0” vision, Campofrio Food Group implemented Industrie 4.0 and smart manufacturing pillars of full connectivity and visibility across processes, sensors, production lines, and people.

This Campofrio Food Group video encapsulates the emotional and intergenerational ties that employees and residents feel in the rebuilding of the factory. In fact, King Felipe of Spain visited the rebuilt Nueva Burgos factory in late November 2016 and praised the Campofrio organization for showing true “resiliency in the face of adversity.”

https://youtu.be/tPZ1gJvRfnA

When our team first met with Campofrio Food Group’s IT and operational leadership in Madrid in 2015, I was struck by how committed they were to building a truly connected factory. Javier Alvarez, the CIO of Campofrio Food Group, had a vision of an innovative, converged, networked factory architecture that could be “future-proof” and a showcase example for other company facilities. Partnering with Cisco, Campofrio Food Group leveraged our manufacturing and networking expertise to implement a robust wireless infrastructure based on Cisco Connected Factory throughout its factory of over 99,000 square meters producing over 101,000 tons of food products a year.

More details are available in the press release here:

“We’ve created one of the most advanced food manufacturing plants in Europe. Designing our ‘New Bureba’ facility with connectivity at its core, we are confident to have a Factory 4.0 blueprint that will help us in continuing to demonstrate excellence through innovation, competitive pricing, and rapid time to market.”

-Javier Alvarez, CIO, Campofrio Food Group

From the beginning, the new “La Bureba” factory was going to be innovative, implementing tracking and traceability of all SKUs along the different steps of production (slicing, drying, etc.). The factory went a step further in terms of data management of various recipes, as well as real-time visibility of materials, overall equipment effectiveness (OEE), and labor utilization. Another key concern was worker safety. With five different buildings on the La Bureba campus and the inherent risks of food manufacturing, people tracking was key for safety and security.

How did they accomplish this? They applied our proven validated architectures for Connected Factory, Security, and Wireless, resulting in a robust infrastructure comprising hundreds of WiFi access points and nearly 2,000 connected endpoints. They also used our latest Industrial Ethernet switches to better track production lines and improve redundancy, and leveraged Cisco Prime for monitoring and maintenance.

The robust WiFi coverage throughout the manufacturing floor makes it possible to remotely control robots within a specific, secured operational environment.

Javier will be graciously sharing the Campofrio Food Group Bureba project best practices and critical success factors as part of the IT management track at our Cisco Live Berlin user conference on February 21st. We are honored to have been partners with Campofrio Food Group in this journey of launching their showcase factory. As Javier said, “any disaster can represent an opportunity” and certainly Campofrio Food Group has seized that opportunity to start a new chapter and show a new standard for food manufacturing.

Authors

Douglas Bellin

Global Lead, Industries

Manufacturing and Energy


Welcome to Cisco Live Berlin!  Today, I want to introduce the industry’s first comprehensive edge-to-enterprise analytics solution in partnership with SAS.

In my earlier blog, I discussed the promise of connecting the unconnected. Here I will talk about how to make this a meaningful reality. The Internet of Everything (IoE) will transform every industry and almost every aspect of people’s lives. It brings networking technology to places where it was once unavailable or impractical. The true power of smart, connected devices, and of the data and insights they generate, will create the next era of opportunities.

Cisco estimates 50 billion devices will be connected to the Internet by 2020 and 500 billion devices by 2030. These devices generate data that analytic applications need to collect, aggregate, and analyze to deliver informed, actionable insight. The challenge is to build the right digital infrastructure and enable the right set of applications to harness this data. Traditional computing models send the data to the enterprise data center for analysis. This is impractical in many scenarios given the volume of data being produced and the requirement for real-time analysis and response times, often measured in milliseconds. As a result, a new model for analyzing data at the edge of the network has emerged. This model moves the analysis and response close to the devices generating the data, minimizing latency and reducing the load on the network and the enterprise data center.
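The edge model described above boils down to a simple idea: analyze each reading locally and forward only the events that matter. Here is a minimal, hypothetical Python sketch of that filtering step (the record fields and threshold are illustrative assumptions, not part of any Cisco or SAS API):

```python
# Hypothetical edge filter: make a local, millisecond-scale decision per
# reading and forward only anomalies upstream. Fields and threshold are
# illustrative assumptions.

def edge_filter(readings, threshold=100.0):
    """Return only the readings worth sending to the enterprise tier."""
    forwarded = []
    for reading in readings:
        if reading["value"] > threshold:  # local anomaly check
            forwarded.append(reading)
    return forwarded

readings = [
    {"sensor": "temp-1", "value": 72.0},
    {"sensor": "temp-2", "value": 180.5},  # anomaly
    {"sensor": "temp-3", "value": 68.3},
]

upstream = edge_filter(readings)
# Only the anomalous reading crosses the network to the data center.
```

The payoff is in the data volumes: three readings arrive at the edge, but only one crosses the network, which is what keeps latency and data-center load down at IoT scale.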

Let me give you a few industry vertical use cases:

Smart Energy:  The Smart Grid, regarded by many as the next-generation power grid, uses a two-way flow of electricity and information to create a widely distributed and automated energy delivery network that is safer, more reliable, and more efficient, offering substantial benefits for both utilities and customers. Data collected by the devices is used to analyze and understand energy consumption patterns and adjust the corresponding energy delivery, resulting in greater operating efficiency and progress toward environmental goals.

 

Connected Automobile: The automotive industry is entering a period of profound change. Autonomous vehicles are close to reality, using advanced machine learning powered by IoT data. Vehicle-to-vehicle and vehicle-to-infrastructure communications are being developed that take advantage of unified, secure network connectivity to transmit real-time vehicle telematics, GPS tracking, and geo-fencing data, improving safety, mobility, and efficiency and enabling proactive maintenance that decreases costs and vehicle downtime.

 

Smart Manufacturing: Smart Manufacturing solutions provide intelligent, timely information and collaboration that improves the quality of products and the efficiency with which they’re built. Smart Manufacturing enables continuous improvement of productivity through integration of Six Sigma DMAIC processes (Define, Measure, Analyze, Improve and Control). It also enables more accurate forecasts of product demand, greater visibility into supplier quality level, improved preventative maintenance, better asset management and safer factories.

The Edge-to-Enterprise solution announced today is designed, tested and validated by Cisco and SAS, and has three major tiers: Edge, Transfer and Enterprise, as shown in Figure 1.

  • Edge –  Analytics at the edge means understanding the local context and sending only the important information back to the enterprise. In this case, Cisco 829 Industrial Integrated Services Routers are used. They are designed for deployment in harsh conditions and are running the SAS Event Stream Processing (ESP) client software. The combination enables collecting millions of events per second, filtering the data, analyzing it and detecting patterns of interest in real-time. The output of the analysis defines what alerts to issue, and which data to store and route forward. The Cisco Fog Director Software simplifies the deployment and management of applications and models on the edge routers. Cisco Fog Director is deployed on Cisco UCS C-Series Rack Servers or the Cisco UCS Mini. Advanced data protection and compliance are enforced by Cisco 3000 Series Industrial Security Appliances and Cisco Firepower 4100 Series Firewall appliances.
  • Transfer  – The filtered (or relevant) data is routed from the edge to the enterprise via Apache Kafka: a secure, highly available, distributed message broker. The transfer tier can be physically located at the edge, in the branch office or in the enterprise data center depending on the application use case. Apache Kafka stores the data received from the edge tier and passes it on to the enterprise tier making the system very resilient. Based on the data retention requirements, Cisco UCS C240 Rack Servers or Cisco UCS S3260 Storage Servers are deployed in the transfer tier.
  • Enterprise – Analytics in the enterprise tier enables understanding the global context from multiple edge locations. This tier is powered by the popular Cisco UCS Integrated Infrastructure for Big Data and Analytics platform with Apache Hadoop handling data storage and SAS software stacks including SAS LASR Analytics Server, SAS Visual Analytics and SAS Visual Statistics enabling in-memory processing and visual interfaces for users to create and modify predictive models that meet business requirements. Core to the architecture are Cisco UCS 6300 Series Fabric Interconnects that provide network connectivity, management, and advanced monitoring. The architecture can scale to thousands of servers on demand through the use of Cisco Nexus 9000 Series switches and the Cisco Application Centric Infrastructure (ACI).
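One rough way to picture the three tiers is a pipeline in which the transfer tier decouples producers from consumers, much as Kafka buffers data between edge and enterprise. The sketch below uses Python’s `queue.Queue` as a stand-in for the message broker; it is purely illustrative and does not model Kafka’s durability, partitioning, or distribution:

```python
import queue

# Transfer tier: a broker-like buffer decoupling edge from enterprise.
# queue.Queue is a toy stand-in for Apache Kafka here.
broker = queue.Queue()

# Edge tier: filter locally, publish only events of interest.
def edge_publish(events, threshold=50):
    for event in events:
        if event["value"] >= threshold:
            broker.put(event)

# Enterprise tier: drain the broker and aggregate for global context.
def enterprise_consume():
    total, count = 0, 0
    while not broker.empty():
        event = broker.get()
        total += event["value"]
        count += 1
    return {"events": count, "mean": total / count if count else 0}

edge_publish([{"value": 10}, {"value": 60}, {"value": 80}])
summary = enterprise_consume()
# summary == {"events": 2, "mean": 70.0}
```

The design point the sketch illustrates is the resilience the post attributes to the transfer tier: because the edge publishes into a buffer rather than directly to the enterprise, either side can fall behind or restart without losing the handoff.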

Figure 1: Edge-to-Enterprise Analytics Platform

The joint solution is offered as a Cisco Validated Design, which provides step-by-step design guidelines, comprehensively tested and documented, to help ensure faster, more reliable, and predictable deployments at lower total cost of ownership. A detailed reference design is depicted in Figure 2.

Figure 2: Edge-to-Enterprise Reference Architecture

Please check out the blog from Oliver Schabenberger, EVP and Chief Technology Officer at SAS, for SAS’s perspective on our joint solution.

For More Information
White Paper: Cisco and SAS Edge-to-Enterprise IoT Analytics Platform
Cisco Validated Design: Edge-to-Enterprise IoT Analytics Platform with SAS 
Cisco Validated Design: SAS Visual Analytics with Cisco UCS 
Cisco Big Data Portal 

Cisco Live Berlin Session PSODCT-2020: Scale Big, Scale Fast: Cisco UCS Solutions for Edge-to-Enterprise Analytics. Wednesday, Feb 22, 1:15 p.m. – 2:15 p.m. | Hall 7.1 Breakout Room 714

 

Authors

Raghunath Nambiar

No Longer with Cisco


SD WAN Webinar Heavy Reading and Cisco

Software-defined WAN (SD-WAN) describes a new trend of building overlay SDN networks over existing broadband and MPLS networks to provide many of the features and functions that businesses are used to getting, but with simpler networks and at a lower price point.

Join Cisco and others on Tuesday, February 21, 2017 for a webinar “How to Succeed in the New Age of Software-Defined WANs?” that discusses the opportunities for service providers to deliver SD-WAN services to their customers, including the best path for moving forward.

Cisco’s Chris Lewis and others, including Sterling Perrin, Principal Analyst at Heavy Reading, will address the most critical service provider topics, including:

  • What are the key benefits of SD-WANs for enterprise customers?
  • What are some best practices from operator early adopters of SD-WAN services?
  • How can SDN- and NFV-based architecture improve service velocity and flexibility for both service providers and their customers?

Register to attend

 

Authors

Dan Crawford

Strategy

Data Center, Mobility, & Network Infrastructure


The Global Center for Digital Business Transformation forecast that by 2020, four out of 10 organizations will be displaced or cease to exist due to digital disruption. Digitalization and social trends are reshaping business like never before. New social contracts and the explosion of consumer-driven mobile devices are reframing the workplace. In addition, we have the most diverse, multi-generational workforce in history.

At the same time, though, there are workforce challenges. One of the most significant issues, both from an economic and company culture perspective, is that of employee disengagement. Gallup reports that one disengaged employee costs an organization approximately $3,400 for every $10,000 in annual salary. Actively disengaged employees cost the U.S. economy an estimated $450 to $550 billion annually, due to lost productivity.

In light of the financial and productivity costs of disengagement, how can we create a workforce that is more engaged and productive? This question becomes even more pressing when we realize that 50 percent of jobs across all sectors will require technical acumen and new skills over the next few years.

To keep up with this skills surge, the answer is to provide employees with continuous learning and development via new digital social learning opportunities. Research by Josh Bersin reveals that this type of ongoing learning spurs a quest for new solutions. Organizations with continuous learning cultures are:

  • 92 percent more likely to develop novel products and processes
  • 56 percent more likely to be first to market with products and services
  • 52 percent more productive
  • 30 to 50 percent higher in retention and engagement rates
  • 17 percent more profitable than their peers

New ways to learn

Creating a continuous learning culture calls for traditional top-down hierarchical organizations to change their approach. Learning and development in a corporate-centric learning universe has historically been organized by functions. In this model, L&D, HR, business and compliance pushed learning sanctioned by the company.

But energizing an organization with digital social learning puts learners in the driver’s seat and gives them ownership of their own educational path. Learning becomes digitized, not just digital. Rather than merely transferring current training assets to an online platform, learning content is pulled through curation, facilitation, coaching and experts. In this scenario, networks of agile teams work and learn together. This creates a high degree of empowerment and communication, with real-time access to knowledge and expertise.

A continuous learning culture enables personalized learning on demand, empowering employees to learn what they want, when they want. Social and mobile networking tools and access to the organization’s experts support informal skills building, engagement, and collaboration. An environment that enables secure knowledge sharing also facilitates learning across the enterprise.

This is the kind of best-in-class digital workplace experience that not only creates the skilled workforce organizations need but also creates the productivity that drives business goals and the engagement that keeps employees from seeking greener pastures.

Authors

Ryan Rose

Director of Product Management for Skills & Certifications

Cisco Learning & Certifications


Healthcare organizations around the world are transforming experiences and outcomes through the power of technology. We’re laying the foundation for tomorrow for healthcare providers, payers, and life sciences organizations with our forward-thinking healthcare solutions. 

Watch our latest video to learn how Cisco is leading the digital transformation in healthcare.

https://www.youtube.com/watch?v=gZGJDWwxc1Q

Authors

Sydney Abbott

Project Specialist

Public Sector Marketing


Effective security requires three essential pillars: simplicity of use, open architecture, and automated workflows. The collaboration between RSA NetWitness Packets and Cisco AMP Threat Grid in the RSA Conference 2017 Security Operations Center exemplified the power of a four-year partnership that provides an effective solution for network forensics and malware analysis.

The SOC team placed NetWitness Packets into continuous monitoring mode, where .exe, .dll, and other potentially malicious payloads were carved out of the network stream and underwent static analysis, network intelligence, and community lookup before being sent to Threat Grid for additional static and dynamic malware analysis. Threat Grid provides a ‘Glovebox’ function to safely interact with samples. This was especially useful for clicking through installer pop-ups and warning dialog boxes and for responding to checks for user interaction.

Because Threat Grid has no instrumentation or hooks in the virtual environment, there is nothing to indicate to the samples that they are running in a sandbox; in fact, checking for a sandbox is itself an additional behavior upon which to alert.

The power of behavior indicators helped identify many installers being downloaded, including host-based firewalls, antivirus, and messaging apps, which, while not inherently malicious, could be out of policy in a production environment. One example was a SCADA diagnostics and monitoring app that opened a listening port.

Reviewing the video of the sample’s analysis run provided greater context. We then traced the session in NetWitness Packets to identify the user, source, and destination, confirming it came from a SCADA security vendor at the conference who was setting up a demo machine.

Unlike the Wild West of Black Hat, attendees in general were not using the network as an attack platform. However, very malicious samples were discovered as they were downloaded into the network, including one ransomware sample. The full PCAP of the command-and-control communication between the sample and the attack servers was automatically analyzed with SNORT rules as well, resulting in the SNORT alert tags.

The sample was identified as TeslaCrypt by its behavior and command-and-control traffic. If it had been an unknown variant, it still would have been detected by the combination of behavioral indicators, such as Shadow Copy Deletion Detected (so the user cannot recover to a system restore point), Process Modified Desktop Wallpaper (the ransom note displayed), and Large Amount of High Entropy Artifacts Written (encrypting the user’s files on disk).
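The “high entropy artifacts” indicator works because encrypted output is nearly indistinguishable from random data: its Shannon entropy approaches 8 bits per byte, while plain text sits much lower. A minimal sketch of the measurement follows; this is the general technique, not Threat Grid’s actual implementation:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; values near 8.0 suggest encrypted
    or compressed content, typical of ransomware-written artifacts."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog" * 20
cipher_like = bytes(range(256)) * 4  # stand-in for encrypted output

low = shannon_entropy(plain)        # roughly 4-5 bits/byte for English text
high = shannon_entropy(cipher_like) # 8.0 bits/byte for a uniform distribution
```

A detector built on this would flag a process that suddenly writes many files whose per-file entropy jumps near 8.0, which is exactly the pattern mass file encryption produces.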

Over the course of RSAC 2017, the RSA and Cisco SOC team delivered 15 tours of the SOC for over 600 attendees, where we showed real-time traffic plus advanced malware analysis, sandboxing, and threat intelligence from Threat Grid, with time for Q&A with RSA and Cisco engineers.

Here is the Facebook Live interview from the SOC. See you at RSAC 2018!

Authors

Jessica (Bair) Oppenheimer

Director, Security Operations

Threat Detection & Response