For many organizations, buying cloud services can be stressful. After all, as your business moves more and more into the cloud, you need to know that your services and cloud provider are at least as reliable as if those services originated from within your own data center.

Buying cloud services can feel a lot like buying a car. How many of us really know what’s going on under the hood? We look at a few key stats like gas mileage and drive it around the block. Yeah, it accelerates and brakes. We know we’re safe and going to get relatively good gas efficiency. After all, cars have to meet certain standards. So in the end the decision comes down to price and comfort features such as how much we like the center console and cup holder.

But not all clouds are created equal. Low pricing and a fancy user portal are nice, but they aren’t what keep your business growing. Is best-effort service good enough for your operations? Can your organization afford to experience down time? Does your provider offer the flexibility you could get from other providers? Is your service truly enterprise-class?

The good news is that, just like there are standards in the car industry, there are standards for cloud. Services that are Cisco Powered, for example, have to meet strict requirements to carry the Cisco Powered logo. These requirements include certification and a third-party audit of every service to verify they deliver as promised.

You can learn more about what it takes to have confidence in your cloud provider from our partner, OneNeck. In their recent blog, “How to Reliably Offload IT Management to the Cloud,” they share a comprehensive list of factors to consider when choosing a cloud provider.

Selecting the right cloud provider and services doesn’t have to be frustrating and arbitrary. By understanding what comprises a reliable cloud, you can ask the right questions to ensure your provider is the best partner for your business.



Authors

Xander Uyleman

Senior Manager

Global Partner Marketing


After a whirlwind week in Tokyo, it’s clear that Japan, the world’s third-largest economy, is embracing the potential economic value of the Internet of Everything (IoE). For Japan, we estimate an IoE opportunity of $870 billion over the next decade (out of a global economic value of $19 trillion).

With its proud history of industry, technology, and innovation leadership, Japan is an ideal location for Cisco’s 7th IoE Center of Innovation, a $20 million investment for Cisco, which opened last Thursday with nine Japan-based ecosystem partners. Excitement is high around our open lab’s charter to bring together customers, industry partners, startups, accelerators, government agencies, and research communities to collaborate on next-generation technology. Photos of the center’s opening are here.


In Tokyo, we will be working with partners to develop Fog Computing solutions focused on Manufacturing, Sports and Entertainment, and the Public Sector. These Fog solutions extend cloud storage, computing, and services to the edge of the network, a critical element of realizing value from IoE.

Continue reading “Accelerating and Innovating the Internet of Everything in Japan”



Authors

Wim Elfrink

Executive Vice President, Industry Solutions & Chief Globalisation Officer


Statement from Cisco Chairman and CEO John Chambers:

U.S. Federal Communications Commission Chairman Tom Wheeler today unveiled a landmark proposal that has the power to transform our nation’s classrooms and put the power of the Internet at the fingertips of all teachers and students.

Connecting students and teachers in the classroom is one of the most important things our nation can do to dramatically improve our educational system. Connected classrooms will provide students with real-time access to the world’s libraries, incredible science experiments, and a wealth of video, apps, and other rich media content. It will also connect students in rural areas and enable students to take innovative and specialized courses at other schools and in other districts.

The effects of this decision will be felt for decades. Not only will it encourage more students to enter the fields that make up STEM – Science, Technology, Engineering, and Math — but it will also help make our students and our nation more competitive on the global stage. The nations that are on the leading edge of the digital revolution will be the ones that lead in terms of innovation, job creation and economic growth.

The E-Rate program forms the bedrock of the federal government’s effort to connect our nation’s schools and libraries to the Internet. This proposal, if adopted, will breathe new life into the program and will help our children and grandchildren prepare for an ‘Internet of Everything’ future where technology is integrated into all aspects of work, life, and education.

###



Authors

John Earnhardt

No Longer at Cisco


by Vicki Livingston, Head of Communications, 4G Americas

From the first batters up to the plate in South Korea in 2012, followed by a flurry from MetroPCS in the U.S., Voice over LTE (VoLTE) seemed stuck in a field of dreams, not making much noise or progress. But bring it on, baby: VoLTE may be getting ready for prime time in 2014. Analyst firm Infonetics expects 30 commercial networks and 51 million VoLTE subscribers by the end of this year, with that number growing to the major leagues in 2015.

So, what is it? And why is it good? Continue reading “Faster, Clearer and Future-Proof: VoLTE Brings it Home”



Authors

Paul Mankiewich

Chief Technology Officer

Mobility


We asked the 2014 Cisco Champions what advice they would give to someone starting in the IT industry. Cisco Champions are seasoned IT technical experts and influencers who enjoy sharing their knowledge, expertise, and thoughts across the social web and with Cisco. The Cisco Champions program encompasses different areas of interest, such as Data Center, Internet of Things, Enterprise Networks, Collaboration and Security. Cisco Champions are located all over the world.
(Cisco Champions are not representatives of Cisco; their views are their own.)

Here are their top 5 tips.

1. Be a Specialist AND a Generalist
Specialize in your field, but keep general knowledge of related fields. If you’re a networking expert, make sure you also know servers, virtualization, storage, and voice, among others. You’ll thank yourself when you can troubleshoot problems that aren’t necessarily network-related.
Benjamin Story
Network Engineer
@ntwrk80 Continue reading “Top 5 Tips for the IT Professional”



Authors

Rachel Bakker

Social Media Advocacy Manager

Digital and Social


I’ve previously written about libfabric. Here are some highlights:

Today, we’re pleased to announce the next step in our libfabric journey: my team at Cisco (the UCS product team) is contributing an open source plugin to Open MPI that uses the libfabric APIs.

Continue reading “libfabric support of usNIC in Open MPI”



Authors

Jeff Squyres

The MPI Guy

UCS Platform Software


According to the Breach Level Index, between July and September of this year, an average of 23 data records were lost or stolen every second, close to two million records every day.1 This data loss will continue as attackers grow increasingly sophisticated. Given this stark reality, we can no longer rely on traditional means of threat detection. Technically advanced attackers often leave behind evidence of their activities, but uncovering those clues usually involves filtering through mountains of logs and telemetry. Applying big data analytics to this problem has become a necessity.

To help organizations leverage big data in their security strategy, we are announcing the availability of an open source security analytics framework: OpenSOC. The OpenSOC framework helps organizations make big data part of their technical security strategy by providing a platform for the application of anomaly detection and incident forensics to the data loss problem. By integrating numerous elements of the Hadoop ecosystem such as Storm, Kafka, and Elasticsearch, OpenSOC provides a scalable platform incorporating capabilities such as full-packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search, and telemetry aggregation. It also provides a centralized platform to effectively enable security analysts to rapidly detect and respond to advanced security threats.

The OpenSOC framework provides three key elements for security analytics:

  1. Context

    A mechanism to capture, store, and normalize any type of security telemetry at extremely high rates. OpenSOC ingests data and pushes it to various processing units for advanced computation and analytics, providing the necessary context for security protection and enabling efficient information storage. It provides the visibility and information required for successful investigation, remediation, and forensic work.

  2. Real-time

    Real-time processing and application of enrichments such as threat intelligence, geolocation, and DNS information to collected telemetry. The immediate application of this information to incoming telemetry provides the greater context and situational awareness critical for detailed and timely investigations.

  3. Centralized Perspective

    The interface presents alert summaries with threat intelligence and enrichment data specific to an alert on a single page. The advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.
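To make the enrichment idea concrete, here is a minimal, illustrative sketch in Python. This is not OpenSOC’s actual implementation (the framework itself runs as Storm topologies over Kafka), and the lookup tables and field names below are invented stand-ins for real threat-intelligence feeds and GeoIP data:

```python
# Sketch of the real-time enrichment step: each incoming telemetry event
# is annotated with threat-intel and geolocation context before indexing.
# THREAT_INTEL and GEO_DB are hypothetical stand-ins for live feeds.

THREAT_INTEL = {"203.0.113.7": "known C2 server"}
GEO_DB = {"203.0.113.7": {"country": "XX", "asn": 64500}}

def enrich(event):
    """Attach context to a raw telemetry event (e.g. a NetFlow record)."""
    ip = event.get("dst_ip")
    enriched = dict(event)
    enriched["threat_intel"] = THREAT_INTEL.get(ip)
    enriched["geo"] = GEO_DB.get(ip)
    # Flag the event if the destination matches the intel feed.
    enriched["alert"] = enriched["threat_intel"] is not None
    return enriched

flow = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 4096}
print(enrich(flow)["alert"])  # True: destination is on the intel list
```

The point of doing this in the stream, rather than at query time, is that every stored event already carries the context an analyst needs when an alert fires.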

During a breach, sensitive customer information and intellectual property are compromised, putting the company’s reputation and resources at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:

  1. Review reports from a Security Incident and Event Manager (SIEM) and run batch queries on other telemetry sources for additional context.
  2. Research external threat intelligence sources to uncover proactive warnings of potential attacks.
  3. Research a network forensics tool with full packet capture and historical records in order to determine context.

Apart from having to access several tools and information sets, searching and analyzing the collected data can take minutes to hours using traditional techniques.

When we built OpenSOC, one of our goals was to bring all of these pieces together into a single platform.  Analysts can use a single tool to navigate data with narrowed focus instead of wasting precious time trying to make sense of mountains of unstructured data.

No two networks are created equal. Telemetry sources differ in every organization. The amount of telemetry that must be collected and stored to provide enough historical context also depends on the amount of data flowing through the network. Furthermore, relevant threat intelligence differs for each organization.

As an open source solution, OpenSOC opens the door for any organization to create an incident detection tool specific to their needs.  The framework is highly extensible: any organization can customize their incident investigation process. It can be tailored to ingest and view any type of telemetry, whether it is for specialized medical equipment or custom-built point of sale devices. By leveraging Hadoop, OpenSOC also has the foundational building blocks to horizontally scale the amount of data it collects, stores, and analyzes based on the needs of the network.  OpenSOC will continually evolve and innovate, vastly improving organizations’ ability to handle security incident response.
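As an illustration of that extensibility, here is a hedged sketch of how per-source parsers might normalize disparate telemetry into one common event shape so that downstream enrichment and search do not care where the data came from. The function names and fields are hypothetical, not OpenSOC’s actual API:

```python
# Illustrative sketch: register a parser per telemetry source, and
# normalize every raw record into one common event dictionary.
import json

def parse_syslog(line):
    # e.g. "2014-11-01T12:00:00 host1 sshd: failed login from 198.51.100.9"
    ts, host, rest = line.split(" ", 2)
    return {"timestamp": ts, "source": host, "message": rest, "type": "syslog"}

def parse_pos_json(blob):
    # A custom point-of-sale device might emit JSON directly.
    rec = json.loads(blob)
    return {"timestamp": rec["ts"], "source": rec["device_id"],
            "message": rec["event"], "type": "pos"}

PARSERS = {"syslog": parse_syslog, "pos": parse_pos_json}

def normalize(kind, raw):
    """Dispatch a raw record to the parser registered for its source."""
    return PARSERS[kind](raw)

event = normalize("syslog",
                  "2014-11-01T12:00:00 host1 sshd: failed login from 198.51.100.9")
```

Adding support for a new telemetry source then amounts to registering one more parser, which is the kind of customization the framework is designed to allow.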

We look forward to seeing the OpenSOC framework evolve in the open source community. For more information and to contribute to the OpenSOC community, please visit the community website at http://opensoc.github.io/.



1. http://www.breachlevelindex.com/



Authors

Pablo Salazar

No Longer at Cisco


There’s a pretty great short post from Business Insider last year that’s been getting re-circulated recently. It gives one-sentence summaries of famous business books like The Innovator’s Dilemma, Good to Great, Outliers, Purple Cow, and The Lean Startup.

I particularly liked BI’s short summary of Eric Ries’ book The Lean Startup, which centers on the concept of creating a “minimum viable” product and then iterating on it, fed by continual customer input and analytics. Here’s the nicely done reductionist summary:

“Rather than work forward from a technology or a complex strategy, work backward from the needs of the customers and build the simplest product possible.”

If you’ve been in tech the last few years, and especially in Silicon Valley, you won’t have escaped the term “Minimum Viable Product” (MVP), and you’ve undoubtedly been immersed in Agile development methodology. But there’s a dilemma in the seductive notion of Lean and MVP when misapplied: we’ve all seen teams who focus on the alluring idea of minimal without thinking about what will make the product viable from the standpoint of the customer. Across industries, the “work backward from the needs of the customers” part is easy to miss in the rush to produce efficient code and quick deliverables.

This occasional lack of customer orientation has led to the backlash observation that “Agile doesn’t have a brain,” meaning it’s very good at producing efficiently, but not guaranteed to produce the right end products in the eyes of customers. We in tech have all seen this happen, and it’s vexing because producing un-useful end deliverables runs against the core principles of Agile.

Enter author Jeff Gothelf, an ardent evangelist for Lean and MVP thinking. Jeff is the author of the excellent book Lean UX, and recently wrote a really interesting post on this “Agile doesn’t have a brain” topic.

Jeff is working with us on some upcoming talks and a workshop, and in addition to what he says in the post above, he offers some good advice for bringing design and customer thinking into the MVP debate:

  • Work “Lean” on projects, and focus relentlessly on the customer in your process and measures
  • Focus on user-driven metrics to understand how you’re doing 
  • Make sure designers and other key non-coding disciplines are in your agile sprints — they will add efficiency and dimension, helping to make sure the “right things” are being produced
  • Think “team,” not “roles” within the sprints (at Cisco, we even do this in Marketing sprints).
  • Most important: Transform from a culture of delivery to a culture of learning, where you are constantly tuning and improving based on end objectives and customer needs.

If you’re new to the idea of incorporating customer-oriented design into MVP and Lean, I recommend Jeff’s book Lean UX. And, as a bonus, there’s a great video overview he recently gave at Google on some of these topics.

Enjoy!




Authors

Martin Hardee

Director, Cisco.com

Cisco.com


Cisco is pleased to announce its intention to support the Intel MPI Library™ with usNIC on the UCS server and Nexus switch product lines, over both the ultra-low-latency Ethernet and routable IP transports, at both 10GE and 40GE speeds.

usNIC will be enabled by a simple library plugin to the uDAPL framework included in enterprise Linux distributions. The Intel MPI Library can utilize the usNIC uDAPL library plugin without any modifications to existing MPI applications.

Continue reading “usNIC support for the Intel MPI Library”



Authors

Jeff Squyres

The MPI Guy

UCS Platform Software