
Last week I had the opportunity to attend the AFCEA Homeland Security conference in Washington, D.C.

Eleven days after the September 11, 2001, terrorist attacks, the first Director of the Office of Homeland Security was appointed. That office paved the way for the Department of Homeland Security, established in 2002, which combined 22 different federal departments and agencies into a unified, integrated cabinet agency.

The Department has a vital mission: to secure the U.S. from a wide range of threats, with capabilities spanning aviation and border security, emergency response, cybersecurity, and chemical facility inspections.

The three-day conference focused on a number of important topics, including:

  • Mobility and Interoperability
  • Information Sharing and Fusion Centers
  • Big Data Analytics
  • Role of Laboratories and Scientific Research in Homeland Security
  • Cybersecurity Education and Continuous Diagnostics and Monitoring

Continue reading “Highlights from AFCEA Homeland Security Conference: Remaining Ever-Vigilant”



Authors

Kacey Carpenter

Senior Manager

Global Government and Public Sector Marketing


A consequence of the Moore-Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for smaller applications to come to it. Consider this: it took mankind over 2,000 years, up to 2012, to produce 2 Exabytes (2×10¹⁸ bytes) of data; now we produce that much in a single day, and the rate will only go up from here. With data production far exceeding the capacity of the Network, particularly at the Edge, there is only one way to cope, through what I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT, or, as some say, Fog computing:

  1. Dramatic growth in applications specialized and optimized for analytics at the Edge: Big Data is hard to move around (data gravity), and we cannot move the data to the analytics fast enough, so we need to move the analytics to the data. This will cause dramatic growth in applications specialized and optimized for analytics at the edge (see the sketch after this list). Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress except by making the devices smarter, safer and, of course, better connected.
  2. Dramatic growth in the computational complexity of the ETL (extract-transform-load) needed to move essential data from the Edge to be data-warehoused at the Core: Currently, most open standards and open source efforts are buying us some time, squeezing as much information as possible, in as little time as possible, through limited connection paths to billions of devices. Soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 Terabytes of data for an hour of flight. Imagine the computational complexity we already have that boils that down to routing and maintenance decisions for such complex machines, and imagine the consequences of ignoring such a capability, which can already be made available at rather trivial cost.
  3. The drive to instrument data to be “open” rather than “closed”, with the ownership and security concerns of all the information we create addressed: Open Data challenges have already surfaced, and there comes a time when we realize that an Open Data interface, with guarantees about availability and privacy, needs to be defined and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each). With the ever-growing amount of data at the Edge, the questions of who “owns” it and how access to it is “controlled” become ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should be no different from the very same rules that govern open source or open standards.

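To make “moving the analytics to the data” concrete, here is a minimal sketch in Python; the field names and numbers are made up for illustration. An edge node reduces a window of raw readings to a small summary record, so only a handful of values cross the constrained link to the core instead of every raw sample.

    # A minimal sketch of edge-side analytics: aggregate locally,
    # ship only the summary to the core. All values are illustrative.
    from statistics import mean

    RAW_READINGS_PER_WINDOW = 1000  # raw samples produced at the edge

    def summarize_window(readings):
        """Reduce a window of raw sensor readings to a small summary record."""
        return {
            "count": len(readings),
            "min": min(readings),
            "max": max(readings),
            "mean": mean(readings),
        }

    # Only the summary crosses the network link to the core data warehouse.
    window = [float(i % 97) for i in range(RAW_READINGS_PER_WINDOW)]
    summary = summarize_window(window)
    print(summary)  # four numbers shipped instead of 1,000 raw samples

The same pattern scales from a sensor to a jet engine: the heavy computation happens next to the data, and only decisions and summaries travel.
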
Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and other institutions, plus, of course, industry colleagues from all of today’s popular commercial web farms. I was pleasantly surprised to see not just that the first two trends are already top-of-mind, but that the third one has emerged and is actually recognized. We have only just started to sense the importance of this third wave, with huge implications for Cloud computing. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for their outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project and, most importantly, as we just wrapped up EclipseCon 2014 this week, very happy to see that we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most Cloud software turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…




A couple of months ago I designed a Cisco Unified Wireless Network for an enterprise company. One of the big challenges was serving wireless guest access in the branch offices, which are connected to the headquarters via MPLS. The MPLS connections usually terminate in the LAN zone, so it is difficult to tunnel guests through the corporate infrastructure to gain internet access. I achieved this in the Cisco environment with the Mobility Anchor feature, which offers a flexible and easy-to-implement method for deploying wireless guest services using Ethernet over IP within a centralized architecture. Ethernet over IP is a proprietary tunnel across a Layer 3 topology between two Cisco wireless LAN controllers. For more information see: http://goo.gl/6pjXcZ

Continue reading “Cisco Meraki – Guest Access in an MPLS branch”
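
For readers who want to reproduce this, below is a minimal sketch of the typical AireOS CLI steps on the two controllers. The WLAN ID and IP address are hypothetical placeholders; verify the exact commands against the configuration guide for your controller release.

    # Hypothetical values: WLAN 5 is the guest WLAN, 10.10.20.5 is the
    # management IP of the anchor controller in the DMZ.

    # On the foreign (branch/campus) controller: point the guest WLAN at the anchor.
    config wlan disable 5
    config wlan mobility anchor add 5 10.10.20.5
    config wlan enable 5

    # On the anchor controller: anchor the same WLAN to its own management IP.
    config wlan disable 5
    config wlan mobility anchor add 5 10.10.20.5
    config wlan enable 5

Both controllers must also know each other as mobility group members before the anchor tunnel will come up.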



Authors

Sven Kutzer

Technical Solutions Architect

Global Security (GSSO) – EMEAR – Advanced Threat


Innovation.  Change.  Market transitions.  This is the natural order when it comes to IT.

Today’s accelerated rate of technological change is disrupting all areas of IT, while at the same time creating new possibilities for our data center customers.  As a CIO, you’re tasked with capitalizing on the benefits of new technologies to enhance operations, but with minimal disruption to your business.  That’s not easy to do when the world is moving so quickly.

Innovation brings new players to the marketplace, and sometimes compels existing vendors to adjust their strategies.  Earlier this year, in a move that will have a significant impact across the IT landscape for technology providers and customers alike, IBM announced an agreement with Lenovo for the acquisition of IBM’s x86 server and associated networking business, including Flex System.

Five years ago, Cisco made a strategic move by announcing a data center innovation and putting into motion a market transition.  Cisco Unified Computing System (UCS) led the converged data center transformation by integrating high-performance networking, compute, and storage into a single, unified platform. Cisco UCS created a new value proposition for the data center in virtualization and cloud computing, achieving measurable cost savings and technology gains.

Cisco is now the number two worldwide vendor of blade servers, and our vision and ability to execute deliver value that clearly resonates with our customers.  Cisco UCS changes the economics of the data center by increasing operational simplicity and improving business agility.  This is a great time for you to take a closer look and learn why over 30,000 customers have made the move to Cisco UCS.

As the inevitable change takes place across the IT landscape, Cisco remains committed to the data center.  We are also committed to our long-time collaboration with IBM, one of Cisco’s most successful partnering relationships.  Our plan is to move forward, build on this relationship and continue to deliver solutions of high value for your data centers across technology, service, and support.



Authors

Frank Palumbo

Senior Vice President

Global Data Center Sales


At CES this year we announced the expansion of Videoscape to the cloud. By launching Videoscape Cloud Software and Videoscape Cloud Services, we are empowering our customers with the flexibility and agility to provision and scale infrastructure on demand, the service velocity to introduce new functionality more rapidly, and cost optimization through more manageable and predictable cost structures.

For some of you, reading about Cisco + Cloud Software is nothing new; Videoscape is yet another example of how we are taking our industry proven and robust software capabilities and making them available for implementation as cloud software applications. Been there, done that.

Yet we have raised some eyebrows with Cisco + Cloud Services. What experience does Cisco have delivering software as a service (SaaS), and what expertise do we have operating SaaS models in the service provider video and media/entertainment space?

In today’s blog, I will answer the question of why our customers should feel confident with Cisco as their Cloud Services partner by pointing to Cisco’s established SaaS leadership as well as our domain expertise in the service provider video and media/entertainment space. In a follow-up blog, I will address…

Continue reading “Videoscape + Cloud Services = Cisco SaaS Advantage”



Authors

Kip Compton

No longer with Cisco


Web surfers in February 2014 experienced a median malware encounter rate of 1:341 requests, compared to a January 2014 median encounter rate of 1:375. This represents a 10% increase in the risk of encountering web-delivered malware during the second month of the year. February 8, 9, and 16 were the highest-risk days overall, at 1:244, 1:261, and 1:269, respectively. Interestingly, though perhaps not unexpectedly, web surfers were 77% more likely to encounter Facebook scams on weekends than on weekdays. Facebook-related scams accounted for 18% of all web malware encounters in February 2014.
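
As a quick sanity check of that 10% figure, treat each encounter rate as a per-request probability; a short Python snippet makes the arithmetic explicit:

    # Encounter rates expressed as per-request probabilities (1 in N requests).
    jan = 1 / 375  # January 2014 median encounter rate
    feb = 1 / 341  # February 2014 median encounter rate

    increase = feb / jan - 1  # relative month-over-month increase in risk
    print(f"{increase:.1%}")  # prints 10.0%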


Continue reading “February 2014 Threat Metrics”



Authors

Mary Landesman

Senior Security Researcher

Cisco TRAC


This post is co-authored with Levi Gundert and Andrew Tsonchev.

Update 2014-03-21: For clarity, the old kernel is a common indicator on the compromised hosts. We are still investigating the vulnerability, and do not yet know what the initial vector is, only that the compromised hosts are similarly ‘old’.

Update 2014-03-22: This post’s focus relates to a malicious redirection campaign driven by unauthorized access to thousands of websites. The observation of affected hosts running Linux kernel 2.6 is anecdotal and in no way reflects a universal condition among all of the compromised websites. Accordingly, we have adjusted the title for clarity. We have not identified the initial exploit vector for the stage zero URIs. It was not our intention to conflate our anecdotal observations with the technical facts provided in the listed URIs or other demonstrable data, and the below strike through annotations reflect that. We also want to thank the community for the timely feedback.


TRAC has recently observed a large malicious web redirect campaign affecting hundreds of websites. Attackers compromised legitimate websites, inserting JavaScript that redirects visitors to other compromised websites. All of the affected web servers that we have examined use the Linux 2.6 kernel. Many of the affected servers are using Linux kernel versions first released in 2007 or earlier. It is possible that attackers have identified a vulnerability on the platform and have been able to take advantage of the fact that these are older systems that may not be continuously patched by administrators.
Continue reading “Coordinated Website Compromise Campaigns Continue to Plague Internet”



Authors

Martin Lee

EMEA Lead, Strategic Planning & Communications

Cisco Talos


SES (soon to be called ClickZ) hosted its Digital Marketing Conference in Jakarta this week, the meeting point for digital marketing and advertising professionals in the Asia-Pacific region.
The latest mobile marketing trends, best practices and new technologies were discussed and presented, including Cisco’s CMX capabilities in a meet-the-experts session called “Context marketing using WiFi location services”.

Some interesting observations and ideas discussed include:

Multi-Channel Attribution modeling:
While online marketing investments are more measurable than conventional media such as television, tracking what actually leads to a sales conversion is becoming increasingly complicated.
Simple measures such as last-click or first-click attribution cannot fully represent today’s omni-channel, ultra-connected consumer, so it is not surprising that multi-channel tools and attribution modeling are among the hottest topics in digital analytics; a brief sketch follows below.
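
As an illustration, here is a minimal sketch in Python of the difference between last-click and linear (multi-touch) attribution; the journeys and channel names are made up.

    # A "journey" is the ordered list of channels a customer touched
    # before converting. All data here is illustrative.
    from collections import defaultdict

    journeys = [
        ["search", "social", "email"],
        ["display", "search"],
        ["email"],
    ]

    def last_click(journeys):
        """Give all credit for each conversion to the final touch."""
        credit = defaultdict(float)
        for j in journeys:
            credit[j[-1]] += 1.0
        return dict(credit)

    def linear(journeys):
        """Spread credit evenly across every touch in the journey."""
        credit = defaultdict(float)
        for j in journeys:
            for channel in j:
                credit[channel] += 1.0 / len(j)
        return dict(credit)

    print(last_click(journeys))  # email gets 2.0, search 1.0, others nothing
    print(linear(journeys))      # display and social now share the credit

Under last-click, display and social get zero credit even though they appeared in converting journeys; a multi-channel model surfaces their contribution.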

Data to underpin a successful digital marketing strategy:
Consumers are increasingly connected all the time, and every day around the world they are being wooed by offers of better prices, better deals and better service.
How can marketers compete? Often the only defence they think they have is to match those deals and price cuts.
Data, however, is key. As information about customers becomes more plentiful and more detailed, and as customers become more interactive with the companies they buy from, the competitive marketing landscape is becoming radically different. For many advanced organisations, using data to deliver insight and analysis provides the competitive edge that keeps them ahead of the pack.

Continue reading “Mobile Marketing @ SES Jakarta & WiFi based Location Context”



Authors

Brendan O'Brien

Director Global Product Marketing

Connected Mobile Experiences