One of the key tools in the cybercrime toolbox is the drive-by web exploit. Simply put, a drive-by exploit occurs when a website is compromised so that it later causes the download of software, often from a different server and typically malicious in nature, without the knowledge of the end user. This software may later be used for a variety of things. It may be a keylogger, recording keystrokes to capture things like passwords and credit card data, or it could be a botnet client, turning the victim PC into a zombie used for spam, DDoS attacks, or even Bitcoin mining. Regardless, the fundamentals remain the same: do something bad to a website, and that website then silently installs malware on visitor machines.
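As a toy illustration of one pattern defenders look for, the Python sketch below flags iframes that are sized or styled to be invisible — a common way a compromised page triggers the silent download. The page content and URLs here are invented, and real drive-by detection is far more involved; this only shows the shape of the check:

```python
# Illustrative only: flag hidden iframes in a page, a common marker of
# drive-by injection. Uses only the standard library.
from html.parser import HTMLParser

class HiddenIframeFinder(HTMLParser):
    """Collects iframe src values that look deliberately concealed."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        style = (a.get("style") or "").replace(" ", "").lower()
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        hidden = "display:none" in style or "visibility:hidden" in style
        if tiny or hidden:
            self.suspicious.append(a.get("src", ""))

def find_hidden_iframes(html):
    finder = HiddenIframeFinder()
    finder.feed(html)
    return finder.suspicious

# An invented example of an injected, invisible iframe:
page = '<p>hello</p><iframe src="http://evil.example/x" width="1" height="1"></iframe>'
print(find_hidden_iframes(page))  # ['http://evil.example/x']
```

A normal, visible iframe would pass through unflagged; the point is that the injected element is deliberately made invisible to the visitor.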
OK, so we all know that mobility has become an absolute necessity in business. How many of us can honestly say that we could last even a day without our smartphone or tablet? We check our email, run enterprise apps, access the ERP, and conduct a host of other activities that require secure VPN access. But just like anything else, there’s a big difference between what we want and what we can (or should) have! After all, enterprise-strength mobility requires enterprise-strength security – something that’s been sorely lacking in all but a few mobile devices.
The much anticipated World IPv6 Day is now behind us. Almost 400 vendors came together on June 8, 2011, enabling IPv6 for their content and services for 24 hours. Cisco was one of them. The goal of the test was to demonstrate the viability, and surface the potential caveats, of a large-scale IPv6 deployment in the real world, as IPv6 has been steadily gaining traction and interest due to the gradual exhaustion of IPv4 addresses.
Internally, Cisco, like most organizations, was preparing for the 24 hours to go smoothly for its own IPv6-served content. At the same time, considering the large deployment of Cisco devices throughout networks everywhere, precautions were taken to address any issues that could arise during the dry run. Fortunately, activities concluded successfully with no major issues, showing that an IPv6 future could be closer than initially thought.
Many reports have been, and will be, published on the results, statistics, and lessons learned during the testing. Among those, we would like to stress a few key points from Cisco Distinguished and Support Engineers Carlos Pignataro, Salman Asadullah, Phil Remaker, and Andrew Yourtchenko, who were all engaged in the project, which give a general feel for how the day went:
- Vendor coordination proved possible, showing that even competitors can work together toward a common goal that will benefit everyone.
- There were no support cases related to the World IPv6 Day activities, which indicated a good level of both IPv6 preparedness and product readiness.
- IPv6 adoption could happen smoothly, avoiding major technical issues when done methodically.
- AAAA DNS records, which are used for IPv6, do not automatically “break” the Internet, as was often argued. There are certain challenges with providing an IPv6-enabled DNS infrastructure, but these can be addressed.
- User experience feedback was positive, based on an IPv6-only approach. In a dual-stack environment, user experience can deteriorate depending on relative IPv6 and IPv4 performance, so solutions that track both protocols can help. As the transition will take years, dual-stack environments will be the way to go, and solutions like Happy Eyeballs can make the experience more transparent for users. The Chrome browser already implements a similar fallback mechanism, which has had documented benefits for some of its users.
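To make the Happy Eyeballs idea mentioned above concrete, here is a minimal Python sketch: give IPv6 a short head start, then race IPv4 alongside it and take whichever connects first. The connector callables are stand-ins for real socket connections (a real client would connect to AF_INET6/AF_INET addresses), so this shows only the fallback logic, not a production implementation:

```python
# A minimal sketch of a Happy Eyeballs-style fallback. The connector
# callables are injected stand-ins for real socket connections, so the
# racing/fallback logic itself is visible and testable.
import concurrent.futures

def happy_eyeballs(connect_v6, connect_v4, head_start=0.3):
    """Return (family, connection) from the first connector to succeed.

    connect_v6/connect_v4 return a connection object or raise OSError;
    head_start is how long IPv6 runs alone before IPv4 is attempted.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(connect_v6): "ipv6"}
        v6 = next(iter(futures))
        try:
            # Happy path: IPv6 connects within its head start.
            return "ipv6", v6.result(timeout=head_start)
        except (concurrent.futures.TimeoutError, OSError):
            pass  # IPv6 slow or failed: bring IPv4 into the race.
        futures[pool.submit(connect_v4)] = "ipv4"
        for fut in concurrent.futures.as_completed(futures):
            try:
                return futures[fut], fut.result()
            except OSError:
                continue  # that family failed; wait for the other
        raise OSError("both address families failed to connect")

def broken_v6():
    raise OSError("no IPv6 route")

print(happy_eyeballs(broken_v6, lambda: "v4-conn"))  # ('ipv4', 'v4-conn')
```

The user never notices the broken IPv6 path: the connection simply arrives over IPv4, which is exactly the transparency dual-stack environments need.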
In conclusion, it is important to note that the successful World IPv6 Day exercise showed that the transition to IPv6 will probably not be nearly as scary as many had originally thought. Careful, gradual adoption is easier than was believed, and it is already happening. Product concerns, improvements, and caveats are being worked on aggressively here at Cisco, and further positive developments are ahead.
In my last post on this topic, I highlighted just how true the words “Work is no longer a place you go, but what you do” really are. We now have the ability to work anytime, anywhere, using any device. As easy as this has made the lives of workers all over the world, it’s made the lives of security administrators immensely difficult. Providing secure access to the corporate network in a borderless world, while still somehow keeping out the bad stuff, has caused traditional security policies to become increasingly difficult to configure, manage, and troubleshoot – the source of inordinate amounts of pain for security administrators.
That’s why Cisco has introduced identity-based firewall security as a new capability of the ASA platform. As the first installment of what will soon become full context-aware security, identity-based firewall security enables security administrators to use the plain-language names of users and groups in policy definitions. Rather than authoring and managing a growing list of IP addresses to cover every possible location, device, or protocol that may be required for secure access to the network, identity-based firewall security lets security administrators grant access to “Jeff.” Regardless of where I am or what I’m using for access, I’m still Jeff… so in the simplest case, my administrator can literally write one policy to provide “Jeff” access to the corporate network, rather than six different IP addresses for all the instantiations of Jeff.
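A toy model makes the payoff clear. This is plain Python, not ASA configuration, and all the addresses, names, and rules are invented: policy is written once against a user name, and a separate identity mapping ties whatever IP that user currently has back to the name.

```python
# Toy model of identity-based policy (all names and addresses invented).
# Identity source: who is currently logged in from which address.
sessions = {
    "10.0.0.12": "jeff",      # laptop on the office LAN
    "192.168.4.7": "jeff",    # same user, this time over VPN
    "10.0.0.99": "guest",
}

# One rule per user, instead of one rule per address.
policy = {
    "jeff": {"corp-network"},
    "guest": set(),
}

def is_allowed(src_ip, resource):
    """Resolve the source IP to a user, then check the user's policy."""
    user = sessions.get(src_ip)
    return user is not None and resource in policy.get(user, set())

print(is_allowed("10.0.0.12", "corp-network"))    # True
print(is_allowed("192.168.4.7", "corp-network"))  # True: same user, new IP
print(is_allowed("10.0.0.99", "corp-network"))    # False
```

When Jeff shows up from a seventh address, only the `sessions` mapping changes (and that is learned automatically from the identity source); the one-line policy stays untouched.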
This blog was originally published on: http://blogs.forbes.com/tomgillis/2011/05/24/criticism-abounds-but-cloud-computing-is-here-to-stay/
Wow! Lots of outrage over the colossal cloud computing outage at Amazon! With big sites such as Reddit, Foursquare, and Heroku taken down by the issues with Amazon Web Services (AWS), there’s brouhaha brewing about a black eye on Amazon—and the entire cloud computing industry.
“The biggest impact from the outage may be to the cloud itself,” said Rob Enderle, an analyst with the Enderle Group, in ComputerWorld. “What will take a hit is the image of this technology as being one you can depend on, and that image was critically damaged today…If the outage continues for long, it could set back growth of this service by years and permanently kill efforts by many to use this service in the future.”
So the cloud might be a little beat up, but is cloud computing dead? Not even close.
Cloud computing is here to stay, not only because the model is more efficient and cost effective than traditional IT infrastructure, but because it delivers on the promise of specialization—a value that gives companies an edge and consumers a better product.
What’s AT&T Got to Do With It?
Remember the days when AT&T was the only phone company around, and their phone was the only one you could buy? First it was rotary, and then it was push-button. AT&T made every single part of the phone. It made the screws that held the phone together. The whole machine was incredibly durable, but it was also heavy, clunky, and incredibly inefficient—not to mention expensive.
It didn’t stay that way, however. Boom! Deregulation hit the industry and the price of a phone went from a hundred dollars to a hundred pennies. Everything changed, and today we see the result: throwaway phones. Now phones are ubiquitous, they’re incredibly inexpensive, and they can do more than ever before.
IT infrastructure is moving down the same path. Until now, every company has built its own expertise into its proprietary IT systems. Every company has been (metaphorically speaking) fabricating its own screws, making its own hammers, and toiling over its own infrastructure. There’s been massive duplication of efforts, and the approach is filled with gross inefficiencies.
Now that’s all changing with cloud computing. It has gained rapid adoption exactly because it recognizes the inefficiencies and complications of traditional IT infrastructure, which is built on large, complex systems that require specialized skill sets to implement and deploy. The most interesting form of cloud computing is Infrastructure as a Service, or IaaS. Instead of tilting up the servers and fabricating the screws yourself, you look to a specialist—a large service provider with a deeper level of expertise, greater economies of scale, and the ability to provide the infrastructure on which you can run your apps. Another upshot: by removing a massive noncore task from the organizational to-do list, a new wave of efficiencies and innovation can be unleashed. (Pretty soon, traditional security will look no different from that rotary phone I saw on eBay for $9.99: a charmingly clunky reminder of a long-gone era.)
Build a Plan, Don’t Pray for Perfection
Cloud computing—or anything in computing—is not perfect. Data centers, whether they are public or private, go down. Outages happen in-house as well as to the industry’s leading cloud-hosting providers.
What Amazon’s outage truly demonstrates is just how hard this job is. It’s not an argument against AWS or the cloud industry; it’s a reminder that we need to have specialists handle this complex technology. Specialists can, and will, run into problems, but their ability to respond will be better than the ability of a soap company or a car maker or a media empire to respond. As the Heroku team, one of the sites crippled by the outage, put it: “Amazon employs some of the best infrastructure engineers in the world: if they can’t make it work, then probably no one can.”
What we must all recognize is that we need solutions to better insulate companies against inevitable outages. The question we should be asking is not how we can trust the cloud, but how we can make enterprise applications more robust. What should the failover plan look like? (Because things fail.)
The answer is portability. We must have the ability to move apps from one infrastructure to another so that if one bursts, the whole world doesn’t come to a screeching halt. That’s Internet 101. Instead of just one web server, have two web servers in different locations and roll the load between them. Contingency plans that included having two data centers from two different providers and different availability zones kept sites such as the business audience marketing platform company Bizo running during the Amazon outage. By similarly designing systems that took potential failures into account, Netflix was largely unaffected.
The current tools available for virtual data centers don’t provide good portability and failover from private to public data centers. Technology vendors need to address how to move a data center workload from one cloud computing provider to another, so they can provide the resiliency and efficiency needed to deal with the occasional bad hair day. With that investment, we’ll all come out looking a lot better.