This Friday at midnight marks the official start of the holiday shopping season for American consumers. After the traditional Thanksgiving feast (many retail stores are opening at midnight with specials), up to 152 million shoppers are expected to visit stores and websites over the Black Friday weekend, according to an NRF survey published in November.
As we approach this long holiday weekend, I thought I’d share a few articles that would make good reading while the turkey is in the oven.
The publication of the Neiman Marcus Christmas Book is always noteworthy, and this year is no exception. Neiman Marcus generates a huge amount of press coverage and brand exposure with the holiday catalog.
In the past, back-to-school shopping conjured images of moms, minivans, and moving from store to store with shopping lists. Today’s back-to-school shopping trips are more of a logistics exercise, with moms doing online research and checking social media to find the best deals before they set foot in the first store.
This is backed by recent research showing the rise of the “Connected Mom.”
In Deloitte’s 2011 Back to School Survey, 64% of respondents with smartphones plan to use them for back-to-school shopping, and 43% will download discounts, coupons, and sales information. Social media is also playing a role, with 35% of respondents using social networking sites to assist in shopping.
Wow! Lots of outrage over the colossal cloud computing outage at Amazon! With big sites such as Reddit, Foursquare, and Heroku taken down by the issues with Amazon Web Services (AWS), there’s a brouhaha brewing about a black eye for Amazon—and the entire cloud computing industry.
“The biggest impact from the outage may be to the cloud itself,” said Rob Enderle, an analyst with the Enderle Group, in ComputerWorld. “What will take a hit is the image of this technology as being one you can depend on, and that image was critically damaged today…If the outage continues for long, it could set back growth of this service by years and permanently kill efforts by many to use this service in the future.”
So the cloud might be a little beat up, but is cloud computing dead? Not even close.
Cloud computing is here to stay, not only because the model is more efficient and more cost effective than the traditional IT infrastructure, but because it promotes the promise of specialization—a value that gives companies an edge and consumers a better product.
What’s AT&T Got to Do With It?
Remember the days when AT&T was the only phone company around, and their phone was the only one you could buy? First it was rotary, and then it was push-button. AT&T made every single part of the phone. It made the screws that held the phone together. The whole machine was incredibly durable, but it was also heavy, clunky, and incredibly inefficient—not to mention expensive.
It didn’t stay that way, however. Boom! Deregulation hit the industry and the price of a phone went from a hundred dollars to a hundred pennies. Everything changed, and today we see the result: throwaway phones. Now phones are ubiquitous, they’re incredibly inexpensive, and they can do more than ever before.
IT infrastructure is moving down the same path. Until now, every company has built its own expertise into its proprietary IT systems. Every company has been (metaphorically speaking) fabricating its own screws, making its own hammers, and toiling over its own infrastructure. There’s been massive duplication of efforts, and the approach is filled with gross inefficiencies.
Now that’s all changing with cloud computing. It has gained rapid adoption exactly because it recognizes the inefficiencies and complications of traditional IT infrastructure, which is built on large, complex systems that require specialized skill sets to implement and deploy. The most interesting form of cloud computing is Infrastructure as a Service, or IaaS. Instead of tilting up the servers and fabricating the screws yourself, you look to a specialist—a large service provider with a deeper level of expertise, greater economies of scale, and the ability to provide the infrastructure on which you can run your apps. Another upshot: by removing a massive noncore task from the organizational to-do list, a new wave of efficiencies and innovation can be unleashed. (Pretty soon, traditional security will look no different from that rotary phone I saw on eBay for $9.99: a charmingly clunky reminder of a long-gone era.)
Build a Plan, Don’t Pray for Perfection
Cloud computing—or anything in computing—is not perfect. Data centers, whether they are public or private, go down. Outages happen in-house as well as to the industry’s leading cloud-hosting providers.
What we must all recognize is that we need solutions to better insulate companies against inevitable outages. The question we should be asking is not how can we trust the cloud, but rather how can we make enterprise applications more robust? What should the failover plan look like? (Because things fail.)
The answer is portability. We must have the ability to move apps from one infrastructure to another so that if one bursts, the whole world doesn’t come to a screeching halt. That’s Internet 101. Instead of just one web server, have two web servers in different locations and roll the load between them. Contingency plans that included having two data centers from two different providers and different availability zones kept sites such as the business audience marketing platform company Bizo running during the Amazon outage. By similarly designing systems that took potential failures into account, Netflix was largely unaffected.
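That roll-the-load idea can be sketched in a few lines of Python. This is a minimal illustration, not a production load balancer; the function name `first_success` and the toy `flaky`/`healthy` backends are hypothetical stand-ins for requests to replicas in different availability zones.

```python
def first_success(backends):
    """Return the result of the first backend that responds.

    `backends` is a list of zero-argument callables, each representing
    a request to one replica (e.g. a web server in a different zone).
    """
    last_error = None
    for backend in backends:
        try:
            return backend()
        except Exception as err:
            last_error = err  # this replica failed; roll to the next one
    # Only if every replica fails does the whole request fail.
    raise RuntimeError("all backends failed") from last_error

# Toy backends: one zone is down, the other is serving normally.
def flaky():
    raise ConnectionError("zone down")

def healthy():
    return "200 OK"
```

The point is simply that the request succeeds as long as any one replica is up, which is exactly how the Bizo and Netflix architectures rode out the outage.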
The current tools available for virtual data centers don’t provide good portability or rollover from private to public data centers. Technology vendors need to address how to move a data center workload from one cloud computing provider to another, so they can provide the resiliency and efficiency needed to deal with the occasional bad hair day. With that investment, we’ll all come out looking a lot better.
The next wave of spam is now making its way into social networks. One example of this type of threat is the Koobface malware, distributed through social networks such as Facebook. Koobface tricked users into downloading the malware, which then spread via the network of trusted friends. (For more details, please read Unsociable: Social Media Brings a New Wave of Threats.)
Facebook recognized this malware was a major problem. The trick to solving it, though, was determining how to distinguish the behavior of a bot acting like a human from the behavior of a real human. The initial answer seemed clear: selectively use a “captcha.” A captcha is the squiggly letters or numbers with interspersed lines that websites use to verify the user is a real person, not a bot. It’s very difficult for a machine to read the captcha and enter the right characters. (IMHO it is difficult for a person to enter the right characters, too—so no wonder a bot can’t do it.)
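The “selectively” part is the interesting bit: you only challenge accounts whose behavior looks inhuman. A common (and here purely illustrative) heuristic is a sliding-window rate check — the names `needs_captcha`, the 60-second window, and the 20-actions limit are all assumptions for the sketch, not anything Facebook has disclosed:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
HUMAN_RATE_LIMIT = 20  # actions per window; bots typically blast past this

_events = defaultdict(deque)  # user_id -> timestamps of recent actions

def needs_captcha(user_id, now=None):
    """Selectively challenge: only users acting faster than a plausible
    human rate are asked to solve a captcha."""
    now = time.time() if now is None else now
    q = _events[user_id]
    q.append(now)
    # Drop actions that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > HUMAN_RATE_LIMIT
```

A real system would weigh many more signals (friend-request fan-out, message similarity, login geography), but the design choice is the same: keep the captcha out of the way of normal humans and put it in front of anything moving at machine speed.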
Last year brought a surprising, and seemingly positive, change in the number of security threats: it was the first year we saw spam volumes drop. That decrease was a significant change from the previous decade, in which spam volumes roughly doubled every year, compounding to yield a dirty Internet where about 90 percent of the email flowing over the backbone is spam. So does the drop in spam volume mean spam is suddenly less of a problem? Have spammers given up and gone home, or maybe developed a conscience and let up a little?
Unfortunately, no. Spam has just changed. It’s become more sophisticated. We are seeing a massive shift away from the spray-and-pray tactics of the past to much more targeted and complex attacks. One consistent trait of attackers: they always follow the money. Therefore, as social media sites such as Facebook have experienced explosive growth (and explosive valuations), it’s no surprise that threat writers are exploring ways to tap into these networks to deliver the next generation of attacks.