


“I’m sorry, Dave, I’m afraid I can’t do that.”

- HAL, the computer from 2001: A Space Odyssey (1968)

Every day, essential business and physical functions are executed by software without human oversight. Many of these functions, from automobile braking systems to automatic systems on commercial aircraft and commuter trains to medical equipment, operate at speeds and levels of precision that human beings cannot match. Thankfully, the persistent fear that someone may eventually create software intelligent enough to defy us has not come to pass. If anything, the opposite remains the more immediate concern: as fallible humans, we continue to generate software riddled with problems, setting the stage for accidents. One such incident was recently made public.

News of the U.S. Securities and Exchange Commission (SEC) pursuing hackers comes just months after self-policing by the New York Stock Exchange (NYSE) resulted in fines against Credit Suisse for failing to catch a faulty algorithm that regulated "buy" and "sell" orders for its proprietary trading arm. According to the Financial Times, one afternoon in 2007, hundreds of thousands of messages flooded the NYSE trading system with simultaneous buy and sell orders, slowing trading across the board. Although lives may not directly depend on the NYSE, some fear that a software glitch such as the Credit Suisse incident could knock out Wall Street, sending shock waves across the global financial industry and inflicting billions of dollars in economic damage.

The software that assists traders is becoming weirder, riskier, and faster. Some algorithms use artificial intelligence to sense market trends, gauge sentiment, and predict the likely direction of stock movement, executing trades automatically based on that information. Perhaps because these systems are not perceived to be directly responsible for lives, the risk tolerance for market trading software may be greater than for automated systems charged with public safety. But the fact remains that many software engineers face pressure to produce software with little time for testing, shipping it to users who use it in ways its developers could not have foreseen. While the facts surrounding the Toyota unintended acceleration cases remain unclear, the affair may be causing some to wish for the days of a recognizable internal combustion engine with a mechanical throttle.

Smart phones take the problem of unintended use to a new level with the proliferation of apps promising to do everything from balancing your checkbook to checking your insulin levels. Our dependence on mobile computing devices is reaching previously unimagined levels, and the everyday nature of phone apps makes them seem less critical, at least until an emergency arises and the only link we have with first responders is our mobile phone. The immediacy and accessibility of information have eroded our barriers to trust. Layer upon layer of abstraction separates us from the sources of information. We are used to relying on computers to give us answers without confirming or vetting their quality. Oftentimes, it is the availability or prominence of an answer that is judged, not its correctness. If my smart phone's GPS app takes me down a road that no longer exists, it is an inconvenience; but if I have an emergency and my map program sends me to a non-existent hospital, the consequences could be serious.

This same phenomenon has bigger implications when applied to large-scale systems. As smart grid applications, with their economic and environmental advantages, replace older legacy energy infrastructure, hardware and software will increasingly decide for us how power is balanced, provided, and billed. Smart grid providers will face the considerable challenge of protecting against software flaws that could contribute to the failure of critical systems.

Given that humans are fallible and computers are fundamentally unintelligent, flaws will continue to lurk in the code that governs critical systems. It may be that one way to decrease the risk of catastrophes stemming from software flaws, or to stop malfunctions before they do real damage, is to ensure that critical software systems retain manual override switches. Sorry, HAL.
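
The "manual override" idea can be expressed very simply in software. The sketch below is a hypothetical illustration, not drawn from any of the systems mentioned above; the class and function names (ManualOverride, automated_loop, decide_action, apply_action) are assumptions made for the example. It shows an automated control loop that consults a human-operated switch before every action it takes.

```python
# Hypothetical sketch of a manual override ("big red switch") gating an
# automated control loop. Names and structure are illustrative only.
import threading
import time


class ManualOverride:
    """A human-operated switch that automated logic must consult."""

    def __init__(self):
        self._engaged = threading.Event()

    def engage(self):
        """Operator takes control; automated actions must stop."""
        self._engaged.set()

    def release(self):
        """Operator hands control back to the automation."""
        self._engaged.clear()

    def is_engaged(self):
        return self._engaged.is_set()


def automated_loop(override, decide_action, apply_action, interval=0.1):
    """Propose and apply actions automatically, deferring to the override."""
    while True:
        if override.is_engaged():
            # A human has taken over: skip all automated actions.
            time.sleep(interval)
            continue
        action = decide_action()       # e.g., a proposed trade or actuation
        if action is not None:
            apply_action(action)       # executed only while the override is off
        time.sleep(interval)
```

In a real system the override would be wired to a physical control or an operator console and every skipped action would be logged; the essential design choice is that the human switch is checked before, not after, each automated decision is applied.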
