
Haystack, Diaspora, and Establishing Trust

Haystack was supposed to be a revolutionary tool in the cause of freedom. Billed as a sort of steganographic communications tool for censored Iranians, the software rocketed to popularity in the media. But last week it fell quickly out of favor. Its code, which was never made generally available, was reviewed by Jacob Appelbaum, who was frank in his assessment. Appelbaum is well positioned to offer an expert opinion here, as he works for the Tor Project, which has significant experience designing software to anonymize network traffic. In the wake of Haystack's trouble, I'm reminded of how our fragile psychologies fall victim to trusting things that we should not.

Diaspora, and the urge to be safe, socially

On September 15, the Diaspora Project, which aims to give users control over how their content is shared on social networks, released its alpha code. Conceived after several privacy issues were raised about the social giant Facebook, the Diaspora Project set its sights on building a social framework that would support encrypted content whose release is controlled by users, not the network.
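To make that model concrete, here is a minimal sketch of what user-controlled release means in practice. This is my illustration, not Diaspora's actual protocol: the contact names are invented, and Python's third-party "cryptography" package stands in for whatever primitives a real node would use. The author encrypts a post client-side and wraps the content key for each chosen contact, so the network stores only ciphertext it cannot read.

    # A minimal sketch of user-controlled sharing (not Diaspora's actual
    # protocol): the post is encrypted client-side with a one-time key,
    # and that key is wrapped per recipient, so the network stores only
    # ciphertext. Requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Hypothetical contacts; in a real system each user holds their own key.
    contacts = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
                for name in ("alice", "bob")}

    content_key = Fernet.generate_key()  # one-time symmetric key
    ciphertext = Fernet(content_key).encrypt(b"my private status update")

    # The author wraps the content key only for the contacts they chose.
    wrapped = {name: key.public_key().encrypt(content_key, oaep)
               for name, key in contacts.items()}

    # A chosen contact unwraps the key and reads the post; the node cannot.
    recovered_key = contacts["alice"].decrypt(wrapped["alice"], oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"my private status update"

The design point is that the node operator never holds a usable key: whoever controls the private keys controls the release of the content, which is exactly the property Diaspora promises and exactly what a reviewer would need to verify.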

At this stage, the fledgling project admits that the code is suitable only for testing and bug reporting, but many technologists fear that users are treating it as production-quality. Several public Diaspora nodes already exist, and security experts reviewing the code have logged a variety of security issues.

While Haystack was not released as open source code, anyone with a copy of the software could have performed some black-box testing and reviewed it to some extent. Since its target audience had reason to fear for their physical safety, Austin Heap, Haystack's developer, should have taken steps to ensure the software was reviewed before placing trusting end users in harm's way. But Diaspora, despite being open software, still faces similar criticism: many are warning users not to store real information in Diaspora nodes at this time, and those warnings may be falling on deaf ears.
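As a toy illustration of what even black-box review can catch (my example, not Appelbaum's actual methodology), consider capturing what a tool writes to the network and checking whether it even looks encrypted. Well-encrypted traffic should approach maximal byte entropy; readable protocol structure is an immediate red flag. The captured bytes below are invented for the example.

    # A toy black-box check: without source code, capture a tool's network
    # traffic and measure its Shannon entropy. Properly encrypted bytes
    # approach 8 bits/byte; plaintext structure scores far lower.
    import math
    from collections import Counter

    def bits_per_byte(data: bytes) -> float:
        """Shannon entropy of the byte stream (8.0 is the maximum)."""
        counts = Counter(data)
        return -sum((n / len(data)) * math.log2(n / len(data))
                    for n in counts.values())

    # Invented capture for illustration; a real review would use a packet trace.
    captured = b"GET /relay?user=dissident42 HTTP/1.1\r\nHost: proxy.example\r\n\r\n"
    print(f"{bits_per_byte(captured):.2f} bits/byte")  # low value: looks like plaintext

A check like this proves very little on its own, but it shows that "the source is secret" never forecloses scrutiny, which is why the absence of any independent review before deployment was so troubling.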

Not “open” vs “closed” — it’s about trust

What we see here is a human response to an undesirable circumstance, similar to the reaction many people have when presented with actual malicious software. A need is identified, a solution is proposed, and software is delivered that purports to fill that need. In many cases, users blindly trust the software and begin using it. Instead, a more detailed investigation should take place: users should question the motives and reputation of those providing the software, seek expert opinion, and perhaps wait to see the experiences of others who have tried it.

What we need to keep in mind about complex systems is not whether we can secure them, but rather what level of trust we wish to give them. If something is as important as evading detection by a repressive government is to dissidents in Iran, then software that supports that need should be thoroughly tested, not assigned trust based on the assertions of a relatively unknown expert. Likewise, if privacy in social networking is essential, then software released by four college students should not be adopted without further review.

This issue confronts users in many ways every day: they must make decisions about the email messages they receive, the web sites they visit, and the software they run on their desktops. Emerging technologies, like cloud-based services, need the same kind of scrutiny. Security technologies exist to help in many of these areas, but much of the solution will also lie in user awareness. Hopefully, these technologies will converge with interfaces and architectures that support a user's ability to make sound decisions based on good information.
