Future Crimes


by Marc Goodman


  Killer Apps: Bad Software and Its Consequences

  Every time you get a security update … whatever is getting updated has been broken, lying there vulnerable, for who-knows-how-long. Sometimes days, sometimes years.

  QUINN NORTON

  Facebook’s software developers have long lived by the mantra “Move fast and break things.” The saying, which was emblazoned on the walls across the company’s headquarters, reflected Facebook’s hacker ethos, which dictated that even if new software tools or features were not perfect, speed of code creation was key, even if it caused problems or security issues along the way. According to Zuckerberg, “If you never break anything, you’re probably not moving fast enough.” Facebook is not alone in its software-coding practices. Either openly or behind closed doors, the majority of the software industry operates under a variation of the motto “Just ship it” or “Done is better than perfect.” Many coders knowingly ship software that they admit “sucks” but let it go, hoping, perhaps, to do better next time. These attitudes are emblematic of everything that is wrong with software coding and represent perhaps the largest single threat against computer security today.

  The general public would be deeply surprised at just how much of the technology around us barely works, cobbled together by so-called duct-tape programming, always just a few keystrokes away from a system crash. As Quinn Norton, a journalist with Wired magazine who covers the hacker community, has pointed out, “Software is bullshit.” Most computer programmers are overwhelmed, short on time and money. They too just want to go home and see their kids, and as a result what we get is buggy, incomplete, security-hole-ridden software and incidents such as Heartbleed or massive hacks against Target, Sony, and Home Depot.

  Writing computer code today is no easy task; indeed, it is incredibly complex. With nearly fifty million individual lines of code in Microsoft Office, each of which needs to work perfectly to keep out attackers, surely some things will go awry. And that’s just one program. Your computer or smart phone must harmonize and police all the programs it’s running, let alone those running on other systems with which it wishes to interact on every Web site you visit. The problem grows exponentially as more and more devices on the IoT begin communicating with one another. All of these software bugs and security flaws have a cumulative effect on our global information grid, and that is why 75 percent of our systems can be penetrated in mere minutes. This complexity, coupled with a profound laissez-faire attitude toward software bugs, has led Dan Kaminsky, a respected computer security researcher, to observe that today “we are truly living through Code in the Age of Cholera.”

  When challenged about the poor state of the world’s software today, many coders retort, “We’re only human, there is no such thing as perfect software.” And they are right. But we’re nowhere near perfect, perhaps only at 50 percent of where we could and should be, according to Charlie Miller, a respected security researcher. Just bumping that number up to 70 or 80 percent could make a huge difference to our overall computer security. Consumers want powerful feature-rich software, and they want it now, with tens of thousands of people willing to stand in line days in advance, sleeping on the sidewalk, to get the latest iPhone. But software providers need to significantly up their game and design security up front, from the ground up, as a key component of trustworthy computing.

  In order to turn this ship around, incentives will need to be aligned to ensure the badly needed emphasis on secure computing actually occurs. For example, today when hackers find a vulnerability in a software program, they can either sell it on the black market to Crime, Inc. for a significant profit or report it to a vendor for next to nothing while facing the threat of prosecution. Thus they make the rather obvious choice. Though this is beginning to change and some companies have established “bug bounty programs,” few offer cash rewards, and among those that do, the amounts are far less than those available in the digital underground. That needs to change. Creating well-funded security vulnerability reporting systems that pay hackers for bringing major flaws to vendors’ attention would help minimize the damage these software companies themselves created when they rushed insecure and buggy code out the door onto an unsuspecting public.

  Given that software is the engine that runs the global economy and all of our critical infrastructures, from electricity to the phone system, there’s really no time to lose. But it will take much more than a few security researchers writing compelling articles on the topic; it will require an outcry from the public, one that has been sorely lacking until this point, to demand better-quality software. Think about it. Why do we accept all of these flaws as the natural state of affairs? They needn’t be. We can make a change by holding those in the software industry, which is worth $150 billion a year, accountable for their actions. Absent this demand from the public, in the battle between profitability and security, profit will win every time. We need to help companies understand it is in their long-term interest to write more secure code and that there will be consequences for failing to do so. As things stand today, the engineers, coders, and companies that create today’s technologies have near-zero personal and professional responsibility for the consequences of their actions. It’s time to change that.

  Software Damages

  The noted Yale computer science professor Edward Tufte once observed that there are only two industries that refer to their customers as users: computer designers and drug dealers. Importantly, you are equally likely to recover damages from either of them for the harms their products cause. The fact of the matter is that when you click on those lengthy terms of service without reading them, you agree that you are using a company’s software or Internet service “as is,” and all liability for any damages lands on you. These firms use language such as “you will hold harmless and indemnify us and our affiliates, officers, agents, and employees from any claim, suit, or action arising from or related to the use of the Services” and “we do not guarantee that our product will always be safe, secure, or error-free.” Would you buy a Chipotle burrito if it came with such a warning? I think not. So how is it that the software business has carved out an exception for itself from ever being responsible for anything? Good question.

  When an automobile crashes because of faulty wiring or bad firmware, as we saw with the deadly Toyota acceleration cases, those injured can sue for damages. So why not with software? Is it reasonable to suggest that if somebody were to die or suffer severe economic losses as a result of faulty software that she or her loved ones be denied a day in court because the ToS said so? Even when it could be proven to a judge and jury that the software was the proximate cause of the harm? I do not believe it is.

  Do not get me wrong. I am not a fan of creating new laws willy-nilly. Nor would I suggest that regulation is the very best approach to handle the totality of our global cyber insecurity. It is at best a blunt tool in a field as rapidly evolving as technology. But a line needs to be drawn in the sand. Reckless disregard for any and all consequences of poorly written software, released with known vulnerabilities and foisted on a public incapable of individually reading through the millions of lines of code on their smart phones or laptops to adjudge the concomitant risks, is just plain wrong. Those writing and creating these tools must bear some responsibility.

  Needless to say, the software industry is vehemently against any such change. It claims that allowing liability lawsuits would have catastrophic effects on its profitability and would bankrupt it. It also asserts that the complexity of software interactions is so great that it would be impossible to fairly adjudicate blame in case of injury. Both arguments fall short. We’ve been here before, particularly with the automobile industry, whose products through the 1960s had a terrible safety record. Through consumer advocacy and congressional action, the National Traffic and Motor Vehicle Safety Act was passed in 1966, allowing the government to enforce industry safety regulations. Doing so resulted in one of the largest achievements in public health of the twentieth century. Automobile deaths dropped precipitously, saving tens of thousands of lives.

  Of course today’s technology may be more complex than the automobiles of yesteryear, but there will be no improvement in the safety and security of its software and hardware products until the business incentives are aligned to encourage change. Currently, any harms suffered by end users are theirs and theirs alone, with little or no harm accruing to the software vendors responsible for the damages. There are few if any consequences for releasing crappy code, and thus the practice continues unabated. Unless those responsible for the underlying security problems they create are held accountable for their actions, little will change. Only when the business costs of releasing persistently broken code are greater than fixing the known vulnerabilities in the first place will the balance tip in favor of better and more secure code. Though I am not advocating the creation of vast new regulations or government bureaucracies, I do believe a vigorous public debate regarding the underlying causes of our widespread computer insecurity is in order. The time to get our coding and software house in order is now, before we add the next fifty billion things to our global information grid.

  Reducing Data Pollution and Reclaiming Privacy

  Throughout this book, we have seen the consequences of amassing petabytes upon petabytes of data, information that eventually leaks. Whether it is personal medical records, bank balances, government secrets, or corporate intellectual property, it all leaks. The mass storage of these data in the hands of a few major Internet and data companies provides irresistibly rich targets to attack, a one-stop shop for thieves. As I’ve said before, the more data you produce, the more organized crime is willing to consume.

  Though most Internet users have chosen to voluntarily share some of the most intimate details of their lives via online social networks, the companies behind these services gather way more data than most ever realize. Purveyors of “free” Internet services persistently track users across their entire online experience as well as their movements in the physical world through the use of their mobile phones. But as we’ve seen, the most expensive things in life are free. All of this information is cut, sliced, and diced and sold off to the shady and secretive world of data brokers, who exercise little care or control over the accuracy or the security of the information they retain. Though we might complain about these practices (if we were the actual customers of these social media firms), we have no ability to do so. We bargained those rights away in exchange for free e-mail, status updates, and online photographs, agreed to in the click of a fifty-page four-point-font ToS agreement that none of us read. These overreaching, entirely one-sided “agreements” should not absolve the companies that author them of all liability pertaining to how they keep and store our data. If they choose to keep every single bread crumb they can possibly gather on our lives, then they should be responsible for the consequences.

  The striking thing about this system is that it needn’t be organized this way. It is estimated that each Facebook user worldwide only generates about $8 in ad revenue (not profit) for the company per year. I’d rather send Facebook ten bucks and be left alone. At less than a buck a month, it’s about a hundred times less than my cable bill. The whole system is screwy. As the MIT researcher Ethan Zuckerman has proclaimed, “Advertising is the original sin of the web. The fallen state of our Internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services.” Though our data pay for Gmail, YouTube, and Facebook today, we could just as easily support Internet companies whose goal was to store as little personal data of ours as possible, in exchange for minute sums of cash. Why not just disintermediate the middle man altogether for a much more logical system? We would become Facebook’s and Google’s clients for a dollar a month and could go on to enjoy our lives.

  Unfortunately today, just as is the case with software vendors, the incentives are misaligned from a public safety and security perspective. Facebook is incentivized to gather an ever-growing amount of personally identifiable data on its customers that it can sell on to thousands of data brokers around the world at a profit. That is its business model. Whether purchasers of this information ultimately allow it to be used to commit identity theft, stalking, or industrial espionage is of little concern to social media companies after they’ve auctioned the information off to the highest bidder. Of course it matters to us, those who suffer the economic and social harms from these leaked data. For those who prefer the benefits of the “free” system, let them enjoy it and all it entails. But why not allow the rest of us the option to pay to maintain greater control over our privacy and security?

  While it may be impossible to “live off the grid” in today’s modern world, we can by all means design a system that is much more protective. There are better, more balanced examples out there, such as the EU’s Data Protection Directive, which is much more consumer-friendly and enshrines privacy as a fundamental right of all EU citizens. It limits what data companies can store about us and how long they can keep it before the data must be deleted. This is a more sensible approach that not only adjusts the completely lopsided balance of power in our relationships with Internet firms but also protects us and our data from leaking into the hands of Crime, Inc.

  Kill the Password

  As we saw in the first chapter with Mat Honan’s epic hacking, a string of alphanumeric characters can no longer protect us. Sure, you might buy yourself some time by creating a twenty-five-digit password with upper- and lowercase letters, numbers, and symbols, but the fact of the matter is almost nobody does that. Instead, even in 2015 the most popular passwords remain “123456” and “password.” Fifty-five percent of people use the same password across most Web sites, and 40 percent don’t even bother to use one at all on their smart phones. Even if they did, it might not help much. Given advances in computing power, cloud processing, and crimeware from the digital underground, more than 90 percent of passwords can be brute-forced and cracked within just a few hours, according to a study by Deloitte Consulting. Worse, Crime, Inc. organizations such as Russia’s CyberVor have amassed more than 1.2 billion user names and passwords, which they can use to unlock accounts at will. Plainly stated, our current system of just using a user name and password is utterly broken.
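The arithmetic behind these cracking times is simple: an attacker’s worst case is the size of the character set raised to the password’s length, divided by the guess rate. A back-of-the-envelope sketch in Python (the 100-billion-guesses-per-second rate is a hypothetical figure chosen for illustration, not a number from this book):

```python
def seconds_to_crack(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust every password of the given length."""
    return alphabet_size ** length / guesses_per_second

# Hypothetical attacker renting cloud GPUs: 100 billion guesses per second.
RATE = 100e9

weak = seconds_to_crack(10, 6, RATE)     # "123456": six characters, digits only
strong = seconds_to_crack(94, 25, RATE)  # 25 characters from all printable ASCII

print(f"6-digit password:   {weak:.6f} seconds")
print(f"25-char passphrase: {strong / (3600 * 24 * 365.25):.3e} years")
```

The weak password falls in a fraction of a millisecond, while the long mixed-character passphrase would outlast the universe, which is exactly why almost nobody’s “123456” survives contact with Crime, Inc.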

  There are some measures we can take today that will provide additional layers of protection. One example is the two-factor authentication offered by Google, Microsoft, PayPal, Apple, Twitter, and others, which combines your user name and password with something you have such as a security token, key fob, or mobile phone. Most consumer Internet companies use your smart phone as the second factor by sending you a onetime code via text message that you must also enter to gain access to your account. Thus even if a hacker cracked your bank account or social media profile password, he would still need access to your phone and text message, something he would be unlikely to have if you and your phone were in New York and the hacker in Moscow. While two-factor authentication is definitely a step in the right direction, these systems can be subverted via man-in-the-middle attacks, which intercept text messages via mobile phone malware.
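Authenticator apps sidestep the text-message interception problem by generating the one-time code locally, using the published TOTP standard (RFC 6238): phone and server share a secret and each computes an HMAC over the current thirty-second time window. A minimal sketch in Python’s standard library (the function name is my own, but the algorithm follows the RFC):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6, step: int = 30) -> str:
    """One-time code per RFC 6238: HMAC-SHA1 over the current time-step counter."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Using the RFC 6238 test secret and timestamp yields the published vector:
print(totp(b"12345678901234567890", t=59, digits=8))  # → 94287082
```

Because the code is derived from the shared secret and the clock rather than delivered over the phone network, there is no text message for malware to intercept, though a thief who steals the secret itself can still mint valid codes.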

  To that end, many smart-phone companies such as Apple and Samsung are moving toward another form of two-factor security, combining something you know with something you are—such as your biometric fingerprint or voice identity. Your fingerprint will increasingly become your password, and with the release of the iPhone 6 and iOS 8 Apple has allowed other companies, such as PayPal and your bank, to use your phone’s Touch ID fingerprint sensor to authenticate you. While hackers, such as the Chaos Computer Club, and others have circumvented some of these systems in the past (if they had access to the device), multifactor authentication can provide a significant improvement over the standard user name and password. Mat Honan was right. It’s time to kill the password and move on to multifactor authentication and biometrics, tools that, though far from perfect, are an immense improvement over the feeble alphanumeric characters we use today. Though there is currently no cure-all for user identification, there are tremendous opportunities to create significantly better alternatives, particularly through coordinated research and funding efforts, as we will soon discuss.

  Encryption by Default

  There are only two types of companies—those that have been hacked and those that will be.

  ROBERT MUELLER, FORMER FBI DIRECTOR

  The vast majority of today’s data is unencrypted or poorly protected. A study by the computer giant HP in July 2014 revealed that 90 percent of our connected devices collect personal data, 70 percent of which is shared across a network without any form of encryption. That means that anybody who gains access to a computer system through poorly coded software, downloaded malware, or weak passwords can steal, read, and use any of the data contained in that system. Without encryption, the data are entirely readable by anyone who has access to them. That is why fifty-five million credit cards stolen from Home Depot can be used by Crime, Inc., because the company’s in-store payment system didn’t encrypt customers’ credit data while in memory. Had the data been properly encrypted, they would have been entirely without value to the thieves who stole them. It’s not just financial data that are too often unencrypted; so too are our medical records, corporate secrets, military video drone feeds, celebrity nude photographs, and nearly all our e-mail. The impact of all these computer breaches and data theft could be greatly minimized if the proper implementation of encryption were to become the default standard practice.
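The point about the Home Depot cards is easy to demonstrate: encrypted data without the key is just noise. Below is a toy sketch of authenticated symmetric encryption using only Python’s standard library (a SHA-256 counter-mode keystream plus an HMAC integrity tag). This is an illustration of the concept, not production cryptography; real systems should use a vetted cipher such as AES-GCM from an audited library.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce + ciphertext + HMAC tag."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # detects tampering
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key or tampered ciphertext")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

A card number run through `encrypt` appears nowhere in the stored blob, and anyone holding the blob without the key, a thief scraping point-of-sale memory included, gets nothing usable. That is the whole argument for encryption by default.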
