Over the last two months, the concept of ransomware has blasted into the public consciousness on the back of two massive and disruptive cyberattacks. Both attacks were accompanied by a heavy dose of intrigue and, for those willing to listen, a stern warning of what is to come.
In May 2017, the WannaCry “ransomware cryptoworm” was unleashed. WannaCry’s primary innovation over prior ransomware was to use an exploit called ETERNALBLUE, which targets a flaw in the Windows SMB protocol (used for accessing shared files), to spread from one machine to the next. Researchers speculate that the attack began with the infection of a vulnerable Windows machine that had publicly exposed port 445. From there, the worm spread by scanning the internal and external network for other machines listening on port 445. Each time a vulnerable machine was found, the worm copied itself over and the process repeated. Once a machine was infected, WannaCry encrypted its files and displayed a message demanding a ransom payment in return for unlocking the data.
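The scanning step described above is simple to illustrate. The sketch below is not WannaCry’s actual code — it is a minimal, hypothetical Python helper showing how a worm could discover SMB listeners by attempting TCP connections to port 445 (the address probed here is from a reserved documentation range):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# A worm like WannaCry would sweep entire address ranges looking for SMB
# listeners; here we probe only a single illustrative (non-routable) address.
if port_open("192.0.2.1", 445):  # 192.0.2.0/24 is reserved for documentation
    print("port 445 is reachable - a WannaCry-era worm would attack this host")
```

A real worm would then deliver the ETERNALBLUE exploit to any host that answered; the point of the sketch is only that an open port 445 is trivially discoverable.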
WannaCry was intriguing for a few reasons:
– It exploited a flaw that was discovered, and then concealed for its own offensive purposes, by the United States National Security Agency.
– The flaw was patched in a March 2017 update for all supported versions of Windows.
The rapid spread of WannaCry thus exposed two facts that come as no surprise. First, it is a very dangerous idea to discover and subsequently conceal security vulnerabilities. Second, there is a vast amount of deployed software that is old or unpatched. WannaCry forced Microsoft to release patches for software as old as Windows XP, which hit its end of life on April 8, 2014.
Not two months later, ransomware again made front-page news, although in this case it appears that ransomware was a cover for what was ultimately a tool of mayhem. Petya used some of the same mechanisms as WannaCry, including the same ETERNALBLUE exploit as its most effective means of spreading. Much like WannaCry, once a machine was infected, the ransomware encrypted all files on that machine and displayed a message demanding ransom. However, paying the ransom was to no avail, as researchers quickly realized that the attackers’ intent was to wipe data under the guise of ransomware. There was in fact no way to unlock or undo the damage on an infected device.
The origin of the Petya attack is of particular interest: first, it is reasonable to speculate that this attack was an act of cyber warfare directed at Ukraine, and second, the attack began with an infected update to a popular Ukrainian accounting package. It appears the attackers’ first step was to compromise the update servers for this accounting package and insert Petya into an upcoming automatic upgrade. The automatic upgrade mechanism then spread the virus throughout Ukraine, and from there Petya used ETERNALBLUE to spread far and wide.
Both attacks point to fundamental issues that are inherent to the design of the Windows operating system, corporate intranets, and the Internet. Through this lens, mobile operating systems and, in particular, iOS are a seismic upgrade in protecting devices and networks from this type of attack.
Windows was built with an implicit trust model that follows two precepts: first, that once a piece of software is installed on a Windows computer, that software is trusted to act responsibly; and second, that all machines inside the same intranet are trustworthy. Over time, this trust model has proven dangerous, but because redesigning Windows around a more pessimistic security model would be massively disruptive, the solution of choice has been to add policing to each individual machine (i.e., anti-virus software and the Windows firewall). Unfortunately, keeping that policing up to date and properly configured is a challenge on its own.
For example, the Windows firewall was first introduced in Windows XP and Windows Server 2003, two of the end-of-life operating systems that were vulnerable to the ETERNALBLUE exploit. Given that the firewall was available, a pessimistic security choice would be to shut down all inbound network traffic on all ports unless a machine was legitimately acting as a file-sharing host. While there are exceptions, most machines in a corporate network leave SMB access open on port 445 on the off chance that a user might choose to share a local file folder. In most cases this has not happened, never will happen, and may be prevented by IT through other means. So why was port 445 exposed through the firewall? Simply because it is difficult to police thousands of machines that have made an inherently wrong assumption about whom to trust.
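For a machine that is not a file server, the pessimistic choice described above is a one-line firewall rule. A sketch using the standard `netsh advfirewall` command (the rule name is arbitrary, and this must be run from an elevated command prompt):

```shell
:: Block all inbound SMB traffic on this machine; omit or delete this rule
:: only on hosts that are deliberately acting as file-sharing servers.
netsh advfirewall firewall add rule name="Block inbound SMB" ^
    dir=in action=block protocol=TCP localport=445
```

In practice an administrator would push an equivalent rule via Group Policy rather than per machine — which is exactly the policing-at-scale problem the article describes.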
Applications installed on a Windows system also enjoy broad trust to touch (and encrypt) any file on the file system. Looking generally at how Windows is used today, receiving email and browsing the Internet are undoubtedly the two most common functions of an end-user Windows machine. However, these two activities are fraught with danger, as the end user is constantly exposed to untrusted information in the form of phishing attacks, viruses, and compromised websites. If you have ever used Safe Browsing mode on a Windows Server, you will instantly understand why attempting to police internet browsing renders the browser unusable. At the same time, a compromised web browser has access to all of the local files stored on that machine. Hence, end users are exposed to vast amounts of potentially compromised information, and IT is left to do the very best job it can to police and clean up end users’ mistakes.
The second major issue exposed by these attacks is the issue of updates. It is a long-running and unresolved struggle to find the right way to ensure that security patches are applied broadly and quickly when a vulnerability is discovered. Consider all that Microsoft has done with its automatic updates, its Security Bulletins, and now the Security TechCenter. After all that education and technology applied to the problem of security upgrades, the main exploit used to propagate both of these worms was a known vulnerability with a patch available that was not broadly deployed.
Part of the challenge is of Microsoft’s own creation – forcing users to pay for major version upgrades will undoubtedly leave some users behind, clinging to their now-ancient versions of Windows. Another part of the challenge is that without a compelling reason to upgrade, IT’s desire to maintain stability wins out over any other arguments in favor of upgrading. All responsibility for administering and supporting upgrades is in IT’s hands, while most of the benefits of an upgrade go to end users who can probably live without it. In addition, even minor security updates cause disruption due to reboots and incompatible patches that IT is left to clean up. There are simply too many incentives lined up against upgrades.
By contrast, Apple’s iOS (and, to a lesser degree, Android) has done a few things right:
– iOS and Android both assume that networks and apps are untrusted. They attempt to create a firm wall between apps so that a single infected app cannot encrypt or otherwise steal data from another app. Of course, this wall between apps is compromised if a device is rooted or jailbroken, which leads to the next point.
– Apple handles iOS and app upgrades in the right way – by (a) maintaining control of upgrades, and (b) placing all the incentive on end users to perform those upgrades. IT need not be involved and, in fact, IT cannot stop upgrades from happening. At times, this model does cause software to break due to an upgrade, leaving IT (and app makers) scrambling to fix the issue. However, in the big picture, Apple is doing far more good than harm. Given that it has millions of iOS users to serve, Apple has massive incentives to protect its upgrade servers (unlike the accounting software vendor in Ukraine), and it has similarly massive incentives to ensure that software is compatible across iOS versions. In general, OS upgrades close more security vulnerabilities than they introduce, and keeping users up to date with the latest patches is a critical step in ensuring device security.
– Apple devices are increasingly workplace friendly, which means that end users can bring their own devices to work. Corporate users often have multiple personal iOS devices (an iPhone and an iPad at the least), and they want to do work on those devices.
– Apple only allows users to install trusted apps, which must be signed using a cryptographic key associated with the developer’s Apple Developer account. This ensures that (a) apps come from known entities, (b) Apple has the control to review apps prior to their release and (c) Apple can remove apps from the App Store if they misbehave.
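The signing model in the last point rests on a simple property: any change to an app’s code invalidates the signed digest. Real code signing uses asymmetric cryptography (the developer signs with a private key, and Apple’s certificate chain vouches for the public key); the sketch below deliberately simplifies that to a bare SHA-256 digest, and the binaries and names are hypothetical, purely to illustrate the tamper-detection idea:

```python
import hashlib

def digest(payload: bytes) -> str:
    """SHA-256 digest standing in for the signed hash inside a code signature."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical app binary and the digest the developer "signed" at release time.
released_binary = b"app code v1.0"
signed_digest = digest(released_binary)

# On install, the OS recomputes the digest; a tampered binary fails the check.
tampered_binary = b"app code v1.0 + injected payload"
assert digest(released_binary) == signed_digest   # genuine copy verifies
assert digest(tampered_binary) != signed_digest   # any tampering is detected
```

This is exactly why a Petya-style compromised update is so much harder to pull off against the App Store: an attacker would need the developer’s private signing key, not just access to an update server.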
These advantages of iOS should come as no surprise. Windows was designed for a much simpler time. iOS was designed to run on mobile networks, running a broad catalog of apps that cannot possibly be policed to perfection. Hence, Apple made sensible decisions for modern times. What has proved to be a surprisingly great decision is Apple’s decision to keep control of software upgrades, including the OS itself and all apps. That is one place where Android has fallen short, and it may be impossible for Android to ever recover.
It has been over 30 years since Microsoft Windows was first released. The world has changed, and our model for computing must change as well. That starts with OS vendors like Microsoft and Apple rethinking the assumptions made at the system level about trust. Apple has taken sensible steps to protect the operating system for the modern world. With Windows 10 Enterprise, Microsoft has taken two giant leaps in the right direction, although implementing the new Windows security features requires a substantial IT investment that will take years to realize. This change in worldview must also extend to how IT enables work.
In a better world, end users take responsibility for their own devices. Today, IT spends a huge amount of time and resources accounting for the fact that computers nominally designated as corporate devices are routinely used for unavoidable yet unsafe activities (i.e., web browsing and receiving email). If you flip this model on its head, IT gets out of the business of protecting devices and instead focuses on protecting the applications and data that are required for work.
The responsibility should fall to each application to protect the data it creates and consumes, and to ensure that if one device is compromised, a user can easily and safely switch to another while that device is repaired. In a world where personal devices can interact with corporate data, IT is no longer in the business of responding to a virus attack that may have originated from a user’s poor personal decision to open an untrusted email attachment, ignore an SSL warning while browsing, or disable the Windows Firewall. Users can take their compromised devices to the Apple Store, or to any major computer store, for repair. While one device is being fixed, the next device can be used to continue working.
However, to reach this future, app makers must change the way that apps are built, not just on mobile devices but on all supported devices. If apps start to make modern assumptions – that information must be protected from other users of that device, from other apps installed on that device, and from networks connected to that device – then the apps themselves can take responsibility for keeping sensitive corporate data safe, regardless of the state of the host device. IT then takes responsibility for picking corporate apps well, which in my opinion is a fair division of responsibility.
Seth Hallem is CEO and co-founder of Mobile Helix, vendor of LINK, an encrypted app for lawyers to work on sensitive documents from anywhere.