Comment: FBI cracks the iPhone – how, and what are the implications for legal IT?

On March 28th, the Department of Justice confirmed that it had successfully unlocked the San Bernardino shooter’s iPhone 5C without Apple’s assistance. On that same day, the US government moved to vacate a California court order that had attempted to force Apple to assist in the decryption of the device. While the legal maneuverings are fascinating in their own right, the conclusion raises an even more intriguing technology question – how did the FBI crack the iPhone, and what are the implications of this successful hack?
While the details of “how” are sparse, there is enough information available to paint a picture. iPhone encryption protects data by combining an encryption key burned into the hardware of the device with an encryption key derived from the user’s passcode. The former key is designed to prevent data from being moved off the device; moving data off the device would allow a hacker to attack iPhone encryption with a much larger and more powerful computer. The latter key is designed to keep the data on the device private. The vulnerability in this protection scheme is that passcodes, even at 6 digits, present a fairly small space of possible values (only 1 million for a numeric passcode). Hence, any iPhone can be hacked if one can try enough passcodes fast enough (i.e., with a machine, rather than with fingers on the device). Creating a machine to tap passcodes into an iPhone is by no means a challenge for a motivated hacker, so Apple has added additional protections to stop such brute force attacks.
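To see just how small that search space is, here is some illustrative arithmetic (the alphanumeric comparison is mine, not from the article): a 6-digit numeric passcode offers a tiny keyspace compared to even a short alphanumeric password.

```python
def numeric_keyspace(digits: int) -> int:
    """Number of possible numeric passcodes of the given length."""
    return 10 ** digits

def alphanumeric_keyspace(length: int) -> int:
    """Number of possible passwords drawn from a-z, A-Z, 0-9."""
    return 62 ** length

print(numeric_keyspace(6))        # -> 1000000
print(alphanumeric_keyspace(6))   # -> 56800235584, over 50,000x larger
```

A million possibilities is trivial to enumerate by machine, which is why the software and hardware rate limits described next carry the real protective weight.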
To prevent brute force attacks, iOS limits the number of times a user can fail to enter the passcode correctly. First, after 6 failed passcode attempts the device injects an escalating delay before the user can make a further passcode entry; second, after 11 failed attempts the device locks itself permanently. At that point all data on the device is essentially erased, with no hope of decoding it. In addition to these software protections against a brute force attack, newer iOS devices (iPhone 5S, 6, 6+, iPad Air, etc.) add a 5-second delay between passcode attempts that is implemented in hardware. This provides another clue into how the FBI may have compromised the device, as the FBI has stated clearly that its attack will only work on the iPhone 5C.
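The policy described above can be sketched as a simple state function. This is a toy model, not Apple's actual implementation: the thresholds follow the article (delays after 6 failures, permanent lock after 11), and the specific delay durations are illustrative assumptions.

```python
def next_attempt_policy(failed_attempts: int) -> str:
    """What the device does before allowing the next passcode attempt,
    given how many attempts have already failed. Toy model only."""
    if failed_attempts >= 11:
        return "locked"                 # data effectively erased
    if failed_attempts >= 6:
        # Escalating delay schedule (illustrative values, in seconds)
        delays = {6: 60, 7: 300, 8: 900, 9: 3600, 10: 3600}
        return f"delay {delays[failed_attempts]}s"
    return "allow"

print(next_attempt_policy(3))    # -> allow
print(next_attempt_policy(7))    # -> delay 300s
print(next_attempt_policy(11))   # -> locked
```

Note that the dangerous state for an attacker is the counter itself: if the failure count can be kept below 6, neither the delays nor the wipe ever trigger, which is exactly the weakness the hypothetical attack below exploits.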

Given the above restrictions on brute force attacks, the anonymous party assisting the FBI must have found a way to remove the software safeguards that prevent repeated attempts (and subsequent failures) to enter a passcode. How might this be accomplished? One potential answer is a direct attack on the kernel that manipulates the in-memory tracking of failed attempts. A second potential answer is a code injection attack that allows the attacker to directly exercise the operating system’s internal decryption APIs. This latter attack would have the added benefit of allowing the attacker to test various passcodes without having to worry about actually tapping them into the device. Either attack starts with an unintended entryway to the iOS kernel, so how might such an entryway work?
Imagining such a hack is quite simple (although the specific implementation of the hack is undoubtedly quite complex). Every device has device drivers, which are software components that provide an interface between the operating system core and a specific sensor or peripheral integrated into the device. For example, an iPhone’s camera is operated by a device driver that interfaces with the core iOS kernel. Different device models have different device drivers, because features like the camera, microphone, speakers, display, WiFi radio, cellular radio, etc. differ with cost and form factor. Each such device driver is responsible for gathering data from its device (say, a photo from the camera) and moving that data into the core kernel so that it is available to other applications.
When the kernel receives such data, it must treat it with care, because the data is untrustworthy – in other words, its specific contents are unknown. One coding mistake in the handling of that data can easily lead to a common security vulnerability known as a buffer overflow, which can allow unwanted manipulation of data stored in memory by the running kernel.
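A toy model makes the mechanics concrete. Real kernel overflows involve C pointers and memory layout, but the essence can be simulated with a byte array: below, a hypothetical fixed-size input buffer sits directly before a one-byte failure counter in the same region of "memory", and a copy routine that trusts the input's length instead of the buffer's size overwrites the counter. All names and the memory layout here are invented for illustration.

```python
BUF_SIZE = 8
# Simulated memory region: 8-byte input buffer, then a 1-byte counter.
memory = bytearray(BUF_SIZE + 1)
memory[BUF_SIZE] = 5             # failed-attempt counter currently at 5

def copy_packet_unchecked(packet: bytes) -> None:
    """Buggy copy: bounds-checks against the packet's length,
    not the buffer's size -- the classic overflow mistake."""
    for i, byte in enumerate(packet):
        memory[i] = byte         # no check that i < BUF_SIZE

# A 9-byte "packet" spills one byte past the 8-byte buffer...
copy_packet_unchecked(b"AAAAAAAA\x00")
print(memory[BUF_SIZE])          # -> 0: the counter has been zeroed
```

In a real exploit the attacker does not get to choose such a convenient memory layout, but the principle is the same: untrusted input, copied without a bounds check, rewrites state the kernel assumed was private.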
To bring these technical details to life, let’s imagine a simple (and hypothetical) attack: first, the attacker develops an automated way to type passcodes into the device screen. This could be as simple and unsophisticated as a small robot that taps a pattern on the screen given a passcode as its input. Second, suppose there was a vulnerability in the iPhone 5C’s WiFi radio such that a carefully constructed packet enabled a hacker to exploit a buffer overflow. Somewhere in the kernel’s internal memory is a count of the number of failed passcode attempts. Given a little bit of luck and a small coding mistake, a hacker might be able to send a carefully constructed WiFi packet that, due to the vulnerability, resets that failure count to 0. Given these two tools, the brute force attack is simple – automatically type in passcodes in sequence, trying every one of the 1 million possibilities, and after every 6 attempts send a special WiFi packet that resets iOS’s in-memory failure count. At a rate of 1 attempt every 2 seconds it would take about 23 days to reliably unlock the phone (and possibly far less, if the lucky passcode is found sooner).
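The arithmetic behind the 23-day figure is worth checking, using the article's own assumptions (1 million candidate passcodes, one attempt every 2 seconds):

```python
ATTEMPTS = 10 ** 6            # every 6-digit numeric passcode
SECONDS_PER_ATTEMPT = 2       # the article's assumed entry rate
SECONDS_PER_DAY = 86_400

# Worst case: the correct passcode is the last one tried.
days_worst_case = ATTEMPTS * SECONDS_PER_ATTEMPT / SECONDS_PER_DAY
print(round(days_worst_case, 1))   # -> 23.1

# Expected case: on average the passcode is found halfway through.
print(round(days_worst_case / 2, 1))   # -> 11.6
```

So 23 days is the worst case; a random target passcode falls, on average, in under 12 days – comfortably within the patience of a well-resourced attacker.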
Besides the intrigue of the hack, what does this mean for legal IT organizations? The first question for IT to ask is how sensitive the data that attorneys carry on their devices is, and what type of attacker might want access to it. The FBI is a very sophisticated attacker, but it is one of many corporate and government organizations that might have the resources to execute such an attack. If attorneys are carrying data that is sensitive enough (as the IT folks at Mossack Fonseca would have been wise to recognize), then it is incumbent upon IT to provide the means to protect it. Protecting sensitive data is both a social and a technological effort.
On the social side, users need to embrace strong policies that restrict the flow of data so that employees (including attorneys and staff) only have access to the data that they need to use regularly in order to accomplish their work. Users also need to embrace longer and more complex passwords. A complex password is the best defense against unwanted access to corporate resources of all sorts, including email, DMS, VPN, laptops, desktops, and other resources on the corporate network.
On the technical side, IT needs to ensure that a strong password policy is both enforced and feasible for users to comply with, and that strong passwords translate into strong data protection. Implementing a strong password policy essentially requires two things: first, sensible length and complexity policies implemented in Active Directory (or any similar directory service), and second, single sign-on. The more passwords users need to remember, the less complex they will make those passwords to aid their memories. Users should need one very good password to access all of the firm’s sensitive resources.
Implementing strong data protection requires a comprehensive approach to information flow and to data encryption. Information flow is a very simple concept – if a user has permission to access a document in a protected environment (e.g., on a corporate laptop), what can that user then do with the document? Can the user move that document out of the protected environment (company laptop) and into an unprotected environment (personal device)? Data encryption defines how data is secured in a protected environment. The encryption algorithms themselves are the easy part. The harder parts are (a) keeping data safe under attack by using a complex password as the seed for the encryption key, and (b) forcing data to stay in a safe place, which relates back to information flow.
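Point (a) above – turning a password into an encryption key – is typically done with a deliberately slow key derivation function, so that each guess an attacker makes is expensive. Here is a minimal sketch using PBKDF2 from Python's standard library; the salt handling and iteration count are illustrative choices, and modern alternatives such as scrypt or Argon2 follow the same pattern.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a password into 256 bits of key material.
    The iteration count makes each brute-force guess costly."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)     # random per-user salt, stored with the ciphertext
key = derive_key("a long, complex passphrase", salt)
print(len(key))           # -> 32 bytes, i.e. AES-256 key material
```

The same derivation with the same password and salt always yields the same key, which is what lets the legitimate user decrypt; the per-guess cost and the per-user salt are what make bulk password cracking impractical.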
Particularly on mobile devices where users are both averse to password entry and encouraged to exchange information between apps, the challenge of data protection is difficult. However, there are mobile applications available on the market that are designed with the specific intent to keep sensitive data safe. When the data demands it, IT would be wise to find such an app that strikes the right balance between productivity and protection.
The conclusion for IT is simple – protecting data requires a comprehensive plan, which includes policy, education, and technology. The key is to only use this hammer when the nail really deserves it; otherwise you will be compromising user productivity in the name of unnecessary security. However, when the sensitivity of the data demands it, it is incumbent upon IT to find the right technology partners to keep data safe. The good news is that in the arms race of hacker vs. IT, the mobile battle is still young. IT has time to step up its data protection before hacks like the FBI’s are broadly available to those with the requisite means and motivation. The bad news is that the iPhone encryption case has raised a flag that states, loud and clear, that there is a way to hack an iPhone.
Seth Hallem is the CEO and co-founder of Mobile Helix, a provider of secure mobile solutions for legal teams. Previously, Seth was the co-founder and CEO of Coverity, a Synopsys company, a leader in software quality and security testing solutions.