Have New Backdoors Been Discovered in iOS? An Interview With iOS Developer and Digital Forensics Expert Jonathan Zdziarski
When the HOPE X (Hackers On Planet Earth) 2014 convention started on July 18th, one particular presentation caught the undivided attention of information security professionals. Ever since, the tech media's been buzzing... Are there really newly discovered backdoors in iOS's code? Is Apple spying on the countless millions of iPhone and iPad users worldwide? Or is Apple sending people's text messages, photos, and emails to the NSA or other intelligence agencies? The uproar got so much attention that Apple's corporate PR felt the urge to address the hot new rumors.
The presentation at HOPE X that triggered all that buzz, titled "Identifying Back Doors, Attack Points, and Surveillance Mechanisms in iOS Devices" was given by Jonathan Zdziarski.
Zdziarski has an impressive technical background. He helped to develop some of the first software jailbreaks for iOS. Many of the forensics methodologies that law enforcement and intelligence agencies use with iOS today were invented by him. He has directly trained people in law enforcement agencies around the world. His extensive iOS knowledge has led him to write four books on the subject, published by O'Reilly Media: iPhone Forensics, iPhone Open Application Development, iPhone SDK Application Development, and Hacking and Securing iOS Applications.
Coverage of celebrity phone hacking scandals, including what happened to Scarlett Johansson, has also cited Zdziarski's work.
In addition to being an Information Security Researcher, I'm also a tech journalist. I know very well that sensationalism sells magazines and drives SEO and page clicks. Mr. Zdziarski is very concerned about that, and stresses that no one should panic or jump to conclusions.
Most of the media coverage I've read about Zdziarski's findings cite quotes from his talk or paragraphs from his blog posts on his website, zdziarski.com. But I had yet to find a journalist who actually questioned him directly. I strongly believe that the best way to learn about Zdziarski's research is to interview him. So, that's what I did.
Kim Crawley: All of the celebrity phone hacking scandals I've heard of recently seem to have been caused by the security ignorance or negligence of the targeted celebrity. For instance, Sarah Palin's webmail leaked because her password retrieval answers were all easily found by researching her on the Internet.
Some of the other incidents were just matters of lousy physical security, their devices ending up in the wrong hands. Do you suppose that in Scarlett Johansson's case, iOS backdoors may have played a role? What's your hypothesis about that?
Jonathan Zdziarski: Well, I really can't speak about which techniques were used on Scarlett Johansson. There are a number of things that may have happened. There are social engineering techniques, which are psychological; those could very likely have played a role in a number of different incidents. Even with social engineering, you still have a wide array of technologies to choose from, in terms of actually acquiring data from a device.
You can take it off of the phone directly, you can steal it from a backup, you can take it from iCloud. And again, it depends on what you steal, too. Are you stealing someone's passwords? Are you stealing someone's PIN code? Are you stealing the physical device itself? There are a number of different scenarios that could explain data being stolen from any celebrity's device.
I don't have any special knowledge about Scarlett Johansson's case. I can really just speculate. But, data security is definitely a very prominent threat. You can be assured that individuals such as the ones who stole her photos are certainly looking for anything possible.
KC: Do you suppose that entities such as celebrity tabloids like TMZ, or News Corp's former News of the World may be employing people with blackhat hacking know-how specifically to retrieve data like that?
JZ: I can't really speculate. But if there are unsavory organizations that are looking to steal this sort of personal data, you would want to hire somebody who is technically adept to try and access the content. There's certainly no short supply of criminals who'd be willing to hack someone's device for a couple of bucks.
Part of the problem you run into, especially with data security, is that once you've targeted a specific individual, it becomes in some ways easier to get information from them. At this point, you're actually monitoring and surveilling someone. So, it's as simple as watching someone type in a code, over their shoulder. Or, using a camera to watch them.
KC: Or to upload a software keylogger?
JZ: Right. So if you're specifically targeting someone, security becomes much more difficult. If there are backdoors or mechanisms that rely on any personal data that can be obtained from that kind of surveillance, then you'll find a way of penetrating the device that you're looking into.
KC: Quite frankly, if I were on the dark side and employed to do stuff like that for a celebrity tabloid publication, the first thing I would do is get into the malware black market. I'd get a RAT (remote administration tool) to put on there. It's easy to social engineer a target by binding it to an email attachment. I've experimented with that stuff in penetration testing.
JZ: Yeah, well, I listened to a talk a few years back. I believe it was from a researcher at the University of Waterloo, up in Canada.
KC: Yep, that's about a two hour drive from here. (I live in Toronto.)
JZ: He did a study on blackmarket botnets. That was years ago, so I'm sure the technology is more advanced now. I think his talk was five years ago. He was reporting that there are very large networks of botnets that existed as kind of a blackmarket eBay. There was a search engine component present, and you could pay the botmaster for access to this network, and literally search like you were searching Google. You could search for a particular person or a particular type of situation.
If that's the case, assuming his research was valid, a lot of data is stolen without targeting any specific individual. Botmasters have already compromised many devices.
KC: So, a malicious blackhat working for a celebrity tabloid may have to search through all kinds of data, most of it being taken from non-celebrities.
JZ: There are a number of different possibilities. What I got from this gentleman's talk is that if you knew what to search for, some of these botnets were advanced enough that you could search through harvested copies of indexed data. In effect, it worked a lot like the NSA's PRISM system.
KC: You recently gave a talk at HOPE X about backdoors you may have found in iOS. Which particular versions of iOS does that pertain to?
JZ: The code I discussed in my talk goes as far back as iOS 2. In iOS 2, there was very little security built into it. It was pretty much a hackfest for everyone.
Over the years, Apple has added a number of security mechanisms, which have greatly improved the overall security of the device.
Some of the mechanisms in my talk were due to Apple. I've watched some of these services evolve through different versions of iOS. In iOS 2, there was little personal information one could get from these interfaces. Over the past couple of years, with iOS 6 and iOS 7, I've seen the number of data sources that may contain personal information grow from six to, I now believe, forty-four different data sources.
A lot of the new data sources that have shown up include the ability to dump the user's photo album, copy their SMS and MMS databases, their notes, their address book, screenshots of their activity, the keyboard typing cache that comes from autocorrect, and a number of other personal artifacts. Those should never come off the phone except during a backup. The problem with these mechanisms now is that they've grown so large that they dump a lot of data, and they bypass backup encryption.
When the user has their phone connected to their desktop, they can turn on backup encryption and enter a password. That tells the phone that anything that comes off of it as a backup must be encrypted. If I turn backup encryption on for my personal device and then run a backup in iTunes, that backup is completely encrypted and protected. However, when you use these interfaces that I've been discussing, that backup encryption is bypassed.
It may be due to sloppy engineering, or some other decision Apple made; I can't speculate as to why. All I can really say is that because of that one mechanism, that one reality, it can be very dangerous. You can use it not only to pull personal data off the device, but in a number of cases to bypass the encryption wirelessly. It really opens up various security concerns for a specific set of threat models.
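The distinction Zdziarski draws here, between the backup path, which honors the user's encryption password, and a diagnostic interface, which ignores it, can be illustrated with a toy model. All class and method names below are hypothetical; this is a conceptual sketch of the described behavior, not Apple's implementation:

```python
# Toy model of the behavior described in the interview: the backup
# interface respects the backup-encryption setting, while a diagnostic
# interface (modeled after the described file_relay behavior) returns
# the same data in the clear. Hypothetical names throughout.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Phone:
    data: dict
    backup_password: Optional[str] = None  # set when backup encryption is on

    def backup(self) -> dict:
        """The sanctioned path: encrypted whenever a password is set."""
        if self.backup_password:
            # Stand-in for real encryption: mark each value as ciphertext.
            return {k: f"<encrypted:{self.backup_password}>" for k in self.data}
        return dict(self.data)

    def diagnostic_dump(self) -> dict:
        """The diagnostic path: the backup password is never consulted."""
        return dict(self.data)

phone = Phone(data={"sms.db": "messages", "photos": "album"},
              backup_password="hunter2")

encrypted = phone.backup()        # protected, as the user expects
leaked = phone.diagnostic_dump()  # plaintext, despite the password
```

The point of the sketch is only that both paths read the same data, but one of them never checks the user's encryption setting, which is the bypass Zdziarski is describing.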
KC: There's also a rumor I've heard that the fingerprint scan unlocking, that I think was introduced in the latest version of iOS, could possibly be sending records of users' fingerprints to Apple.
JZ: I think that's mostly just speculation. I haven't seen any evidence of that. What I can tell you about is these services I've described in my talk.
First of all, what I must stress with you is what they can't do. Whenever you do any kind of security research, you build a threat model. In layperson's terms, what's the actual threat? In which scenarios will the attack work?
A scenario people are concerned about is a stranger stealing a locked phone, such as one lost in a taxicab, or taken when someone breaks into your home. That threat model doesn't apply to the research I've put forward. I've made that very clear from day one. My research didn't outline any way someone could steal data from a locked phone unless they have the PIN, or access to the owner's desktop.
In an article from about a year ago, I think from 2013, Der Spiegel outlined an NSA program that involved penetrating desktop computers as a springboard to targeting a user's iPhone. From there, thirty-eight different features could be activated on the phone.
The information that Der Spiegel outlined sounded like it may involve some of these services. That doesn't necessarily mean that Apple's conspiring. If you take the Occam's razor approach, it would make sense for the NSA to include some of these services in an attack. In the case where you have a government, maybe a foreign government, penetrating a user's computer, you have a privileged position. That's what this attack requires.
Now in the real world, this obviously applies to diplomats traveling into hostile countries, and you sometimes have people at border customs confiscating the equipment of security researchers. That also implies a privileged position, perhaps for law enforcement.
If they suspect you've committed a crime and maybe have made an arrest, in many cases they will seize your desktop as well.
Some more likely scenarios are if you're specifically being targeted. For example, maybe you've plugged your phone into an ex-lover's computer at some point; they could then very likely have what they need to wirelessly dump data off of your phone. Or a coworker is specifically targeting you. Maybe they'll take advantage of a moment when you've walked away from your desktop to copy the data they need from it onto a USB stick.
So, there are clear scenarios that fit this particular threat model. However, what I've tried to stress to everyone is that the general, widespread threat model, where someone simply has possession of a locked phone, doesn't work with this research.
KC: So, at the very least, iOS users should configure their iPhone or iPad so that it requires a password or a swipe code to unlock.
JZ: Right. Well, at least a stranger can't get into your phone unless they get very lucky. But relying on that for security probably isn't a good idea for a high-profile individual. If someone is really targeting you, your unlock code will only protect you to the degree that you can keep it secret. For anybody who is watching you, surveilling you, take the scenario of paparazzi with a celebrity: it would be easy to watch over the person's shoulder as they type the PIN in.
If you were to steal someone's phone after figuring out what the PIN is, then of course you could access all of this type of data.
Now, a scenario just like that is where backup encryption is supposed to protect you. If I were to try to steal the data off of your phone with a computer, all of that data should come off encrypted. Unfortunately, these services, especially the one I've talked about, called file_relay, which Apple says is a diagnostic service, may bypass backup encryption. So, if I had your device and your PIN, or if you left it unlocked, I could dump a significant portion of your personal data. And I could do it wirelessly. That makes it a little bit more of a concern.
Here's another scenario. Suppose you leave your phone unlocked, and you're a target: say, a CEO, or a diplomat, or someone important. If I can connect to that phone for even a few seconds, I can create a pairing record for myself. Then later on, I can access that device wirelessly, any time I want. The user has no visual indication that I've done anything with their phone.
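The pairing scenario Zdziarski describes can be sketched as a toy state machine. The names are hypothetical and the mechanics are deliberately simplified; the sketch only illustrates the property he outlines, namely that one brief connection to an unlocked phone yields a record that silently authorizes later wireless access:

```python
# Toy model of the pairing scenario described above: pairing requires
# the phone to be unlocked once, but the resulting record alone then
# authorizes wireless access, regardless of the lock state, with
# nothing shown to the user. Hypothetical names throughout.
import secrets

class Phone:
    def __init__(self):
        self.locked = True
        self.pairing_records = set()

    def pair(self):
        """Pairing succeeds only while the phone is unlocked."""
        if self.locked:
            return None
        record = secrets.token_hex(8)
        self.pairing_records.add(record)
        return record

    def wireless_connect(self, record):
        """Later access checks only the record, not the lock state."""
        return record in self.pairing_records

phone = Phone()
assert phone.pair() is None      # locked: pairing fails

phone.locked = False             # the owner leaves the phone unlocked
stolen_record = phone.pair()     # an attacker pairs within seconds

phone.locked = True              # the phone locks again...
# ...but the stolen record still grants silent wireless access.
assert phone.wireless_connect(stolen_record)
```

This is why Zdziarski's threat model centers on targeted access: the attack needs one moment of opportunity with an unlocked device (or a copy of a pairing record from the owner's desktop), after which the PIN no longer stands in the way.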
KC: Do you think it would be fairly easy for someone, even if they lacked information security expertise, and if they lacked the authentication credentials, to bypass the security on an iPhone? Maybe there's a vulnerability in how the decryption keys are stored?
JZ: Not that I'm aware of. They'd need the authentication credentials.
KC: So, maybe a typical end user couldn't figure it out.
JZ: Right. Most people aren't very aware. Even for those who are aware, this requires a significant amount of knowledge to be able to do. You have to know the specific (software programming) calls to communicate with the service.
So, this isn't something people should be freaking out about. They shouldn't be throwing their phones away. It is a security concern, but at the same time, it does have a number of caveats.
I'm sure that now both Apple and the public are aware of the concerns I've found from my research.
KC: There's one thing that occurs to me based on what you've written. By reverse engineering iOS code, you've found programs with names like house_arrest, file_relay, and lockdownd. You could make educated guesses as to what those programs do, but my first reaction was, why didn't developers give those programs innocuous names like happy_kitten, or it_just_works? Do you think Apple is naive enough to believe that their code won't be reverse engineered, or do you think the people doing the coding for their corporate masters just don't care?
JZ: I think Apple didn't count on someone, initially, hacking their device. However, they're very aware now of reverse engineering. It's more that I don't think Apple intended to conspire to keep this a secret.
If you are a security researcher, you know how to get into the phone. Back in iOS 2, there wasn't very much security at all. You didn't have backup encryption at all.
KC: If I was looking at the source code, and I saw a program or function named house_arrest, that would raise alarm bells to me.
JZ: It's just related to Apple's sandboxing, where you're kind of putting every application in a jail. When you're aware of that, it doesn't sound too sketchy. I've been cautious not to read anything into the names of the services.
KC: Why do you think they may have named it something like that?
JZ: Your guess is as good as mine.
KC: Okay. (Laughs.)
JZ: If you're dealing with running applications in jails within your phone, it kind of makes sense.
KC: On another note, many years ago my fiancé (computer security scientist Sean Rooney) decided to run Wireshark in Windows 2000 and XP. He found a lot of traffic reporting his activity to Microsoft. From that point on, he decided never to use Windows for anything, not even to run as a virtual machine. It seems that major vendors have been doing this sort of thing for some time. What are your thoughts about that?
JZ: You really are sharing a lot of personal details with Apple if you're using an iPhone. That's true with Android and other operating systems as well. It's a trade off. It helps Apple provide services and functionality for your phone. The problem is how much personal information is shared, and how it's used.
That issue needs to be resolved. I think the only way to resolve it is legislation requiring disclosure. It's not necessarily a bad thing (to share information with a vendor), as long as they say what kind of information is shared and how it's going to be used.
KC: I've found that the vast majority of the time, end users will just skip right past the EULA (end user license agreement.) "Accept, accept, okay, next!"
JZ: I think Apple is doing a reasonable job, but at the same time, what I don't like about a number of these mobile devices is that you're potentially forced to enter a contract in order to get any useful functionality out of your phone. Someone spends $600 or $700 on a device, and in order to do something useful, such as finding the nearest restaurant, they're agreeing to share their personal details.
I think there's room for improvement there. A balance needs to be struck between the consumer and the mobile manufacturer. I think right now the scales are tipped in favor of the manufacturer. Through legislation or other means, there may be ways to strike a fairer balance.
KC: It seems that Apple's attitude is "the devices we sell to you are really our property."
JZ: I think, to some degree, that's true. At the same time, no one is forcing Apple to provide these types of services. Consumers have alternatives. If they were more concerned, more people would be switching to Android. It's become an accepted practice that a consumer shares a device with the device manufacturer.
In regards to my research, I really think Apple stepped over the line here. Now, these interfaces don't send your personal information directly to Apple. But they do make it available for Apple's diagnostics. This may be the first time, since the iPhone was first released seven years ago, that Apple has not disclosed that there are ways to get data from a phone, around encryption.
If you give that phone to an Apple Care employee, if they use the diagnostic functions, you're unwittingly giving them access to your personal information.
KC: Now, these are all proprietary vendors we're talking about. One popular misconception about Android is that, because it's built on the Linux kernel, Android is all open source. That's nonsense. Everything written on top of the kernel, much of it in Java, is mainly proprietary code.
I'm an avid gamer, and so I use walled garden environments of the kind Apple likes, such as when I use my PS3, PS4, and PS Vita with the PlayStation Network. On the other hand, I'm a huge fan of Richard Stallman, the Free Software Foundation, and the FOSS (free and open source software) movement in general.
An argument is made that open source is better, because if there are backdoors in software, they can easily be found.
JZ: I think it's been proven that open source can have security problems just like closed source code can. For example, the Heartbleed bug in OpenSSL.
KC: I've written about Heartbleed. The bug was a missing bounds check in OpenSSL's heartbeat code, a simple programming mistake.
JZ: The developer was very good about apologizing and fixing the bug when it was discovered. There was no backdoor intentionally placed. I give him a lot of credit for coming forward; it must have been very embarrassing. Saying it wasn't the government or the NSA doing it really helped.
Open source really has little advantage over closed source except for disclosure. The Heartbleed bug was around for years; anyone could have found it in that time.
I really think closed source code gets more attention from security researchers.
KC: But someone like Richard Stallman would argue that it's a lot less likely that someone would put a backdoor into open source code, because it could be easily discovered.
JZ: There are many ways to write obfuscated code so that it doesn't look obvious that it performs malicious functions. There are actually contests for writing that sort of code.
KC: Just to close here, do you have any final words, any other information my readers could benefit from, regarding your recent revelations?
JZ: Regarding my research, the thing I'm trying to stress to everyone is to not panic. This is not a type of attack that'll affect everybody. There are very specific types of threats that certain people are vulnerable to. However, it's good that the media is talking about my research. There are problems that need to be addressed. At the same time, I don't think there's any cause to panic. I don't think there's a widely exploitable attack surface area that'd affect a large number of users.
People that should primarily be concerned about these mechanisms are public officials who are traveling internationally, anyone else who may be a target of someone else, maybe by a government or a foreign government.
Jonathan Zdziarski's books about iOS development and security can be purchased from O'Reilly's website:
http://www.oreilly.com/pub/au/1861
References
Studies by J. Zdziarski and Co-Authors Describe New Findings in Digital Technology
Slides From My HOPE X Talk
Jonathan Zdziarski
http://www.zdziarski.com/blog/?p=3441
Zdziarski's Pastebin of logs from iOS' file_relay service
About Me
Jonathan Zdziarski
http://www.zdziarski.com/blog/?page_id=202
Apple Snuck Backdoor Surveillance Tools Into Their iOS
Matthew Phelan, Gawker.com
http://blackbag.gawker.com/apple-snuck-backdoor-surveillance-tools-into-their-i-e-1610260959