iPhone gyroscopes, of all things, can uniquely ID handsets on anything earlier than iOS 12.2

Your iPhone can be uniquely fingerprinted by apps and websites in a way that you can never clear. Not by deleting cookies, not by clearing your cache, not even by reinstalling iOS. Cambridge University researchers will present a paper to the IEEE Symposium on Security and Privacy 2019 today explaining how their fingerprinting …

1. I was half expecting Apple to come up with a patch that basically completely randomised the output of the sensors, like they did for some other identifier in the phone :)

I wonder just how much noise they need to add to the feed to make it non-fingerprinty, and whether that is small enough not to impact the gaming experience?

1. According to the researchers' site, Apple has patched it. I've not read the paper in detail yet, but it's a nice idea: "Fig. 1 (a) presents the raw gyroscope measurements collected by the two devices. From the figure, we can clearly observe the quantization. This is because the outputs of the gyroscope ADC are integers. Taking the difference between two sensor readings directly reveals the gain of the sensor. According to Equation 2, the difference between two measurements, can be calculated as".

It then goes on to:

~ΔA = round(G_0^-1 ΔO)

where ~ΔA is their estimate of the changes in the raw sensor readout, G_0 is an initial guess at the gain matrix, and ΔO are the changes in the reported gyroscope output. From there you can start a recursive estimation of the actual calibration matrix G.
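To make that recursion concrete, here's a minimal scalar sketch in Python/NumPy. Everything in it is illustrative: the gain value, the simulated ADC samples and the convergence loop are made up for the example, and the paper works with a full 3x3 matrix G rather than a single gain.

```python
import numpy as np

# Hypothetical per-device gain (the secret the attack recovers).
g_true = 0.0012345

# Simulated raw integer ADC readings from a near-stationary phone.
adc = np.array([0, 1, -1, 2, 0, 3, -2, 1, 0, -1, 2, 1])
reported = g_true * adc            # calibrated output an app or site sees

# Differences between consecutive reported samples (the quantised steps).
d_rep = np.diff(reported)
d_rep = d_rep[d_rep != 0]          # zero deltas carry no gain information

# Initial guess G_0: the smallest nonzero step should be one ADC count wide.
g = np.min(np.abs(d_rep))

for _ in range(5):
    # ~ΔA = round(G_0^-1 ΔO): snap deltas back onto the integer ADC grid.
    d_adc = np.round(d_rep / g)
    # Least-squares refit of the gain: ΔO ≈ G · ~ΔA.
    g = d_rep @ d_adc / (d_adc @ d_adc)

print(g)  # converges to g_true, the device "fingerprint"
```

The least-squares refit is what lets the estimate survive a slightly wrong initial guess: a few misrounded deltas get outvoted by the correctly rounded ones on each pass.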

You may not need to add much fuzz to defeat this version of the attack, since it relies on the quantisation of the output data (effectively, measuring the smallest distance between non-identical outputs). It does take a least-squares approach to its repeated fitting, though, and a more statistical variation of the same idea might overcome a small amount of noise. Apple's fix apparently follows the authors' suggestion to apply random noise uniformly distributed over the discrete step width. That defeats the straightforward rounding step that gets you to ~ΔA, and from there to an estimate of the gyroscope calibration G. But clever use of something like a Kalman filter to recover the underlying device motion might let you start to average that noise out, since it's added at the sampling frequency and real motion changes are unlikely to be that fast.

Edit: adding uniform noise at the level of the readout steps isn't going to hurt accuracy very much (it will a little). However, if statistical techniques can beat the added noise, it may turn into a trade-off between obfuscation and calibration accuracy.
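A quick sketch of why dithering over one step width works (illustrative numbers, not Apple's actual implementation). The attack keys on deltas between reported samples sitting on exact multiples of the gain; after dithering, the residuals to the grid are spread uniformly and the structure is washed out:

```python
import numpy as np

rng = np.random.default_rng(42)
g_true = 0.0012345            # hypothetical per-device gain (= one output step)

adc = rng.integers(-5, 6, size=1000)
clean = g_true * adc
# Mitigation: dither each sample uniformly over one step width before reporting.
dithered = clean + rng.uniform(-g_true / 2, g_true / 2, size=adc.size)

def on_grid_fraction(samples, tol=g_true / 20):
    """Fraction of sample deltas lying within tol of a multiple of the gain."""
    d = np.diff(samples)
    return np.mean(np.abs(d - g_true * np.round(d / g_true)) < tol)

print(on_grid_fraction(clean))     # 1.0: every delta sits on the grid
print(on_grid_fraction(dithered))  # ~0.1: near-chance, the grid is gone
```

The delta of two independent uniform dithers wraps around one step to a uniform distribution, which is exactly why a full step width of noise (rather than some smaller amount) is the natural choice.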

1. It seems like they're analyzing sampling aliasing. If so, better interpolation would be critical before a little noise can mask it. The formulas might be a bit tricky to work out but it would be trivial CPU overhead for sensors that run no faster than a few hundred samples per second.

Microphones might have this issue too. At least on some Android phones, apps have a choice of getting raw or calibrated microphone data.

2. Well given that most Android phones aren't calibrated but still work for gaming, there's no reason adding noise to 'hide' the EXACT amount of calibration should negatively affect gaming unless they overdo it. I'm not really sure why you couldn't use this to uniquely identify any device, calibrated or not, that wasn't adding noise. Not that there's a reason to do this on Android, since it doesn't go to such lengths to prevent apps from uniquely identifying a device.

The difference between MEMS devices can't be that large or uncalibrated devices would not work properly. Apple's calibration must be precise to an overly large number of digits for it to be detectable to the point of identifying individual iPhones.

I have to hand it to those who figured out these attacks, that's a pretty impressive piece of detective work and engineering. Advertisers must be REALLY struggling to identify specific iPhones to go to such great lengths though.

1. I'm not really sure why you couldn't use this to uniquely identify any device, calibrated or not, that wasn't adding noise.

They're using the discrete nature of the sensor output to work out the device-specific calibration values (mapping integer sensor output to calibrated real numbers). For uncalibrated devices those values will not be unique (I'd guess batch-specific at best?). If you could find a way for uncalibrated devices to secretly determine their correct calibration then I suppose that might be identifiable, but calibrating something is difficult enough, let alone without co-operation, and the calibration uncertainty is likely to be larger than the spread of device characteristics anyway. Attacking calibrated devices, you're really just reading a set of stored floating-point numbers that are the same every time you extract them.

2. It may take a little longer to get the fingerprint, but nearly everyone will put their phone down somewhere at some point.

If the app taking the fingerprint were to just wait for a period with no gross jitters on the sensors, then you could assume that it's stationary and fingerprint it then, without having to try to account for gait etc.

1. Are we going to see a market form for vibrating phone cases now?

2. It doesn't matter much either way. If the phone is stationary you need enough sensor noise to produce jitter; test mode on my phone seems to confirm there should be enough on a normal office table, though you still need to wait long enough to collect the different combinations that let you estimate the full calibration matrix. Because they're looking at changes between samples, moving about is fine too, and they do a trick of looking at the differences between pairs of outputs with similar values. Not much of the work is actually concerned with gait. There is some discussion of bias correction in the JavaScript API causing a problem, and waiting for the phone to be stationary is a suggested workaround for that. However, the 100 samples they need amount to less than a second of data.

To crudely summarise: say you've got an accelerometer that outputs integer values from 0 to 255, and some calibration that turns that into physical units, say m/s^2. You still only have 256 possible values. Maybe they cover -20 to 20 m/s^2, so they're about 0.156 m/s^2 apart (-20 m/s^2, -19.844 m/s^2, ...). Look at the values you've collected over some set of samples and take the differences between close ones: you'll see +/-0.156, +/-0.312, etc. On a calibrated device that 0.312 will not be exact, say it's 0.312445, and this value will differ between phones. With the gyroscope you've got a slightly more complex scenario, with six (I think, guessing it's symmetric) independent calibration values that can be extracted in a similar way. Because these are accelerometers, the method works equally well stationary (values zero plus noise), in steady motion (values zero plus noise again) and under constant acceleration (a constant value plus noise); a bit of changing acceleration, unless incredibly jerky, is not a problem either.
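Putting numbers on that crude summary (all values made up for the example): 256 levels spanning -20 to 20 m/s^2 gives a nominal step of 40/256 = 0.15625 m/s^2, and the smallest nonzero difference between reported samples exposes the device's actual, slightly-off step:

```python
import numpy as np

# Illustrative: 8-bit accelerometer covering -20..20 m/s^2.
nominal_step = 40 / 256                    # 0.15625 m/s^2 per ADC count

# A real device's calibrated step differs slightly; that deviation is
# the fingerprint (value invented for this example).
device_step = 0.156222

# Reported readings for a phone lying still: small integer counts times
# the device step, plus a calibration offset (the offset cancels in deltas).
counts = np.array([63, 64, 63, 65, 64, 64, 63, 66, 64, 63])
readings = device_step * counts - 10.0

deltas = np.abs(np.diff(readings))
deltas = deltas[deltas > 1e-9]             # drop the zero deltas
recovered = np.min(deltas)                 # smallest step between samples

print(recovered)  # matches device_step, not the nominal 0.15625
```

Note that the calibration offset drops out entirely, which is why working with differences between samples, rather than the samples themselves, is the natural move.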

3. An app can probably be more trusted, but the more concerning case is a script on a website. A user can be tracked inside an app by many more reliable mechanisms, so the only benefit to apps in using this is to try to track a user after they've deleted the app and reinstalled it. Concerning, yes, but not a very frequent thing to do. Meanwhile, sites that can fingerprint via scripts are much more worrying. If it takes a second to do, lots of sites will do it successfully; if we can stretch that to a minute, fewer sites will bother, and plenty won't get a chance to finish before the user closes the page. Still, I think the best approach is to deny sites access to those measurements altogether. If a service really needs them, it can ship an app.

4. ... but a website running this code is less likely to be loaded when the phone is not in use!

5. nearly everyone will put their phone down somewhere at some point

True, but will nearly everyone put their phone down on a PERFECTLY level surface?

"[T]he actual technique does not necessarily have to be malicious in practice (for example, a bank might use it to uniquely fingerprint your phone as an anti-fraud measure)"

About that... NO! The bank does not need to fingerprint my phone, given that they won't be fingerprinting anyone's non-Apple devices. Nobody needs a unique fingerprint that I can't wipe. History to determine whether a request is likely to be legitimate or not, sure. Sneaky tracking data, no. In addition to it being creepy, it wouldn't actually help very much given that only a small number of devices can use it even if Apple unpatches the vulnerability.

Saying things like that makes it sound like you think there is a legitimate use for this invasive technique. There isn't.

1. Re: Please don't do that

I would not object to a bank uniquely identifying my specific phone as a way to require less authentication, so long as the API that did that, and the permission it would ask for, were something Apple had to specifically approve for an app to reach the App Store.

Basically it would be a sort of two-factor authentication without requiring a separate dongle: I have MY phone and MY password, so they know for sure it is me. But the possibility of abuse is so high that I'd want it to be something Apple controlled pretty tightly, allowing it only for apps from approved publishers in a very limited set of categories that deal with financial stuff, like banks, brokerages, etc., and the user would have to approve it on their end as well.

Being able to know it is MY phone is too useful to throw away completely just because 99% of the market would abuse it for advertising purposes or worse. So I'm not ready to say "don't allow it at all" but it would need to be treated with extreme care.

2. Re: Please don't do that

I can think of at least one case: it creates an identifier that's persistent even if your phone is stolen and wiped. I don't know if that's more persistent than IMEI, but it could be useful.

Naturally, it could also be used maliciously.

4. Interesting side-channel attack

The commonality with other side-channel attacks is that these typically also require some kind of high-precision measurement.

See title

6. People are just too goddamn smart.

This is obviously why we can't have nice things.

AND these people are apparently witches.

1. Re: People are just too goddamn smart.

You think a smartphone is a 'nice thing'? To me it's a (forced upon us) necessary evil. ;-)

1. Re: People are just too goddamn smart.

Who is forcing it on you?

1. Re: People are just too goddamn smart.

Forced, as in the growth of technology that relies more and more on mobile apps and services, deals that make it more cost-effective to have a mobile data plan as your primary internet access (no need for a home hub), and people needing instant access to the internet for their jobs. Just an ever-increasing world that relies more on mobile technology each year.

2. Re: People are just too goddamn smart.

Banks, for starters. All of them, as they've agreed to do so in a bank cartel meeting, being a proper cartel and all.

"Paper pin-lists are banned by EU, you must have a phone app to use network banking"

"Oh, you don't have a smartphone? Too bad. Just pay them in the bank then."

And if you go to a bank (hah, the nearest is 15 km away) they charge 8 euros per bill you want to pay.

Try to live without banking ... so yes, it is literally forced on you.

7. Straining at gnats

Why is everyone so worried about device fingerprints ? Surely 99% of people don’t clear down their cookies regularly and don’t have security problems?

I can imagine hackers and paedos want to avoid fingerprints, but presumably they’re using Tor on virtual machine running on a USB stick. I guess there are a few other legit concerns, but not enough to justify having to press “I accept your cookies blah blah” on every bloody web page...
