Mal-trained AI?
I'm immediately wondering whether malware could be written to re-train it into disabling security features, by feeding it false haptic, camera, and mic inputs and so on. That would be fun for them to patch; does anyone fancy re-programming the neural net? Or they could just block whole swathes of features instead. The age-old functionality-versus-security trade-off, blah blah.
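For the curious, the kind of attack I mean is usually called data poisoning. Here's a toy sketch (all names and numbers hypothetical, nothing to do with any real device's model): an online model learns a running baseline of "normal" sensor readings and flags outliers, and the malware defeats it not with one big fake reading but by ramping the fake input slowly enough that each step stays inside the model's trust window.

```python
class SensorBaseline:
    """Toy online anomaly detector: tracks a running mean of 'normal'
    sensor readings and flags anything too far from it."""

    def __init__(self, threshold=5.0, lr=0.1):
        self.mean = 0.0          # learned baseline
        self.threshold = threshold
        self.lr = lr             # learning rate for the running mean

    def is_anomalous(self, reading):
        return abs(reading - self.mean) > self.threshold

    def update(self, reading):
        # Only learn from readings the model currently trusts --
        # exactly the loophole the slow drift exploits.
        if not self.is_anomalous(reading):
            self.mean += self.lr * (reading - self.mean)


model = SensorBaseline()
attack_value = 20.0

# Before poisoning: a reading of 20 is obviously anomalous.
print(model.is_anomalous(attack_value))  # True

# Poisoning: malware feeds slowly drifting fake sensor input.
# Each step moves only 0.4, so the reading never strays more than
# the 5.0 threshold from the lagging mean and is always trusted.
reading = 0.0
for _ in range(100):
    reading = min(reading + 0.4, attack_value)
    model.update(reading)

# After poisoning: the same reading of 20 now looks normal.
print(model.is_anomalous(attack_value))  # False
```

The patch problem is real, too: you can't just revert one bad line of code, you have to detect that the learned state has drifted and reset or retrain it.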