One thing which hasn't been mentioned re MP3 encoding is that it chops the sound into chunks of roughly 26ms (1152 samples at 44.1kHz, if memory serves) and performs a Fourier-type transform (strictly, a hybrid filter bank feeding an MDCT) to move the information from the time domain to the frequency domain. In the frequency domain it's easy to filter and/or scale the frequencies you don't have the bit rate to transmit, and to remove completely those parts which the selected perceptual coding model claims cannot be heard.
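A toy sketch of the chunk-and-transform step (illustrative only: real MP3 uses overlapping windows and an MDCT, not the naive DFT here; the frame length and tone frequency are my assumptions):

```python
import cmath
import math

SAMPLE_RATE = 44100
FRAME = 1152  # one MP3 frame is 1152 samples, ~26 ms at 44.1 kHz

def dft_magnitudes(frame):
    """Naive DFT; returns one magnitude per frequency bin (slow, for illustration)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A 1 kHz sine chopped into one frame: its energy piles up in one bin,
# which is what makes frequency-domain filtering and quantising easy.
tone = [math.sin(2 * math.pi * 1000 * t / SAMPLE_RATE) for t in range(FRAME)]
mags = dft_magnitudes(tone)
peak_bin = mags.index(max(mags))
print(peak_bin * SAMPLE_RATE / FRAME)  # the peak bin's centre frequency, ~1 kHz
```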
When you replay it, the reverse occurs: the remaining data is converted back into the time domain (I've omitted details of the other compression coding applied to the data itself, as it's not relevant) and replayed. Two things have happened by now: you've lost what the coder thinks you can't hear, or had it reduced in precision, and you've lost the phase information in the original signal, which may or may not be significant. The theory is that the human ear can't hear phase information; I'm not so sure, but...
A second point is that the MP3 standards *do not* define the encoder. They specify that a datastream like *this* shall produce an output *thus*, but they don't say how you get to the datastream. Different encoders make different decisions about the perceptual coding models; some are audibly different with the same algorithm on floating-point or integer processors, particularly at low bit rates. It's likely that similar effects pertain on the decoder.
Third point: the DAC on a phone or laptop is unlikely to be anything other than the cheapest the maker could get away with. A high noise floor, a less than stable clock, cheap filters. Apropos of filters: many sound cards (in days of old; I don't know if this is still true) used switched-capacitor anti-aliasing filters driven from the sample frequency. An excellent idea, since everything tracks automatically. But I came across some cards which also had a high-pass filter at the bottom end to stop LF noise; some of those cut off at 300Hz or higher at 44.1k sampling.
Assuming for the sake of argument that FLAC/ALAC is truly lossless, i.e. that the bit pattern going in is exactly the same as the bit pattern coming out, the way to test the comparison would actually be to ignore the FLAC-coded signal completely and find something clean in 16-bit audio, say a rip of a good CD, ideally not one that's been compressed to death as so many are, and get it into a WAV PCM file. Use that file with the encoder of your choice to create an MP3 file at the bit rate of your choice; decode that using the decoder of your choice to another WAV PCM file.
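The round trip above can be scripted, for example by driving the LAME command-line encoder (an assumption on my part: it requires `lame` to be installed, and any other encoder/decoder pair would do just as well):

```python
# Sketch of the WAV -> MP3 -> WAV round trip, using LAME for both
# directions. File names and the 128 kbps default are placeholders.
import subprocess

def roundtrip_commands(src_wav, mp3_path, out_wav, kbps=128):
    """Build the encode and decode command lines for a LAME round trip."""
    encode = ["lame", "-b", str(kbps), src_wav, mp3_path]
    decode = ["lame", "--decode", mp3_path, out_wav]
    return encode, decode

def run_roundtrip(src_wav, mp3_path, out_wav, kbps=128):
    for cmd in roundtrip_commands(src_wav, mp3_path, out_wav, kbps):
        subprocess.run(cmd, check=True)

# Example (hypothetical file names):
# run_roundtrip("clean_rip.wav", "test.mp3", "decoded.wav", kbps=128)
```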
Now play the two WAV files. If you can hear a difference, there is one; if you can't, it doesn't matter. The DAC doesn't matter *if that's what you normally listen through*, since it affects both WAV files the same way. For completeness, FLAC encode and decode and listen to that too, with the same logic.
If you really want to get silly, use an audio editor like Audacity to subtract the two files (you'll have to delay one of them a little to get the timing right) and see how much signal is left. That's what the codec thinks you can't hear.
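The same "null test" can be done in a few lines of pure Python: load two 16-bit mono WAVs, try small delays to line them up, subtract, and report the residual level. The mono/16-bit restriction and the offset search range are my simplifications; Audacity's Invert-and-mix does the same job interactively.

```python
import math
import wave

def read_samples(path):
    """Read a 16-bit mono WAV file into a list of signed integers."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        raw = w.readframes(w.getnframes())
    return [int.from_bytes(raw[i:i + 2], "little", signed=True)
            for i in range(0, len(raw), 2)]

def residual_rms(a, b, max_offset=64):
    """Best-aligned RMS of (a - b), searching small delays in both directions.
    The smaller the result, the less the codec threw away."""
    best = None
    for off in range(-max_offset, max_offset + 1):
        pairs = [(a[i], b[i + off]) for i in range(len(a))
                 if 0 <= i + off < len(b)]
        rms = math.sqrt(sum((x - y) ** 2 for x, y in pairs) / len(pairs))
        best = rms if best is None else min(best, rms)
    return best

# Usage (hypothetical files from the round trip above):
# print(residual_rms(read_samples("original.wav"), read_samples("decoded.wav")))
```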