But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence
First off, there is the problem of what "achieving superintelligence" even means. Superintelligent in what? It does not mean that NP-hard problems magically drop away: approximation, errors, bounded rationality, quick-and-dirtiness and arbitrary dumbass attacks will be inherent features of any AI, even one not limited by a short-term memory of ~7 items. There is also a hard limit on being "maximally intelligent", namely the fastest learning algorithm possible, and that limit is closely tied to the intractability of finding maximally compressed representations (see here).
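To make the trade-off concrete, here is a minimal sketch (my own illustration in Python; the function names and the toy instance are made up, not from the quoted article): an exact subset-sum search has to grind through up to 2^n subsets in the worst case, so even a "maximally intelligent" agent either pays the exponential bill or settles for a quick-and-dirty heuristic that can be wrong.

```python
from itertools import combinations

def exact_subset_sum(xs, target):
    """Brute-force exact solver: examines up to 2^len(xs) subsets.

    Assuming P != NP, no insight shrinks this search space in the
    worst case; a "superintelligence" faces the same wall.
    """
    for r in range(len(xs) + 1):
        for combo in combinations(xs, r):
            if sum(combo) == target:
                return combo
    return None

def greedy_subset_sum(xs, target):
    """Quick-and-dirty heuristic: fast, but may miss exact solutions.

    This is the bounded-rationality trade-off: accept approximation
    and errors in exchange for tractable runtime.
    """
    picked = []
    for x in sorted(xs, reverse=True):
        if sum(picked) + x <= target:
            picked.append(x)
    return picked

xs = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
print(exact_subset_sum(xs, 5745))   # exhaustive: exponential time, exact hit
print(greedy_subset_sum(xs, 5745))  # fast, but lands on 5553, not 5745
```

On this instance the greedy pass returns immediately but undershoots the target, while the exhaustive pass finds an exact solution by paying for the full search. Intelligence buys better heuristics, not an exemption from the search space.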
and be totally focussed on making paperclips, it could end up converting all known matter into paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI: its only goal is to make paperclips.
No, it couldn't. First, what is described here is a factory, which is by its very definition NOT intelligent. This looks like a bait-and-switch-to-grey-goo scheme. Moreover, being intelligent does not mean suddenly being able to command the energy and material processes of the environment to perform crazy feats (even if Frank Herbert thought so in "Destination: Void"). Doing "philosophy" is not a license to veer off into crazy & unhinged territory.
He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.
What the fuck am I even reading? This discussion is even lower bog-tier than the unprovable "multiverse" grappling-for-funding activities so beloved by sadass physicists out there. DERP! Extend and solve your Quantum Field Theories properly, you lazy f*cks, there is a megaton of work to do!