Re: So is it running z/OS or Linux or both? @boltar (again)
You got me thinking back more than 25 years to my training on Amdahl's Multiple Domain Facility (MDF) that I talked about in my last post, and I realised that back then, hypervisors did not really virtualise I/O.
What early hypervisors would do was to segregate memory and access to I/O channels (literally in the IBM mainframe world, but I suppose analogous to a set of disks or other devices hung off of a single adapter in more modern thinking), and provide a time-slice scheduler between partitions for the CPU.
All handling of I/O was performed natively by the hosted OS, including boot block requests, and it was only in very rare situations (such as extended I/O interrupts) that the hosted OS even knew it was running in a virtualised environment.
What this meant was that a hosted OS had to have complete and exclusive access to a string of disks, or indeed any other device, and all the hypervisor had to do was check that a hosted system did not try to access disks or other devices that were not presented to it.
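In modern pseudo-code, that gatekeeping role amounts to little more than this (a toy sketch in spirit only, not actual MDF code; all names and addresses here are invented for illustration):

```python
# The hypervisor doesn't virtualise the I/O at all: it just checks that the
# requesting partition owns the channel it names, then lets the channel
# program run natively. Static configuration, whole channels owned outright.
OWNED_CHANNELS = {
    "LPAR1": {0x0100, 0x0101},   # e.g. a string of disks on two channels
    "LPAR2": {0x0200},
}

def start_io(lpar: str, channel: int, command: bytes) -> str:
    """Gatekeep a hosted OS's I/O request, then pass it through untouched."""
    if channel not in OWNED_CHANNELS.get(lpar, set()):
        # Device not presented to this partition: reflect an error to the
        # hosted OS rather than performing the operation.
        raise PermissionError(f"{lpar} has no access to channel {channel:#06x}")
    # Otherwise the real channel program runs natively -- no translation,
    # no emulation, which was the whole point of the early design.
    return f"channel {channel:#06x}: executed {command!r} natively"
```

The cheapness of the check is exactly why the hosted OS needed exclusive ownership of each device string: with nothing shared, there is nothing to arbitrate.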
The most difficult part of slicing a machine up like this was making sure that device interrupts were handled by the correct hosted OS, the one that had initiated the I/O operation.
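Conceptually the routing problem looks like this (again a hypothetical modern sketch, with invented names, of the bookkeeping involved): the hypervisor remembers which partition started each operation and reflects the completion interrupt only to that hosted OS.

```python
# channel address -> LPAR with an I/O outstanding on that channel
pending: dict[int, str] = {}

def initiate_io(lpar: str, channel: int) -> None:
    """Record who started the operation before it is issued natively."""
    pending[channel] = lpar

def on_device_interrupt(channel: int) -> str:
    """Decide which hosted OS the interrupt should be reflected into."""
    lpar = pending.pop(channel, None)
    if lpar is None:
        # Unsolicited interrupt with nothing outstanding: the hypervisor
        # has to swallow or handle it itself.
        return "hypervisor"
    return lpar
```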
There was virtualised addressing for each LPAR, so each hosted OS ran as if it had its own contiguous address space starting at 0 and running up to the memory size configured. Additional protection was provided by access keys attached to each page of memory: a hosted OS had to present the correct key to access a page, and each LPAR was only given its own key. I think this memory keying was a hang-over from the early versions of IBM VM, which did not have a fully virtualised addressing scheme.
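The key scheme boils down to something like this (an illustrative model only; the real mechanism has key widths, fetch-protection bits and other details I'm glossing over):

```python
# Every page frame of real memory carries a storage key; each LPAR holds
# exactly one key, and an access only succeeds when the keys match.
page_key: dict[int, int] = {}   # frame number -> storage key

def assign_frames(frames, key: int) -> None:
    """Configure a range of real page frames to belong to one key."""
    for f in frames:
        page_key[f] = key

def access(frame: int, lpar_key: int) -> str:
    """A hosted OS touches a frame; mismatched keys get a protection exception."""
    if page_key.get(frame) != lpar_key:
        raise MemoryError(f"protection exception on frame {frame}")
    return f"frame {frame} accessed"

assign_frames(range(0, 4), key=1)   # LPAR with key 1 owns frames 0-3
assign_frames(range(4, 8), key=2)   # LPAR with key 2 owns frames 4-7
```

So even if the address-translation layer had a bug, a partition reaching outside its own memory would still trip on the key check.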
It's only since hypervisors started presenting shared, virtualised I/O to their hosted OSs that they have become particularly sophisticated.