Re: He's right. Again.
"one of the advantages of Windows more modern hybrid microkernel design..."
Oh this is hilarious. But not in a good way.
Back in the days of NT 3.x, when the evidence of NT's origins in Cutler's VAXELN distributed real-time kernel was still visible to those who knew VAXELN (and for those who didn't know, it was briefly written about in Custer's Inside Windows NT), there was some plausibility in the modular kernel talk.
Various classes of process ran in separate address spaces and communicated via procedure calls. The amount of shared data was strictly limited, and for good reason (robustness and security, for example).
But this robustness came at the price of performance. Run the same app on a Win16 box and on the same hardware running NT, and the Win16 box would be the performance winner.
So over time Gates forced changes towards the monolithic approach, e.g. moving into the kernel, for performance reasons, assorted drivers and subsystems that for security and robustness reasons should have been kept isolated from each other.
The Win16 box wouldn't be a productivity winner, though, because it would keep running out of memory or locking up or falling over. Those were things the NT user didn't have to put up with. But productivity is a lot harder to measure than performance.
And as far as I can see, the "more modern hybrid microkernel design", if it ever really existed, was sacrificed along with the modular design, when performance won over productivity.
There *may* have been some return to the modular design during the "trusted computing" era, when it became important for PCs not to leak high-value media content on the copy-protected path between content provider and HD display. But that wasn't about generic robustness and security, just about providing a trustable path (whatever that might mean) for end-to-end content delivery.