Google 20% time projects
It isn't just GMail. The following is the comprehensive list of Google 20% staff pet projects.
Google sky (an extension of Google Earth)
Google Art Project
The wife of a former senior Apple engineer has spilled the beans on Apple's transition, and it's a sweet and surprising story. The first years of the last decade posed a problem for Apple hardware: it had a new, futuristic operating system but was being left behind in the performance race. CEO Steve Jobs had fallen out with …
Ahh yes, the famous 20% time. I'd still rather not be expected to work 50-plus hours a week in order to be allowed to spend 20% of that time on things which personally interest me. If something personally interests me, I'll do it at home, on my own terms, and if it becomes a realistic business model, maybe approach my employer for buy-in, or develop it myself.
Don't equate the OS with its underpinnings. While I think the BSD userland is wonderful, the NeXTStep frameworks and GUI are just as much part of the OS, but the port couldn't really happen until it was possible to translate PowerPC instructions fast enough (Rosetta + Intel Core Duo) to remain usable.
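The cost the comment above alludes to is easy to see in a toy. Emulating a foreign ISA means dispatching every guest instruction in software, so the host needs a large speed margin over the guest. A deliberately simplified interpreter for a made-up PPC-flavoured instruction set (all opcodes and names here are my invention; the real Rosetta did dynamic binary translation rather than interpretation, but the per-instruction overhead argument is the same):

```python
def run_guest(program):
    """Interpret a tiny, made-up 'PPC-like' instruction list.

    Each guest instruction costs several host operations (tuple unpack,
    dict lookups, arithmetic), which is why raw translation speed gated
    whether an emulated platform felt usable.
    """
    regs = {}
    for ins in program:
        op = ins[0]
        if op == "li":                    # load immediate: (li, dest, imm)
            _, dest, imm = ins
            regs[dest] = imm
        elif op == "add":                 # (add, dest, src1, src2)
            _, dest, a, b = ins
            regs[dest] = regs[a] + regs[b]
        elif op == "mul":                 # (mul, dest, src1, src2)
            _, dest, a, b = ins
            regs[dest] = regs[a] * regs[b]
        else:
            raise ValueError(f"unknown guest opcode: {op}")
    return regs
```

For example, `run_guest([("li", "r1", 6), ("li", "r2", 7), ("mul", "r0", "r1", "r2")])` leaves 42 in `r0`, at the price of many host operations per guest instruction.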
Care to explain what part of Mach 3 is shared with BSD? The answer is: nothing. The BSD emulation layer is simply grafted onto the side of the operating system. Mach was written at Carnegie Mellon University, and the leading light of the team was Avie Tevanian, who became NeXT's and then Apple's main technology guy. Mach 2.5 was heavily based upon a Unix code base, built to show the value of the new operating-system APIs whilst keeping the useful bits of an existing operating system. Mach 3 was a rewrite from scratch. In Darwin, some of the Unix emulation (which was mostly in user-mode server processes) was migrated back into the kernel for speed.
This continual background buzz that OSX is just BSD simply reflects a lack of knowledge of the technical history and of the current technical structure of OSX. The entire kernel is new code, the process model is different, the device driver model is different, the system APIs are different. OSX includes the Quartz graphics layer, and the list goes on. There is, however, very usefully, a BSD-compatible emulation layer, and Apple leveraged it well. But it doesn't make OSX a tweaked BSD. What OSX is, is NeXT. If you want to see where the OS really came from, look there, and at the linkage back to Mach.
There are a number of factual errors in your comment.
No version of Mach is "based on a Unix code base". Mach was designed to be a message-passing microkernel, but for Unix compatibility, there is a large in-kernel Unix server; this is just one of the modules of code within the kernel, although by far the largest. In NeXTstep this was based on BSD 4.x code - 4.3 and 4.4-Lite, I believe - but in Mac OS X it was updated with code from the FreeBSD kernel. This is one of the main reasons that core FreeBSD developer Jordan Hubbard was hired by Apple.
There is no "BSD emulation layer". The userland of the OS is also taken from BSD, and again, in Mac OS X it was updated with code from FreeBSD.
Yes, XNU and OS X are radically different from BSD and indeed FreeBSD, but there is a lot of FreeBSD code in there. Your talk of "emulation layers" makes me think that perhaps you don't know what a userland is. I suggest starting your reading here:
In addition to what Liam Proven added, when I admin Apple Servers (and the occasional friend's Apple machine), I always head for the command-line (occasionally single-user), and treat it like it's a BSD box. This approach hasn't bit me in the ass. Yet :-)
Funny how folks find simple hardware modern-day "sacred cows", innit.
"No version of Mach is 'based on a Unix code base'." This is probably simply not knowing the history. Prior to Mach 3, the message-passing system API was grafted into what was a Unix kernel. This included the first external pager, and other services. I know; I have the source code. We are talking a long time ago. I visited CMU and the Mach team in 1989. Mach 3 was under development at this time, but 2.0 and 2.5 had been deployed for a while. Mach 2.6 was shot through with Sun code, and core kernel components (such as the process-switch code) were lifted verbatim. Mach 3 changed and extended the APIs quite a bit.
The term "userland" is much more recent than much of this technology. Sure, the Unix emulation ran in a user-mode process, so in more modern terminology it is a userland. The microkernel guys were doing this all over the place; back in the late 80s it was all the rage. Chorus was doing very similar things. The terminology doesn't make the idea different.
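The pattern the comments above are debating can be sketched in a toy form. This is my own illustration, not actual Mach or Chorus code: system services live in a user-mode server that receives requests as messages, so what looks like a syscall to the client is really IPC.

```python
import queue
import threading

class UnixServer(threading.Thread):
    """A user-mode 'Unix server': handles file reads sent to it as messages."""

    def __init__(self, port: queue.Queue):
        super().__init__(daemon=True)
        self.port = port
        self.files = {"/etc/motd": b"welcome\n"}  # stand-in filesystem state

    def run(self):
        while True:
            call, args, reply_port = self.port.get()
            if call == "read":
                # The 'syscall' is serviced here, outside the kernel proper.
                reply_port.put(self.files.get(args, b""))
            elif call == "shutdown":
                return

def emulated_read(server_port: queue.Queue, path: str) -> bytes:
    """Client side of the emulation: read() is really message-passing IPC."""
    reply_port: queue.Queue = queue.Queue()
    server_port.put(("read", path, reply_port))
    return reply_port.get()
```

Every such round trip crosses the "kernel" twice, which is exactly why Darwin later migrated some of this emulation back into the kernel for speed, as noted above.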
You are absolutely correct to point out that Mach is the important bit and it's not BSD. But that is not the major problem with the article. It gives the impression that the OS/X efforts on Intel were a one-man show.
Apple's efforts with the underlying code go much deeper and started much, much earlier on an approach with two prongs:
- The first was Darwin. In 2002 I was running Darwin on a discarded Intel PC - when I say "Intel" I mean every chip on the motherboard was Intel and so was the network card. Darwin for x86 version 1.4x ONLY ran with Intel drivers.
- The second prong was the expertise derived from Apple's even earlier MkLinux project. This worked on ancient NuBus PPC Macs and used the Mach kernel also, despite looking exactly like Red Hat Linux, right down to the Anaconda installer.
At one point I had a lot of discarded units from a university teaching lab that we re-equipped with PCs. I also had a lot of older Apple LaserWriter printers with no Ethernet capability. For a long time I followed the MkLinux community, headed up by David Gatwood at Apple, making good use of the old hardware.
The old NuBus machines would fit exactly under the LaserWriters; they accepted print jobs over Ethernet and transmitted them to the LaserWriters via AppleTalk, reformatting as necessary on the way. To a PC, the LaserWriters looked like an expensive networked printer supporting multiple job languages, thanks to the old PPC Mac running Linux with netatalk, LPD, etc. And from a remote-administration point of view, they were a bog-standard Linux box.
As a long time Mac [and PC] user, I recall how the Mac purists were horrified when Jobs announced that x86 based Macs were going to be released.
The choice was logical: the PowerPC roadmap was going nowhere. Sure, it wasn't about MHz/GHz, but the G-series chips weren't going to take Apple where they needed to be in the long run.
Intel's new dual-core processors were fast and generated a lot less heat than the G-series chips they replaced, and the option of Intel's integrated graphics also made sense [on the Mac Mini].
In essence, the big factor was that Apple were far-sighted enough to support Boot Camp, thus allowing Mac owners to also run [a retail version of] Windows on their systems. This was the golden egg in my book: the best of both worlds, with easy installation and compatibility thanks to Boot Camp generating the drivers CD.
Today's Core i5 and i7 Macs are awesome, albeit a little overpriced. But again, if you want to game, or you're not part of the anti-Redmond establishment, you can enjoy the best of both on one shiny aluminium-clad desktop or laptop (no offence to the purists for calling a MacBook Pro a "laptop").
The PowerPC chips were problematic, particularly on heat - even the G4 laptops were uncomfortably warm - so I can understand the shift to a different architecture. I'm just disappointed they didn't pick another sane and properly-designed one, such as ARM. Compared to either, x86/x64 is a 35-year-old steaming pile of crap, with one layer of (mostly) backwards-compatible cruft bolted on after another.
For those of you who go back far enough, you will recall that the PPC-to-Intel migration was not the first time Apple had changed platform with minimal disruption. Before PPC became the processor of choice, all Macs were based on the 68K chip family, and when Apple introduced the PPC processor it included a pretty decent 68K emulation layer in the OS which allowed old applications to still run. In fact, 68K support was still in Mac OS X right up to 10.4, if I recall correctly. The original term "fat binary" derived from having 68K and PPC code in one application package, a feature made possible by the Mac OS resource fork file structure.
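The "fat" idea outlived the resource fork: on Mac OS X, universal binaries pack multiple architectures into one Mach-O file behind a fat header whose magic number, stored big-endian, is 0xcafebabe. A minimal sketch (my own helper, not Apple tooling) that classifies a file from its first four bytes:

```python
import struct

FAT_MAGIC = 0xcafebabe    # fat (universal) Mach-O header, stored big-endian
MH_MAGIC = 0xfeedface     # thin 32-bit Mach-O
MH_MAGIC_64 = 0xfeedfacf  # thin 64-bit Mach-O

def classify_macho(header: bytes) -> str:
    """Classify a binary from its leading magic number (illustrative only)."""
    if len(header) < 4:
        return "too short"
    (magic_be,) = struct.unpack(">I", header[:4])  # big-endian reading
    (magic_le,) = struct.unpack("<I", header[:4])  # little-endian reading
    if magic_be == FAT_MAGIC:
        return "fat (universal) binary"
    # Thin Mach-O magic appears in the file's native byte order, so check both.
    if magic_be in (MH_MAGIC, MH_MAGIC_64) or magic_le in (MH_MAGIC, MH_MAGIC_64):
        return "thin Mach-O"
    return "not Mach-O"
```

Feeding it the first four bytes of any `/usr/bin` tool on a Mac would show whether that build shipped fat or thin; the fat header that follows the magic simply lists per-architecture offsets into the same file.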
What are the secrets?
From WWDC 2005:
"Jobs then confirmed a long-held belief that Apple was working on an Intel-compatible version of Mac OS X that some have termed 'Marklar'.

"Mac OS X has been 'leading a secret double life' for the past five years, said Jobs. 'So today, for the first time, I can confirm the rumors that every release of Mac OS X has been compiled for PowerPC and Intel. This has been going on for the last five years.'"
'obliged to run antiquated chips at higher frequencies and higher temperatures – essentially overclocking the parts.'
Ummm... when going to the original cited article, also from El Reg:
'Overclocking is when the user raises the clock frequency beyond the recommended frequency marked on the processor. Chips are capable of operating at several speeds, and are graded as they leave the fab: good batches are judged capable of running higher frequencies. Since the parts Apple is using in the latest Macs run at their marked frequency, Apple can't strictly be accused of "overclocking".'
Biting the hand that feeds IT © 1998–2019