Insider cuts into Apple, peels off Intel Mac OS X port secrets

The wife of a former senior Apple engineer has spilled the beans on Apple's transition, and it's a sweet and surprising story. The first years of the last decade posed a problem for Apple: it had a new, futuristic operating system, but its hardware was being left behind in the performance race. CEO Steve Jobs had fallen out with …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Google 20% time projects

    It isn't just Gmail; the following is a comprehensive list of Google 20% time staff pet projects.

    Gmail

    Orkut

    Google News

    Adsense

    Google Talk

    Google Sky (an extension of Google Earth)

    Google Art Project

    1. jake Silver badge

      Re: Google 20% time projects

      "google"?

      What's that? Some of us think for ourselves ...

    2. Anonymous Coward

      Re: Google 20% time projects

      Ahh yes, the famous 20% time. I'd still rather not be expected to work 50-plus hours a week in order to be allowed to spend 20% of that time on things which personally interest me. If something personally interests me, I'll do it at home, under my own terms, and if it becomes a realistic business model, maybe approach my employer for buy-in or develop it myself.

    3. jai

      Re: Google 20% time projects

      Gmail is just the only one that the general public are likely to be aware of?

    4. lurker

      Re: Google 20% time projects

      Plus the article seems to be assuming that only projects which see the light of day as a finished product are worthwhile, whereas in reality it's quite likely that a lot of the skills and techniques developed can be applied elsewhere.

      1. Anonymous Coward

        Re: Google 20% time projects

        Apparently at Valve it's more like 100%.

  2. jake Silver badge

    Oh, c'mon.

    Apple's current OS is basically just a tweaked BSD.

    Most folks with a clue have been running BSD on Intel for decades.

    1. toadwarrior

      Re: Oh, c'mon.

      Yeah, it's based on FreeBSD now due to his work.

    2. Charlie Clark Silver badge

      Re: Oh, c'mon.

      Don't equate the OS with its underpinnings. While I think the BSD userland is wonderful, the NeXTSTEP frameworks and GUI are just as much part of the OS, but the port couldn't really happen until it was possible to translate PowerPC instructions fast enough (Rosetta + Intel Core Duo) to remain usable.

    3. ThomH

      Re: Oh, c'mon.

      You mean other than the kernel, the drivers, the windowing system, the binary format, the filing system and the system- and user-level libraries?

    4. Francis Vaughan

      Re: Oh, c'mon.

      Care to explain what part of Mach 3 is shared with BSD? The answer is nothing. The BSD emulation layer is simply grafted onto the side of the operating system. Mach was written at Carnegie Mellon University, and the leading light of the team was Avie Tevanian, who became NeXT's and then Apple's main technology guy. Mach 2.5 was heavily based upon a Unix code base, built to show the value of the new operating system API whilst keeping the useful bits of an existing operating system. Mach 3 was a scratch rewrite. In Darwin some of the Unix emulation (which was mostly in user-mode server processes) was migrated back into the kernel for speed.

      This continual background buzz that OSX is just BSD is simply an annoying lack of knowledge of the technical history and the current technical structure of OSX. The entire kernel is new code, the process model is different, the device driver model is different, the system APIs are different. OSX includes the Quartz graphics layer, and the list goes on. There is, however, very usefully, a BSD-compatible emulation layer. Apple leveraged this well. But it doesn't make OSX a tweaked BSD. What OSX is, is NeXT. If you want to see where the OS really came from, look there, and at the linkage back to Mach.
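
      To make that concrete, here is a minimal sketch of my own (not Apple or CMU code; the file name mach_port_demo.c is just illustrative): the Mach heritage is still directly visible from userspace on OSX, because XNU exports the Mach port/IPC primitives alongside the BSD system-call interface. Something like this should build with cc on a Mac:

      /* mach_port_demo.c - minimal illustrative sketch, not production code. */
      #include <mach/mach.h>
      #include <mach/mach_error.h>
      #include <stdio.h>

      int main(void)
      {
          mach_port_t port = MACH_PORT_NULL;

          /* Ask the kernel for a fresh receive right in this task's IPC space. */
          kern_return_t kr = mach_port_allocate(mach_task_self(),
                                                MACH_PORT_RIGHT_RECEIVE,
                                                &port);
          if (kr != KERN_SUCCESS) {
              fprintf(stderr, "mach_port_allocate: %s\n", mach_error_string(kr));
              return 1;
          }

          printf("allocated Mach port name: 0x%x\n", port);

          /* Drop the receive right again. */
          mach_port_mod_refs(mach_task_self(), port, MACH_PORT_RIGHT_RECEIVE, -1);
          return 0;
      }

      There is nothing BSD-ish about that interface; it comes straight down the Mach lineage described above.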

      1. Liam Proven Silver badge
        FAIL

        Re: Oh, c'mon.

        There are a number of factual errors in your comment.

        No version of Mach is "based on a Unix code base". Mach was designed to be a message-passing microkernel, but for Unix compatibility, there is a large in-kernel Unix server; this is just one of the modules of code within the kernel, although by far the largest. In NeXTstep this was based on BSD 4.x code - 4.3 and 4.4-Lite, I believe - but in Mac OS X it was updated with code from the FreeBSD kernel. This is one of the main reasons that core FreeBSD developer Jordan Hubbard was hired by Apple.

        There is no "BSD emulation layer". The userland of the OS is also taken from BSD, and again, in Mac OS X it was updated with code from FreeBSD.

        Yes, XNU and OS X are radically different from BSD and indeed FreeBSD, but there is a lot of FreeBSD code in there. Your talk of "emulation layers" makes me think that perhaps you don't know what a userland is. I suggest starting your reading here:

        http://en.wikipedia.org/wiki/User_space
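
        And for anyone who wants to see the BSD lineage first-hand, a minimal sketch of my own (the name sysctl_demo.c and the exact output are illustrative, not from Apple's docs): sysctl(3) is a BSD interface that XNU implements, and querying it from the userland shows the Darwin kernel identifying itself.

        /* sysctl_demo.c - query the kernel via the BSD sysctl interface that
         * OS X / Darwin exposes. Build with cc on a Mac. */
        #include <sys/types.h>
        #include <sys/sysctl.h>
        #include <stdio.h>

        static void print_sysctl_string(const char *name)
        {
            char buf[512];
            size_t len = sizeof(buf);

            if (sysctlbyname(name, buf, &len, NULL, 0) == 0)
                printf("%-15s %s\n", name, buf);
            else
                perror(name);
        }

        int main(void)
        {
            print_sysctl_string("kern.ostype");    /* "Darwin" on OS X        */
            print_sysctl_string("kern.osrelease"); /* Darwin kernel release   */
            print_sysctl_string("kern.version");   /* full xnu version banner */
            return 0;
        }

        The same call, with the same signature, works on FreeBSD, which is rather the point.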

        1. jake Silver badge

          Re: Oh, c'mon.

          In addition to what Liam Proven added, when I admin Apple Servers (and the occasional friend's Apple machine), I always head for the command-line (occasionally single-user), and treat it like it's a BSD box. This approach hasn't bit me in the ass. Yet :-)

          Funny how folks turn simple hardware into modern-day "sacred cows", innit.

        2. Francis Vaughan

          Re: Oh, c'mon.

          Late reply.

          "No version of Mach is "based on a Unix code base"." This is probably simply not knowing the history. Prior to Mach 3 the message passing system API was grafted into what was a Unix kernel. This included the first external pager, and other services. I know, I have the source code. We are talking a long time ago. I visited CMU and the Mach team in 1989. Mach 3 was under development at this time, but 2.0 and 2.5 had been deployed for a while. Mach 2.6 was shot though with Sun code, and core kernel components (such as the process switch code) were lifted verbatim. Mach 3 changed and extended the APIs quite a bit.

          The term "userland" is much more recent than much of this technology. Sure, the Unix emulation ran in a user-mode process, so in more modern terminology it is a userland. The microkernel guys were doing this all over the place. Back in the late '80s it was all the rage. Chorus was doing very similar things. The terminology doesn't make the idea different.

      2. SDoradus
        Go

        The article does a disservice to history. Intel Darwin and MkLinux were critical.

        You are absolutely correct to point out that Mach is the important bit and that it's not BSD. But that is not the major problem with the article. It gives the impression that the OS X efforts on Intel were a one-man show.

        Apple's efforts with the underlying code go much deeper and started much, much earlier on an approach with two prongs:

        - The first was Darwin. In 2002 I was running Darwin on a discarded Intel PC - when I say "Intel" I mean every chip on the motherboard was Intel and so was the network card. Darwin for x86 version 1.4x ONLY ran with Intel drivers.

        - The second prong was the expertise derived from Apple's even earlier MkLinux project. This worked on ancient NuBus PPC Macs and used the Mach kernel also, despite looking exactly like Red Hat Linux, right down to the Anaconda installer.

        At one point I had a lot of discarded units from a university teaching lab that we re-equipped with PCs. I also had a lot of older Apple Laserwriter printers with no ethernet capability. For long and long I followed the MkLinux community headed up by David Gatwood at Apple, making good use of the old hardware.

        The old NuBus machines would fit exactly under the Laserwriters, accepted print jobs over Ethernet and transmitted them to the Laserwriters via AppleTalk, reformatting as necessary on the way. To a PC, the Laserwriters looked like an expensive networked multi-job-language printer, thanks to the old PPC Mac running Linux with netatalk, LPD, etc. And from a remote administration point of view, they were a bog-standard Linux box.

  3. FIA Silver badge

    There's a bit of a history of this kind of thing at Apple....

    http://www.pacifict.com/Story/

    1. DJV Silver badge
      Thumb Up

      @FIA

      Hadn't read that before - excellent!

  4. Mondo the Magnificent
    Thumb Up

    OS X & x86!

    As a long-time Mac [and PC] user, I recall how the Mac purists were horrified when Jobs announced that x86-based Macs were going to be released.

    The choice was logical: the PowerPC roadmap was going nowhere. Sure, it wasn't about MHz/GHz, but the G-series chips weren't going to take Apple where they needed to be in the long run.

    Intel's new dual-core processors were fast and ran a lot cooler than their G-series predecessors, and the option of Intel's integrated graphics also made sense [on the Mac Mini].

    In essence the big factor was that Apple were far-sighted enough to support Boot Camp, thus allowing Mac owners to also run [a retail version of] Windows on their systems. This was the golden egg in my book: the best of both worlds, with easy installation and compatibility thanks to Boot Camp generating the drivers CD.

    Today's Core i5 and i7 Macs are awesome, albeit a little overpriced, but again, if you want to game, or you're not part of the anti-Redmond establishment, you can enjoy the best of both on one shiny aluminium-clad desktop or laptop (no offence to the purists for calling a MacBook Pro a "laptop").

    1. Archibald Trumpetbeetle
      Happy

      Re: OS X & x86!

      > you can enjoy the best of both

      The stability of Windows with the value-for-money of Apple hardware. Sign me up!

    2. Charlie Clark Silver badge

      Re: OS X & x86!

      In essence the big factor was that Apple were far-sighted enough to support Boot Camp

      Running on x86 meant that virtualisation tools such as Parallels were a very viable option, without even having to worry about dual-booting.

    3. Kanhef

      Re: OS X & x86!

      The PowerPC chips were problematic, particularly on heat - even the G4 laptops were uncomfortably warm - so I can understand the shift to a different architecture. I'm just disappointed they didn't pick another sane and properly-designed one, such as ARM. Compared to either, x86/x64 is a 35-year-old steaming pile of crap, with one layer of (mostly) backwards-compatible cruft bolted on after another.

  5. Annihilator
    Happy

    Google's 20%

    *Cough*

    http://dilbert.com/strips/comic/2011-12-19/

  6. Mage Silver badge
    Linux

    Boot camp

    Buy overpriced HW and an overpriced non-OEM Windows? Madness.

    Windows on cheap HW + Cygwin or Linux on cheap HW + Wine makes more sense.

    Apples are for people that only want Apple OS.

    In the future, only Apple Approved Applications.

  7. Quentin North
    Thumb Up

    Before PPC there was 68K

    For those of you who go back far enough, you will recall that the PPC-to-Intel migration was not the first time Apple had changed platform with minimal disruption. Prior to PPC being the processor of choice, all Macs were based on the 68K chipset, and when Apple introduced the PPC processor they included a pretty decent 68K emulation layer in the OS which allowed old applications to still run. In fact 68K support was still in Mac OS X right up to 10.4, if I recall correctly. The original term "fat binary" derived from having 68K and PPC code in one application package, a feature made possible by the use of the Mac OS resource fork file structure.
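
    As a footnote to the fat binary point: the PPC-to-Intel transition reused the same idea in Mach-O form, with "universal" binaries carrying one slice per architecture behind a small header rather than in the resource fork. Here's a minimal sketch of my own (the file name fat_demo.c is just illustrative) that reads that header:

    /* fat_demo.c - print the architecture slices in a Mach-O universal (fat)
     * binary. The fat header is stored big-endian, hence the ntohl() calls.
     * Build with cc on a Mac and run against e.g. /bin/ls. */
    #include <mach-o/fat.h>
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        struct fat_header fh;
        if (fread(&fh, sizeof fh, 1, f) != 1) { perror("fread"); return 1; }

        if (ntohl(fh.magic) != FAT_MAGIC) {
            printf("%s has no 32-bit fat header (thin or 64-bit-header binary)\n", argv[1]);
            fclose(f);
            return 0;
        }

        uint32_t nfat = ntohl(fh.nfat_arch);
        printf("%s contains %u architecture slices:\n", argv[1], nfat);

        for (uint32_t i = 0; i < nfat; i++) {
            struct fat_arch fa;
            if (fread(&fa, sizeof fa, 1, f) != 1) { perror("fread"); break; }
            printf("  cputype %d  offset %u  size %u bytes\n",
                   (int)ntohl(fa.cputype), ntohl(fa.offset), ntohl(fa.size));
        }

        fclose(f);
        return 0;
    }

    Apple's lipo tool does essentially the same walk, plus extracting and combining the slices.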

  8. Lance 3

    What are the secrets?

    http://www.macworld.com/article/1045157/liveupdate.html

    From WWDC 2005:

    "Jobs then confirmed a long-held belief that Apple was working on an Intel-compatible version of Mac OS X that some have termed “Marklar.”

    Mac OS X has been “leading a secret double life” for the past five years, said Jobs. “So today for the first time, I can confirm the rumors that every release of Mac OS X has been compiled for PowerPC and Intel. This has been going on for the last five years.”"

  9. P. Lee
    Coat

    I wonder

    ... if the transition to ARM desktop/laptop CPUs will be as smooth.

  10. King1Con
    Flame

    Overclocking???

    'obliged to run antiquated chips at higher frequencies and higher temperatures – essentially overclocking the parts.'

    Ummm... when going to the original cited article, also from El Reg:

    'Overclocking is when the user raises the clock frequency beyond the recommended frequency marked on the processor. Chips are capable of operating at several speeds, and are graded as they leave the fab: good batches are judged capable of running higher frequencies. Since the parts Apple is using in the latest Macs run at their marked frequency, Apple can't strictly be accused of "overclocking".'

    nuff said...
