Re: While [RISC OS] was radical in 1987, it's very retro now.
Eh, close enough for government work. ;-)
I own kit that runs RISC OS 2, 3 and 5. I debated this with myself but decided it was just a nerdy detail.
Again, that is the language, not the operating system. The article was about operating systems, and at best only tangentially about programming languages.
That is Oberon the programming language, not Oberon the operating system.
The first draft of the article did mention the pyboard, but I asked the editor to add the C.H.I.P. as it seemed more relevant. To keep it short, it replaced the link to the pyboard.
The article is not about programming languages. It is about operating systems. Thus, it doesn't matter what compiler is hosted on top of Linux, that does not address the issue that Linux is a problem, as it is too large, too complex, too slow, too cryptic and too difficult to be a good educational tool.
(RISC OS, meanwhile, is better than that on all fronts, but it's hopelessly obsolete.)
It doesn't serve any market. It's an academic curiosity.
But it serves as an example of a different, better way to build an educational OS that can be, and was, used for real work.
OBERON IS NOT PASCAL, any more than you are Homo erectus.
Yes, there are current OSes written in Oberon and its successors. A port of them is entirely viable and would be useful and interesting, too.
The article did NOT say it was a competitor to the Pi Zero.
The article said that the Micro:Bit, CodeBug and C.H.I.P. were competitors to the Pi Zero.
Perhaps you should reread it more carefully this time?
Sheesh, people. You lot couldn't have failed any more heroically to NOT GET IT.
 I was not promoting the Oberon SBC.
I was not even promoting the Oberon OS itself.
What I was saying is this:
As we now have effectively-free super-simple hardware, we now need free, super-simple software to go with it.
I am not proposing that Oberon is this software. It was merely a convenient example, since this little device just recently shipped (and sold out).
The culture of computing for several decades has been C and Unix or Unix-like OSes. Oberon shows that an essentially one-man language, compiler and OS can produce a real, viable, practical, usable OS that an entire University department ran on for decades.
The IT industry assumes that operating systems have to be written in C to work -- wrong -- and must by nature be big and complex -- wrong.
I'm not saying it should be Oberon. It should not be Oberon. Oberon is obsolete.
But it should be something simple, clean, modern, written in a single language from the bottom of the OS stack to the top -- and that language should not be C or any relative or derivative of C, because C is old, outmoded and there are better tools: easier, safer, more powerful, more capable.
We should start over, using the lessons we have learned. We should give kids something small, fast, simple, clean, efficient. Not piles of kludge layered on top of a late-1960s hack.
No, we should not be teaching children with "real world" tools. That is for job training. Education is not job training, and vice versa. You don't teach schoolkids woodwork with chainsaws and 100m tall trees. You don't teach chemistry in an oil refinery. You use little safe educational tools.
 Oberon is not Pascal.
You lot evolved from ape-like hominids. (Clearly not very far, in some cases.) That doesn't mean you are still apes. You're human now. (FSVO 'human'.)
Pascal evolved into Modula-2, which evolved into Oberon, Oberon 2, Oberon-V, Oberon-07, Active Oberon, Zonnon and so on.
Oberon the OS evolved into AOS, then into Bluebottle, then into A2.
 Oberon the language and Oberon the operating system are not the same.
No, there is no Oberon OS for the Pi. Yes, there are compilers for Oberon the language for ARM and for the Pi. Using them, someone could port the OS, sure. It has not been done, partly because, as I said, it's an effectively obsolete set of tools that has been long superseded.
You know there _was_ a version of LocoScript for the PC.
You could run that on 32-bit Windows, or under XP Mode on Windows 7. No idea where you'd legally get a copy now, though. The company hung on for a long time, but I think it's dead now. There's a mirror of the homepage here:
Another company, SD Microsystems, serviced the aftermarket for decades, but I think they've gone too.
The point being, important lessons were learned building the Unix shell. Yes there's cruft too -- it's over 40 years old. But it's polished smooth, for all that.
PowerShell learns few of those lessons.
George Santayana said: "Those who do not remember the past are condemned to repeat it."
Henry Spencer modified this to: "Those who do not understand Unix are condemned to reinvent it, poorly."
Microsoft is still learning to reinvent Unix -- slowly separating text-mode core OS from graphical layer; learning the importance of a rich command line; learning to write graphical commands that emit said CLI, easing automation. But it's not doing it terribly well.
The trouble is that the Stockholm Syndrome world of corporate IT has been brainwashed into believing that it's the only way and to frantically deny the Great Heresy that is Unix.
[...] Note the obsessive use of abbreviations and avoidance of capital letters; this is a system invented by people to whom repetitive stress disorder is what black lung is to miners. Long names get worn down to three-letter nubbins, like stones smoothed by a river.
This is not the place to try to explain why each of the above directories exists, and what is contained in it. At first it all seems obscure; worse, it seems deliberately obscure. When I started using Linux I was accustomed to being able to create directories wherever I wanted and to give them whatever names struck my fancy. Under Unix you are free to do that, of course (you are free to do anything) but as you gain experience with the system you come to understand that the directories listed above were created for the best of reasons and that your life will be much easier if you follow along (within /home, by the way, you have pretty much unlimited freedom).
After this kind of thing has happened several hundred or thousand times, the hacker understands why Unix is the way it is, and agrees that it wouldn't be the same any other way. It is this sort of acculturation that gives Unix hackers their confidence in the system, and the attitude of calm, unshakable, annoying superiority captured in the Dilbert cartoon. Windows 95 and MacOS are products, contrived by engineers in the service of specific companies. Unix, by contrast, is not so much a product as it is a painstakingly compiled oral history of the hacker subculture. It is our Gilgamesh epic.
What made old epics like Gilgamesh so powerful and so long-lived was that they were living bodies of narrative that many people knew by heart, and told over and over again--making their own personal embellishments whenever it struck their fancy. The bad embellishments were shouted down, the good ones picked up by others, polished, improved, and, over time, incorporated into the story. Likewise, Unix is known, loved, and understood by so many hackers that it can be re-created from scratch whenever someone needs it. This is very difficult to understand for people who are accustomed to thinking of OSes as things that absolutely have to be bought.
Did I say it wasn't?
No. I said the management tools suck. I stand by that.
But once we grow out of the era of whole-system virtualisation -- and Docker is helping -- then it will all become rather irrelevant, anyway.
Not Good Enough, VMware. 2/10, must try harder.
It's 2015. Virtualisation is free now. There is a choice of both proprietary freeware & FOSS hypervisors & management tools. VMware still has a stranglehold on the high end, sure, but MICROS~1 is working hard to attack that, whereas the FOSS crowd have caught on to what I was writing about on the Reg in 2010 and are starting to develop better, more mature tools than VMware's 1960s-style whole-system-emulation approach.
And still, the independent emperor of whole-system virtualisation requires Windows clients? My leg, it is being pulled.
It is long long past time. I can understand Hyper-V Server requiring a current version of Windows to manage it -- I mean, MS has to sell licences to live -- but a /rival/ to MICROS~1 requiring a MICROS~1 product to use the rival's? That is *insane*.
Even Microsoft itself produces a free client for Terminal Server & gives it away, for nothing, for both old versions of Windows and for Mac OS X -- and the protocol is well-enough described that there is a choice of FOSS clients for Free OSes; choose your desktop, there's a client.
In essence, the VMware client is not massively more complicated. The logic of machine creation & management is just a few dialogue boxes. Even implementing stuff like remote mounting of disk images, or upload/download of VM images, is nothing complicated. All the hard work of the fancy inter-host clustering and migration is done by the hypervisor; the client just has to provide a UI to the raw command line or whatever it is.
There should be a cross-platform client served up right into your browser -- any browser -- when you connect to the host, rendered in modern dynamic HTML or, at a push, in Java. And a binary client available for the leading 2 commercial OSes and enough code or docs for the FOSS people to implement one too.
Having 3 clients for Windows, crappy as they apparently are, which don't even support all extant versions of the host, means the company is just not trying.
This isn't awesome or epic; it's sad, a failure.
PowerPC might have had a chance. As it is, the last vestige in the GP computing market is the Amiga X1000, as discussed here:
Uses a PWRficient PA6T-1682M, made by P.A. Semi -- now, ironically, a subsidiary of Apple.
Talk about things going in circles...
Oh, yes, Apple-only... except for a few minority platforms. So tiny you probably never heard of them. Let me see, there was...
• the PlayStation 3, from an obscure little Japanese company called Sony;
• the Wii and Wii U, from another unheard-of Japanese outfit, Nintendo;
• oh, and the Xbox 360... who was that? Ah, yes, Microsoft.
80 million units of the first, 100 million of the second, 80 million of the third. Over a quarter of a billion PowerPC CPUs shipped in those 3 alone.
But they're not desktop computers, so you ignore them.
That's ignoring embedded systems etc.
Tell me again how that means Apple-only, would you?
Good for you.
I wouldn't. Why? Because currently I have 52GB of Dropbox and 15GB of Google Drive, without paying a penny.
And I suspect that few others would, either.
Ubuntu does Linux OSes. It should not have been mucking around with cloud services, music stores etc. when there are others that do those things far, far better. It is foolish to enter a crowded marketplace with strong, established players unless you have a remarkably compelling offering, which Ubuntu didn't.
There's an old maxim that clearly bears repeating.
Don't put all your eggs in one basket.
The LBA limit was an early 1990s thing.
The BIOS hard disk handling was by cylinders, heads and sectors-per-track (CHS). Various revisions and vendors limited these to different numbers, but effectively, the limits were something like 1024 cylinders, 16 heads and 63 sectors per track, meaning a max disk size of 504MB.
Changing from CHS addressing to LBA allowed more - depending on implementation, either 4GiB or 8GiB. 8GiB was the limit for a while - e.g. the 1st 2 generations of G3 Macs could only boot off the first 8GB of a hard disk, because of early EIDE controllers.
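The arithmetic behind those barriers is easy to verify. A quick sketch (the 16-head/63-sector figures are the usual worst case where the BIOS and IDE limits intersect; the 8GiB figure here assumes a 24-bit count of 512-byte sectors, which is one common reading of the early LBA ceiling):

```python
# Classic BIOS/IDE CHS limits as described above.
CYLINDERS, HEADS, SECTORS, SECTOR_BYTES = 1024, 16, 63, 512

chs_max = CYLINDERS * HEADS * SECTORS * SECTOR_BYTES
print(chs_max // 2**20, "MiB")   # 504 MiB -- the famous ~504MB barrier

# LBA did away with geometry; the next ceiling came from the width of
# the sector-count field. 24 bits' worth of 512-byte sectors gives:
lba24_max = 2**24 * SECTOR_BYTES
print(lba24_max // 2**30, "GiB")  # 8 GiB
```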
No, the DOS limits were /much/ earlier and older.
From old old memory:
MS-DOS 1.x didn't support hard disks.
MS-DOS 2.x did, but just one, of up to 10MB.
MS-DOS 3.0 supported a single hard disk partition (per drive) of up to 32MB.
MS-DOS 3.2 supported two partitions per drive, so 2 x 32MB.
MS-DOS 3.3 supported one primary and an extended partition containing as many 32MB "logical drives" as you wanted. (I built an MS-DOS fileserver with a 330MB hard disk once - it had drive letters C:, D:, E:, F:, G:, H:, I:, J:, K: and a leftover 11MB L: drive. Messy as hell, but all you could do without 3rd party "disk extenders" such as Golden Bow's. The server OS was 3Com 3+Share, if anyone remembers that.)
Lots of vendors implemented hacks and extensions to allow bigger disks, but they were all mutually incompatible and many failed to work with some 3rd party software. Of course, anything that directly accessed disk data structures, like a defragger or a disk-repair tool such as Norton Utilities was 100% guaranteed to catastrophically corrupt any such extended disk setup.
The one that caught on was Compaq DOS 3.31. It used an extension of FAT16 that allowed bigger clusters - still at most 65,536 of them, but multiple 512-byte sectors per cluster, permitting bigger partitions. The max cluster size was 32KiB, so the max disk size was 65,536 x 32KiB = 2GiB.
This is the one that IBM adopted into MS-DOS 4 and it became the standard. However, disks over 256MB used inefficient 8KiB clusters - i.e. files were allocated with a granularity of 8KiB, so even a 1-byte file took 8KiB, and an 8.0001KiB file would take 16KiB.
This became disastrous over 1GiB, where the granularity was 32KiB. Roughly 20-30% of disk space would be wasted to this granularity as inaccessible "slack space".
This was only fixed in Windows 95 OSR2 with FAT32, which permitted huge disks - up to 2TiB - with much finer granularity.
But all of DOS 4, 5 and 6.x permitted disk partitions of up to 2GiB.
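The cluster arithmetic above can be sketched like this (a simplified model: real FAT16 reserves a handful of cluster values, so the true usable count is slightly under 65,536):

```python
# Simplified FAT16 model: partition capacity = cluster count x cluster size.
MAX_CLUSTERS = 65536   # 16-bit cluster numbers (a few values are reserved)
KIB = 1024

for cluster_kib in (2, 4, 8, 16, 32):
    max_part_mib = MAX_CLUSTERS * cluster_kib * KIB // 2**20
    print(f"{cluster_kib:>2} KiB clusters -> max partition {max_part_mib} MiB")
# 32 KiB clusters give 65,536 x 32 KiB = 2 GiB, the FAT16 ceiling.

# Slack space: a 1-byte file still occupies one whole cluster.
one_byte_file_on_16k = 16 * KIB   # bytes actually allocated on disk
```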
The point being, there is disagreement over how fast it's going, partly over the maths, partly because of incomplete models, partly because we don't know all the factors yet.
But that doesn't mean it /isn't/ happening, and it is extremely foolish to think "hey, some estimates say no problem, so we're FIIIIIIIIIIIINE!"
Jeez, so much disinformation in the comments.
It is *nothing* to do with drivers for Intel GMA9x0 graphics; that's a side-effect. There are in fact 64-bit drivers for GMA950:
The first-gen Macintels had Core Solo & Core Duo CPUs. These were 32-bit-only chips. These Macs can only run up to Snow Leopard, which includes both 32-bit and 64-bit kernels.
Ref: Apple cheat-sheet - http://support.apple.com/kb/ht3696
The 2nd gen had Core 2 Duo, which can run 64-bit code, but the EFI firmware is still 32-bit.
Lion includes both 64-bit and 32-bit kernels and thus can run on machines with 32-bit firmware, so long as they have at least 2GB of RAM. (32-bit Macs have the same limits on RAM above 3-and-a-bit gig as 32-bit PCs.)
Ref: Apple support again - http://support.apple.com/kb/ht4287
Mountain Lion & Mavericks only have 64-bit kernels. They therefore require machines with 64-bit EFI to boot at all.
However, as the above article states, the higher GPU requirements of the newer OSes' versions of OpenGL, OpenCL etc. do mean that some models whose CPU and EFI are compliant will still not work.
Bear in mind that the original US Timex Sinclair 2068 is /not/ the same as the Portuguese Timex Computer 2068. But you're right, they /must/ have known and it's a criminal oversight.
> Nothing against Ubuntu, but you'd be lacking in senses if you decided to pick Ubuntu over CentOS
> for your server needs after this news.
I disagree. A lot of sysadmins rate Debian as considerably better than RH. Many still prefer apt-get over yum and RPM, or Debian's openness over RH's lack of it.
Ubuntu Server is basically Debian with a fixed, regular release schedule and a cleaner install, minus tasksel, which those rolling their own don't need anyway. I know some sysadmins who prefer Ubuntu Server to Debian for this reason: /nothing/ is installed by default, not even ssh. 3rd party support for Ubuntu is also now more plentiful than for Debian or Red Hat.
> I thought that RHEL was free to obtain and run but you paid for a support contract?
No. RHEL is purely commercial and is only available by buying it.
CentOS & Scientific Linux take RHEL's published source files & recompile them. Their OSes are Free and freeware: no charge, no support.
Oracle takes the sources, recompiles them, gives the binaries away for nothing but charges for support, as you describe.
Oracle is thus offering RH's own product for free & charging less for support. Various commentators, myself included, speculated that this was in an effort to reduce RH's share price for possible hostile acquisition. However, this hasn't happened. Possible reasons are:
• Perhaps people don't think Oracle can support someone else's code as well as the code's authors can.
• Perhaps people just don't trust Oracle.
As Tynemouth Software provides -- for a price:
This is true, yes, but I did specifically say:
> Then you need to licence the [...] layered products on top, such as Exchange or SQL Server.
> Of course, various bundles and deals apply to all this.
Those layered products include the high-end management tools.
It's a complicated, hairy mess, as many Microsoft resellers said to me in my background research.
That is true of /all/ forms of virtualisation, though. If you're running dozens of Windows or Linux or whatever instances on top of a single copy of Windows Server with Hyper-V and that host copy of Windows goes down, all your instances are gone, instant toast.
OS virtualisation makes no difference to this at all.
Of course, this is what clustering and failover are for, but then again, if Windows integrated some form of containers, there is no reason at all that they could not fail over to containers on other hosts.
Nope, IBM VM is something quite different again - this is using one OS kernel to multitask multiple instances of another, different OS kernel, one per user. This is directly homologous to, say, running multiple Windows Server instances on VMware, or running multiple Linux instances on top of Hyper-V Server.
Again, do buy the ebook. :-) I'm not on royalties here! It's just the best intro to the basic underpinnings that I could do.
No, timesharing is something quite different. May I suggest my Reg ebook?
The source articles are still on the Reg if you search for them.
Timesharing means simultaneously multitasking multiple interactive user sessions; it's something totally different.
Yes, these certainly make it simple, but at (literally) a steep price.
No problem - but they've changed their tune, then. My source was the Microsoft speakers at _Microsoft's launch event for WS2012_ which I covered here:
... and here:
This is the modern Commodore 64:
Neither OS/2 (or eComStation) nor VMS is mentioned, because none of them is FOSS or even freeware.
I am aware of FreeVMS but it is old, inactive, incomplete and has not advanced in years. I think it's dead.
Risc OS 5 is not open source; it is merely Shared Source.
Castle's licence explicitly forbids porting to x86.
In any event, if they did, it would be of little interest. Architecturally, it is primitive, with no true memory protection, no virtual memory, no disk partition support and no preemptive multitasking in the kernel (bizarrely, the *Text Editor* does this. Yes, really.)
That one is *ancient*. Try RPCEmu if you want something current:
Now with Acorn Phoebe (Risc PC 2) emulation!
Even that's been addressed now:
The QNX demo disk is still around:
Mere clean-room development and reverse engineering is not sufficient if the APIs, look and feel etc. are themselves legally-protected.
Also, to retain Windows compatibility, ReactOS is developed with and must be built with Microsoft compilers. I am not sure that the whole thing /can/ be compiled with GCC, but if even parts are, then it ceases to be able to execute Windows binaries and drivers, I believe.
Seriously, I don't think they have a snowball's chance in a supernova if they ever get close enough to be any kind of a serious option or even a minor threat.
Yes, there is. Read more carefully.
You may note the author of that piece and compare to the current one. :¬)
Oberon is slated for inclusion in a future article that I am drafting on the use of non-traditional programming languages for OS design and implementation.
Bluebottle might make the cut as well. :¬)
In my opinion, yes. I think it's a lot more attractive than, for example, Windows 8 or most Linux distributions - for example, KDE has been beaten very hard with the ugly stick.
The most attractive-looking OSes ever in my book were BeOS and NeXTstep.
AFAIK NitrOS9 is just an enhanced version of the Dragon 32 implementation of OS9.
There are far more modern versions of OS9 available for x86, various RISC chips and so on.
However, it is not included because it is neither FOSS nor freeware.
Not included because it was not FOSS or even freeware, and is also very long obsolete.
All the OSes mentioned can be downloaded and run on modern hardware at no cost.
Amiga OS is indeed mentioned in the article, with links to Amiga OS 4, MorphOS and both AROS and Icaros.
However I think that the coward to whom you're replying has deleted their comment, the wuss. :¬)
House does indeed look amazing. Sadly, I only discovered it subsequently, but I am drafting a future article looking at OSes written in unconventional programming languages...
Not included, because it is not FOSS or even freeware.
TempleOS is wonderful. It is written by a severely mentally ill man who is in long-term care. Yes, he has recently got religion bad. Earlier versions had different names and did not have the religious imagery.
It is not OK to mock people with mental illness, even if they are a genius.
MenuetOS is in the article. Read more carefully. :¬)
Contiki is not included because (AFAIK) it does not run on x86 - it focusses on much lower-end hardware. It was originally designed for 1980s 8-bit home micros: it runs on the Commodore 64, Amstrad CPC series and so on.
All Linux ISOs for years have been bootable off both optical media and USB. The FOSS Unetbootin tool lets you make a bootable USB key.
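Those "hybrid" ISOs boot from USB because the image carries an MBR boot signature as well as the optical-disc filesystem. A minimal illustration of the idea (a simplified check of my own, not what Unetbootin actually does internally):

```python
# A "hybrid" Linux ISO ends its first 512-byte sector with the MBR boot
# signature 0x55 0xAA (at byte offset 510) -- the marker a BIOS looks for
# before it will boot a disk. Plain data ISOs lack it.
def looks_usb_bootable(image_path: str) -> bool:
    with open(image_path, "rb") as f:
        f.seek(510)                       # last two bytes of sector 0
        return f.read(2) == b"\x55\xaa"   # MBR boot signature present?
```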