The Archimedes was a brilliant machine.
Way ahead of its time. I wrote a music tracker on it that worked with four tracks of samples, in BASIC. Try doing that on a Miggy or ST.
Not to be outdone by the open sourcing of an early version of MS-DOS for Intel chippery, version 5 of RISC OS – arguably the original commercially successful Arm operating system – is going fully open source.

History lesson

RISC OS was designed and developed by Acorn Computers, once dubbed the Apple of Britain, in the 1980s to …
"Pretty much everyone got rich off the breakup of Acorn... except the community of enthusiasts and developers that had bought into and supported Acorn products for a decade or more."
Dealers also lost out. My company, which specialised exclusively in the RISC OS market, had just invested significantly when Acorn canned the desktop division :-(
Great news about the licensing and a very good informative article!
When the BBC Micro was dominant, the company realised that it had enough money to develop either a next-generation school micro or its ARM processors. It chose the latter and, financially, that decision cannot be faulted. The Archimedes/RISC PC was always going to be a kooky and expensive niche, supported only by a shrinking band of Acorn fans, but its processor is fast taking over the world.
Like me. I went through three or four RISC OS machines and still have a working RISC PC. Had to move to Wintel for a few years until Linux desktops became productive enough.
Efforts leading to the present announcement have been going on for a long time. RISC OS Open Ltd, aka ROOL, was formed several years ago with exactly this as its ambition. Looks like the eternal bitchfights between IP owners have at last been settled.
H'mm. Maybe my next upgrade will be a Pi dual-booting RISC OS and Devuan. That would be nice.
And I'll tell you what else. How about a nigh-on unhackable ROM-based, compact and lightning-fast but still fully-featured, and Open Source, OS to power the IoT? That would be nice, too.
Quite amazing that OSes which once powered fully-fledged computers can now be considered viable OSes for IoT devices.
What did Windows, OS X, and Linux bring to the party apart from hardware improvements just to be able to run them in the first place?
> What did Windows, OS X, and Linux bring to the party apart from hardware improvements just to be able to run them in the first place?
As a '90s gamer, DOS was my main environment, and Windows was largely ignorable. RISC OS just wasn't available for my x86 machine. I'd used RISC OS at school just as I had used Atari ST and Amiga equivalents at friends' houses (and was jealous of their sprite-based games libraries).
OS X seems to have served Apple and its users well, especially those (mainly DTP and music professionals) who hung on in there during the '90s. I've not heard of RISC OS being considered by Apple before they brought in NeXT.
I can't speak confidently as to the others, which is going to make me sound like an apologist, but I know that macOS née OS X spends modern-scale processor cycles on:
(i) using the full set of PDF primitives for all desktop drawing;
(ii) the dynamic dispatch that underlies its approach to UI building;
(iii) contrasted with Risc OS, the various context switch costs associated with preemptive multitasking and a full implementation of protected memory*; and
(iv) various things that were historically optional: the file indexing that goes into Spotlight, the background backup of Time Machine, the remote syncing of iCloud file storage, etc.
* if memory serves, Risc OS protects applications from each other, but doesn't protect the OS from applications.
Do you necessarily want these things? Tough, you're getting them.
> And I'll tell you what else. How about a nigh-on unhackable ROM-based, compact and lightning-fast but still fully-featured, and Open Source, OS to power the IoT?
It would, but the trouble with it being ROM based is how to patch a vulnerability that is undiscovered at the time of shipping - as we've just seen with Amazon's Free RTOS. To quote Douglas Adams "the problem with something that is designed never to go wrong is that it is a lot harder to fix when it does go wrong than something that was designed to go wrong in the first place".
Of course you could take a small RTOS and subject it to much scrutiny, or take a mature RTOS on the grounds that any vulnerabilities would have been found by now. There's also active research into OS kernels that are mathematically proven to do what they're supposed to – formal verification.
> the trouble with it being ROM based is how to patch a vulnerability that is undiscovered at the time of shipping - as we've just seen with Amazon's Free RTOS.
Yes, it's a patent for what was common in 80s home micros. And, of course, it's by Apple in 1999.
ROM based OS can be as permanent or as upgradeable as you like. For example the EAROM (Electrically Alterable ROM) can be reprogrammed by enabling a special hardware signal, while Acorn made the ROM physically swappable and distributed OS upgrades by sending out a replacement chip. The BIOS in the average desktop PC faces similar issues but is mostly a lot more easily updated - and hence a lot more easily hacked.
An OS in ROM is not a panacea for all ills, but it helps finesse security management.
Well it's not like they had a choice - they couldn't do the former as there was no 6502 replacement that suited them so they did the latter. I think the article implies that.
This was an age when, e.g., IDE was a novelty. The problem was that Acorn were used to doing peripheral control on the CPU to cut hardware costs and arrive at a viable price point, but you can't have your CPU disappearing into its own microcode for a dozen (plus some random number of) clock cycles without your OS thinking the hardware has failed. IIRC MUL was the first ARM instruction to take more than one clock tick, and if you watch an ARM running RISC OS it's forever jumping into and out of Supervisor Mode.
There's no need to be revisionist over the history. The truth is the Master series hung around in education for an embarrassing length of time and left the door open for the competition which consequently sealed Acorn's fate. That ARM didn't disappear is 50% excellent judgement and 50% good timing/luck.
As for unhackable: as with the BBC MOS, RISC OS routines were called through vectors, plus those OS ROM modules were fully relocatable and could be replaced by soft-loaded RAM-resident versions – and often were, because of patching. Hacking the OS was half the fun, and yes, still got mine too.
Only compared to other Basics or Fortran.
It was prehistoric and there were real languages available.
BBC Basic 1981 (I thought earlier?) according to Wikipedia.
UCSD Pascal was first released in 1978. I used it on an Apple II.
I'd argue that even Forth (since 1972, also built in on Jupiter Ace in 1982) is better than Basic for learning.
There were loads of good languages for learning that could have run, and many did run, on the BBC Micro, Apple II, IBM PC (it originally had a BASIC in ROM and could use a cassette tape, I think?) and Research Machines kit – especially on CP/M, from about 1977; the PC didn't reach the UK & Ireland till 1981.
You're not wrong, but I'd say, horses for courses. As in, the Beeb and their BASIC was years ahead of what was on offer for comparable markets and users at the time. C64 BASIC wasn't up to much if I recall and you certainly weren't getting any of that inline assembler goodness, or rich documentation. And all this out of the box, with Mum and Dad not having to spend a penny more to get little Johnny and Jane started on this new-fangled computing thing.
The get-started-for-cheap-n-easy thing shouldn't be underestimated. I'd posit that Modula-2 and suchlike were not cheap or easy at the time and it wasn't until Borland Turbo Pascal and Zortech C came out on the PC at the bargain price of $29.99 that there was a comparable replacement on the PC.
I have often thought about what it was that made the micro revolution happen in the 1980s.
My thoughts are that one of the reasons was the immediacy of getting something done that hooked the youth of the '80s. Rocking up to a machine, typing a four or five line program followed by RUN, and having colours splatted all over the screen, or random beeps coming from the speaker says to a newbie "look, you can do magical things", and they're hooked, almost in no time flat.
BASIC was the best tool at the time for this first step. Quick to learn, easy to remember, and immediate.
I look at what is necessary to learn Pascal, Modula and the other compiled languages. First you have to learn the editor. Then you have to write the code. Then you work out how to compile, and only then (assuming that you don't get any cryptic errors from the compiler), you get to see the results. Even using IDEs puts too much complexity in the first step before you achieve anything.
Most of the youth of today will turn off after exhausting their limited attention span at the point that you have to invoke the compiler. And this IMHO is the problem with most modern languages used for teaching.
Add to this the need to learn quite complex language constructs before being able to write syntactically correct code in things like Python, currently the poster boy of teaching languages, and you will turn off more kids than you attract, even if they are quite able.
I saw this in the early '80s. I worked in a UK Polytechnic, and had several intake years on HNC and HND computing courses coming in having learned BASIC on Spectrum, VIC-20 and C64 systems (amongst others), who sat down at a terminal, learned how to log in and use an editor like EDT, and started writing Pascal, complaining bitterly that this was not what they thought computing was all about, and why was it so complicated! Once they got over the hump they were fine, but some did not get that far.
Similarly, my father learned to program on Spectrum and BBC micros, and as a retirement present in about 1992 was given an 8086 MS-DOS PC. One of the first things he asked me was "How do I write a program that draws pictures and plays sounds?" (things he had been doing for years to aid the teaching he was doing), and I had to say that it was not built into MS-DOS, and that even GW-BASIC couldn't manage it by itself without extra software packages.
I don't believe that he ever wrote another program ever again.
Your comment about Forth is interesting. I learned Forth as an additional language (PL/1, APL, C and Pascal were my primary languages then) back in the 1980s (ironically on a BBC Micro with the HCCS Micro Forth ROM, not Acornsoft Forth), and I would say that it is an extremely poor language for a newbie to learn programming in. The stack-based arithmetic system is completely non-intuitive to someone who has not studied computing already (good grief, most people have difficulty understanding and using named variables in a computer program), and although you can define meaningful words in the dictionary, most of the primitives are terse and impossible to guess the meaning of without reading the manual. And even getting to the point where you could define a word would tax most kids I have known.
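To illustrate the point about stack arithmetic: in Forth, `(3 + 4) * 5` is written `3 4 + 5 *`, with operands pushed onto a stack and each operator consuming the top values. A minimal postfix evaluator in Python (an illustration of the mechanics, not Forth itself) shows exactly what trips newcomers up:

```python
# Minimal RPN (postfix) evaluator, illustrating Forth-style stack
# arithmetic: operands are pushed; an operator pops two values and
# pushes the result. "(3 + 4) * 5" becomes "3 4 + 5 *".
def rpn(expression):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,  # Forth's / is integer division
    }
    for token in expression.split():
        if token in ops:
            b = stack.pop()  # note the order: the second operand pops first
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(int(token))
    return stack[-1]  # Forth would simply leave this on the stack

print(rpn("3 4 + 5 *"))  # 35
```

The order-of-popping subtlety alone (why `10 2 -` is 8, not -8) is the kind of thing a newbie hits before they have even met the dictionary.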
At least most Fortran/Algol/BASIC/COBOL based languages have their keywords closely matching English and sometimes mathematical languages. And BASIC scores well in not having strict typing, something that becomes more important as you get more proficient, but a real barrier to someone just learning.
So in my view, as a first stepping stone, BASIC is a good start to gain the concepts of programming, followed on by a move to a more comprehensive language. And BBC BASIC was one of the fastest and best.
Theoretically yes. I've certainly seen copies of Arthur (RISC OS 1 by another name) and RISC OS 2 in EPROM. They only needed 2 × 500K or 1MB ROMs/EPROMs. I'm not sure if they ever made any 16-bit-wide 2MB EPROMs that RISC OS 3 and 4 needed.
RISC OS 5 is available in one-time-programmable PROMs.
RISC OS 4 was available in flash using chips on a carrier PCB, but as they couldn't be flashed in situ (no read/write lines connected on the 40-pin socket), PROMs were a cheaper/easier solution.
You need to watch the address bus width. The ARM processors used by Acorn, including the StrongARM, used 26 bits (I think, getting on for 20 years since I left Acorn) of the notional 32. More recent versions use the whole 32, and I think RISC OS 4 and 5 are intended for the full-width architecture. It may or may not be possible to build for the older architecture.
You can even use pin-compatible flash memory. You can make an adaptor for the BIOS chip on a 486: boot a floppy that updates the BIOS, swap a jumper so the blank flash chip is connected (I've actually swapped out a BIOS chip while powered on and put in a blank DIL IC), then use the BIOS update command with an argument for your custom BIN file. You can make a simple adaptor and use a 486 PC-compatible reprogrammed BIOS IC as a cartridge on an original Game Boy. There used to be a scope adaptor. You can write the program for the original Game Boy using Modula-2, Pascal, C or Z80 assembler on a CP/M emulator on DOS, or in DOSBox on MacOS, Windows, Linux or even RISC OS. https://www.riscos.info/index.php/DosBox.
I've not used actual EPROMS since pin compatible Flash Memory came out.
> ...without preemptive multitasking it feels a little fragile these days. If you have a process go rogue for some reason, you can lock up the entire system.
The thing that sticks with me today however, is that for all the years I used these things, I don't remember that ever happening. The software quality back then must have been fantastic.
RiscOS really was magnificent but...
...without preemptive multitasking it feels a little fragile these days. If you have a process go rogue for some reason, you can lock up the entire system.
It runs on the Raspberry Pi but can only use one of the CPU's four cores.
Run it on the single-core Zero then! I have a copy of the version that was released for the Pi 'originally' and it's gobsmackingly fast compared to Raspbian. It seems to run like a dream on the Zero, but the apps were few and far between. I really wished someone would put the GNU developer stack on it, and now it seems my dream may come true.
Whilst it was co-operative multitasking with no interrupt-driven scheduler, the points at which a process could lose the CPU were built into many of the system calls, including the ones to read keystrokes from the keyboard and the mouse position.
What this meant was that if you were doing any I/O through the OS, there were regular points where control could be wrested from a process.
That's not to say that it was not possible to write a process that would never relinquish the CPU, but most normal processes are not written like that.
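The yield-inside-the-system-call design described above can be sketched in a few lines of Python (a toy illustration with invented names, not real RISC OS code): each "OS call" is a point where the scheduler may hand the CPU to another task.

```python
# Toy cooperative scheduler: a task only loses the CPU inside an "OS
# call", mirroring how RISC OS reschedules inside system calls such as
# keyboard/mouse reads. All names here are illustrative, not real
# RISC OS APIs.
from collections import deque

ready = deque()  # run queue of generator-based tasks
log = []         # trace of what ran when

def os_read_key(task_name):
    """A stand-in 'system call': the yield is the reschedule point."""
    log.append(f"{task_name}: in OS call, may lose CPU here")
    yield  # control returns to the scheduler

def task(name, iterations):
    for _ in range(iterations):
        log.append(f"{name}: doing work")
        # Doing I/O through the OS gives the scheduler a chance to run
        # someone else; a task that never called the OS would hog the CPU.
        yield from os_read_key(name)

def run():
    while ready:
        t = ready.popleft()
        try:
            next(t)          # run until the task's next OS call
            ready.append(t)  # requeue: cooperative round-robin
        except StopIteration:
            pass             # task finished

ready.append(task("A", 2))
ready.append(task("B", 2))
run()  # log now shows A and B interleaving at every OS call
```

Delete the `yield from` line and task A runs to completion before B ever starts – which is exactly the rogue-process lock-up the earlier comment describes.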
The real issue (IIRC) is that the earlier versions of RiscOS did not enforce any memory space virtualisation or separation. All processes had to be written as position independent code that could sit anywhere in memory, and used the OS to manage their data space. This meant that in this day and age, RiscOS would be regarded as a really insecure OS.
> All processes had to be written as position independent code that could sit anywhere in memory
Are you sure about that? I thought this applied only to relocatable modules, not application tasks which ran in user mode? IIRC the MEMC presented the memory pages allocated to a task as a fixed address space, protected from other tasks. Only relocatable modules run in a privileged mode / ring with access to the full physical address space.
I'm no expert, although I have looked into the memory layout of RiscOS because I was interested.
It may be that over the different versions of RiscOS, new features were included, but Wikipedia indicates RiscOS 2 did not have virtual addressing, and I saw nothing in the remaining history to indicate that it was added later.
It is quite true that MEMC did have memory protection capabilities, but from what I have read, it was not used in the earlier versions of RiscOS, although I am sure that it was used in RISC iX.
I find it hard to believe that the current versions of RiscOS do not have memory protection, but my original post was really about RiscOS under Acorn's custodianship.
This applies to classic RISCOS up to 3.1, the latest version may work differently, since modern ARM devices have a different memory controller.
"One-to-many mapping is used to 'hide' pages of applications away when several applications are sharing the same address (&8000 upwards) under the Desktop. These pages are, of course, not held at &8000"
Desktop applications run in user mode, and see an address space starting at &8000, the MEMC translates this to the real address in physical memory. When the 'desktop' switches between tasks, it changes which memory pages are mapped into address &8000 and upwards, which isolates / hides those memory pages from other tasks.
Code that runs in a privileged processor mode (like relocatable modules) can access the full memory address space. Relocatable modules are assigned memory in a shared block called the module area which is not dynamically mapped by the MEMC, allowing them to be called from anywhere. Hence the modules must use relative addressing so they can run at whatever memory address they are loaded. If modules were unloaded from memory, this could leave gaps of unrecoverable memory (unless the next module was small enough to load into a vacant gap). The result being that you often had to reboot a computer that had been running for a long time, when the module area was full.
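The page-swapping described above can be modelled in a few lines of Python (purely illustrative; the page size and physical addresses are invented): each task owns a set of physical pages, and a task switch changes which of them the memory controller maps at the fixed logical base of &8000.

```python
# Toy model of MEMC-style task switching: every application "sees" its
# memory at the same logical base (&8000); switching tasks just changes
# which physical pages are mapped there. Illustrative only - the real
# MEMC used CAM entries and different page sizes.
LOGICAL_BASE = 0x8000
PAGE_SIZE = 0x1000  # invented page size for the demo

physical_memory = {}  # physical address -> byte value
page_tables = {       # task -> list of physical page base addresses
    "taskA": [0x40000, 0x41000],
    "taskB": [0x90000],
}
current_task = None

def switch_to(task):
    """'Desktop' task switch: remap the logical window at &8000."""
    global current_task
    current_task = task

def translate(logical_addr):
    """MEMC-style translation of a logical address for the current task."""
    offset = logical_addr - LOGICAL_BASE
    page, within = divmod(offset, PAGE_SIZE)
    return page_tables[current_task][page] + within

def poke(logical_addr, value):
    physical_memory[translate(logical_addr)] = value

def peek(logical_addr):
    return physical_memory.get(translate(logical_addr))

switch_to("taskA")
poke(0x8000, 0xAA)           # taskA writes at its logical base
switch_to("taskB")
poke(0x8000, 0xBB)           # same logical address, different physical page
switch_to("taskA")
assert peek(0x8000) == 0xAA  # taskA's data was hidden, not overwritten
```

The module-area fragmentation point falls out of the same model: modules live at fixed physical addresses outside `page_tables`, so an unloaded module leaves a hole that only a smaller module can reuse.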
Biting the hand that feeds IT © 1998–2019