Re: Optimal solution
Spring time for Hitler, in Gerrrmannny.....
I think you severely overestimate the number of low scores an app will receive for not being sufficiently restrictive in its set of permissions.
Look at Vista: it mostly did The Right Thing. Windows 7 was made deliberately less secure, and needed an extra setting to restore UAC's switch to the secure desktop and its insistence on a password.
Users did not appreciate this at all, so why do you think they're going to give a rat's arse that Facebook wants to control their camera, speaker, phone, address book, network and SD card?
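For reference, that extra Windows 7 setting boils down to two documented UAC policy values in the registry (the same knobs secpol.msc exposes). A minimal Win32 sketch, assuming it's run elevated - ConsentPromptBehaviorAdmin=1 means 'prompt for credentials on the secure desktop':

    /* uac_harden.c - restore Vista-style UAC behaviour on Windows 7.
     * Sets the documented UAC policy values; must run elevated.
     * Build: cl uac_harden.c advapi32.lib
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD one = 1;

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System",
                0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
            fprintf(stderr, "need to run elevated\n");
            return 1;
        }

        /* Prompt on the secure desktop... */
        RegSetValueExA(key, "PromptOnSecureDesktop", 0, REG_DWORD,
                       (const BYTE *)&one, sizeof one);
        /* ...and insist on credentials, not just a consent click. */
        RegSetValueExA(key, "ConsentPromptBehaviorAdmin", 0, REG_DWORD,
                       (const BYTE *)&one, sizeof one);

        RegCloseKey(key);
        return 0;
    }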
You mean, in the same way that Android apps are supposed to cope with auto rotation, save state properly and handle things like hardware keyboards? Yet, they don't, even with very popular apps (web browsers, snapchat, yadda, yadda..)
I'm sure the team would be glad of any code submissions. This is very new code and has only just gone into -current. At the moment I wouldn't expect it to do much other than run some versions of OpenBSD in a VM.
I've done worse than that. I was at a large corporate establishment, needing to shut down a system running a bespoke application.
In front of me was a large desktop with a monitor on top, and a keyboard and mouse in front of it. It was displaying the application. I initiated shutdown, and it worked fine, sitting at the shutdown complete message.
I pressed the power button on the desktop. The desktop switched off.
The shutdown complete message stayed ON.
'oh, that's not the server' said the customer 'we put the keyboard/monitor for your server on top of this desktop because it's convenient. Your server is next door'
Always ask, even when it seems a stupid question.
No sensible way to easily see what's running in the background. No easy way to turn the background services on and off. No multiple profiles to manage all this by default.
Apps that don't save their state properly (especially web browsers) and re-load the entire page from the network when you switch back to it, when the request is now invalid.
General pain in the arse to get around (better with later releases). Apps dying unexpectedly. Inability to manage permissions.
Apps that update *all the fucking time* and gradually get worse.
I generally like Android, but let's not pretend it's perfect.
I've generally liked my Xperia Pro; unfortunately things don't stand still in the mobile world. The Facebook app consumed increasing amounts of CPU in later releases, and needed so many privileges I had to drop it in favour of a web browser. Web sites have increased in complexity, and despite the fact it's now on Lollipop (third party rom, sort of works..) the hardware just isn't capable.
For a year and a half to two years it was pretty good. Now it's just not fast or reliable enough - might be the third party rom, might be the aging hardware.
I can't say I'm an actual fan of the OS, though; it's mostly improved with each release, but it reminds me of the early releases of Windows. Gingerbread was 3.1, Ice Cream Sandwich was 95, KitKat was OSR2, and Lollipop is Windows 98 (unpatched).
The last phones I really liked were Nokia not very smartphones, which had a passable web browser for a year or two, and some Java based apps, but couldn't cut it in the end.
..and I've just found a more definitive statement on that. The Neptune Pine is not water resistant. Bloody useless, then, I'll stick with my Casio sportswatch for running, for now.
Making phone calls is in the extreme minority of things I do with my smartphone. Texting is somewhat more prevalent.
It's important to have access to Facebook, as that's where friends arrange events. Maps access to navigate is staggeringly useful. Being able to see if trains are running on time is vital. Checking your e-mail. Being able to download PDF e-tickets for events and display them, without printing them out. etc, etc. A friend uses theirs to learn Japanese on the go, and I'm ideally going to use mine to keep up with Russian.
It's not a toy. It genuinely improves my life. The only reason I'm looking at updating my current phone is that it's far too slow to access more modern web sites. It's cracked and peeling, not a fashion item.
As to watches, well, I've recently been looking at the Neptune Pine cheap on Morgan Computers. Yes, it's a ridiculous smart watch for every day use, but the key issue that decides if I bother to go for it seems to be the waterproofing. Without water resistance it's a toy, with water resistance it's not a watch - it's a cheap, portable running/biking/hiking activity monitor and map..
Thanks, that's very kind. I don't run too many Android apps these days because the more background services my phone runs, the more unusable it is :(. I want to run more..
What I need: decent web browser, National Rail Enquiries app, IMAP client with lots of mail stored locally as well as left remotely and support for multiple accounts without enforcing a unified inbox (currently using K9 Mail), maps, PDF viewer, a Facebook client that doesn't use up 100% CPU and need access to everything on your phone (I use the web interface currently because of this), Whatsapp, Kik Messenger, Twitter client, eBay app, YouTube, Kindle client, FTP client, SSH client.
Nice to have: Skype, Jabber client to access Google Talk, Memrise, OS MapFinder, iPlayer for radio would be lovely, NetHack, some sort of app to automatically download web pages to read later, and ideally random gaming apps off Humble Bundle (I did a bit of a search last night and found there are hacks to get Google Play services working, which allows games such as Plants vs Zombies 2 to work).
On a practical note, how well does the PP fit in a jeans pocket?
Whilst on computers I have a bit of a preference for Windows and BSD, I don't think I'm quite so bothered on phones, provided they don't overly restrict what I can do.. I like Android, but I don't necessarily want Google to win the mobile phone wars.
It looks good, and I want to buy it, but five hundred and sixty quid SIM free? Thank you, but no.
I note the Passport, which is now a year old, is available for 3-400 quid, which is more like it. Here's hoping the price drops once the exclusivity period ends.
There's no way this will ever get unlocked, so the assumption has to be the phone will be a doorstop inside two years, and will never get an update to Marshmallow.
My actual requirements are a physical keyboard, a great screen and being able to install random Android apps if need be. I would really like a removable battery too, but that dream may die with the Xperia Pro I'm currently using. I'm wondering if another alternative is the Passport - I have no attachment to the OS being Android, so long as the apps run..
It looks like Microsoft have updated their T&Cs to be somewhat more specific.
Is it something off this: https://www.microsoft.com/en-us/privacystatement/ ?
The latter contains
'Full data includes all Basic and Enhanced data, and also turns on advanced diagnostic features that collect additional data from your device, which helps us further troubleshoot and fix problems. When devices experience problems that are difficult to diagnose or replicate with Microsoft’s internal testing, Microsoft will randomly select a small number of devices, from those opted into this level and exhibiting the problem, from which to gather all of the data needed to diagnose and fix the problem (including user content that may have triggered the issue). If an error report contains personal data, we won’t use that information to identify, contact, or target advertising to you. This is the recommended option for the best Windows experience and the most effective troubleshooting.'
Which seems to cover the user logon credentials you might be specifying. Other items specify that dumps 'may' contain user data - this has been true for years; it's just more obvious now. It's the same under every operating system: if a process dumps core, it may well have user data in it.
So, yes, I'd anticipate it's for analysing issues. When a process falls over, there's no time to sanitise data.
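A trivial illustration of that last point - any secret sitting in a process's address space when it falls over lands in the dump. A sketch (build unoptimised so the buffer isn't elided, enable dumps with ulimit -c unlimited, then run strings on the resulting core):

    /* core_secret.c - why crash dumps can't easily be sanitised. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char password[64];

        /* Pretend the user just typed this in. */
        strcpy(password, "hunter2");

        /* Something entirely unrelated goes wrong... */
        abort();   /* SIGABRT -> core dump, password[] included */
    }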
I used the word 'telemetry' very specifically, note I didn't say 'data'. I don't believe that Microsoft are deliberately snooping actual user data for malicious intent, in any case.
Telemetry has been around for well over a decade, starting with crash dumps sent to Microsoft, and passed on to third party developers if it was their software that failed. MS then added submissions monitoring which help content a user reads, and which applications/features of the OS get used.
This is generally a good idea, as it helps Microsoft target which areas of an OS to improve (the flip side is that if a feature is rarely used, it may be dropped..). It's definitely improved drivers, and helped with applications.
I don't think this is a huge deal with Windows 10 - it's similar to what happened with Ubuntu. The larger issue is Microsoft's unrelenting push to get people on to a rolling release operating system. Obviously they think the trade-off in dissatisfied users is acceptable when pushing out Windows 10.
This applies to pretty much all companies including Microsoft, but they're very popular, so it's worth bearing in mind. Apple are probably worse, but have less market share and business usage. Phone related operating systems are substantially more execrable.
If you're using relatively standard Win32 and other core technologies, including core parts of .NET, I would not be worried about developing and using a Microsoft solution. They have a solid OS, with underpinnings that generally improve in each release, and an excellent commitment to backwards compatibility.
As soon as anything whatsoever outside the above (i.e. something that is not too big to be changed) is used, the company's vision becomes important. If your way of working or product design doesn't ally with that, then there is a problem.
If the platform supporting your product isn't open source or you have insufficient internal expertise to maintain an open source platform, and your product or way of working is indelibly marked on that platform, you have a splendid 'opportunity' to frantically change your environment.
Need to use Remote Storage Manager? That lasted all the way from Windows 2000 to Vista, and then got dropped. Bits of Exchange have changed radically between 5.5 and 2000, and again between 2003 and 2007. The Microsoft vision for a client OS is for a frequently updated client, with a constant moderate speed Internet connection.
It's even worse if you're using minority technologies, such as with Windows Phone 7, or new technologies that have not proven themselves in the market place. Expect to have the rug pulled from under you.
None of this should be a surprise. The mobile direction of Windows has been happening for years. Telemetry has increased with each Windows release and is generally a good idea. Windows 8 has had a considerable number of patches that regularly changed both it and the Windows Store apps. Automatic Updates has defaulted to 'download and apply' for years, so it's clear that Microsoft sees the trade off of patched systems vs a (relative) minority of broken systems as acceptable.
However, it's not going to change unless people pay for it, and by pay I mean 'deliberately go through the cost and manpower to re-implement on a platform that allies with your aims for the foreseeable future'. It's all about the apps, it always was, and always will be. The Internet connected world has considerably increased the amount of activities that are possible solely in a browser, but native apps are still necessary.
It may also - specifically talking about moving off Windows - involve more pain, and paying more for fewer, higher-quality features. It also needs a compelling feature for people to move, and I should point out that anything mass market has a similar Internet-connected, data/telemetry-reporting, automatically-updated design to Windows (I do not include any non end-user mass market Linux distributions, or stuff like BSD, even if I personally like it).
(I'm tempted to put a VMWare rant in there as well, but the post is already long enough)
Their article says, verbatim
'Admittedly this is subtle bug, because there is no buggy code that could be spotted immediately. The bug emerges only if one looks at a bigger picture of logic flows'
but 'On the other hand, it is really shocking that such a bug has been lurking in the core of the hypervisor for so many years'
Oh Do Fuck Off. No, this is not a splendid situation. Yes, plenty of operating systems have privilege escalation bugs. Yes, Xen is now quite old, complex and large. Yes, some things could be arranged better - find me a project without flaws.
If you depend on a project you should damn well contribute back to it, and as far as I can see Qubes do not. Their donation page - for an 'OS' that depends on Xen and Fedora - doesn't seem to pass anything back to either. Neither do they seem to contribute code to Xen (a search turns up two matches for 'qubes', which appear to be Xen borrowing code from Qubes, not Qubes contributing to Xen).
When everyone realised just how creaky the internals of OpenSSL were, the OpenBSD project got off their arses and created LibreSSL. I await Qubes' efforts in doing something similarly productive - write your own bloody hypervisor if it's so easy; FreeBSD have, and the other BSDs have related efforts underway.
Women who are after your money are easy to spot, that's why..
Also, you've probably deleted e-mails from scammers. Why, yes, that 'god fearing' lady, and the one with pictures that look suspiciously like a model/porn star, must be real.
From reports I've seen, it's usually women that get conned.
Although, I'd have to say it's a fallacy to say they're 'convincing'. Bullshit. As soon as you've established some compatibility and that there are no major red flags, arrange a meeting in real life. If they ask for money, drop them. If they won't arrange a meeting, drop them and move on. No exceptions.
It may be that using the above method you reject real people in addition to fakers, but they're wasting your time anyway, so move on regardless.
I'm surprised, but my main gaming box triple boots between Windows 8 (DirectX is faster than on 7), XP (for games that support EAX but not OpenAL), and SteamOS, so I'll try a direct A-B comparison of Portal.
It's nice that Valve released SteamOS, but it was still pretty beta last time I checked. It didn't like CRT monitors (ok, not much of a surprise), wasn't happy with 4:3 aspect ratio monitors (less forgivable), would only output from my sound card via optical (a pain), and claimed that numerous games were fine with an Xbox controller when they were unusable.
It's possible Humble Bundle have done more for Linux, at least a while back.
As I said, it's not that this is bad - it's just that it's more a treat for an avid Linux fan, than a dedicated gamer. It's a lot better than OS/2 ever managed..
What Steam is doing is somewhat increasing the number of Linux games, usually with worse performance and fewer features than the Windows versions - and the store pages frequently lie about controller support. If you're a die hard Linux fan, it's great, but as a platform agnostic gamer it's a sore disappointment.
Operating system sales, like consoles, are driven by app availability. There needs to be a compelling reason to move, and for the majority of people Linux does not provide that. Users will simply stay on Windows 7/8, sales of 10 will decline, and Microsoft will be forced to release 11/10.1 with concerns addressed - as happened with 8.
Until you see Adobe test the water with a Linux app, it'll be business as usual. Even if people start to shift to Linux, they'll be running Windows software in VMs for years - there is no such thing as a sudden change.
There's a reason it took OS X six releases before they dropped compatibility layers..
There are easier targets than the microcode - I'm not sure the microcode can be made complex enough to be useful. Much better to target the BIOS, System Management Mode, firmware in common devices, or the usual operating system exploits.
It never ends. OpenBSD has some of the most secure, audited code on the planet and a driven team, but they still find issues. When the strict memory allocator became default in OpenBSD a few years ago, it broke twenty year old Unix code that no-one had found errors in before.
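The sort of latent bug that flushes out is usually something like this - a sketch of a classic off-by-one that 'works' for decades on allocators which round sizes up, and faults the moment the allocator puts the chunk hard against a guard page or checks canaries on free:

    /* off_by_one.c - the NUL terminator nobody budgeted for. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "hello";

        /* BUG: should be strlen(src) + 1 for the trailing '\0'. */
        char *dst = malloc(strlen(src));
        if (dst == NULL)
            return 1;

        strcpy(dst, src);   /* writes one byte past the allocation */
        printf("%s\n", dst);
        free(dst);          /* a canary-checking allocator aborts here */
        return 0;
    }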
Also, yeah, people *do* check processors' code, or at least their output - think back to the Pentium FDIV issue. There have been many processor errata; they're just not covered as extensively. When you're absolutely certain your code is ok, and the compiler is ok, the only thing left is to compare your hardware with other hardware and see if the issue remains.
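(The FDIV check itself is a nice worked example of 'compare your hardware': the classic residue test, sketched here in C - roughly zero on a correct FPU, famously 256 on a flawed original Pentium.)

    /* fdiv_check.c - the classic Pentium FDIV litmus test. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* volatile stops the compiler doing the division at build
           time; we want this machine's FPU to do it at runtime. */
        volatile double x = 4195835.0, y = 3145727.0;
        double residue = x - (x / y) * y;

        printf("residue = %g : %s\n", residue,
               fabs(residue) < 1.0 ? "FPU looks fine"
                                   : "FDIV-class fault suspected");
        return 0;
    }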
This is working as designed - a fairly obscure issue has been found, it will be fixed, and standard OpenBSD features stopped it being exploitable.
Obviously that would be silly, but there are two ways round this:
1) Manufacturers forced to release documentation so that new firmware can be coded
2) Manufacturers must pay into a fund so other people can maintain their kit
The sticking point will come with updates forced down to users (also inevitable), and even worse, in extreme cases, 'your hardware is too old to connect to the Internet, buy new hardware' (certain people will obviously push for that based on cost, rather than technical merit).
Countdown to enforced firmware updates for hardware lifetime - GO!
He's better than he used to be, but yes, OpenBSD's development is deliberately hostile - they do not want people who are inexperienced, or unprepared to keep to the project's ethos, including its cross platform nature.
Having said that, if someone puts in genuine effort, they're generally very helpful. It's the same with development and documentation - so long as you've read the docs first, people will help.
..pity they haven't a clue on anything else. AMD are going nowhere.
Their desktop and server chips are sub-par, except at the low end where the integrated GPU is better than Intel's offerings. Their discrete GPUs are substandard compared to Nvidia at the medium end, and not cheap enough or revolutionary at the higher end.
I can see they're trying to push HSA, but the timeframe on that means nothing will be seen before the end of 2016. It also isn't being taken seriously, as HSA will not be used in their high end workstation and x64 server chips.
TSX isn't on the roadmap as far as I can see, and Intel has now released chips where it actually works. There's absolutely no innovation in virtualisation, which is utterly unforgivable - if ever there was an area AMD could innovate in, it's virtualisation.
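(For the curious, 'TSX that works' means the RTM intrinsics path. A minimal sketch, assuming GCC/Clang's immintrin.h and a -mrtm build - the fallback branch is mandatory, since a transaction may abort for any reason:)

    /* tsx_sketch.c - minimal RTM (TSX) usage sketch.
     * Build: cc -mrtm tsx_sketch.c  (needs a CPU/microcode with
     * working TSX.)
     */
    #include <immintrin.h>
    #include <stdio.h>

    static long counter;

    static void increment(void)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            counter++;          /* runs transactionally */
            _xend();            /* commit */
        } else {
            /* Transaction aborted - fall back to a conventional
               path (a real program would take a lock here). */
            __sync_fetch_and_add(&counter, 1);
        }
    }

    int main(void)
    {
        increment();
        printf("counter = %ld\n", counter);
        return 0;
    }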
Even without any hardware changes, AMD could integrate its existing GPUs with virtualisation better than Nvidia does. I'm aware they're doing something with virtualisation, and have been for a year or so, but there is absolutely naff all to show for it on x86. I presume they're concentrating on ARM virtualisation, as there's a lot of work being done on that by various companies and projects.
They could integrate an ARM core with a Radeon chipset for consumer level devices and a more open architecture than the current market. No sign of that, either.
HBM is a neat trick, but hardly revolutionary - much like AMD being first to implement PCIe 2.0 and 3.0 interfaces on their GPUs. Neat, but largely pointless, as the prior generation interface wasn't being saturated.
NT was not based on OS/2. It was meant to be OS/2 v3, but bears little similarity to it. Yes, it shipped with HPFS support up to v3.51, but that was written by Gordon Letwin of Microsoft. The OS/2 subsystem was required to run MS Mail until Exchange reached maturity.
(There was also an OS/2 1.3 PM subsystem for NT. If anyone has a copy of this, I would be very, very interested to see it, as I'm not aware of anyone who has it, never mind screenshots).
Dave Cutler came from DEC; NT is more closely derived from a DEC project called PRISM.
The first release of NT was v3.1, not 3.5. It was memory hungry for the time, but not too bad. I ran OS/2 for preference during the 90s, but NT3.1 was probably in a better state than the initial release of OS/2 2.0.
The issue with NT 4 was not so much the desktop (that was needed; NT looked embarrassing next to OS/2, or even '95), but the fact graphics drivers were moved into the kernel. However, it may have been needed: by 1999 the lack of USB and DirectX support in NT was a real issue if you wanted to run more than servers. By the mid nineties the separation of desktop/server operating systems was ending; it was rapidly apparent that Netware would not survive, and that NT would eventually be the future.
The issue is not 'users', the issue is developers. Users will use pre-canned ROM images; the number of people who customise the code created by a development group is close to non-existent.
So, the real issues here are: 1) Why would someone want to work around the safeguards (speed?), and 2) Could this be done accidentally, through either lack of skill or, more probably, sub-par documentation?
The sensible approach is to consider that it's not possible to stop people modifying firmware, and that there are clear benefits to doing so. The logical conclusion is therefore that manufacturers should release better documentation for their hardware.
The less sensible approach is to sign everything up to the hilt and prevent any documentation getting out, never mind customised firmware.
Manufacturers might see an advantage to this, but it's only a question of when, not if, they are forced to continue updating their firmware beyond the current roughly two year 'we can't be arsed because it's been superseded by a new model' period, to cope with security risks. At that point it becomes an active advantage for a third party to provide firmware, and to push customers onto that.
Almost all of the article is rubbish.
First, computers are faster, work gets done faster, and they do more. A few bloated updates to applications do not change this.
Incremental change is quite clearly the best way to go - revolution doesn't usually work.
There's a reason why OpenGL still struggles to move to new, clean versions and carries vast amounts of cruft - no-one is prepared to put in the multi-million hours needed to recreate a bug-free AutoCAD for no real benefit.
Likewise, when Microsoft did mostly the Right Thing with Vista (it was released too early and OEMs unfortunately twisted MS' arms to have substandard minimum hardware specifications, but it laid necessary groundwork) what happened? Microsoft moved too far, too fast for the market. It introduced a new driver model (somewhat refined since) that improved performance and functionality, took advantage of hardware better than XP did, increased security, and added features.
The new security features broke applications, drivers took months to be usable despite years of advance notice to the IHVs. Users didn't like the interface changes, or the enforced security.
Roll forward three years to Windows 7, and suddenly everything is 'peachy', despite it not being that different from Vista. True, the graphics driver model was changed to use less memory and the interface made more coherent, but mostly it's because machines that could run 7 properly were shipping, drivers had been updated and applications changed - and the system was made less secure by default.
Granted, there are tipping points at which it's more economic to recreate than update, but it's unwise to underestimate the sheer amount of real world use an existing system has been adapted to cope with.
Like that's going to fly. They don't have a fucking clue what people use their browser for, and I'm sick of Firefox hanging the entire browser when Flash gets a tiny bit upset. Generally I've thought it an improvement on Chrome, but maybe I should now move.
I shouldn't feed the troll, but 'poor old Poettering'? The guy is rude and slavishly dedicated to the idea that His Idea Is Right, and damn anyone else.
Behaving like a 'team can do no wrong' - sounds exactly like Poettering's style to me.
I've read his responses on the mailing lists, and as soon as a reasonable point is raised that can't be easily resolved the answer is usually 'well, systemd/other product is so much faster/better, and it would take too much effort to accommodate you' (so screw you)
Odd configuration? Well, you fix it - not us. Design works well - up until the point when it doesn't? Well, it works fine 98% of the time. Has an impact on other Unixes, especially non Linux ones? Not our problem..
Still running an Xperia Pro with the hardware keyboard, and its 2011 hardware is struggling with a Kitkat ROM. I'm hoping the keyboard and battery life are ok on the Priv so that I can buy one unlocked and keep it for years. Ideally a removable battery would be awesome, too, but that may be expecting too much.
Why? Because it looks good, takes ages to fail, dims, sips power, and it's expensive but still within the reach of most people. Most CFL lights look awful, as did pearl lightbulbs, but this looks good.
A modern 4K panel might be cool if it looked as good as some of the very best stylish CRTs (let's face it, mostly they're big and ugly), and performed as well as CRTs (debatable, depending on which bits of CRT features you liked).
"We are not talking about a product they are making or even consider making. This is a design that is no longer in any format or any money made on from utilising it"
You haven't the merest semblance of a clue, have you?
The 1966 Batman movie is still sold (it's rather good too, with a cast commentary; it loads up with a cheery 'holy interactive Bat menus, Batman!', voiced by Burt Ward no doubt), and so is the TV series. DC still sell graphic novels featuring the 1960s Batman character, and they occasionally create *new* comics featuring the 1960s Batman portrayal, certainly as recently as a few years ago - I've not read many of them since.
DC still make plenty of money from the 1960s Batman. The same goes for all the other 1960s characters - the original Flash is still around, as is the original Green Lantern (whatever he's called these days), and most of the others. About the only character which isn't alive - maybe - is the original Superman (Earth two), although a quick look at Wikipedia tells me his death has been retconned and he's probably alive again.
There is a middle ground between the horrors of a data centre with Type 1 Token Ring cables everywhere, and the most perfectly crimped, accurate-to-the-inch cabling.
We wandered into a comms room with a server feeding a few dozen modems - except they didn't work. On closer examination, the cables between the modems and the phone sockets had been routed within an inch of their lives - it was a thing of beauty. Unfortunately those cables are quite fragile and don't react well to being bent. So: go through and test each modem individually, then try to find new cables (they all look the same, but aren't).
Likewise, what goes in will some day come out. There are some installations that are a joy to work on - it's racked, and undoing the bolts lets the unit slide out as smooth as silk. On others, it's underneath three other heavy systems, and superglued to the bottom..
Yep, or Print Screen, or in one case I remember something very odd involving arrow keys.
That still won't let you look at the machine between boot-up and SSH loading. Also, if your system doesn't have iLO/serial control, it probably doesn't have a watchdog either - so look forward to the occasion when the server is rebooted or powered on via Wake-on-LAN and hangs at the RAID controller.
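For reference, the watchdog side of that is simple enough - on Linux it's the /dev/watchdog interface. A sketch, assuming a loaded watchdog driver and root; if userspace stops petting it, the hardware reboots the box rather than hanging forever:

    /* watchdog_pet.c - sketch of the Linux watchdog interface. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/watchdog.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("/dev/watchdog", O_WRONLY);
        if (fd < 0) { perror("/dev/watchdog"); return 1; }

        int timeout = 60;                  /* reset after 60s of silence */
        ioctl(fd, WDIOC_SETTIMEOUT, &timeout);

        for (;;) {
            ioctl(fd, WDIOC_KEEPALIVE, 0); /* "still alive" */
            sleep(30);
        }
        /* NB: closing the fd without writing 'V' first would normally
           leave the timer armed - which is rather the point. */
    }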
Intel would still win. Check out Sandy Bridge vs AMD FX benchmarks.
Nope, people are not comparing like with like. All the complaints so far compare Skylake with Haswell-E. The bandwidth of the DMI link between the processor and chipset has doubled, and the chips are finally shipping with TSX that works (unless you were running Haswell-EX previously).
40% isn't enough for AMD to be competitive, and they're not serious about their HSA architecture, because it's not included in their higher end processors.
I agree, by the time Zen comes out both Skylake-E (fast, lots of cores) and Kaby Lake (who knows!) will be out, and it'll still be nowhere.
Well, yes, yes it is. Look at a Realdoll and stick some electronics in it (basically what's being suggested here). Realdolls, last time I looked, ranged from average to uncommon physiques and then branched out into a separate line of Holy Fuck That's Creepy alien or doll-like models. The difference from a lifesize Barbie should be minimal - that has improbable physiques, too.
It's all nonsense, anyway. There's no way a sex robot is a credible possibility at this point. The most that's likely is a real doll with an inbuilt sex toy, some voice recognition and canned responses. It's a very fancy blow up doll, not a responsive partner.
We are discussing word processing. A word processor, as defined for the last 30+ years, is something that includes page markup. vi is a capable text editor, if unfriendly to newcomers.
For pure writing (story/detail development) there are outlining tools, the PCW was extremely popular with professional writers and journalists, and there was at least one program that was designed specifically to outline/collapse/move sections with no emphasis on formatting.
vi for all its power is not friendly. Locoscript could be used by anyone.
The fact there was a micro cheaper than an Apple system isn't relevant. The release cost of the PCW, with screen, printer and software was less than half of the cost of the base kit of your H11A..
..helps if you follow the photo credit link. It's installed in an Apple IIe. Didn't realise the 3" drives were readily installed on other systems.
Leaving aside the fact that vi is not suitable for letting novices create formatted documents, the key point here is cost. At the time the PCW was produced, the other options for producing documents, in decreasing order of expense, were:
Apple Mac: very expensive; software and printer not included.
PC: expensive; software and printer not included.
... huge price gulf...
PCW: reasonably priced; included word processor, operating system and printer.
CPC6128: cheaper with a green screen, same price as the PCW with colour. Didn't come with a printer or word processing software. Much less memory, and only one drive supported. Targeted at gamers who liked to do a bit of business.
Home micros: non-starter. Typically no 80-column screen, little memory, and the monitor often not included.
Electric or manual typewriter: 'nuff said. Ew.
My first 'proper' computer, after a series of home micros, before I moved on to the PC. I did everything from word processing and DTP to programming and games. The large amount of memory on the 8512, the ability to slot addons into the expansion slot, and the twin drives made it very useful.
It's to Amstrad's credit that they supplied it with CP/M, greatly expanding its use, plus the excellent BASIC (ok, the screen handling's use of escape codes was somewhat irritating) and wonderful manuals.
It also helped to have a fairly slow computer, as it meant code had to be efficient.
Ultima IV was not released for the PCW, the CPC or the Spectrum +3 (or, after doing some checking, the Oric Atmos - didn't realise that used 3" too). Presumably this is for someone that installed one in a PC/Apple II for whatever reason..
Never released, of course, but a fun assembly exercise. Officially there were no PCW viruses..
It ran, and performed its intended function. However, it also left a TSR in memory, two minutes later redefined the font characters upside down, and unloaded itself.
Wouldn't have been too difficult to track, because the TSR facility is built into CP/M, and I didn't bother to do it the harder way and patch the kernel or change any of the jump vectors on the system. Why bother doing more when you know it can be done and have proven the principle.
It was written in 8080 code, too, with the Z80 instructions manually patched in. I used the compiling/linking tools included with the system instead of making life an awful lot easier with a Z80 assembler; CP/M Plus' development tools were never updated to support the Z80.
Can't really argue with any of that. OS/2 needed at least 8MB. My first 'proper PC' deliberately had 8MB, because I bought it to run OS/2 2.1 and knew from a bit of research that less was a bad idea. 16MB was sadly out of budget, but it ran ok.
OS/2 2.0 was pushed out the door. The first (huge) service pack brought in the 32-bit GDI and either an improvement to, or the first instance of, multimedia support (MMPM/2) - I can't remember which - and basically made it usable.
It was a decent OS, but like NT, it was very apparent this was not targeted at consumers (at least until v3.0).
Video was a pain. The driver model was appalling, and IBM didn't develop enough drivers themselves. There is some speculation online that internal IBM politics meant a half-decent driver model (finally delivered in v3, although most initial GRADD drivers sucked) was axed in favour of an architecture that had to make special allowances for seamless Win-OS/2 windowing, with full-screen Win-OS/2 essentially a separate driver.
Additionally it used the DOS support to set refresh rates and resolution, which needed a reboot to activate. I later spent an obscene amount of money on an Elsa card, only a few weeks before the Matrox Millennium came out and blew everything else away.. On the bright side, the cheap and plentiful Cirrus Logic chipsets were well supported, if not super speedy.
My main work OS/2 PC was a 486DX2-50, which could have been better. 20MB RAM, Cirrus Logic graphics. It was utterly stable, however, and took an incredible amount of punishment.
I was perhaps a little unusual in throwing my lot in with OS/2 at the time, but it did get me a job, so it wasn't a waste. It wasn't until Windows 2000, however, that I felt Windows had really surpassed OS/2. NT4, which I moved to from OS/2, lacked many conveniences that OS/2 and Windows 9x had at the time.
By the time I moved, I'd wasted far too much time arguing about OS/2 vs NT. For the most part, the advantage of NT when I moved wasn't architectural, but software support. NT's interface wasn't as good, even if I did buy Object Desktop NT, but the ability to run Python is what I remember most.
OS/2 *still* can't run up to date Python, and this has knock on effects such as the build process for Firefox.
I should have clarified: correct, the licence fee clause wasn't MS playing dirty. It was the /other/ stuff they did that was dirty.
xdfcopy - also ran in DOS if I remember correctly, so you didn't need an OS/2 box to write it. Not actually an IBM developed technology, but a clever extension of the 3.5" DSHD format that worked on practically every drive.
What on earth are you talking about - this article is talking about 32 bit OS/2, not the 1.x 16 bit releases.
OS/2 has never had a separately sold CLI and GUI. OS/2 1.0 had no GUI. Technically you can boot OS/2 without PM if you want a text only boot, but that's usually only used in embedded systems.
OS/2 up to 2.11 didn't include networking by default. v3 had 'connect' options with networking included. v4 had only one edition, with networking included. The Netware client was free.
The price was not ludicrously expensive. I bought the releases, as a student.
OS/2's DOS mode was exceptionally good - in the 32 bit release, that this article is talking about. The 16 bit versions of OS/2 had issues with DOS compatibility, because they ran in 80286 protected mode.
At the time OS/2 2.x onwards was around, there absolutely were graphics/memory/etc APIs. It was only for DOS games that this was an issue.
There are several reasons OS/2 failed:
First, between '92 and '95 (OS/2 2.0 to Warp v3) it was a much better OS. However, It's All About The Apps, Stupid. OS/2 never got the app coverage, and perversely its superb DOS/Windows compatibility made that worse.
So, you had a choice between a relatively expensive OS/2 app, which fewer people had used, with a different file format - or buying the Windows app and running it under OS/2. If the app died, it didn't take down OS/2.
Describe was technically an excellent word processor, and the output quality was fantastic. Unfortunately it was simply much less usable than Ami Pro under Windows, and Ami Pro for OS/2 was a deeply shitty port.
Mesa was an awesome spreadsheet, but it wasn't Lotus 1-2-3 or Excel - the de facto standards. Still, if you wanted to throw 300,000 rows into your spreadsheet it'd work fine - unlike Excel, which took until well into the new millennium to fix its row limit.
Generally you paid more for higher quality functionality, but less of it. That just wasn't what the user base would accept, but is the reality of a smaller user base. IBM was not good at funding companies to develop software.
Second, the install routine sucked. Microsoft spent a lot more time ensuring the install worked. OS/2's install has always been sub-par, and the hardware support meant that to run it successfully you selected the machine based on OS/2 support, rather than bunging it on a random PC.
Third, Microsoft played dirty, and for the non-'for Windows' (fullpack) versions of OS/2, IBM was paying MS a Windows licence fee for each copy of OS/2 sold. This also heavily affected bundling with machines (it wasn't bundled, except by IBM), so install quality became even more critical.
Fourth, IBM lost the plot with Workplace OS, thinking they could regain the market with a PowerPC infrastructure. That's the point at which OS/2 decisively lost - the emphasis on what to develop was wrong.
I was a big OS/2 fan, but must accept that NT was simply a better OS. It was designed well, from scratch, from the start. OS/2 wasn't: 2.x onwards still had 1.x code (the 32-bit GDI came in a 2.x service pack; 32-bit windowing only came in v3), was relentlessly single-user, and never addressed some extremely irritating long-term issues such as the synchronous input queue.
None of the above mattered for a while. If IBM had junked Workplace OS and re-architected the necessary parts, OS/2 might have survived longer. However, It's Still All About The Apps. Apple were suffering at the time, and one of the critical reasons they returned to success was an agreement that Microsoft would release a new version of MS Office for the Mac.
Seeing as WordPerfect and WordStar royally fucked up their Windows releases, and Lotus never gained traction (a pity, I greatly preferred their products), the market shifted to Microsoft apps. That needed an MS OS, and relying on a compatibility layer to run your competitor's products is a losing strategy.
They deliberately destroyed the sonic screwdriver during one of Davison's episodes, pretty much for that reason..