Re: Hmm, the usual comments.
> They licensed the sensor but the SDK was all them
The software was written by Rare in the UK. Microsoft bought them, too.
> No it didn't, it got to 2.3.7
Several phones and tablets are advertised as running 2.3.8 (such as the Samsung Galaxy Mini 7) or even 2.3.9, which was released in late 2012 or early 2013.
> If the system were more modular, then they would be able to update most of it,
Much of the Google infrastructure, including WebView, is built as apps and updated from the Play Store, on any device.
> still means you're cutting loose anything more than two and a half years old (on average).
Actually they don't. The Android 2.3 series (Gingerbread) continued to be developed well after 3 and 4 were delivered; it went at least to 2.3.9. It was up to manufacturers to update existing phones.
> Oh, and that "Program Files (x86)" folder would come in very handy if they ever dusted off their x86 emulator, written for RISC processors back when a top-end CPU had, er, some modest fraction of the CPU power of a modern low-end phone. ;)
The x86 emulator only had to do x86-16 or x86-32 on a 64bit RISC CPU - Alpha, POWER or MIPS. Running x86-64 on an ARM 32 bit CPU, as is currently used in most phones, is quite a different issue.
It is more likely that they will try to squeeze in a small x86 core somewhere.
> an impressive range of low cost models.
It was not so much 'low cost' as low price. In spite of MS giving $1 billion a year to Nokia, the phone division did not make a profit in any quarter in which it made WP phones. This indicates that (overall) the phones' selling prices were below the total costs.
That arrangement could not continue, and MS ended up writing off another $7 billion.
> 2013. Did they say anything back then about future upgrade-ability?
By 2013 they had completely dumped all Windows Mobile 6.x users with no upgrade path, not even for apps or even the development tools. They had also dumped all Windows Phone 7 users, as none would go to WP8. Fanboys (like the one above) claimed that 7.8 would (or did) give all the WP8 features, but all it gave was a few extra colours and a couple of new sizes for the UI tiles.
I would be surprised if anyone expected upgrade-ability for a WP8 phone, but it seems that some will upgrade to 10 - probably whether the user wants to or not.
None will get Continuum; that requires dual GPUs or some such and is only on the 950s.
> A true 286 OS - even without a GUI - would have allowed multiprocessing, but application would have need to be rewritten for protected mode (and most DOS applications were written to directly access the hardware also).
There were several 286 protected mode OSs. MS even wrote one itself. MS-DOS 4.0 and 4.1 (not to be confused with the much later 4.01), also known as European DOS because Siemens and ICL (where I worked) used it briefly, was a 286 protected mode version of MS-DOS derived from 3.1 and 3.2 respectively. It also had limited multitasking in the background. It could run 'well behaved' DOS programs in protected mode and a single 'badly behaved' DOS program in real mode.
The 'behaviour' was mainly that of memory access. On an 8086, or in real mode, a program could do segment calculations. Usually this was required to access memory arrays larger than 64Kb: the program would calculate a suitable segment/offset pair to give 'tiling' over the memory. This would break on the 80286. In principle the OS could create selectors every 16 bytes to cater for programs doing these calculations, but there was a limit of about 8000 selectors so it simply wasn't viable. I have a manual on DOS 4.0 here somewhere that describes how a compatible program can be written, and also the additional features that can be used.
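To illustrate the segment arithmetic described above, here is a minimal sketch (the function names are my own; this is an illustration of the technique, not code from DOS 4.0):

```c
#include <stdint.h>

/* Real-mode address arithmetic: on an 8086 the linear address is
   simply segment * 16 + offset, so a program can invent any
   segment:offset pair it likes. */
static uint32_t linear_addr(uint16_t seg, uint16_t off) {
    return (uint32_t)seg * 16u + (uint32_t)off;
}

/* 'Tiling': renormalise so the offset stays small, stepping the
   segment value through a big (>64Kb) array. Fine in real mode; on
   a 286 in protected mode the segment value is a selector (an index
   into a descriptor table), so this arithmetic breaks. */
static void tile(uint32_t lin, uint16_t *seg, uint16_t *off) {
    *seg = (uint16_t)(lin >> 4);   /* would need a selector per 16 bytes */
    *off = (uint16_t)(lin & 0xFu);
}
```

Supporting this in protected mode would need one selector per 16-byte paragraph, which is why the roughly 8000-selector limit made it unviable.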
It simply wasn't good enough and was dumped when MS moved to MS-DOS 5 (not to be confused with the much later MS-DOS 5), which was renamed OS/2 during development.
> I think someone may have thought it would be a good idea to have similar version numbers so as to not confuse the market.
Or to confuse the market. Nobody was going to buy a 1.0 version. Even then the market was conditioned to 'wait for version 3'.
> 386 ... the real advantage for PCs / MS was being able to to run DOS apps etc at same time.
Yes, that is an example of IBM PC HW & SW holding back the industry for several years. They needed to wait for the 80386 to be able to do that.
> DOS was MS's bought in reverse engineered CP/M 86
No. 'DOS' was SCP's reverse engineering of CP/M-80 - allegedly of version 1.3, because the very early versions had a bug in their FCB handling that had been eliminated in CP/M 2.x.
> The 8088 and 8086 weren't real 16 bit CPUs at all.
The 8086 was. The 8088 only did 8-bit memory access. In some cases the 8088 was faster: to change a single byte the 8086 had to read the word it was contained in, change the byte and write it back, while the 8088 only needed to do the byte write.
> You could only do the same things as on a Z80. Actually later Z80s had MMU and 512K RAM.
The Z80 had a 16bit address space, the MMU catered for bank-switching the memory into this address space one bank at a time. Actually, most catered for a fixed segment, say 16Kb, for the core of the operating system with a 48Kb bank above that which could be swapped. That could be done with the 8085 as well, such as on the ICL PC multiuser system running MP/M II.
The 8086/8088 didn't have the limitation of a fixed segment reducing the effective segment size and also had direct access to 4 segments at any one time. This meant that with a Z80 or 8085 the effective immediate memory size for a program was 48Kb while with an 8088/8086 it was 256Kb. A Z80 program could use the MMU to page some of its data, but that added more complication.
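The bank-switching arrangement above can be sketched as a toy model (the sizes and names are made up for illustration; real MMUs varied):

```c
#include <stdint.h>

/* A fixed 16Kb common segment for the OS core at the bottom of the
   Z80's 64Kb address space, and a 48Kb window that can be switched
   between banks one at a time. */
#define COMMON_SIZE (16u * 1024u)
#define BANK_SIZE   (48u * 1024u)
#define NUM_BANKS   4

static uint8_t common_ram[COMMON_SIZE];
static uint8_t bank_ram[NUM_BANKS][BANK_SIZE];
static unsigned current_bank;

/* Resolve a 16-bit address through the MMU: addresses below 16Kb
   always hit the common segment; the rest lands in whichever bank
   is currently mapped in. */
static uint8_t *resolve(uint16_t addr) {
    if (addr < COMMON_SIZE)
        return &common_ram[addr];
    return &bank_ram[current_bank][addr - COMMON_SIZE];
}
```

Changing `current_bank` makes the same 16-bit address refer to different physical bytes, which is exactly the complication a Z80 program had to manage; an 8086 program simply loaded a different segment register.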
> IBM PC was simply a big metal version of Apple II with only text display (graphics later!) and only 320K floppy.
It is correct that the IBM PC was designed to compete against the Apple II that was turning up in IBM mainframe sites. Most of these were running VisiCalc or had Z80 Softcards with CP/M. The IBM PC Model A* only had a 160Kb floppy (vs 120Kb on the Apple II), had the same BASIC (AppleSoft was written by Microsoft), and IBM had paid to have VisiCalc, Peach, WordStar and other software that was running on the Apple (and Softcard) made available.
> So the ENTIRE IBM PC HW & SW, held back desktop computing for 5 to 10 years!
That is arguable, on the basis that IBM, as a major manufacturer, legitimised the whole 'microcomputer industry' and made it grow beyond the hobbyist group that started it. On the other hand the IBM PC was rather old technology even for the early 80s. The 8086 had been out since 1978; MS/PC-DOS 1.x was equivalent to CP/M from 5 years earlier and couldn't even support hard disks, user areas or sub-directories. DRI had had multiuser MP/M since 1978 (using bank switching on 8085/Z80, later on 8086). When MS-DOS 2.x was released DRI was demonstrating Concurrent-CP/M-86 with pre-emptive multitasking and virtual screens.
While much better hardware was available around the time of the IBM PC, it was, in my view, the Microsoft software that has held back computing for the last _30_ years.
* The 5150 Model B (which I have here) is identifiable by a blue B in a circle stamped on the back panel. The main difference between the A and the B is that the spacing of the board slots is narrower on the B.
> Why is first NT, 3.1?
Because it used the GUI from Windows 3.1 and they tried to make it appear to be a 'family' of operating systems. Anyway, with MS's history, no one was going to buy version 1.0.
> As far as I remember, there was no real multi-user concept in OS/2,
Not until Ed Iacobucci formed Citrix to add multiuser facility to OS/2. Later they did the same for Windows NT.
> DOS was very, very fast.
No, it wasn't. Display calls to MS-DOS were very, very slow. Display calls to the BIOS were passably fast. If you wanted a very fast display you bypassed both and did direct screen writes, just like most professional software did.
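The direct-screen-write technique looked roughly like this. On a real PC the CGA colour text buffer lives at segment 0xB800: 80x25 cells, each a character byte followed by an attribute byte. In this sketch `vram` is an ordinary array standing in for the hardware buffer so it can run anywhere:

```c
#include <stdint.h>

/* Stand-in for the 80x25 CGA text buffer (char byte + attribute
   byte per cell). Real code would point this at segment 0xB800. */
static uint8_t vram[80 * 25 * 2];

static void put_text(int row, int col, const char *s, uint8_t attr) {
    uint8_t *p = &vram[(row * 80 + col) * 2];
    while (*s) {
        *p++ = (uint8_t)*s++;  /* character byte */
        *p++ = attr;           /* attribute (colour) byte */
    }
}
```

No INT 21h, no BIOS INT 10h: one store per byte, which is why it was so much faster than either call path.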
MS-DOS was also very slow on file access, in particular on large data files that required random access, due to the way FAT worked. Large ISAM files were particularly slow compared to other systems because, in order to reach a particular position within the file, the OS had to start at the directory entry and follow down the FAT chain until it found the appropriate cluster - for each access. That is why defragging was required: by bringing all the FAT entries for a file together it reduced the number of data blocks that had to be read. iNode systems, for example, could access any part of a large data file with many fewer block reads.
DR-DOS had a feature that was not available in standard MS-DOS and that was the cluster size could be specified when a partition was formatted (other utilities could also do this). On a particular partition size MS-DOS would only give, say, 2KB cluster size. Using DR-DOS to give an 8KB cluster size would give an improvement of 3x for random access to a 1Megabyte ISAM data file with no other change. This was solely because there were 4x fewer FAT entries to access.
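The chain walk and the cluster-size effect can be sketched together (a toy model with made-up names, not real FAT driver code):

```c
#include <stdint.h>

/* fat[c] holds the number of the cluster that follows c in the
   file, so reaching a byte offset means following
   offset / cluster_size links from the first cluster - on every
   random access. */
static unsigned links_to_offset(const uint16_t *fat, unsigned first_cluster,
                                unsigned cluster_size, unsigned offset) {
    unsigned cluster = first_cluster, links = 0;
    while (offset >= cluster_size) {
        cluster = fat[cluster];  /* follow one link of the chain */
        offset -= cluster_size;
        links++;
    }
    (void)cluster;  /* a real driver would now read this cluster */
    return links;
}
```

For the last byte of a 1Mb file this walks 511 links with 2KB clusters but only 127 with 8KB clusters: 4x fewer FAT entries to touch, matching the DR-DOS observation above.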
The only reason that MS-DOS was perceived as being 'fast' was that it could be bypassed by programs and didn't get in the way.
> "Proprietary drivers such as the ATI or nVidia drivers are easy to install but not installed by default."
That doesn't mean that you won't get a driver for an ATI or nVidia card, you will get the open source one. Either they don't have the rights to include the proprietary one or they don't want to because it isn't open source.
> Language packs are missing and defaulted to US.
When I recently installed Mint it knew where I was and set the appropriate locale. What did you do wrong ?
> and do not work with each other (take universal copy-paste between all apps as a most vivid example of that).
Why do you think that copy and paste does not work between applications ? There are even clipboard tools that allow selection of the last few clips (klipper, glipper, etc).
The last time I heard of that shortfall it was Windows Phone 7.
It seems that was "a most vivid example" of your ignorance about Linux.
> Even stupidly simple things like creating a permanent file share generally require memorising or looking up commands
Samba comes with SWAT, the GUI for configuring such things, and has done for many years. Just look in the menu system.
There are other GUIs that will configure Samba, such as GAdmin or Webmin, which also configure many other servers and systems, and can do so remotely if required.
> look what was about at the time Windows 95 came out ... There were a lot of GUIs about back then.
There were other GUIs* about when Windows 1 was _announced_, let alone released. In fact it was announced when Bill Gates saw DRI's GEM being demonstrated at COMDEX. Then they started writing it.
* Star, PERQ, Lisa, GEM, ...
> People complain that Microsoft have too many versions of Windows to pick from and it's too confusing when they release 5 different flavours of Windows 10. They do not want to wade through 50 different linux distros before they even know which one to download.
Exactly. Just like everyone should just buy a silver Ford Mondeo and not bother with wading through car magazines or visiting dealers trying to work out whether they want a people mover or a sports car. That is just too confusing, they don't want to wade through 50 test drives, and then there are all the options and colour choices.
> You have some things in your post here that are technically incorrect. You should be aware that Android and GNU cannot coexist in the same root directory.
When did I ever make the claim that they would, and what relevance does that have ?
> I have run a GNU/Linux distribution on a number of ARM devices, and the executables for those cannot run inside of an Android environment.
And I pointed out that there are several apps available in the Play Store and/or FDroid that are collections that include GNU/Linux executables that _do_ run in an 'Android environment'. These collections may well include their own libraries that provide an interface between the utilities and compilers and the Android display system, but these use the Android provided Linux kernel. The utilities are built from the same source code as the ones that you ran.
Whether they run in the same 'root' file system is irrelevant, they certainly can access the Android file system, and are installed in the same place as other Android apps.
> Technically, even if you are right in principle,
Yes, I am right, technically and in principle.
> Android would be a group or a family of Linux distributions
Thank you for agreeing. Yes, Android is a group or family of Linux distributions, each with its own version numbers, or even different names: Cyanogenmod, Nokia X, Amazon Fire.
> You saying that the phrase "Linux distribution" has referred to the kernel doesn't make it so.
Linux is the kernel. Various systems also include Firefox distributions, LibreOffice distributions, Apache distributions. Collectively they are properly referred to as 'Ubuntu' or 'Red Hat'.
> That is inconsistent with the way the phrase has been used, which should be obvious. People didn't refer to anything which wasn't an operating system as a "Linux distribution," even if it was something that included the kernel.
You are getting confused again by trying to contort words into meaningless phrases.
What 'thing' isn't an operating system but includes an operating system kernel ? Maybe just the kernel alone, but that is called 'Linux'. Are you trying to claim that Android isn't an operating system?
> Technically, each kernel release was a distributable version of Linux, yet no one referred to those as "Linux distributions."
No. They called the release, "a release", or "an update".
> GNU/Hurd is not a Linux distribution, and I have never heard anyone argue that it was. (Do you know what a strawman is?)
You are attempting to argue that something can be called a 'Linux distribution' _only_ if it includes GNU, and that this has nothing to do with the kernel (as in "absolutely incorrect", "doesn't make it so", "inconsistent with the way the phrase has been used"). GNU and Linux are separate things that can be used in combination with other things, such as GNU/Hurd or GNU/FreeBSD. The _only_ thing that makes it a 'Linux distribution' is the inclusion of Linux, how hard is that ?
> Android's use of an incompatible/conflicting C library puts it in an entirely different position than anything else that you refer to?
Yes, there are at least two groups of things that use Linux. One of them is the various GNU/Linux, the other is the various Android/Linux. There is another hybrid group that has both GNU and Android running on the Linux kernel.
It happens that generally 'Linux' is used to refer to systems that include the Linux kernel and much other software from many different sources. In the same way 'Hoover' is used to refer to many different brands and types of household appliances, or even as a verb to refer to using those. Or 'iPad' is used to refer to any type of tablet computer. I have even heard 'GoPro' used as a verb, in that they would GoPro the area before starting landscaping work.
That doesn't make that usage 'right', or more importantly, doesn't make it the _only_ usage. 'Hoover' can still be used to refer to the company or to their branded washing machines. You don't have to stop using 'Hoover' for a DWT L413AIW3 because confused people would think it was a vacuum cleaner.
You are attempting to restrict the use of 'Linux' so that it is not used in connection with the vast majority of uses for that kernel. It seems that your only motivation is so that Linux is seen as a tiny part of computing instead of being, as it is, in the majority of computer devices today.
> However, the desktop in general is just much better than alternatives for certain things. It will take on a smaller less 'generally used for everything' type of role, but I can't see it dying in the forseeable future.
The sales of desktop computers, whether they are on top of or under the desk, are declining. The usage of desktop computers is also declining as the tasks done in the past on desktops, such as web browsing and email, are being done on phones, tablets or other smaller devices. Most of this is because all recent desktop computers are 'good enough' and won't need replacement for many years, and most mobile devices are 'good enough' to replace their usage.
Declining or dying does not indicate complete elimination, as indicated by 'lingering'.
> Applications for GNU/Linux must be ported to work with Android.
Not at all. The code of most GNU stuff is processor agnostic and ARM versions have been available for years.
The standard C library and others as appropriate need to be supplied along with the programs and a shell (bash, dash, csh, etc). These libraries, which act as an interfacing layer between the programs and the system, would access the Linux kernel in exactly the same way as they do on other Linux distros.
Note: GNU is primarily command line and text software. An interface library is needed to provide display and input, but an equivalent would be required on any other system too.
Several apps are available for Android that provide GNU utilities (eg: Terminal IDE) and languages and 'applications for GNU/Linux' can be recompiled for ARM to run in those user spaces.
> It's clear that Android is a different operating system than GNU/Linux.
Android has quite a different GUI from the usual desktop GUIs of Gnome, KDE, or several others. That is because Android was designed for phones. Similarly, WP7 had quite a different GUI from Windows 7.
GUI programs such as written for X, Gnome (GTK) or KDE (Qt) would need various amounts of porting but Qt4 is available on Android as are X Servers.
"""Most Qt applications should be portable to Android with ease, unless they depend on a specific hardware or software feature not supported by Android. If your application is not using any such feature, deployment is probably the only step that demands some changes to your application."""
Your 'argument' seems to be based on dogma rather than anything approaching knowledge.
> They are all from AT&T. Maybe now SCO?
> This would be the end of Linux etc.
Ironically, it is Google that could lead you to the answer to such questions as the one posed in your title. POSIX is a family of standards that were developed by, and belong to, the IEEE Computer Society.
> And Google just copying the Java APIs like they did must be one of the dumbest things they ever done. Especially as most of those APIs are not very good.
Google didn't 'copy' the API, they _used_ the API, specifically a subset of Apache Harmony.
> And how much does Microsoft owe the POSIX lot?
And to DRI for the CP/M API used in MS-DOS 1.x.
> For jobs the 99% required COMPUTER skills are Windows and Office.
When you finally leave your mother's basement you will find that the world is much more varied than you have ever imagined.
The vast majority of jobs do not even require computer skills at all. Most of those that do require interaction with computers only require basic skills of using a keyboard.
> For jobs the 99% required skills are Windows and Office.
You appear to imagine that 'work' is sending memos to each other and fiddling with numbers until the 'right' answer is given in the bottom-right.
You seem unaware that there are factories, transport, farms, logistics, ... where 'work' is not sitting in a cubicle typing on a computer. Even where 'work' involves accessing a computer this is most often entirely through business applications such as SAP.
But then you probably haven't been out in the real world yet.
> But that's rather different to the idea of using Minecraft as an educational tool in its own right.
'Fighting Monsters' doesn't appear on my curriculum.
> Creative mode only, ... no mods at all other than the Python console.
Exactly. Minecraft on Pi is for learning programming in the schools.
> Minecraft is not meant to be in the first category [learning].
> Erm....Minecraft isn't free, in any sense of the word.
It is available on Raspberry Pi at no cost, ... at the moment.
>> "You cannot buy a new OS/X licence without buying an Apple computer."
> You can buy as many as you like at £14.99 a pop from Apple,
No, you can only buy an upgrade to an existing licence that you bought when you purchased the Apple computer you will run it on. It is not a new licence.
> No more hand rolled distros allowed.
Maybe not on Intel, but there are many other chip makers. There are many other architectures too. Maybe this is the year of ARM on the desktop.
>> "In general OEMs and retailers do not offer Linux*.
> They are individuals like myself who think outside of the box,by assembling their own system & installing any open source O.S that suits them.
In what way do you rate as being an OEM and/or Retailer ?
I have been building systems for myself and my clients for the last 30 years, but that hasn't made them available to the general public to pick.
> You can buy it separately, it's no different to Windows or Linux.
You are confused. It is quite different from Windows or Linux. OS/X only runs on Apple hardware. You cannot buy a new OS/X licence without buying an Apple computer.
If you buy an Apple computer then included in the price is a licence for OS/X of some particular version. Updates to that version are free. Upgrades to the next version may, or may not, have an additional cost.
"""If your iMac has 10.5.8, then you have Mac OS X Leopard. The .8 in 10.5.8 means there have been 8 updates applied to the base 10.5.0 version. Mac OS X Snow Leopard is version 10.6 which you do not currently own, but can be purchased for $29 from any Apple Store."""
> Since FB, Google, Apple, GCHQ already know ABSOLUTELY EVERYTHING about you already
If those companies know everything about _you_, then you are a fool.
They certainly know very little about me: I do not have a Facebook account, I have no Apple pips, I use Adblock, NoScript, Ghostery, RequestPolicy, and others. If you are providing your information to those then it is on a voluntary basis.
With Windows 10, though, it seems these circumventions won't work and others seem to only be partially effective, or are so only until the next update.
> They could choose Linux or Mac OS. They choose not to choose it.
> So retailers choose not to offer much Linux.
In general OEMs and retailers do not offer Linux*, not because they think that it wouldn't sell, but because Microsoft controls the OEMs. If they do not do MS's bidding they could lose their 'loyalty discounts', which would cost them millions. They have to make a choice: all Windows or no Windows.
The average consumer then only has a choice of Windows or Apple.
* there are a small number available if you search hard for them.
> Anybody else notice AMD is up 15% today? What's up with that?
Maybe it was because Microsoft said that new Intel CPUs wouldn't run Windows XP, 7 or 8.x*.
* actually they probably will, but just won't use the new features of the new CPUs.
Developers probably dropped any thoughts of rewriting their stuff to UWP when Microsoft announced that they will run their Android and iOS versions on Windows.
> Why Microsoft?
Doesn't matter, they will still call them 'iPads'.
> Does anyone know if the Linux community has started development work on these chipsets?
When the first x86-64 CPUs appeared, Linux was the first OS to support them.
"""Linux was the first operating system kernel to run the x86-64 architecture in long mode, starting with the 2.4 version in 2001 (preceding the hardware's availability)."""
The initial Windows x86-64 implementation worked on AMD but failed on Intel CPUs.
> a NEW!!! edition of Windows '10' (10R2 ?) that will not run on the MS defined 'legacy' cpu platforms, so that MS only has a single single (x64 ?) code-base to maintain for Win '10'.
That has happened all the time. Windows 1 ran on the 8086. Windows 3.0 could run on the 8086, 80286 or 80386, but Windows 3.1 dropped the 8086. Windows 95 required at least an 80386. NT 4 required a 486 or above. 2000 was Pentium only, and also dropped MIPS, Alpha and PowerPC.
When Vista was released MS announced that it would be the final x86-32 release and all future versions would be x86-64 (but they still continued with 32-bit).
The current Windows 10 (allegedly the same code base) runs on x86, x86-64 and ARMv7.
But that is the reverse of what the article was about.
> rather than copy a good one.
"Copying a good one" is called 'copyright infringement' and is liable to penalties.
> Not VM software, but a complete OS.
Look for ReactOS
> Binary compatibility is a major selling point for PCs, destroy that and you destroy the reason most people buy Wintel boxes.
It would only be necessary to stop Windows 7/8 (and Linux) from _booting_ on the new CPU. Once Windows 10 has booted it can provide all the binary compatibility required - with patented, proprietary emulators if necessary.
The way to get this implemented is to give OEMs a 100% discount for Windows 10 (normally $100 to OEMs) on computers that use these CPUs. This would make the retail price of these $200 cheaper than 'standard' computers. The buyer demand would force OEMs to demand these CPUs from Intel and AMD.
The same happened with WinModems and WinPrinters (GDI) in the late 90s. The machines were cheaper but Windows-only. However, electronics became cheap enough that 'full feature' modems and printers were just as cheap.
Microsoft is trying to get this lock-in back.
> I would hope there will be a lawsuit at some point. If this is game they're playing, there had damn well better be "non-OS'd" machines available.
I don't see many lawsuits where someone bought a Windows Phone 8 and thought they should be able to install Android (or Windows Mobile 6.5). Why would anyone think that a Windows 10 PC should be able to run some other OS?
> No sir, it's not. As the article says, they all sat in a room and decided not to write drivers for an OS which will go end-of-life in 2020 and is still being sold by OEMs.
An OEM version of Windows is licensed to the machine it is installed on when you buy it. Updates for that version _on_that_machine_ will be available until 2020.
Moving that copy of Windows to another, newer machine, or to a newer CPU, is a breach of licence. You have no grounds to demand drivers for newer hardware.
Retail (non-OEM) copies of Windows were not tied to any particular machine, but were specified for the particular range of hardware available at the time. You wouldn't expect a copy of Windows 7 to install on an IBM POWER9 computer or a Raspberry Pi 2, so why would you expect it to work on a 2017 Intel i9 ?
> these old OS's won't recognize or use cpu features outside of the subset defined for the minimum platform requirement.
Motherboards came with CDs. After installing Windows you would install the software on the CD (which had appropriate sections for whichever Windows version it was) and various drivers and utilities appropriate to the chipset and CPU. In this way new features could be added to the base platform. For example Windows XX might have had no mechanism to monitor or control fan speed or CPU temperature but the software on the CD added this*.
Will MS be able to prevent motherboard and chip makers from continuing to add Windows 7/8 software to handle newer CPU features ?
* Retail machines already had this built into the installed software by the OEM.
> What the MS cartel are proposing is some mechanism to prevent you from running an old MS OS on a modern cpu
What they really want to do is to also prevent you from running _non-MS_ OS on a modern CPU.
For example with Windows 10 OEMs can now make 'Secure Boot' permanently on which makes it more difficult to boot another OS. This may be tied to 'loyalty discounts' so if they do this they get an extra few dollars discount.
By convincing Intel to make 'Windows 10 only' CPUs and making 'loyalty discounts' to OEMs dependent on using those CPUs, then the cheaper machines (or more profitable ones) will not only not run Windows 7/8 but also not run Linux/BSD. It may be that the discount will make Windows 10 'free' to the OEMs.
The question is: what would be in it for Intel and AMD ? Will Microsoft pay them to do this? Will MS threaten them with making Windows that will _not_ run on their current chips? That seems unlikely. It is not as if 'Win10 only' CPUs would be cheaper to make, or would suddenly have such a large volume as to have efficiency of scale.
It does seem that this was attempted before. In the mid noughties it is alleged that MS was working on a 'next gen' Windows running on the PowerPC Xenon, as used in the XBox, so that it could make 'XPC' boxes that would not run Linux, or anything else. The system would be .NET based, using managed code running on a CLR. It was supposed to follow on from XP, but they couldn't get it working and so had to throw together Vista from existing bits and pieces.
Microsoft's plan is obviously to change their revenue stream from selling products to services. 'Selling' involves a one-time purchase. 'Services' involves annual or monthly fees (as with Azure or Office 365) or a percentage cut (as with the app store). To make that transition they need to lock in the need for services, such as for updates, or cloud, or the ability to buy, or rent, software.
Note: they did manage to convince modem and printer manufacturers to make 'WinModems' and 'WinPrinters' (GDI) that would only work with Windows. They were cheaper because they had no processing capability, relying on patented, proprietary Windows drivers to do the processing, but once you bought them you were locked into Windows to use them*. Today it is just as cheap to have full printer capability because the cost of electronic components is so much less.
* some WinModems and WinPrinters do have Linux drivers.
Microsoft's first hardware product was the 'Z80 Softcard' that ran CP/M on an Apple II.
"""As Steve Ballmer stated during the Microsoft Surface reveal, the SoftCard was Microsoft's number one revenue source in 1980."""
Before Microsoft was formed, Bill Gates and Paul Allen ran 'Traf-O-Data', which made and sold 8008-based computers for analysing traffic data.