112 posts • joined 20 Jun 2012
Admit it - You think it would be useful
A lot of the pictures are going to be banal, but sometimes it would be convenient to have a camera immediately available to capture illegal actions. I commute by bike, and so many times I wish I could capture pictures of people cutting me off, or blocking the bike lane and the pedestrian walkway.
Re: Free Software is a requirement
For me, IoT without Free Software is a non-starter, because I won't buy it.
Trade secrets? Excluded. Patents, especially on the software? Of dubious benefit to society and should be eliminated. If you want me to buy the thing, then I need to be in control of it.
Free Software is a requirement
The Internet of Things needs to be Free Software, top to bottom, device firmware and all. There's no other way for it to be secure.
These security researchers are wrong that we just need a security focus. The problem is that a manufacturer's product lifecycle goes from sale to end of useful support typically in much less time than a device's service life. And in many cases, they don't even have the ability to push out security updates.
The normal way I hear these things go, you have a chip platform devised by Broadcom or Qualcomm or somebody, with binary drivers locking it to a specific Linux kernel version, and then you have a product design from some obscure Asian manufacturer, and then you have the big brand OEM's customization, finally releasing a product years later. Neither the OEM nor the original chip maker can release effective updates for this setup, and both want to move on to the next thing.
The real solution is for everything to be run by Free Software. Linux has already proven that PCs remain useful long after the end of manufacturer support. The Internet of Things needs the same opportunity.
LinkedIn spams without informed consent
Many people don't intend to send email bombs, but something in LinkedIn's sign-up flow makes mass invitations go out in their name.
LinkedIn does not respect people, but it doesn't have any content that I find important. That's why I'll never sign up for LinkedIn.
Because, while they still use the OpenSSL library, they need the early disclosure to prepare packages for their own users.
Also, being developed in the open, LibreSSL is doubtless already being installed in production systems somewhere.
Re: Ride the City (www.ridethecity.com)
I'm planning a route, and I know that a couple miles of it have a moderate incline that will tire my riding companion. Google says the route is "mostly flat" and doesn't give any elevation data. Ride the City says there's an "elevation gain" of 190 ft, but doesn't say where the gain will be.
Re: @ Andrew Orlowski
Sometimes, Andrew, I wonder why you're paid for what you do.
4G, WiMAX and 5G should be frightening the crap out of the incumbents. Why aren't they?
Mostly, it's because the incumbents have most of the power in wireless. AT&T and Verizon have the most spectrum, as sold by Congress and regulated by the FCC. It's difficult for a competitor to arise. Wireless is a shared medium, so you aren't going to stuff many household-Netflix-worths of traffic through it. I recently tried WiMAX, and the latency is through the roof, and there is a lot more packet loss than wired.
The carriers welcome wireless. It's a way for them to claim that there is competitive broadband. But unlike real broadband, where you really have to work to hit the multi-hundred-GB bandwidth cap, wireless has a cap of like 0.5 GB to 2 GB. A 4G connection lets you use an entire month's allotment of data in less than a day. If you use more data, you pay dearly. And if they can get Congress to allow them to let go of wireline, then they will happily stop maintaining the wires and force everybody onto cell phones, like a third world country.
Stick to your guns: Stop supporting XP
I'm very disappointed in Microsoft. First caving on MSE updates, and now caving on this Internet Explorer update. XP belongs in a museum, not on a PC that goes online.
Seagate not attractive
I used to buy drives at a pretty regular pace, but I stopped during the Thailand Flood Crisis. I'm very disappointed at how long it is taking for prices to come back down. Yes, "is taking," present continuous, because I bought my 3TB hard drive for $90 pre-flood, and I haven't seen any 3TB (or bigger) drive reach that price since.
So, I'm just getting used to having a fixed amount of space, and using SSDs whenever I can. SSDs have dramatic speed advantages over HDDs.
Seagate is also not attractive because their drives have strangely glitchy performance, and Backblaze found their drives to be the least reliable (but cheapest on a per-TB basis), and their SSHD hybrid drives have too small and too slow SSD caching to be worth the expense.
Re: "...grand turnaround plan from Elop that was supposed to save Nokia."
What a revisionist story.
When Elop took control of Nokia, Samsung wasn't even considered a threat. The main reason not to go Android was the very legitimate fear of losing their investment in Navteq. The main reason to go Windows was bizarre (that Nokia would be so big they could influence Microsoft's development), and as it turned out, the naysayers were right. Microsoft's slow development of Windows Phone has hurt Nokia's desirability and their attempts to do phablets.
Windows Phone and Android were not the only alternatives. Nokia was also working on MeeGo, which had a transition plan for their huge Symbian installed base, but was mired in mismanagement. Even after Elop torched his platforms, a small group was working on Meltemi Linux, until Elop noticed and fired them.
With Windows Phone, Nokia was the biggest fish in a small pond. They've managed to sell many phones, primarily by being the only Windows Phone OEM willing to lose money on every device. Microsoft was not "devices and services" back then.
It's good for Nokia's shareholders right now to have a profitable company again, but I'd be surprised if that were the plan. It would have been immensely better to have a profitable company that was also the captain of the industry, and not a 10% market share also-ran.
Hex codes are a good thing
The aversion to hex codes is confounding.
Any competent computer scientist learns hexadecimal. If you don't understand hex, then you shouldn't be holding technical opinions. And average people can't understand normal IP addresses anyway; as far as they're concerned, the dotted quads are hieroglyphs. IPv4 just has shorter hieroglyphic names than IPv6 does.
I find hex codes to be much easier to work with. Each character stands for a unique 4 bits of address. Most allocations are done along half-octet boundaries (prefixes divisible by 4: /32, /40, /48, /56, /60, /64), so each character in the prefix is the same for every host in the network, except for the trailing zeroes in the prefix. Contrast that with IPv4's decimal addresses, where each decimal digit straddles bit boundaries instead of mapping to a fixed group of bits. And IPv4's paucity of addresses means subnets get allocated on awkward bit boundaries.
Concrete example time. Let's say you get allocated 2001:db8:abcd:ef00::/56. Every host on your network will have 2001:db8:abcd:ef00: at the beginning of the address, only varying in the last 16 hex digits, because each subnet is recommended to use 64 bits. If you have more subnets, then the two zeroes at the end of the prefix will change to the subnet address, but otherwise they will all have the same prefix. With the recommended allocation, you have 256 subnets to play with; or you could manually use those 72 bits however you want.
Let's contrast this with IPv4, an allocation of 172.16.64.0/21. Some hosts could have 172.16.65 at the beginning of the address, and others could have 172.16.70, but none will have 172.16.72. Not to mention network masks for hosts that still use those: If you want the final 11 bits to be host address, the mask will be 255.255.248.0, but if you want 10 bits for host address, the mask is 255.255.252.0. You need to do decimal to binary conversions whenever you work with IPv4 addresses. And you have far fewer subnets to play with, or far fewer hosts per subnet.
Hex digits are way easier to use. The vast address space of IPv6 makes it even easier to use. It's not the complexity of the technology that's holding it back, but laziness.
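The arithmetic above can be checked with Python's standard `ipaddress` module, using the same documentation prefixes as the examples:

```python
import ipaddress

# The /56 from the IPv6 example: 256 /64 subnets, all sharing the same
# leading hex digits and differing only in the last two digits of the prefix.
net6 = ipaddress.ip_network("2001:db8:abcd:ef00::/56")
subnets = list(net6.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[1])     # 2001:db8:abcd:ef01::/64

# The /21 from the IPv4 example: the mask doesn't line up with the
# decimal digits, so you end up doing binary arithmetic in your head.
net4 = ipaddress.ip_network("172.16.64.0/21")
print(net4.netmask)                                 # 255.255.248.0
print(ipaddress.ip_address("172.16.70.1") in net4)  # True
print(ipaddress.ip_address("172.16.72.1") in net4)  # False
```

Note how the IPv6 subnet boundary is visible by eye, while membership in the IPv4 /21 is not.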
Re: Running out of addresses might sometimes be a good thing...
Unfortunately, the majority of IPv6 engineers come from enterprises and large research organizations, and are several degrees of separation removed from the concerns of SMB and normal households. So, much of the IPv6 deployment involves manual address entry. Also, I think there's something wrong with your ISP-provided router.
It looks like the real "solution" is DHCPv6-PD. The router receives from the ISP's upstream router an address and a prefix via DHCP. Then it is free to use that prefix however it's configured. To get the DHCPv6-PD assignment, you probably have to turn off any routing fanciness in your ISP-provided router, and use it as a dumb modem. I haven't heard of CPE using DHCPv6-PD to assign subnets within a network.
IPv6 addresses are bountiful, but they're not infinite. A lot of the address space is restricted for various amusing reasons. In particular, fully half of every IPv6 address is recommended to be set aside for the interface identifier. (No more /24, /20, /16, /8: /64 for everyone.) SLAAC depends on that allocation scheme. That leaves not a lot of address space for the average small business. And when a bunch of ISPs allocate only a /60 or even just a single /64, there is no alternative but to wrangle the addresses manually.
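A sketch of what a router can do with a delegated prefix, using Python's `ipaddress` module. The /60 delegation and the network names are hypothetical, matching the stingy ISP allocations mentioned above:

```python
import ipaddress

# Hypothetical /60 received via DHCPv6-PD: only 16 /64 subnets
# available to split among LANs, guest networks, VLANs, etc.
delegated = ipaddress.ip_network("2001:db8:0:30::/60")
lans = list(delegated.subnets(new_prefix=64))

print(len(lans))  # 16
for name, subnet in zip(["lan", "guest", "iot"], lans):
    print(name, subnet)
# lan 2001:db8:0:30::/64
# guest 2001:db8:0:31::/64
# iot 2001:db8:0:32::/64
```

With a /56 delegation the same split would yield 256 subnets; with a bare /64, there is nothing left to delegate at all.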
What about the Chief Technical Officer?
Brendan Eich was not just a short-lived CEO. He was also co-founder and Chief Technical Officer, and now Mozilla has lost that role.
What impact is this going to have on Mozilla? Did the office of CTO actually do anything important? I heard that Eich was known for being "humble," so he may have downplayed his contributions, if there were any, but I'm concerned about Mozilla's future. I'm disappointed that no journalist has covered this aspect.
Google is using Chrome to promote new DRM technologies. Apple and Microsoft are using Safari and Internet Explorer to promote patented codecs. Opera has abdicated from technology leadership. The W3C has capitulated to hostile special interests. We need a leader of technology that advocates for user rights, and Mozilla has provided that leadership. Now that Mozilla's chief of technology has been forced out, and a former chief of marketing put in his place, what impact will there be on Mozilla's technology leadership?
Re: More issues with OpenSSL
I'm going with the theory that the OpenSSL core developers don't deserve more volunteers. SSL is incredibly difficult to do right, and OpenSSL is badly written and badly maintained code. Fixing it is like throwing good money after bad. It should be replaced. The trouble is that it's hard to find another library to standardize on, and to be sure that it's correct.
The discussions also include anecdotes about how hard it is sometimes to get improvements into OpenSSL. But if OpenSSL improves drastically due to Google's involvement, then it may become a good idea to standardize on OpenSSL again.
Re: For sufficiently small values of "wide"
I require a good reason why I would bother learning and using Cobol, when other languages are readily available and widely supported.
It doesn't feel like this is the case with Cobol. GNU Cobol is obviously making things better, but it's implemented in C. I wonder just what is the advantage of using Cobol, when there are so many languages that are better-integrated with my operating systems, and when I personally don't have any Cobol legacy code.
The worst mistake? I think that would be C.
As Poul-Henning Kamp writes, C gave us The Most Expensive One-byte Mistake: null-terminated strings, and with them data structures without built-in bounds checks. Most recently, that type of programming is in the news because of the Heartbleed bug. (Yes, I know Heartbleed doesn't specifically use strcpy; the principle is the same.)
The menace from C is insidious and inescapable. Modern processors are built to be good at running C, not some safe programming language, and runtimes for other languages are implemented in C. For example, in theory, Java is a very safe programming language, but in practice Java is the vehicle of countless security vulnerabilities. Because of so much legacy investment in C-based systems, including Windows, Mac, and Unix, I don't see an easy way out of this trap.
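A minimal sketch of the bug class, transliterated into Python (the buffer contents and function name are invented): the code trusts an attacker-supplied length field instead of the payload's real length, which is essentially what Heartbleed did to OpenSSL's heartbeat buffer.

```python
def heartbeat_echo(memory: bytes, claimed_len: int) -> bytes:
    # Bug: trusts the length field from the request instead of
    # validating it against the actual payload length.
    return memory[:claimed_len]

# Process memory: a 3-byte payload sitting next to unrelated secrets.
memory = b"hat" + b"SECRET-PRIVATE-KEY"

print(heartbeat_echo(memory, 3))   # b'hat' -- an honest request
print(heartbeat_echo(memory, 21))  # leaks b'hatSECRET-PRIVATE-KEY'
```

In C the equivalent overread walks past the allocation entirely; Python slicing merely stops at the end of the buffer, but the trust-the-claimed-length mistake is the same.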
Who expects Microsoft software to run flawlessly?
even if “... a certain non-Windows guest has integration services that were based off earlier Hyper-V protocols, the guest is expected to run flawlessly on newer Hyper-V releases.”
Oh, like anything that Microsoft produced has "run flawlessly." What universe are these people writing from?
Technically excellent people at Red Hat with horrible people skills
It's not just the Linux kernel maintainers. Debian's technical committee almost didn't recommend switching to systemd, precisely because many people have a... different... standard of how to do technical collaboration.
Though, it's alarming that Kay thinks he can just ignore the upstream kernel people. Lennart Poettering is trying to develop kdbus and get it into the kernel, so Linux would have a more useful RPC system. Lennart seems to be siding with Kay here, and Red Hat does maintain its own sets of patches away from the upstream Linux kernel. This doesn't bode well for collaboration.
Gentoo is already forking udev away from systemd. Since systemd is free software, perhaps the solution will be forking systemd at some point, like all those forks that GLIBC and GCC used to get until their respective leaderships changed.
Windows 95 was useful, not great
I remember when Windows 95 came out. Shortly afterwards, I visited some computer stores, and I was assailed by that Windows 95 boot sound from all sides. I interpreted that to mean that Windows 95 was not especially stable. Almost 19 years ago... Feels like a completely different world, one where Microsoft was considered a hero by everyone except the Apple Evangelists (an actual thing, headed by Guy Kawasaki), and people lined up to buy copies of Windows 95 at launch. At least Ctrl-Alt-Del worked reliably.
Windows 95 was slower than Windows 3.1, but I used it anyway. It was the mid-90's: why keep using archaic 8.3 filenames? Windows 95 also had preemptive multitasking among Win32 processes, and its own built-in TCP/IP stack, and Plug-and-Pray so you didn't need to configure devices with text files as much, and it did useful things with that right-click button that exists on every single mouse intended for a PC. It still didn't do anything useful with the middle click, but that was less common.
I think of myself as an optimist. Computers are stupid. Every OS sucks. I hate computers. I think I am an optimist because I consider upgrades to be an opportunity to approach, ever so slightly, the ideal of a computer that actually works for you. Windows XP is insecure and slow and bad at 64-bit and bad for the Internet, so it needs to be eliminated. Windows 8 is terrible, but I think it's better than XP.
"What other computer platform has been more-or-less supported for 20 years?"
Well, the MC6800 series is still represented by the HC08 family of processors. That's more than 30 years now.
I don't think the computing platform should be part of the device. The interface should have open specifications, so whatever platform in the future could be adapted to drive it. For example, the best replacement for the Commodore 64 disk drive might be a flash memory adapter, not an exact replica of the original disk drive with all its original problems.
Re: Keeping Windows XP alive is not good for anyone
"'Do you expect a PC platform to last for 20 years? That's insane.'
Meanwhile, out here in the real world...."
...decisions are made by insane people with money?
Well, "ignorant" might be more charitable, but there's only so much charity I can tolerate.
Re: So does OSX and Linux...
For Mac OS X, that is a valid criticism. Apple wants you to replace everything on a regular basis.
But for Linux, the comparison is not apt. Linux does not have a single support option, because there is not one Linux. There are Mint, and Ubuntu, and Debian, and Red Hat, and Arch, and Slackware, and many others. And there are the non-Linux operating systems. There is less incentive for long-term support of Linux distributions, considering that most people get paid very little for it and release their efforts for free, but if you want very long support there are Red Hat/CentOS and SuSE Linux Enterprise Server.
And, because Linux and most of the software built on it are free and open source, you have the option of downloading the source code and fixing it or contracting it out yourself. That is inconceivable to a mind trained on Microsoft and Apple technologies. That is why Richard Stallman is right in the long term.
Also, Microsoft did not intend to be supporting Windows XP for so long. It was a horrible historical accident, due to the exposure of insecurity during the rise of broadband, and due to the extremely poor fit of Windows Vista. Microsoft is not committing to support any other system for more than 10 years and some months.
Keeping Windows XP alive is not good for anyone
This type of thinking would be so foreign in Microsoft's earlier days. After all, Windows 3.1 came out in 1992, and nobody was calling for it to be supported in 2005, 13 years after it was released. Granted, Windows XP compares much better against Windows 8 than Windows 3.1 did against Windows XP.
Keeping XP alive is bad for Microsoft. It means that Microsoft has lost control of the Windows APIs. As HiDPI screens finally appear, Microsoft needs programmers to switch to APIs that work well with these displays. Otherwise, people have a horrible experience and continue switching to tablets running Android or iOS.
Keeping XP alive is bad for the Internet. Besides the obvious security issues and botnet zombies, XP sucks at IPv6 and web standards. XP also never will support exFAT, and will gradually lose the ability to run newer hardware, as the drivers are written for newer versions of Windows.
Keeping XP alive is even bad for the people who use it. Industrial equipment requires Windows XP? That's so your industry's fault for tolerating proprietary drivers. Everything should have been open, so you could drive it with whatever operating system exists in 20 years. Do you expect a PC platform to last for 20 years? That's insane. 20 years ago, Intel's latest processor was the 100 MHz Pentium, running in a Socket 7 motherboard with maybe four 32-bit PCI slots, some ISA slots, an IEEE 1284 parallel port, and RS-232 serial ports. It's now hard to find a PC with any of these interfaces, though you can get cards and adapters for everything except the ISA slot. It's best to think of the junky Windows XP box as part of the otherwise fancy machine, and woe will befall you when the weakest, least replaceable part finally fails.
Using Windows indefinitely is a horrible idea. It needs contact with Microsoft to be officially activated, which means it requires Microsoft to keep their activation systems running. Microsoft will do so for now, but the day could come when your system fails, and you fix it somehow, but that triggers a reactivation, and Microsoft's servers could have stopped responding. It's better to use a system that doesn't require activation in the first place.
And when can you use this in Android?
It's great to see this in Java, but when will it actually get to Android?
Java 7 was released in 2011, but Android is stuck on Java 6. Only in late 2013 did Android finally gain Java 7 language features, and it still can't use the Java 7 libraries.
Basically, I think of Android as a fork of Java 6.
Re: He's right... and wrong!
Wow, I wasn't even thinking about hard drive firmware. If you're really paranoid, you can employ full-disk encryption on that. Bus master peripherals are trickier...
I was thinking more like coreboot, based on LinuxBIOS. There's no fundamental reason for the motherboard firmware to be proprietary software, when motherboard makers suck at writing firmware and free alternatives exist.
What about DNSSEC, etc?
HTTPS is an inconvenience for the Great Firewall, but since the Chinese government controls a certificate authority and spoofs DNS answers, it's not an insurmountable barrier.
What we need is end-to-end trust. They can start by signing the google.com zone, so a validating DNS resolver will refuse any spoofed responses. They can add the certificates that google.com uses to the DNS record using DANE or similar, so future browsers can refuse fake certificates without out-of-band techniques such as certificate pinning.
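The DANE check described above boils down to hashing the certificate the server presents and comparing it against the TLSA data fetched from DNSSEC-signed DNS. A toy version in Python, with invented certificate bytes; it assumes the common TLSA parameters 3/0/1 (end-entity certificate, full cert, SHA-256):

```python
import hashlib

def tlsa_matches(presented_cert_der: bytes, tlsa_assoc_data: str) -> bool:
    # TLSA usage 3, selector 0, matching type 1:
    # compare SHA-256 of the presented certificate to the DNS record.
    return hashlib.sha256(presented_cert_der).hexdigest() == tlsa_assoc_data

# Invented bytes standing in for the real DER-encoded certificate.
real_cert = b"der-bytes-of-the-legitimate-cert"
dns_record = hashlib.sha256(real_cert).hexdigest()  # published in signed DNS

print(tlsa_matches(real_cert, dns_record))                      # True
print(tlsa_matches(b"der-bytes-of-a-forged-cert", dns_record))  # False
```

The point is that a forged certificate, even one signed by a government-controlled CA, fails the comparison, because the expected hash arrives over a channel the CA doesn't control.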
That's still not foolproof. Clearly, we can't trust google.cn. The Chinese government might decide to run its own DNS root, and outlaw domestic use of the IANA root. With the US finally deciding to get out of the business of running ICANN, the future of the root authorities could come into question.
Humans can't handle the semantic web
Piaget taught that humans reach a "Formal Operational" stage of cognitive development around the time that they reach physical adulthood. Going by his classification, I think most people barely get into the "Concrete Operational" stage, which is supposed to end near puberty.
Semantic tagging is very abstract, and most people don't handle abstraction well. And even if you do, it's a waste of effort to add appropriate metadata when there are no programs to process it. It's just much easier to stick to ad-hoc textual conventions. That's why Google needs all those PhD researchers: to extract the semantic information from the mess of text.
What about Opera?
I guess Opera is not as exciting because it's a private company in the wastelands of Norway, so it doesn't publish its internal workings. And now Opera is dead to me, because they've decided to stop developing Presto and become a thrall of Google Chrome for some reason.
The death of the 1920x1200 screen was very sad, but what I'm watching now are the 4K monitors. They're starting to become affordable, for sufficiently stretchy definitions of "affordable." Sometimes a 4K IPS screen even goes below $1000. If I had plenty of money sitting around, I would so get that.
It has to be unobtrusive
For health monitoring to go mainstream, it has to be effortless. Any little difficulty means an exponential drop-off in the number of people doing it. I think having to wear an ugly watch is a substantial barrier to successful monitoring.
But I just got the first exercise report from the Moto X that I bought a couple of weeks ago. I didn't install anything or opt in, and I certainly wouldn't activate it if I had to do so every day. Now that I have a report estimating how far I've walked and how far I've biked, I find it genuinely interesting.
Re: 2007 hardware obsolete?
My 2004 Pentium 4 desktop is also running Windows 8.1. It's running the 32-bit version, but it's running fine. However, it has an NVIDIA GeForce 7800GT video card. I wouldn't count on the motherboard's video running well.
Re: OSX Mavericks
It's not just an arbitrary hard-coding that prevents Mountain Lion and Mavericks from running.
It's device drivers. Apple is not bothering to support the Intel GMA 900 with 64-bit device drivers, though it does still support the GeForce 9400M. So you can get a newer OS installed, but it will have miserable performance and probably won't display at native resolution.
Re: What are we waiting for?
Not all of the Core 2 Duo systems can run Mavericks.
The CPU can do it, but Apple never bothered to write 64-bit drivers for the GMA 900 video processor on the Intel 915GM chipset. The first "unibody" Macs introduced the NVIDIA GeForce 9400M, which Apple is still supporting with drivers. So, I think some adventurous people managed to get Mavericks to install, but it doesn't do native resolution and the performance is miserable.
This lack of support sucks for me. I was trying to avoid installing Lion on my early-2008 MacBooks, because they have only 2GB of RAM, and Lion reportedly runs much worse than Snow Leopard with 2GB. I didn't think upgrading them to 4GB of RAM or an SSD just so Lion would run well was a good use of limited funds.
What can make you wish for Scott Forstall to come back?
I've been reconsidering Jonathan Ive's actual design taste ever since he replaced all the fonts with spindly light text on low-contrast backgrounds. I thought the home movies with the unreadable white text were amateur mistakes, but now the marketing materials on Apple.com feature that, too. It's hideous.
Now this. Ive should never have been let out of the industrial design lab.
Why so negative about Hi-Def?
In my organization, we have a traditional standard-definition DVR, and it's almost completely useless. We haven't blanketed the outside of our building with cameras, so a single camera has to cover a large area.
Multiple times now we've had perps come in and vandalize some part of our property. Then we look at the recording and say, yep, that's them. That blurry block of pixels. The police can't do anything with this information.
My TV has had high definition for a while now. I anticipate high-definition security cameras some day. Eventually.
And what about services?
This comes so soon after the news broke about Lumias sending info to Redmond. Nokia actually classified the OS as a third-party component when denying that it violated privacy.
It's not so long ago since Microsoft accidentally implemented their Chinese censorship across the globe, too.
Of course, Microsoft tries to follow the laws in the countries where it does business. Even if they are draconian laws from totalitarian countries.
No privacy benefits to Microsoft
This is an awfully bad time to be touting the privacy benefits of being Microsoft instead of Google.
Nokia smartphone leaks information abroad
PowerBook 5300 wasn't that bad, either
The PowerBook 5300 didn't sell very well, especially the high-end model, but it worked decently. Except for the part where they disabled Li-ion batteries, because of the fires that no customer actually experienced, and then didn't enable them again when Sony sorted out their problems. The PowerBook 3400 and first-generation PowerBook G3 batteries were physically compatible, but the PowerBook 5300 wouldn't take them.
The CD-ROM drive was a slight problem, but it could take external CD-ROM drives through the SCSI port. Even then, I didn't use CD-ROMs that often, and I even reused the external drive (originally bought for my 1991-era Quadra) on my PowerBook 3400 instead of buying an internal drive. Apparently, a CD-ROM drive was built for the 5300, but it could take only small discs, and was only seen in that horrid Independence Day movie.
I think the biggest problem with the 5300 was just the narrative that the press wanted to build, that Apple was failing and doomed, that Steve Jobs managed to reverse.
"They didn't innovate, but they didn't fail either, so hooray for them."
Microsoft did innovate. Remember Microsoft Bob? While that was a commercial failure, it did give Bill Gates a wife and family.
Microsoft has been a lot more open with what they're doing than Apple, so you can see their failures. Things like Singularity, WinFS, Longhorn, and Courier would never have made it to the public in a modern Apple. A few innovations actually do get out of the lab, too, such as the big table Surface, not to be confused with the failed tablet Surface.
Apple //c was not bad
The Apple IIc wasn't that bad. It was essentially everything from the Apple IIe, in a compact case including the floppy drive but excluding expansion slots. For ordinary use, that was sufficient. My school had 2 Apple IIc and about 10 Apple IIe in the computer lab, and 1 Apple IIgs. I preferred the IIc's keyboard. I had an Apple IIc at home, too. But I don't know how expensive they were, nor exactly how well they sold.
Re: ...they can be persuaded to switch to a Mac
"OpenOffice is great but it barely can hold up with ancient Microsoft Office 2003, let alone 2007 or newer. LibreOffice is even worse, as it's essentially a features whore (why finally getting these annoying bugs fixed when we can have skins!)."
Spoken like somebody who never uses OpenOffice or LibreOffice. In fact, OpenOffice and LibreOffice are horribly glitchy and slow. But they are legally free, and as long as you're aware of their limitations then you can avoid trouble.
The major difference between LibreOffice and OpenOffice is that LibreOffice actually has a community behind it, so it has bug fixes and new features. Apache OpenOffice is the result of Oracle throwing in the towel on any commercial ambitions for OpenOffice, but being unwilling to join a real open-source community. So, they're getting contributions from IBM, but that's about it.
Almost done with Opera
Well, I for one stopped using their browser when all the technically fascinating stuff became neglected afterthoughts, for example Opera Unite. Now, I'm still using Opera Mini on my phone, because the phone is just too weak to use a modern browser. I will stop as soon as I get a real smartphone.
Re: Things I hope for...
"Have you heard of FreeBSD, Samba and Mono yet?"
Surely you mean OpenLDAP or something, and not the platform from the guy who's so in love with Microsoft technology that he named his clone of .NET after the kissing disease.
Re: MULTIPLE SCREENS!!
My memory of the era is a bit fuzzy, but I'm pretty sure you could extend the display on a 512K Mac by attaching a display adapter to the CPU. As in, open the thing, pull out the motherboard, and clamp an adapter precariously to the pins that attach the 68000 to the motherboard.
I'm not sure anybody actually built a display adapter that did that, though. I definitely remember some adapter designed to be clamped on like that, but I'm not 100% sure what it was.
Re: Strange Article
AppleTalk was great, and I loved how much faster the Chooser was than the Network Neighborhood, but your anecdotes don't seem to jibe with reality.
I don't know where you got your ImageWriter, but I've never seen one with an AppleTalk card installed. From the documentation, and from the drivers that came with the Macintosh OS, I know they existed, but I've never seen one. Likewise, I never bought the software that would let me share my StyleWriter with the other Macs on the network. That was the domain of businesses that actually had enough money to spend. Also, you did have to worry which serial port your printer was attached to, except Apple labeled them Printer and Modem instead of COM1 and COM2.
Multiple screens were nicer than on the PC, but rare. If you wanted multiple screens on a Mac, you needed an additional NuBus card, or later a PCI card. PowerBooks could run only one screen at a time, mirroring at best.
SCSI was nice, but that's what you get when you put workstation technology on a PC. Here are some shortcomings:
1) It was not hot-plug. All my Macs had a copy of SCSIProbe to activate any device that wasn't there when the computer first booted up.
2) It used manual addresses. 7 and 0 were SCSI controller and internal hard drive, respectively. But what about the rest? In the words of some forum philosopher, "WHO FUCKING CARES?"
3) It turned users into amateur electricians, because it required terminators to eliminate reflections.
4) Apple didn't keep up with storage technology. My Quadra 900 had 5 MB/s SCSI in 1991. My Power Mac G3 had 5 MB/s SCSI in 1998. Ultra Wide SCSI (40 MB/s) existed, but was the domain of expensive workstations and servers.
In one way, PCs were even worse than you describe. IBM introduced the PS/2 mouse and keyboard connectors. Now, instead of two different ports for two different peripherals, you had two identical ports that were not interchangeable. Plug the mouse into the keyboard port and vice versa, and you got an error when you booted up. Macs were so much better; you could plug keyboards and mice into ADB in any combination you wanted. Keyboards even had ADB ports and power buttons, so you needed only one extra-long ADB cable to put the computer somewhere far away and get some quiet.
Re: Locked into enforced throw-away
"I've got a decent monitor, but now I have to throw it away and use the crappy one in my new computer."
No, you don't. Most people have pretty crappy monitors, and now Apple refuses to sell an iMac with anything less than a 1080p IPS panel with an anti-reflective coating.
But if you do have a good monitor, you can connect it to any Mac. The iMac can even drive 2 external monitors using only mini-DisplayPort adapters.
"I've got a decent computer, but now I have to throw it away and use the crappy one with my new monitor."
No, you don't. You think you do because it's big and you spent money on it years ago, but even the slowest iMac is faster than most computers I see in people's homes. And nobody is forcing you to buy an iMac. I don't see the point of getting an iMac instead of a nice IPS display if you truly don't want to use the iMac.
I wish my MacBook Pro had a palm rest!
The palm rest is a feature that I unexpectedly miss in my new MacBook Pro. Jony Ive has gone all minimalist industrial in his designs. To keep the lines all straight and clean when the laptop is closed, the laptop's base is now ringed by a sharp edge that cuts into my wrist if I rest on it.
Later passive matrixes weren't that bad
These days, I'm looking for high-DPI IPS or OLED screens, but back in the day I had a PowerBook 190cs. Apple actually continued to use passive matrix screens until they finished selling the budget "Wallstreet" PowerBook G3 in 1998.
The early passive matrix screens were a blurry mess. You had separate brightness and contrast controls, where the contrast varied between washed out and completely dark, with no good image in between. Operating systems included a pointer trails feature, because the screen updated so slowly that you would easily lose your pointer if you moved it faster than 1 mm per second.
The later passive matrix screens weren't so bad. Sure, due to the crosstalk between the rows and columns of the display grid, there was a massive amount of image bleeding, and the colors were horrible. But the display refreshed quickly enough to be usable, and most importantly they were relatively cheap.
Boycott the USA
The USA is becoming quite the hazardous place for a security researcher. I still remember the Sklyarov affair. More recently, Adi Shamir, the "S" in "RSA," has been having a hard time procuring a visa for his lecture tours in the USA. American law allows customs and immigration officials to be jerks at their discretion.
I don't know where a good place to hold a security conference would be. This year, I guess TrustyCon needed to work around people's existing travel plans. But next time they should find somewhere safer.
Re: What we want to know is...
TIFKAM is like Marmite. Some do admittedly love it, but too many people out there do not like it. Too many for a company like MS with around 90% of the desktop/laptop market to just force it unconditionally onto everyone without creating a backlash.
And many people do not like the Start menu. It seems like a major waste of space: you have the entire screen in front of you, yet you have to scurry into a corner to reach important controls. People are just used to the Start menu, because it has been the main interface since 1995.
Note, I rarely use the Start button. I vastly prefer to use the keyboard to open the Start menu/screen.