Why the cloud?
All the better to listen to you my dear
It doesn't matter what they communicate. A system which allows a driver to be inattentive will cause accidents. A system which is in itself imperfect will cause accidents. Both need to be addressed for the system to be safer than a driver alone. So this is a foreseeable consequence of bad design.
An analogy might be a factory with a dangerous hydraulic machine. You could put warnings all over the machine saying not to do certain things while it's running, and someone still will, through stupidity, inattentiveness or whatever. That is why factories are required to install things like safety gates, two-handed controls, sensors etc. that automatically shut down the machine if the operator does something that puts them at risk. A car hurtling down the road at 70 mph is a dangerous machine, and its safety should be treated as seriously as it is in a factory.
Tesla's "autopilot" is actually quite modest and it's easy to see how you might break the problem down and model it - multiple lanes of cars all going the same way, sensors that model the car's surroundings and lane markings, algorithms that maintain speed and distance, algorithms that mark opportunities to overtake, algorithms that brake for or steer around hazards based on proximity, and control of steering, brakes and lights. It's complex, no doubt, but it can be modelled.
But it requires:
a) The computer is able to see all hazards, act in a predictable way and additionally only engage when the road and conditions are suitable. This is clearly not the case.
b) The car forces the driver's attention. Force the driver to hold the wheel with both hands. Force them to touch a pedal in a certain way. Monitor their head and posture. This is clearly not the case either.
It is the failure of a) and b) which causes accidents. A failure of a) is bad enough, but without an attentive human it's a guaranteed accident. This is a foreseeable consequence of not forcing attention, i.e. bad design.
The funny part is that Tesla's self-drive solution is quite modest. The problems facing mostly or fully automated cars are orders of magnitude harder. Perhaps reports of accidents might allow a little reality to creep into the hype about self-drive vehicles.
The simple answer is to pick a minimum version of Android as the cutoff and test against a cross-section of tablets that cover a range of screen sizes, resolutions and performance levels. It's not rocket science.
They don't even have to be *real* devices since they could be virtualized and run as part of an automation suite.
Android is not the same everywhere - different devices have different sized screens, different resolutions and different aspect ratios. Some devices may also lack GPS, a telephone stack and so on. But most of these differences are pretty superficial and easy to deal with providing you write your code properly in the first place and don't make horrible assumptions.
I assume Salesforce hasn't. It's the modern day equivalent of notices on websites that said "This page only works on Internet Explorer 6 and 7 (because it's a heap of crap that makes all kinds of bad assumptions that we could fix but we won't)"
I can spot about 8 things wrong with that snippet.
But on a general point, end-of-line comments are rarely a good idea. There are exceptions of course, but it's usually clearer to put a comment before a piece of code rather than after it, especially if the code has a formatter like astyle run over it.
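A trivial sketch of the difference (the variable names are made up for illustration):

```python
base_timeout, retries, jitter = 1.0, 3, 0.2

# Harder to maintain: the comment competes with the code for column
# space, and a formatter re-wrapping the line can orphan it.
timeout = base_timeout * retries + jitter  # back off before retrying

# Clearer: the comment sits on its own line above the code it
# describes, and survives any reformatting of the statement below.
# Back off before retrying; the delay grows with the retry count.
timeout = base_timeout * retries + jitter
```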
Anyone who has been on a large project knows everyone has their own ideas about indentation, use of spaces / tabs, formatting, braces on end of lines, naming conventions, ordering of #includes, public / private code etc.
You have to have a common coding style or the free-for-all becomes a dog's dinner with a greater chance of bugs being introduced. In addition, patches can become a real mess if someone reformats a file or mixes their own style in with the rest of the code.
I'm hardly surprised that the Linux kernel should enforce a coding style. It already has a document for this, but oddly the network drivers are exempted from the normal comment convention. Clearly Linus has gotten so pissed off with this exemption that he's put his foot down. I don't suppose it would be hard to write a script that fixed the lot in a single patch. Probably that will happen if it hasn't already.
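A rough sketch of what such a script might look like - this is a hypothetical toy, not a real kernel tool, and it assumes the offending style is simply text on the opening `/*` line:

```python
import re

def fix_comment_openers(source: str) -> str:
    """Rewrite block comment openers of the form '/* text' so that
    '/*' sits on its own line, leaving one-line comments alone."""
    out = []
    for line in source.splitlines():
        m = re.match(r"^(\s*)/\*(?!\*)\s+(\S.*)$", line)
        # Skip one-liners like '/* foo */' which close on the same line.
        if m and not line.rstrip().endswith("*/"):
            indent, text = m.groups()
            out.append(f"{indent}/*")
            out.append(f"{indent} * {text}")
        else:
            out.append(line)
    return "\n".join(out)
```

A real patch would need to handle far more edge cases (strings containing `/*`, doc comments, etc.), which is why it would be a single reviewed sweep rather than something left to individuals.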
All those loyal longtime staff are an expensive burden when the next wave of layoffs happens. Maybe they're trying to piss them off enough that some leave of their own accord. The company would save far more than the cost of a pen.
If you're stupid enough to download warez from some dodgy site then you get everything you deserve.
I recently entered America (still there) and was treated to nearly two hours in a queue: 20 rows of US customs posts with only three of them open. Of course, to "speed up" the process they had some electronic kiosks to complete some steps of immigration, except they had roped them all off for some other flight to use, and not ours.
As a final kick in the balls when I return to the UK I'll probably get another 2 hour queue thanks to their equally bullshit e-borders system which can't cope with families.
And that's on top of the inconvenience and bother of ESTA. The only faint praise I can give their system is that it was slightly less awful than Australia's electronic visa system, which takes about 20 minutes per person to complete and only stops short of asking for a stool sample.
Tech companies aren't going to invest any money in the UK when they have no idea what the hell is happening. Same for most industries. They'll just start developing plans to move their centre of operations somewhere else which is part of the EU, e.g. Ireland.
The UK needs to get some certainty into the situation, and fast - e.g. fast-track plans to join the European Economic Area. It's still a terrible choice compared to remaining (all the rules, none of the influence) but it's still better than uncertainty.
I downloaded and burned YDL to disc. I'm probably one of the minuscule percentage of people who bothered with the feature or had any intention of using it. It was neat to get going, but in truth YDL ran pretty badly on the PS3 because there was a hypervisor in the way and the CPU wasn't designed for out-of-order execution. The main use someone might have had for it was to get at the SPUs.
It was hardly surprising Sony dropped OtherOS when a viable hypervisor attack became possible. It would have been refined into a burnable ISO used to boot and root the console.
I was using it without issue on virtually every application. The only one which caused me trouble was Eclipse. For some reason the SWT -> GTK UI didn't open dialogs at the proper size and it was unusable. I had to switch back to X because of that. I hope it's fixed in FC24. Wayland is a long overdue replacement for X and it is tantalisingly close to becoming the default.
A hyperlink CAN be content.
I bet iOS is starting to use up all the space in its fixed system partition so they're shedding some apps in the hope they can go longer before doing something more drastic.
I don't see how it helps users much, since anything in the system partition isn't using space in the user partition anyway. By moving these apps to the user partition, people who install them end up using more of their own space.
I like Twitter but it has started to become quite obnoxious with the amount of ads it puts in my feed. Not just in the list but in conversations.
I realise they've got to make money *some way*, but I question whether how they do it now is the appropriate one. I make sport of it by marking ads as not relevant or offensive. Occasionally I'll slag off the thing in the ad too, particularly if it's for some "freemium" mobile game or bad movie. I'm helping in my own small way to drive advertisers away. #selfdestructive?
"Yes, this sort of regulation is what holds us back from producing the nation's entire requirements for meat products"
It's more to stop producers, stores or restaurants flogging stuff as Stilton cheese or Parma ham when actually it came from somewhere in Bulgaria and bears no relation to the thing it's trying to pass itself off as.
It doesn't stop Bulgarian cheese or ham being sold, but it's sold on its own merits rather than riding on the coattails of someone else's. And if in time it gains a reputation for quality it can register for protection too.
If I recall, it was someone making elderflower champagne who had to change the name to something else because champagne is a protected designation of origin. British food and drink producers benefit from the same rules - e.g. Melton Mowbray pies are protected. At one point Newcastle Brown Ale was protected, and then when the manufacturer moved their factory out of Newcastle they had to have the protection cancelled because they were in violation of their own protection (!).
"Don't forget that "beef" probably doesn't mean steak, or even 100% meat + reasonable fat content. It'll surely include udder, rectum, anus, lips, nostrils, eyeballs, bladder, spleen etc."
McDonald's burgers are 100% beef. In the UK, Ireland and the rest of Europe that means abiding by the EU definition of meat - "skeletal muscle with naturally included or adherent fat and connective tissue" - i.e. cuts of meat which are ground up, chopped and formed into burgers.
Perhaps the definition differs in other regions. The US, for example, is notoriously bad at cracking down on food practices. So-called "pink slime" is mechanically separated meat that has been centrifugally spun to remove fat and treated with ammonia to kill bugs, then reintroduced into chopped meat. Lots of places use it to cut corners. McDonald's did too but apparently no longer does.
"CEO Stephen J. Easterbrook detailed myriad process improvements that have been made to make individual outlets more efficient and therefore make it more fun to buy food at McDonalds."
I expect it's a combination of lots of things and this is just one that the article happened to focus on.
I only have Android so I can't speak for rival systems. Aside from the "loony factor" of talking to your phone out loud, the fact is that Ok Google can be frustrating at times.
It's best for web searches and pretty diabolical for maps and reminders. I've tried dictating posts (like this one) into it and it flubs so many words that I have to heavily correct it.
Processing voice into coherent commands and sentences is obviously hard. But it demonstrates that people should stop drinking the koolaid when we hear about self driving cars, delivery drones or other AI projects. Google can't even get voice recognition in a phone working acceptably. At least your phone won't drive you into a brick wall or drop on your head.
I really don't see why it's Unicode's job to store silly little pictures which change from one week to the next.
The display which turned off to save battery. The display which didn't work in strong glare. The battery which could barely last a day or two. The constant bother of charging. The proprietary chargers and accessories. The lack of compelling apps. The proprietary protocols and ties to phone platforms. The cost. etc.
All these things sunk smart watches. At the end of the day a "smart" watch was just a normal watch which didn't tell the time as well as a normal watch and was considerably more bother to use.
When smart watches make substantial progress on all of the issues above, then they may get some market traction.
"That boot time depends on what you put on that card. My Pi probably boots up in around 15 seconds or so."
Great. But I was referring to something analogous to programming the bit, i.e. booting the Pi up to a desktop and Scratch or similar.
The bit is just on and it's ready. The Pi isn't.
"32kb? Luxury. Kids these days are spoiled rotten enough as it is. When I were a lad, I remember getting my 16K ram pack for my ZX-81 and wondering what I'd do with all that memory! Teaching kids to write small, efficient code is far more useful given that should translate to cheaper/smaller iGizmos"
I reckon kids should still have to type out pages of hex printed on silvery bogroll like I did as a kid. With the added thrill that the characters for B, 8 and 0 look indistinguishable and the program will probably crash as soon as it runs.
If you consider the amount of effort required for a kid to run a Pi Zero and program it vs the micro bit you might appreciate why the latter exists.
A Pi would require at least a USB charger, keyboard, display, HDMI cable, mouse, and micro SD card. And the SD card needs to be flashed with something and takes several minutes to boot up.
All you need to program a micro bit is a USB charger and a phone or tablet with Bluetooth. You can plug it into a PC if you want, but a tablet or phone will do. In fact you don't even need actual hardware, since there is an emulator that works with the tools.
And it's easier for the teacher to mark the kids' work because they can either upload and run it in the emulator, or the kid can bring the device in and simply plug it in to demonstrate it. I expect in time there will even be things like robots, weather stations etc. where you just plug the bit into a slot and it talks over GPIO.
There is no doubt in my mind which is easier for course work in schools. I don't see it as being bad news for the Pi because the micro bit has obvious limitations. It is meant for learning purposes. Those kids who started with the micro bit will graduate onto a Pi in time.
"The UWP concept, "dumbing things down for the lowest common denominator" so that your crippled application runs on ANYTHING, just plain stinks. NOBODY wants a DUMBED DOWN application. PERCEPTION is EVERYTHING."
I really don't know what you're talking about here. Dumbed down? It merely rationalises the functionality that's in Win32, sweeping out a lot of dead or obsolete functionality and organising the rest along the lines seen in other modern APIs - io, file, net, etc. And it does so in a language-agnostic way, so you can write in C++, C# or anything else that binds with it.
The problem with UWP has nothing to do with the concept but with the corner Microsoft has painted it into by tying it to the store. If it weren't tied to the store, the chances are it would be more popular. It could even become a cross-platform lingua franca if Microsoft were bold enough to release it for Android and iOS.
But if you don't want to use it, then no one would force you. I'm sure Win32 will be around for a long time to come.
And decouple UWP from the store. There is no reason it has to be tied, and there are some very good reasons to develop against UWP that the coupling prevents. The APIs for UWP are a lot cleaner and sweep away a lot of the crap that has accumulated in Win32. But while it's tied to the store it's not much use for general-purpose development.
"I have never had a facebook or twitter account but I do use linkedin. "
I use LinkedIn but most of the time I feel more like it's using me. During a contract phase I accepted links from some agents. Big mistake. These agents get a job spec in that says "java" in it and spam everyone who comes up in a search result. Multiply that by every agent who has the spec and it's a lot of spam. It's become a cattle market, and the people on the system have become the cattle to be monetized for the benefit of people like agents.
I've disconnected from the lot of them. If they want to talk with me they can spend one of their precious InMails. Chances are I'll ignore that too but at least it shows some kind of deliberate attempt to interact rather than spamming dozens of people at once.
That link suggests that all the attackers needed to do to find the most common passwords was count duplicates. So 7c4a8d09ca3762af61e59520943dc26494f8941b was 123456, and they could count up the occurrences and crack them.
LinkedIn, a site which should know better didn't even bother to salt its passwords. Not acceptable, not even in 2012.
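The difference is easy to demonstrate. A minimal sketch of why unsalted hashes leak frequency information while salted ones don't (note that even salted SHA-1 is inadequate for passwords today; a slow hash like bcrypt or scrypt is the proper tool):

```python
import hashlib
import os

def unsalted(password: str) -> str:
    # Identical passwords yield identical digests, so an attacker can
    # sort by frequency and crack the most common ones first.
    return hashlib.sha1(password.encode()).hexdigest()

def salted(password: str, salt: bytes) -> str:
    # With a per-user random salt, two users sharing a password get
    # different digests, so duplicate-counting no longer works.
    return hashlib.sha1(salt + password.encode()).hexdigest()

# The well-known digest from the LinkedIn dump:
print(unsalted("123456"))  # 7c4a8d09ca3762af61e59520943dc26494f8941b

# Same password, different salts, different digests:
print(salted("123456", os.urandom(16)))
print(salted("123456", os.urandom(16)))
```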
Does the bug put UK / Western countries at greater risk of harm than intelligence services could justify if they left it there to exploit themselves?
The issue with NULL / null / nil (whatever) pointers is you put the caller, or the thing you're calling at risk of failing. Yes it may handle the null, or maybe it'll assume you didn't feed it garbage and promptly crash or throw some fatal exception. This might not be so big a deal for some app on the desktop, but it might be damned serious if this is a drone flying around in the sky.
Some languages like Rust don't even support the concept of NULL for standard application programming. There is literally no such thing. If you have a function which might return nothing, then you have to use a return type such as Option which is basically an enum capable of returning a value called Nothing. Otherwise if something says it returns a Foo, it is guaranteed to return a Foo.
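A loose Python analogy to that idea (loose because Python's None is still a runtime value, whereas Rust enforces the check at compile time; the function and data here are made up):

```python
from typing import Optional

def find_uid(users: dict, name: str) -> Optional[int]:
    # The signature admits absence explicitly, nudging callers to
    # handle the "nothing" case instead of dereferencing a surprise null.
    return users.get(name)

users = {"alice": 1}
uid = find_uid(users, "bob")
result = "no such user" if uid is None else f"uid {uid}"
```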
Security. The Node Package Manager allows packages to depend on other packages and define a loose dependency rule that allows them to accept minor or major updates in the other package.
There is a way to lock down NPM dependencies - shrinkwrapping. But that is not the default behaviour in NPM, and even if it were, there is no checksumming of packages to ensure that the version 0.3 fetched this time is the same version 0.3 fetched when the shrinkwrap was made. So even shrinkwrapping isn't a solution. At the very least, I think enterprises should not point at the standard npmjs repository, so that if there were a breach they would be isolated from it.
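The missing piece is trivial to sketch: record a checksum when the dependency is pinned, then refuse any later download that doesn't match. A hypothetical illustration (the function and data are made up):

```python
import hashlib

def verify_package(data: bytes, pinned_sha256: str) -> bool:
    """Accept a downloaded package only if its digest matches the
    checksum recorded when the dependency tree was locked down."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

tarball = b"pretend package contents, version 0.3"
pinned = hashlib.sha256(tarball).hexdigest()  # recorded at lock time

assert verify_package(tarball, pinned)             # same bytes: accepted
assert not verify_package(tarball + b"!", pinned)  # tampered: rejected
```

(npm has since grown exactly this, via integrity hashes in its lockfiles, but that postdates this complaint.)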
See the other story about Walmart.
The USA still(!) uses swipe machines for payments. When the customer swipes, the number goes through the store's own systems and is authorized by the payment processor. The card details can be skimmed or the store can be hacked and both allow the details to be stolen. And of course cashiers never bother to check the signature so stolen cards are easier to use too.
Chip and pin might not be perfect but it takes the store out of the loop. If the store doesn't secure its database then it's no great loss because the card details aren't in there to steal. All that will be there is some kind of transaction token and little else.
Anyone producing a €500 note would instantly raise suspicions about the person holding it. Even the €100 note is relatively rare. I expect most criminals would prefer €50 notes since they're the highest value note in common circulation.
The price of rent has gone up because not enough houses were built. It'll swing the other way in time. Prices are certainly nowhere close to what a place in SF would cost. Not by a long shot.
Anyway, there are other places in Ireland where businesses should locate. The likes of Galway, Cork & Limerick all host a lot of tech firms and aren't suffering anywhere like the same kind of rent hikes as Dublin.
Z-push is an open source implementation of ActiveSync, so I assume if that's what is necessary then there is an option.
Exchange also supports MAPI so Thunderbird could potentially sync folders, email and appointments via that.
LibreOffice is competing with Microsoft Office which includes Outlook in some configurations. It'd be pretty handy to provide Thunderbird as an option particularly if it included a scheduling plugin. Better yet if it grew sync backends so data could be synced with Exchange / Domino servers.
"Plenty of people complain about systemd"
Yes they do, and it's usually for specious, wrong or refutable reasons. Or because they're trolling.
"A simple but well tested and highly reliable component has been replaced with various iterations of "ooh! shiny shiny!"."
Linux has always been about reinvention. How many times has the kernel been rewritten? How many desktops are there for it and how many times have they been rewritten? How many calculators, browsers, file managers and all the rest are there for it? Even now there is a concerted effort to replace X with Wayland (or Mir). There is barely a part of the core which hasn't been rewritten multiple times to improve performance or to remove some arcane, baroque, incomplete or broken behaviour. Why should the user-land bootstrap be exempt from this?
Regarding upstart, it was an improvement on sysvinit, but because it was event-driven it still tended to start things unnecessarily. e.g. network-manager's conf listened on dbus to start, but just because dbus ran didn't mean anyone wanted network-manager. Systemd is dependency-based: services are launched explicitly and systemd ensures all their dependencies are started first.
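Those dependencies are declared in unit files rather than inferred from events. A minimal hypothetical unit (the daemon name and path are made up):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical example)
[Unit]
Description=Example daemon
# Started only when explicitly wanted, and only after the network
# dependency it declares has been brought up first.
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/mydaemon

[Install]
WantedBy=multi-user.target
```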
Secondly, systemd isn't new software. It's been in some distributions for six years now, and even enterprise dists like RHEL have used it for two. They use it because it is reliable, it fixes longstanding issues with sysvinit/upstart, it enforces security via cgroups and minimal privilege, and it's more efficient.
GNOME's software app is pretty dreadful too. It can become unresponsive, displaying a bunch of ellipses on icons, lock up if the package manager is busy doing something, and generally gives unhelpful error messages or no feedback at all.
It also assumes users are only interested in installing end-user applications, so it's not a package manager. Something like Synaptic would be more useful for admins, but I'm not really sure why a high-level app store and a package manager can't coexist in the same software, since they need access to the same functionality.
Ubuntu made systemd the default in 15.04 so there is no "painful transition". It's happened and curiously the world did not collapse. The same is true of virtually every mainstream dist. If you have reason to run a script to launch a daemon you can still do it.
And it's important to note that the old way of starting services in Ubuntu since 6.10 (and Fedora since 9) was upstart, not sysvinit. Not that you would know it from the usual whining about systemd vs sysvinit in this thread.
"Improve boot times? PUHLEASE! If (IF!) I turn off a system (which is a rare event) it takes less than 1 minute to boot."
So please clarify if you're using systemd or not. If you're not using systemd (which I assume from your tone), then what would be the boot time if you were and do you care what systemd does if it clearly doesn't affect you?
And if you ARE using systemd what would be the boot time if you weren't and why are you getting so worked up when your system goes 6 months without a reboot?
And your server scenario is hardly representative. Certain servers may reboot infrequently, but the same is not true for workstations, laptops, embedded devices, virtual machines etc., where startup times are important and sometimes critical.
"Logging integrity? Never had an issue with the integrity of the logs."
And how do you know you've never had a problem unless you can verify the integrity of the logs in the first place? In systemd you can type "journalctl --verify" and see. If you're paranoid you can even enable forward secure sealing, so that groups of messages are sealed and silently tampering with or corrupting the files becomes extremely difficult.
"How do BINARY log files improve things?"
Forward secure sealing, indexing, metadata, searching etc. If you want text it is trivial to present it as text by typing "journalctl -n 100" or whatever. And commands like dmesg are still there too.
"Whose brilliant idea was it to disable /var/log/messages in Mint</sarcasm>? You can (still) re-enable it..."
Why not ask Mint? Maybe they assume that someone who wants to look at messages would be capable of typing journalctl and seeing them.
"People were annoyed that it was one massive bundled binary rather than a lot of smaller components which could be maintained independently"
The thing is, it isn't a massive bundled binary. It consists of dozens of executables that have well defined purposes relating to their own area of concern and run with minimal privileges. And systemd doesn't implement an NTP daemon, but it does have a service that allows it to synchronise the local date & time with a remote NTP server during bootup.
This is bad how exactly?
"It's an indictment of OS designers (if not humanity in general) that anyone can still form the phrase 'improve boot times' in the year 2016."
Er what? Most Linux dists have used systemd for years now, and before that the likes of Ubuntu were toying with other ways to concurrently launch system services. I was comparing the current situation (fast boots, no hacky launch scripts) to what it was like before and what some people appear to be yearning for.
"That it's still being done in such a time-wasteful manner is indicative of something deeply negative."
It isn't being done in a time-wasteful manner. That's the point. Most Linux dists have adopted systemd because it brings the OS up into a usable state in an efficient, timely fashion.
It's trolled and complained about by a vocal minority. Other people either don't care, or appreciate it for what it does to improve boot times, logging integrity and a bunch of other things.
I use CM13 on my OnePlus One and the milestones have been pretty good. Flashing a phone is certainly not for the fainthearted. Upgrading from 12.1 to 13 totally broke it (despite the phone seeing the update and notifying me of it) and I spent a good 3 or 4 hours wiping various system / cache partitions and restoring gapps to get it back and running. Another milestone appeared the other day and that applied fine.
I prefer phones that track vanilla Android and if they customize at all then they do so with a light touch. There might have been reason to customize back in the 2.x age, but Android has a perfectly usable vanilla experience these days. Aside from bringing consistency to the platform it also increases the likelihood of firmware updates because there is less stuff in the custom firmware to maintain and test each time.
Probably the best near-vanilla experience would be something like Cyanogenmod. Lots of minor changes here and there but close enough to track and merge AOSP. Even CM have dumped some of their apps and tweaks, presumably because the core functionality has improved sufficiently to render them unnecessary.
I'm amazed Maplin is still trading. Literally nothing they sell can be described as value for money, and in some cases their stock sells at a 5-10x markup over internet prices. I assume most of their customers are reasonably tech savvy, so I don't understand how they get away with it.
"Not really - analog storage has significantly different performance at both ends of the audio spectrum and - assuming they don't mess up the digital to analog cutting translation when they make the LP - you really do hear a quite different performance."
The proper way to verify the sound output would be an ABX test between the master and the LP, and between the master and a CD / FLAC / WAV. If you can tell the difference it's because the audio has been distorted or has otherwise failed to capture the frequencies in the master. And that's BAD, not good.
Most vinyl is cut from digital masters anyway. So all the people proclaiming it's richer / warmer / whatever are simply deluding themselves.
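Whether an ABX score means anything is just binomial arithmetic: if the listener truly can't tell, each trial is a coin flip. A small sketch of the significance calculation:

```python
from math import comb

def p_value(k: int, n: int) -> float:
    """Probability of getting at least k of n ABX trials right by
    pure guessing (each trial a fair coin under the null hypothesis)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# 16/20 correct happens by luck well under 1% of the time, so such a
# score would be real evidence of an audible difference.
print(round(p_value(16, 20), 4))  # -> 0.0059
```

Most casual "vinyl sounds warmer" claims have never been put through anything like this.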