Re: I generate the licenses..
I can think of two good use cases.
Someone who is often away from home on business and doesn't know ahead of time when they'll be back. Turn the heating on before leaving for the flight home.
Someone who has a holiday home, perhaps in a snow resort. Turn the heating on before getting in the car to drive there.
A quick search of the ISO 3166 registry shows that EU was added as an "exceptionally reserved" code in March 1998, at the request of the ISO 4217 maintenance agency, so it could be used in ISIN numbers. Those are the numbers which the finance industry use to identify share holdings.
This is 7 years before ICANN launched the .eu TLD in 2005.
If you're going to write (yet another) anti-ICANN polemic, you should take the trouble to research your points. (I'm not a fan of ICANN, BTW.)
"Exceptionally reserved" codes are the ones in the standard that don't meet the criteria for being countries. For example, some are for islands, requested by the International Postal Union. "UK" is another such code, requested by the UK government.
Cerf and Berners-Lee are engineers: why would anyone assume their opinions on geopolitical subjects like this are worth listening to?
In the case of Berners-Lee, we *know* he could not spell "referrer". Do I think he's read more widely than I have on history, totalitarianism, politics and the social issues driving what's happening online? No, in fact I have no reason to think he's better read even than the average person.
I'm currently sporting bandages on both thumbs after typing too hard and long on mine. This isn't the first time. There just isn't enough travel and feel. So I've gone back to the "spare" keyboard (bought for a Windows build that hasn't happened), which has mechanical Cherry switches but unfortunately also has all the key legends printed on the front of the keys instead of the top. Probably 15-year-old boys think that looks cool, and they probably don't have all kinds of financial accounts that look unkindly on mistyped passwords and therefore ask "secret" personal questions.
I quite liked the later VT220, but the terminal I used and liked most in the VT100 era was the Teleray 1061, which I believe had Hall effect switches. The day came when it was announced all the terminals would be replaced by ICL ones with horrid rubber dome keyboards and ghastly green displays. I appealed to the boss and kept my two 1061s. More of them failed, but with some horsetrading I kept the last two in the building. Then one morning after yet another repair, there was one, with the sickly green glow of an interloping ICL beside it.
"There's something called the sockets API. It's remained unchanged pretty much since the 80s..."
Funny, getaddrinfo() and kqueue()/epoll() weren't there in the 1980s. So if you want IPv6 support and to avoid the well-known performance problems with select(), you have to use more modern code.
Assuming you have getaddrinfo(), does the implementation and version you have support IDNA? Do you need to do the punycode conversion in your app or is it done for you inside getaddrinfo()?
Have you ever tried implementing Happy Eyeballs using just getaddrinfo() and BSD sockets? I have, and it's ugly, due in no small part to the lack of an async interface in getaddrinfo(). Far easier just to call the complete, transparent implementation Apple provide... which does not use the BSD API.
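To make the getaddrinfo() pain concrete, here's a minimal sketch (in Python, whose socket.getaddrinfo() wraps the same C call) of the naive approach: resolve, then try each returned address in turn. The function name is my own; this is deliberately the crude sequential version, not real Happy Eyeballs.

```python
import socket

def connect_first_working(host, port, timeout=2.0):
    """Resolve host and try each returned address in order.

    A crude, blocking approximation: real Happy Eyeballs (RFC 8305)
    races IPv6 and IPv4 connection attempts concurrently, which is
    exactly what's awkward when getaddrinfo() itself blocks.
    """
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s            # first address that answers wins
        except OSError as e:
            last_err = e
            s.close()
    raise last_err or OSError("no addresses returned")
```

Note that every failed family costs you a full timeout before the next one is tried, which is precisely why the concurrent racing of RFC 8305 exists.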
How about access to the system certificate store? Not even thought of in the 1980s.
DNS discovery? Sure, they could include a complete async DNS implementation in Chrome (and, for all I know, do) but there's one written by Apple just waiting.
They probably implement HTTP using a library but Apple give you that too. Again, not in the BSD API.
BTW Apple do support most of their networking improvements in the (what they regard as legacy) BSD API, but there are a few things only supported in their own implementation. Once you start trying to support iOS as well, there's no contest: you can't turn on the 4G radio using BSD, so although present it's essentially useless.
And there you have it—"the software may be insecure". The big browser makers, including Google, Apple and Microsoft are collectively forcing the web in general to abandon insecure practices, particularly bad cipher suites and old versions of SSL. This is happening in the browsers themselves, and also at the OS infrastructure level (certificates, HTTP/2).
These companies are quite open, in technical talks and blogs, that this is their aim.
Harsh as it is, I don't want people running old insecure systems online. It's the same principle as forcing old unsafe vehicles off the road. It is bad luck for those individuals but necessary for the greater good.
If they don't aggressively nuke these installations you will get the situation where you have islands of outdated clients talking to outdated servers and modern clients unable to participate. That was the situation for years with IE6 and then with Flash, and Google, Apple, etc. will be well aware of that pitfall.
It's doing networking, and Apple have been fairly aggressively tinkering with their networking stacks (yes, plural, there are multiple ones). For example, after Mavericks the entire domain name resolution subsystem was replaced. Apple also sunset insecure stuff like WEP, SMBv1 and old SSL versions before almost anyone else (because they can, and some big player has to).
I don't know how much of the networking stack Chrome implements for itself, but it's certainly possible there's some API in Yosemite or later they want to use.
This is before we get to other API changes in the UI etc.
So, yes, a program of Chrome's size and complexity probably *is* "intricately linked" to the OS, kernel and other supplied components.
Probably your comment wasn't meant seriously, but just to make things clear:
Australia is one of the most urbanised Western countries, with most people living in half a dozen cities, 5 million in each of Melbourne (where the warehouse is) and Sydney. Melbourne has a well-developed motorway system which also links in to interstate motorways, the largest container port, an airport with no curfew, and (with a 3km gap) the ferry to Tasmania. Many other companies distribute nationally from Melbourne. Some (like the supermarket chains) do their own shipping, and others use one of several logistics companies which offer overnight service to all the eastern cities. If you need an unusual part for your washing machine in Sydney, for most brands it's going come overnight to your house from Melbourne.
Of course no overnight to the outback, but nobody expects it. A couple of days to Perth or the "seachange" places along the east coast.
Only places like the Netherlands would be easier...
It isn't the amazon.com site that concerns me, but the European sites: principally amazon.co.uk, but also amazon.de and amazon.fr. There are many classical music releases in Europe that, when they can be had at all on the .com site, cost double the price of the .co.uk site (and of course aren't available at all on the .au site); that's not surprising, since the classical record labels are run from Europe and that's where the music originated. Likewise, there are cinema and TV releases of varying obscurity, and Australia is in Blu-ray region B.
Delivery from the UK has usually also been much faster than from the US, typically a week. Although the last one took a month—perhaps they were softening us up?
All statements so far have been that we'll be able to order most items from the .com site via the .au site. Not a word yet about items from the .co.uk site. And no sign of the email from them either.
Given that only fresh food is exempt from GST, and it can't be imported by mail, how hard can it be to charge 10% GST on anything shipped to Australia? I think this has more to do with trying to make their Australian warehouse viable.
"Happened to have cash and be in the right place at the right time". It seems to be a common misconception that no skill is required for photography. In fact, it's one of those things that is easy to do badly, and hard to do well. So all those people who've dabbled a bit, but haven't yet mastered it well enough to recognize a good photo from a bad one, think there's nothing to it.
That ought to sound familiar to a lot of people here. Writing software is another thing people think there's nothing to, because they've done it a bit. They know nothing about algorithmic complexity analysis, how to structure large programs, etc. They don't know what they don't know.
So you not only have to be in the right place at the right time, but if that photo is to have monetary value you have to know how to make a photo someone will pay for, reliably.
And being in the right place at the right time also requires preparation and work. That guy who has the photo of the #1 band from back when they were obscure probably has it because he spent countless hours following bands of all types and picking good talent from bad. The wildlife photographer who has the photo of a now rare or extinct species in the wild has it from spending days and sometimes weeks or months studying it and patiently waiting for the good shot. Unless you propose paying photographers stipends, allowing the occasional shot to be profitable years later is about the only monetary incentive possible for this type of thing.
It may not be a big deal in the UK, but if your car were bricked in an isolated area of Australia it could certainly kill you: you could simply die from heat and dehydration. I suspect the same might be true in parts of Scandinavia, but from the cold.
Now consider the scenario of all identical models on some outback road being bricked due to the same bad update. A very large proportion of vehicles in outback Australia are a few models of 4WD. Suddenly the failsafe of someone coming along and getting you to safety (before the water you did have in the car runs out) has also failed.
While I agree with the general thrust of your post, you haven't rigorously made your point in the last paragraph. Suppose for the moment that Apple succeeds in their apparent aim of making it impossible for anyone but Apple to repair their products. Then the failed parts they pull out of them will all get recycled according to the "fully closed cycle" policy they have so publicly nailed to their mast. In contrast, when most end users repair an electronic item the failed parts go to landfill.
Apple also, at least here in Australia, accept their products back for recycling at the end of their life. I just rocked up with my dead polycarbonate Macbook to the local mall. At the time there was no government or council-operated recycling of electronic items; they usually went to landfill.
This is not to say the way these keyboards are designed is not consumer-hostile. They should just unbolt from the top of the laptop. Apple could achieve 100% recycling by refusing to sell you a new keyboard module unless you brought the old one in for recycling.
So now we have the situation that a small, new developer with no "reputation" is presented by Microsoft's software as being less trustworthy than malware which has primed it with some benign installations.
Does no-one at Microsoft ever "wargame" their security systems before sending them out? Or is it theatre?
These are almost all mobile numbers, and mobile numbers do not have "area codes" in Australia. They all start with 04 (02 is NSW, 03 Victoria etc.). Long, long ago, the next two digits were the mobile provider, but when numbers became portable between telcos that nexus gradually fell away.
So you need to enumerate at least 8 digits.
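The arithmetic behind that claim: Australian mobile numbers are 10 digits and all share the fixed "04" prefix, so an enumeration attack has to cover everything after it.

```python
# Australian mobile numbers are 10 digits and all start with "04",
# so brute-force enumeration must cover the remaining 8 digits.
PREFIX = "04"
remaining_digits = 10 - len(PREFIX)
candidates = 10 ** remaining_digits
print(candidates)  # 100000000, i.e. 10^8 possible numbers
```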
It might be true that there are multiple microcode architectures for the various chip families, which would mean the job would need to be done multiple times, but probably all the server-class chips use the same one. Even if not, it could be done by different engineers, so it should not affect the elapsed time.
Intel's x86-64 CPUs are more complex than CPUs need to be, in order to support backward compatibility with the architecture. Their whole marketing strategy is "the complexity doesn't matter, we can do that and make it work". Well, it turns out they couldn't and didn't, but in the meantime potential competitors either have reduced market share or were never developed.
Yes, well, it was fast but according to reports it also broke SMB sharing between High Sierra machines.
The workaround is (drum roll) to use sudo at the command line to run some obscure utility in libexec.
It seems to me not so much a lack of testing—probably testers would never have thought of that anyway—but someone monkeying around in the code without having a deep understanding of how it works.
High Sierra was supposed to be a maintenance/performance release, with a new filesystem and window manager. It's hard to see how any of the touted changes could have required messing around with the login logic. Someone needs to put an iron fist down and limit the changes in each release to what's necessary, and in particular to forbid random reimplementations of modules just because they're not in Swift or not yet common to iOS.
This has very little to do with Unicode. The UTF-8 sequence quoted in the article actually represents code point 0xA0, which is the good old nonbreaking space which has been around since ISO 8859-1 and its mutant offspring Windows-1252.
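A quick check bears this out: U+00A0 is the same nonbreaking space ISO 8859-1 has always had, and in UTF-8 it becomes the two bytes 0xC2 0xA0.

```python
nbsp = "\u00a0"                               # NO-BREAK SPACE, inherited from ISO 8859-1
assert nbsp.encode("utf-8") == b"\xc2\xa0"    # the two-byte UTF-8 sequence
assert nbsp.encode("latin-1") == b"\xa0"      # the single ISO 8859-1 byte
# Visually indistinguishable from an ordinary space, which is the whole problem:
print(repr("pay" + nbsp + "pal"))             # 'pay\xa0pal'
```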
As for "figure out a way", the obvious would be for Google to check the developer names against actual identity, personal or corporate. I doubt any country would allow a corporation to include a nonbreaking space in its name or mix scripts in an ambiguous manner.
This is a machine well suited and often used for a home cinema, yet doesn't have the more modern Kaby Lake processor which has hardware support for 4K.
I get that Apple want to be a services company and sell you a 4K Apple TV instead, but the price difference should be enough for a 4K-capable mini not to threaten that strategy.
Probably no-one at Apple noticed because they're all still using the command line instead of Disk Utility for everyday volume maintenance, and the bug isn't present in the command line version. Everyone uses command line diskutil because several versions ago (I think in El Capitan) Disk Utility lost essential functionality. If, for example, you wanted to set up software RAID (e.g. mirroring) you suddenly had to use the command line. If you've been routinely doing that for two years or more and you're a developer, of course you're going to continue instead of learning High Sierra's new Disk Utility.
The real disgrace is that El Capitan ever got out the door with that neutered Disk Utility. Lots of people in the creative industries - photographers, video editors, animators etc. - had a need for and used RAID, especially with the popular cheesegrater towers. To expect those kinds of people to use the command line was absurd.
So this isn't just a matter of poor QA on High Sierra - although it is that - but poor software development plans as set out by senior management. It was a conscious decision that power users, including Apple's own developers, would not be "eating their own dog food".
For domestic political reasons, the present Australian government would do anything not to make the CSIRO, especially its Hobart scientists, look good. The government originally proposed deep cuts to the Marine and Atmospheric Sciences Division because there is a faction of climate change deniers within the ruling parties. After huge domestic protest which went nowhere, they were eventually dressed down in a lead New York Times editorial and somewhat scaled back the cuts. So, you see, for them to have spent hundreds of millions of dollars looking in the wrong place and then be told *by the CSIRO* "no, it's over there" looks bad. It would look even worse if they resumed the search and found it where the CSIRO says it is.
So of course they won't resume it.
(I seem to recall reports in the media here at the time casting doubt on the Inmarsat position. This was during the period when the black box pinger was still running, and there were those who said the search should not be moved from where it was to where Inmarsat's data pointed.)
Done properly, they guard against "look-alike" URL phishing.
Suppose you meant to go to www.ibank.example.com but instead ended up at www.lbank.example.com. You might not pick this error up when checking the URL bar, because, as is well known, the human brain automatically corrects for this type of error (it's why you can proof for typos and still miss them). If you copy and paste, the impostor web site has your password. But 1Password, at least as I have it configured, does not offer the password in its right-click menu; all you will see is "Generate", because lbank.example.com is not in your vault.
Not all available password managers get this right, but 1Password is one that does.
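The principle can be shown in a few lines. This is my own deliberately strict sketch, not 1Password's actual matching logic (which matches on the registrable domain, among other refinements), but the defence is the same: credentials are offered only for hosts already in the vault, so a look-alike gets nothing.

```python
def offer_password(vault: dict, current_host: str):
    """Offer a stored password only when the current hostname is
    already in the vault.  A look-alike host such as
    'www.lbank.example.com' matches no entry, so the phishing site
    is never shown a credential -- unlike copy-and-paste, where the
    user's own error-correcting brain is the weak link.
    """
    return vault.get(current_host)   # None for any unknown host

vault = {"www.ibank.example.com": "s3cret"}
assert offer_password(vault, "www.ibank.example.com") == "s3cret"
assert offer_password(vault, "www.lbank.example.com") is None
```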
@beddo: I don't know what you mean by "2FA password", but I assume it's the token. And it is *not required at all* with an application password. The "something you have" is the device itself, which is why a separate application password is issued for each device and not recorded (you copy it straight into the device and then close the web page that generated it).
In my case, the 2FA token is valid for only around a minute, because it uses TOTP. I log in weekly using the web interface to have a look at the spam folder and to read some less important messages that I sort to IMAP folders using a server-side sieve script. Normally I read mail on a desktop using a client which has an application password twice daily, and I don't need my phone near me for a TOTP token. If the machine were to be stolen, it could not be used to access my email, because when I'm away it's locked using my Mac login password, and this "locks" the encrypted macOS keychain on which the mail client stored the application password. Although my Mac login password is reasonably strong, it still needs to be memorable so isn't as strong as some others I use. So, if the machine is stolen, the reasonable login password should slow the perpetrator down long enough for me to invalidate the application password using the server's web interface and I'm golden. And after I do that, I can still use all my other devices (and their application passwords), or my master password and TOTP, to read my mail.
It doesn't matter if there's no 2FA support in your IMAP/POP client, because 2FA systems typically require an "application password" for those clients that is automatically generated and then copied into the client's configuration. Because the user doesn't get to choose the application password, if properly implemented it will never be weak. Because it can be invalidated or reset from the server without affecting the master password/2FA pair, the user doesn't even need to know what it is or record it.
This isn't theory: the paid email service I use works exactly like this. And, it isn't Gmail, but I believe that works the same way.
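For anyone curious why a TOTP token is only valid for a minute or so: the code is an HMAC over the current 30-second time window, so it changes every step (verifiers usually tolerate one adjacent step). A minimal RFC 6238 implementation, using only the standard library:

```python
import hmac, struct, time
from hashlib import sha1

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP using HMAC-SHA1 (the common default)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Check against an RFC 6238 Appendix B test vector (8 digits, ASCII secret):
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Because the secret never leaves the server and the phone, stealing the desktop client (and its separate application password) gains nothing toward generating tokens.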
The fundamental problem here is that a container for internal state for NTFS appears as a file in the file name space.
The ODS-2 Files-11 file system format used by VMS (a development of the earlier RSX-11 ODS-1 format) had exactly the same conceptual mistake, with delights like BADBLK.SYS and INDEXF.SYS in the root directory. Indeed, INDEXF.SYS is the analogue of $MFT. It's not surprising that NTFS continues this, because ODS-1 and ODS-2 are said to have been designed by Dave Cutler, whom Microsoft hired as the NT team leader.
It's disappointing, though, that no lessons were learned. Perhaps memory is playing tricks on me after 30 years, but this locking exploit sounds awfully familiar to me from the days when my job required passing an eye over VMS security updates before we applied them. At the very least, a good "second system" design should have cleared this cruft away.
I tried to update my Vista laptop back when this patch first came out around March (it was in the last batch of fixes before EOL) and shut the thing off after it had been "Checking for Updates" for 26 hours with no network activity. It wasn't the first time I'd attempted to get the thing up to date, either, with similar results.
The slowness isn't Microsoft's servers, but some kind of exponential algorithm in the updater client. There's said to be a registry patch+hotfix to fix 7, but nothing for Vista.
I got curious about what other undesirable application/* types might be registered, so I looked at the actual registry at iana.org. And application/hta isn't actually there! I had believed the Wikipedia article, which lists this media type. It seems it's just being used unofficially instead of the standards-compliant application/x-hta or application/vnd.ms.hta.
I had to Google what application/hta and .hta were, but when I had - what moron thought it was a good idea to invent a new file extension for an executable? Image headers have been around for decades, at least back to the 1970s. Even Unix (which doesn't have hard typed file extensions) has them for executables, in the form of magic numbers and the #! which will be in every executable script. Given this is Windows-only, this should have been a .exe file with a different header.
And then there is IANA, which registered it as application/hta. Sorry, no. MIME should be segregating executables and scripts as a major type, say executable/*. This, too, should have been obvious at the initial design but I guess they were blinded by a desire to put scripts under text/*.
If either of these mistakes had not been made, it would be a lot easier for anything embedding an executable to be flagged or blocked. As it is, each new bad type of executable has to be blacklisted.
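The kind of check a proper header would enable is simple content sniffing on well-known magic numbers, instead of trusting the extension or the declared MIME type. A sketch (signature list abbreviated, helper name my own):

```python
# A few well-known executable "magic numbers" (leading file signatures).
MAGIC = [
    b"MZ",        # Windows PE/COFF executables (.exe, .dll)
    b"\x7fELF",   # Unix/Linux ELF binaries
    b"#!",        # Unix script shebang
]

def looks_executable(data: bytes) -> bool:
    """Classify content by its leading bytes, not by its extension or
    declared MIME type -- the whitelist-style check the headers make
    possible, instead of blacklisting each new executable type."""
    return any(data.startswith(sig) for sig in MAGIC)

assert looks_executable(b"MZ\x90\x00")                      # PE stub
assert looks_executable(b"#!/bin/sh\necho hi\n")            # shell script
assert not looks_executable(b"<html><body>hi</body></html>")
```

With HTA content lacking any such signature, a filter is left matching on the extension or the (unregistered) media type, which is exactly the blacklisting problem described above.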
"The need for client side execution has come to an end given that we all have broadband Internet."
I'm not saying I agree with the wholesale move to client-side - what has effectively happened is that a general mechanism was let loose on the world instead of the standards designers thinking about the actual real-world problems - but all that standards work that was never done would still need to be done. And there would be a lot of it, with standards for many different application domains, and therefore a large attack surface.
And yet I'm looking at my .emacs and the 10 lines of elisp in it I added to defeat an incompatible and pointless UI change made in version 23. They certainly had other things to be getting on with, like performant Unicode support. (So did I, but I didn't get a choice.)
I regularly have to edit certain files in Apple's TextEdit instead of the emacs they ship, because emacs goes laggy if you include certain characters. My guess: it's using pairs of surrogates internally for those. The answer of course is that with the very old code base, in very old programming languages, that kind of architectural change isn't undertaken lightly, whereas UI bling is less risky.
I think the real difference here is that *one* person looks after vim, so no bikeshedding happens.
The main reason AM hangs around here in Australia is that several of those "talk" stations are run by the government broadcaster (the ABC, like the BBC) and have a major function as emergency broadcasters during natural disasters. In the Black Saturday bushfires of 2009, the 774kHz service broadcast from near Melbourne was by far the main information/emergency channel for people threatened by the fires. Mains power had failed in those areas (as is usual with fires), the emergency web sites collapsed under the load, and there was mobile phone tower congestion. But almost everyone had an AM radio, especially in their cars, and knew to tune in to that frequency.
In normal times, 774 is a talk station with some serious political coverage.
The 774 transmitter is very powerful and can be heard for hundreds of km. This is not so for DAB. AFAIK most or all of the DAB+ standard enhancements to DAB were designed in Australia for the express purpose of replacing 774 and its sister stations, and the standard was then gifted to the rest of the world.
However, typically cars here still have only AM and FM, so 774 can't be shut down yet, probably for a decade or more.