Re: You've got mail
With Eudora, it was literally something to crow over and you even got an audible cue when there was NO mail...
3354 publicly visible posts • joined 6 Sep 2013
Why did the auto/truck industry invent their own thing
CAN bus goes back to 1983 before Ethernet was an obvious universal standard. It also has some characteristics that make it more robust in an automotive environment.
The bus arbitration works rather differently. In Ethernet, the sending station listens to its transmitted frame to check whether it has been stomped on by another station transmitting simultaneously, and randomly backs off if so. With CAN, the potential transmitters are synchronised and the frame begins with a (unique) ID. If several nodes transmit simultaneously, they continue to emit ID bits until the transmitting node with the lowest ID is determined, and that node gets to send the rest of its message without everyone having to delay and retry. That gives you a priority mechanism - low IDs grab the bus ahead of high IDs - that's intrinsic to the bus and doesn't require back-offs or an intelligent switch to reorder packets.
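The arbitration can be sketched as a toy simulation (the node IDs below are made up; on the real wire CAN uses dominant-0/recessive-1 signalling, modelled here as a simple minimum):

```python
# Toy model of CAN bus arbitration: each node transmits its 11-bit ID
# most-significant bit first. A 0 bit is "dominant": if any node sends 0,
# the bus reads 0. A node that sends a recessive 1 but reads back a
# dominant 0 has lost arbitration and drops out; the lowest ID wins.

def arbitrate(node_ids, id_bits=11):
    contenders = set(node_ids)
    for bit in range(id_bits - 1, -1, -1):
        sent = {nid: (nid >> bit) & 1 for nid in contenders}
        bus = min(sent.values())          # dominant 0 wins the wire
        contenders = {nid for nid in contenders if sent[nid] == bus}
    (winner,) = contenders                # unique IDs => one node remains
    return winner

print(hex(arbitrate([0x2A5, 0x123, 0x5FF])))  # 0x123, the lowest ID
```

Note that the winner transmits continuously throughout: losing nodes simply stop driving the bus, so no bandwidth is wasted on collisions or retries.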
they're really not interested in open source
The Pi originally shipped with a huge binary blob containing Broadcom proprietary code. The size of that blob and the amount of closed-source code diminished over time.
The Pi team will have to speak for their own motives, but it seems to me a crucial part of the Pi's success has been in getting the price right and if that meant some foot-dragging on fully-open licensing that seems to me to be a perfectly reasonable trade-off.
I wonder if Nodding Doris knows that a different arm of her own government is intent on preventing people being able to challenge the malign effects of rogue algorithms.
But presumably those particular malign effects feed back into party coffers.
They won't of course - but it's not about effectiveness, it's about cost. This type of "marketing" is essentially free, so quantity displaces quality. It's the same as the customer "service" chatbot: they're essentially just a teleprinter with a loop of paper tape repeating "if it isn't on our website you're wasting your time", but they relentlessly deter contact with more costly minimum-wage phone-drones. That they're both touted as successful applications of AI tells you a lot about that subject, too.
And besides, who knows what marketing is effective anyway? It's always described as "crucial" and "scientific" when sales are booming but when they're falling there always seems to be some other explanation.
Mind you, having two dishwashers isn't unknown. An acquaintance of mine did that and stored the clean crockery in one, moving it to the other as it became dirty. When the second dishwasher was full, it ran a cleaning cycle and the process was repeated in reverse. Though I might foresee a rice-and-chessboard escalation in the marketing if you go down this route.
Except that there are often multiple cases. If you believe your IP is being infringed, a common remedy is to bring an action in a country where the disputed products are being sold to prevent their being imported. You might want to do that in a number of markets. If every time you try it a court in China issues you with a legal order to desist until the patent validity has been determined by a Chinese court then the fines will soon mount up.
And if they were not a deterrent, they would soon be raised to a level at which they would be.
I think it's important to realise that WASM and JVM (or .Net) are rather different things. WASM is a type of virtual machine code: it has a limited set of data types and operations that correspond to commonly-available hardware instructions. The JVM has access to the high-level type information and other metadata associated with the source code: this is how frameworks can wire up code to events and do dependency injection (for example). There's no reason you can't do that on top of WASM, but you can't do it with WASM alone.
That said, the LibreOffice demo is quite impressive - once it loads. A lot of the heavy lifting is done by Qt for WebAssembly and the general performance of the UI is considerably better than I was expecting.
It also raises some interesting questions about WASM - about multi-threading and its relationship to other browser activity, about rendering, and about the communication between WASM "components" and other DOM objects. And of course, about GUIs in general: desktop UIs that assume a high-precision pointer and a display that can represent the width of a standard sheet of paper are not a particularly good fit for today's devices.
It's not quite the same argument as claiming "View Source" makes you a hacker, but it comes close.
If the browser environment makes it possible for legitimately- and illegitimately-acquired access tokens to be combined to achieve unauthorised access, then you shouldn't be issuing tokens that can easily be acquired illegitimately.
As for "legitimate use cases", I find myself struggling to imagine what they might be for Facebook as a whole. I suppose it keeps Nick Clegg out of government but it would be difficult to justify on that basis alone.
I have an elderly Dell Mini-10, which scores high on portability, but only has 2GB of memory. It will run Linux adequately - including support for the TV tuner and MPEG-2 "accelerator" card and would probably run it better with an SSD. However, it struggles with much in the way of web browsing owing to the memory constraint, so Chrome is actively contra-indicated I suspect.
Depending on the age of your code and the version of VS you've been using, you may find that the result of an update is simply that VS no longer recognises the format of your elderly project files. I seem to recall that there was exactly one version of VS that was capable of converting between the original .Net Core project format and the subsequent one: the capability was dropped from future versions.
In my experience you have two options: either you update VS on a regular basis - which will normally ensure any format conversions will work, at the expense of potentially having to fix things in the code which have become deprecated - or stick with the version you used to create the project in the first place, accepting the support limitations.
Perhaps someone more knowledgeable could help me out here as the documentation suggests I take a 14 week introduction to Kubernetes course before I even start looking at Argo CD.
However, it looks to me like this has nothing to do with Kubernetes and much to do with Argo CD having its own authorisation system which is attempting to police different levels of access to files stored under a single set of credentials in a git repository. And the remainder of the problem being people blithely storing secrets in widely-accessible git repositories and relying on automatic encryption and decryption to ensure those secrets reach only the right people.
Neither of those sounds like a particularly good choice if you want to reduce your attack surface. I suppose it's inevitable, when so many packages of random origin find their way into the source code or the deployment apparatus of modern software, that the collective effects are poorly understood. But it does seem that CI/CD brings with it Continuous Fragility.
I've just changed mobile phone providers, partly because the new one claims to offer Wi-Fi calling. It's an MVNO and I'd already checked that Wi-Fi calling worked on my handset with the real network provider used by the MVNO (the real network being significantly more expensive). And, obviously, it mysteriously didn't work on the MVNO. After trying the "your handset isn't compatible" line, their customer support was commendably honest: "we're a new project and we haven't got everything set up yet". I'll stick with them on that basis alone.
As regards Mr. Dabb's difficulty of the week, then, obviously with the proviso that he has a valid subscription and is legally entitled to view the content - and that simply disabling JavaScript doesn't work - this may be more effective than a chat with customer services. At least they don't require you to put your brain in a jar and delete all other remnants of your material existence before proceeding.
Actually, there seem to have been first Gazette notices for compulsory strike-off in both August and December 2021, both of which were discontinued shortly afterwards. A notice was filed yesterday reporting the termination of a director appointment back in August of last year and a filing changing the registered office address is dated today. That seems more like there's an intention to keep it going.
Which is interesting, because if this is their website (it doesn't have a company registration number or address or phone number), all that phoning around netted them only 163 "happy clients".
It's interesting that Unix was a direct counterblast to Multics (hence the name) and in terms of having a "language alongside" they both developed on very similar paths - in the case of Multics the language was (roughly) PL/I. There's some very interesting stuff here about the bootstrapping processes for the compiler.
I've not tried writing serious Rust code in anger, but my initial impression is that it falls between two stools: on the one hand it's trying to have many of the features of LISP or JVM/.Net languages, but without the convenience that comes from the managed environment and garbage collection; on the other, it doesn't directly have the language constructs for dealing with hardware. That doesn't mean you can't extend its capability in either direction with macros and compiler directives buried in carefully-crafted crates - but at the cost of a very steep learning curve. I don't see that it makes simple things better or hard things easier.
The laptop I'm using at this moment is probably inferior to current Chromebooks but it's quite adequate for routine software development, circuit design, communications, administration and even some light video editing. However, it struggles once you start opening a lot of web browser tabs.
I don't have a Chromebook, but I suspect this may be the flaw: running the front end of your applications in JavaScript on top of a complicated and unwieldy document object model that wasn't really designed for UIs means that in reality you need better hardware than if you're primarily running your applications natively. I suspect the lower margins for Chromebooks are inevitable because of this.
And that's before you start weighing the limitations and inflexibility against whatever the advantages are supposed to be.
These days, so brand-consultants* tell me, you need multiple active social media accounts: face-tic, twitgram, instaToc-H, spacereunited and so forth to cover the main demographics and an IRC channel for the vinyl enthusiasts. All of which expect to gorge constantly upon fresh new compelling content, which means you have to get a cat as well, or spend a fortune on stock photos.
Printing a few leaflets and shoving them through local doors is a doddle by comparison. And likely rather more effective.
*Sheila and George from my formerly-local coffee shop.
The Hercules simulator has been around for a while (though it simulates S/370 and later, not S/360). However, since S/370 is a superset of S/360, it will boot MVT as well as MVS, along with a selection of other IBM OSs.
There's a list of available operating systems here and a list of the compilers for MVS, MVT and others here.
The basic problem here is that there is a historic all-or-nothing privilege system, and the mitigations for that (like polkit) involve running significant chunks of code in an elevated-privilege context in which any error is potentially very serious.
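The all-or-nothing nature of the classic model is visible in the Unix API itself: a process either has effective uid 0 (and can do essentially anything) or it doesn't, and the only safe move once root is no longer needed is an irreversible drop. A minimal sketch (the 65534 "nobody" IDs are conventional and vary by distro):

```python
import os

# The traditional Unix privilege model is binary: euid 0 or not.
# Helpers like polkit exist precisely because the classic design has
# no finer-grained middle ground.

def is_privileged():
    return os.geteuid() == 0

if is_privileged():
    # Drop group first (setgid would be refused after setuid), then uid.
    # For the process, the drop is one-way.
    os.setgid(65534)   # conventional "nogroup"/"nobody" gid
    os.setuid(65534)   # conventional "nobody" uid

print("privileged" if is_privileged() else "unprivileged")
```

Anything that needs to hand out *partial* privilege - "this user may reboot but not mount" - has to be bolted on above this layer, which is where the elevated helper daemons come in.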
A long-standing bug with similar consequences was found in sudo.
I don't think RedHat is responsible for a fundamental design error made in the 1970s - or possibly earlier.
I was getting something approaching this 15 years ago working as a developer (albeit senior) in the public sector in London.
However, think how much you'll be able to earn when you quit the job after twelve months and get to pick and choose which of the exciting portfolio of companies makes you the best offer.
How to deal with people who don't have a "traditional" full-time job seems to be something that other governments struggle with too and it's interesting to observe their different motivations. In the UK the focus seems to be on the recovery of tax from the contractor and there is seeming indifference to the lack of employment rights that results. In Portugal, the motivation seems to be to prevent temporary contractors undercutting full-time employees and undermining their rights. One consequence is that if at the end of the year it transpires you've made more than 80% of your income from one client, that client gets lumbered with a social security contribution on your behalf; there are also significant restrictions on temporary and part-time contracts.
I'd be interested to know if there are examples of good practice. There seems to be an increasing desire for at least a proportion of people to work more flexibly and it's something employment law/taxation ought to be able to accommodate rather than view as problematic.
I wouldn't normally defend the use of Excel for anything that you couldn't otherwise do on squared paper*, but there is an argument to be made in this case.
Often, the biggest issue with exercises of this kind is determining whether the data you want actually exists, whether the responsible people are capable of collecting it and whether you can learn anything useful from the data when they do.
It may well be worth finding the cheapest possible solution for collecting trial data before you build an elaborate system to collect information that doesn't exist or isn't comparable across different locations.
*Though I suspect some may struggle even with that.
Actually, it is, in the sense that IPv4 can be carried through an IPv6 network without any loss of information. The optimistic assumption was that the backbone would convert to IPv6 first, and then the hosts - still using IPv4 addresses - before the IPv4 address space ran out. At that point, IPv6 addresses could be assigned to new hosts.
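One concrete illustration of the "no loss of information" point: every IPv4 address has a well-defined IPv4-mapped IPv6 form (::ffff:a.b.c.d), so IPv4 endpoints embed losslessly in the IPv6 address space. Python's standard ipaddress module shows the round trip:

```python
import ipaddress

# Every IPv4 address embeds losslessly in IPv6 as an IPv4-mapped
# address, ::ffff:a.b.c.d (RFC 4291).
v4 = ipaddress.IPv4Address("192.0.2.33")
v6 = ipaddress.IPv6Address("::ffff:192.0.2.33")

print(v6.ipv4_mapped)        # recovers the original IPv4 address
assert v6.ipv4_mapped == v4  # round-trip: nothing was lost
```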
The flaw in this reasoning is that the protocol designers didn't operate the networks and hadn't accounted for the inertia resulting from not wanting to invest in change when it wasn't immediately necessary.
I was never a particular fan of IPv6 technically - and it's increasingly behind the times - but it works. I was always concerned about the likelihood of the transition not happening in line with its designers' optimism and that considerably more needed to be done on the presumption of long-term coexistence. The trouble is we're now running out of kludges, especially for things like push notifications and always-on devices, and they've become significantly more painful than simply moving to IPv6, for all its problematic legacy.
That is most people's use cases
I VPN to my home network quite a bit, but it requires DDNS (an application-layer kludge) and only works because my ISP currently doesn't share IP addresses between customers. They won't be able to do that forever. Much as you won't be able to get a static IPv4 address at a reasonable cost in perpetuity.
I know someone whose wired broadband has been out of action for some weeks thanks to Openreach and who can't replicate their current VPN connectivity with their temporary wireless connection for this precise reason. The continuing contortions to avoid IPv6 are slowly but steadily undermining perfectly reasonable use cases.
As others have said, the problem for an IPv4 end system is that if there are more than 2**32 hosts in the Internet there is no way it can distinguish between all of them with only a 32 bit field. Backwards compatibility at the network layer is simply physically impossible. People who argue otherwise will often suggest packing additional bits into optional or unused fields, but that's exactly the same solution as IPv6, just with the extra bits in a different place - it doesn't alter the fundamental problem.
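The pigeonhole arithmetic is worth spelling out, along with why "pack extra bits into unused fields" is just IPv6 by another name:

```python
# A 32-bit address field can name at most 2**32 distinct endpoints,
# before reserved and special-purpose ranges are even subtracted.
ipv4_addresses = 2 ** 32
print(ipv4_addresses)                       # 4294967296

# Any scheme that smuggles n extra bits into "unused" header fields
# just builds a (32 + n)-bit address - the same move IPv6 makes,
# with the extra bits in a different place.
extra_bits = 96
print(2 ** (32 + extra_bits) == 2 ** 128)   # True: that's IPv6's size
```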
There are possible multi-layer solutions. For example, an IPv4 host could look up a domain name in a specially-crafted DNS server. If the domain name had only an IPv6 address, the server could allocate a temporary IPv4 address to represent it, inform the network layer of the mapping and have the ingress/egress point perform a translation. From a technical point of view, it could well have been worth doing this 10 years ago - or even 5 - when there were fewer IPv6 stacks available. But it would still leave the control in the hands of the network provider, as they'd need to provide both the DNS and the boundary translation for this to work.
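The mapping-DNS part of that scheme can be sketched as follows (the pool range and record names are purely illustrative; the deployed relatives of this idea are DNS64/NAT64):

```python
import ipaddress

# Sketch of a mapping DNS: when a name resolves only to an IPv6
# address, allocate a temporary IPv4 stand-in from a pool and remember
# the mapping so a border translator can rewrite packets both ways.

class MappingResolver:
    def __init__(self, pool="198.18.0.0/15"):    # illustrative pool only
        self._free = (str(a) for a in ipaddress.ip_network(pool).hosts())
        self.v4_to_v6 = {}    # handed to the ingress/egress translator

    def resolve(self, name, records):
        """Return an IPv4 address for `name`, mapping if necessary."""
        if "A" in records[name]:                 # native IPv4: pass through
            return records[name]["A"]
        v4 = next(self._free)                    # temporary stand-in
        self.v4_to_v6[v4] = records[name]["AAAA"]
        return v4

records = {
    "legacy.example": {"A": "192.0.2.10"},
    "v6only.example": {"AAAA": "2001:db8::1"},
}
r = MappingResolver()
print(r.resolve("legacy.example", records))   # 192.0.2.10 (untouched)
mapped = r.resolve("v6only.example", records)
print(r.v4_to_v6[mapped])                     # 2001:db8::1
```

The catch, as noted above, is that the resolver and the translator must sit with the same party at the network boundary - which is exactly why it leaves control with the carrier.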
However, there's no technical value in doing that now as the only thing preventing IPv6 deployment is carrier inertia. The end systems are pretty much all ready.
I do think there is a potential consumer issue, though. Most of the support documentation for years has focussed on users visiting "192.168.1.1" or variants thereof to manage their router and I suspect carriers fear a tsunami of support calls from funny-looking addresses. However, it's not as if local IPv4 stacks will stop working and the only real way to evaluate the support demand is to start doing it.