* Posts by bazza

2021 posts • joined 23 Apr 2008

Bosch and Daimler jump in together on driverless vehicle tech

bazza
Silver badge

I'm predicting that a million-gallon vat of custard will suddenly upend itself all over them.

0
0

Australia joins the 'decrypt it or we'll legislate' club

bazza
Silver badge

Re: @ bazza

@Tiggity,

That's all fine and good, but your bank is using IT and Internet connections between its business centres to conduct your banking business on your behalf, even if you never interact with it except in the branch. I strongly support your way of using a bank, but its security is as illusory as https and passwords are for Internet banking. That is to say, its security is pretty good, but not completely guaranteed.

The one definite plus point is that you're not using a computer for banking, so you yourself are not being hacked. It's often a user's PC / mobile being stuffed full of password-sniffing trojans that lies behind e-banking frauds.

1
5
bazza
Silver badge

Re: @ bazza

@Evil Auditor,

Partially, I do. I know my account manager reasonably well and - for now, still - I identify her by voice and vice versa.

Oh to have a bank where one can recognise the staff, instead of some vast call centre... Er, have you heard of Rory Bremner? Used to phone up politicians whilst impersonating another politician, for comedic effect. Dangerous guy!

If you come up with a feasible network architecture that is inherently secure: I'm game! I doubt that it will do without encryption though. Encryption is much older than our data networks; the objectives remain the same, i.e. privacy and non-repudiation.

I'm afraid I can't beat the phone networks, and they rely entirely on control of connection points and of wires for security. Circuit switching is just a way to route through known locations, which is a plus too. But the net effect is that the phone network is less of a free-for-all where baddies roam (apart from the effing PPI lot). It's worth noting that the only reason we kinda trust the phone network is that it is heavily regulated and in effect policed, in a way that the Internet just isn't.

At the end of the day there is no good solution to the identity problem. We have to meet the other person to know for sure who they are. Encryption algorithms are valueless without solving that identity problem well, and we haven't. Also, what we have is only one-way; you don't need a certificate to be a Facebook user...

0
13
bazza
Silver badge

Re: @ bazza

@Lost all faith...

wow Bazza, you really do live with the Unicorns.

Well, not quite!

So I should just send my bank details and card details to Amazon on the back of a postcard?

Er, your bank posted them to you in the first place. Flimsy things, envelopes, very easily opened.

Or my corporate work should all be done over ftp with no password?

If that server you have no control over has been poorly set up and someone else is already inside it, your password is of zero value.

Maybe I should leave my phone unlocked, with all my personal details free to view, should I ever leave it somewhere by accident.

And if you use Android (which seems to be easily rooted by malware) and possibly soon iPhones (whose boot loader source code has leaked and may suffer a similar TITSUP), what's the difference between locked and unlocked?

Or in your world, should we just accept what our governments want to do to us and never stick two fingers up and say Fuck you, I'm not putting up with this.

Feel free to do that, but I fear they're going to do it anyway. Most voters couldn't care less about that, but do care about crime figures, fraud, online bullying, etc.

This generation has already gotten soft, with slacktivision being the new force of "change".

Press "Like" if you want the world to change for the better.

Finally, something to agree on.

1
27
bazza
Silver badge

Re: @ bazza

@Evil Auditor,

What about a secure communications infrastructure, one where my hypothetical millions in the bank account are not put at risk? Yes, we are talking about encryption again. No matter what kind of "secure" network architecture you use, I wouldn't trust the nodes in between me and my bank.

Well, you trust the phone network when you call up your bank, don't you? The world would be a whole lot better if we could trust the Internet in the same way. That is certainly wishful thinking on my part, but we have to recognise that if the Internet's network were as trusted as that, then a lot of the political pressures on services like Facebook, etc. would go away.

0
33
bazza
Silver badge

Re: Sauce for the goose...

@Adam 1,

And one doesn't have to wonder too hard to realise that the baddies will continue to use the existing strong encryption to communicate with each other or to lock up your files and demand a ransom. Meanwhile, your defences against this same scum are gone. You first.

I rather think you're missing the point of my reference to circuit switched networks.

Using the Internet as it is today without encryption is indeed security suicide. It's a hostile place to be. My point is that that hostility is itself something that should not exist so easily.

Every time we (as a profession) add some encrypted this, certificated that, etc. in an attempt to make the Internet "safe", we screw it up to the point where it's not working in a useful way. Look at https and the system of certificate authorities that "secures" it. It doesn't secure it at all. There's a market for certificates, and some of the vendors aren't particularly choosy who they sell certificates to.

The whole point of certificated https is to establish certainty as to who the other end point really is. Well, another way of doing that is to have a network where physical endpoint identity is guaranteed by the network provider. You dial someone's phone number, you know whose phone is ringing.
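As a reminder of how thin that identity guarantee currently is, here's a minimal sketch in Go (the hostname is just a placeholder, not anything from the article): the whole of the certificate machinery boils down to the handshake checking that the presented certificate chains to a trusted CA and matches the name we dialled.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// The handshake verifies that the server's certificate chains to a CA in
	// the system trust store and that it matches the name we dialled. That
	// check is, in effect, the Internet's whole answer to "whose phone is
	// ringing?". The hostname below is only a placeholder.
	conn, err := tls.Dial("tcp", "example.com:443", nil)
	if err != nil {
		log.Fatal(err) // untrusted CA, wrong name, expired certificate, etc.
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("the CA system says we reached:", leaf.Subject.CommonName)
}
```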

There's many an OS or browser or server that salts and hashes passwords, but what's the bleeding point when 1) users pick daft passwords, and 2) tons of software flaws mean that passwords get exposed in other ways? We've tried other means such as biometrics, but they just do not work very well in the first place. Anyway, they're no better than writing down a complicated password on a piece of paper and pinning it to your monitor. And you can't change your biometrics without a lot of surgery.
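To be fair, the salting-and-hashing part is the easy bit; it's everything around it that leaks. A minimal sketch using Go's golang.org/x/crypto/bcrypt package (which bakes a random salt into the stored hash) looks roughly like this:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// A random salt is generated and embedded in the hash, so two users with
	// the same daft password still end up with different stored values.
	hash, err := bcrypt.GenerateFromPassword([]byte("password1"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Println("stored hash:", string(hash))

	// Verification reads the salt and cost back out of the stored hash.
	ok := bcrypt.CompareHashAndPassword(hash, []byte("password1")) == nil
	fmt.Println("login accepted:", ok)

	// None of which helps if the password was "password1" to begin with,
	// or if a trojan on the user's PC grabbed it before it was ever hashed.
}
```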

Nasties like ransomware continue to ruin many an unwary user's day, despite the many layers of protection in browsers and OSes that themselves use encryption.

My reference to circuit-switched networks is that you know more about how your traffic gets from A to B and exactly who the intervening switches belong to. That also makes the network operators keener to establish user identity before hooking you up (you can't just hook yourself up to a phone line and get a service all by yourself). So you know more about where and from whom traffic is coming (caller ID on a phone network is a useful way of blocking that annoying aunt who phones all the time). Network traits like that are a useful thing for keeping baddies at bay.

These are traits that the Internet just does not have, and I think that that is an increasingly bad thing. I don't think the sticking plasters we patch on top are doing a good enough job. There is a risk that the global Internet will get fragmented by concerned politicians (the really malicious ones have done it already), and the only way of heading that off is to clean up the network and make it hard for the baddies to use it anonymously (like they can now).

That's a massive and unachievable job, but unless it gets done we may have to live with some significant consequences for the network's design, operation and reach.

0
33
bazza
Silver badge

Re: @ bazza

No.

And note that I'm merely observing that a lot of the tech we get given is pretty useless really, and warning that there are the beginnings of a trend towards Internet fragmentation and control that we may not like, and which the major tech companies are doing absolutely nothing substantive to discourage.

Politicians respond to costs, crime figures and votes. They absolutely will pass laws if they see Internet bullying, on-line paedophilia, terrorism, etc. becoming an electoral issue. When the Madrid train bombings happened, the sitting government was widely blamed by the population for not having done enough to prevent it. They lost the general election that followed soon after.

Is it any surprise that other governments look at that event, look at what's going on on-line, and start making noises? If you are surprised by that, or doubt it somehow, then you don't know what a politician is or how they get and keep their jobs.

Politicians are also quite good at recognising what the average voter will vote on (that's different to what they say they want). Unchecked criminal activity (including on-line nasties) is the surest way imaginable to be kicked out come the next election. On the other hand, the quite small percentage of the population that is actually going to make an election-time noise about online monitoring by law enforcement agencies is, electorally speaking, ignorable. If you think otherwise then I suggest you try it out for yourself by standing for election on the issue.

That's what the large tech companies don't seem to understand. Compared to the interests of a politician in being seen to be doing something effective about law and order, the tech companies' business models and their "we're secure and private" marketing are of zero concern to a sitting government. It's only in the USA (where lobbying is such a corrosive force) that the tech companies can get political leverage. And ask yourself: why do the tech companies need to lobby so much?

Add to the mix the fact that companies like Facebook and Twitter are seemingly quite content to be conduits by which the democratic process is externally influenced, and you have the perfect set of reasons why governments will change and pass laws about such things.

If you don't like that, try making it an electoral issue and see how your fellow countrymen vote.

Me? I'm pretty neutral on the matter. Encryption is occasionally useful, often useless, and definitely dangerous. A well-set-up e-banking website is useful. Encrypting passwords is pretty useless given the myriad other software and hardware flaws that get used to leak credentials. Tor likes to present itself as being a force for good. Given the sort of people (paedos, drug dealers, etc) who actually seem to use it in places (e.g. Western Europe) where you don't otherwise need it, and that you can't use it in places where you might need it (China), I doubt that Tor has much net social value.

If we had a network that didn't require encrypted communications for reasonable security, then a lot of these political pressures, and much of this debate, would simply go away.

2
55
bazza
Silver badge

Re: Sauce for the goose...

Whilst many see encryption as being a tool with which to defend against baddies, one has to wonder whether we'd be better off without it. It's not actually helping those who have been on the wrong end of a ransomware attack, or a credentials loss. And in a week which has seen yet another Tor-using paedo jailed for a few decades, it is undeniable that encryption is just as powerful a tool for baddies as it can (with difficulty) be a tool for goodies.

We only have encrypted network communications because we don't trust the transport to be free of eavesdropping. Anyone using Facebook, e-banking, an IM service, etc., is perfectly happy with the recipient reading data; https is not being used to defend data at rest. In a packet-switched network, e.g. the Internet, one has no idea where data flows, so encryption is desirable. I don't want the Russians seeing my communications with my bank.

With a circuit-switched network, one does know where data flows. Perhaps that's a desirable trait.

Skewed Debate

One of the problems with the whole debate is that Americans generally loathe and distrust their own government in a way that all other civilised societies don't. Add the commercial interests of MegaCorp Inc into the mix and you end up with technology solving problems that, on the whole, the man on the street in the UK isn't really worrying about. At the same time MegaCorp is also creating problems that the man on the street does care about, and will vote accordingly.

Add in differences in laws, customs, and social expectations on policing, and you have to question whether the Internet is going to remain one network, or whether it will start getting broken apart at national boundaries, just as China has already done.

Now if Facebook, Google, Apple, Amazon, Twitter, MS, etc. want to avoid that and preserve their business models, then they're going to have to give governments a reason not to want to put up national firewalls. But they're doing the exact opposite. It's long-term commercial suicide.

1
95

Qualcomm opens maw, prepares to swallow Dutch chipmaker NXP

bazza
Silver badge

Re: Reduced Diversity ???

When NXP bought Freescale, they got what had once been Motorola's successful line of PowerPC based CPUs. In certain key product areas these are still heavily used, including by a customer who has their own way of insisting on continuity of supply: the US Government.

PowerPC is widely used in a lot of quite important military systems (radars, communications systems, etc), and Uncle Sam doesn't want to have to completely re-engineer things simply because some corporate wonk has decided to prune the product list. I know it's used in the F35, and it would be enormously expensive to re-do those bits of that already very expensive and very late jet.

When Apple bought PA-Semi all those years ago, Apple found themselves on the wrong end of an un-ignorable instruction from the US DOD, and had to keep the PA-Semi PowerPC line running for quite some time afterwards. Which was a pity for Apple because they didn't want the PowerPC part, they wanted the staff, but failed to retain them. So it didn't work out for Apple... Qualcomm most likely will receive a similar missive.

Incidentally, PowerPC was popular because it was pretty quick (Altivec was far better than anything Intel did at the time), and designed for real time systems (no variable clock frequencies, no Management Engine cocking about with power modes, etc). Had Freescale not screwed up their 12-core roll out, PowerPC might still have been a decent contender to this day.

4
1

Oracle open-sources DTrace under the GPL

bazza
Silver badge

Re: They should relicense...

Well, so long as they don't stop releasing it under other licenses. We don't want FreeBSD and everything else being screwed simply because of a switch to GPL2.

6
1
bazza
Silver badge

Re: Open source tools

It's been in Oracle Linux for years.

The problem is the license and Oracle's lawyers.

Strictly speaking it's not Oracle's lawyers that are the problem. It's GPL2 that has the problem accepting other licenses. Oracle (Sun) can license their code in whichever way they want to; it's up to the rest of us to respect that.

5
3

Opportunity knocked? Rover survives Martian winter, may not survive budget cuts

bazza
Silver badge

Re: Give it away.

One of the problems with that idea is, I fear, access to the deep space communications network. You can't just point a 1-metre dish in the general direction of Mars and get an IP connection going. You need access to NASA's network of dishes dotted around Earth. And they're pretty busy, and expensive.

However, my view is that whilst there's a prospect of the Martian marauder still doing useful science, it would be a hideous waste of money not to use it. Arguably these two trundlers, and Curiosity too, represent astonishing science (and public) value for US taxpayer dollars; they've delivered far, far more than anyone ever bargained for. The original project Balance of Investment report is utter toast; the costs account is so heavily outweighed by the delivered benefits account that those responsible should get medals.

Really the answer should be, build more deep space communications dishes, and get even more value out of the missions that are running, and out of other future long lived missions. There is an argument that international collaboration on that network is the way forward.

38
1

A computer file system shouldn't lose data, right? Tell that to Apple

bazza
Silver badge

Re: Cutting back on features? Not exactly.

Standard way to get the best out of an IT department: throw a 5kg bar of premium chocolate in through the door, slam it shut fast and barricade it, and wait for the howls and thumping sounds to die down. Carefully open the door to make your request, being sure to ignore the general dishevelment of the staff and the broken furniture. Enjoy the benefit of first-class service for the rest of the week.

15
1

Hate to ruin your day, but... Boffins cook up fresh Meltdown, Spectre CPU design flaw exploits

bazza
Silver badge

Re: Time for NUMA, Embrace your Inner CSP

Except that we have a very, very long & sad history of the same class of bug popping up over & over. Ever hear about buffer overflow? How is that even still a thing? And yet, we continue to see them.

You are right in what you are saying. It's what you are not saying that bugs me.

Ah, I think I see what you mean (apologies if not). Yes, timing is an issue.

CSP is quite interesting because a read / write across a channel is synchronous, an execution rendezvous. The sending thread blocks until the receiving thread has received, so when the transfer completes each knows whereabouts in execution the other has got to. That's quite different to the Actor Model; stuff gets buffered up in comms link buffers, and that opens up a whole range of possible timing bugs.

CSP, by being synchronous, largely gets rid of the scope for timing bugs, leaving you with either the certainty that you have written a pile of pooh (everything ends up deadlocked waiting for everything else), or the certainty that, if it runs at all, you haven't got it wrong. There's no grey in between. I've had both experiences...

However, nothing electronic is instantaneous; even in a CSP hardware environment it takes a finite amount of time for signals to propagate, and no two processes in CSP are perfectly synchronised, so there are some tiny holes in the armour. The software constructs may think they're synchronised ("the transfer has completed"), but actually they're not quite. But it is good enough for the needs of most real time applications.

One advantage of this approach is that it doesn't let one trade latency for capacity. With actor model systems data can be off-loaded into the transport (where it gets buffered). Therefore a sender can carry on, relying on the transport to hold the data until the receiver takes it. That's great right up until someone notices the latency varying, and until the transport runs out of buffer space. With CSP, because everything is synchronously transferred, an insufficient amount of compute resource late on in one's processing chain shows up immediately at the very front; there is no hiding that lack of compute resource by temporarily stashing data in the transport. This is excellent in real time systems, because throughput and latency testing is conclusive, not simply "promising".
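For anyone who hasn't met this in practice, here's a minimal Go sketch (Go's unbuffered channels are the nearest mainstream thing to a CSP rendezvous, while a buffered channel behaves more like an Actor Model mailbox); the synchronous sends are immediately paced by the slow consumer, whereas the buffered sends hide the shortfall until the buffer fills:

```go
package main

import (
	"fmt"
	"time"
)

// slowConsumer pretends to be an under-resourced stage late in the chain.
func slowConsumer(in chan int) {
	for range in {
		time.Sleep(50 * time.Millisecond)
	}
}

func main() {
	rendezvous := make(chan int)    // CSP-style: sender blocks until the receiver takes the value
	mailbox := make(chan int, 1024) // Actor-style: sends complete while buffer space remains

	go slowConsumer(rendezvous)
	go slowConsumer(mailbox)

	start := time.Now()
	for i := 0; i < 8; i++ {
		rendezvous <- i // back-pressure is felt here, at the front of the chain
	}
	fmt.Println("synchronous sends took:", time.Since(start))

	start = time.Now()
	for i := 0; i < 8; i++ {
		mailbox <- i // returns immediately; the delay is hidden in the buffer
	}
	fmt.Println("buffered sends took:   ", time.Since(start))

	time.Sleep(time.Second) // crude: let the consumers drain before exiting (demo only)
}
```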

2
1
bazza
Silver badge

Re: Time for NUMA, Embrace your Inner CSP

It's fascinating how normal UNIX commands would be a good fit for CSP architectures.

Very nearly! However, piping UNIX commands together is closer to the Actor Model than to CSP; the pipes really are asynchronous IPC pipes, not the synchronous channels that CSP has. Also there are some limits on how commands can be plumbed together; I don't think you can do anything circular.
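To make the comparison concrete, here's a hedged Go sketch of a `cat | grep | wc -l` style pipeline built from goroutines; the plumbing looks just like shell pipes, but because the channels are unbuffered every hand-off is a CSP-style rendezvous rather than a kernel pipe buffer:

```go
package main

import (
	"fmt"
	"strings"
)

// source plays the part of `cat`, emitting lines into a channel.
func source(lines []string, out chan<- string) {
	for _, l := range lines {
		out <- l
	}
	close(out)
}

// filter plays the part of `grep`, passing on only matching lines.
func filter(substr string, in <-chan string, out chan<- string) {
	for l := range in {
		if strings.Contains(l, substr) {
			out <- l
		}
	}
	close(out)
}

// count plays the part of `wc -l`.
func count(in <-chan string, done chan<- int) {
	n := 0
	for range in {
		n++
	}
	done <- n
}

func main() {
	lines := []string{"bazza", "Silver badge", "Re: @ bazza", "Mushroom"}

	c1 := make(chan string) // unbuffered: every stage hands off synchronously
	c2 := make(chan string)
	done := make(chan int)

	go source(lines, c1)
	go filter("bazza", c1, c2)
	go count(c2, done)

	fmt.Println("matching lines:", <-done)
}
```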

The irony of IPC pipes is that what they provide is an asynchronous byte transport, but they're implemented in the kernel using semaphores and memory shared between cores. The further irony is that that shared memory is faked; it's an SMP construct that is synthesised on top of a NUMA architecture. That in turn is knitted together by high-speed serial links (QPI, Hypertransport), and these links are asynchronous byte transports! Grrrrr!

The one hope is that microkernel OSes come to predominate, with bits of the OS joined up using IPC pipes instead of shared memory. That opens up the opportunity for the hardware designers to think more seriously about dropping SMP. It may happen; even Linux, courtesy of Google's Project Treble, is beginning to head that way.

1
1
bazza
Silver badge

Re: Time for NUMA, Embrace your Inner CSP

Thanks for the CSP references, best of luck with them too, though the marketdroids and 'security researchers' won't thank you for them. Maybe there's a DevoPS angle on them somewhere?

No worries. I've no idea about what security researchers would think, etc. Adopting CSP wholesale is pretty much a throw-everything-away-and-start again thing, so if there is a DevOPS angle it's a long way in the future!

Personally speaking I think the software world missed a huge opportunity to "get this right" at the beginning of the 1990s when Inmos Transputers (and other things like them) looked like the only option for faster computers. Then Intel cracked the clock frequency problem (40MHz, 66MHz, 100MHz, topping out at 4GHz) and suddenly the world didn't need multi-processing. Single thread performance was enough.

It's only in more recent times that multi-core CPUs have become necessary to "improve performance", but by then all our software (OSes, applications) had been written around SMP. Too late.

As far as I know, modern RISC processors have tended to be built around a memory model which does not require external memory to appear consistent across all processes at all times. So if some code wants to know that its view of memory is consistent with what every other process/processor sees, it has to take explicit action to make it happen.

Indeed, that is what memory fences are: op codes that explicitly allow software to tell the hardware to "sort its coherency out before doing anything else". Rarely does one call these oneself; they're normally included in other things like sem_post() and sem_wait(), so they get called for you. The problem seems to be that the CPUs will have a go at doing the work anyway, so that when a fence is reached in the program flow it takes less time to complete. And this is what has been exploited.
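A small Go illustration of that "they get called for you" point (assuming Go's memory model, where a channel close/receive pair provides the ordering): the programmer never writes a fence, but the synchronisation primitive is what makes the plain, unsynchronised writes safely visible to the other thread; the barrier work is buried inside the runtime.

```go
package main

import "fmt"

var payload []int // plain shared data, no locks, no explicit fences

func main() {
	ready := make(chan struct{})

	go func() {
		payload = []int{1, 2, 3} // ordinary writes...
		close(ready)             // ...the fence/barrier work happens inside here
	}()

	<-ready // ...and inside here: the receive happens-after the close
	fmt.Println("elements visible:", len(payload))
}
```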

Where can readers find out more about this particular historical topic and its current relevance?

A lot of it is pre-Internet, so there weren't vast repositories online to be preserved to the current day! The Meiko Computing Surface was a supercomputer based on Transputers - f**k-loads of them in a single machine. We used to have one of these at work - it had some very cool super-fast ray tracing demos (pretty good for 1990). I heard someone once used one of these to brute-force the analogue scrambling / encryption used by Sky TV back then, in real time.

The biggest barrier to adoption faced by the Transputer was development tooling; the compiler was OK, but machine config was awkward and debugging was diabolically bad. Like, really bad. Ok, it was a very difficult problem for Inmos to solve back then, but even so it was pretty horrid.

I think that this tainted the whole idea of multi-processing as a way forward. Debugging in Borland C was a complete breeze by comparison. If you wanted to get something to market fast, you didn't write it multi-thread back in those days.

However, debugging a multi-threaded system is actually very easy with the right tooling, but there's simply not a lot of that around. A lot of modern debuggers are still rubbish at this. The best I've ever seen was the Solaris version of the VxWorks development tools from WindRiver. These let you have a debugger session open per thread (which is really, truly nice), instead of one debugger handling all threads (which is always just plain awkward). WindRiver tossed this away when they moved their toolchain over to Windows :-(

There was a French OS called (really scraping the memory barrel here) Coral; this was a distributed OS where different bits of it ran on different Motorola 68000 CPUs. I also recall seeing demos of QNX a loooong time ago where different bits of it were running on different computers on a network (IPC was used to join up parts of the OS, and these could just as easily be network connections).

The current relevance is that languages like Scala, Go and Rust all have CSP implementations in them. CSP can be done in modern languages on modern platforms using language fundamentals instead of an add-on library. In principle, one attraction of CSP is system scalability; your software architecture doesn't change if you take your threads and scatter them across a computer network instead of hosting them all on one computer. Links are just links. That's a very modern concept.

Unfortunately, AFAIK, Scala's, Go's and Rust's CSP channels are all stuck in-process; they aren't abstract things that can be implemented as either a tcp socket, an ipc pipe, or an in-process exchange (corrections welcome from Go, Scala and Rust aficionados). I think Erlang CSP channels do cross networks. Erlang even includes an ASN.1 facility, which is also very ancient but super-useful for robust interfaces.
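As a rough illustration of what that abstraction can look like today (the helper names below are made up for the example, and error handling is omitted): the consumer is written against an ordinary Go channel, and a small adapter goroutine decides whether the values originate in-process or arrive over a TCP socket. It works, but the bridging is something you write yourself rather than something the channel gives you.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// consume neither knows nor cares where its channel is fed from.
func consume(in <-chan string) {
	for msg := range in {
		fmt.Println("got:", msg)
	}
}

// feedFromTCP is a made-up adapter: it turns newline-delimited messages
// arriving on a socket into sends on an ordinary Go channel.
func feedFromTCP(conn net.Conn, out chan<- string) {
	sc := bufio.NewScanner(conn)
	for sc.Scan() {
		out <- sc.Text()
	}
	close(out)
}

func main() {
	// A toy "remote" end on localhost so the sketch is self-contained.
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	go func() {
		c, _ := ln.Accept()
		fmt.Fprintln(c, "hello from the far side of the link")
		c.Close()
	}()

	conn, _ := net.Dial("tcp", ln.Addr().String())
	ch := make(chan string)
	go feedFromTCP(conn, ch)
	consume(ch) // identical consumer code would work with an in-process producer
}
```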

The closest we get to true scalability is ZeroMQ and NanoMsg; these allow you to very readily switch between joining threads up with ipc, tcp, in-process exchanges, or combinations of all of those. Redeployment across a network is pretty trivial, and they're blindingly fast (which is why I've not otherwise mentioned RabbitMQ; its broker is a bottleneck, so it doesn't scale quite as well).

I say closest - ZeroMQ and NanoMsg are Actor Model systems (asynchronous). This is fine, but it has some pitfalls that have to be carefully avoided, because they can be lurking, hidden, waiting to pounce years down the line. In contrast CSP (which has the same pitfalls) thrusts the consequences of one's miserable architectural mistakes right in one's face the very first time you run your system during development. Perfect - bug found, can be fixed.

There's even a process calculus (a specialised algebra) that one can use to analyse the theoretical behaviour of a CSP system. This occasionally gets rolled out by those wishing to have a good proof of their system design before they write it.

Not bad for a 1970s computing science idea!

OpenMPI is also pretty good for super-computer applications, but is more focused on maths problems instead of just being a byte transport.

1
1
bazza
Silver badge

Re: Time for NUMA, Embrace your Inner CSP

Yes, in NUMA, every application is required to figure out how to do this, as opposed to having hardware do it.

But NUMA systems are still going to be vulnerable to this sort of thing, absent proactive steps taken by the design team. You have to have some way to manage synchronization. Timing will always matter.

The nice thing about a NUMA system is that if the software gets it wrong, it can be fixed in software. Plus, faults in software are going to be fairly specific to that software. The problem with having hardware second-guess what software might do is that it does it the same way no matter what, and if it gets it wrong (as has been reported in this article) it's a machine fault that transcends software and cannot be easily fixed. Oops!

1
1
bazza
Silver badge

Re: Time for NUMA, Embrace your Inner CSP

You're missing the point. Naively written anything will overwhelm the underlying hardware. There's nothing magical about shared memory in an SMP-on-top-of-NUMA system that means that poorly written code won't run into the limits of the QPI / Hypertransport links between CPUs.

Synthesising SMP on top of NUMA requires a lot of traffic to flow over these links to achieve cache coherency. Ditch the SMP, and you've also ditched the cache coherency traffic on these links, meaning that there's more link time available for other traffic (such as data transfers for a CSP system). And you've got rid of a whole class of hardware flaws revealed in the article, and you have a faster system. What's not to like?

From what I hear from my local Rust enthusiast, Rust's control of memory ownership boils down to being the same as CSP. Certainly, Rust has the same concept of synchronous channels that CSP has.

One of the good things about CSP is that it makes it abundantly clear that one has written rubbish code; there's a lot of channel reads and writes littering one's code.

4
1
bazza
Silver badge

Re: Don't panic, "No exploit code has been released."

Oh don't worry, some of us have been "deeply concerned" (actually, quivering wrecks but masking it well, chin up) for quite some time now.

This whole thing is going to pan out to be far worse than Y2K, for there will be real and far reaching consequences.

8
1
bazza
Silver badge

Re: Just kill ALL code in a browser.

Though perfectly valid, that's a very "me" point of view. I wholeheartedly agree that running arbitrary code downloaded into some sort of browser based execution engine is asking for trouble.

Other people have the problem that, by intent and design, they're letting other users choose what to run on their hardware. Services like AWS are exactly that. If one lets employees install software on their company laptop, it's the same problem. A computer that is locked down so that only code that the IT administrator knows about is running is very often a useless tool for the actual user.

So really, the flaws need fixing (as well as ditching Javascript) otherwise computers become pretty useless tools for the rest of us.

11
1
bazza
Silver badge
Mushroom

Time for NUMA, Embrace your Inner CSP

This particular round of hardware flaws has come about because the chip manufacturers have continued to support SMP whilst building architectures that are, effectively, NUMA. The SMP is synthesised on top of the underlying NUMA architecture. That's what all these cache coherency and memory access protocols are for.

This is basically a decades-long bodge to save us all having to rewrite OSes and a whole shedload of software. This is the biggest hint that the entire computing community has been recklessly lazy by failing to change. If we want speed and security it seems that we will have to rewrite a lot of stuff so that it works on pure NUMA architectures.

<smugmode>The vast majority of code I've ever written is either Actor Model or Communicating Sequential Processes, so I'm already there</smugmode>

Seriously though, languages like Rust do CSP as part of their native language. An OS written in Rust using its CSPness wouldn't need SMP. Though the current compiler would need changing because of course it too currently assumes an underlying SMP hardware architecture... If the SMP bit of our lives can be ditched we'll have faster CPUs and no cache coherency based design flaws, instead of slowed down software running on top of bodged and rebodged CPU microcode.

Besides, CSP is great once you get your head around it. It's far easier to write correct and very reliable multi-threaded software using CSP than using shared memory and mutexes. You can do a mathematical proof of correctness with a CSP system, whereas you cannot even exhaustively test a multithreaded, shared memory + mutexes system.

Oh, and Inmos / Tony Hoare got it right, and everyone else has been lazy and wrong.

12
2

What if I told you that flash drives could do their own processing?

bazza
Silver badge

Re: I'd say CAFS

Yup Bazza, ICL had VME [Virtual Machine Environment] decades before VMware re-invented virtualisation

Ultimately we have to credit Alan Turing with the idea; he showed in one of his papers on computability that a universal Turing machine could emulate any other Turing machine, given enough memory.

Though I don't suppose Turing foresaw the horror that is VMware's web-based VM console viewer.

0
1
bazza
Silver badge

Re: I'd say CAFS

Your patience has been rewarded!

Sometimes I wonder if the next great "innovation" could be brought forward by employing a historian instead of a computer scientist under 60 years old. Historians with a bit of nous can be very good at unearthing hidden gems from old stuff.

For me it's been amusing to see Communicating Sequential Processes being reinvented yet again.

2
1

Still not on Windows 10? Fine, sighs Microsoft, here are its antivirus tools for Windows 7, 8.1

bazza
Silver badge

Wild Arse Guess

Could this be them thinking, "there's a lot of Windows 7 machines out there that we don't want falling victim to Meltdown / Spectre vectored threats, 'coz that'd make us look bad even if it is an old OS."?

23
7

No sh*t, Sherlock! Bloke suspected of swallowing drug stash keeps colon schtum for 22 DAYS

bazza
Silver badge

Re: The Assagne option

Have they offered him Ferrero Rocher?

Yes, every police station in the UK has some ready just in case some ex-Aussie info peddler with an addiction problem drops in. You can't be too careful with dietary requirements these days...

9
1

Wow, MIND-BLOWING: Florida Man gets an earful from 'exploding Apple AirPod' bud

bazza
Silver badge

Anyone Said It Yet?

He was wearing it the wrong way.

7
3

You can resurrect any deleted GitHub account name. And this is why we have trust issues

bazza
Silver badge

What am I missing here?

Not a lot so far as I can see. Just another symptom of the lack of rigour within the software dev community these days.

Internet repos have made programmers lazy. Perhaps there ought to be a quarterly outage (on some random day within the quarter) on all of these online repositories, just to remind people. A bit like Netflix, which has a piece of software that deliberately and randomly crashes bits of its infrastructure to make sure that its devs have built a resilient system.

As for GitHub permitting a new account to be stood up with a previously used name - terrible.

56
2

Apple's top-secret iBoot firmware source code spills onto GitHub for some insane reason

bazza
Silver badge

Re: Cupertino's highly secretive idiot-tax operations

(Yes, El Reg hacks use Macs. That's part of the joke. We also have a new rule that you have to split your time between macOS / Linux and Windows, so we get the same daily experience of crap technology our readers face.)

Er, you should add Solaris, FreeBSD and OS/2 to your list; I fear you're missing out on the adventure of a lifetime!

1
1

Facebook users are Zucking off, but that's what Zuck wants

bazza
Silver badge

Re: What's your angle, bitch?

FTFY:

Silicon Valley: socially liberal to a fault but libertarian where it really matters ($$$$$$$$$$$)

I don't see why he should be content with fewer hours on site. Ad revenue will always turn out to be proportional to how many adverts are seen. And anyone remember Myspace? They experienced a drop in hours on site too.

In reality Zuckerberg is likely beginning to feel a few qualms. Or at least to the extent that anyone with a few $billion in the savings account can ever feel qualms. Regulation in some quite large markets is a real and growing possibility. Users can potentially desert the site overnight. But ultimately I don't think he personally will lose much sleep over it.

2
3

When you play this song backwards, you can hear Satan. Play it forwards, and it hijacks Siri, Alexa

bazza
Silver badge

Re: How well does this attack work

"Hey Siri, buy me an Alexa"

Or Any Windows 10 PC, soon, apparently.

1
1

NASA finds satellite, realises it has lost the software and kit that talk to it

bazza
Silver badge

Re: Need help, NASA?

Good golly - they need Windows ME!!

No, they really really really do not need ME...

11
2

Firefox to emit ‘occasional sponsored story’ in ads test

bazza
Silver badge

Re: Firefox should get money from UK's TV licence...to save it from ads.

Introduce the Millennials to the joys of Muffin the Mule!

"We like Muffin, Muffin the Mule …"

Or as Billy Connelly put it, "Muffin the Mule, not allowed to do that anymore..."

0
1

UK's iconic Jodrell Bank Observatory nominated as World Heritage Site

bazza
Silver badge

Re: Doesn't that mean the crap dish can't be replaced by new gear?

I drive past the Mullard Radio Astronomy Observatory every day of the week, and it looks to be in pretty busy use. And of the three routes under consideration for the Bedford / Cambridge stretch of the re-opened Varsity line, only one of them involves loosely following the old route and avoids the MRAO. Jodrell is in active use (or was less than 18 months ago when I was last there). And just in case you're in any doubt about the continued usefulness of places like Jodrell and Mullard, take a look at E-Merlin.

6
1

Can't login to Skype? You're not alone. Chat app's been a bit crap for five days now

bazza
Silver badge

Re: Any good replacement that would be simple for elder users to install?

Try BBM. Works on iOS, Android (including new BlackBerries), BB10 (old BlackBerries). My family is using it, crumbly parents included. The video / sound quality is pretty good, and so far zero problems with availability. It does some things really well.

1
1

Trump White House mulls nationalizing 5G... an idea going down like 'a balloon made out of a Ford Pinto'

bazza
Silver badge

Re: JohnFen What race?

With the rush to 4G, 3G is in danger of being forgotten, which is kind of a missed opportunity. There's 3G spectrum out there, and a growing stockpile of spare kit; it may as well be used properly.

I know that 3G networks are nasty to set up well (ask any network engineer about cell breathing). But a well set up network is still a very good thing. Anyone with experience of Japan's NTT Docomo's 3G network whilst on a Shinkansen train doing 190mph in a tunnel and still getting 20Mbit/s will testify to the potential 3G had / has.

It's very easy for us Europeans to tease the US about the poor state of some of its infrastructure (power, wireless networks, etc). However I don't think many Europeans really realise just how vast and empty large chunks of the US really are. It's vast.

It's a major engineering challenge to provide things like power and comms to places which are very nearly empty. Same goes for Canada, Australia, Russia, China, Africa, etc.

5
2
bazza
Silver badge

Re: What race?

Indeed, who? And if it comes to "racing" against other countries, the US is nowhere near the front. AFAIK the Japanese, South Koreans, and large chunks of Europe had 4G long before the US...

Japan and South Korea seem to deploy networks so quickly that the ink has barely had time to dry on the standards documents before they have national coverage and an array of competing providers. I know they have smaller geographic areas to cover, but even so.

There's a kind of nationalisation here in the UK; network operators are under some pressure these days to share base station sites. That's not so very far removed from turning all the operators into virtual networks on a single physical network...

Some kind of sense? Perhaps. I suspect that a radio network works and scales far better when it's the only network, instead of having to compete for spectrum, cell tower sites, back haul network capacity, etc.

10
1

I want life to be boring, says Linus Torvalds as Linux 4.15 debuts

bazza
Silver badge

Re: Retpoline

Apparently they got the idea from John Lewis.

0
1

FYI: Processor bugs are everywhere – just ask Intel and AMD

bazza
Silver badge

Er, there's OpenPOWER from IBM (current and open source) and Sun used to give away SPARC designs for free (I think Oracle still do).

OpenPOWER is particularly attractive; there's an outfit called Raptor Engineering doing a completely open-source machine (chips, board schematic, firmware and Linux) based on it. There's lots of reasons to buy one of those!

29
3

Death notice: Moore’s Law. 19 April 1965 – 2 January 2018

bazza
Silver badge

Re: Absolute tosh!

@Charles,

IOW, caching is basically a case of "Ye cannae fight physics," hitting a hard limit with the Speed of Electricity.

I think we can do a little better than the DRAM that's currently used. HP's memristor is (apparently) faster than DRAM. As well as being non-volatile and with no wear life problems and huge capacities. So a SIMM based on that would be quicker. But still not quick enough to eliminate the need for a cache.

As things are today it's kinda nuts; the signalling rate down those PCB traces is so fast that they're RF transmission lines, and there's more than 1 bit on the trace at any one time! It was the Cell processor in the PS3 that first used that style of RAM connection. Sigh - I miss the Cell; 100GByte/sec main memory interface. It was one helluva chip.

Unless, of course, latency comes into play. Why do you think network computing has such limited use outside the controlled environment of LANs? Because the Internet is itself an untrusted, unreliable environment. You're simply trading one set of disadvantages for another.

Not really. We already have an elaborate certification system to establish that the website I'm getting data from is in fact the website it says it is. All I'm talking about is changing the data that's received. At present it's a blend of html, javascript, css, etc. That's not a problem if it comes from a website we trust, but the javascript is potentially disastrous if it comes from a malicious website. However, if what my "browser" received were simply a remote display protocol then I wouldn't care what the website is showing me; it could not (assuming the protocol implementation is good) run arbitrary code on my machine. There would be no such thing as a malicious site, because there would be no mechanism by which any site could launch arbitrary code on a client's machine.

I suppose I have to trust the site to run the code they've said they will. But I do that anyway today; for example I trust Google to send me the correct Javascript for what is to be done.

As for reliability - services like Google Docs are all about the Internet and Google's computers being reliable (or at least they're supposed to be).

And for many, the reason the code MUST run client-side is because you need the speed you cannot get other than from a locally-run machine. Ask any gamer.

That's true enough; a game that runs in a browser is better off running client side instead of server side. I suppose I'd counter that line of argument by asking what's wrong with a proper piece of installable software instead (I know, I know; web, write once run anywhere, etc etc).

But for the majority of what most of us do with the web I dare say that we'd not notice the difference. Furthermore the monstrous size of the pages some websites dish up these days is ridiculous (www.telegraph.co.uk is appallingly bloaty). We really would be better off getting a remote display data stream instead; it'd be less data to download.

As far as I can tell there is no real disadvantage for the client in having server-side execution viewed with some sort of remote display protocol (unless it's a game), and only positive benefits. The server's worse off though; instead of just dishing out a megabyte or so of html/javascript/css/images, it'd have to actually run the darn stuff itself. That would take considerably more electrical power than the likes of Google and Amazon consume today. The economic model of a lot of today's "free" services would be ruined.

I think that it's unfortunate that the companies that would lose a lot by such a massive change (Google, Facebook, etc) are also those with a lot of influence over the web technologies themselves (especially Chrome from Google). Instead of getting web technologies that are better for clients, they're in a position to ensure that we keep using technologies that are better for themselves. That's not so good in my view.

Interestingly, I've been taking a close look at PCoIP recently. One of the directions Teradici seem to be headed in is that you use that protocol to view a desktop hosted on AWS. That's not so far away from the model I've outlined above...

5
2
bazza
Silver badge

Re: Absolute tosh!

This is sheer prophecy - i.e. total BS

It's a bit like when scientists claim that we currently know everything worth knowing except maybe some constants to an even higher number of decimal places. It's happened many times through history and these prophecies have always been wrong.

Who knows knows what innovations will come? Nobody does.

So long as DRAM is slower than CPU cores, we'll need caches and speculative execution to keep things as fast as they currently are. Given that DRAM latency is effectively governed by the speed of a signal along the PCB trace between the CPU and the SIMM, I'd say we're pretty much stuffed.
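Some rough, purely illustrative numbers to show why (the trace length, clock rate and access time below are assumptions, not measurements): signals in FR-4 board material travel at roughly half the speed of light, so even the round-trip flight time over a ~10 cm trace costs a handful of 4 GHz cycles, and a whole DRAM access costs a couple of hundred.

```go
package main

import "fmt"

func main() {
	const (
		traceLen   = 0.10  // metres of PCB trace, CPU to memory module (assumed)
		propSpeed  = 1.5e8 // m/s, roughly c/2 in FR-4 (assumed)
		cpuClockHz = 4.0e9 // a 4 GHz core (assumed)
		dramAccess = 60e-9 // seconds for a full DRAM access, array + protocol (assumed)
	)

	flight := 2 * traceLen / propSpeed // round trip on the trace alone
	fmt.Printf("round-trip flight time: %.2f ns  (~%.0f CPU cycles)\n",
		flight*1e9, flight*cpuClockHz)
	fmt.Printf("whole DRAM access:      %.0f ns   (~%.0f CPU cycles)\n",
		dramAccess*1e9, dramAccess*cpuClockHz)
}
```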

Stop Executing Arbitrary Code

One aspect overlooked in a lot of the discussion is that this is only, and really only, a problem if you are executing code on your machine that you don't trust. If you trust all the software that's running, then you have no need to patch or redesign to avoid Meltdown and Spectre.

The real problem behind this is that these days pretty much everything we have in modern software involves running code we don't trust. This might be Javascript in a browser tab, or hosting VMs on a public cloud. It would be utterly crazy if we reversed a whole 22 years of CPU design progress simply because our modern approach to running software is, well, ludicrously risky.

I say a better approach would be to retreat from arbitrary code execution, and start thinking about how we might have remote presentation protocols instead. There's no particular need to run the code client-side, just so long as the code's output is visible client-side. So far, so very X server. However, we should recognise that it's impossible to exploit a properly implemented execution-less protocol; perhaps we should consider it as a way forward.

7
2
bazza
Silver badge

Re: Speculative execution

We're headed back towards the Transputer in more ways than you'd imagine.

Firstly, today's SMP execution environment provided by Intel and AMD is implemented on an architecture that is becoming more and more NUMA (especially AMD; Intel have QPI between chips, not between cores). The SMP part is faked on top of an underlying serial interconnect (Hypertransport for AMD, QPI for Intel).

So, the underlying architecture is becoming more and more like a network of Transputers, with the faked SMP layer existing only to be compatible with the vast amount of code we have (OSes and applications) that expects it.

And then languages like Rust and Go are implementing Communicating Sequential Processes as a native part of the language, just like Occam on Transputers. Running CSP-style software on an SMP environment which is itself implemented on top of NUMA (which is where CSP shines) simply introduces a lot of unnecessary layers between application code and microelectronics.

Sigh. Stick around in this business long enough and you can say you've seen it all come and go once before. Possibly more.

Having said all that, I'm not so sure that a pure NUMA architecture would actually solve the problem. The problem is speculative execution (Spectre) and Intel's failure to enforce memory access controls in speculatively executed branches (Meltdown), not whether or not the microelectronic architecture of the machine is SMP, nearly SMP, or pure NUMA. A NUMA architecture would limit the reach of an attack based on Spectre, but it would not eliminate it altogether.

7
1

It's 2018 and… wow, you're still using Firefox? All right then, patch these horrid bugs

bazza
Silver badge

Re: Where's the Rust?

It'll be interesting to see where they go with Rust. From what I've heard the parts that have been Rusted-up are remarkably good, so perhaps they are strongly motivated to get on with rewriting the remainder.

From what I've seen Rust is rapidly becoming the language to use. High-level enough to make life easy (though the learning curve is a bit steep), fast, with some really nice tricks, yet low-level enough to be a systems language.

The warning signs for everyone are in the Redox OS project; they've done an awful lot of code in a pretty short time. Going from the ground up to an OS that boots and runs a GUI in the time they've taken is pretty impressive. It would be interesting to compare their progress to Google's Fuchsia (AFAIK written in C/C++).

6
2

'WHAT THE F*CK IS GOING ON?' Linus Torvalds explodes at Intel spinning Spectre fix as a security feature

bazza
Silver badge

There do seem to be some signs of desperation emanating from Intel at the moment. This kind of fault is a real, real danger to the commercial health of a company such as Intel. They're going to need a new or modified core design pretty damn soon.

Intel are fortunate that AMD's chips aren't completely Spectre-proof, which is muddying the waters somewhat. Can you imagine what would be happening if AMD's chips weren't affected at all? Intel would be struggling to sell a single chip at the moment.

50
1
bazza
Silver badge

Re: Why are the patches so late?

Writing microcode isn't the easiest of jobs I imagine... I can see that under normal circumstances it might take quite a long time to develop.

Also the kernel changes are pretty significant. AFAIK the Linux kernel patches were already in the can, but only because someone else had thought such an architectural change might be a good idea.

13
1

Linux's Grsecurity dev team takes blog 'libel' fight to higher court

bazza
Silver badge

Re: GRSec.

It can only be a matter of contract law if it's actually in a contract. However, if GRSecurity spot you leaking their code while exercising your GPL2 rights, all they need to do is refuse future purchase orders from you. There doesn't actually need to be anything written down anywhere at all in any contract whatsoever, and they don't need to have told you in advance. I'm guessing that's how they've done it, and they've just relied on word getting around the industry. Law cannot ordinarily make you sell something you don't want to sell.

Sneaky? Certainly. Illegal? Probably not.

If they have written it into their contract, that would be very bold indeed, and certainly much more challengeable, but still not a slam-dunk, gonna-lose-in-court document. I think that we should presume that it doesn't exist and that GR has actually got a position far stronger than most people think.

I come back to my point about disturbing a hornet's nest. If this does ever come to court, and GR win (which I think they will), then where does that leave everyone else? Which is worse: a GPL2 license of doubtful but untested strength, or a GPL2 license that is confirmed broken by a court case? If GR win then anyone else can take GPL2-licensed code (not just the Linux kernel), sell it with unwritten constraints, and there'd be nothing that anyone can do about it from that point onwards. How about if, purely hypothetically, RedHat decided to follow suit? Not that I can ever see RedHat doing such a thing, of course.

The ultimate solution to all this is to relicense Linux to satisfactorily reflect what the kernel community generally actually wants in this modern era. That is going to be difficult; some of the contributors are dead, and their code would have to be expunged / re-written. The longer this is left, the worse this problem with GPL2 not really being fit for purpose will get.

0
1
bazza
Silver badge

Re: GRSec.

At one point Spengler's work was marvellous and free and actually had a rational point.

I would not be entirely surprised if opinions differed. I'm not saying that the mainstream kernel community's approach to the immense CVE list is invalid; it's perfectly acceptable in a normal, open society. But it's not one that everyone wants. And opinion shouldn't be allowed to stop someone else doing something about it, even if most people think that what they've done is crazy.

Perens's opinion is one I happen to share. And have shared. (Historically) I was the initial SA deploying linux at an enterprise and there was some push at *that* time to pull in GRSec patches, however the conflict between the GRSec agreement and the RH agreement at the *legal* level at the time was already a substantial issue. It was made worse by the "pay only" model that Spengler took on...

What I do find objectionable about this whole situation is the use of public opinion to sway public perceptions of what the license actually says. Contrary to what most people think, there is no obligation under GPL2 to do anything more than send source on a CD-R in the post, on request. Even punched paper tape is, technically speaking, acceptable. There is no obligation to do even that after three years. There is no obligation to distribute the source to the entire population of the planet, only to people you have given a binary to. There is no obligation to send the source code again simply because some of it has changed. Clause 6 mentions "The Program", not any other program or future versions of it, and applies only if you actually choose to distribute it to someone. There is no obligation to onward-distribute source code you have acquired, unless you distribute a binary built from it (just as well, otherwise we'd all be in trouble).

We Don't Want to be in a World Where License Terms Can be Changed Retrospectively

The role of public opinion in this is important. Most people are of the firm opinion that open source always means "I can download it from some server whenever I like". Some licenses are like that. GPL2 really is not.

However, if a court eventually caves in to the weight of public opinion stoked up by people like Perens and forces a re-interpretation of the GPL2 to include terms like making code available on a web server to all and sundry, then a very important thing will have happened:

The source code would have been forcibly released under different license terms by a court not acting at the request of, or with the consent of, the author(s).

That would be an atrocious precedent to set. It seriously threatens the certainty of all software licenses. It would mean that all GPL2 code everywhere was now fair game. And if GPL2, why not some more proprietary licenses?

That would cost us all dearly, in the end.

There's enough of a problem brewing with Google resorting to claiming "Fair Use" in its dispute with Oracle over Java. If Google ultimately win that one (it's still rumbling along), and Perens's firm opinion gets adopted as a precedent by some court somewhere, then as far as I can tell all bets are off: source code (either proprietary or free) can no longer be adequately defended by copyright law.

And it's copyright law that licenses such as GPL2, GPL3, etc utterly rely on.

So I'm annoyed with Perens for stirring the pot. Is the Linux source code licensing situation ideal for what most contributors want? No, frankly it's crap. But it's nearly 30 years too late to correct that. Are the actions of GR legal? Probably yes. Are they in any way significant to what the rest of the Linux world does? Completely not. Could this all turn into a clusterfsck for the rest of us? Quite easily. Why risk that? Leave sleeping hornets' nests alone, I say.

Inevitable

Situations such as this were always kind of inevitable with the GPLs. Their copyleft nature is their very own weakness; any flaw in their terms is unrecoverable. Fixing the perceived flaws by stretching the copyright laws that the licenses rely on is going to weaken the licenses in other ways.

Personally speaking I think that GPL has not been of significant benefit to Linux or other projects when compared to, say, the BSD license. FreeBSD is even more freely licensed than GPL2, and that's not done FreeBSD any harm at all (in terms of community activity, code quality, etc).

GPL2 has also been a significant barrier to getting useful freely available code into Linux (ZFS, DTRACE, device drivers, etc). Getting stoked up by people like Perens about GPL2 adherence simply raises the barriers to becoming more accepting of other licenses, which brings its own problems.

To get around some of these legal barriers and issues we see projects like Google's Project Treble emerging. That stands a very good chance of fixing device driver issues on Android (and thence everywhere else), but it will then be significantly different to the mainstream. Fragmentation is a bad thing; it dilutes effort.

3
8
bazza
Silver badge

Grsecurity claim this means they're abiding by GPLv2, Perens says it breaks GPLv2. I suspect Perens is right, but the IP lawyers will have a bun fight over it in court.

I'm not so sure. There is no mention anywhere whatsoever in GPL2 about future releases. You don't even have to distribute the source of the binaries you have distributed after 3 years, and you certainly don't have to put it on the Internet open to all.

0
5
bazza
Silver badge

Re: Way to damage your own credibility

Whatever happened to freedom of speech?

Nothing. What the US constitution does not guarantee is a lack of consequences arising from what one has said.

A factor that is also often overlooked by the commentariat is that Perens is not just some random commentard. He's been an expert witness in court cases involving open source license disputes. So it is reasonable to consider his opinion to be rather more weighty, regardless of whether it's right or wrong. That might cost him dearly.

0
4
bazza
Silver badge

Re: Way to damage your own credibility

What happened was that loads of people didn't send them any money and / or ripped off their trademarks and company name. This behaviour included quite large outfits such as (reportedly) Intel.

So it's not surprising that they got fed up with that.

0
5

Meltdown/Spectre week three: World still knee-deep in something nasty

bazza
Silver badge

But we've also heard an industry-wide silence about CPU-makers’ roadmaps for a Meltdown-and-Spectre-free future. Rumours are rife that a generation of products will have to be redesigned, at unknowable expense and after un-guessable amounts of time.

It varies. Right now I'm not sure that Intel has anything in its product portfolios that you'd actually want to buy. AMD, Oracle SPARC and IBM Power are less affected but they still have to sort out Spectre.

So far as I can see the only sure way out of this is to not use speculative execution. Welcome back to the Dark Ages of CPU architectures. Things will get very slow...

Whilst there's a single cache, memory system and speculative execution there is no true fix for this. One could lock the cache whilst a branch is executing, but then you would have to wonder about thread preemption. It's a real mess.

6
2
