* Posts by ATeal

191 posts • joined 20 Dec 2013


Alibaba sketches world's 'fastest' 'open-source' RISC-V processor yet: 16 cores, 64-bit, 2.5GHz, 12nm, out-of-order exec


RE: 50 instructions

It's REALLY hard to count instructions. For example, NOP and "shift right by zero" are the same on one arch (MIPS?). Also look at MOV/move instructions: are they just register moves, or loads and stores too? What about MOV on x86 (-64 too), which loads/stores but from a calculated address for small values? (It shares its addressing modes with the LEA instruction for the most common cases.)

Really... ultimately, don't you think each encoding should count as one? So if we have an instruction taking 3 registers (say A+B-->C), should that count as R^3 instructions, where R is the number of (eligible) registers? Say we have 16 general-purpose registers: that's 4096 instructions right there. Okay, now suppose we support 4 operand sizes: just shy of 16.4k instructions now. Or is this just 1?

So I can well believe it's 50, which is really "5 instructions" in some sense, with some parameters for a few of them that let us count 50 overall.

So how many things can RISC-V do, really? I haven't checked, but I'd expect it to have at least 32 registers (integer and FP, so at least 64). Okay, so 131k instructions, or 1?
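The back-of-envelope counting above is easy to sanity-check (a sketch of the arithmetic in the comment, nothing to do with any actual encoding):

```python
# Back-of-envelope instruction counting: does one 3-operand opcode count
# as 1 instruction, or as one per operand combination?
def encodings(registers: int, operands: int = 3, sizes: int = 1) -> int:
    """Number of distinct encodings for one opcode."""
    return registers ** operands * sizes

print(encodings(16))            # 16 GP registers, 3 operands -> 4096
print(encodings(16, sizes=4))   # plus 4 operand sizes -> 16384 ("just shy of 16.4k")
print(encodings(32, sizes=4))   # 32 registers, 4 sizes -> 131072 ("131k")
```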

Oh, I just thought of a good example: is LEA separate from a MOV (in this case the memory-using MOV, not the reg-reg type), or is it the same instruction with a bit set to say "store the address, not the data itself"?

I'd prefer a table of what they say they added TBH. So I don't doubt there are 50 in some sense.

Take the bus... to get some new cables: Raspberry Pi 4s are a bit picky about USB-Cs


Re: Need to wait for

People missed the reference: crap cables don't work properly with USB 3.0 and thus "would work" - see my comment above.


Re: I at least partially blame USB C

I apologise for my lack of punctuation and lack of any sort of proof-reading.


I at least partially blame USB C

This has probably been said, but not as fast as I thought. In USB-C systems a cable is not just a cable; you can screw it up now. Remember a few years ago when there was that Google guy who made a page reviewing cables, because a bad one could actually be dangerous and damage devices?

I don't like this. Not only have we lost backwards compatibility (or at least it being so easy), there are now /so many/ variables in play: what can the cable handle? What can that device output? I've been avoiding it, and I'm starting to think I won't be able to much longer.

Forgive my lack of a strong argument - memories I don't want to bring back, and also I GTG now. But was it worth it to switch? Okay, so the weird flip-the-thing-3-times dance won't happen any more.

I've not worked directly with a 3.0-or-above device on the software side (I've used libusb and do have some good notes on it, but that was, oof, a good 7 years ago now), nor read much about it - can someone answer with how QoS is maintained? I know of some (dead) protocols like FireWire, before the consortium ballsed it up, where devices basically got timeslots. Your mouse would always get a chance to send stuff irrespective of what the rest of the traffic was, because it was guaranteed some regular slots. With all I've heard (but again, not looked into) about the quest to do everything over USB-C, it'd need this, surely?
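The FireWire-style guaranteed-timeslot idea can be sketched as a toy scheduler (my own illustration, not how any real USB or FireWire stack is written): each frame, reserved devices transmit first, and bulk traffic only gets whatever bandwidth is left.

```python
# Toy isochronous-style scheduler: per frame, reserved endpoints always
# get their slots; bulk traffic competes for the remaining bandwidth.
def schedule_frame(reserved, bulk_queue, frame_budget):
    """reserved, bulk_queue: lists of (name, slots).
    Returns the transmissions that fit into this frame."""
    sent, used = [], 0
    for name, slots in reserved:          # guaranteed traffic goes first
        sent.append(name)
        used += slots
    while bulk_queue and used + bulk_queue[0][1] <= frame_budget:
        name, slots = bulk_queue.pop(0)   # best-effort traffic fills the rest
        sent.append(name)
        used += slots
    return sent

# The mouse gets through even when bulk transfers would saturate the frame.
frame = schedule_frame([("mouse", 1)], [("disk", 7), ("disk2", 7)], frame_budget=8)
print(frame)  # ['mouse', 'disk']
```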

Microsoft doles out PowerShell 7 preview. It works. People like it. We can't find a reason to be sarcastic about it


Re: No sarcasm you say

Addendum: I was just reading below and I can see the comments on "my team" are not very good - I'm not with those guys! Take this at face value.


Like the Emperor in Star Wars, "I shall become more powerful than you can imagine", or something? It's very hard to measure this (beyond the 4 levels of computational power ending with "Turing machine"), and I'm not sure "oh yeah, in this language you can just type the letter z and I made it do that exact thing for you" is good - as that'd make whatever the fuck it is "more powerful", right?

However, I'm very sceptical that PowerShell is something 50 years of shell developers looked at and thought "you know what, we don't need that", and that it isn't just a case of "we do it how we've always done it" (look at pipes, albeit an old example).

Having said that, I don't really care. I hear it's got a notion of types so you can stick JSON between things, and I've not bothered to look it up to see if this is true. It didn't stop me shaking my head and sighing, thinking "when will they learn" to myself. I could write (so trivially that it's not worth checking for existing work first, considering only time and effort) JSON tools to extract, append, etc. (ditto XML with XPath) - I could also write that parallel-for thing mentioned as a program which takes as arguments what to run, without touching the shell, just as it is possible (modulo low-hanging fruit for error messages and general speed improvements) to do with arithmetic.
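A minimal sketch of the point (my own toy, not an existing tool): a standalone filter that pulls a dotted path out of a JSON document, so plain untyped byte-stream pipes can carry structure without the shell itself needing types.

```python
# Toy "jq-lite": pull a dotted path out of a JSON document - the kind of
# thing a standalone filter can do so the shell pipe stays untyped bytes.
import json

def extract(doc, path):
    """Walk a dotted path like "user.name" through nested dicts."""
    for key in path.split("."):
        doc = doc[key]
    return doc

# As a filter this would just be json.load(sys.stdin) + print(extract(...)).
doc = json.loads('{"user": {"name": "ATeal", "posts": 191}}')
print(extract(doc, "user.name"))   # ATeal
print(extract(doc, "user.posts"))  # 191
```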

Seriously it'd be easy. Easier than faffing around with job control ;)

I think we should measure power as "A is at least as powerful as B <--> for all things B can do, A can do them too" (said differently, B is no more powerful than A) - without invoking Turing! I could work in XML if I wanted to. I could avoid everything on the normal PATH and use some other stuff instead....

I've admitted I'm not fit to compare the two, but is there even a point in trying? What I've described is (mostly? dare I say all?) vanilla POSIX shell stuff!

That's a hell of Huawei to run a business, Chinese giant scolds FedEx after internal files routed via America


OMG some actual evidence!

I don't like the hysteria that seems to surround Huwiiu (nor the Kasperskydoviksky lot), given the total lack of evidence; so it's nice to see some actual evidence (presumably undisputed).

Before drawing any conclusions is there any legit reason for this? Like suppose it was "next day delivery" - it may go a really silly route distance wise, but make it time wise? Is there some really big hub at their HQ where planes come in from all over the place, sorting happens, then things are sent to all over the place?

Having said that, I wouldn't put it past either of them (US/China; large US companies too - for example the relatively small RSA did some bad stuff) to "do bad things", as both of them /could/ (and one, for sure, in recent history, has - like, a lot), so maybe I should take a moment to separate out my distaste first!

Either way "us lot" are fucked though.

NPM today stands for Now Paging Microsoft: GitHub just launched its own software registry


Anyone else worrying about too much power?

I felt this pre Microsoft buying it BTW, so no EEE links please!

A lot of projects are switching from their own hosted repo (with mirrors about) to using GitHub as /the/ central repo of truth - ignoring the irony of "it's decentralised" supposedly being a big selling point of git, and the lack of actual decentralised users. I can't be the only one worrying that GitHub, this free (as in cost) thing that is basically being used as a CDN for some projects, might not be around forever, or might not be trustworthy.

Furthermore, a lot of projects with actual and useful wikis (as in MediaWiki, or something proper and worthy of the name) are now using GitHub's "wiki" - which really isn't fit for the name. It's a step back.

To my knowledge, you cannot easily view past states of a repository on GitHub. IIRC you can scroll down an infinite-scroll page of the commit log, but for a busy repo, finding what happened a few years ago is a NIGHTMARE I gave up on. There's no easy way to navigate temporally.

It does worry me. I must confess I don't use Git for my own projects (I can clone git repos and update them; that's more or less it) - I work mainly with centralised ones (version numbers <3). But if I update because I trust the committers, I don't diff the update or really read the log. Does anything stop GitHub from slipping in their own changes (if they wanted to)? Obviously they could make the website not show that step, but would anyone else be likely to notice?

Although not so much of an issue now, I was also worried about GitHub dying. At least with git (modulo large files?) every client has the complete history, so if it did go down, copies would quickly be plentiful. So many scripts and project pages point to it now; some even redirect to a github.io subdomain.

I digress. The problem with my worries is that eventually something will happen (nothing is permanent blah blah blah) - but I make no comment about when anything might happen. "GitHub is forever" of course cannot be. Maybe it's just my general resistance to change.... I dunno.

Anyone have any comments? I'd love to hear from someone who takes the stance that projects are not migrating fast enough, why are you okay with it?

Hi! It looks like you're working on a marketing strategy for a product nowhere near release! Would you like help?


Re: Cost centers

I really do hate the "regurgitate everything" part of that. "They" seem to think that the refactoring book (written by the guy who did the enterprise design patterns one - Fowler?), for example, is some sort of universal truth, along with other rules of thumb and names/concepts meant to help us reason about what we do.

It takes a lot of experience to realise (and to grasp the implications of realising) that all this stuff is man-made, not some natural or canonical (unless some hipster has gone full lambda calculus) thing.

If this was ever to be realised I suspect the dev-ops tab would vanish pretty quick ;)

Microsoft slaps the Edge name on SQL, unveils the HoloLens 2 Development Edition


Re: Differential Linguistics - AI parsing

Is this that "I have done computer science III" kid I was warned about?

Wannacry-slayer Marcus Hutchins pleads guilty to two counts of banking malware creation


Re: So now he has admitted to creating nasty malware.

man strings

man grep

IBM Watson Health cuts back Drug Discovery 'artificial intelligence' after lackluster sales


RE: No actually AI is...

I'm getting bored of bumbling around trying to sell people on functional analysis like some sort of deranged Jehovah's Witness (but armed with proof).

The problem we are all suffering from is that you can get convergence with pretty much any method. The conditions you can show are sufficient for convergence are one of the rare cases where the conditions are actually really, really broad. Typically, if you can show "given X, it converges", then X is some tiny, very specific bit of information; here X is very broad, and the NN (structure: activation function of ((sum of inputs * weights) + bias)) satisfies it. (FTW (LISP))

Nice to see some philosophy in there with schemas of knowledge. Ironic that you're bickering about what AI is, as if philosophy hasn't asked all those questions and formed an array of answers for us to muse on for 5 minutes before thinking we've come to some deep conclusions.

Seriously quantify more.

Addendum: you know how people think stats can assign a number to something which perfectly describes that thing? Well, polynomials are dense in the continuous functions - that is to say, a polynomial can get as close as you like (under a metric like the sup norm - I hate being vague, I set off my own noob-dar) to any continuous function. Well, floating point and the "network" structure can get pretty damn close too. So people imbue it with a "statistics says okay" sort of reverence.
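The density claim is the Weierstrass approximation theorem. A quick sketch of my own, using Bernstein polynomials (one constructive proof of it), shows the sup-norm error shrinking as the degree grows:

```python
# Bernstein polynomial approximation of a continuous function on [0, 1]:
# B_n(f)(x) = sum_{k=0..n} f(k/n) * C(n, k) * x^k * (1-x)^(n-k).
# Weierstrass: the sup-norm error -> 0 as n -> infinity.
from math import comb

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def sup_error(f, n, samples=200):
    """Approximate the sup-norm distance between f and B_n(f) on a grid."""
    xs = [i / samples for i in range(samples + 1)]
    return max(abs(f(x) - bernstein(f, n, x)) for x in xs)

f = lambda x: abs(x - 0.5)   # continuous but not smooth at 0.5
print(sup_error(f, 5), sup_error(f, 50))  # error shrinks as the degree grows
```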

We've read the Mueller report. Here's what you need to know: ██ ██ ███ ███████ █████ ███ ██ █████ ████████ █████


Re: How was it redacted?

Yeah I'd noticed how careful they were, not sure if shitty jpeg or 2nd gen either.


Re: Character spacing?

I can imagine you going around with Google Translate thinking "these X-ards speak shit X" when actually they have the hang of it.

Just for a laugh I did "Greetings simple peasant folk" into one language and then that to English (so one hop, English -> X -> Y -> English would be 2) and I got:

"Simple country farmers greetings"

Do you know how many words of various lengths there are? Fucking loads. When the redactions (of this document) start coming out as "investigative techniques of how Bush did 9/11", "the ongoing aftermath of Bush's doing of 9/11", "had a meeting to discuss how Bush did 9/11" - it'd so be accused of bias :P

Brit Watchkeeper drone fell in the sea because blocked sensor made algorithms flip out


Just to confirm:

They literally (not like "training issue" literally) can't control where it crashes, beyond working out - with some trig and iffy assumptions - where it'll land if they tell it to go somewhere from a certain direction?


Scare-bnb: Family finds creeper cams hidden in their weekend rental by scanning Wi-Fi



"Hidden cameras in listings"?

So what, he was fine because the camera wasn't mentioned in the listing?

Back to drawing board as Google cans AI ethics council amid complaints over right-wing member


FFS you'll find a reason to hate anyone if you look close enough

I don't see why this board had to have public names (beyond the "about us" page on some site somewhere) - I wouldn't want a board active on Twitter; you'd want people who knew their shit and were fit to consider the arguments, watching what goes on through the various projects in various offices.

The personal politics of members /may/ give them a stance when it comes to (a guess, and for example) "should we write software (naturally using AI and maybe blockchain - or something we can call either/both) to better manage those jail things for immigrants?" should that kind of stuff come up, but that's kind of the point of them! To walk the line between what's okay and what's not.

It's (urgh, this isn't going to look good) like animal testing. There was some lipstick example which wasn't tested and caused great harm to humans; after that, stuff was tested (since we now know what stuff does, the need for it has rather died down). That has always been a balancing act between "is the knowledge gained truly something we cannot learn any other way?", "to what level of harm and for how long are animals exposed?" and so on - you don't want a group that answers "never" to every question; you want a mix, an "open" (to some extent) group that can see the points and give ground as needed.

For better or worse, you need someone who can see that (for better or worse) lipstick and the market for it are.... beyond their scope (dare I say), and so on.

As a "for worse" example: if you have a board that never yields in either direction, you can end up with what dogs endured at the hands of cigarette companies for decades, so the evidence could be fudged in that case. For better or worse lipstick was going to stay, there were dangers to humans, and animal testing in cosmetics has a fairly good history, believe it or not (which is why it's textbook).

Today's "no animal testing" cosmetics industry is only here because of the animal testing that worked out what was safe or unsafe and what could or couldn't be used; the premium a brand could charge for this (in the earlier days) offset the extra they'd pay for not using cheaper, newer ingredients, and so on.

The situation is somewhat similar. For better or worse, like cosmetics, Google are here to stay; an ethics board must balance both sides or be discarded, unfortunately. Again, this may suck, but the situation is what it is, and at least in the next few years it isn't going to change.

Two Arkansas dipsticks nicked after allegedly taking turns to shoot each other while wearing bulletproof vests


To be fair you would want to try it.

I'd want to go first and have to go afterwards ;)

Amazon consumer biz celebrates ridding itself of last Oracle database with tame staff party... and a Big Red piñata


WTF is up with that Dabbb guy?

https://forums.theregister.co.uk/user/66711/ he's all over this thread.

I'm guessing you have some sort of Oracle DB certification - you know most of that knowledge will carry over, right? Since 2012 MariaDB (at least) has really picked up the pace at adding stuff (before that it was like "we support /the syntax of/ constraints" and the like). Just yesterday I was reading about recursive "WITH" statements - the only feature I've ever looked up and found an Oracle DB documentation link for and little else. Sequences I found in Microsoft's Transact-SQL (the dialect used with MSSQL?) - MariaDB HAS THAT TOO (now). Forgive the tangent; it's just... they've really pulled their socks up.
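For anyone who hasn't met recursive WITH (a common-table-expression feature): a sketch using SQLite via Python's stdlib, purely as an illustration - exact syntax varies by dialect.

```python
# A recursive WITH (common table expression) generating 1..5,
# run against in-memory SQLite purely to illustrate the construct.
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1                      -- anchor member
        UNION ALL
        SELECT n + 1 FROM counter     -- recursive member
        WHERE n < 5
    )
    SELECT n FROM counter
""").fetchall()
print([n for (n,) in rows])  # [1, 2, 3, 4, 5]
```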

But the point still stands: if you can do Oracle, you can transfer those skills over. Yes, ALL of them take the SQL standard "under advisement", but transactions and isolation levels really only have a few ways to go - so the knowledge carries over.

FWIW, I think this has been a long time coming. For example: I've never used Oracle's (or MS') DB offerings - well, except for MySQL. I cut my teeth on that with PHP, then carried it over to other projects; I still use the long end-of-lifed GUI tools. I imagine I'm far from alone. It's probably only lasted this long because of the courses you can get certified for (probably - not my area).

Although, from what I've heard, there are still some rough edges around MySQL/MariaDB - for example replication. You get the binlog method (give the slave a set of changes to apply to tables) or the statement-based method (make the slave run the statement and derive the changes for itself), and in the latter case you have to be careful not to use non-deterministic functions, and so on. Oracle can do this better (? - I've just "heard" it). There's also the WITH statement stuff mentioned, but MariaDB has that now. The other case I imagine is indexes. I hear Transact-SQL gives you A LOT of control over table structure and index types. For a long time InnoDB (sometimes called XtraDB, but not any more) - the one with transactions - only had B-tree indexes (they could be unique; I mean the actual structure), not hash, R-tree or fulltext. Again, times are (FINALLY) changing!

However I imagine loads of other people (from those I've met, and my own work) just did what I did: build around it. Join us ;)

Brit founder of Windows leaks website BuildFeed, infosec bod spared jail over Microsoft hack


Who loves windows that much?

The sick bastards.

They're so lucky the judge imagined the convo:

"What are you in for?"


*Leans in close*

"You know *the* big tech company? Well I headed a site for *real* fans where we speculated on leaks and just generally worshipped stuff - in the end I fished for some credentials and had 2 and a half weeks of internet access no fetish could rival in pleasure - I saw everything"


"Oh wow ... what secrets did you find?"


"Oh it's a surprise my friend, but I will say this: brace yourself, next time you get pestered for updates and your computer comes up from that restart you are going to ejaculate when you see the next default wallpaper"

Seriously WTF?

Huawei savaged by Brit code review board over pisspoor dev practices


Yeah I was gonna say...

I wont bitch about how shit the world is now and all that - I may copy and paste the bitching but....

It's about par for the course - all Huweiewaiwoo can say is "we take it very seriously", like the others. A lot of gear that doesn't get much public exposure to penetration testers/security researchers/pointers-at-the-emperor's-genitals etc. is hidden away in telecoms, and some of the most dreadful stuff is there.

Not to get all philosophical on you guys, but the more "open" "it" (the platform, up to the ideal of the code plus a usable toolchain*) is, the better. For example, take a TV (getting a "dumb" one is hard these days), a Sky box, a games console, etc. These are computers, but locked down to fuck, and there's absolutely no way, without HUGE effort, to poke at these. That barrier is bad. It was once thought good enough, but as a certain TLA has taught us, some will actually go to great lengths to "poke at it" and keep the results secret.

It needs to be pokable.

Open-source 64-ish-bit serial number gen snafu sparks TLS security cert revoke runaround


Re: How do they know how many values are "wrong"

I've just realised that the number is simply how many were generated by this method and I feel silly now.


How do they know how many values are "wrong"

Shouldn't half have the top bit set and half not (give or take a very small "ish", obviously) if they come from a (C)PRNG source?
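That expectation is easy to check (a sketch of my own; seeded so it's repeatable):

```python
# If 64-bit serial numbers come from a uniform (C)PRNG, the top bit
# should be set in roughly half of them.
import random

random.seed(42)                      # fixed seed so the demo is repeatable
serials = [random.getrandbits(64) for _ in range(100_000)]
top_bit_set = sum(1 for s in serials if s >> 63) / len(serials)
print(top_bit_set)                   # hovers around 0.5
```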

The wording of that quote is very weird - "must be positive"? Well, you can always interpret some zeros and ones that way?

FFS, now I've got to go spec-diving.

Buffer overflow flaw in British Airways in-flight entertainment systems will affect other airlines, but why try it in the air?


No flame war on the "Right Thing" to do?

I imagine *A LOT* of things with text boxes are ill-equipped to deal with this (editors should be alright - they have nice trees, can work with a file rather than keeping it all in RAM, blah blah; I'm talking about something that is ultimately a null-terminated string). I am surprised there has been no real talk of what the program *ought* to have done.

In this case an absolute limit should be fine, but generally these are not good (some old editors have 4KiB or even 16KiB hard limits - not very future-proof, and often exceeded by generated files; Bison has options to generate small code even today because of this).

Anyway, what *should* you do, guys? C'mon - an absolute limit in *this* case, but those old editors.... should they try to find out how much RAM is free and use that? (I bet that's fired some of you lot up.)
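One boring answer, sketched below (my own toy, nothing to do with the actual IFE system): enforce an explicit cap and fail loudly, rather than letting a fixed buffer decide for you.

```python
# One defensive option: cap pasted/streamed input at an explicit limit
# and reject the excess loudly, instead of overrunning a fixed buffer.
import io

MAX_FIELD = 64 * 1024  # 64 KiB cap - an arbitrary choice for this sketch

def read_field(stream, limit=MAX_FIELD):
    data = stream.read(limit + 1)   # read one byte past the cap to detect overflow
    if len(data) > limit:
        raise ValueError(f"input exceeds {limit} byte limit")
    return data

print(len(read_field(io.BytesIO(b"x" * 100))))        # 100: small input is fine
try:
    read_field(io.BytesIO(b"x" * (MAX_FIELD + 1)))
except ValueError as e:
    print(e)                                          # oversized paste is rejected
```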



Let's not forget:

He copied and pasted stuff. He put some random crap into the window, selected it, copied, pasted a few times, then selected that, copied and pasted a few more times <--- that's it.

He plugged in a mouse, right? If it was that USB device that bricks whatever you connect it to (the one that charges slowly then shoves a lot back into the port), then yeah, you'd have a point.

Imagine that "try copying and pasting loads of text" becomes some standard benchmark that "average people" try for "fun", seeing if "software is up to par" - then there's no "security researcher" here.

C'mon guys get some perspective. If he attached a debugger then yeah maybe, but f'cking copying and pasting a few times?

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo


Re: Wrong way round

As a very informal rule of thumb (but as with all rules of thumb, useful - I only put this here to start a pissing contest with some of the tools here :P)

x86-64 carries around a "40% crap tax" - that is, you pay 40% of some measure to account for specific things. For example, 40% extra power for the nasty decoding problems, or 40% lower throughput if you skimp on them (which is why the Atoms sucked so badly: they still needed active cooling and were easily out-performed by a 2012 phone - but that's a long time ago; I don't remember when I got the netbook and joined the revolution of "so much battery life, so portable - tiny keyboard is unusable, and it's unusably crap").

In that trade-off is 40% more power usage.

I stress it's a rule of thumb. SandyBridge (although the idea existed before as a trace cache), and possibly Nehalem (the one before), use an "LSD" - loop stream detector (it was botched on Skylake, discovered by the OCaml people (naturally), and de-activated in a microcode patch). The idea is that it stores loops that are small enough in their entirety, allowing you to switch off the decoders (a huge saving!) so tight loops run near perfectly. That sounds pretty weird, right? If the decoders could keep it fed, why bother? Power savings.

Lastly, why this tax?

x86-64 instructions can be up to 15 bytes long and as short as 1 byte. So if I give you some quantity of bytes, you cannot mark out where instructions begin and end without decoding at least their lengths - and the use of prefix bytes and all kinds of other crap means this "pre-decoding", if you will, is basically a full decode. Once you've done that step the register masks are there to be syphoned off and so on. It's a really, really big penalty.

RISC architectures traditionally (you kinda have to, to be RISC, to be honest) have nice uniform instruction lengths, e.g. all 4 bytes. VLIW and EPIC take the fixed-length idea further with longer bundles (32 bytes is at the lower end for VLIW) and alignment requirements - that is, the address of the first byte of the instruction must be divisible by, say, 4 in this example.

So, given any chunk of bytes, I can say "if an instruction starts here, its address ends in 00" - with 8-byte instructions, it'd end in "000" - job done.
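A toy contrast (my own sketch with a made-up length table - real x86 decoding is far messier): with fixed 4-byte instructions the boundaries fall straight out of the addresses, while variable lengths force you to decode sequentially from a known start.

```python
# Finding instruction boundaries: trivial for fixed-width ISAs,
# inherently sequential for variable-width ones like x86-64.

def fixed_width_boundaries(start, size, width=4):
    # Fixed, aligned instructions: boundaries are just the addresses
    # divisible by the width - no decoding needed at all.
    return [a for a in range(start, start + size) if a % width == 0]

# Hypothetical variable-length ISA: the first byte determines the length.
FAKE_LENGTH_TABLE = {0x90: 1, 0xB8: 5, 0x0F: 3, 0x48: 7}

def variable_width_boundaries(code, start=0):
    # Must walk byte by byte from a known start; you can't jump into
    # the middle of the stream and know where instructions begin.
    boundaries, pc = [], start
    while pc < len(code):
        boundaries.append(pc)
        pc += FAKE_LENGTH_TABLE[code[pc]]
    return boundaries

print(fixed_width_boundaries(0, 16))                                     # [0, 4, 8, 12]
print(variable_width_boundaries(bytes([0x90, 0xB8, 0, 0, 0, 0, 0x90])))  # [0, 1, 6]
```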

I *believe*, but it's not my area (see above), that ARM chips can switch modes: they have a short 2-byte instruction form that covers loads of common cases, and the chip must be switched between modes. Another arch I've heard of requires a fetch to be 4 bytes - either 2x 2-byte short instructions or one full-size 4-byte one (and I've heard of something similar with 8-byte fetches accepting 2x 4 bytes for common cases).

But you get the gist: very easy. For x86(-64) this affects everything - branching, for example: "where's the start of the next instruction?" - nope, you can't just add 4. This needs to be known for branch histories too, and decoding is an absolute nightmare. This is why RISC emulators run reasonably well (talking pure emulation now) compared to emulating x86-64 (yes, brute force lets us run some stuff like this practically, but x86-64 is way, way, way more difficult).

What happened to RISC, you might ask, if it's so good? Well, there were like 4 or 5 RISC arches and suddenly there were zero: they all thought Itanium would be a good idea (enjoy looking that up). No one mentioned "but hey guys, doesn't that force an NP-hard problem onto the compiler?" or "doesn't that mean you can't be sure code you wrote ages ago will work even reasonably well on later versions, because of architecture changes?"

But anyway.... it was a bit before my time, but only PowerPC was left standing, sort of.

Itanium was supposed to be Intel's 64-bit thing - that's why it's called IA-64 (Intel Architecture 64). IA-32 is x86, and AMD64 is what we sometimes call "x86-64", because AMD realised "hey, backwards compatibility FTW!"

Now, you asked about power. Speaking purely of the CPU and not the Larrabee-derived Xeon Phi accelerators (they're like.... gimped/Atom-esque cores with AVX-512 bolted on - crap CPUs but decent vectorisers; they sit in a rare niche where a GPU is, even today, too not-general-purpose to do the job, so it needs the CPU parts):

40% savings in power - or you could have 40% extra "uncore" transistors (weirdly this means "the core of the core", kinda) to spend on something other than paying the tax; you get the idea. That 40% of transistors doesn't include the cache, BTW - purely "uncore".

That's a big deal.

Furthermore, the time of "wait 6 months, then it'll be faster" (hardware getting faster by itself) is long over. We're now deep into the "scaling out" side of things, and some algorithms are probabilistic (bad term on my part - NOT "probabilistic algorithms", something else, see the next paragraph). Yes, there's a lot of work not geared for this, but there's a lot of this work too! 40% is nearly half; you could almost run another core with that.

By "algorithms that are probabilistic" I meant, for example, that a certain search engine beginning with "G" (at least - I imagine it's common, it's easy) actually sends out 3 copies of every web query it gets, shows the results from the first one to come back, and ignores the rest. This hugely cuts down on latency.
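The first-response-wins trick can be sketched with stdlib threads (my own toy with made-up replica names and delays - I've no idea how the real thing is engineered):

```python
# "Send 3, take the first answer": issue redundant requests and return
# whichever finishes first, abandoning the stragglers.
import concurrent.futures
import time

def backend(name, delay):
    time.sleep(delay)           # stand-in for a replica answering a query
    return f"results from {name}"

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(backend, n, d)
               for n, d in [("replica-a", 0.30), ("replica-b", 0.05), ("replica-c", 0.20)]]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    winner = done.pop().result()  # first reply wins; the rest are ignored

print(winner)  # results from replica-b (the fastest replica)
```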

It's a hell of a saving and as I whined about above, I've wanted to see it for a long time.


Re: Well currently the problem with ARM is not the CPU

The problem, though, is that it's not a question of whether such an ARM chip exists - at least not for me. You epic prototypers out there - you guys rule!

It's whether or not it is crap, and whether it can be purchased reasonably (e.g. £499 for the equivalent of a Raspberry Pi (pick your edition) is not something I'd pay). And by "crap" I mean "has no DMA controller", because seeing it spend almost all of its time waiting for IO killed my enthusiasm for it.


Re: Well currently the problem with ARM is not the CPU

There are block diagrams openly available, check out the wikichip.org thing I mentioned. You can reason about the architecture quite easily - see Agner Fog's guides I also mentioned. You can get the most out of it pretty easily.

Yeah, there are no masks available, but there is the "high level": the cache, its coherency mechanisms (the protocol it uses), how it handles read-after-write (if you read a 2-byte piece of a 16-byte value just written, it can satisfy the request from the write it has yet to do; on older arches this depended on alignment - as of Nehalem it can do it all for 8-byte and 16-byte writes, but with an extra cycle penalty for reads straddling the 8-byte boundary).

It's all out there. Compiler writers use this too. I don't see what the issue is.

Frankly, I find it absurd that you thought I meant the *actual* schematics for their product line.


Re: Well currently the problem with ARM is not the CPU

I get that familiarity helps, but there are so many distinct versions sometimes under the same group name, it's difficult.

As for Intel's, it's a word, so it is memorable. And the second word is a series, so SandyBridge and Ivy Bridge are both from the "Bridge" series. There's been Bridge, Well and Lake - not too bad! Order is a bit harder, I grant you (I often get Haswell and Broadwell mixed up), but I know SandyBridge is (until they change it again) 2xxx, Ivy 3xxx, then there's 4xxx... Skylake 6xxx.

There's plenty of documentation for these and they're not that different for all the things with the same name.


Re: Well currently the problem with ARM is not the CPU

Yeah, I was looking for this. I want to like Arm, but the problem is there are just so many versions of everything, with "hidden" (or actually hidden) features abounding. At least on x86-64 we have (never thought I'd say this) http://www.acpi.info/DOWNLOADS/ACPIspec50.pdf <-- ACPI! Yes, there are a few things I'd tweak (so hardware developers didn't fuck them up so often), but given when it was made and the inexperience at the time, you can forgive it for being what it is (and any big changes now would just make supporting it worse).

We also have the CPUID instruction which, while really complicated and tedious to use - requiring a manual, sticky notes, a pencil and a rubber (because although legend says the other side of the rubber can erase pen, this is a legend with no basis in fact) for making notes about where what was mentioned -

BUT it is there!

It's also been designed sanely (AMD64 fixed a lot). They basically said "okay, we're doing this new 64-bit thing; it's very much like what we had, but imagine a find-and-replace of 32 with 64, renaming e.x to r.x (I'm simplifying, but you get what I mean), and we mandate that you at least have SSE2 and this other bunch of stuff" - that made it easier for all sides. New features (like AVX, for example) have to be turned on by a kernel aware of them before userspace can access them, so any aware kernel knows to save the registers as needed (and that they're bigger now) - stuff like that.

I want to use ARM stuff. I really want to start looking at parallel systems that are not quite as incestuous as current ones (and have way more cores - by an order of magnitude, really); by that I mean there's often an L3 cache that sits under all the cores, so they're far from independent really. I'd also like to write some stuff about ideas I had from learning to use the Cell chip in the PS3 (something else you couldn't use anywhere else!) - I'd love to actually use it for work!

People here have brought up the Raspberry Pi. I got a B, I think it was. It had no DMA controller, so reads from the SD card took forever (you could see the thing spent most of its time waiting for data), half of the exposed CPU features didn't work or couldn't be accessed, the GPU drivers were extremely bad and buggy (yes, I was using the legit one), and it was basically unusable as a computer. I hate software bloat as much as the next guy (I remember Excel 97 on an 800MHz P3 really fondly - the speed, so little memory compared to now - and I work hard to buck the trend here; rant for another time), but I don't think it was just that.

objdump works great on x86-64: if I don't know an instruction (pah!) you can copy and paste it and find a link to one of Intel's biblically sized volumes on the matter, or (if you don't want to self-harm, or in my case be driven to the harm of others) find some other reference. Felix Cloutier's site(?) pops up a lot now; yes, it's mechanically generated, but it's handy.

I have no bloody idea how many or which ARM instruction set I'm using. They have this really opaque marketing number system and that just doesn't work: "wait, I thought 11 was better than 9" - "oh, the Cortexes are crap when the number is a single digit" - "oh, except when it's big.LITTLE, because that's really a double-digit one with a single-digit one thrown in" - "BUT a single-digit one with NEON and THUMB is good, right?"

Anyone who got used to perf's event counters, prepare to be sent back to the dark ages, when debuggers had to modify the binary to set breakpoints and step! SORT OF, SOMETIMES, MAYBE.

They also give out very little information on the chip's insides (if you can work out WTF it actually is), whereas both Intel and AMD publish plenty - see for example Agner Fog's guides and Wikichip.org (put in Sandy Bridge, go through the well-documented architectures from then until now, and get a sexy textwall) - and you can reason about what kind of throughput you can get, why it's not giving you what you thought, etc. A large part of my job is squeezing everything I can out of Sandy Bridge and Skylake series chips.

Anyway I digress....

ARM keep their cards close, then burn them at the first opportunity. Then they eat the ash and keep their shit in a vault.

It's like trying to get the ring from Smeagol: "Trixy consumer, it's ours, the precious, you can't know about [NEON, THUMB, design, the one where it executes JVM bytecodes directly, information about buses, timing of instructions, ...]"

Which is weird, as I love reading old ARM documentation at bitsavers.org (it was before my time, but they're very detailed).

Crypto exchange in court: It owes $190m to netizens after founder 'dies without telling anyone vault passwords'


Re: Someone had wikipedia open when they wrote that

I didn't. I studied (past tense) this stuff, and I remember it?

What's annoying, though, is I gave a book with an easily available PDF which is accessible to... let's call them "crypto-users" - people who could read it and just ignore/gloss over the formal parts - AND THIS IS STILL GOING ON.

There's no excuse for this, but I'm arguing solo and not going to help someone against their will ;)


Re: Someone had wikipedia open when they wrote that

*sigh* c'mon guys read before you post.


Someone had wikipedia open when they wrote that

EDIT: Added book recommendation at the bottom; should be readable for any half-decent *cough* "programmer".

So a little bit snarky, but secret-sharing systems usually have some huge caveat. That's why we don't already use what would be a brilliant thing. (I don't want to argue about what constitutes shared secrets (not secret sharing), so let's take: "I give you a string of abstract bytes, you give me some pieces back I can give to a fixed number of people, such that the schemey-bastard threshold is > 50% OR SOMETHING".)

It's really not that easy. Doing it on paper is even worse - computers can at least do the tedious crap we can't do well.

So to answer your question: "none" - unless you are also like these numpties and willing to risk so much on some C++ thing you wrote once. From how you suggest it, I kinda doubt you've got past rule 1:

1) Don't write your own crypto.

With age and a fuck-tonne of maths spanning abstract algebra, measure theory, combinatorial optimisation and a few others (towards which I've spent 6 years veering, in no rush) AFTER the obvious stuff (i.e. not counting undergrad), the thought of breaking this rule WITH $190m on the line should allow one to cut a cigar with one's anus (should it rest on one's implementation).

Then you relax as you realise "wait the GPL says "no warranty fuckers!" ;)"

Lastly: the problem is actually not too difficult for small divisions; for example, with 3 people you have "1/3rd" as the only fraction in play when it comes to colluding. A big problem that quickly bites is that you get an n! blow-up (for sharing among n) trying to work out what order to put the pieces in.
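(For reference, the textbook construction - Shamir's polynomial scheme - sidesteps that ordering problem entirely: a share is a point (x, f(x)) on a random polynomial, so order doesn't matter. A toy Python sketch working mod a prime; per rule 1, this is for illustration only, not something to guard $190m with:)

```python
# Toy Shamir (k-of-n) secret sharing mod a prime. Rule 1 applies: do NOT use this.
import random

P = 2**127 - 1  # a Mersenne prime, big enough for a demo secret

def split(secret, n, k):
    # random degree-(k-1) polynomial with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # a share is the point (x, f(x)); share order doesn't matter, unlike byte-chopping
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x=0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(combine(shares[:3]))  # -> 123456789: any 3 of the 5 shares recover it
```

Any k shares reconstruct the polynomial's constant term; in the real scheme, fewer than k reveal nothing at all (information-theoretically).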

Conversely - and this is what I find interesting (for me anyway; I've come to accept that no one else gives a shit) - if we give up on the requirement above of "exactly n parts given out and all n required to unlock the secret, with no probable way to work out the order (this has a formal meaning)" (which, for the reasons stated, is impractical for n beyond a few, and for n small enough the bastard ratio is huge (eg 67% if 2 people collude with n=3)), it actually leads to some quite interesting ideas.

None of which are practical of course ;)

Book: Foundations of Cryptography - Volume 1: Basic Tools - Oded Goldreich

A lot of this is English, so you can just open it somewhere and "enjoy" - which is why I mention it. I know PDFs of it can be found. PLEASE DON'T THINK THAT THIS HAS ANYTHING TO DO WITH WRITING CRYPTO STUFF THAT YOU SHOULD TRUST: it's pure theory, and it's one of the few books I've found that spans both sides of the formal/informal divide.

Wow, fancy that. Web ad giant Google to block ad-blockers in Chrome. For safety, apparently


Re: Google are cunts

Doesn't that just get them money?

Open sourcerers drop sick Fedora Remix to get Windows Subsystem for Linux pumping


Re: Seriously ...

I imagine it's aimed at servers, where Linux has been the default for eons now (is it still?). There are a lot of installations of one moderate-to-large server as a single computer; management tend to want Windows - they know it (even the name) - and this can tip the constant battle in their favour if the grumbled "technically yes" to "can we run that shit on it?" can be extracted.


Re: Not Linux

You're kinda making his point stronger. I've not checked in a while (I've weaned myself off Notepad++... just the deprecated-since-2010(?) MySQL GUI tools to go), but somehow Wine runs stuff faster than Windows when running Windows benchmarks that hammer the OS. By "Windows benchmarks" I mean benchmarks compiled as an exe, or in many cases "natural to Windows", that are not like "how fast can we write a file?" or things no OS bothers to optimise because they're so rare.

If anyone cares I'll go on. You have no idea how much work is behind paging working fast.

ANYWAY, what he's saying is that the stacked interfaces there convert between one another. Having however many fewer layers can only be a good thing. It depends how they've built it TBH (anywhere from a Wine-type thing to kernel support for Linux "native" syscalls (there are loads of types) and being able to understand them without a trip back into userspace and out again).

We did Nazi see this coming... Internet will welcome Earth's newest nation with, sigh, a brand new .SS TLD


To be fair...

There are only 26*26 two-letter versions (and thus 26 cubed three-letter ones) - some are bound to clash. SS for South Sudan makes sense.

Although some domains are totally misused (.io, .js, the list goes on, then add themed TLDs....) and are dare I say almost never relevant to their TLD, I can't really see .SS being used by nutjobs much. The .xxx thing being used by porn sites *actually made some sense* as a thing for them to do; I can't see hitler-was-misunderstood.ss being an issue.

It is a little bit amusing though

Are you sure your disc drive has stopped rotating, or are you just ignoring the messages?


Re: I can believe it!

Yeah that'd be the problem - you must come from simpler times.


Re: I can believe it!

Thanks, I'll be taking credit for that.

Florida man stumbles on biggest prime number after working plucky i5 CPU for 12 days straight


Serious gripe

BOINC has become a leaderboard of burnt CPU time. When I last checked (November; it's not a regular thing), the famed posterchild for it, SETI@home, had a broken science page with a big PHP error at the top and no content.

It saddens me that MAYBE some articles out there will link to a text file of it - and maybe some readers will have a scroll and go "it's indeed a big one" - but that'll be it.

Now, I'm a mathematician technically, so I *love* abstract crap, right... I'd love to see "another BOINC" that, rather than tying your computer to a project, instead pooled it with "missions" (which you can opt out of if you *really* don't want whomever getting your cycles!), where the criterion for inclusion is that the project must have some ending condition, even one of the weak form "and if this *doesn't suck and actually works* we then refine the result until we've got a decent map of the behaviour of the thing we want to simulate, for various initial conditions" - OR SOMETHING. At least:

1) It defines failure - so a big feasibility study that fails will still die

2) There are some criteria (if it does work) for completion - even if it is "any finer and approximations at this scale are useless" - which is a very high bar to meet indeed.

I'd also like to add some notion of priority. That is: "this long-term sponge of ARSE@CRACK has had a good chunk for many months; this new project would take 0.5% of daily capacity for 10 days - or we could just do it in like an hour and then give the time back to ARSE".

If I may reach for the stars (I'm debating doing this) - I'd really like to lower the barrier to brute-forcing stuff that'd take more than an hour or so on my computers. I'd love to "bank time" for myself (say I get 0.5 "work units" for every "work unit" I give to the system), as a lot of colleagues and I often have some numeric integral we need to evaluate once, very accurately, for the problem at hand. Or to test something over a large range.

The only problem with that last bit is the reduced trust barrier. I have a compiler fetish (it's weird; if this could make me a sex offender, totally guilty), which helps: a language to express the parallelism via work units and the problem - I've done something similar to exploit SIMD before - would be *something* that didn't require trust to run. It'd also deal with different arches (I am always hoping to have something else I can buy instead of x86-64 - and I'm a little bit afraid of the 12 AVX-512 variations there are, not to mention permutations, and the way they're just bolted on outside what is basically still a Skylake core... we have GPUs too; I love talking about this and there's no reason it can't be iterated).

I digress; I've been seriously thinking about doing it for a while, but I'm not a "build it and they will come" type. I've also worked out that with Intel chips (not got my hands on a Ryzen one yet) post Sandy Bridge (possibly Nehalem - the Core series) you can use "a bit above idle" without using much more power (intel_rapl is the kernel module; find it from there - you can measure power in real time). That is to say, if idling (with a web browser; Firefox always seems to be doing something even when the tabs are not, according to about:performance), for a little while the power increase with work is less than linear, and for small increases, even with a huge sample, none of my statistical tests lost to noise (I'm fit to do these - I didn't do all that measure theory crap for nothing!). So say you idle at 5-10% usage (ignore the fuzz of hyper-threading for now): you can run at a "steady 12-14%" basically for free, power-wise. It gets more weird if you start bringing in AVX rather than sticking with XMM registers entirely, or just SSE up to whatever version - BUT it can be done.
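The measurement side is easy enough to reproduce: the powercap interface exposes a cumulative energy counter in microjoules. A minimal sketch (assuming Linux with the intel_rapl module loaded and read permission on the sysfs file; the `intel-rapl:0` path is the usual package-0 domain, but it may differ per machine):

```python
# Read package energy from the intel_rapl powercap interface (Linux, Intel)
# and turn two readings into average watts.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter, microjoules

def avg_watts(e0_uj, e1_uj, seconds):
    # plain delta; a real tool must also handle the counter wrapping
    # at max_energy_range_uj
    return (e1_uj - e0_uj) / 1e6 / seconds

def measure(seconds=1.0):
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(seconds)
    with open(RAPL) as f:
        e1 = int(f.read())
    return avg_watts(e0, e1, seconds)

if __name__ == "__main__":
    try:
        print(f"package power: {measure():.1f} W")
    except OSError:
        print("no intel_rapl powercap interface here")
```

Sample under idle, then under your "steady 12-14%" load, and the difference is the marginal cost of the extra work.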


Scumbag hackers lift $1m from children's charity


Anyone remember the terrorist on benefits

It's a bit like that for me. This was years ago (like 2008?) and there was this story about some terrorists (presumably worthy of the name - this was before it became the label it is today, but it was well on its way), and the news was that the guy had the "audacity" to claim benefits.

I remember wondering why people loathed this so much - the terrorists who worked were alright with them? The scroungers though...

I actually thought "good for you" - because if I were in his position I'd have managed to do that tiny bit more harm to my target. Albeit not by much (the amount will be a drop in the ocean). He expended somewhat minimal effort to cost his enemy that bit more.

In reality this means fuck all - drop in the ocean again, good faith from the DWP, etc.

Now let's apply that to these guys. They're *thieves*, the bad guys! What if they'd stolen from Facebook, or Google, or Ube.... oh right I see your point.


One year on after US repealed net neutrality, policymakers reflect soberly on the future


Please clarify: "academic-lobbyists"

WTF is an "academic-lobbyist"?

Is that like academics (so faculty staff?) who lobby about in their spare time? Or! Lobbyists who "are academics" in their spare time?

Or is it like "economists for Brexit" (IIRC it was 2 or 3 of them, so technically plural)?

Early to embed and early to rise? Western Digital drops veil on SweRVy RISC-V based designs


(RE AC and "distributed computing")

Calm down; chaining those blocks together logically is not a great idea.

Drives relocate sectors all the time. You're better off connecting 10 drives to one controller than 10 drives to each other, as they'll need to communicate, and if some drive breaks it may partition the remaining devices (trivial case: each drive connects to its neighbours and one of the ends is used to control the array; any drive goes, and something gets cut off).

Don't hype this up into something it's not.

In all seriousness, if you wanted the world to be a better place (which this won't help), hope for better data durability. Actually KNOWING when some data is written, that the writes got there in order, etc. is EXTREMELY difficult, and the more there is between the program and the drive (like NFS, for example - the worst case for this problem) the harder it gets; a RAID array can be bad too, etc. etc.

You get the idea: this is just a different controller. The best case is it cheapens drives a bit. To miss a little accuracy but save a lot of explanation: "Turing complete"-ness means it's trivial to show (modulo infinite tape) that a computer can simulate a TM (Turing machine), thus a computer is at least as powerful as one. So they can *already* make any kind of magical controller they can make with RISC-V (worst case: emulate the RISC-V on whatever they use; it may suck speed-wise but they can do it, and compiling for whatever arch they actually used would reduce the speed issue, if it is an issue at all!)

So there's nothing new here. Arm is the home of invisible custom buses connecting everything to everything.

Now if you'll excuse me, I'm going to go and work out (a bound for) the minimum number of links between n drives such that you can remove any m drives and the rest stay connected.
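(Spoiler for anyone playing along: surviving any m removals means the link graph must be (m+1)-connected, and Harary's construction achieves the minimum of ceil(n(m+1)/2) links. A quick stdlib brute-force check of one small case - n=6 drives surviving any m=2 failures on 9 links:)

```python
# Check that a 9-link layout of 6 drives survives the loss of ANY 2 drives.
# Harary's bound: ceil(n*(m+1)/2) = ceil(6*3/2) = 9 links is the minimum here.
from itertools import combinations

n = 6
# hexagonal ring plus the three long diagonals (the Harary graph H_{3,6})
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 3), (1, 4), (2, 5)]

def connected_after_removing(removed):
    alive = set(range(n)) - set(removed)
    if not alive:
        return True
    # simple stack-based graph search over the surviving drives
    start = next(iter(alive))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w in alive and w not in seen:
                    seen.add(w)
                    frontier.append(w)
    return seen == alive

print(all(connected_after_removing(r) for r in combinations(range(n), 2)))  # True
```

(The ring alone fails: cut two opposite drives and it splits in half, which is exactly the partition problem above.)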


Linux kernel Spectre V2 defense fingered for massively slowing down unlucky apps on Intel Hyper-Thread CPUs


The SMT discussions

SMT is a good thing almost always, and the situations where it isn't ought to be few and far between. Most of us here (except one really weird comment above - WTF m8?) get that it's the really tight loop that doesn't benefit from SMT, when there are more of them than physical cores. Linux's scheduler has been aware of hyper-threading for eons (and I'm sure many others have too); the only difficulty is that it's bloody hard for a program to look around and go "ah, this is Intel - I'll just halve that reported core count".

The difficulty is in software knowing it's dealing with SMT and adjusting itself accordingly; it can go all the way with processor affinities easily enough, and it can spin up as few or as many threads as needed.
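On Linux the kernel will at least tell you the topology if you ask: each logical CPU lists its SMT siblings under sysfs. A sketch of counting physical cores that way (the sysfs paths are Linux-specific, obviously; the list parser is the portable bit):

```python
# Count physical cores by grouping logical CPUs that share a core (Linux sysfs).
import glob

def parse_cpu_list(text):
    # sysfs CPU lists look like "0-3,8,10-11": comma-separated ranges
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return frozenset(cpus)

def physical_cores():
    sibling_sets = set()
    paths = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(paths):
        with open(path) as f:
            sibling_sets.add(parse_cpu_list(f.read()))
    return len(sibling_sets)  # one distinct sibling set per physical core

if __name__ == "__main__":
    print("physical cores:", physical_cores())
```

That gets a program past "halve the reported count" guessing without having to know whose CPU it's on.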

For as long as we can specify a non-default argument to these programs as an environment variable or command-line argument or config file (whatever) - leave it on. Those who should turn it off are those running binaries from others - maybe some interpreted languages - but that's another issue for another time; maybe they should use affinities instead - again, another time.

The modern cores from Intel and AMD (ignoring the Piledriver-grade pounding they gave me, kinda) are very, very super-scalar, in that they're extremely hard to keep busy even close to half the time. They're made so that software heavy in various areas can run fast; no one stream will use *everything*. Someone above mentioned registers: you're looking at 168 integer registers now (and since Sandy Bridge/2012, which might be 154... maybe...). This is another area where there are lots of resources one weird bit of software might use (needing lots of values, huge spilling or something) - but that software won't need all of some other area of the execution engine. I've not proved the claim "forall instruction sequences [ that sequence uses the full resources of at most one arbitrary partition of execution resources ]", but there are only around 150 to 180 ops in flight at any one time *max*, so using all those registers leaves you with like 12 operations to do something else - you get my point, I hope.

Anyway, that's why SMT is good. If you use perf and read the manuals (I've written some device drivers that abuse ioctl to expose model-specific registers - there are all kinds of things these can do, but they're so model-specific....) you can confirm this and see what's going on (ish) to get the most out of it. A lot of my jobs involve squeezing performance out of Sandy Bridge chips, so trust me on this - there's a lot there. It's just so model-specific that I can see why perf et al went "screw that" WRT supporting it.

I actually don't like that it's just 1 extra thread. I think POWER or SPARC - one of them - uses like 8 or 16 threads per core. Not even they fully utilise it most of the time (you can make it SMT512 if you like, but it doesn't matter: if you're not running anything that isn't hammering SIMD floating-point instructions, those units are going to be idle...).

It's a good thing. Dare I say "for as long as the execution units are there to do work, it won't bottleneck" - but this is the problem with hard real-time systems: you're just opening programs that are not coordinating, so this is impossible to say or measure. However, we can all see that this means "don't run that floating-point-heavy stuff with more threads than there are FPUs in total on this system", ish.

Modulo whatever.

I'm one of those die-hard hippies who trusts his computer, though. If I ran Windows (not a dig, but all those things bring their own DLLs, phone home, etc.) I can see why you'd be worried. Or if you sell CPU time - you get my point. I trust the software I run and don't run software I wouldn't trust without some restrictions - and I'd say I pay a price for that. However, how this affects a database (great use of SMT there)....

You see my point. Generally a very good thing. Spectre is such an issue (see my comment here https://forums.theregister.co.uk/forum/1/2018/07/26/netspectre_network_leak/ - you can't just "jitter the clocks") that any CPU fixed against it (i.e. lying about the current time, no more rdtsc, etc.) could still do SMT and be safe. I've long been thinking about this, but as it's not my job (sadly) I don't know if my "mitigated system" would be practical (it involves lying about the time, pretending everything is deterministic and isolated, yet doing it much like today) or if it'd be way too slow. But it can be shown easily that if you can do that safely, you can make the SMT system running on top safe.

It's the real Heart Bleed: Medtronic locks out vulnerable pacemaker programmer kit


Re: Humanity is doomed

"Perhaps it should be secure" referring to the internet.

You know the "evil bit" was a joke right?


Re: Why didn't they do this in the first place?

WTF? "There" lack of security has been making headlines since at least 2010 frequently.


Re: Why didn't they do this in the first place?

Lol have you lived under a rock since .... oohh... 2010?


Re: Humanity is doomed

You know the TV trope of "aliens that have never heard of lying"? This is *surely* linked to stupidity!

All of this stuff simply stems from "consider your options" (let's ignore lying by omission et al). So with this device, consider your options: you have *the option* to explore it and potentially tamper with it.

What you're arguing would lead to "why bother with laws if no one breaks them?" - again consider your options, you *can* (and *may*) pick up a big kitchen knife (I'd be torn between the serrated bread knife, and that big heavy pointy one) and go postal. It's an option you have that is very difficult to take away.

Another option is that of hiring a van and buying loads of bricks and then ramming it into people - you *can* and *may* do this.

I think there's something fundamental in us that makes it so we don't want to. Even for terrorists (apparently there are loads) this technique is rare (and I've been wondering why they don't do it for years); it takes a special kind of person, I think, to look at people walking down a pathway - and turn into them.

HOWEVER, pressing enter at a virtual terminal to a device potentially on the opposite side of the globe, for a person known to us only by the prompt with the make and model number of their pacemaker or something... you'd press enter and nothing would seem to happen; you'd have to be within a mile to (potentially) even hear the ambulance!

I trust people far less when they're not looking at me.

So unless you have a way to stop people from enumerating their options and then doing one that nothing actually inhibits them from doing (ideally still giving us the tools to rent a van and cut cucumber (not with the big knife now)) - I want companies to hire someone with a pessimistic "what *can* they do?" mindset and really look into ways to vet the software you get (which is a huge topic in and of itself that I love).

Anyway I digress...

Add to that the "chances of getting caught" plus Tor, and do you really think you won't find someone who'd do this for a bit of money? Perhaps without knowing exactly what it is (as in: told it's in a lab, or a virtual one).

Hope this helps you realise that the world is not fit for purpose ;)

Russian rocket goes BOOM again – this time with a crew on it


Re: You can't just be like "it's a lovely morning time to...

I'll be honest, I didn't bother checking first. You know the rules of thumb like "don't do your own X" - but programs doing X exist; the rule exists as a sort of check: if you don't know why the rule is there, you're not ready to break it.

I trust myself to do it; I don't know how to convey that without just splurting some words like "numerical methods" and "proper understanding of floating point" (which really go hand in hand), etc.

But what's wrong with another one right? ;)

Thanks for the link though; looking at the screenshots, that's flightsim-style (you have a console and instruments - bottom-right screenshot). This isn't quite that, but I won't go into it here. I look forward to checking it out though!

I must confess I've also started playing around with railguns (firing 2 at the same time opposite sides of the centre of mass so you don't start spinning), "realistic joints" (so much effort went into these) so they snap properly. I'm not too happy with collisions (which is a huge subject) but they exist. I've also left the foundations in for special relativity (not in terms of graphics - but I could with great effort, that's an "instant rendering") but in terms of delay and signal forms - right now none of that's implemented but I can add it without having to rewrite everything.

Sorry for the textwall. My intent is to make sure this is worth my time writing, so there are many things you can turn on or off (or that are currently off but one day - maybe, time permitting - you can turn on!). For example, right now the "rail gun" is pretty much hard-coded with some equations I worked out on paper; one day - maybe - you design it "proper".

I've made some good plotting features, so you can generate plots of any variable against any other (and these can come from simulations). So I've really got some good room to develop stuff and see the relationships in play.

TL;DR: sorry.

Thanks for the link again!


You can't just be like "it's a lovely morning time to...

launch a rocket and put people up in the ISS"

You have to wait for them to be able to meet, essentially - you fire this thing off in a direction, make it tilt a bit, then that spirals into the way of the ISS, and with a bit of manoeuvring they line up.

Remember you're not trying to intersect with the ISS - you want to move in so your direction and the ISS line up.

You can do this anyway, but you really want that big-ass rocket you discard to do MOST of the work, and the remaining propellant (fuck all - a few drops compared to the rocket) to give you a nudge towards it so you do align. This is what corrects the rocket's error in where it went.
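To put rough numbers on the "wait for them to meet" part: for idealised circular, coplanar orbits, the classic Hohmann transfer gives the travel time, and from that the angle the ISS must lead you by at the burn. A back-of-envelope sketch using only textbook two-body formulas (the 200 km parking orbit is an illustrative assumption, not a real ascent profile):

```python
# Phase angle for a Hohmann transfer from a 200 km parking orbit to ~400 km (ISS-ish).
import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean radius, m

r1 = R_EARTH + 200e3   # parking orbit radius
r2 = R_EARTH + 400e3   # target orbit radius

a = (r1 + r2) / 2                             # semi-major axis of the transfer ellipse
t_transfer = math.pi * math.sqrt(a**3 / MU)   # half an orbit of that ellipse
omega_target = math.sqrt(MU / r2**3)          # target's angular rate, rad/s

# the target must lead the burn point by pi minus what it sweeps during the transfer
phase = math.degrees(math.pi - omega_target * t_transfer)
print(f"transfer time ~{t_transfer/60:.1f} min, lead angle ~{phase:.1f} degrees")
```

That lead-angle condition only recurs periodically, which is exactly why you get launch windows rather than "lovely morning, off we go".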

As a slight... brag (I really want to show people who are not students, who are automatically in awe of everything): I've been working on, like, "KSP for adults" (without ever playing KSP), because I was annoyed by the number of students thinking KSP made them experts (I don't hate students; I hate some of them, especially the ones who are given a grain of knowledge and suddenly know everything - but I'll suppress that rant).

I recently tested it with the moon landing (naturally) and Hayabusa 2 - it's going well. That mission mostly worked: I got it to the right place at the right time for a lot of things (bit of a language barrier, but with their mass and propulsion capabilities everything worked!). I'm looking to get more accurate "what they did when" data, but they keep good English updates, so I can infer a lot, like checking that remaining reserves line up, etc.

Love to share it, it's really coming together.

How to (slowly) steal secrets over the network from chip security holes: NetSpectre summoned


RE: Add random jitter - IT WON'T WORK

That won't work. It'll slow it down, but it won't work.

I have so much measure theory in here I shit thick books on various types and call it "statistical physics" so I'm going to dumb this down - internet pedants have fun!

When you take basically any "random variables" (eg uniform, whatever) and add them up, it's very, very difficult NOT to get a normal distribution! You then multiply this by 1/n - which basically isn't an RV (you're considering this for a given n), it's a number - and we're talking about the average.

That's basically why this attack is so slow anyway: telling apart two values that are very close together (compared to their variance) requires a large sample.

Consider flipping a coin: when do you call it biased? 20/20 heads - sure, biased (ALMOST certainly; you could be REALLY lucky). But 16/20? Nah, that's not. Well, what if you wanted to detect a coin with a 50.0001% chance of heads? You'd need A LOT of samples.

Same principle.
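Rough numbers for that coin, using the standard back-of-envelope n ≈ (zσ/δ)² (assumption: a simple two-sided z test at ~95% confidence; internet pedants can refine):

```python
# How many flips to spot a slightly biased coin at ~95% confidence?
import math

def flips_needed(bias_delta, z=1.96):
    sigma = 0.5  # std dev of one flip of a near-fair coin (Bernoulli, p ~ 0.5)
    return math.ceil((z * sigma / bias_delta) ** 2)

print(flips_needed(0.0001))   # ~96 million flips for a 50.01% coin
print(flips_needed(0.000001)) # ~9.6e11 flips for the 50.0001% one
```

The sample size grows with the inverse *square* of the bias, which is why shaving another decimal place off the timing signal only buys a constant factor of attacker patience.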

The fixes I've heard about involve the information being lost in the truncation of a time value. Say you're trying to detect (for the sake of example) a difference of 1ms (because I hate writing 1us) and you have a clock with 100ms accuracy. You need to somehow do your thing on 199ms and hope that you can tell 199ms apart from 200ms. If the clock only truncates, that raises the barrier, but you can still find a boundary: eg a for loop doing nothing, which you time; you find that for n=whatever it takes 100ms by this clock, and for n=whatever+1 it takes 200ms - you're straddling the boundary (this is further fuzzed by real timing differences (any microbenchers in the house?!?)).

(Hopefully you now understand how randomly "jittering" the clock isn't an absolute fix: eg if 50.0001% of the time you get a 200ms reading, and 49.9999% of the time you get a 300ms reading, with enough samples this can be determined to be arbitrarily likely. That is, you can pick a number, say 0.001%, and get enough samples to be sure, to that probability, that either you're in the super-lucky 0.001% OR it's not random and there's a distinction here.)

You get the idea.
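A toy simulation of the same principle: bury a 1-unit timing difference under uniform jitter of up to 100 units, and averaging digs it straight back out (all numbers invented for illustration):

```python
# Toy NetSpectre-style measurement: the 'secret' shifts timing by 1 unit,
# the defender adds uniform jitter of up to 100 units. Averaging still wins.
import random

random.seed(42)

def measure(secret_bit, n):
    # each observation = tiny data-dependent delay + big random jitter
    return sum(secret_bit * 1.0 + random.uniform(0, 100) for _ in range(n)) / n

n = 200_000
diff = measure(1, n) - measure(0, n)
print(f"mean difference over {n} samples: {diff:.2f}")  # hovers around the true 1.0
```

Crank the jitter up and the attacker just needs more samples (roughly with the square of the jitter-to-signal ratio, per the coin arithmetic above): slower, not stopped.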

It's an extremely tricky issue.

EDIT: I use "almost certainly" in the English sense - inb4 - also, you wanna round your clock instead? Find the odd multiples of 50ms to split instead!


Biting the hand that feeds IT © 1998–2019