* Posts by psyq

69 posts • joined 21 Jul 2009


Resident evil: Inside a UEFI rootkit used to spy on govts, made by you-know-who (hi, Russia)


Re: How do I

Intel started putting UEFI firmware on their Core 2 mobos circa 2006/2007, if I remember correctly (I had one of those small mini-ITX mobos, Fry Creek or something it was called). They had BIOS emulation (CSM) enabled by default, but underneath it was all UEFI.

I think by Nehalem it was no longer possible to initialize the CPU without UEFI firmware. Not 100% sure about that, but with Sandy Bridge it was definitely UEFI all the way down.


Re: Hardware button?

Yes, but then how would you expect somebody on the other side of the world to be able to remotely patch the firmware on your worldwide fleet of computers?

The PHBs would like to know. So... no.

Intel boss admits chips in short supply, lobs cash into the quagmire


Re: The real reason they are running short on capacity

Cascade Lake Xeon (the Purley refresh) is being produced right now. Originally we'd be on 10nm by now; since that did not happen, Intel is left with a huge mess: they need to produce huge quantities of the new Xeon SP for hyperscalers, clouds, etc., all on the same 14nm process as everything else. Tough.

Mind you, these Xeons are tough to make, since they are huge monolithic 28-core monster dies, so you get few chips per wafer plus lots of opportunities for defects (of course, Intel bins the failed chips as cheaper 18/22/24-core parts, but still).
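To get a feel for why huge monolithic dies hurt, here is a back-of-the-envelope sketch: gross dies per 300mm wafer plus a simple Poisson yield model. The die area and defect density below are illustrative assumptions on my part, not Intel's actual figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross die count: wafer area over die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero random defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# Assumed numbers (illustrative only):
die_mm2 = 694.0   # rough guess at a 28-core XCC die area
d0 = 0.1          # assumed defect density, defects per cm^2

gross = gross_dies_per_wafer(die_mm2)
good = gross * poisson_yield(die_mm2, d0)
print(f"gross dies/wafer: {gross}, fully-good dies: {good:.0f}")
```

With a die that big, roughly half the candidates carry at least one defect under these assumptions, which is exactly why binning partially-working dies as lower core counts matters so much.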

ARM chip OG Steve Furber: Turing missed the mark on human intelligence


Equivalent to the brain of...?

"Put four chips on a board and you get 72 ARM cores, which equates to the brain of a pond snail. Put 48 chips on it and you get 864 cores, equivalent to the brain of a small insect."

I am sorry, but no. Until we have a satisfactory model of neural computation, stating that some number of ARM (or any other) cores is somehow equivalent to the brain of >any< living being is preposterous.

Needless to say, at this moment we do not have such a model, so the compute power actually required is still unknown. Should we model networks, spikes, membrane dynamics, ionic channels, proteins, molecules...? What is the appropriate level of abstraction, if any? Nobody has found the answer yet, so, no, a bunch of CPU cores is not equivalent to anything biological.

Brazilian whacks Intel over 'exploding' Atom smartphone chips


Re: Ah, so that's why they killed Atom.

But those CPUs you mention embedded on ITX boards are not the same CPUs as the ones produced for mobile phones. There is a huge difference in TDP: the chips that go into phones have a 0.5-2W TDP, while the ones embedded on ITX boards are typically 5-10W.

So you cannot really compare them; it's not an apples-to-apples comparison.

Ghost of DEC Alpha is why Windows is rubbish at file compression


Actually, most of the data stored on the disc is probably already compressed (multimedia, etc.), and the OS would simply waste CPU cycles on another layer of pointless, or even counterproductive, compression (more data can be needed to store what the OS sees as a random sequence of bytes).

File-system-level compression may have had its day when a) the prices of storage were much higher and b) most files stored on the medium were compressible. Those were the days of DoubleSpace / Stacker and similar products from the 1990s.

Now, it could still be useful for compressing OS / application executable binaries, but the gains from that are simply diminishing (say you get 30% compression: how much does that help, set against the increased CPU load / latency from on-the-fly compression/decompression?).

Having file-system compression as a feature can definitely be useful, but having it ON by default? Definitely not. For most typical usage patterns, where most of the I/O is actual application data rather than binaries, it would just waste CPU cycles trying to compress data that is already compressed.

Reminder: iPhones commit suicide if you repair them on the cheap


As far as I can recall, this error is nothing new; it just looks like some journo found out about it and decided to post about it.

If you search for "Error 53" you will see it has been reported before.

I even remember, when the 5S came out (the first iPhone with the fingerprint sensor), that it was almost immediately known that it would not be possible to replace the sensor outside of Apple's service network.

Intel admits Skylakes can ... ... ... freeze in the middle of work


We finally reached the stage...

We have had this for a while with large consumer software: wait for Service Pack 2 before buying.

With Skylake, it looks like we have reached that phase with CPUs as well, and the advice will be: "wait for the second / third stepping before buying, unless you want to be a beta tester".

Anybody who has worked at the system/firmware level on modern CPUs, or has seen/worked with the BIOS Writer's Guide, knows that the software needed just to initialize the CPU up to OS boot time has become staggeringly complex, probably more so than entire operating systems of the late 90s.

This is in no way a defense of Intel (a CPU crapping out while doing hard math is simply inexcusable), but it does not surprise me one bit, considering what modern CPUs have become. There are so many things (PCIe controllers, GPUs, complex power management, multiple levels of cache and ring buses shared by different on-die peripherals, memory controllers, etc.) that can go wrong when various "weird" conditions are created.

Of course, Intel should have found this on their own during the engineering phase, but it just shows why server CPUs need at least a year of quality assurance in order to be "releasable". Maybe that is what the minimum quality bar should be, and what we are seeing in the consumer space is a degradation of quality even in the CPU itself :(

I am not optimistic in this regard, and it looks to me like we will be seeing worse. The TSX fiasco was not just a fluke; these things are becoming more complex, and the market wants them sooner.

In this particular case, the HPC and financial industries were just lucky the bug showed up before the Skylake EP/EX platform launched. I guess they would not be amused if their brand-new Xeon chips crapped out while doing heavy math.



There are ways to make this extremely hard and unlikely.

Whether Intel adheres to such practices, I cannot say for sure. But considering that their multi-billion-dollar business literally depends on this, and the company does not lack excellent security and engineering talent, I would say the process is probably as secure as it can be.

Microcode is a lousy place for hiding rogue code anyway, since the CPU reverts to the "factory" microcode after every reboot. You would have to exploit the system firmware or the OS and re-apply the rogue microcode on every boot, and if you have already got that far, you probably already have what you need; there is no reason to invest millions into trying to "crack" a CPU and redo the job with every process shrink / generation change.

Also, microcode is almost certainly tied to the particular microarchitecture, which means it changes at least every two years.

It does not make much sense from a cost/benefit point of view.

I would be more worried about system firmware. Many modern PC / notebook motherboards use system firmware that is at least partly based on the TianoCore UEFI code. If somebody smart found a good hole there, it is likely that it could be exploited on multiple generations of system boards, since a big part of that code is platform-independent and probably not touched much by firmware vendors.

Of course, since at least the Haswell platform (and, I think, at least Sandy Bridge EP), there are ways to prevent this by forcing hardware validation of the firmware images (making flashing of patched UEFI images impossible), but such things were rarely enforced, and probably not even wanted in some parts of the home market, where it was/is desirable to be able to "patch" the UEFI image for software-piracy reasons.

But, at least in theory, it would be possible to make this process very >very< hard by insisting on hardware validation which cannot be overridden in software at all. This is still less than fully secure, since with many OEMs there is a higher chance that a private key leaks, which is why I would prefer a system with a jumper which prevents >any< firmware update, but this seems too much to ask today :(



I did not claim these systems are infallible, but at least in Intel's and AMD's case (as far as the x86 world goes), there are no known faults in the microcode update implementations. Absence of evidence is not evidence of absence, but so far uCode has proven safe, and not for lack of trying.

I cannot say for sure, but I would place my bet that Intel >really< tested the microcode-management part of the silicon. Maybe not primarily for the benefit of customer security, but because of their own business.

Apart from the microcode used for bug fixing and, sometimes, for implementing future instructions (such as the AVX2 GATHER in Haswell), there is also a part which a typical customer almost never sees: stuff used for debugging, feature enabling/disabling, and operating-point control. With those, it is possible to have more control over the CPU than the "normal" MSRs allow, and to enable facilities which are "not there" as per the model information.

And >that< is protected with the strongest cryptography, for the manufacturer's own sake. Basically, the CPU is controlling itself in this case: without passing the signature check it will refuse to do anything with the blob, and you cannot "force" it from the outside, other than by:

a) Breaking whatever cryptography Intel/AMD are using, which I am sure is the strongest available.


b) Physically manipulating the CPU by opening the package and modifying the silicon. Even leaving aside that multi-million-dollar equipment is needed and that such a CPU would last only a few hours, this method is hardly undetectable.


c) Finding and exploiting a bug in the uCode validation procedure on the CPU. I doubt this is realistic, since such a procedure can be (and probably is) implemented with simple, mathematically provable code. Maybe this is far more likely than a), but I really doubt such a bug exists.

I do not know for sure, but I would be willing to bet that authentication and checking of the microcode, and the operating-point protection, is the most audited and checked part of the CPU design :-)

If I were to think of ways to "exploit" an Intel or AMD x86 CPU, the uCode update would be very low on my list.



BIOSes (and UEFI firmware) have been able to update Intel CPU microcode for >years<, ever since the Pentium Pro.

The threat to the security of the microcode update process is quite low, since the microcode update is checked by the CPU itself prior to being applied and will be rejected if the signature check fails. Unless you get access to Intel's private key, you cannot do anything useful with the microcode blob, except send it to the CPU and watch the process fail unless the blob is unmodified and newer than the microcode currently "running" on the CPU.

And anyway, after a cold boot the CPU just reverts to the original microcode stored in it during production of that specific stepping. The BIOS/UEFI then updates the uCode to its latest version, followed by the OS, which typically has the newest one.
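The acceptance rule described above (valid vendor signature, strictly newer revision, volatile until the next cold boot) can be sketched as a toy model. The real mechanism uses the vendor's asymmetric signature scheme verified inside the CPU; the HMAC below is just an illustrative stand-in, and every name here is made up:

```python
import hmac
import hashlib

VENDOR_KEY = b"stand-in for the vendor's signing key"  # illustrative only

def sign(payload: bytes) -> bytes:
    """Toy stand-in for the vendor's signature over a microcode blob."""
    return hmac.new(VENDOR_KEY, payload, hashlib.sha256).digest()

class ToyCPU:
    """Toy model of the acceptance rule: valid signature AND newer revision."""
    def __init__(self, factory_revision: int = 10):
        self.revision = factory_revision   # burned in at production

    def load_update(self, revision: int, blob: bytes, signature: bytes) -> bool:
        payload = revision.to_bytes(4, "little") + blob
        if not hmac.compare_digest(sign(payload), signature):
            return False                   # signature check failed: reject
        if revision <= self.revision:
            return False                   # not newer than what is running
        self.revision = revision
        return True

    def cold_boot(self, factory_revision: int = 10):
        self.revision = factory_revision   # patch is volatile: revert to factory

cpu = ToyCPU()
patch = b"fix for erratum XYZ"
sig = sign((11).to_bytes(4, "little") + patch)
print(cpu.load_update(11, patch, sig))     # accepted: signed and newer
print(cpu.load_update(11, patch, sig))     # rejected: same revision
print(cpu.load_update(12, patch, sig))     # rejected: signature mismatch
```

Note how a tampered blob or a replayed older revision is useless without the signing key, which is the point made above about needing Intel's private key.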

None of this is to say that Intel has not royally f*cked up with this bug, but there is no justification for calling the microcode update mechanism a security liability. It is not one.

D-Wave: 'Whether or not it's quantum, it's faster'


Re: D-Wave processors are like GPUs

D-Wave's current product cannot, in principle, implement Shor's algorithm due to the limitations of their implementation (or "implementation").

Not that a working D-Wave device would not be useful if it actually worked (and by "work" I mean "outclass classical algorithms"), but breaking public-key cryptography would not be one of the problems this device could solve.

In any case, it looks like these guys are not even there. For years they seem to have been moving the goalposts on benchmark metrics, and to this day they still struggle to prove the device is even performing quantum computation. It looks to me like they have given up on comparing its speed with state-of-the-art classical computers; I remember that a year or so ago they were using a single-threaded classical implementation as the reference (I can only guess whether that was the only "crippling" done), and now even that is off the table. Not that this would be a problem in itself: even if the first quantum implementation were slower than the state-of-the-art classical one, it would still be huge and more than worth pursuing, as early versions of new technologies are often less efficient than the current state of the art. No, the problem is that they seem to be constantly moving the goalposts.

I would very much like to be wrong here, but to a complete outsider like myself, their behavior looks awfully like an attempt to sell hype. I apologize to the honest, hard-working scientists and engineers who have probably invested millions of man-hours and sleepless nights working for this company, but something just smells fishy about the company's outward behavior.

This is to be expected, I guess: as with every big "new thing" in IT, there is a lot of hype and PR spin. Quantum computing is not exactly "new", at least as a concept, but an actual working implementation which can solve real-world problems faster than classical computing would be absolutely huge, probably the biggest technological jump since the first computer.

So, yes, the whole thing is big. But I am not sure if these guys will be the pioneers.

Again, I would like to be wrong, but everything so far points more to attempts to hype their way out of investment losses than to actual progress :(

Relax, it's just Ubuntu 15.04. AARGH! IT'S FULL OF SYSTEMD!!!


Re: systemd? Do not want.

Actually, no, MMX came with a Pentium refresh (the P55C revision, to be precise).


The first Pentium II did not bring anything new at that level; it was the Pentium III that brought the first version of the SSE instruction set.

OpenSSL preps fix for mystery high severity hole


Re: "has to be written in C"


I can't help feeling a little less concern for the last fractional percent of performance could have saved millions of collective hours of misery if the same principles had been applied to Windows NT.


From my experience, software written in "higher-level" languages suffers from the same crappiness and malfunctions just the same. It might not be a buffer overflow, but human idiocy will always find a way around.

MELTDOWN: Samsung, Sony not-so-smart TVs go titsup for TWO days


Re: Storm in a teacup!

What exactly is tin foil hat about not wanting my TV to phone home?

If this is what you fancy, by all means go on. You can also ask for your next house to be made of glass. Some of us still do not wish for that kind of openness, and there is nothing wrong with that.

SIM hack scandal biz Gemalto: Everything's fine ... Security industry: No, it's really not


I very much doubt it has anything to do with performance; even at the time 4G was standardized, phones had more compute performance than PCs from the 90s, in a miniature, low-power form.

The elephant in the room is the incompatibility of a worldwide mobile standard, set by an intergovernmental entity, with the desire of governments to be able to intercept their (and other) citizens' communications.

In an ideal world, a modern telephony standard would maintain forward secrecy, and the voice data would never be transmitted unencrypted, with the keys tied only to the handsets themselves and controllable only by the users of said handsets. That way, the data passing through the switching office would be perfectly useless as far as its contents go. It is not realistic to expect an international public telecommunication standard to insist on secrecy beyond that, such as mechanisms preventing the location of the originator and destination of a call.

But even "just" secrecy of the contents, never mind the so-called "meta"data, is simply against the laws of most countries nowadays, which require an ability to do covert interception (after a court order, or with less oversight, depending on the country).

So, no, there will be no forward secrecy in a public telephony standard.

Lenovo shipped lappies with man-in-the-middle ad/mal/bloatware


Re: Secure boot?

Looks like UEFI secure boot is the new bogeyman for some people.

The purpose of secure boot is to establish a chain of trust from power-on; the point is to help prevent modification of the boot files >in deployment<. However, if you own or have access to a trusted certificate, you can make your own bootloader which does whatever you want. System OEMs can put their certificates in the UEFI firmware and validate whatever they want.

Also, secure boot does not prevent the OS from launching anything after boot that is trusted (or untrusted but allowed by the system security policy). Once the OS has booted, it is entirely up to that OS's configuration / security policy what to launch or not. If you, as root/admin or as an OEM, install malware which does MITM, UEFI secure boot will not stop you (it is not even designed to do that).

Now, if you have only trusted certificates installed (in the UEFI firmware, validating OS files, and in the OS certificate store, validating executables run by the OS), then you have a system with one more hurdle for a potential adversary to crack.
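The chain-of-trust idea can be sketched in a few lines. Real UEFI secure boot verifies signatures against certificates in the firmware's trust stores; the bare hash comparison below is a deliberately simplified stand-in, with made-up image names:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical stage images for illustration.
bootloader = b"bootloader image v1"
kernel = b"kernel image v1"

# The firmware ships with trusted measurements of the later stages
# (a stand-in for signature checks against enrolled certificates).
trusted = {
    "bootloader": digest(bootloader),
    "kernel": digest(kernel),
}

def boot(bootloader_img: bytes, kernel_img: bytes) -> bool:
    """Chain of trust: each stage is verified before control is handed to it."""
    if digest(bootloader_img) != trusted["bootloader"]:
        return False                  # tampered bootloader: halt
    if digest(kernel_img) != trusted["kernel"]:
        return False                  # tampered kernel: halt
    return True                       # chain intact, OS boots

print(boot(bootloader, kernel))       # untampered chain boots
print(boot(b"evil bootloader", kernel))  # tampered stage is refused
```

Note what the sketch does and does not cover: it stops a modified boot file from running, but says nothing about what the OS launches afterwards, which is exactly the distinction made above.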

Beyond the genome: YOU'VE BEEN DECODED, again


Great article, questionable title though...

While I applaud the author for a very nice explanation for the layman, I think the term "(de)coding" is overused today, applied where it does not really belong.

This is a very similar case to, for example, scientists using the term neural "coding": saying that something is (de)coded implies that it was "coded" in the first place. Of course, decoding a compressed audio or video signal results in (almost, sometimes) the original signal, but that is precisely because the signal was coded to begin with; we know, because we did the coding.

Gene information, on the other hand... not really. Genes (or, better, the clumps of molecules we call "genes") are an inherent part of living beings; these molecules do not "code" anything, any more than, say, the crankshaft "codes" anything in an internal combustion engine. These things are parts of the process, not just "code".

While it can sometimes be very useful to compare or abstract living processes using concepts from information theory or computing, this can be dangerously misleading if taken too far. Biological processes are not computation. While these processes can, to some extent, be compared to or modelled with concepts from information theory, they are much more than that.

Please do not get me wrong: I applaud the science of working out the processes responsible for keeping matter alive, and I am quite sure that a better understanding of the molecular machinery will lead to better medicine and quality of life for humans and animals. But this "decoding", and, let's not forget, the "-omics" fashion (for some reason it has become very fashionable to stick "-omics" onto things lately, probably something to do with better grants), can give a false sense that we understand more than we actually do.

It reminds me awfully of the claims from the late 50s that we would soon "crack" the problem of intelligence. Sixty years later, we are still discovering new dimensions of the problem. I fear the same will apply to the molecular processes underlying the "bootstrapping" (damn it, I did it too) of a living organism.

/end rant :)

Boffins attempt to prove the universe is just a hologram


Re: Testing for a simulation

Here is one idea:


And a general audience version: http://www.phys.washington.edu/users/savage/Simulation/Universe/

TL;DR: if we are living in a "beta" simulation, there are possible ways to find out, and the paper proposes one: measure ultra-high-energy cosmic rays and check the direction of travel of the highest-energy particles (near the so-called GZK cut-off). The idea is that a hypothetical simulation might reveal its underlying symmetry if the highest-energy particles favor certain directions.

I am not a physicist, so I have no clue whether this could work, or whether any eventually detected phenomena could be explained by something else (most probably, IMO).

IBM boffins stuff 16 million-neuron chips into binary 'frog' brain


"Frog" brain... or "any" brain...

If it were able to catch a fly for its dinner, Dr. Modha would most likely be earning himself a Nobel prize.

Unfortunately, Dr. Modha is known for sensationalistic announcements (several years ago it was a "cat" brain, which sadly did not do much either) and little real substance.

Putting a bunch of simplified neuron models together is nothing new. It has been done dozens of times before:

- In 2008, Edelman and Izhikevich made a large-scale model of the human brain with 100 billion (yes, billion) simulated neurons (http://www.pnas.org/content/105/9/3593.full)

- Since then, there have been numerous implementations of large-scale models, ranging from a million to hundreds of millions of artificial neurons

- Computational neuroscience is my hobby, and I managed to put together a simulation with 16.7 million artificial neurons and ~4 billion synapses on a beefed-up home PC (http://www.digicortex.net/). OK, it was not really a home PC, but it will be in a few years

- And, of course, there is the Blue Brain Project, which evolved into the Human Brain Project. Blue Brain had a model of a single rat cortical column, with ~50,000 neurons, but modelled to a much higher degree of accuracy (each neuron was modelled as a complex structure with a few thousand independent compartments and hundreds of ion channels per compartment).
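For a sense of what a "simplified model of a neuron" means here, this is a minimal sketch of Izhikevich's point-neuron model (the kind of single-compartment model used in some of the large-scale simulations above), using his published regular-spiking parameters and simple Euler integration at a 1 ms step:

```python
def izhikevich(I=10.0, steps=1000, dt=1.0):
    """Izhikevich (2003) point-neuron model with regular-spiking parameters.

    v is the membrane potential (mV), u a slow recovery variable,
    I a constant input current. Returns the number of spikes fired.
    """
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameter set
    v, u = -65.0, b * -65.0              # start at the resting potential
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike threshold reached: reset
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich(I=10.0))   # constant input drives repetitive firing
print(izhikevich(I=0.0))    # no input: the neuron stays silent
```

Large-scale "brain" simulations are, at bottom, millions of copies of equations like these coupled by synapses, which is exactly why counting cores tells you nothing about equivalence to a real brain.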


All of these simulations have one thing in common. While they model biological neurons with varying degrees of complexity (from a simple "point" process to complex geometries with thousands of branches), and all show >some< degree of network behavior similar to living brains (from simple "brain rhythms", which emerge and are anti-correlated when measured in different brain regions, to more complex phenomena such as the acquisition of receptive fields, where e.g. neurons fed with a visual signal become progressively "tuned" to respond to oriented lines), NONE OF THEM is yet able to produce large-scale intelligent behavior.

To put it bluntly, Modha's "cat" or "frog" are just lumps of sets of differential equations. These lumps are capable of producing interesting emergent behavior, such as synchronization, large-scale rhythms, and some learning through neural plasticity, resulting in simple neuro-plastic phenomena.

But they are NOWHERE near anything resembling "intelligence" - not even that of a flea. Not even that of a flatworm.

I do sincerely hope we will learn how to make intelligent machines, but we have much more to learn first. At the moment, we simply do not know what level of modelling detail is needed to replicate the intelligent behavior of even a simple organism. We do not know yet.

I do applaud Modha's work, as well as the work of every computational neuroscientist, AI computer scientist, AI software engineer, and every developer playing with AI as a hobby. We need all of them to advance our knowledge of intelligent life.

But, for some reason, I do not think PR like this is very helpful. AI, as a field, has suffered several setbacks in its history thanks to too much hype. There is even a term, "AI winter", which came about precisely as a result of one of those hype cycles, very early in the history of AI.

I am also afraid that the Human Brain Project, for all it is worth, might lead us to the same (temporary) dead end. I do hope the HBP will achieve its goals, but the announcements Dr. Markram has made in recent years, especially (I paraphrase) "we can create a human brain in 10 years", will come back to haunt us in 10 years if the HBP has not reached its goals. The EU agreed to invest one billion euros in this; I hope we picked the right time, but I am slightly pessimistic. Otherwise we will be in for another AI winter :(

NASA tests crazytech flying saucer thruster, could reach Mars in days


Err, actually it works the other way around: people making extraordinary claims need to come up with extraordinary proof.

An inventor claims he has invented a brilliant new method of propulsion which seems to violate some laws of physics. No biggie: if it really works, I am quite sure the inventor will have no problem selling / licensing / giving away / whatever implementations of his invention.

If people had to recreate every single silly apparatus just to state that it does not work, civilization would be busy recreating garbage.

Mind you, I am not saying this particular thing is garbage; maybe it is a paradigm shift in space travel. But the burden of proof is on the inventor and the people claiming the invention works.

NASA's experiment did not prove this thing works. The fact that they got something out of a modified setup deliberately designed NOT to work casts doubt on the validity of the experiment. It also does not help that they did not perform the experiment in a vacuum.

Nevertheless, if this invention does indeed work, it will have no trouble whatsoever being confirmed experimentally.


Re: Paradigms

Sorry, but that is a load of bull.

Relativity did not have any problem to get accepted. Quantum mechanics, too.

Because these things were proven conclusively and repeatedly. Sure, there were probably people who did not "believe" until their deaths, but most of the academic world quickly caught up.

If this thing "works", then there will be absolutely no problem replicating the setup and confirming that it really works. Perhaps somebody will also eliminate the possible causes of concern, such as the fact that the NASA chaps did not test in a vacuum. If the proposed invention is really meaningful, it will have no problem with replication and confirmation in a rigorous setup.


Unfortunately, no, they did not prove it works.

Even the modified setup "worked", although it was not supposed to.

This suggests that the more likely cause is an error in the experimental setup / measurement.

It would be really great if this thing worked, but it is going to take a bit more to prove it.

HIDDEN packet sniffer spy tech in MILLIONS of iPhones, iPads – expert


Re: What did you expect?

Nobody is stopping you from modifying software you purchased, but nobody is obliged to provide you with everything needed to do it in the most convenient way. With binary code you'll have to do it in assembly, but nobody stops you in principle.

Did your vacuum-cleaner company give you the production tooling and source files used to build the vacuum? No? Did your car vendor hand over the source code for the ECU? Did they give you the VHDL code for the ICs? Assembly instructions? No? Bas*ards!

As for banning, I'd start with banning stupidity. But, for some reason, that would not work.

Chrome browser has been DRAINING PC batteries for YEARS


Re: This is a Windows API problem

It would not work. There are too many applications written in a crappy way, with pieces of code that work properly only with a decreased timer quantum. Their time accounting would be f*cked, and the results would range from audio/video dropouts to total failure (in the case of automation / embedded control software).

For example, in all the cases where code expects a timer to be accurate to, say, 1ms or 5ms. Too much multimedia- or automation-related code would break.

It is sad, but true. Microsoft should never have allowed this Win 3.x / 9x crap into the NT Win32 API, but I suppose they were under pressure to make crappily written third-party multimedia stuff work on the NT flavors of Windows, otherwise they might have had problems migrating customers to NT.

Of course, nowadays (since Vista) there are much better APIs dedicated to multimedia / "pro" audio, but the problem here is the legacy.

At the least, Microsoft could have enforced deprecation of the API for all software linked against NT 6.x, so that this terrible behavior could be avoided in new software. But that, too, is probably too much to ask, due to "commercial realities". The consumer's PC will suck at battery life; nothing new here :(


Re: This is a Windows API problem

It is the timeBeginPeriod API:


On most systems today you can get down to 1ms, but this comes at a high cost in interrupt overhead.


Re: This is a Windows API problem

I meant wake up the CPU 1000 times a second.


This is a Windows API problem

Windows, ever since the pre-NT versions (Win95 & Co.), has allowed usermode applications without admin rights to change the system timer quantum ("tick" duration).

By default, the quantum in NT kernels is either 10ms or 15ms, depending on the configuration (nowadays it tends mostly to be 15ms). However, >any< user-mode process can simply request to get this down to 1ms using an ancient "multimedia" API.

Needless to say, in the old days of slow PCs this was used by almost all multimedia applications, since it is much easier to force the system to do time accounting on a 1ms scale than to do proper programming.

For example, some idiot thinks he needs 1ms precision for his timers; voila, he just forces the >entire OS< to wake the damn CPU 1000 times a second and do all the interrupt processing just to service his ridiculous timer, because the developer in question has no grasp of efficient programming. In most cases it is perfectly possible to implement whatever algorithm with a 10/15ms quantum, but it requires decent experience in multithreaded programming. This, of course, is lacking in many cases.
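The proper alternative to fine-grained timer polling is to block on an event or queue and let the scheduler wake the thread exactly when work arrives, which needs no periodic tick at all. A minimal portable sketch of the pattern (Python here as a stand-in for the equivalent Win32 event/wait approach; the producer/consumer names are made up):

```python
import threading
import queue

work = queue.Queue()
results = []

def consumer():
    # No 1ms polling loop: get() blocks until an item arrives,
    # so the thread sleeps and the scheduler wakes it on demand.
    while True:
        item = work.get()
        if item is None:
            break                  # sentinel value: shut down cleanly
        results.append(item * 2)   # placeholder for real processing

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    work.put(i)                    # producer hands work over
work.put(None)                     # signal the consumer to stop
t.join()
print(results)                     # [0, 2, 4, 6, 8]
```

The consumer wakes exactly six times (five items plus the sentinel) instead of a thousand times a second, which is precisely the battery-life difference being described.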

Only a very small subset of applications/algorithms needs 1ms clock-tick precision. For those, the system >should< ask for admin rights, as forcing the entire OS to do 10-15x more work has terrible consequences for laptop/mobile battery life.

Microsoft's problem is typical: they cannot change the old policy, as it would break who-knows-how-much "legacy" software.

Apple: We'll tailor Swift to be a fast new programming language


Yet another...

Every time a company in the Valley becomes big enough (sometimes not even that), it has to give it a try and make its own programming language.

Apple has already done this once; it looks like this is their second try.

The world is full of "C replacements"; they come and go... but for some reason C is still alive and kicking, and something tells me it is going to be alive long after the latest iteration of the Valley's "C replacement" is dead and forgotten.

TrueCrypt turmoil latest: Bruce Schneier reveals what he'll use instead


I am sorry, but this is simply not true (the idea that open-source software >cannot< have backdoors because someone, somewhere might spot them).

Very good backdoors are made to look like plausible bugs (and all general-purpose software is full of those): something like a missing parameter validation, or a specific sequence of operations that triggers the desired behavior on the most popular architectures/compilers, allowing an adversary to read the contents of a buffer, etc.

It takes an awful lot of time to spot issues in complex code. It took Debian more than a year and a half to figure out that their pseudorandom generator was fatally flawed due to a stupid attempt at optimization, and >that< was not even hidden; it was there in plain sight. Not to mention that crypto code >should not< be "fixed" by general-purpose developers (which is actually what caused the Debian PRNG problem in the first place), so the pool of experts who could review the code drastically shrinks. You then have to hope that some of these experts will invest their time in reviewing a third-party component. This costs a hell of a lot of time and, unless somebody has a personal interest, I very much doubt you would assemble a team of world-class crypto experts to review your GitHub-hosted project without paying for it.
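For reference, the practical effect of the Debian OpenSSL bug was that little more than the process ID fed the entropy pool, leaving roughly a 15-bit seed space. The key derivation below is a toy stand-in (not the real OpenSSL code), but the enumeration attack it demonstrates is the real consequence:

```python
import random

PID_MAX = 32768  # classic Linux default pid range

def toy_keygen(pid: int) -> int:
    """Toy stand-in for key generation whose only entropy is the PID."""
    return random.Random(pid).getrandbits(128)

# A "victim" generates a 128-bit key from some PID unknown to the attacker...
victim_key = toy_keygen(4242)

# ...but since only PID_MAX seeds exist, the attacker just enumerates
# every possible key. 128-bit keys, ~15 bits of actual security.
all_possible_keys = {toy_keygen(pid) for pid in range(PID_MAX)}
print(len(all_possible_keys) <= PID_MAX, victim_key in all_possible_keys)
```

A one-line change buried in PRNG seeding, looking exactly like an innocent cleanup, collapsed the whole key space, which is the kind of "plausible bug" backdoor described above.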

Then, complex code is extremely hard to completely review. This is why, in aerospace and vehicle telematics, critical software is written from scratch so that it can be proven to work, by following very strict guidelines on how the software should be written and tested (and, guess what, even then bugs occur). General-purpose software with millions of lines of code? Good luck. The best you can do is schedule expert code reviews and, in addition, go through the code with some fuzzing kit and spot pointer overruns etc., but even after all that, more sinister vulnerabilities might still slip through.

So, sorry, no - being open source does not guarantee a lack of backdoors. Because in this day and age, a smart adversary is not going to implement a backdoor in plain sight. Instead, it will be an obscure vulnerability that can easily be attributed to simple programmer error.

Faith that open source code is backdoor-free because it is open is pretty much like the idea that an infinite number of monkeys with an infinite number of typewriters will write all of Shakespeare's works. Please do not get me wrong, I am not comparing developers to monkeys, but the principle is the same: just because there is some chance of something happening does not mean it will happen. No, this is not guaranteed.


Love it or not, there is no objective reason why you would trust Microsoft less than some bunch of anonymous developers.

Microsoft has a vested interest in selling its product worldwide, and a backdoor discovered in its crypto would severely impair its ability to sell Windows to any non-US government entity, and probably to big industry players, too.

I am not saying that BitLocker has no backdoors - but there is no objective reason to trust BitLocker less than TrueCrypt.

Sad thing is, when it comes to crypto there is simply no way to have 100% trust >unless< you designed your own hardware, the assembler for building your own OS and its system libraries and, finally, the crypto itself.

Since nobody does that, there is always some degree of implicit trust and, frankly, I see no special reason why one would trust some anonymous developers more than a corporation. The same government pressure that can be applied to a corporation can be applied to an individual, and we do not even know whether the TrueCrypt developers are in the USA (or within the US government's reach) or not. Actually, it is easier for a government to apply pressure to an individual, who has far fewer resources to fight back compared to a cash-rich corporation that can afford a million-dollar-a-day legal team if need be.

The fact that TrueCrypt is open source means nothing as far as trust is concerned. Debian had a gaping hole in its pseudorandom number generator for everybody to see for 1.5 years. Let's not even start about OpenSSL and its vulnerabilities.

There is, simply, no way to guarantee that somebody else's code is free of backdoors. You can only have >some< level of trust between 0% and less than 100%.


Re: Whoa there

Actually, BitLocker has >not< required a TPM since Windows 7. Since Windows 7 it allows a passphrase in pretty much the same way as TrueCrypt does. I use it, since TrueCrypt does not (and, probably, never will after the announcement) support UEFI partitions.

Also, BitLocker does not, by default, leave a "backdoor" for domain admins. If this is configured, then it is done so by a corporate group policy, but it is not ON by default.

BitLocker does not allow plausible deniability on the other hand, and there people will need to find some other option now that TrueCrypt development seems to be ending.

The problem of trust is there for both TrueCrypt and its closed-source alternatives such as BitLocker. There are ways to insert vulnerabilities that would look like ordinary bugs and be very hard to catch even when somebody is really looking at the source code (see how long it took people to figure out that Debian's pseudorandom number generator was defunct). At the end of the day, unless one writes the OS and compilers and "bootstraps" them from one's own assembler, some degree of implicit trust in 3rd parties is always involved.

What we need is a truly open-source disk encryption tool which is:

a) Not based in the USA, so that it cannot be subverted by "Patriot" act

b) Which undergoes regular peer-reviews by multiple crypto experts

c) With strictly controlled development policies requiring oversight and expert approval of commits

The problem is: b and c cost money, so there needs to be a workable business model. And that needs to be creative due to a), which would preclude standard revenue stream from software licensing.

And even then, you still need to trust these guys and those crypto experts as well as compilers that were used to build the damn thing...

Patch iOS, OS X now: PDFs, JPEGs, URLs, web pages can pwn your kit


Well, considering that Google's business is ads, it is in their interest that the ads are viewed without undue interruption, even on Apple's kit.

That and the fact that WebKit engine has shared roots.

Meet the man building an AI that mimics our neocortex – and could kill off neural networks


Actually, the fact that memory is temporal has been known for quite a long time.

At least since the early 90s, after the discovery of spike-timing-dependent plasticity (STDP) - http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity - it became obvious that neurons encode information based on the temporal correlations of their input activity. By today, our knowledge has expanded greatly: it is known that synaptic plasticity operates on several time scales, and much more is known about its biological foundations. There are also dozens of models of varying complexity, with even some simple ones able to reproduce many plasticity experiments on pairs of neurons quite well.
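
For the curious, a minimal pair-based STDP rule can be sketched in a few lines of Python (illustrative constants of my own choosing, not any specific published parameter set):

```python
# A minimal pair-based STDP sketch: a synapse is strengthened when the
# presynaptic spike precedes the postsynaptic one, and weakened when the
# order is reversed. Constants are illustrative only.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant of the learning window, in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post before (or with) pre -> depression
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(10.0, 15.0) > 0)   # causal pair strengthens the synapse
print(stdp_dw(15.0, 10.0) < 0)   # anti-causal pair weakens it
```

The exponential window means spike pairs that are closer in time change the weight more - which is exactly the sensitivity to temporal correlations described above.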

Since the early 90s there has been lots of research into working memory and its neural correlates. While we do not have the complete picture (far from it, actually), we do know by now very well that synaptic strength is heavily dependent on temporal correlations and that the biological neural network behaves like an auto-associative memory. There are several models able to replicate simple things, including reward-based learning, but all in all, it can be said that we are really just at the beginning of understanding how the memory of living beings works.

As for Ray Kurzweil, sorry, but a title like "How to Create a Mind" is just preposterous. Ray Kurzweil has no clue how to create a mind. Not because he is not smart (he is), but because NOBODY on this planet has a clue how to create a mind, yet. Ray does, however, obviously know how to separate people from their money by selling books that do not deliver.

If somebody offers to tell you "how to create a mind" (other than, well, procreation, which pretty much everybody knows how to do), just ask them why it is that they did not create one, but instead want to tell you about it. That will save you some money and quite a lot of time. While I do not dispute the motivational value of such popular books, scientifically they do not bring anything new, and this particular book is just a rehash of decades-old ideas.


Re: Let a thousand flowers bloom

Well, pre-cortical brain structures can do impressive things as well.

Lizards and frogs do not have a neocortex, but are doing pretty well at surviving. Even octopuses are pretty darn smart, and they lack brain structures that even lizards have.

Today we are very far even from lizard vision (or octopus vision, if you will), and for that you do not need an enormous neocortex. I am pretty sure that something on the level of lizard intelligence would be pretty cool and excite the general populace enough.

These things are hard. I applaud Jeff's efforts, but for some reason I think this guy is getting lots of PR due to his previous (Palm) achievements while, strictly speaking, AI-wise, I do not see a big contribution yet.

This is not to say that he shouldn't be doing what he is doing - to the contrary, the more research into AI and understanding how the brain works, the better. But too much hype and PR can damage the field, as has happened before, when the results disappoint compared to expectations.


Re: model a neurone in one supercomputer

The only reason a computer always responds the same way to the same inputs is that the algorithm's designer made it so.

There is nothing stopping you from designing algorithms which do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because this is how the requirements were spelled out.

Mind you, even if your 'AI' algorithm is 100% deterministic, if you feed it a natural signal (visual, auditory, etc.) the responses will stop being "deterministic" due to the inherent randomness of natural signals. You can even extend this with some additional noise in the algorithm design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).

Even a simple network of artificial neurons described by two-dimensional models (relatively simple ones, such as adaptive exponential integrate-and-fire) will exhibit chaotic behavior when fed some natural signal.
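
A quick Python sketch (made-up parameters, and a plain leaky integrate-and-fire rather than the full AdEx model) shows the point: the update rule itself is fully deterministic, yet a pinch of input noise makes the spike train depend on the noise realization:

```python
# Leaky integrate-and-fire neuron with optional input noise. Parameters are
# illustrative only, not a tuned biological model.
import random

def simulate(noise: float, seed: int = 0, steps: int = 1000) -> list:
    """Euler integration of tau * dV/dt = -V + I; spike and reset at threshold."""
    rng = random.Random(seed)
    v, tau, dt, threshold = 0.0, 20.0, 1.0, 1.0
    spikes = []
    for t in range(steps):
        i_in = 1.05 + noise * rng.uniform(-1, 1)  # constant drive plus noise
        v += dt * (-v + i_in) / tau
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after a spike
    return spikes

print(simulate(0.0) == simulate(0.0, seed=1))  # True: noiseless runs are identical
print(simulate(0.2) == simulate(0.2, seed=1))  # almost surely False: noise decorrelates the runs
```

The algorithm never changes; only the input does - which is the distinction made above between a deterministic rule and deterministic behavior.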

As for the Penrose & Hameroff Orch OR theory, several of its predictions have already been disproved, making it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need some new theory which makes testable predictions that are confirmed. Penrose & Hameroff's Orch OR is not that theory.

So, Linus Torvalds: Did US spooks demand a backdoor in Linux? 'Yes'


Re: This is the case for open source operating systems.

Jesus effin' Christ - Debian generated useless pseudorandom numbers for almost a year and a half.

NOBODY spotted the gaping bug for >months<.

No, it is >not< possible to guarantee that software is 100% backdoor-free - open or closed, it does not matter.

Linux, like any modern OS, is full of vulnerabilities (Windows is no better, and neither is Mac OS X). Some of these vulnerabilities >might< be there on purpose.

The only thing you can do is trust nobody and follow best security practice - limited user rights, firewalls (I would not even trust just one vendor), regular patching, minimal open ports on the network, etc.

NEC tag teams with HP on high-end x86 servers


Re: AC

They probably mean Xeon E5, as Xeon E7 is by no means "latest", since it is waiting to be upgraded to Ivy Bridge EX in Q1/2014. Currently it is based on the now-ancient Westmere microarchitecture.

Xeon E5 is based on the Sandy Bridge uarch, and the upgrade to Ivy Bridge is imminent (in a couple of weeks) - however, E5 is limited to 4 sockets unless you use a 3rd-party connectivity solution such as NUMAlink.

Intel's server chip biz holds steady ahead of Ivy Bridge Xeons


Re: IB and Haswell a big disappointment

This news article is focused on Xeon, not the consumer line.

Ivy Bridge EP brings 50% more cores (12 vs. 8) within the same TDP bracket. This >is< significant, for the target market of the Xeon chips.

Ivy Bridge EX scales in the same way - 120 cores in 8P configuration vs. 80 cores available today.

This is nothing short of impressive, considering the fact that Ivy Bridge is just a "tick".

Intel: Haswell is biggest 'generational leap' we have EVER DONE


Re: Ultrabook debacle

Actually, the first "ultra-thin" notebook was the Sony X505, introduced by Sony in 2004.

Google it - that was a good couple of years before the MacBook Air.

Of course, Sony being Sony - they marketed the device at the CEO/CTO types, and priced it accordingly (it was well above 3K EUR in Germany). Hence, it was not very successful.

But in terms of actual invention - this was "it". Apple just took a saner approach and priced the Air in the range of an "affordable luxury" item - certainly not cheap, but well within reach of the middle class.

The same flop (typical of Sony) was repeated with the Z series - Sony made a dream machine which was more powerful than most MacBook Pros (before the 15" Retina) but lighter and actually thinner than the first-gen 13" Air. And a Full HD 13" screen since 2010 - something that took Apple quite a bit of time to catch up with. All in all, a perfect notebook - I know, since I owned all the Z models before I switched to the MacBook Retina 15".

Again, thanks to their ridiculous business model and practice of stuffing in crapware (at some point Sony even had the audacity to ask $50 for a "clean" OS installation), the world will remember the Apple MacBook Air and Retina as the exemplars of ultra-thin and ultra-powerful machines, and not the Sony X and Z series.

However, nothing changes the fact that it was Sony delivering the innovation years before Apple.

Intel's answer to ARM: Customisable x86 chips with HIDDEN POWERS


Re: Security

Normally, additional features that command a premium are fused-out in 'common' silicon and enabled only for special SKUs.

To temporarily enable a fused-out feature you would need several things, none of which are present in the computers employed in ordinary businesses. And even if you had all the tooling and clearances (which is next to impossible), the process of temporarily enabling it is not going to go unnoticed. Hardly something that can be used for exploitation. There are much easier avenues - including the increasingly rare kernel-level exploits.

ARM's new CEO: You'll get no 'glorious new strategy' from me


Re: Talk about apples and moonrocks

Sorry, but you obviously do not know what you are talking about.

Intel's modern CPUs use different micro-ops internally. The x86 instruction set is only kept as a backwards-compatibility measure and gets decoded into a series of micro-ops (besides, modern instructions such as AVX have nothing to do with the ancient x86). Today's Sandy Bridge / Ivy Bridge architecture has almost nothing in common with, say, the Pentium III or even Core. Intel tweaks its architectures in the "tick" cycle, but the "tock" architectures are brand new and very different from each other.

As for the x86 instruction set, and the age-old mantra (I believe coming from Apple fanboys) that x86 is inherently more power-hungry - this has been nicely disproved lately by the refined Intel Atoms (a 2007 architecture, mind you), which are pretty much on par with modern ARM architectures in terms of power consumption.

I am not a fan of x86 at all, I use what gets my job done in the best possible way.

But when I read things like this, it really strikes me how people manage to comment on something they obviously do not understand.

Google's Native Client browser tech now works on ARM


Re: Where is the advantage?

Err, why would sandboxed userspace code cause "the computer to crash" any more than anything else that runs in userland? I see absolutely no point in that statement. If a computer crashes today, it is almost universally due to bad kernel-level 3rd-party code such as drivers. NaCl has nothing to do with this any more than JavaScript - both would execute in userspace and, in the end, use OS-supplied userspace APIs.

And how is JavaScript any higher-level than, say, C++?

Sorry, the fact that a language is more open to script kiddies does not make it any more "high level", nor does it make buggy code any less likely. Crappy code is crappy code; it is caused by the developer, not the language.

The advantage of NaCl would be performance - if someone needs it, say, for multimedia, gaming, etc. It does not mean that everybody needs to use it, but it would be good to have as an option. The fact that computers are getting faster does not render it any less relevant, as modern multimedia/gaming is always pushing the limits of the hardware.

'SHUT THE F**K UP!' The moment Linus Torvalds ruined a dev's year


Re: The Kernel of Linux

Car engine management?

Yeah, that's exactly where I'd want a general purpose OS like Linux...

Thankfully, the automotive industry is still not going crazy, and critical ECUs are driven by hardened RTOSes.

Scientists build largest ever computerized brain


Re: So, not exactly Orac then

My bet is on the truly neuromorphic hardware.

Anything else introduces unnecessary overhead and bottlenecks. While initial simulations for model validation are OK to be done on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require elimination of overheads and bottlenecks in order not to consume megawatts.

The problem with neuromorphic hardware is of the "chicken and egg" type - to expect large investments, there needs to be a practical application which clearly outperforms the traditional von Neumann sort - and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level required to make something really useful.

Recognizing basic patterns with lots of tweaking is cool - but for a practical application it is not really going to cut it as the same purpose could be achieved with much cheaper "conventional" hardware.

If cortical modelling is to succeed - I'd guess it needs to achieve goals which would make it useful for military/defense purposes (it can be something even basic mammals are good at - recognition, since today's computers still suck big time when fed uncertain / natural data). Then the whole discipline will certainly get a huge kick to go to the next level.

Even today, there is a large source of funding - say, the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - coupled with the hyping of the general public's and politicians' expectations, the consequences of failure would be disastrous and could potentially lead to another "winter" similar to the AI winters we have had.

This is why I am very worried about people claiming that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline due to cuts in funding. I, for one, would much rather prefer smaller goals - and if we exceed them, so much the better.


Re: Better platform needed

There is still a tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest % of connections are local), there is still a staggering number of long-range connections that go across the brain. The average human brain contains on the order of hundreds of thousands of kilometers (nope, that is not a mistake) of synaptic "wiring".

Currently, our technologies for wiring things over longer distances are not yet comparable to mother nature's. Clever coding schemes can somewhat mitigate this (but then you need to plan space for mux/demux, and those things will consume energy) - but still, the problem is far from tractable with today's tech.


Re: Strong AI will, of course, use Linux

Operating system choice has absolutely nothing to do with brain modelling.

Most models are initially done in Matlab, which exists on Linux, OS X and Windows.

Then, applying this in large-scale practice is simply a question of tooling, and the tooling exists on all relevant operating systems today. You have CUDA and OpenMP on Linux and Windows. Heck, you even have the Intel compiler, if you love x86, on both Linux and Windows. It is more a practical choice that comes down to the other requirements.

On the other hand, it is true that there is a large choice of supporting tools (such as FreeSurfer) that exist on Linux and not on Windows. But then, anybody can run anything in a virtual machine nowadays.


Re: So, not exactly Orac then

Hmm, machine language would be a huge waste of time, as you could accomplish the same with an assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up with slower code compared to an optimizing C/C++ compiler.

What could make sense is assembly-code optimization of critical code paths, say synaptic processing. But even then, you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.

However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you would need at least 2 floating-point variables per synapse and several floating-point variables per neuron compartment.

Now, if your simulation time step is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of) - you need to do 1000 * number_of_synapses * N (N=2) floating-point reads per second, the same number of writes - and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically-plausible neural networks.
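
A back-of-the-envelope Python check of the traffic estimate above (my own assumptions: ~10^9 synapses for a bee-scale network, 32-bit floats, and only synaptic state counted - neuron compartments would add more):

```python
# Rough memory-traffic estimate for synaptic state in a spiking-network
# simulation: every synapse's variables are read and written each time step.
def synapse_bandwidth(n_synapses: int,
                      floats_per_synapse: int = 2,
                      bytes_per_float: int = 4,
                      steps_per_second: int = 1000) -> float:
    """Bytes per second of synaptic state traffic (one read + one write per step)."""
    per_step = n_synapses * floats_per_synapse * bytes_per_float * 2  # r + w
    return per_step * steps_per_second

bee_scale = synapse_bandwidth(10**9)   # assumed ~1e9 synapses
print(bee_scale / 1e12)  # -> 16.0 terabytes per second, synapses alone
```

Even this lower bound, ignoring neuron state and bookkeeping, lands in the terabytes-per-second range - hence "many terabytes per second of I/O".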


Java or not...

Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).

The biggest problem is the brain's "architecture", which is >massively< parallel. For example, a typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons with 10,000x as many synapses. Although mother nature did lots of optimization in the wiring, so the network is actually of a "small world" type (where most connections between neurons are local, with a small number of long-range connections, so that wiring, and therefore energy, is conserved) - it is still very unsuitable for the von Neumann architecture and its bus bottleneck.

For example, you can try this:


This is the small cortical simulator I wrote, which is highly optimized for the Intel architecture (heavily using SSE and AVX). It uses a multi-compartment model of neurons which is not biophysical, but phenomenological (designed to replicate the desired neuron behavior - that is, spiking - very accurately, without having to calculate all the intrinsic currents and other biological variables we do not know).

Still, to simulate 32768 neurons with ~2 million synapses in real time you will need ~120 GB/s of memory bandwidth (I can barely do it with two Xeon 2687W CPUs and heavily overclocked DDR 2400 RAM)! You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, but the memory bandwidth requirements are still nothing short of staggering.

Then, there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), discovered in the late 90s, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, the postsynaptic-voltage-dependent plasticity of Clopath et al.) but they are still not able to account for some experimentally observed facts. And then, how many phenomena are still not discovered experimentally?

Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have many more glial cells, which were thought to be just supporting "material", but now we know that glia actively contribute to the working of the neural networks and have signalling of their own...

Then, we still do not have much of a clue about how neurons actually wire. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation, with violations already discovered in vivo. As we do know that neural networks mature and evolve depending on their inputs, figuring out how neurons wire together is of the utmost importance if we are really to figure out how this thing works.

In short, today we are still very far away from understanding how brain works in the detail required to replicate it or fully understand it down to the level of neurons (or maybe even ions).

However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the von Neumann architecture of computing machines, and emulation of brains on von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.

Java or C++ - it does not really matter that much on those scales :(

They said it wasn't right for biz - but Samsung unveils TLC SSD


Re: I've gone for large capacity.

Not all SSDs are created equal - SLC SSDs can sustain writes on the order of petabytes, and that's why they are used in the enterprise sector.

