"Frog" brain... or "any" brain...
If it were able to catch a fly for its dinner, Dr. Modha would most likely be earning himself a Nobel prize.
Unfortunately, Dr. Modha is known for sensationalist announcements (several years ago it was a "cat" brain, which sadly did not do much either) and little real substance.
Putting a bunch of simplified neuron models together is nothing new. It has been done dozens of times before:
- In 2008, Edelman and Izhikevich made a large-scale model of the human brain with 100 billion (yes, billion) simulated neurons (http://www.pnas.org/content/105/9/3593.full)
- Since then, there have been numerous implementations of large-scale models, ranging from a million to hundreds of millions of artificial neurons
- Computational neuroscience is my hobby, and I managed to put together a simulation with 16.7 million artificial neurons and ~4 billion synapses on a beefed-up home PC (http://www.digicortex.net/). OK, it was not really a home PC, but it will be in a few years
- And, of course, there is the Blue Brain Project, which evolved into the Human Brain Project. Blue Brain had a model of a single rat cortical column, with ~50,000 neurons, but modelled to a much higher degree of accuracy (each neuron was modelled as a complex structure with a few thousand independent compartments and hundreds of ion channels in each compartment).
All of these simulations have one thing in common: while they model biological neurons with varying degrees of complexity (from simple "point" processes to complex geometries with thousands of branches), and they all show "some" degree of network behavior similar to living brains - from simple "brain rhythms" which emerge and are anti-correlated when measured in different brain regions, to more complex phenomena such as the acquisition of receptive fields (e.g. neurons fed with a visual signal become progressively "tuned" to respond to, say, oriented lines) - NONE OF THEM is yet able to model large-scale intelligent behavior.
To put it bluntly, Modha's "cat" or "frog" are just lumps of coupled differential equations. These lumps are capable of producing interesting emergent behavior, such as synchronization, large-scale rhythms and some learning through neural plasticity, resulting in simple neuro-plastic phenomena.
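For a flavor of what such a "lump of differential equations" looks like, here is a minimal sketch of a single artificial neuron - the well-known Izhikevich model with its standard regular-spiking parameters, integrated with a plain Euler step (the constant input current and step counts are illustrative):

```python
# One Izhikevich-model neuron, regular-spiking cortical parameters.
# Simple Euler integration with a 1 ms step; drive current is illustrative.
def izhikevich_spikes(current=10.0, steps=1000, dt=1.0):
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameter set
    v, u = -65.0, b * -65.0              # membrane potential / recovery var
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: reset v, bump recovery
            v, u = c, u + d
            spikes += 1
    return spikes

# Driven with a constant current, the model emits a regular spike train;
# with no input it settles to its resting potential and stays silent.
n_driven = izhikevich_spikes(current=10.0)
n_silent = izhikevich_spikes(current=0.0)
```

Wire millions of these together with plastic synapses and you get exactly the kind of simulation discussed above: rhythms and synchronization emerge, but nothing resembling intelligence.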
But they are NOWHERE near anything resembling "intelligence" - not even of a flea. Not even of a flatworm.
I do sincerely hope we will learn how to make intelligent machines. But we have much more to learn first. At the moment, we simply do not know what level of modelling detail is needed to replicate the intelligent behavior of even a simple organism.
I do applaud Modha's work, as well as the work of every computational neuroscientist, AI computer scientist, AI software engineer, and every developer playing with AI as a hobby. We need all of them to advance our knowledge of intelligent life.
But, for some reason, I do not think PR like this is very helpful. AI, as a field, has suffered several setbacks in its history thanks to too much hype. There is even a term, "AI winter", which came about precisely as a result of one of those hype cycles, very early in the history of AI.
I am also afraid that the Human Brain Project, for all it is worth, might lead us to the same (temporary) dead end. I do hope the HBP will achieve its goals, but the announcements Dr. Markram has made in recent years, especially (I paraphrase) "we can create a human brain in 10 years", will come back to haunt us in 10 years if the HBP does not reach its goals. The EU agreed to invest one billion euros in this - I hope we picked the right time, but I am slightly pessimistic. Otherwise we will be in for another AI winter :(
Err, actually it works the other way around: people making extraordinary claims need to come up with extraordinary proof.
An inventor claims he has invented a brilliant new method of propulsion which seems to violate some laws of physics. No biggie - if it really works, I am quite sure the inventor will have no problem selling / licensing / giving away / whatever implementations of his invention.
If people had to recreate every single silly apparatus just to state that it does not work, civilization would be kept busy recreating garbage.
Mind you, I am not saying this particular thing is garbage, maybe it is a paradigm shift in space travel. But the burden of proof is on the inventor and people claiming the invention works.
The NASA experiment did not prove this thing works. The fact that they got something out of a deliberate setup designed NOT to work casts doubt on the validity of the experiment. It also does not help that they did not perform the experiment in a vacuum.
Nevertheless, if this invention does indeed work, it will have no issue whatsoever in being confirmed experimentally.
Sorry, but that is a load of bull.
Relativity did not have any problem getting accepted. Neither did quantum mechanics.
That is because these things were proven conclusively and repeatedly. Sure, there were probably people who did not "believe" until their deaths, but most of the academic world quickly caught up.
If this thing "works", then there will be absolutely no problem replicating the setup and confirming that it really works. Perhaps somebody will also eliminate the possible causes of concern, such as the fact that the NASA chaps did not test in a vacuum. If the proposed invention is really meaningful, it will have no problem with replication and confirmation in a rigorous setup.
Unfortunately, no, they did not prove it works.
Even the modified setup "worked" - although it was not supposed to.
This suggests that the more likely cause is an error in the experimental setup or measurement.
It would be really great if this thing worked, but it's going to take a bit more to prove it.
Re: What did you expect?
Nobody is stopping you from modifying software you purchased, but nobody is obliged to provide you with everything needed to do it in the most convenient way. With binary code you'll have to do it in assembler, but nobody stops you in principle.
Did your vacuum cleaner company give you the production tooling and source files used to build the vacuum? No? Did your car vendor hand over the source code for the ECU? Did they give you the VHDL code for the ICs? Assembly instructions? No? Bas*ards!
As for banning, I'd first start with banning stupidity. But, for some reason, that would not work.
Re: This is a Windows API problem
It would not work. There are too many applications written in a crappy way, with pieces of code that work properly only with a decreased time quantum. Their time accounting would be fcked and the results would range from audio/video dropouts to total failure (in the case of automation / embedded control software).
For example, in all cases where code expects a timer to be accurate to the level of, say, 1ms or 5ms. Too much multimedia-related or automation-related code would be broken.
It is sad, but true. Microsoft should never have allowed this Win 3.x / 9x crap into the NT Win32 API, but I suppose they were under pressure to make crappily written 3rd-party multimedia stuff work on the NT Windows flavors, otherwise they might have had problems migrating customers to NT.
Of course, nowadays (since Vista) there are much better APIs dedicated to multimedia / "pro" audio, but here the problem is the legacy.
At least Microsoft could have enforced API deprecation for all software linked against NT 6.x, so that this terrible behavior could be avoided for new software. But that, too, is probably too much to ask due to "commercial realities". Consumer PCs will keep sucking at battery life, nothing new here :(
Re: This is a Windows API problem
It is the timeBeginPeriod API:
On most systems today you can get down to 1ms. But this has a high cost associated with the interrupt overhead.
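As a back-of-the-envelope illustration of that cost (assuming the common NT defaults of a ~15.6 ms tick versus the 1 ms that timeBeginPeriod(1) requests):

```python
# Rough cost of lowering the timer period. 15.6 ms is the typical default
# tick on modern NT kernels; 1.0 ms is what timeBeginPeriod(1) asks for.
DEFAULT_PERIOD_MS = 15.6
MM_PERIOD_MS = 1.0

default_wakeups = 1000.0 / DEFAULT_PERIOD_MS   # ~64 timer interrupts/sec
mm_wakeups = 1000.0 / MM_PERIOD_MS             # 1000 timer interrupts/sec
overhead_factor = mm_wakeups / default_wakeups # ~15.6x more interrupt work
```

One process requesting 1 ms is enough to make the whole system service roughly 15x more timer interrupts, which is exactly where the battery-life cost comes from.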
Re: This is a Windows API problem
I meant wake up the CPU 1000 times in a second.
This is a Windows API problem
Windows, ever since the pre-NT versions (Win95 & Co), has allowed usermode applications without admin rights to change the system quantum (the "tick" duration).
By default, the quantum in NT kernels is either 10ms or 15ms, depending on the configuration (nowadays it tends mostly to be 15ms). However, >any< user-mode process can simply request to get this down to 1ms using an ancient "multimedia" API.
Needless to say, in the old days of slow PCs this was used by almost all multimedia applications, since it is much easier to force the system to do time accounting on a 1ms scale than to do proper programming.
For example, an idiot thinks he needs 1ms precision for his timers - voila, just force the >entire OS< to wake the damn CPU 1000 times a second and do all that interrupt processing just to service his ridiculous timer, because the developer in question has no grasp of efficient programming. In most cases it is perfectly possible to implement whatever algorithm you need with a 10/15ms quantum, but it requires decent experience in multithreaded programming. This, of course, is lacking in many cases.
Only a very small subset of applications/algorithms needs 1ms clock-tick precision. However, for those, the system >should< ask for admin rights, as forcing the entire OS to do 10x more work has terrible consequences for laptop/mobile battery life.
Microsoft's problem is typical: they cannot change the old policy as it would break who-knows-how-much "legacy" software.
Every time a company in the Valley becomes big enough (sometimes not even that), they have to give it a try and make their own programming language.
Apple already did this, it looks like this is their second try.
The world is full of "C replacements"; they come and go... but for some reason C is pretty much still alive and kicking, and something tells me it is going to be alive long after the latest iteration of the Valley's "C replacement" is dead and forgotten.
I am sorry, but this is simply not true (that open source software >cannot< have backdoors because someone, somewhere, might spot them).
Very good backdoors are made to look like plausible bugs (and all general-purpose software is full of them). Something like missed parameter validation, or a specific sequence of events that triggers the desired behavior on the most popular architectures/compilers, allowing an adversary to read the contents of a buffer, etc.
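To illustrate the idea (a deliberately simplified, hypothetical sketch - the names and "protocol" are made up, loosely modeled on the Heartbleed class of bugs): a parser that forgets to validate one length field is indistinguishable from an honest mistake, yet leaks adjacent memory on demand:

```python
# Hypothetical sketch of a "plausible bug" backdoor: a record parser that
# trusts a length field supplied by the peer. All names are illustrative,
# not from any real codebase.
SHARED_BUFFER = bytearray(b"payload-data" + b"SECRET-KEY-MATERIAL")

def parse_echo_request(offset: int, claimed_len: int) -> bytes:
    # The "bug": claimed_len is never validated against the real payload
    # length, so a large value reads adjacent buffer contents as well.
    return bytes(SHARED_BUFFER[offset:offset + claimed_len])

# Honest request: echo back the 12-byte payload.
ok = parse_echo_request(0, 12)
# Malicious request: claim 64 bytes and read past the payload.
leak = parse_echo_request(0, 64)
```

In a code review, the missing bounds check reads as sloppiness, not malice - which is precisely the point.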
It takes an awful lot of time to spot issues in complex code - it took Debian more than a year and a half to figure out that their pseudorandom number generator was fatally flawed due to a stupid attempt at optimization. And >that< was not even hidden; it was there in plain sight. Not to mention that crypto code >should not< be "fixed" by general-purpose developers (actually, this is what caused the Debian PRNG problem in the first place), so the pool of experts who could review the code drastically shrinks. You then have to hope that some of these experts will invest their time reviewing a 3rd-party component. This costs a hell of a lot of time and, unless somebody has a personal interest, I doubt very much that you would assemble a team of worldwide crypto experts to review your github-hosted project without paying for it.
Then, complex code is extremely hard to review completely. This is why, in aerospace and vehicle telematics, critical software is written from scratch so that it can be proven to work, by following very strict guidelines on how the software should be written and tested (and, guess what, even then bugs occur). General-purpose software with millions of lines of code? Good luck. The best you can do is schedule expert code reviews and, in addition, go through the code with some fuzzing kit to spot pointer overruns etc., but even after all that, more sinister vulnerabilities might still slip through.
So, sorry, no - being open source does not guarantee a lack of backdoors. Because in this day and age, a smart adversary is not going to implement a backdoor in plain sight. Instead, it will be an obscure vulnerability that can easily be attributed to simple programmer error.
Faith that open source code is backdoor-free because it is open is pretty much like the idea that an infinite number of monkeys with an infinite number of typewriters will write all of Shakespeare's works. Please do not get me wrong, I am not comparing developers to monkeys, but the principle is the same: just because there is some chance of something happening does not mean it will happen. No, that is not guaranteed.
Love it or not, there is no objective reason why you would trust Microsoft less than some bunch of anonymous developers.
Microsoft has a vested interest in selling their product worldwide, and a backdoor discovered in their crypto would severely impair their ability to sell Windows to any non-USA government entity, and probably to big industry players too.
I am not saying that BitLocker has no backdoors - but there is no objective reason to trust BitLocker less than TrueCrypt.
The sad thing is, when it comes to crypto there is simply no way to have 100% trust >unless< you designed your own hardware, wrote your own assembler to build your own OS and its system libraries and, finally, the crypto itself.
Since nobody does that, there is always some degree of implicit trust and, frankly, I see no special reason why one would trust some anonymous developers more than a corporation. The same government pressure that can be applied to a corporation can be applied to an individual, and we do not even know whether the TrueCrypt developers are in the USA (or within the USA government's reach) or not. Actually, it is easier for a government to apply pressure to an individual, who has far fewer resources to fight back compared to a cash-rich corporation that can afford a million-dollar-a-day legal team if need be.
The fact that TrueCrypt is open source means nothing as far as trust is concerned. Debian had a gaping hole in its pseudorandom number generator, there for everybody to see, for 1.5 years. Let's not even start on OpenSSL and its vulnerabilities.
There is simply no way to guarantee that somebody else's code is free of backdoors. You can only have >some< level of trust, somewhere between 0% and less than 100%.
Re: Whoa there
Actually, BitLocker has >not< required a TPM since Windows 7. Since Windows 7 it has allowed a passphrase in pretty much the same way as TrueCrypt. I use it, since TrueCrypt does not (and, probably, never will after the announcement) support UEFI partitions.
Also, BitLocker does not, by default, leave a "backdoor" for domain admins. If this is configured, then it is done so by a corporate group policy, but it is not ON by default.
BitLocker, on the other hand, does not allow plausible deniability, and there people will need to find some other option now that TrueCrypt development seems to be ending.
The problem of trust is there for both TrueCrypt and its closed-source alternatives such as BitLocker. There are ways to insert vulnerabilities that look like ordinary bugs and are very hard to catch even when somebody is really looking at the source code (see how long it took people to figure out that the Debian pseudorandom number generator was defunct). At the end of the day, unless one writes the OS and compilers and "bootstraps" them from one's own assembler, some degree of implicit trust in 3rd parties is always involved.
What we need is a truly open-source disk encryption tool which is:
a) Not based in the USA, so that it cannot be subverted by "Patriot" act
b) Which undergoes regular peer-reviews by multiple crypto experts
c) With strictly controlled development policies requiring oversight and expert approval of commits
The problem is: b and c cost money, so there needs to be a workable business model. And that model needs to be creative because of a), which precludes the standard revenue stream from software licensing.
And even then, you still need to trust these guys and those crypto experts as well as compilers that were used to build the damn thing...
Well, considering that Google's business is ads, it is in their interest that ads are viewed without undue interruption, even on Apple's kit.
That and the fact that WebKit engine has shared roots.
Actually, the fact that memory is temporal has been known for quite a long time.
At least since the early 90s, after the discovery of spike-timing-dependent plasticity (STDP) - http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity - it has been obvious that neurons encode information based on the temporal correlations of their input activity. By today, our knowledge has greatly expanded: we know that synaptic plasticity operates on several time scales, and we know much more about its biological foundations. There are also dozens of models of varying complexity, with even some simple ones able to reproduce many plasticity experiments on pairs of neurons quite well.
Since the early 90s there has been a lot of research into working memory and its neural correlates. While we do not have the complete picture (far from it, actually), we do know very well by now that synaptic strength is heavily dependent on temporal correlations and that biological neural networks behave like auto-associative memories. There are several models able to replicate simple things, including reward-based learning, but all in all it can be said that we are really just at the beginning of understanding how the memory of living beings works.
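For the curious, the basic pair-based STDP rule mentioned above fits in a few lines (parameters here are illustrative, not fitted to any particular experiment):

```python
import math

# Minimal pair-based STDP rule, a common textbook form: the sign and size
# of the weight change depend only on the relative timing of the pre- and
# postsynaptic spikes. Amplitudes and time constant are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # plasticity time constant in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post fires before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU)

# Causal pairing strengthens the synapse, anti-causal pairing weakens it:
ltp = stdp_dw(t_pre=10.0, t_post=15.0)   # dt = +5 ms -> positive change
ltd = stdp_dw(t_pre=15.0, t_post=10.0)   # dt = -5 ms -> negative change
```

This is exactly the "temporal correlation" point: the same two spikes produce opposite weight changes depending purely on their order in time.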
As for Ray Kurzweil, sorry, but anybody who can write something called "How to Create a Mind" is just being preposterous. Ray Kurzweil has no clue how to create a mind. Not because he is not smart (he is), but because NOBODY on this planet has a clue how to create a mind yet. Ray does, however, obviously know how to separate people from their money by selling books that do not deliver.
If somebody offers to tell you "how to create a mind" (other than, well, procreation, which pretty much everybody knows how to do), just ask them why they did not create one instead of wanting to tell you about it. That will save you some money and quite a lot of time. While I do not dispute the motivational value of such popular books, scientifically they do not bring anything new, and this particular book is just a rehash of decades-old ideas.
Re: Let a thousand flowers bloom
Well, pre-cortical brain structures can do impressive things as well.
Lizards and frogs do not have a neocortex, but they are doing pretty well at surviving. Even octopuses are pretty darn smart, and they do not even have the brain parts that lizards have.
Today we are very far even from lizard vision (or octopus vision, if you will), and for that you do not need an enormous neocortex. I am pretty sure that something on the level of lizard intelligence would be pretty cool and would excite the general populace enough.
These things are hard. I applaud Jeff's efforts but for some reason I think this guy is getting lots of PR due to his previous (Palm) achievements while, strictly speaking, AI-wise, I do not see a big contribution yet.
This is not to say that he shouldn't be doing what he is doing - on the contrary, the more research into AI and into understanding how the brain works, the better. But too much hype and PR can damage the field, as has happened before, because the results might be disappointing compared to expectations.
Re: model a neurone in one supercomputer
The only reason a computer always responds the same way to the same inputs is that the algorithm's designer made it so.
There is nothing stopping you from designing algorithms which do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because that is how the requirements were spelled out.
Mind you, even if your 'AI' algorithm is 100% deterministic, if you feed it a natural signal (visual, auditory, etc.) the responses will stop being "deterministic" due to the inherent randomness of natural signals. You can extend this even further with additional noise in the algorithm design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).
Even a simple network of artificial neurons modeled with two-dimensional algorithms (relatively simple ones, such as adaptive exponential integrate-and-fire) will exhibit chaotic behavior when fed some natural signal.
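A tiny sketch of that point (illustrative parameters, and a plain leaky integrate-and-fire neuron rather than the adaptive exponential variant): the model itself is fully deterministic, yet its output varies the moment the input carries noise, because all the variability comes from the signal, not the algorithm:

```python
import random

# A deterministic leaky integrate-and-fire neuron: identical inputs always
# give identical spike counts; a noisy "natural" input does not.
def lif_spike_count(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9):
    v, spikes = v_rest, 0
    for i in inputs:
        v = v * leak + i          # leaky integration of the input
        if v >= v_thresh:
            spikes += 1
            v = v_rest            # reset after a spike
    return spikes

constant = [0.15] * 1000
run_a = lif_spike_count(constant)
run_b = lif_spike_count(constant)      # identical by construction

rng = random.Random(42)                # seeded only for reproducibility here
noisy = [0.15 + rng.gauss(0, 0.05) for _ in range(1000)]
noisy_count = lif_spike_count(noisy)   # differs from run to run of the input
```

The algorithm never changed; only the statistics of its input did.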
As for the Penrose & Hameroff Orch OR theory, several predictions this theory made have already been disproved, making it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need some new theory, one able to make testable predictions which are then confirmed. Penrose & Hameroff's Orch OR is not that theory.
Re: This is the case for open source operating systems.
Jesus effin' Christ - Debian generated useless pseudorandom numbers for almost a year and a half.
NOBODY spotted the gaping bug for >months<.
No, it is >not< possible to guarantee that software is 100% backdoor-free - open or closed, it does not matter.
Linux, like any modern OS, is full of vulnerabilities (Windows is no better, and neither is Mac OS X). Some of these vulnerabilities >might< be there on purpose.
The only thing you can do is trust nobody and follow best security practice - limited user rights, firewalls (I would not even trust just one vendor), regular patching, a minimal number of open ports on the network, etc. etc.
They probably mean Xeon E5, as the Xeon E7 is by no means the "latest" - it is waiting to be upgraded to Ivy Bridge EX in Q1/2014 and is currently based on the now-ancient Westmere microarchitecture.
Xeon E5 is based on the Sandy Bridge uarch, and the upgrade to Ivy Bridge is imminent (in a couple of weeks) - however, the E5 is limited to 4 sockets unless you use a 3rd-party connectivity solution such as NUMAlink.
Re: IB and Haswell a big disappointment
This news article is focused on Xeon, not the consumer line.
Ivy Bridge EP brings 50% more cores (12 vs. 8) within the same TDP bracket. This >is< significant, for the target market of the Xeon chips.
Ivy Bridge EX scales in the same way - 120 cores in 8P configuration vs. 80 cores available today.
This is nothing short of impressive, considering the fact that Ivy Bridge is just a "tick".
Re: Ultrabook debacle
Actually, the first "ultra-thin" notebook was the Sony X505, introduced by Sony in 2004.
Google it - that was a good couple of years before the MacBook Air.
Of course, Sony being Sony, they marketed the device at the CEO/CTO types and priced it accordingly (it was well above 3K EUR in Germany). Hence, it was not very successful.
But in terms of actual invention, this was "it". Apple just took a saner approach and priced the Air in the "affordable luxury" range - certainly not cheap, but well within reach of the middle class.
The same flop (typical of Sony) was repeated with the Z series - Sony made a dream machine which was more powerful than most MacBook Pros (before the 15" Retina) yet lighter and actually thinner than the first-gen 13" Air. And a Full HD 13" screen since 2010 - something that took Apple quite a bit of time to catch up with. All in all, a perfect notebook - I know, since I owned all the Z models before I switched to the MacBook Retina 15".
Again, thanks to their ridiculous business model and their practice of stuffing in crapware (at some point Sony even had the audacity to ask $50 for a "clean" OS installation), the world will remember the Apple MacBook Air and Retina as the exemplars of ultra-thin and ultra-powerful machines, and not the Sony X and Z series.
However, nothing changes the fact that it was Sony delivering the innovation years before Apple.
Normally, additional features that command a premium are fused-out in 'common' silicon and enabled only for special SKUs.
To temporarily enable a fused-out feature you would need several things, none of which are present in the computers employed in ordinary businesses. And even if you had all the tooling and clearances (which is next to impossible), the process of temporarily enabling it would not go unnoticed. Hardly something that can be used for exploitation. There are much easier avenues - including kernel-level exploits, even if those are getting more and more rare.
Re: Talk about apples and moonrocks
Sorry, but you obviously do not know what you are talking about.
Intel's modern CPUs use different micro-ops internally. The x86 instruction set is only kept as a backwards-compatibility measure and gets decoded into a series of micro-ops (besides, modern instruction extensions such as AVX have nothing to do with the ancient x86). Today's Sandy Bridge / Ivy Bridge architecture has almost nothing in common with, say, the Pentium III or even Core. Intel tweaks its architectures in the "tick" cycle, but the "tock" architectures are brand new and very different from each other.
As for the x86 instruction set, and the age-old mantra (I believe coming from Apple fanboys) that x86 is inherently more power-hungry, this has been nicely disproved lately by the refined Intel Atoms (a 2007 architecture, mind you), which are pretty much on par with modern ARM architectures in terms of power consumption.
I am not a fan of x86 at all, I use what gets my job done in the best possible way.
But when I read things like this, it really strikes me how people manage to comment on something they obviously do not understand.
Re: Where is the advantage?
Sorry, but the fact that a language is more accessible to script kiddies does not make it any more "high level", nor does it make buggy code any less likely. Crappy code is crappy code; it is caused by the developer, not by the language.
The advantage of NaCl would be performance - if someone needs it, say, for multimedia, gaming, etc. It does not mean that everybody needs to use it, but it would be good to have as an option. The fact that computers are getting faster does not render it any less relevant, as modern multimedia/gaming always pushes the limits of the hardware.
Re: The Kernel of Linux
Car engine management?
Yeah, that's exactly where I'd want a general purpose OS like Linux...
Thankfully, the automotive industry has still not gone crazy, and critical ECUs are driven by hardened RTOSes.
Re: So, not exactly Orac then
My bet is on the truly neuromorphic hardware.
Anything else introduces unnecessary overhead and bottlenecks. While initial simulations for model validation are fine to run on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require the elimination of overheads and bottlenecks in order not to consume megawatts.
The problem with neuromorphic hardware is of the "chicken and egg" type - to attract large investments there needs to be a practical application which clearly outperforms the traditional Von Neumann sort, and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level needed to make something really useful.
Recognizing basic patterns with lots of tweaking is cool - but for a practical application it is not really going to cut it, as the same purpose can be achieved with much cheaper "conventional" hardware.
If cortical modelling is to succeed, I'd guess it needs to achieve goals that would make it useful for military/defense purposes (it can be something even basic mammals are good at - recognition, since today's computers still suck big time when fed uncertain / natural data). Then the whole discipline will certainly get a huge kick to the next level.
Even today there is a large source of funding - say, the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - coupled with the pumping-up of the general public's and politicians' expectations, the consequences of failure would be disastrous and could potentially lead to another "winter" similar to the AI winters we have had.
This is why I am very worried about people claiming that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline through cuts in funding. I, for one, would much prefer smaller goals - and if we exceed them, so much the better.
Re: Better platform needed
There is still the tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest percentage of connections are local), there is still a staggering number of long-range connections that go across the brain. The average human brain contains on the order of hundreds of thousands of kilometers (no, that is not a mistake) of synaptic "wiring".
Currently our technologies for wiring things over longer distances are not yet comparable to mother nature's. Clever coding schemes can mitigate this somewhat (but then you need to plan space for mux/demux, and those things consume energy) - but still, the problem is far from tractable with today's tech.
Re: Strong AI will, of course, use Linux
Operating system choice has absolutely nothing to do with brain modelling.
Most models are initially done in Matlab, which exists on Linux, OS X and Windows.
Then, applying this in large-scale practice is simply a question of tooling, and the tooling exists on all relevant operating systems today. You have CUDA and OpenMP on both Linux and Windows. Heck, you even have the Intel compiler on both Linux and Windows if you love x86. It is more a practical choice, driven by the other requirements.
It is true, on the other hand, that there is a large set of support tools (such as FreeSurfer) that exist on Linux and not on Windows. But then, anybody can run anything in a virtual machine nowadays.
Re: So, not exactly Orac then
Hmm, machine language would be a huge waste of time, as you could accomplish the same with assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up producing slower code than an optimizing C/C++ compiler.
What could make sense is assembly-level optimization of critical code paths, say synaptic processing. But even then, you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.
However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you need at least 2 floating-point variables per synapse and several floating-point variables per neuron compartment.
Now, if your simulation time step is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of), you need to do 1000 * number_of_synapses * N (N=2) floating-point reads per second, the same number of writes, and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically-plausible neural networks.
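Plugging in numbers (assuming ~10^9 synapses for a bee-scale brain - an illustrative figure - and 4-byte floats):

```python
# Back-of-the-envelope I/O estimate for the synaptic state traffic described
# above. The ~1e9 synapse count for a bee-scale brain is an assumption
# chosen for illustration.
def synaptic_bandwidth_bytes_per_sec(n_synapses, vars_per_synapse=2,
                                     bytes_per_var=4, steps_per_sec=1000):
    # Each 1 ms step reads and writes every state variable of every synapse.
    per_step = n_synapses * vars_per_synapse * bytes_per_var * 2  # read+write
    return per_step * steps_per_sec

bee = synaptic_bandwidth_bytes_per_sec(1_000_000_000)
# bee / 1e12 gives 16.0, i.e. on the order of 16 TB/s of raw state traffic
```

Sixteen terabytes per second, for a bee - before you count the neuron compartments, the arithmetic, or the spike delivery.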
Java or not...
Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).
The biggest problem is the brain's "architecture", which is >massively< parallel. A typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons with 10,000x as many synapses. Although mother nature did lots of optimization in the wiring - the network is actually of the "small world" type, where most connections between neurons are local, with a small number of long-range connections, so that wiring (and therefore energy) is conserved - it is still very unsuitable for the Von Neumann architecture and its bus bottleneck.
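The "small world" wiring mentioned above can be sketched in a few lines, Watts-Strogatz style: mostly local connections on a ring, with a small fraction rewired into random long-range shortcuts (all parameters illustrative):

```python
import random

# Watts-Strogatz-style "small world" wiring sketch: each node connects to
# its k nearest ring neighbors, and a small fraction of those edges are
# rewired to random long-range targets. Parameters are illustrative.
def small_world_edges(n, k=4, p_rewire=0.05, seed=0):
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for step in range(1, k // 2 + 1):
            j = (i + step) % n            # local neighbor on the ring
            if rng.random() < p_rewire:   # occasionally rewire long-range
                j = rng.randrange(n)
            if j != i:                    # skip accidental self-loops
                edges.append((i, j))
    return edges

edges = small_world_edges(1000)
# Count edges that stayed local (ring distance <= k/2):
local = sum(1 for i, j in edges if min(abs(i - j), 1000 - abs(i - j)) <= 2)
```

The handful of long-range shortcuts is exactly what keeps path lengths short while most wiring stays cheap - and also exactly what makes the connectivity painful to map onto a flat memory bus.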
For example, you can try this:
This is the small cortical simulator I wrote, which is highly optimized for the Intel architecture (heavily using SSE and AVX). It uses a multi-compartment model of neurons which is not biophysical but phenomenological (designed to replicate the desired neuron behavior - that is, spiking - very accurately, without having to calculate all the intrinsic currents and other biological variables we do not know).
Still, to simulate 32,768 neurons with ~2 million synapses in real time you will need ~120 GB/s of memory bandwidth (I can barely do it with two Xeon E5-2687W CPUs and heavily overclocked DDR3-2400 RAM)! You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, and the memory bandwidth requirements are still nothing short of staggering.
Then there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), discovered in the late '90s, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, the post-synaptic voltage-dependent plasticity of Clopath et al.), but they are still unable to account for some experimentally observed facts. And then, how many phenomena have not even been discovered experimentally yet?
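For readers unfamiliar with it, the classic pair-based STDP rule mentioned above fits in a few lines - a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise. The parameter values here are illustrative, not taken from any particular paper:

```python
import math

# Illustrative amplitudes and time constant (assumed values)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms):
    exponentially decaying in the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post before pre -> depression
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(10.0, 15.0))  # small positive weight change
```

The point of the post stands: a rule this simple cannot be the whole story, which is why later voltage-dependent models were proposed.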
Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have many more glial cells, which were thought to be mere supporting "material", but now we know that glia actively contribute to the working of neural networks and have signalling of their own...
Then, we still do not have much of a clue about how neurons actually wire. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation, with violations already discovered in vivo. As we do know that neural networks mature and evolve depending on their inputs, figuring out how neurons wire together is of the utmost importance if we are really to figure out how this thing works.
In short, today we are still very far away from understanding how the brain works in the detail required to replicate it, or to fully understand it down to the level of neurons (or maybe even ions).
However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the von Neumann architecture of computing machines, and emulating brains on von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.
Java or C++ - it does not really matter that much on those scales :(
Re: I've gone for large capacity.
Not all SSDs are created equal - SLC SSDs can sustain writes on the order of petabytes, and that is why they are used in the enterprise sector.
You are comparing the simple ability to find what you want (application-wise) online - say, by googling for it - with writing a four-line bash script?
Wow... standards have indeed fallen these days. I guess in 10 years even mere searching will be considered an intellectual chore.
If money is no object THIS is the ultimate "extreme desktop CPU"
OK, it is not exactly "desktop", but it is a workstation - close enough :) And it allows dual-CPU configurations.
I have two of these puppies and the software I write is very happy with the speedup. Not to mention that it is quite easy to crank Samsung's 30nm ECC "green memory" up to 2133 MHz (the official spec of the RAM, and the Xeon E5's allowed maximum, is 1600 MHz), which makes any large-scale biological simulation quite happy thanks to the insane memory bandwidth (~69 GB/s).
Intel decided to cripple the 3960X/3930K by fusing off two cores, as they understand overclockers will push the voltages north of 1.3 V... If they had left all eight cores on, the chip would generate extreme amounts of heat which would be very hard to evacuate from today's desktop setups, unless they were cooled with some heavy-duty water cooling setup.
The funniest part...
Is that TomTom sells a "Speed Cameras" service to its users, helping them to avoid speed traps...
Then they sell the data to the cops, so the cops can maximise their revenue.
Looks like the dream business model :-)
Matt Asay is chief operating officer of Ubuntu commercial operation Canonical. With more than a decade spent in open source
Gee... doesn't look like a Redmond PR-droid, does he?
Re: Instead of telling us where the cameras are ...
@Harry - Actually, TomTom does support warning you if you exceed the speed limit. It is a standard feature in all TomTom navigation units.
Re: Speed Camera Databased
Actually, it is only Switzerland that is >very< anal about speed camera POI databases; they threatened TomTom and other GPS device manufacturers that they would stop sales of their devices if they did not remove that feature.
In Germany it is also illegal to have such an aid in the car, but they can't be bothered to enforce it... You'll see TomTom GPS devices with "Speed Camera Database" advertisements in tech stores like Saturn. Illegal to use, but legal to sell, that is.
Please educate yourself about the matter you are writing about
AAC (like MP3 or H.264) has NOTHING to do with DRM. It is a worldwide international standard (ISO/IEC 14496 in the case of AAC, ISO/IEC 11172 in the case of MP3) and relates to audio coding only.
Encrypting it with your DRM has nothing whatsoever to do with the AAC (or H.264, or MP3) standard itself. Hell, you can even put Ogg Vorbis in a DRM container if you wish. This is purely the decision of the implementer. Standards alone neither enforce nor prohibit DRM.
Second, H.264, MP3 and AAC are as open as it gets - they are ALL fully documented and available from ISO/ITU directly (or your country's standards body), with open source reference C/C++ software available as well. When the patents expire, they will be fully public domain - much more "free" (as in freedom) than some GPL-ed stuff.
The fact of the matter is - yes, complex audio and video codecs ARE based on patented technology more often than not. And there is NOTHING bad in that, and NOTHING preventing them from being open for everyone to implement and use, with a reasonable and non-discriminatory cost model.
It is only the freetards of this planet who are trying to spread FUD about well-proven international standards like H.264. Sorry guys, technology has a price - someone worked very hard to invent it. No, those companies WILL NOT give it away "for free" so the Apples and Googles of this world can use it to make money.
And, by the way - these ITU/ISO standardised technologies are in almost all modern digital TV, optical media, mobile and satellite standards. That makes them much more relevant than what some FSF-tards would like people to believe.
Ehm... because of... quality?
Because VP6 is considerably better in terms of quality at equal bitrate compared to Ogg Theora?
Meaning that bandwidth required to carry quality streams will be lower?
Or, meaning that people would get higher video quality per same bandwidth utilized?
Not to mention that streaming HD with Theora requires a ludicrous bitrate compared to H.264 or even On2 VP6. The argument that "everyone has broadband" does not cut it - first of all because it is still NOT enough for HD Theora streams, and second because someone HAS to pay for all that bandwidth being moved between servers.
Of course anyone with a hint of technical competence would choose the better codec (better in terms of quality per bit spent), as it decreases costs and improves viewing quality.
Patents and Trade Secrets
Usually patents contain only brief and basic information about the technology.
In the most complex cases, the devil is in the details of the implementation, and that implementation is typically held as trade-secret IP.
i4i did offer their technology - they were in fact negotiating with Microsoft long before the court case.
What MS did was think they could engineer around the patent (email evidence was shown in court) - and decided to say goodbye to i4i...
Well, in the end it looks like that did not play out well for Microsoft, and what they did was in fact willful infringement.
So, before you call people morons, get your facts straight, Mr. Anonymous Coward.
Unfortunately - no GMA500
I just saw that it is the GMA 3150 - a 45nm shrink of the GMA 3100.
It is not capable of hardware decoding of anything except MPEG-2. This is a really ridiculous decision on Intel's part - especially when NVIDIA's ION is available.
Actually, GMA can do HD
The GMA500, that is - the part found in Intel's "MID" and embedded Atom products, and probably the same GPU in the new Pineview parts.
However, the biggest problem of the GMA500 (a PowerVR core) is extremely crappy driver support, so most people can't see what this thing can truly deliver.
In fact, I have it decoding 1080p AVC with ~3-5% CPU load - and yes, that's on a 1.3 GHz Z5xx Atom.
I guess this is progress...
Wow, if Sony really manages to pull this off - and get $2K for an Atom-based polished toy - I am really losing faith in the future of mankind... This must be the most overpriced piece of IT equipment of the last few years.
A Core 2 Duo of the "CULV" sort can wipe the floor with any Atom - and Sony used to offer that with the TZ and TT series... These little Core 2 Duos were quite OK for many uses; with an SSD, my old TZ was actually faster for day-to-day use than many higher-clocked C2D desktops (with hard drives inside).
The only good thing here is the US15W "Poulsbo" chipset, which is perfectly able to decode even 1080p content (with the proper decoders and players) - but then, one can get Dell's Mini 10 for a fraction of the price...
What value-add do you really get with the X, apart from carbon fibre? Bragging rights?
Well, I guess - Apple proved the point that you can actually capitalize on those very well...
For all those dismissing this as an abnormality...
Would you buy a car that has spontaneously combusted in "only 11 cases"? No? I thought so :)
How many notebooks burst into flames before Sony decided to do a global battery recall? I reckon far fewer than 11...
How many phones are sold by e.g. Nokia? Hundreds of millions - still, I cannot recall this many cockups.
It is already possible to enable VT in Sony
And I am quite sure it will be possible to do it on the Fujitsu, too.
Someone managed to hack into the Insyde EFI and found out how to unlock an "Advanced" menu which offers VT-x and VT-d enable/disable (among other goodies).
I already did it on my Vaio Z - and it works great.
"If you actually used a Mac instead of bleating on internet forums about them, you'd know they do "just work."
Suuuure - that's why they have www.macfixit.com
Wow, Opera w*nkers are really over the top...
What's next - when Microsoft so much as changes the IE icon, Opera will request that Internet Explorer be renamed to "WhgrhGB0l Grhvrw1AL" - otherwise the name would be too familiar, and users would choose it for familiarity reasons alone...
Jesus, will those Opera tossers die already...
Re: Oh, so that'll not be Windows Mobile is any better then.
Last time I checked, Windows Mobile does have a feature to encrypt the data on the storage card.
There is no "catch"
First of all, the Linux kernel is GPL v2 - there is nothing inherently "evil" about Microsoft releasing Linux kernel code under the same license as the rest of the kernel. I will skip the GPL v3 "patent protection" nonsense; even Linus Torvalds is not buying it...
As for the hidden agenda, there is none - Microsoft cannot ignore the simple fact that there are tons of cheap, small Linux web servers used for e.g. hosting plans, etc. Those are all going to be virtualised and hosted on more powerful multi-core systems.
So, instead of doing nothing, they can at least try to get money for the host OS - and for this they need the guest VMs to run as fast as possible, so they have an edge over e.g. VMware.
This is actually a "win-win" for Microsoft and Linux - Microsoft cannot ignore reality, and they certainly cannot make all those Linux servers switch to Windows Server 2008 - but they can at least try to get something by offering to host those systems on a Hyper-V server.