Actually, the fact that memory is temporal has been known for quite a long time.
At least since the early 90s, after the discovery of spike-timing-dependent plasticity (STDP) - http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity - it has been obvious that neurons encode information based on the temporal correlations of their input activity. By today our knowledge has been greatly expanded: we know that synaptic plasticity operates on several time scales, and we know a lot more about its biological foundations. There are also dozens of models of varying complexity, with even some simple ones able to reproduce many plasticity experiments on pairs of neurons quite well.
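To make the STDP idea concrete, here is a minimal pair-based sketch in Python. The exponential windows are the textbook form of pair-based STDP; the amplitudes and time constant are purely illustrative values, not fitted to any experiment or specific published model:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-before-post (dt > 0) potentiates (LTP); post-before-pre
    (dt < 0) depresses (LTD). Parameters are illustrative only.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # LTD branch
    return 0.0

# Causal pairing strengthens the synapse, anti-causal weakens it:
print(stdp_dw(10.0, 15.0))  # positive
print(stdp_dw(15.0, 10.0))  # negative
```

The point of the sketch is only the temporal asymmetry: the sign of the weight change depends on the *order* of the spikes, which is exactly why memory here is inherently temporal.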
Since the early 90s there has also been a lot of research into working memory and its neural correlates. While we do not have the complete picture (far from it, actually), we do know by now very well that synaptic strength is heavily dependent on temporal correlations and that biological neural networks behave like auto-associative memory. There are several models able to replicate simple things, including reward-based learning, but all in all it can be said that we are really just at the beginning of understanding how the memory of living beings works.
As for Ray Kurzweil, sorry, but anybody who can write something called "How to Create a Mind" is just preposterous. Ray Kurzweil has no clue how to create a mind. Not because he is not smart (he is), but because NOBODY on this planet has a clue how to create a mind yet. Ray does, however, obviously know how to separate people from their money by selling books that do not deliver.
If somebody offers to tell you "how to create a mind" (other than, well, procreation, which pretty much everybody knows how to do), just ask them why they did not create one themselves instead of wanting to tell you about it. That will save you some money and quite a lot of time. While I do not dispute the motivational value of such popular books, scientifically they do not bring anything new, and this particular one is just a rehash of decades-old ideas.
Re: Let a thousand flowers bloom
Well, pre-cortical brain structures can do impressive things as well.
Lizards and frogs do not have a neocortex, but they are doing pretty well at surviving. Even octopuses are pretty darn smart, and they do not even have the brain parts that lizards have.
Today we are very far even from lizard vision (or octopus vision, if you will), and for that you do not need an enormous neocortex. I am pretty sure that something on the level of lizard intelligence would be pretty cool and would excite the general populace enough.
These things are hard. I applaud Jeff's efforts, but for some reason I think this guy is getting lots of PR due to his previous (Palm) achievements while, strictly speaking, AI-wise, I do not see a big contribution yet.
This is not to say that he shouldn't be doing what he is doing - on the contrary, the more research into AI and into understanding how the brain works, the better. But too much hype and PR can damage the field, as has happened before, when the results disappoint compared to the expectations.
Re: model a neurone in one supercomputer
The only reason a computer always responds the same way to the same inputs is that the algorithm's designer made it so.
There is nothing stopping you from designing algorithms which do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because that is how the requirements were spelled out.
Mind you, even if your 'AI' algorithm is 100% deterministic, once you feed it a natural signal (visual, auditory, etc.) the responses will stop being "deterministic", due to the inherent randomness of natural signals. You can even extend this with some additional noise in the algorithm's design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).
Even a simple network of artificial neurons modeled with two-dimensional (two-variable) algorithms - relatively simple ones, such as adaptive exponential integrate-and-fire - will exhibit chaotic behavior when fed a natural signal.
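As a sketch of that claim, here is a tiny adaptive exponential integrate-and-fire (AdEx) simulation in Python. The model equations are the standard two-variable AdEx form; the parameter values are commonly quoted illustrative ones, and the input statistics are made up for the demonstration. The point is only that the identical, fully deterministic model produces different spike trains once the input carries noise:

```python
import math, random

def simulate_adex(i_mean, noise_sd, seed, t_stop=500.0, dt=0.1):
    """Euler simulation of one AdEx neuron with noisy input current.

    Units: pF, nS, mV, pA, ms. Parameter values are illustrative
    textbook ones, not tuned to any particular experiment.
    """
    rng = random.Random(seed)
    C, gL, EL = 281.0, 30.0, -70.6
    VT, DeltaT = -50.4, 2.0
    tau_w, a, b = 144.0, 4.0, 80.5
    V, w = EL, 0.0
    spikes, t = [], 0.0
    while t < t_stop:
        I = i_mean + rng.gauss(0.0, noise_sd)          # noisy "natural" input
        expo = math.exp(min((V - VT) / DeltaT, 20.0))  # overflow guard
        dV = (-gL * (V - EL) + gL * DeltaT * expo - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dV * dt
        w += dw * dt
        if V >= 0.0:        # spike: reset membrane, bump adaptation
            spikes.append(round(t, 1))
            V = EL
            w += b
        t += dt
    return spikes

# Same deterministic model, different noise realizations:
s1 = simulate_adex(800.0, 200.0, seed=1)
s2 = simulate_adex(800.0, 200.0, seed=2)
print(len(s1), len(s2))   # spike trains differ between the two runs
```

With the noise turned off (`noise_sd=0.0`) the runs are bit-for-bit identical regardless of seed, which is exactly the distinction the paragraph above is making: the non-determinism lives in the input, not in the algorithm.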
As for the Penrose & Hameroff Orch OR theory, several predictions it made have already been disproved, making it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need a new theory, one that makes testable predictions which are then confirmed. Penrose & Hameroff's Orch OR is not that theory.
Re: This is the case for open source operating systems.
Jesus effin' Christ - Debian generated useless pseudorandom numbers for almost a year and a half.
NOBODY spotted the gaping bug for >months<.
No, it is >not< possible to guarantee that software is 100% backdoor-free - open or closed, it does not matter.
Linux, like any modern OS, is full of vulnerabilities (Windows is no better, and neither is Mac OS X). Some of these vulnerabilities >might< be there on purpose.
The only thing you can do is to trust nobody and do the best security practice - limited user rights, firewalls (I would not even trust just one vendor), regular patching, minimal open ports on the network, etc. etc.
They probably mean the Xeon E5, as the Xeon E7 is by no means the "latest": it is waiting to be upgraded to Ivy Bridge EX in Q1/2014 and is currently based on the now-ancient Westmere microarchitecture.
The Xeon E5 is based on the Sandy Bridge uarch, and the upgrade to Ivy Bridge is imminent (in a couple of weeks) - however, E5 is limited to 4 sockets unless you use a third-party connectivity solution such as NUMAlink.
Re: IB and Haswell a big disappointment
This news article is focused on Xeon, not the consumer line.
Ivy Bridge EP brings 50% more cores (12 vs. 8) within the same TDP bracket. This >is< significant for the target market of the Xeon chips.
Ivy Bridge EX scales in the same way - 120 cores in 8P configuration vs. 80 cores available today.
This is nothing short of impressive, considering the fact that Ivy Bridge is just a "tick".
Re: Ultrabook debacle
Actually, the first "ultra thin" notebook was the Sony X505, introduced by Sony in 2004.
Google it - that was a good couple of years before the MacBook Air.
Of course, Sony being Sony, they marketed the device at the CEO/CTO types and priced it accordingly (it was well above 3K EUR in Germany). Hence, it was not very successful.
But in terms of actual invention, this was "it". Apple just took a saner approach and priced the Air in the range of an "affordable luxury" item - certainly not cheap, but well within reach of the middle class.
The same flop (typical of Sony) was repeated with the Z series - Sony made a dream machine which was more powerful than most MacBook Pros (before the 15" Retina) yet lighter, and actually thinner, than the first-gen 13" Air. And with a Full HD 13" screen since 2010 - something that took Apple quite a bit of time to catch up with. All in all, a perfect notebook - I know, since I owned every Z model before I switched to the MacBook Retina 15".
Again, thanks to their ridiculous business model and their practice of stuffing in crapware (at some point Sony even had the audacity to ask $50 for a "clean" OS installation), the world will remember the Apple MacBook Air and Retina as the exemplars of ultra-thin and ultra-powerful machines, and not the Sony X and Z series.
However, nothing changes the fact that it was Sony delivering the innovation years before Apple.
Normally, additional features that command a premium are fused out in the 'common' silicon and enabled only for special SKUs.
To temporarily enable a fused-out feature you would need several things, none of which are present in the computers employed in ordinary businesses. And even if you had all the tooling and clearances (which is next to impossible), the process of temporarily enabling it would not go unnoticed. Hardly something that can be used for exploitation. There are much easier avenues - including the increasingly rare kernel-level exploits.
Re: Talk about apples and moonrocks
Sorry, but you obviously do not know what you are talking about.
Intel's modern CPUs use their own micro-ops internally. The x86 instruction set is there only as a backwards-compatibility measure and gets decoded into series of micro-ops (besides, modern instructions such as AVX have nothing to do with the ancient x86). Today's Sandy Bridge / Ivy Bridge architecture has almost nothing in common with, say, the Pentium III or even Core. Intel tweaks its architectures in the "tick" cycle, but the "tock" architectures are brand new and very different from each other.
As for the x86 instruction set, and the age-old mantra (coming, I believe, from Apple fanboys) that x86 is inherently more power-hungry, this has lately been nicely disproved by the refined Intel Atoms (a 2007 architecture, mind you), which are pretty much on par with modern ARM architectures in terms of power consumption.
I am not a fan of x86 at all, I use what gets my job done in the best possible way.
But when I read things like this, it really strikes me how people manage to comment on something they obviously do not understand.
Re: Where is the advantage?
Sorry, the fact that a language is more open to script kiddies does not make it any more "high level", nor does it make buggy code any less likely. Crappy code is crappy code; it is caused by the developer, not by the language.
The advantage of NaCl would be performance - if someone needs it for, say, multimedia, gaming, etc. It does not mean that everybody needs to use it, but it would be good to have as an option. The fact that computers are getting faster does not make it any less relevant, as modern multimedia and gaming are always pushing the limits of the hardware.
Re: The Kernel of Linux
Car engine management?
Yeah, that's exactly where I'd want a general-purpose OS like Linux...
Thankfully, the automotive industry has still not gone crazy, and critical ECUs are driven by hardened RTOSes.
Re: So, not exactly Orac then
My bet is on the truly neuromorphic hardware.
Anything else introduces unnecessary overheads and bottlenecks. While initial simulations for model validation are fine on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require eliminating those overheads and bottlenecks in order not to consume megawatts.
The problem with neuromorphic hardware is of the "chicken and egg" type - to expect large investments, there needs to be a practical application which clearly outperforms the traditional Von Neumann sort, and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level needed to make something really useful.
Recognizing basic patterns with lots of tweaking is cool - but for a practical application it is not really going to cut it, as the same purpose could be achieved with much cheaper "conventional" hardware.
If cortical modelling is to succeed, I'd guess it needs to achieve goals which would make it useful for military/defense purposes (it can be something even basic mammals are good at - recognition; today's computers still suck big time when fed uncertain/natural data). Then the whole discipline will certainly get a huge kick to the next level.
Even today there are large sources of funding - say, the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - and coupled with the pumping-up of the general public's and politicians' expectations, the consequences of failure would be disastrous and could potentially lead to another "winter" similar to the AI winters we have already had.
This is why I am very worried about people claiming that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline due to cuts in funding. I, for one, would much rather prefer smaller goals - and if we exceed them, so much the better.
Re: Better platform needed
There is still the tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest percentage of connections are local), there is still a staggering number of long-range connections going across the brain. The average human brain contains on the order of hundreds of thousands of kilometers (nope, that is not a mistake) of synaptic "wiring".
Currently, our technologies for wiring things over longer distances are not yet comparable to Mother Nature's. Clever coding schemes can mitigate this somewhat (but then you need to plan space for mux/demux, and those things consume energy) - still, the problem is far from tractable with today's tech.
Re: Strong AI will, of course, use Linux
Operating system choice has absolutely nothing to do with brain modelling.
Most models are initially done in Matlab, which exists on Linux, OS X and Windows.
Then, applying this in large-scale practice is simply a question of tooling, and tooling exists on all relevant operating systems today. You have CUDA and OpenMP on Linux and Windows. Heck, you even have the Intel compiler on both Linux and Windows, if you love x86. It is more a practical choice, driven by the other requirements.
On the other hand, it is true that there is a large selection of supporting tools (such as FreeSurfer) existing on Linux and not on Windows. But then, anybody can run anything in a virtual machine nowadays.
Re: So, not exactly Orac then
Hmm, machine language would be a huge waste of time, as you could accomplish the same with an assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up slower than the output of an optimizing C/C++ compiler.
What could make sense is assembly-level optimization of critical code paths, say synaptic processing. But even then you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.
However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you would need at least 2 floating-point variables per synapse and several floating-point variables per neuron compartment.
Now, if your simulation resolution is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of), you need to do 1000 * number_of_synapses * N (N=2) floating-point reads per second, the same number of writes, and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically-plausible neural networks.
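The arithmetic above can be made explicit. The round numbers are assumptions: a bee-sized brain taken as ~1 million neurons with ~1 billion synapses, single-precision (4-byte) state variables, and a read plus a write of each variable every 1 ms step:

```python
# Back-of-envelope memory traffic for synaptic state updates.
# Assumed, illustrative figures - not exact biology:
updates_per_second = 1000          # 1 ms simulation step
synapses = 1_000_000_000           # ~1e9 synapses in a bee-sized brain
floats_per_synapse = 2             # 2 state variables per synapse
bytes_per_float = 4                # single precision
read_plus_write = 2                # each variable is read and written back

traffic = (updates_per_second * synapses * floats_per_synapse
           * bytes_per_float * read_plus_write)
print(traffic / 1e12, "TB/s")      # on the order of tens of TB/s
```

Even with these deliberately conservative numbers the figure lands in the terabytes-per-second range, which is why the language choice is noise compared to the memory wall.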
Java or not...
Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).
The biggest problem is the brain's "architecture", which is >massively< parallel. For example, a typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons, with 10,000x as many synapses. Although Mother Nature did a lot of optimization in the wiring, so the network is actually of a "small world" type (most connections between neurons are local, with a small number of long-range connections, so that wiring - and therefore energy - is conserved), it is still very unsuitable for the Von Neumann architecture and its bus bottleneck.
For example, you can try this:
This is a small cortical simulator I wrote, highly optimized for the Intel architecture (making heavy use of SSE and AVX). It uses a multi-compartment model of neurons which is not biophysical but phenomenological (designed to replicate the desired neuron behavior - that is, spiking - very accurately, without having to calculate all the intrinsic currents and other biological variables we do not know).
Still, to simulate 32768 neurons with ~2 million synapses in real time you will need ~120 GB/s of memory bandwidth (I can barely do it with two Xeon 2687W chips and heavily overclocked DDR-2400 RAM!). You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, and the memory bandwidth requirements remain nothing short of staggering.
Then there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), discovered in the late 90s, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, the postsynaptic-voltage-dependent plasticity of Clopath et al.), but they are still unable to account for some experimentally observed facts. And then, how many phenomena are still not even discovered experimentally?
Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have far more glial cells, which were thought to be just supporting "material", but now we know that glia actively contribute to the working of neural networks and have signalling of their own...
Then, we still do not have much of a clue about how neurons actually wire. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation, with violations already discovered in vivo. As we know that neural networks mature and evolve depending on their inputs, figuring out how neurons wire together is of the utmost importance if we are really to figure out how this thing works.
In short, today we are still very far away from understanding how brain works in the detail required to replicate it or fully understand it down to the level of neurons (or maybe even ions).
However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the Von Neumann architecture of computing machines, and emulating brains on Von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.
Java or C++ - it does not really matter that much on those scales :(
Re: I've gone for large capacity.
Not all SSDs are created equal - SLC SSDs can sustain writes on the order of petabytes, and that is why they are used in the enterprise sector.
You are comparing the simple ability to find what you want (application-wise) online - by, say, googling for it - with writing a 4-line bash script?
Wow... standards have indeed fallen these days. I guess in 10 years even mere searching will be considered an intellectual chore.
If money is no object THIS is the ultimate "extreme desktop CPU"
OK, it is not exactly "desktop", but it is a workstation - close enough :) And it allows dual-CPU configurations.
I have two of these puppies, and the software I write is very happy with the speedup. Not to mention that it is quite easy to crank Samsung's 30nm ECC "green memory" up to 2133 MHz (the official spec of the RAM, and the Xeon E5's allowed maximum, is 1600 MHz), which makes any large-scale biological simulation quite happy due to the insane memory bandwidth (~69 GB/s).
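The ~69 GB/s figure is just the theoretical peak of the memory interface. A quick sanity check, assuming the Xeon E5's four memory channels and a 64-bit (8-byte) bus per channel:

```python
# Theoretical peak bandwidth of quad-channel DDR3 at 2133 MT/s.
# Assumptions: 4 memory channels, 64-bit (8-byte) data bus each.
transfers_per_second = 2133e6   # 2133 MT/s after overclocking
bytes_per_transfer = 8          # 64-bit channel width
channels = 4                    # quad-channel Xeon E5 memory controller

peak = transfers_per_second * bytes_per_transfer * channels
print(peak / 1e9, "GB/s")       # roughly the ~69 GB/s quoted above
```

Real sustained bandwidth is of course lower than this peak, but it shows where the quoted number comes from.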
Intel decided to cripple the 3960X/3930K by fusing off two cores, as they understand that overclockers will push the voltages north of 1.3 V... If they had left all 8 cores on, this would generate extreme amounts of heat, which would be very hard to evacuate from today's desktop setups unless they are cooled by some heavy-duty water cooling setup.
The funniest part...
Is that TomTom sells a "Speed Cameras" service to its users, helping them to avoid speed traps...
Then they sell the data to the cops, so the cops can maximize their revenue.
Looks like the dream business model :-)
"Matt Asay is chief operating officer of Ubuntu commercial operation Canonical. With more than a decade spent in open source"
Gee... doesn't look like a Redmond PR-droid, does he?
Re: Instead of telling us where the cameras are ...
@Harry - Actually, TomTom does support warning you if you exceed the speed limit. It is a standard feature in all TomTom navigation units.
Re: Speed Camera Databased
Actually, it is only Switzerland that is >very< anal about speed camera POI databases; they threatened TomTom and the other GPS device manufacturers that they would stop sales of their devices if they did not remove the feature.
In Germany it is also illegal to have such an aid in a car, but nobody can be bothered... You'll see TomTom GPS devices with "Speed Camera Database" advertisements in tech stores like Saturn. Illegal to use, legal to sell, that is.
Please educate yourself about the matter you are writing about
AAC (like MP3 or H.264) has NOTHING to do with DRM. It is a worldwide international standard (ISO/IEC 14496 in the case of AAC, ISO/IEC 11172 in the case of MP3) and is related to audio coding only.
Encrypting this with your DRM has nothing whatsoever to do with the AAC (or H.264, or MP3) standard itself. Hell, you can even put Ogg Vorbis in a DRM container if you wish. This is purely the decision of the implementer. Standards alone neither enforce nor prohibit DRM.
Second, H.264, MP3 and AAC are as open as it gets - they are ALL fully documented and available from ISO/ITU directly (or from your national standards body), with open-source reference C/C++ software available as well. When the patents expire, they will be fully public domain - much more "free" (as in freedom) than some GPLed stuff.
The fact of the matter is - yes, complex audio and video codecs ARE based on patented technology more often than not. And there is NOTHING bad in that, and NOTHING preventing them from being open for everyone to implement and use under a reasonable and non-discriminatory cost model.
It is only the freetards of this planet who are trying to spread FUD about well-proven international standards like H.264. Sorry guys, technology has a price - someone worked very hard to invent it. No, those companies WILL NOT give it away "for free" so that the Apples and Googles of this world can use it to make money.
And, by the way, these ITU/ISO-standardized technologies are in almost all modern digital TV, optical media, mobile and satellite standards. That makes them much more relevant than some FSF-tards would like people to know.
Ehm... because of... quality?
Because VP6 is quite a bit better in terms of quality per equal bitrate compared to Ogg Theora?
Meaning that bandwidth required to carry quality streams will be lower?
Or, meaning that people would get higher video quality per same bandwidth utilized?
Not to mention that streaming HD with Theora requires a ludicrous bit rate compared to H.264 or even On2 VP6. The argument that "everyone has broadband" does not cut it - first of all because it is still NOT enough for HD Theora streams, and second because someone HAS to pay for all that bandwidth being moved between servers.
Of course anyone with a hint of technical competence would choose the better codec (better in terms of quality per bits spent), as it decreases costs and improves the viewing experience.
Patents and Trade Secrets
Usually, patents contain only brief and basic information about the technology.
In most complex cases, the devil is in the details of the implementation, and that implementation is typically held as trade-secret IP.
i4i did offer their technology - they were in fact negotiating with Microsoft long before the court case.
What MS did was decide they could work around the patent (email evidence was shown in court) - and say goodbye to i4i...
Well, in the end it did not play out well for Microsoft, and what they did was in fact willful infringement.
So, before you call people morons, get your facts straight, Mr. Anonymous Coward.
Unfortunately - no GMA500
I just saw it is the GMA 3150 - a 45nm shrink of the GMA 3100.
It is not capable of hardware decoding of anything except MPEG-2. This is a really ridiculous decision on Intel's side - especially when NVidia's ION is available.
Actually, GMA can do HD
The GMA500, that is - the one found in Intel's "MID" and embedded Atom parts, and probably the same GPU as in the new Pineview parts.
However, the biggest problem of the GMA500 (PowerVR) is extremely crappy driver support, so most people can't actually see what this thing can truly deliver.
In fact, I have it decoding 1080p AVC at ~3-5% CPU load - and yes, that's on a 1.3 GHz Z5xx Atom.
I guess this is progress...
Wow, if Sony really manages to pull this off and get $2K for an Atom-based polished toy, I am really losing faith in the future of mankind... This must be the most overpriced piece of IT equipment of the last few years.
A Core 2 Duo of the "CULV" sort can wipe the floor with any Atom - and Sony used to offer exactly that with the TZ and TT series... Those little Core 2 Duos were quite OK for many uses; with an SSD, my old TZ was actually faster for day-to-day use than many higher-clocked C2D desktops (with hard drives inside).
The only good thing in this is the US15W "Poulsbo" chipset, which is perfectly able to decode even 1080p content (with the proper decoders and players) - but then, one can get Dell's Mini 10 for a fraction of the price...
What value-add do you really get with the X, apart from carbon fiber? Bragging rights?
Well, I guess Apple proved the point that you can actually capitalize on those very well...
For all those dismissing this as an abnormality...
Would you buy a car that has spontaneously combusted in "only 11 cases"? No? I thought so :)
How many notebooks burst into flames before Sony decided to do a global battery recall? I reckon far fewer than 11...
How many phones are sold by, e.g., Nokia? Hundreds of millions - and still, I cannot recall this number of cockups.
It is already possible to enable VT in Sony
And I am quite sure it will be possible to do it on Fujitsu machines, too.
Someone managed to hack into the Insyde EFI and find out how to unlock an "Advanced" menu which offers VT-x and VT-d enable/disable toggles (among other goodies).
I have already done it on my Vaio Z - and it works great.
"If you actually used a Mac instead of bleating on internet forums about them, you'd know they do "just work."
Suuuure - that's why www.macfixit.com exists.
Wow, Opera w*nkers are really over the top...
What's next - when Microsoft so much as changes the IE icon, will Opera request that Internet Explorer be renamed to "WhgrhGB0l Grhvrw1AL"? Otherwise the name would be too familiar, and users would choose it for familiarity reasons alone...
Jesus, will those Opera tossers die already...
Re: Oh, so that'll not be Windows Mobile is any better then.
Last time I checked, Win Mobile does have a feature to encrypt the data on the storage card.
There is no "catch"
First of all, the Linux kernel is GPLv2 - there is nothing inherently "evil" about Microsoft releasing Linux kernel code under the same license as the rest of the kernel. I will skip the GPLv3 "patent protection" nonsense; even Linus Torvalds is not buying it...
As for the hidden agenda, there is none - Microsoft cannot ignore the simple fact that there are tons of cheap small Linux web servers used for, e.g., hosting plans... Those are all going to be virtualized and hosted on more powerful multi-core systems.
So, instead of doing nothing, they can at least try to get the money for the host OS - and for this they need the client VMs to run as fast as possible, so they can have an edge over, e.g., VMware.
This is actually a "win-win" for Microsoft and Linux - Microsoft cannot ignore reality; they certainly cannot make all those Linux servers switch to Windows Server 2008, but they can at least try to get something by offering to host those systems on a Hyper-V server.