29 posts • joined Tuesday 21st July 2009 08:38 GMT
Normally, additional features that command a premium are fused out in 'common' silicon and enabled only for special SKUs.
To temporarily enable a fused-out feature you would need several things, none of which are present in the computers used in ordinary businesses. And even if you had all the tooling and clearances (which is next to impossible), the process of temporarily enabling it is not going to go unnoticed. Hardly something that can be used for exploitation. There are much easier avenues - including the increasingly rare kernel-level exploits.
Re: Talk about apples and moonrocks
Sorry, but you obviously do not know what you are talking about.
Intel's modern CPUs have different micro-ops which are used internally. The x86 instruction set is only kept as a backwards-compatibility measure and gets decoded into a series of micro-ops (besides, modern instructions such as AVX have nothing to do with the ancient x86). Today's Sandy Bridge / Ivy Bridge architecture has almost nothing in common with, say, the Pentium III or even Core. Intel tweaks its architectures in the "tick" cycle, but the "tock" architectures are brand new and very different from each other.
As for the x86 instruction set, and the age-old mantra (coming, I believe, from Apple fanboys) that x86 is inherently more power-demanding - this has been nicely disproved lately by the refined Intel Atoms (a 2007 architecture, mind you), which are pretty much on par with modern ARM architectures in terms of power consumption.
I am not a fan of x86 at all, I use what gets my job done in the best possible way.
But when I read things like this, it really strikes me how people manage to comment on something they obviously do not understand.
Re: Where is the advantage?
Sorry, the fact that a language is more open to script kiddies does not make it any more "high level", nor does it make buggy code any less likely to occur. Crappy code is crappy code; it is caused by the developer, not by the language.
The advantage of NaCl would be performance - if someone needs it, say, for multimedia, gaming, etc. It does not mean that everybody needs to use it, but it would be good to have it as an option. The fact that computers are getting faster does not render it any less relevant, as modern multimedia/gaming is always pushing the limits of the hardware.
Re: The Kernel of Linux
Car engine management?
Yeah, that's exactly where I'd want a general purpose OS like Linux...
Thankfully, the automotive industry is still not going crazy, and critical ECUs are driven by hardened RTOSes.
Re: So, not exactly Orac then
My bet is on the truly neuromorphic hardware.
Anything else introduces unnecessary overhead and bottlenecks. While it is fine to run the initial simulations for model validation on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require eliminating those overheads and bottlenecks in order not to consume megawatts.
The problem with neuromorphic hardware is of the "chicken and egg" type - to attract large investments there needs to be a practical application which clearly outperforms the traditional von Neumann sort, and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level needed to make something really useful.
Recognizing basic patterns with lots of tweaking is cool - but for a practical application it is not really going to cut it, as the same purpose could be achieved with much cheaper "conventional" hardware.
If cortical modelling is to succeed, I'd guess it needs to achieve goals which would make it useful for military/defence purposes (it can be something even basic mammals are good at - recognition; today's computers still suck big time when fed uncertain/natural data). Then the whole discipline will certainly get a huge kick to go to the next level.
Even today there is a large source of funding - say, the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - coupled with the pumping-up of the general public's and politicians' expectations, the consequences of failure would be disastrous and could lead to another "winter" similar to the AI winters we have had.
This is why I am very worried about people claiming that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline due to cuts in funding. I, for one, would much prefer smaller goals - and if we exceed them, so much the better.
Re: Better platform needed
There is still the tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest percentage of connections are local), there is still a staggering number of long-range connections that go across the brain. The average human brain contains on the order of hundreds of thousands of kilometres (no, that is not a mistake) of synaptic "wiring".
Currently our technologies for wiring things over longer distances are not yet comparable to Mother Nature's. Clever coding schemes can mitigate this somewhat (but then you need to plan space for mux/demux, and those things consume energy) - still, the problem is far from tractable with today's tech.
Re: Strong AI will, of course, use Linux
Operating system choice has absolutely nothing to do with brain modelling.
Most models are initially done in Matlab, which exists on Linux, OS X and Windows.
Then, applying this in large-scale practice is simply a question of tooling, and the tooling exists on all relevant operating systems today. You have CUDA and OpenMP on Linux and Windows. Heck, you even have the Intel compiler on both Linux and Windows if you love x86. It is more a practical choice, down to the other requirements.
On the other hand, it is true that there is a large set of support tools (such as FreeSurfer) that exist on Linux and not on Windows. But then, anybody can run anything in a virtual machine nowadays.
Re: So, not exactly Orac then
Hmm, machine language would be a huge waste of time, as you could accomplish the same with an assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up slower than the output of an optimizing C/C++ compiler.
What could make sense is assembly-level optimization of critical code paths, say synaptic processing. But even then, you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.
However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you need at least 2 floating-point variables per synapse and several floating-point variables per neuron compartment.
Now, if your simulation timestep is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of) - you need to do 1000 * number_of_synapses * N (N=2) floating-point reads per second, the same number of writes, and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically plausible neural networks.
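The arithmetic above can be sketched directly; the ~1e9 synapse count for a bee-sized brain is an illustrative assumption, not a measurement:

```python
# Rough estimate of per-second memory traffic for synaptic state,
# following the formula above: 1000 steps/s * synapses * N variables,
# counting both reads and writes.
TIMESTEPS_PER_SEC = 1000       # 1 ms integration step
SYNAPSES = 1_000_000_000       # ~1e9, roughly bee-brain scale (assumption)
FLOATS_PER_SYNAPSE = 2         # N = 2 state variables per synapse
BYTES_PER_FLOAT = 4            # single precision

bytes_per_step = SYNAPSES * FLOATS_PER_SYNAPSE * BYTES_PER_FLOAT * 2  # read + write
bytes_per_sec = bytes_per_step * TIMESTEPS_PER_SEC
print(f"{bytes_per_sec / 1e12:.0f} TB/s")  # 16 TB/s
```

Sixteen terabytes per second of pure state traffic, before counting spike delivery or connectivity lookups - hence "many terabytes per second of I/O".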
Java or not...
Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).
The biggest problem is the brain's "architecture", which is >massively< parallel. For example, a typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons with 10,000x as many synapses. Although Mother Nature did lots of optimization in the wiring, so the network is actually of the "small world" type (where most connections between neurons are local, with a small number of long-range connections, so that wiring - and therefore energy - is conserved), it is still very unsuitable for the von Neumann architecture and its bus bottleneck.
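Just the raw state implied by those counts is sobering. A quick back-of-envelope sketch, assuming a bare minimum of two single-precision variables per synapse:

```python
# Back-of-envelope: total synapse count and minimal storage for a
# human-scale network, using the figures quoted above. Two floats
# per synapse is an assumed bare minimum, not a measurement.
NEURONS = 100_000_000_000       # ~100 billion
SYNAPSES_PER_NEURON = 10_000
FLOATS_PER_SYNAPSE = 2
BYTES_PER_FLOAT = 4

total_synapses = NEURONS * SYNAPSES_PER_NEURON
state_bytes = total_synapses * FLOATS_PER_SYNAPSE * BYTES_PER_FLOAT
print(f"{total_synapses:.0e} synapses -> {state_bytes / 1e15:.0f} PB of state")
```

That is 1e15 synapses and ~8 petabytes of state before you store a single connection - no Von Neumann bus is moving that every millisecond.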
For example, you can try this:
This is the small cortical simulator I wrote, which is highly optimized for the Intel architecture (heavily using SSE and AVX). It uses a multi-compartment neuron model which is not biophysical but phenomenological (designed to replicate the desired neuron behaviour - that is, spiking - very accurately, without having to calculate all the intrinsic currents and other biological variables we do not know).
Still, to simulate 32768 neurons with ~2 million synapses in real time you will need ~120 GB/s of memory bandwidth (I can barely do it with two Xeon E5-2687W CPUs and heavily overclocked DDR3-2400 RAM!). You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, and the memory bandwidth requirements are still nothing short of staggering.
Then, there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), which was discovered in the late 90s, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, the postsynaptic-voltage-dependent plasticity of Clopath et al.), but they still cannot account for some experimentally observed facts. And then, how many phenomena are still not discovered experimentally?
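For reference, the classic pair-based STDP rule mentioned above fits in a few lines; the amplitudes and time constants here are illustrative values, not fitted to any dataset:

```python
import math

# Minimal pair-based STDP rule: potentiate when the presynaptic spike
# precedes the postsynaptic one, depress otherwise. Parameter values
# are illustrative assumptions.
A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(round(stdp_dw(0.0, 10.0), 5))   # pre leads by 10 ms: 0.00607
print(round(stdp_dw(10.0, 0.0), 5))   # post leads by 10 ms: -0.00728
```

The point is exactly that a rule this simple cannot be the whole story - it ignores voltage, firing rate, and triplet effects, which is why later models (Clopath et al.) were needed.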
Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have many more glial cells, which were thought to be just supporting "material", but now we know that glia actively contribute to the working of neural networks and have signalling of their own...
Then, we still do not have much of a clue about how neurons actually wire. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation, with violations already discovered in vivo. As we know that neural networks mature and evolve depending on their inputs, figuring out how neurons wire together is of the utmost importance if we are really to figure out how this thing works.
In short, today we are still very far away from understanding how the brain works in the detail required to replicate it or fully understand it down to the level of neurons (or maybe even ions).
However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the von Neumann architecture of computing machines, and emulation of brains on von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.
Java or C++ - it does not really matter that much on those scales :(
Re: I've gone for large capacity.
Not all SSDs are created equal - SLC SSDs can sustain writes on the order of petabytes, and that's why they are used in the enterprise sector.
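A minimal sketch of why the petabyte claim is plausible - total endurance is roughly capacity times P/E cycles divided by write amplification. All three figures below are illustrative assumptions for a typical enterprise SLC drive, not the specs of any real product:

```python
# Back-of-envelope SSD write endurance. All values are assumptions.
CAPACITY_GB = 200           # assumed drive capacity
PE_CYCLES = 100_000         # SLC endurance per cell (MLC is ~3-10k)
WRITE_AMPLIFICATION = 1.5   # assumed controller overhead

endurance_gb = CAPACITY_GB * PE_CYCLES / WRITE_AMPLIFICATION
print(f"~{endurance_gb / 1e6:.1f} PB of host writes")  # ~13.3 PB
```

With MLC's few-thousand P/E cycles the same drive drops to hundreds of terabytes - which is the whole "not all SSDs are created equal" point.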
You are comparing the simple ability to find what you want (application-wise) online - by, say, googling for it - with writing a 4-line bash script?
Wow... standards have indeed fallen these days. I guess in 10 years even mere searching will be considered an intellectual chore.
If money is no object THIS is the ultimate "extreme desktop CPU"
Ok, it is not exactly a "desktop", but it is a workstation - close enough :) And it allows dual-CPU configurations.
I have two of these puppies, and the software I write is very happy with the speedup. Not to mention that it is quite easy to crank Samsung's 30nm ECC "green memory" up to 2133 MHz (the official spec of the RAM, and the Xeon E5's allowed maximum, is 1600 MHz), which makes any large-scale biological simulation quite happy due to the insane memory bandwidth (~69 GB/s).
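A minimal sketch of where a figure like ~69 GB/s comes from, assuming a quad-channel DDR3 memory controller with a 64-bit bus per channel (the Sandy Bridge-EP Xeon E5 configuration):

```python
# Theoretical peak DDR3 bandwidth: transfer rate x bus width x channels.
TRANSFERS_PER_SEC = 2_133_000_000  # DDR3-2133 (mega-transfers/s)
BYTES_PER_TRANSFER = 8             # 64-bit channel
CHANNELS = 4                       # quad-channel Xeon E5

bandwidth = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER * CHANNELS
print(f"{bandwidth / 1e9:.1f} GB/s per socket")  # 68.3 GB/s
```

At the stock 1600 MHz the same formula gives ~51 GB/s per socket, which is why the overclock matters so much for bandwidth-bound simulation.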
Intel decided to cripple the 3960X/3930K by fusing off two cores, as they understand overclockers will push the voltages north of 1.3 V... If they had left all 8 cores on, this would generate extreme amounts of heat which would be very hard to remove from today's desktop setups, unless they are cooled with some heavy-duty water-cooling setup.
The funniest part...
Is that TomTom sells a "Speed Cameras" service to users, helping them avoid speed traps...
Then they sell the data to the cops, so the cops can maximize their revenue.
Looks like the dream business model :-)
Re: Speed Camera Databased
Actually, it is only Switzerland that is >very< anal about speed-camera POI databases; they threatened TomTom and other GPS device manufacturers that they would stop sales of their devices if they did not remove that feature.
In Germany it is also illegal to have such an aid in a car, but the authorities can't be bothered... You'll see TomTom GPS devices with "Speed Camera Database" advertisements in tech stores like Saturn. Illegal to use, legal to sell, that is.
Please educate yourself about the matter you are writing about
AAC (like MP3 or H.264) has NOTHING to do with DRM. It is a worldwide international standard (ISO/IEC 14496 in the case of AAC, ISO/IEC 11172 in the case of MP3) and relates to audio coding only.
Encrypting this with your DRM has nothing whatsoever to do with the AAC (or H.264, or MP3) standard itself. Hell, you can even put Ogg Vorbis in a DRM container if you wish. This is purely the decision of the implementer. Standards alone neither enforce nor prohibit DRM.
Second, H.264, MP3 and AAC are as open as it gets - they are ALL fully documented and available directly from ISO/ITU (or your country's standards body), with open-source reference C/C++ software available as well. When those patents expire, they will be fully public domain - much more "free" (as in freedom) than some GPLed stuff.
The fact of the matter is - yes, complex audio and video codecs ARE based on patented technology more often than not. And there is NOTHING bad in that, and NOTHING preventing them from being open for everyone to implement and use, with a reasonable and non-discriminatory cost model.
It is only the freetards of this planet who are trying to spread FUD about well-proven international standards like H.264. Sorry guys, technology has a price - someone worked very hard to invent it. No, those companies WILL NOT give it away "for free" so the Apples and Googles of this world can use it to make money.
And, by the way, these ITU/ISO standardized technologies are in almost all modern digital TV, optical media, mobile and satellite standards. That makes them much more relevant than what some FSF-tards would like people to know.
Ehm... because of... quality?
Because VP6 is quite a bit better than Ogg Theora in terms of quality at an equal bitrate?
Meaning that the bandwidth required to carry quality streams will be lower?
Or, meaning that people would get higher video quality for the same bandwidth?
Not to mention that streaming HD with Theora requires a ludicrous bit rate compared to H.264 or even On2 VP6. The argument that "everyone has broadband" does not cut it - first of all because it is still NOT enough for HD Theora streams, and second because someone HAS to pay for all that bandwidth being moved between servers.
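To make the "someone has to pay" point concrete, a sketch with assumed numbers - the 2x bitrate ratio and the viewer-hours figure are illustrative assumptions for the sake of the arithmetic, not measurements of Theora or H.264:

```python
# Monthly traffic served at an assumed bitrate for an assumed audience.
EFFICIENT_MBPS = 4.0        # assumed bitrate for acceptable HD quality
INEFFICIENT_MBPS = 8.0      # assume ~2x bitrate for similar quality
VIEWER_HOURS = 1_000_000    # assumed monthly viewer-hours

def tb_served(mbps: float, hours: float) -> float:
    """Terabytes of traffic: Mbit/s -> MB/s, times seconds, MB -> TB."""
    return mbps / 8 * 3600 * hours / 1e6

print(f"efficient codec:   {tb_served(EFFICIENT_MBPS, VIEWER_HOURS):.0f} TB")
print(f"inefficient codec: {tb_served(INEFFICIENT_MBPS, VIEWER_HOURS):.0f} TB")
```

Under these assumptions the less efficient codec doubles the monthly bill from 1800 TB to 3600 TB of traffic - the codec choice is a direct cost item, not an ideological one.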
Of course anyone with a hint of technical competence would choose the better codec (better in terms of quality per bit spent), as it decreases costs and improves the viewing quality.
Patents and Trade Secrets
Usually patents contain only brief and basic information about the technology.
In most complex cases, the devil is in the details of the implementation, and that implementation is typically held as trade-secret IP.
i4i did offer their technology - they were in fact negotiating with Microsoft long before the court case.
What MS did: they thought they could engineer around the patent (email evidence shown in court) - and decided to say goodbye to i4i...
Well, in the end it looks like it did not play out well for Microsoft, and what they did was in fact willful infringement.
So, before you call people morons, get your facts straight, Mr. Anonymous Coward.
Unfortunately - no GMA500
I just saw it is the GMA 3150 - a 45nm shrink of the GMA 3100.
It is not capable of hardware-decoding anything except MPEG-2. This is a really ridiculous decision on Intel's part - especially when NVIDIA's ION is available.
Actually, GMA can do HD
The GMA500, that is - the kind found in Intel's "MID" and embedded Atom parts, and probably the same GPU as in the new Pineview parts.
However, the biggest problem of the GMA500 (a PowerVR design) is extremely crappy driver support, so most people can't actually see what this thing can truly deliver.
In fact, I have it decoding 1080p AVC with ~3-5% CPU load - and yes, that's on a 1.3 GHz Z5xx Atom.
I guess this is progress...
Wow, if Sony really manages to pull this off - and get $2K for an Atom-based polished toy - I am really losing faith in the future of mankind... This must be the most overpriced piece of IT equipment of the last few years.
A "CULV"-class Core 2 Duo can wipe the floor with any Atom - and Sony used to offer that in the TZ and TT series... These little Core 2 Duos were quite OK for many uses; with an SSD, my old TZ was actually faster in day-to-day use than many higher-clocked C2D desktops (with a hard drive inside).
The only good thing here is the US15W "Poulsbo" chipset, which is perfectly able to decode even 1080p content (with the proper decoders and players) - but then, one can get Dell's Mini 10 for a fraction of the price...
What value add do you really get with X apart from carbon fiber? Bragging rights?
Well, I guess - Apple proved the point that you can actually capitalize on those very well...
For all those dismissing this as an abnormality...
Would you buy a car that has spontaneously combusted in "only 11 cases"? No? I thought so :)
How many notebooks burst into flames before Sony decided to do a global battery recall? I reckon far fewer than 11...
How many phones are sold by e.g. Nokia? Hundreds of millions - and still, I cannot recall this number of cock-ups.
It is already possible to enable VT in Sony
And I am quite sure it will be possible to do it on Fujitsu, too.
Someone managed to hack the Insyde EFI and find out how to unlock an "Advanced" menu which offers VT-x and VT-d enable/disable (among other goodies).
I already did it on my Vaio Z - and it works great
Wow, Opera w*nkers are really over the top...
What's next - when Microsoft so much as changes the IE icon, Opera will request that Internet Explorer be renamed to "WhgrhGB0l Grhvrw1AL" - otherwise the name would be too familiar, and users would choose it for familiarity reasons alone...
Jesus, will those Opera tossers die already...
There is no "catch"
First of all, the Linux kernel is GPLv2 - there is nothing inherently "evil" about Microsoft releasing Linux kernel code under the same license as the rest of the kernel. I will skip the GPLv3 "patent protection" nonsense; even Linus Torvalds is not buying it...
As for the hidden agenda, there is none - Microsoft cannot ignore the simple fact that there are tons of cheap small Linux web servers used for e.g. hosting plans. Those are all going to be virtualized and hosted on more powerful multi-core systems.
So, instead of doing nothing, they can at least try to get money for the host OS - and for this they need the client VMs to run as fast as possible, so they can have an edge over e.g. VMware.
This is actually a "win-win" for Microsoft and Linux - Microsoft cannot ignore reality, and they certainly cannot make all those Linux servers switch to Windows Server 2008 - but they can at least try to get something by offering to host those systems on a Hyper-V server.