Nvidia had better watch out. Texas Instruments is not only its rival when it comes to making ARM processors that might end up in servers someday, but it is also repositioning its digital signal processors so they can be used as math coprocessors for standard x86 CPUs – and perhaps ARM processors one day. Nvidia obviously has the …
Actually the software probably is the big advantage
Unlike GPUs, DSPs have always had an open instruction set, so they have always been fairly easy to program. It's simple enough that you can even program them in assembler.
Because of those open architectures, institutions buying those computers can develop Fortran compilers for them within weeks. And once you have Fortran, you can run most HPC software.
Software for C66xx available
Consider it done. To get maximum performance, one needs a very efficient (small code size, low communication latency) distributed (RT)OS. http://www.altreonic.com/content/opencomrtos-supports-high-performance-c66xx-dsp-texas-instruments
UK-grown software, for similar but not identical hardware...
"EDINBURGH, UK - 7 March 2011 - 3L, the leading multiprocessing and reconfiguration company, now supports the TMDSEVM6474L multicore module from Texas Instruments. Diamond 4 allows you to get applications running on the three 1GHz cores quickly and easily, making use of 3L's powerful development model that has been providing efficiency through simplicity since 1987. "
I have no connection with them other than having had the pleasure of working with them (all too briefly) many years ago; pleased to see they're still around.
Give them a ring, Mr PM.
The NeXT had a TI DSP display processor
The NeXT machines (1988-1992, more or less) had a 25 or 33 MHz 68000-series microprocessor, and a TI DSP to process the Display PostScript drawing instructions. In 1999-2000 I compared a 25MHz NeXTstation to the latest and greatest desktop Macintosh machines, and for regular workstation use - flipping windows around the screen, scrolling text, drawing pictures, etc. - the old NeXTstation seemed at least as fast (although it only did 2-bit greyscale, not colour). But compile times were abysmal compared to modern machines - I regularly had compiles that ran all night. The DSP was accessible from source code, but I never tried to do anything with it.
DSPs for graphics
Yes, at the time (late 80s to early 90s) using (discrete) DSPs for workstation graphics was pretty common. IBM's "Megapixel" display (a Sony Trinitron monitor at 1024x1024x8bpp, hence the name) for the RT PC was driven by a DSP-based adapter, for example.
If only this was 5 years ago
The performance they're talking about for the $2k board is purchasable via a GPU for around the $150 mark. As for power usage, I'd say the ATI/Nvidia high-end mobile chips would beat them on both performance and power draw, and probably some of the desktop cards as well.
A shame really, as the DSP was for many the next logical step after the FPU got integrated into the CPU, and sadly since then the DSP hasn't made a wide enough dent in usage to register on any radar of mass adoption. They have done wonders in many niche areas, and I'm deeply surprised they have not been adopted more widely as processing add-ons. Alas, beyond video editing, games and high-end maths, the home user has ended up driving graphics cards into a market the DSPs should have had nailed from day one. Now they're releasing products akin, performance-wise, to the GPUs of five years ago - OK, with better power usage than five years ago, but still not enough wow to spark mass interest, I'd say. Had they said sub-50W power usage, that would be noteworthy, but as it stands it's not jumping out at me.
I'm not au fait with the tools available to accommodate programming across any number of DSPs, but all a programmer wants to do is code the problem and avoid distractions about the architecture as much as possible. In their ideal world, the compiler should target the end platform for them and identify and parallelise their code for the platform it is running upon. The other extreme is to hand-code at assembler level, squeezing every last drop of power out of the configuration - everything in between is a balance of compromises, like having to work not only on the problem in the code but also on the platform and its various nuances, such as process handling, which distracts from the problem this compute power is meant to address. Nothing in that balance is perfect, but it's only getting better; one day you'll be able to focus just on the problem and let the compilers deal with the nuances for you.
I also remember back in the day when the Atari Falcon was announced; it had a DSP built in. Sadly, DSPs on the base motherboard never really took off from there, and the PC standard, while accommodating DSPs via PCI etc., never mandated them. Unlike graphics cards, which were pretty much required in a PC - and it is from there that we ended up in today's situation, where graphics cards have in many ways become the mass consumer device delivering the add-on processing power the DSP market has for years been craving.
Sadly, I think it is too little, too late for DSPs, at least as they're positioned currently. With some integrating with FPGAs as well, they may break into a few more niches.
Also, if they can automate the ability to leverage the parallelism the DSP architecture offers, then they may still pull it off. Maybe a custom DSP tailored towards the rendering market, even focused on it - that market is always open to cheaper, more powerful and cooler-running solutions, and it is a large, growing market with the potential to open up to home users one day. Otherwise most people are just looking for clustered FPUs, basically :\ and this doesn't compete.
I saw someone had built a suitcase sized cluster of Beagleboards on YouTube for less than that.
TLDR.... j/k :)
ahh the falcon. yes. under used DSP logic, ahh well.
and the amiga had the doublethink project to bang a DSP on the 68k bus. main idea being to handle fast (for the time) serial I/O like modem and Ethernet, but could be expanded to handle audio and other stuff too. never saw the light of day though.
both commodore and atari failing to have a consistent hardware strategy. or management strategy. or management in general.
waaaaait a sec, don't/didn't PCs use cut-down DSPs to deal with modems and the like these days? wots that AC'97 standard all about then?
i remember them dual port pcmcia cards. use a dsp at your ethernet hardware layer, and you get a modem thrown in for free. or at least that wot i fort anyways.
"the compiler should target the end platform for them and identify and parallelise their code for the platform it is running upon."
That's where I came across 3L (see above).
Parallel C (language, compiler, and runtime) targeting (sometimes reconfigurable) arrays of processing power of various kinds (Transputers back in the day, duly followed by DSP and other clever stuff).
You can find the 3L Parallel C User Guide for Transputers (vintage 1989) at
http://www.transputer.net/prog/72-tds-179-00/book.asp for the frontmatter
http://www.transputer.net/prog/72-tds-179-00/3lparcug.pdf for the full 270+ pages.
There is a freely downloadable ancient version of 3L Parallel C for the Transputer as one of the various Transputer-related bits available via
http://www.classiccmp.org/transputer/languages.htm (for best results, start at /transputer/)
I wonder, given the way high end ARMs are going in terms of performance and performance per watt and performance per watt per dollar... no, best not.