* Posts by Frank Rysanek

79 posts • joined 2 Oct 2007

Post-pub nosh neckfiller: Bryndzové halušky

Frank Rysanek

Halušky with cabbage

Regarding the alternative recipe with cabbage - yes, that's the less radical version, making Halušky more accessible to non-Slovaks :-) The cabbage is supposed to be pickled/fermented/sour (Sauerkraut), definitely not fresh and crisp. I'm not sure at what stage the cabbage gets mixed in - it's definitely not served separately and cold.

0
0
Frank Rysanek

Bryndza

Without Bryndza, you cannot say you ate the real deal. The gnocchi-like "carrier", although some may like it alone (I do :-), is just a dull background to the incredible and breathtaking flavour of genuine Bryndza. Not sure if any British sheep cheese can rival the raw animal energy of the Slovak Bryndza. Unforgettable. I'm not a Slovak - to me, once was enough.

0
0

BAE points electromagnetic projectile at US Army

Frank Rysanek

the one thing I don't get...

How do you fire this, without nuking your own onboard electronics?

0
0

Gates and Ballmer NOT ON SPEAKING TERMS – report

Frank Rysanek

Re: to buy a failing company

Buying a company in trouble can be a successful strategy for some investors.

If it wasn't for the fact that Nokia was a technology giant, it might have been a classic Warren Buffett pick.

The Nokia phone business did have several steady revenue streams, several development teams working on interesting projects, and several good products just launched or close to launch (which could have been iterated on in following generations, but weren't). As far as I can tell from the outside, they might well have kept going at a profit, if they had had a chance to selectively shed some fat in terms of staff and internal projects, get more focused, and "stop switching goals halfway there".

Microsoft's only plan with Nokia is to have a vehicle of its own for Windows Phone, which means that much of Nokia's bread-and-butter technology legacy has been wasted, and many legacy Nokia fans have been left in a vacuum.

21
2

Business is back, baby! Hasta la VISTA, Win 8... Oh, yeah, Windows 9

Frank Rysanek

why upgrade; OS maintenance over the years

In terms of the underlying hardware architecture, the last true HW motive that would warrant an upgrade, for me, was the introduction of message-signaled interrupts. MSI has the potential to relieve us all of shared IRQ lines. It required a minor update of the driver framework - and I'm sure this could've been introduced in Windows XP with an SP4. Instead, it got introduced as part of Vista (or let's discard Vista and say "Seven"), as part of a bigger overhaul of the driver programming model, from the earlier and clumsier WDM to the more modern and supposedly "easier to use" WDF. Along came a better security model for user space. With all of these upgrades in place, I believe that Windows 7 could go on for another 20 years without changing the API for user space. No problem writing drivers for new hardware. USB3 and the like don't bring a paradigm shift - just write new drivers, and the change stays "under the hood". Haswell supposedly brings finer-grained / deeper power management... this too could stay under the hood in the kernel, perhaps catered for by a small incremental update to the kernel-side driver API.

Linux isn't inherently seamless over the long term either. An older distro won't run on much newer hardware, as the kernel doesn't have drivers for it, and if you replace an ages-old kernel with something much more recent, you'll have to face more or less serious discrepancies in the kernel/user-space interfaces. Specifically, the graphics driver framework between the kernel and X has been evolving gradually, and e.g. some "not so set in stone" parts of the /proc and /sys directory trees have also changed, affecting marginal stuff such as hardware health monitoring. Swapping kernels across generations in some simple old text-only distro is a different matter (it can work just fine within limits), but that's not the case for desktop users. Ultimately it's true that in Linux the user has much more choice between distros and between window managers within a distro, and gradual upgrades to the next major version generally work. And your freedom to modify anything, boot over a network etc. is much greater than in Windows. Specifically, if it wasn't for Microsoft pushing the UEFI crypto-circus into every vendor's x86 hardware, you could say that Linux is already easier to boot / clone / move to replacement hardware than Windows 7/8 (the boot sequence and partition layout are easier to control in Linux, with fewer artificial obstacles).

I'm curious about Windows 9. It could be a partial return to the familiar desktop interface with a start menu, and legacy x86 Win32 API compatibility. Or it could be something very different. I've heard suggestions that Microsoft is aiming to unify the kernel and general OS architecture across desktop / mobile / telephones - to unite Windows x86 and Windows RT. From that, I can extrapolate an alternative scenario: Windows 9 (or Windows 10?) could turn out to be a "Windows CE derivative", shedding much of the legacy Win32 NT API compatibility, legcuffed to your piece of hardware using crypto-signed UEFI, and leashed to the MS cloud (no logon without IP connectivity and an MS cloud account). All of that with a "traditional desktop"-looking interface... You don't need much more from a "cloud terminal". I wouldn't be surprised.

1
0

Moon landing was real and WE CAN PROVE IT, says Nvidia

Frank Rysanek

radiosity rendering? HDR?

When did I first read about "radiosity rendering"? Deep in the nineties maybe? Though at that time it was mentioned as "the next level after raytracing"... This seems more like texture mapping (not proper raytracing), but with an additional voxelized, radiosity-style upgrade to "hardware lighting". There are probably several earlier "eye candy" technologies in the mix - objects cast shadows, and did I see reflections on the lander's legs? Not sure about some boring old stuff such as multiple mapping, bump mapping etc.

I.e., how do you make it look like a razor-sharp raytraced image with radiosity lighting, while in fact it's still just a texture-mapped thing? The incredible number-crunching horsepower needed for raytracing + radiosity has been worked around, approximated by a few clever tricks. Looks like a pile of optimizations - probably the only way to render this scene in motion, in real time, on today's gaming PC hardware. BTW, does the "lander on the moon" seem like a complicated scene? It doesn't look like a huge number of faces, does it?

I forgot to mention... that bit about "stars missing due to disproportionate exposure requirements for foreground and background" might point to support for "high dynamic range" data in the NVidia kit. The picture gets rendered into a raw HDR framebuffer, and the brightness range of the raw image is then clipped to that of a PC display (hardly even 8-bit colour depth). To mimic the "change of exposure time", all you need to do is shift the PC display's dynamic range over the raw rendering's dynamic range... Or it could be done in floating-point math. Or it could get rendered straight into 8 bits per colour (no raw intermediate framebuffer needed), just using a "scaling coefficient" somewhere in lighting or geometry...
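
To illustrate just the exposure-shift part (a toy sketch with made-up scene values, nothing to do with NVidia's actual kit):

# Toy sketch of tone-mapping a linear HDR sample down to an 8-bit display
# by shifting the "exposure". The scene values below are made-up numbers.
def expose(hdr_value, exposure):
    v = hdr_value * exposure
    return max(0, min(255, int(v * 255)))   # clip into the display's 8-bit range

foreground = 1.2       # sunlit lander, already brighter than the display can show
star = 0.00005         # faint star, far below one display step at exposure 1.0

for exposure in (1.0, 500.0):
    print(exposure, expose(foreground, exposure), expose(star, exposure))
# At exposure 1.0 the lander clips to 255 while the star rounds to 0;
# it takes a few hundred times more "exposure" before the star registers,
# by which point the foreground has long been blown out - hence no stars.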

It seems that buzzwords like HDR, radiosity or raytracing are not enough of an eyewash nowadays. The NVidia PR movie is clearly targeted at a more general audience :-)

BTW, have you ever flown in a passenger jet at an altitude of 10+ km during daytime? Most of us have... At those altitudes, you typically fly higher than the clouds. There's always the sun, and enough of your normal blue sky towards the horizon... but have you tried looking upward? Surprise - the sky looks rather black! And yet there's not a hint of stars.

1
0

Three photons can switch an optical beam at 500 GHz

Frank Rysanek

Re: Awsome.

At this switching speed and gain... wouldn't it be an interesting building block for all-optical processors? Actually I can imagine why NOT: no way to miniaturize this to a level competitive with today's CMOS lithography.

0
0

Intel's Raspberry Pi rival Galileo can now run Windows

Frank Rysanek

The Galileo has no VGA

No VGA, no point in installing Windows on the poor beast.

Well you could try with a MiniPCI-e VGA, or a USB VGA... both of which are pretty exotic, in one way or another.

3
0

OpenWRT gets native IPv6 slurping in major refresh

Frank Rysanek

Re: So much better than original FW

A switch from original firmware to OpenWRT has improved signal quality and reach? Not very likely, though not entirely impossible...

Other than that, recent-generation TP-Link hardware is a marvellous basis for OpenWRT. It runs very cool and has very few components apart from the Atheros SoC - this looks like a recipe for longevity. Only the two or three electrolytic caps would be better as solid-polymer (they're not); I haven't found any other downside.

For outdoor setups I prefer Mikrotik gear (HW+FW) in a watertight aluminum box. And even the RB912 has classic Aluminum elyts... so I cannot really scorn TP-Link for not using solid-poly in their entry-level SoHo AP's.

1
0

Dell exec: HP's 'Machine OS' is a 'laughable' idea

Frank Rysanek

Re: no need for a file system

IMO the abstraction of files (and maybe folders) is a useful way of handling opaque chunks of data that you need to interchange with other people or other machines. Keeping all your data confined to your in-memory app doesn't sound like a sufficient alternative solution to that "containerized data interchange" purpose.

3
0
Frank Rysanek

Re: a game-changer

That's a good idea for handhelds. Chuck RAM and Flash, just use a single memory technology = lower pin count, less complexity, no removable "disk drives". Instant on, always on. A reboot only ever happens if a software bug prevents the OS from going on. Chuck the block layer? Pretty much an expected evolutionary step after Facebook, MS Metro, app stores and the cloud...

1
0
Frank Rysanek

Ahem. scuse me thinking aloud for a bit

The memristor stuff is essentially a memory technology. Allegedly something like persistent RAM - not sure if memristors really are as fast as today's DRAM or SRAM.

The photonics part is likely to relate to chip-to-chip interconnects. Not likely all-optical CPU's.

What does all of this boil down to?

The Machine is unlikely to be a whole new architecture - not something massively parallel or whatever. I would expect a NUMA machine with memristors for RAM. Did the article's author mention DIMMs? The most straightforward way would be to take an x86 server (number of sockets subject to debate), run QPI/HT over fibres, and plug in memristors instead of DRAM. Or use Itanium (or ARM or Power) - the principle doesn't change much.

Is there anything else to invent? Any "massively parallel" tangent is possible, but it's not new - take a look at the GPGPU tech we have today, or the slightly different approach that Intel has taken with Larrabee and its successors. Are there any gains to be had in inventing a whole new CPU architecture? Not likely - certainly not unless you plan to depart from the general von Neumann-style NUMA. GPGPU's are already as odd and parallel as it gets while still fitting the bill for some general-purpose use. Anything more "odd and parallel" would be in the territory of very special-purpose gear, or ANN's (artificial neural networks).

So... while we stick to a NUMA with "von Neumann"-style CPU cores clustered in the NUMA nodes, is it really necessary to invent a whole new OS? Not likely. Linux and many other OS'es can run on a number of CPU instruction sets, and are relatively easy to port to a new architecture. Theoretically it would be possible to design a whole new CPU (instruction set) - but does that prospect sound fruitful? Well, not to me :-) We already have instruction sets, CPU and SoC flavours within each particular family, and complete platforms around those CPU's, suited for pretty much any purpose a "von Neumann"-style computer can be used for, from tiny embedded things to highly parallel datacenter / cloud hardware.

You know what Linux can run on: a NUMA machine with some DRAM and some disks (spinning rust or SSD's). Linux can do suspend+resume. Suppose you have lots of RAM. Would it be a bottleneck that your system is also capable of block IO? Not likely :-) You'd just have more RAM to allocate to your processes and tasks. If your process can stay in RAM all the time, block IO becomes irrelevant and does not slow you down in any way. Your OS still has to allocate the RAM to individual processes, so it does have to use memory paging in some form.

You could consider modifying the paging part of the MM subsystem to use coarser allocation granularity. Modifications like this have been under way all along - huge pages have been implemented, there are debates about the right size of the basic page (or minimum allocation) compared to the typical IO block size, and possible efforts to decouple the page size from the optimum block-IO transaction size and alignment... Effectively, to optimize Linux for an all-in-memory OS, the developers managing the kernel core (and MM in particular) would be allowed to chuck some legacy junk, and they'd probably be happy to do that :-) if it wasn't for the fact that Linux tries to run on everything and stay compatible with 20-year-old hardware. But again, block IO is not a bottleneck if it's not in the "active execution path".

It doesn't seem likely that the arrival of persistent RAM would remove the need for a file system. That would be a very far-fetched conclusion :-D Perhaps the GUI's of modern desktop and handheld OS'es seem to be gravitating in that direction, but anyone handling data for a living would have a hard time imagining his life without some kind of files and folders abstraction (call them system-global objects if you will). This just isn't gonna happen.

Realistically I would expect the following scenario:

As a first step, ReRAM DIMM's would become available someday down the road, compatible with the DDR RAM interface. If ReRAM was actually slower than DRAM, x86 machines would get a BIOS update able to distinguish between classic RAM DIMM's and ReRAM (based on the SPD EEPROM contents on the DIMM's) and act accordingly.

There would be no point in running directly from ReRAM if it was slow, and OS'es (and applications) would likely reflect that = use the ReRAM as "slower storage". This is something that the memory management and paging layer in any modern OS can take care of with fairly minor modifications.

If ReRAM was really as fast as DRAM, there would probably be no point in such an optimization.

Further down the road, I'd expect some deeper hardware platform optimizations. Maybe if ReRAM was huge but a tad slower than DRAM, I would expect another level of cache, or an expansion in the volumes of hardware SRAM cache currently seen in CPU's. Plus some shuffling in bus widths, connectors, memory module capacities and the like.

So it really looks like a subject for gradual evolution. If memristors really turn out to be the next big thing in memory technology, we're likely to see a flurry of small gradual innovations to the current computer platforms, spread across a decade maybe, delivered by a myriad of companies from incrementally innovating behemoths to tiny overhyped startups, rather than one huge leap forward delivered with a bang by HP after a decade of secretive R&D. The market will take care of that. If HP persists with its effort, it might find itself swamped by history happening outside its fortress.

BTW, the Itanium architecture allegedly does have a significant edge in some very narrow and specific uses, from the category of scientific number-crunching (owing to its optimized instruction set) - reportedly, with correct timing / painstakingly hand-crafted ASM code, Itanium can achieve performance an order of magnitude faster than what's ever possible on an x86 (using the same approach). This information was current in about 2008-2010, not sure what the comparison would look like, if done against a 2014-level Haswell. Based on what I know about AVX2, I still don't think the recent improvements are in the same vein where the Itanium used to shine... Itanium is certainly hardly an advantage for general-purpose internet serving and cloud use.

As for alternative architectures, conceptually departing from "von Neumann with NUMA" and deterministic data management... ANN's appear to be the only plausible "very different" alternative. Memristors and fiber interconnects could well be a part of some ANN-based plot. Do memristors and photonics alone help solve the problems (architectural requirements) inherent to ANN's, such as truly massive parallelism in any-to-any interconnects, organic growth and learning by rewiring? Plus some macro structure, hierarchy and "function block flexibility" on top of that...

I haven't seen any arguments in that direction. The required massive universal cross-connect capability in dedicated ANN hardware is a research topic in itself :-)

Perhaps the memristors could be used to implement basic neurons = to build an ANN-style architecture, where memory and computing functions would be closely tied together, down at a rather analog level. Now consider a whole new OS for such ANN hardware :-D *that* would be something rather novel.

What would that be called, "self-awareness v1.0" ? (SkyOS is already reserved...)

Or, consider some hybrid architecture, where ANN-based learning and reasoning (on dedicated ANN-style hardware) would be coupled to von Neumann-style "offline storage" for big flat data, and maybe some supporting von Neumann-style computing structure for basic life support, debugging, tweaking, management, allocation of computing resources (= OS functions). *that* would be fun...

Even if HP were pursuing some ANN scheme, the implementation of a neuron using memristors is only a very low-level component. There are teams of scientists in academia and corporations, trying to tackle higher levels of organization/hierarchy: wiring, macro function blocks, operating principles. Some of this research gets mentioned at The Register. It would sure help to have general-purpose ANN hardware miniaturized and efficient to the level of the natural grey mass - would allow the geeks to try things that so far they haven't been able to, for simple performance reasons.

2
0

PCIe hard drives? You read that right, says WD

Frank Rysanek

Re: Whatever next? Direct Fibre Channel connections?

FibreChannel disk drives have been around for a very long time (though perhaps not any more).

This question twisted my brain into a "back to the future" déjà vu.

http://forums.storagereview.com/index.php/topic/3331-fc-al-interface/

http://www.hgst.com/tech/techlib.nsf/techdocs/439F4FF2F546AE4F86256E4400673C67/$file/10K300_FC-AL_Functional_v.6.pdf

Ahh right - don't expect a duplex LC optical socket on the drive, that "direct to drive" flavour of FC-AL was wired into an SCA connector and ran over a copper PHY...

1
0

Everything you always wanted to know about VDI but were afraid to ask (no, it's not an STD)

Frank Rysanek

Pretty good reading

I live at the other end of the spectrum - in a small company with barely enough employees to warrant some basic level of centralized IT, where most of the employees are techies who prefer to select their own PC's for purchase and manage them... It's a pretty retro environment: the centralized services are nineties-level file serving and printing, plus some VPN remote login, plus a Windows terminal server set up to cater for our single application that runs best in an RDP console on a remote server (a databasey thingy). A major PITA is how to back up the OS on notebooks with preinstalled Windows in a somewhat orderly fashion. With the demise of DOS-based Ghost and with the recent generations of Windows, the amount of work required is staggering - massaging the preinstalled ball of crud into a manageable, lean and clean original image suitable for a system restore, should the need arise, with a separate partition for data for instance. But it's less pain than trying to force a company of 20 geeks into mil-grade centralized IT.

To me, as a part-time admin and a general-purpose HW/OS troubleshooter, the article by Mr. Pott has been fascinating reading. There's a broad spectrum of IT users among our customers, and it certainly helps to "be in the picture" towards the upper end of centralized IT, even if it's not our daily bread and butter.

1
0

BEHOLD the HOLY GRAIL of TECH: The REVERSIBLE USB plug

Frank Rysanek

USB connector that fits either way up? That's on the market already...

I was shocked a couple months ago by the USB ports on this hub:

http://www.czc.cz/connect-it-ci-141-usb-2-0-hub-4-porty/130887/produkt?q-category-id=cep0kaggl8jm4aejnad83vui25

You can insert your peripherals either way up. It feels like you have to apply a bit of violence, but we're using it in a PC repair workshop and it's been working fine for several months now.

2
0

Windows 8 BREAKS ITSELF after system restores

Frank Rysanek

Re: approaching Windows 8 "the old way"

When it comes to Windows, I'm a bit of a retard... I always try to approach it based on common sense and the generic principles of the past, which probably hints at a lack of specific education on my part in the first place... I've never tried the Windows built-in backup/restore. The tool I tend to prefer for offline cloning is Ghost - the DOS flavour of Ghost. I've made it work under DOSemu in Linux (PXE-booted), and recently my Windows-happy colleagues have taught me to use Windows PE off a USB stick... guess what: I'm using that to run Ghost, to clone Windows the way *I* want it. With Windows 8 / 8.1 (and possibly 7 on some machines), there's an added quirk: after restoring Windows from an image onto the exact same hardware, you still have to repair the BCD store, which is your boot loader's configuration file. Which is fun if it's on an EFI partition, which is hidden in Windows and not straightforward to get mounted... but once you master the procedure, it's not that much trouble - I'd almost say it's worth it. Symantec has already slit the throat of the old command-line Ghost, but I'm told there are other 3rd-party tools to step into its place... I haven't tested them though.

I've been forced to go through this on a home notebook that came with Windows 8 preloaded. Luckily I have the cloning background - as a pure home user, I'd probably be lost, at the mercy of the notebook maker's RMA dept if the disk drive went up in smoke. Well, I've found the needed workarounds. And I tried to massage Windows 8.1 into a usable form, close to XP style. I've documented my punk adventure here:

http://www.fccps.cz/download/adv/frr/ACER_initial_cleanup.htm

A few days later, I had an opportunity to re-run the process following my own notes, and I had to correct a few things... and I noticed that I couldn't get it done in under 3 days of real time!

Yes, I did do other work while the PC kept crunching away, doing a backup/restore or downloading Windows updates. On a slightly off-topic note, the "hourglass comments" after the first reboot during the Windows 8.1 upgrade are gradually more and more fun (absurd) to read :-)

I've read elsewhere that before upgrading to 8.1, you'd better download all the updates available for Windows 8, otherwise the upgrade may not work out.

To me, upgrading from Windows 8 to 8.1 had a positive feel. Some bugs (around WinSXS housekeeping, for example) have vanished. But I'm also aware of driver issues, because Windows 8.1 is NT 6.3 (an upgrade from Windows 8 = NT 6.2). So if some 3rd-party driver is signed for NT 6.2, you're out of luck in Windows 8.1 if the respective hardware+driver vendor embedded the precise version (6.2) in the INF file, as the INF file also appears to be covered by the signature stored in the .CAT file... Without the signature, with many drivers (and a bit of luck), you can work around the "hardcoded version" by modifying the INF file. Hello there, Atheros... On the particular notebook from Acer it was not a problem; Intel and Broadcom apparently have drivers in Windows 8.1.
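
Purely for illustration, the INF tweak I mean is something along these lines - a hypothetical sketch (made-up file name), assuming the version is pinned in the decorated section names, and accepting that any edit invalidates the .CAT signature, so the driver can then only go in unsigned:

# Hypothetical sketch: relax a driver INF pinned to NT 6.2 (Windows 8) so its
# decorated sections also match NT 6.3 (Windows 8.1). Editing the INF breaks
# the signature in the .CAT file, so the driver then has to install unsigned.
from pathlib import Path

inf = Path("athw8x.inf")                      # made-up file name
text = inf.read_text(encoding="utf-8", errors="ignore")

# Decorations such as NTamd64.6.2 tie a models section to an OS version;
# renaming them to 6.3 is the crude workaround described above.
patched = text.replace("NTamd64.6.2", "NTamd64.6.3")

out = inf.with_name(inf.stem + "-63" + inf.suffix)   # athw8x-63.inf
out.write_text(patched, encoding="utf-8")
print("wrote", out)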

I actually did the repartitioning bit as a fringe bonus of creating an initial Ghost backup. I just restored from the backup and changed the partitioning while at it.

...did I already say I was a retard?

Windows 8 appears to be capable of *shrinking* existing NTFS partitions, so perhaps it is possible to repartition from the live system without special tools. Not sure, haven't tried it myself.

For corporate deployments of Windows 8, I'd probably investigate the Microsoft Deployment Toolkit.

That should relieve you of the painstaking manual maintenance of individual Win8 machines and of the garbage apps preloaded by the hardware vendor. It might also mean that you'd have to buy hardware without preloaded Windows, which apparently is not so easy...

1
0

Intel details four new 'enthusiast' processors for Haswell, Broadwell

Frank Rysanek

Secret thermal compound

Perhaps with the "extreme edition" they'll return to soldering the heatspreader on, the way it was in the old days (I guess). Or at least use a "liquid metal" thermocoupling stuff (think of CoolLaboratory Pro or Galinstan) rather than the white smear that they've been using since Ivy Bridge...

Myself, I'm not fond of number-crunching muscle. Rather, I drool over CPU's that don't need a heatsink (and are not crap performance-wise). I like the low-end Haswell-generation SoC's (processor numbers ending in U and Y), and I'm wondering what Broadwell brings in that vein.

1
0

The UNTOLD SUCCESS of Microsoft: Yes, it's Windows 7

Frank Rysanek

Re: With 8.1 you barely have to use the "touch interface" if you don't want to

> With 8.1 you barely have to use the "touch interface" if you don't want to

Actually... a few weeks ago I purchased an entry-level Acer notebook for my kid, with Windows 8 ex works. It was in an after-Xmas sale, and was quite a bargain: a Haswell Celeron with 8 GB of RAM... I'm a PC techie, so I knew exactly what I was buying.

Even before I bought that, I knew that I would try to massage Windows 8 (after an upgrade to 8.1) into looking like XP.

The first thing I tried to solve was... getting rid of Acer's recovery partitions (some 35 GB total) and repartitioning the drive to ~100 GB for the system and the rest for user data. I prefer to handle system backups in my own way, using external storage - and I prefer being able to restore onto a clean drive from the backup. So it took me a while to build an image of WinPE on a USB thumb drive as a platform for Ghost... from there it was a piece of cake to learn to rebuild the BCD on the (typically hidden) EFI partition. Ghost conveniently backed up only the EFI and system partitions, and ignored the Acer crud altogether :-)

Not counting the learning process, it took me maybe 3 days of almost full-time work to achieve my goal = a lean and clean Win 8.1 with an XP-ish look and feel. The steps were approximately:

1) uninstall all Acer garbage (leaving only the necessary support for custom keys and the like)

2) update Windows 8 with all available updates

3) clean up other misc garbage, the most noteworthy of which was the WinSXS directory. I did this using DISM.EXE while still in Windows 8, which was possibly a mistake. The "component install service" in the background (or whatever it's called) tended to eat a whole CPU core doing nothing... but after several hours and about three reboots it was finally finished. I later found out that it probably had a bug in Win8 and was a breeze if done in Windows 8.1... BTW, I managed to reduce WinSXS from 13.8 GB down to 5.6 GB (in several steps)... and the system backup size dropped from 12 GB down to 6 GB :-)

4) upgrade to Windows 8.1. This also took surprisingly long. It felt like a full Windows reinstall. The installer asked for several reboots, and the percentage counter (ex-hourglass) actually wrapped around several times... it kept saying funny things like "finishing installation", "configuring your system", "registering components", "configuring user settings", "configuring some other stuff" (literally, no kidding!) but finally it was finished...

5) more crapectomy (delete stuff left over from Win8 etc.)

6) install Classic Shell, adjust window border padding, create a "god mode" folder (only to find out that it's actually pretty useless), install kLED as a soft CapsLock+NumLock indicator (the Acer NTB lacks CapsLock+NumLock LEDs), replace the ludicrous pre-logon wallpaper, get rid of some other user interface nonsense...

Somewhere in between I did a total of three backups: one almost ex works, another with a clean install of Windows 8.1 (after basic post-install cleanup), and one last backup of the fully customized install - just a snapshot of the system partition stored on the data partition (for a quick rollback if the kids mess up the system).

It looks and even works (at a basic level) like Windows XP. Some aspects of the user interface work slightly differently - for instance, windows now dock to screen edges. No problem there. Even when I install some software whose installer expects the old-style start menu, the installer still creates its subfolders in the ClassicStartMenu (technically alien to Windows 8) - great job there.

But: the control panels are still Windows 8 style = bloated and incomprehensible, if you're looking for something that was "right there" in Windows XP. The search tool is still absent from the explorer's context menus - you have to use the global search box in the upper end of the Win8 sidebar. The dialogs that you need to deal with when occasionally fiddling with file privileges are just as ugly as they ever have been (they weren't much nicer in XP before the UAC kicked in in Vista).

I'm wondering if I should keep the Windows 8.1 start button, only to have that nifty admin menu on the right mouse button. The left button = direct access to the start screen (even with smaller icons) is of little use to me.

There's one last strange quirk, apparently down to the hyper-intelligent touchpad: upon a certain gesture, possibly by sweeping your finger straight across the touchpad horizontally, the Win8 sidebar jumps out and also the big floating date and time appears - and they just glare at you. This typically happens to me unintentionally - and whatever I was doing at the moment gets blocked away by this transparent Win8 decoration. It is disturbing - I have to switch my mental gears and get out of that Windows 8 shrinkwrap to get to work again... I hope it will be as easy as disabling all the intelligence in the touchpad control panel. For the moment I cannot do away with the Win8 sidebar entirely (even if this was possible) because I still need it now and then...

Some of the control panels are Metro-only - and THEY ARE A MESS! There's no "apply" button... it's disturbing that I cannot explicitly commit the changes I make, or roll them back in a harmless way. Typically, when I happen to launch some Metro panel by mistake, I immediately kill the ugly pointless beast using Alt+F4. Thank god at least that still works.

The new-generation start screen with mid-size icons is not a proper Start menu replacement. For one thing, the contents are not the same. Legacy software installs into the classic start menu, but its icons don't appear in the 8.1 start screen. And vice versa. The new start screen with small icons is better than the endless Metro chocolate bar of Windows 8, but still a piece of crap.

I hope my trusty old Acer that I use daily at work (XP-based) survives until Windows 9 - by then I'll have a chance to decide for myself whether Windows 9 is back on track in the right direction, or what my next step is. If this is everybody's mindset, it's not surprising at all that Windows 8 doesn't sell.

9
0

Satya Nadella is 'a sheep, a follower' says ex-Microsoft exec

Frank Rysanek

If he's a server man...

If Nadella is a "server" man, he might actually understand much of the dislike that power-users have been voicing towards Windows 8. He might be in mental touch with admins and veteran Windows users.

If OTOH he's a "cloud" buzzword hipster evangelist, that doesn't sound nearly as promising.

What does Microsoft's humongous profit consist of these days? Is it still selling Windows and Office? If that's the case, it has seemed to me lately that they've been doing their best to kill the hen that lays the golden eggs... They've always been capitalizing on the sheer compatibility and historical omnipresence of their Win32 OS platform and office suite. In recent years, though, they've done a good job of scaring their once faithful customers away with counter-intuitive UI changes, software bloat and mind-boggling licensing :-(

8
2

The other end of the telescope: Intel’s Galileo developer board

Frank Rysanek

Re: PC104

PC104 is a form factor rather than a CPU architecture thing - though it's true that I've seen a number of x86 CPU's in the PC104 form factor and only a few ARM's...

A PC104 board is relatively small; its special feature is the PC104 and/or PCI104 connector, making it stackable with peripheral boards in that same format. Other than that, it's also relatively expensive. And it's easy to forget about heat dissipation in the stack.

If you need a richer set of onboard peripherals or a more powerful CPU (Atom in PC104 has been a joke), you may prefer a slightly bigger board, such as the 3.5" biscuit. There are even larger formats, such as the 5.25" biscuit or EPIC, which is about as big as Mini-ITX. The bigger board formats allow for additional chips and headers on board, additional connectors along the "coastline", and additional heat dissipation.

If OTOH you need a very basic x86 PC with just a few digital GPIO pins, and you don't need an expansion bus (PCI/ISA), there are smaller formats than PC/104 - such as the "Tiny Module" (from ICOP, with Vortex) or the various SODIMM PC's.

The Arduino format is special in that it offers a particular combination of discrete I/O pins, digital and analog - and not much else... and I agree with the other writers who point out that it's a good prototyping board for Atmega-based custom circuits.

2
0
Frank Rysanek

Re: 400 Mhz?

Oh, it's got CMPXCHG8B? Means it can run Windows XP? 'cept for the missing graphics :-)

0
0
Frank Rysanek

Re: Yes yes yes! Vortex86 rules!

Speaking of chip-level and board-level product lifetime, the boards by ICOP with Vortex chips by DMP have a lifetime of quite a few years. I believe I've read 12 years somewhere, but I don't think the Vortex86DX is *that* old :-) During the 6 years or so that I've been watching ICOP, hardly any motherboard has vanished (except maybe some 3rd-party peripheral boards), new products are being added to the portfolio, and there have been maybe 2 minor updates (revisions) across the portfolio to reflect bugfixes / general improvements - while the form factors and feature sets were kept intact.

In terms of chip-level datasheets and source code, ICOP and DMP are somewhat secretive about selected marginal corner areas (the I2C controller comes to mind). Some chip-level documentation seems to be seeping out via board-level 3rd-party manufacturers... But overall the state of support for the key features is pretty good - docs, drivers for the key OS'es (including open-source drivers in vanilla Linux). Board-level support, in terms of human responsiveness from ICOP and DMP, feels miles ahead of Intel.

1
0
Frank Rysanek

Re: finger-burningly hot = well designed for passive cooling

> > And the Quark chip runs finger-burningly hot.

>

> Presumably it is engineered to do so. As were Atoms before.

>

agreed! :-) My words exactly. Many ATOM-based fanless designs are a joke.

Compare that to the Vortex86. You can typically hold your finger on that, even without any kind of heatsink if the board is large enough. On tiny boards, it takes a small passive heatsink that you can still keep your finger on after some runtime. That's for the Vortex86DX clocked at 800 MHz at full throttle. With some power saving and underclocking, it doesn't take a heatsink at all.

> And any chip well-designed for passive cooling

> (because you need a fairly large delta-T before convection gets going).

>

Thanks for explaining the mindset of all the nameless Chinese R&D folks.

I'm on the other side of the barricade - a troubleshooter with a small industrial/embedded hardware distributor, effectively paid by system integrators (our customers) to soothe their "burned fingers".

Imagine that you need an embedded PC board at the heart of some book-sized control box. That box will be mounted in a cabinet. The folks at Westermo used to say that every "enclosure" adds about 15 degrees Celsius. And you have maybe 2-3 enclosures between your heatsink and the truly "ambient" temperature. In my experience, that 15 degrees applies to very conservatively designed electronics, with sub-1W-class ARM MCU's on the inside. For computers worth the name, where you aim for some non-trivial compute power, the 15 °C is a gross under-estimation. You have to calculate with watts of power consumption (= heat loss), thermal conductivity at the surfaces, and the thermal capacity of the coolant medium (typically air) - even in the "embedded PC" business, far from the server colocation space.

Note that there are electrolytic capacitors on the motherboard, surrounding the CPU or SoC, the VRM etc. They're not necessarily the solid-polymer variety. With every 10 °C down, the longevity of these capacitors doubles. For low-ESR capacitors, it's typically specified as 2000 hours at 105 °C, twice that at 95 °C, and so on.
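
To put numbers on that rule of thumb (the 2000 h / 105 °C rating is the typical datasheet figure mentioned above; the operating temperatures are just illustration points):

# Back-of-envelope capacitor lifetime using the "doubles every 10 degC" rule.
# Rated figures are the typical low-ESR datasheet values quoted above;
# the operating temperatures are illustrative, not measurements.
def cap_lifetime_hours(rated_hours, rated_temp_c, operating_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10.0)

for t in (105, 85, 65, 45):
    h = cap_lifetime_hours(2000, 105, t)
    print(f"{t} degC -> {h:7.0f} h  (~{h / 8760:.1f} years of continuous operation)")
# 105 degC -> 2000 h (~0.2 y); 85 degC -> 8000 h (~0.9 y);
# 65 degC -> 32000 h (~3.7 y); 45 degC -> 128000 h (~14.6 y).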

Now... back to the mindset of our typical customer: it's fanless, right? so we can mount this in a tiny space in an unventilated box, run some cables in the remaining tight space on the inside... and sell that as a vehicle-mounted device... and remain unpunished, right? Let's go ahead with that...

(Where do you get convection around the CPU heatsink in that?)

The typical mindset of our suppliers' R&D is: let's ship this with the minimum possible heatsink that will allow our bare board to survive a 24-hour burn-in test in open air, without a stress-load application running on the CPU (just idling).

Some particular fanless PC models are built in the same way. The most important point is to have an aluminium extrusion enclosure with some sexy fins on the outside. It doesn't matter if only half of them are actually thermally coupled to the CPU and chipset - never mind the RAM and VRM and all the other heat sources on the inside (they'll take care of themselves). The enclosure needs to have fins and some cool finish, for eyewash effect - make it gold anodized ("elox") or harsh matte. If it looks real mean, all the better - you can put the word "rugged" in your PR datasheets. Never mind if the surface of the fins is clearly insufficient to dissipate 15 W of heat, on the back of an envelope (or just by the looks, to a seasoned hardware hacker). Perhaps the computer maker's assembly team also adds a cherry on top by "optimizing" the thermal coupling a bit: you can relax your aluminium milling tolerances if you use 1 mm of thermal-pad chewing gum. Never mind that the resulting thermal coupling adds 20 K of temperature gradient. Even better, if the internal thermal bridge block designed by R&D is massive enough, you can probably just skip the thermal paste or chewing gum altogether, to speed up the seating step on the assembly line... It takes maybe 20 minutes at full throttle before the CPU starts throttling its clock due to overheating, and the QC test only takes 2 minutes and is only carried out on every 20th piece of every batch sold.

Customer projects (close to the end user) that go titsup get later settled between purchasing and RMA departments and various project and support bosses and maybe lawyers - no pissed off troubleshooter techie has ever managed to wrap his shaking fingers around the faraway R&D monkey's throat :-)

If anyone is actually interested in practical advice on getting long-lived embedded PC's into production operation, I do have a few tips to share:

If you can keep your fingers on it, and it doesn't smell of melting plastic, it's probably okay. Do this test after 24 hours of operation, preferably in the intended target environment (enclosure, cabinet, ambient temp).

If you insist on playing with high-performance fanless stuff, do the heat math. You don't need finite-element 3D modeling, just back-of-the-envelope math: take the surface area of each enclosure times its heat transfer coefficient, and divide the wattage by that. What gradients do you come up with? Pay attention to all the heat-producing and hot components on your PCB's. All the parts inside your fanless enclosure necessarily run hotter than the "thermally coupled envelope". Putting sexy inward-facing heatsinks on hot components doesn't help much inside a fanless enclosure. Consider that adding a tiny fan turns this "roast in your own juice" feature of a fanless enclosure inside out.
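
As a worked example of that envelope math (all the figures - the 15 W, the surface areas, the roughly 5-6 W/m²K natural-convection coefficients - are assumptions for illustration, not measurements):

# Rough steady-state temperature rise per enclosure: dT = P / (h * A).
# A made-up example: a 15 W fanless brick inside a small closed cabinet.
def delta_t(power_w, area_m2, h_w_per_m2k):
    return power_w / (h_w_per_m2k * area_m2)

power = 15.0                 # watts of heat to get rid of
ambient = 35.0               # warm day near the cabinet, degC
stages = [                   # (description, surface area m^2, h in W/(m^2*K))
    ("outer cabinet surface", 0.50, 5.0),
    ("fins of the fanless brick", 0.12, 6.0),
]

t = ambient
for name, area, h in stages:
    t += delta_t(power, area, h)
    print(f"{name}: roughly {t:.0f} degC")
# The two nested "enclosures" alone stack ~25-30 K on top of ambient,
# and the silicon inside the brick sits hotter still - which is the point.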

If you intend to purchase third-party off-the-shelf fanless PC's for your project (complete with the finned enclosure), take a few precautionary measures: Take a look inside. Look for outright gimmicks and eyewash in the thermal design, and for assembly-level goof-ups (missing thermal paste). Install some OS and run some burn-in apps or benchmarks to make the box guzzle the maximum possible power. If there are temperature sensors on the inside, watch them while the CPU is busy - lm_sensors and SpeedFan are your friends. Some of the sensors (e.g. the CPU's coretemp) can be tricky to interpret in software - don't rely on them entirely; try opening the box and quickly touching its internal thermal coupling blocks and the PCB's close around the CPU.

Single-board setups should generally be more reliable than a tight stack of "generic CPU module (SOM/COM) + carrier board", considering the thermal expansion stresses between the boards in the stack. In fanless setups, the optimum motherboard layout pattern is "CPU, chipset, VRM FET's and any other hot parts on the underside" = easy to couple flat to the outside heatsink. Note that to motherboard designers this concept is alien; it may not fit well with package-level pinout layouts for easy board routing.

Any tall internal thermal bridges or spacers are inferior to that design concept.

Yet unfortunately the overall production reliability is also down to many other factors, such as soldering process quality and individual board-level design cockups... so that, sadly, the odd "big chips on the flip side" board design concept alone doesn't guarantee anything...

If you're shopping for a fanless PC, be it a stand-alone display-less brick or a "panel PC", notice any product families where you have a choice of several CPU's, say from a low-power model to a "performance mobile" variety. Watch for mechanical designs where all those CPU's share a common heatsink = the finned back side of the PC chassis. If this is the case, you should feel inclined to use the lowest-power version. This should result in the best longevity.

If you have to use a "closed box with fins on the outside" that you cannot look inside, let alone modify its internals, consider providing an air draft on the outside, across its fins. Add a fan somewhere nearby in your cabinet (not necessarily strapped straight onto the fins).

Over the years, I've come to understand that wherever I read "fanless design", it really means "you absolutely have to add a fan of your own, as the passive heatsink we've provided is barely good enough to pass a 24-hour test in an air-conditioned lab".

If your outer cabinet is big enough and closed, use a fan for internal circulation. Use quality bearings (possibly ball bearings or VAPO), possibly use a higher-performance fan and under-volt it to achieve longer lifetime and lower noise. Focus on ventilation efficiency - make sure that the air circulates such that it blows across the hot parts and takes the heat away from them.

Even an internal fan will cut the temperature peaks on internal parts that are not well dissipated/thermally coupled, thus decreasing the stress on electrolytic caps and the thermal-expansion mechanical stresses on bolts and solder joints. It will bring your hot parts on the inside to much more comfortable temperature levels, despite the fact that on the outer surface of your cabinet, the settled temperature will remain unchanged!

If you merely want a basic PC to display some user interface, with no requirements on CPU horsepower, and for some reason you don't like the ARM-based panels available, take a look at the Vortex. Sadly, Windows XP is practically dead and XPe is slowly dying, and that's about the ceiling of what a Vortex can run. Or you can try Linux. You get paid back with 3-5 watts of power consumption and hardware that you can keep your finger on.

Examples of a really bad mindset: "I need a Xeon in a fanless box, because I like high GHz. I need the absolute maximum available horsepower." Or "I need high GHz for high-frequency polling, as I need sub-millisecond response time from Windows and I can't design proper hardware to do this for me." Or "I need a speedy CPU because I'm doing real-time control in Java and don't use optimized external libraries for the compute-intensive stuff." I understand that there *are* legitimate needs for big horsepower in a rugged box, but they're not the rule in my job...

5
1
Frank Rysanek

Yes yes yes! Vortex86 rules!

Mind the EduCake thing at 86duino.com - it's a "shield" in the form of a breadboard.

You get a Vortex-based 86duino (= the Arduino IDE applies) with a breadboard strapped on its back.

I'm still waiting for some docs from DMP about the Vortex86EX's "motion controller" features. It should contain at least a rotary-encoder input = a quadrature counter for two-phase, 90-degree-shifted signals; I'm not sure if it's capable of hardware-based pulse+dir, or "just" PWM.
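
For anyone wondering what such a quadrature counter actually does, here's a toy software model of the same decoding logic (the A/B samples are simulated, not read from a real Vortex pin):

# Software model of a quadrature decoder for 2-phase, 90-degree-shifted
# signals - the job a hardware counter input would do. Samples are simulated.
STEP = {  # valid (previous, current) 2-bit states; +1 = one step clockwise
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count(samples):
    """Accumulate position from successive (A, B) samples."""
    position = 0
    prev = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        state = (a << 1) | b
        position += STEP.get((prev, state), 0)   # 0 = no change / contact bounce
        prev = state
    return position

# Three steps forward, one step back:
print(count([(0, 0), (0, 1), (1, 1), (1, 0), (1, 1)]))   # prints 2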

The Vortex SoC's tend to have 40 or more GPIO pins straight from the SoC, capable of ISA-bus-level speed. Plus an almost complete legacy PC integrated inside. All of that in about 5 W of board-level consumption (3 W without a VGA). The number of GPIO pins is obviously limited by the particular board design, and some GPIO pins are shared with configurable peripherals (LPT, UART and many others) - but generally, on Vortex-based embedded motherboards you get 16 dedicated GPIO pins on a dual-row 2 mm header; on more modern SoC versions these are capable of individual per-pin PWM.

I seem to have heard that the 86duino uses DOS (FreeDOS?), rather than Linux, as its OS on the inside. Which might be interesting for real-time work, if you're only interested in the Arduino-level API and don't need the "modern OS goodies". While Intel tends to stuff ACPI and UEFI everywhere they can (ohh the joys of stuff done in SMI, which hampers your real-time response), the Vortex from DMP is principally a trusty old 486, where you still know what to expect from the buses and peripherals.

But other than that, you can run Linux on any generic Vortex-based embedded PC motherboard, or a number of other traditional RTOS brands. I agree that when shopping for hardware for your favourite RTOS, the x86 PC baggage may not appeal to you :-)

As for Linux on the Vortex, I believe Debian is still compiled for a 386 (well, maybe a 486 nowadays) - definitely Debian 6 Squeeze. You can install Debian Squeeze straight away on a Vortex86DX and above. On a Vortex86SX (to save a few cents), you need a kernel with FPU emulation enabled (the user space stays the same). All the other major distros target i686 hardware, so you cannot use them on a Vortex without recompiling from source.

To me, the only possible reason to use the 86duino (rather than a more generic Vortex-based board) is cost. The 86duino is cheaper. And then there's the breadboard :-) Other than that, the full-blown Vortex-based boards are better equipped with ready-to-use computer peripherals, such as RS232 or RS485 on DB9 sockets, USB ports, disk drive connectors and such. It really feels like Lego blocks - an impression supported by the colourful sockets used by ICOP on their boards :-)

1
0

Linksys's über-hackable WRT wireless router REBORN with 802.11ac

Frank Rysanek

Price is not the only aspect...

It's one of the first wave of 802.11ac routers, which typically cost around 200 USD around here. As far as I know, the OpenWRT "supported hardware" page lists none of the existing -ac models (e.g. ASUS) as supported. I can see in the OpenWRT forums that some people have just managed to make the new Atheros chips with -ac support (ath10k driver, QCA988x hardware) work in OpenWRT at a basic level: driver + hostapd. This happened shortly before X-mas 2013. If the promised support and assistance from Linksys helps to push the Atheros ath10k 802.11ac support into the mainstream, including proper configuration methods in the UCI subsystem and proper documentation, kudos for that. If someone wants that guarantee of OpenWRT support in newly purchased hardware, that's fine.

As for the price... at the moment, for my needs (= basic indoor coverage), I'll stick to TP-Link. The basic model TL-WR741ND costs about 27 USD around here. I can spend another 10 USD on an extra mains adaptor. I can also run the router for a while, then say "who cares about warranty" and replace the cheap capacitors inside with solid-polymer and MLCC. The latest generation of TP-Link AP's is significantly cleaner and emptier on the inside: there are fewer chips, electrolytics and buck converters, and the current Atheros chipset runs pretty cool. Once the capacitors are beefed up, this is likely to have a pretty long service life. And all the recent TP-Links, including the higher-end dual-band WDR models (802.11n), are supported by OpenWRT.

The one thing I don't like about the top-end dual-radio SoHo AP's (including TP-Link) is that the two radios (2.4 and 5 GHz) share common antenna ports for the two bands - so you can have simultaneous traffic on both radios (NIC's visible on the PCI bus inside), but via a single set of shared dual-band antennas. Dual-band antennas are expensive and technically almost impossible to make right; it's down to basic wavelength physics. Splitters are also difficult and expensive to make.
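
To put a number on that "wavelength physics" (back-of-the-envelope only):

# Wavelengths and quarter-wave element lengths for the two Wi-Fi bands -
# the reason one antenna geometry can't be resonant and well matched in both.
C = 299_792_458.0   # speed of light, m/s

for band_ghz in (2.45, 5.5):
    wavelength_mm = C / (band_ghz * 1e9) * 1000
    print(f"{band_ghz} GHz: wavelength ~{wavelength_mm:.0f} mm, "
          f"quarter-wave ~{wavelength_mm / 4:.0f} mm")
# ~122 mm vs ~55 mm wavelength (about 31 mm vs 14 mm quarter-wave):
# a radiator sized for one band is badly off for the other, hence the
# compromises (and the cost) of dual-band antennas and splitters.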

If the new WRT54G-AC is going to have 4 antenna ports, that might well mean separate ports for 2.4 and 5 GHz (times 2 per band for MIMO) - and that would be excellent news, as it would allow you to use decent single-band antennas for either frequency band.

I actually used to consider hacking some current TP-Link to add separate antenna outputs on my own - the radio paths can be clearly identified on the PCB from the radio power stages to the passive crossover that mixes them into the shared antenna port. If the separate per-band antenna ports should take off across the industry, that would be good news.

The separate per-band antenna ports might well be worth the money for some users - they are a way to achieve best-in-class and customizable radio coverage in both bands.

And if these were really four dual-band antenna ports, standing for 4T4R MIMO, that would also be a first machine of its class. Although 4T4R MIMO is theoretically supported by 802.11n already, I've never seen actual hardware that would support that.

P.S.: I just wish someone would kill the CRDA...

2
0

Ten classic electronic calculators from the 1970s and 1980s

Frank Rysanek

69 Factorial on a TI-25 - those were the days...

When I was a pre-school kid in the early eighties, in the then commie Czechoslovakia, my father (a graduate mechanical engineer) used to have a TI-25. God knows how he got it - probably as a gift from some foreign supplier. I still remember how I was attracted to the magical green button on the otherwise black keyboard, while I couldn't count at all yet. I guess it was even before digital wrist-watches and colour TV sets (in our household, anyway). I knew where my dad kept the calculator, but the shelf was too high for me to reach (and tampering was forbidden). Then gradually, as I got my wits together, my father used to let me use it a bit. And I had to protect it from my younger sisters. Then I carried it along to school every day, through the later eighties and throughout the nineties (after the wall came down). Even through the nineties its all-black design (by then noticeably battered) looked slim and cool compared to the grey Japanese mainstream that had flooded our market. I don't recall exactly anymore how and when I lost it - I guess it was in about 2004, when I lost my briefcase on the job to a random thief... Brings back a lot of childhood memories. 69 factorial took about 6 seconds.

1
0

Acer names Jason Chen as its white knight

Frank Rysanek

Acer in trouble? sad news

I like Acer precisely for selling cheap notebook PC's with "no frills" - exactly the right set of features. I prefer Intel-based notebooks with chipset-integrated Intel graphics and Intel or Atheros WiFi. 1280x800 used to be a plausible display resolution, before 1366x768 plagued that market segment. Acer traditionally uses a very basic BIOS, with no proprietary "add-on MCU" garbage on the motherboard. Compared to that, I've seen several design-level cockups of that category in IBM/Lenovo machines, and generally all sorts of twisted add-ons or counter-ergonomic "improvements" in Compaq/HP et al. I like the vanilla / "quality no-name" feel of the Acer machines. Whether they're actually made by Compal, Wistron, Foxconn or whoever doesn't seem to make too much of a difference.

In recent years, I've ushered maybe 5 or 6 Acer notebooks into our broader family and, as far as I know, all of them work to this day. The oldest one has been in service for 5 years, and I've been dragging it to the workplace and back home every day. I have a third carrying bag, a second power adaptor and a second disk drive, and the notebook still works fine. No broken hinges or whatever, despite the case looking like "cheap plastic". If the CCFL tubes wear out soon, I'm considering replacing them...

I recall one minor display glitch on a particular Acer notebook model, where some power decoupling capacitors on the display PCB got optimized away; combined with poor 3.3V power rail transmission (two tiny pins in the internal LVDS connector), this resulted in unreliable display startup that was difficult to reproduce - but I fixed that, and otherwise they're pretty reliable.

Hard drives are a notorious pain, but that's down to HDD brands and their evolution; the notebook makers are hardly to blame. I may prefer Seagate over other brands, but that's just my personal opinion.

All "my" Acer notebooks so far had the classic "beveled" keyboard. It's sad that the whole notebook market has shifted to the ugly flat "chiclet" keyboards - looks like another counter-ergonomic twist of fashion, following an apparent general PC hardware marketing trend that mandates something like "users can't really type anymore, so they won't appreciate a real keyboard". First the displays, now the keyboards...

Makes me wonder if the XP "end of updates" will finally improve PC sales numbers :-) Since the advent of solid-polymer caps and LED-backlit displays, a well-made PC can last forever...

1
0

Googlers devise DeViSE: A thing-recognising FRANKENBRAIN

Frank Rysanek

Re: yes I do feel targetted

Yesterday I went googling for some milling cutter bits for a hand-held woodworking router. In my mother tongue, which is not English. Guess what - as I was typing my earlier litany on AI at El Reg, the Google ad bar at the top of the page was flashing some hobby cutter sets at me - pointing me to e-shops in my country. Later yesterday I went googling for a somewhat specific sleeping mattress. Guess what the ad bar shows now... Well, at least it's not showing my own employer's ads anymore (which had been the case for the last half a year).

1
0
Frank Rysanek
Terminator

How long till consciousness

There's a growing body of research and knowledge on internal brain functioning and organization: the composition of cortical columns, the various neuron species in a biological brain, a coarse global wiring schematic, knowledge of specialized subsystems, knowledge that in some areas the columns "switch purposes", the plasticity of the brain, influences from the physical level (various firing/detection thresholds influenced by levels of chemicals, diseases etc.), control of and feedback from endocrine glands.

In terms of computer-based modeling, some scientific teams with origins in biology and neuro-medicine approach this by trying to simulate the biological brain as precisely as possible = computationally simulate the transfer functions and behaviors of the neurons at maximum level of detail, as it is recognized that the pesky low-level details *do* matter - they do have an impact on overall brain functioning from a macro perspective.

Other teams (with a knowledge-engineering angle) seem to be more focused on computational performance and cunning topology (with cognitive functionality in mind), taking some inspiration from biology (the introduction of spiking neurons a few years ago) but not necessarily wasting effort on "maximum-fidelity" modeling of the biological brain...

Google has taught its neural network to classify objects based on their visual and linguistic descriptions combined. It's a neural network, not an old-school AI search term classification engine (which was essentially a network database). This artificial neural network has an inherent neural-style memory with links to external data and BLOB storage, it can classify fresh input data and can retrieve search results based on queries...

The neural network does not yet have a "flow of thought", a sense of goal or purpose to actively follow, a will, or even a possibility to take autonomous action. Or so I hope...

Makes me wonder if it would be possible to implement something resembling "flow of thought" without an active will / survival instinct or some such. There is a rudimentary neural engine, capable of sorting and searching visual+linguistic objects and concepts. Perhaps abstract concepts are not so far away. Next, implement associative search capability in that long-term "neural object storage" (maybe it's already there), add some short-term memory (for "immediate attention point"), maybe a filter of some sort (able to limit the "focus" to an object or area) and chain them in a feedback loop. Suppose the "immediate attention" is on a particular object. The associative memory offers a handful of associations, of which the filter/combiner picks a particular area/concept/object. This gets fed back into the "immediate attention" cell. Flow of thought anyone?
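
Just to make my hand-waving a bit more concrete, here's a toy sketch in Python of the loop I have in mind. Everything in it is made up for illustration - the "associative memory" is a hand-typed table, the "filter" is just a crude novelty bias - so it's nowhere near an actual neural implementation, merely the feedback idea: associative recall, a filter biased against whatever is already "in mind", and the winner fed back as the new focus.

import random

# Toy "associative memory": each concept links to a few others with weights.
# The contents are entirely made up for illustration.
ASSOCIATIONS = {
    "street": {"car": 0.5, "tree": 0.2, "people": 0.3},
    "car":    {"street": 0.4, "engine": 0.4, "noise": 0.2},
    "tree":   {"street": 0.3, "forest": 0.5, "bird": 0.2},
    "people": {"street": 0.2, "talk": 0.5, "car": 0.3},
    "engine": {"car": 0.6, "noise": 0.4},
    "noise":  {"car": 0.3, "engine": 0.3, "people": 0.4},
    "forest": {"tree": 0.6, "bird": 0.4},
    "bird":   {"tree": 0.5, "forest": 0.3, "noise": 0.2},
    "talk":   {"people": 0.7, "noise": 0.3},
}

def flow_of_thought(start, steps=10, short_term_size=3):
    # Crude "flow of thought": associative recall, plus a filter that avoids
    # whatever is already in short-term memory, fed back as the next focus.
    focus = start
    short_term = [focus]                           # the "immediate attention" history
    for _ in range(steps):
        candidates = ASSOCIATIONS.get(focus, {})
        if not candidates:
            break
        # the "filter/combiner": penalize concepts visited very recently
        scored = {c: w * (0.1 if c in short_term else 1.0)
                  for c, w in candidates.items()}
        focus = random.choices(list(scored), weights=list(scored.values()))[0]
        short_term = (short_term + [focus])[-short_term_size:]   # small short-term memory
        yield focus

print("street ->", " -> ".join(flow_of_thought("street")))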

Makes me wonder if this would work without sensory input. Maybe add some relevant input channels to the "filter" stage in the loop (call it a "combiner", op-amp style). Or turn it inside out, and consider it a Kalman filter made of neural building blocks... Not sure about the purpose or use of this arrangement. Perhaps to extract a model of the sensed reality in terms of objects and concepts, and suggest relevant "mental associations" and possible future developments of the current situation? A mind is probably much more complicated than that...

Perhaps a simple "flow of thought demonstrator" could be built with much less computational power and "inter-neuron communications bandwidth" than traditionally quoted for a human-level brain. If some biological baggage got "optimized away", perhaps some interesting functionality could be reached "cheaper".

Scary thoughts. A terminator face fits the topic even better than black helicopters.

1
0
Frank Rysanek

Re: The real inspector gadget? (classification of live video)

At the moment the system could probably recognize some objects in a static way, looking at the video stream as a slide show of static images.

A proper implementation of "recognizing a crime just happening" would require the system to recognize and classify motions / actions happening in time, in a video stream, preferably in real time. Probably not implemented yet. It would be the next level, for performance reasons if nothing else. A very logical next step...

Makes me wonder how much information this image classification system can extract from a photo. Break down the photo into a collection or a tree of objects: there's a street with some trees, people on the sidewalk, some cars, and there's a guy swinging a baseball bat (note: try that as a query to Google Images). A human brain would automatically pop out the eyeballs: what? Right there on a street? What or whom is he targeting? Does it look like aggression? ... there are a lot of inherent defensive reflexes and experience-based context and attention to motion in a human brain, and emotional aspects, which a relatively spartan neural network trained for automated classification of static images may not possess. Not right now, at least...

1
0
Frank Rysanek
Joke

Will spammers be able to manipulate that?

Now... when this becomes airborne for production service in the Google Images back end, would it be possible to google-bomb this to return pr0n images to some harmless queries?

Or, maybe Google could use it to *detect* such google bombing attempts :-)

0
0

Mystery traffic redirection attack pulls net traffic through Belarus, Iceland

Frank Rysanek

Mixed feelings... am I missing something here?

This sounds odd. Simply advertising someone else's prefix would point the whole world (or a big part thereof) to *you*. If you were a "stub network" with no other connectivity, you wouldn't be able to forward the traffic to its actual destination (unless you were able to tunnel it to another AS, unaffected by your BGP injection attack).

Target a single website and present your own mockup, say for phishing purposes? Maybe. You'd get caught and/or disconnected soon, owing to the havoc you'd cause.

Cause big havoc by making lots of servers inaccessible? Piece of cake. Good for DoS attacks.

After inspection, redirect traffic to its rightful destination? That's difficult. You'd need a second connectivity, able to take the load. For a small target network with little traffic, a tunnel to someplace else might cut it. In order to re-route some high-volume network, you'd need a thick native link - effectively you'd need to be a transit operator. And you'd probably want to fool just a relatively limited perimeter of your peers (based on distance metric) into thinking that you are the actual origin - in principle, if you fooled the whole internet, you wouldn't be able to forward the traffic to its rightful destination. You need a carefully crafted local routing anomaly, which might be difficult to achieve.

And, in general you wouldn't be able to hijack traffic flowing in both directions (such as to wiretap a phonecall in full duplex), unless you did the BGP hijacking trick in *both* directions simultaneously: against both ends of the sessions you try to wiretap. Hijacking a single BGP prefix gives you just one direction of the traffic flow.

Doesn't sound like something very useful for anything except a massive and short-lived DoS attack.

Unless you have your hijacking gear installed in a big transit operator's backbone routers.

Who would you have to be, to be in that position :-)

Considering the need for a "local routing anomaly", how could the attacker's target network, somewhere in the global internet, usefully check the BGP for its own routing advertisements? A single check at an available nearby point wouldn't do. You'd have to check your prefix at a number of routers worldwide and analyze the "spatial propagation" for anomalies in the distance metric... hardly feasible, unless you're Google.
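
Just to illustrate what such a multi-vantage-point check might look like, here's a naive sketch in Python. The route observations are hypothetical hand-typed data (in reality you'd pull them from public route collectors / looking glasses), and the heuristic only catches an origin-AS mismatch - it would miss a more-specific-prefix hijack or a doctored AS path entirely.

# Hypothetical snapshot of how a handful of route collectors see "our" prefix.
# The AS paths are made up; the ASNs are from the documentation/private ranges.
MY_ASN = 64500
OBSERVATIONS = {
    "collector-eu-1": [64496, 64497, 64500],
    "collector-eu-2": [64498, 64497, 64500],
    "collector-us-1": [64499, 64511, 64500],
    "collector-as-1": [64501, 64666],          # suspicious: path ends at someone else
}

def check_prefix_propagation(observations, my_asn):
    # Naive check: every AS path for our prefix should originate (rightmost ASN)
    # at our own AS; anything else looks like a hijack or a leak.
    suspicious = {}
    for vantage, as_path in observations.items():
        if as_path[-1] != my_asn:
            suspicious[vantage] = as_path
    return suspicious

for vantage, path in check_prefix_propagation(OBSERVATIONS, MY_ASN).items():
    print(f"{vantage}: unexpected origin AS{path[-1]}, path {path}")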

Then again the threat is probably real, as a number of people worldwide apparently work towards a more secure BGP. There is a decade-old standard called S-BGP... which probably hasn't reached universal use, if BGP hijacking is nowadays still (or ever more) in vogue...

1
0

Microsoft fears XP could cause Indian BANKOCALYPSE

Frank Rysanek

ReactOS or Linux

Maybe the Indian banks should fund some ReactOS developers... or just migrate to Linux. The pain might be on par with migrating to Win8. Linux can run an office suite, Linux can run an SSH session to the back-end mainframe, Linux can run a browser, Java apps run just fine in Linux... so unless the bank has lots of software written in MS .NET or ActiveX, migration shouldn't hurt all that much.

The one area where the Win32 work-alikes (ReactOS and Wine) lag behind true MS Windows is the various crypto/security services of the OS. This is a major drawback even for simple business apps written for the MS environment.

3
0

The life of Pi: Intel to give away Arduino-friendly 'Galileo' tiny-puter

Frank Rysanek

Re: I just bought myself one of these : (Vortex)

Exactly. For the few days since the Quark was announced, I've been itching with curiosity, how it compares to the Vortex. And I'll keep on itching for a few more weeks (months?) till I put my hands on the Quark and run nbench on the critter. For the time being, could anyone please publish the contents of /proc/cpuinfo ?

The Vortex boards typically eat something between 2.5 and 5 Watts (between 500 mA and 1 A from a 5V adaptor) depending on Vortex generation, additional chips on the board, the SSD used and CPU load (and OS power saving capabilities). The 5 W figure is for a well-equipped Vortex86DX board including Z9s graphics at full throttle. The MX/DX2 reportedly need less power. The Vortex SoC contains a programmable clock divider, so you can underclock it to 1/8th of the nominal clock - but the underclocking doesn't achieve much more than what Linux can achieve at full clock, merely by properly applying HLT when idle.

I'd expect the Galileo board to have a similar power consumption.

With switch-mode power supplies (the general cheap stuff on today's market), it's not a good idea to use a PSU or adaptor whose specced wattage exactly matches your device's consumption. It's advisable to use a PSU rated at twice to three times the expected draw. Hence perhaps the recommendation to use a 3-amp adaptor - Intel knows that these adaptors are crap. You may know them from SoHo WiFi routers/AP's: the router comes with a 2-amp adaptor, likely extremely cheap, which only lasts for a year or two, 24/7. Then the elyts bid you farewell. Buy a 3-amp adaptor for 10 USD and it will last forever.

Also note that Intel may be hinting that you need to reserve some PSU muscle for some "Arduino IO shields".

The DX vortex is made using 90nm lithography, not sure about the MX and DX2 (possibly the same). Makes me wonder what Intel could do, with all its x86 know-how, using a 32/22nm process. Run a 386 at 10 GHz maybe? I've been wondering about this for years before the Quark got announced, and now I'm puzzled - "so little so late".

I've been a Vortex fanboy for a few years - specifically, I'm a fan of the boards made by ICOP ( www.icop.com.tw ). Interestingly, I've seen other Vortex-based boards that are not as good, although using the same SoC. BTW, I don't think even the MX Vortex has MMX - it's more like an overclocked Cyrix 486, but with a well-behaved TSC and CMPXCHG, so it can run Windows XP (not Windows 7, sadly).

Vortex86SX and DX didn't have on-chip graphics, but the ICOP portfolio contains boards with or without VGA. ICOP uses an SIS/XGI Volari Z9s with 32 megs of dedicated video RAM, other board makers use different VGA chips, such as an old Lynx3D with 4 megs of video RAM. The Vortex86MX SoC (and the new Vortex86DX2) does have some VGA on chip, possibly not as powerful as the Z9s. The on-chip VGA uses shared memory (steals a few megs of system RAM). I understand that the system RAM on the Vortex chips is only 16 bits wide, which might be a factor in the CPU core's relatively poor performance.

The Geode has significantly better performance per clock tick than the Vortex86DX. The new DX2 should perform better than the older DX/MX cores (closer to the Geode). I expect the Quark at 400 MHz to be about as fast as an 800MHz Vortex86DX. The "Pascal CRT 200 MHz bug" occurs at around 400 MHz on the Vortex86DX.
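
For younger readers: the "Pascal CRT bug" is the infamous Runtime Error 200. As far as I remember, the CRT unit's Delay() calibration counts busy-loop iterations during one ~55 ms timer tick and then divides down to loops-per-millisecond; once that quotient no longer fits in 16 bits, the division overflows and the program dies at startup. A back-of-the-envelope illustration in Python - the clocks-per-iteration figures are pure guesses, just to show why the limit sits near 200 MHz on a Pentium and only near 400 MHz on the (slower per clock) Vortex86DX:

# Back-of-the-envelope look at the Borland CRT "Runtime Error 200" bug.
CLOCKS_PER_LOOP = 3          # guessed cost of one calibration loop iteration, in CPU clocks
TICK_MS = 55                 # one PC timer tick is roughly 55 ms

def delay_calibration_overflows(cpu_mhz, clocks_per_loop=CLOCKS_PER_LOOP):
    loops_per_tick = cpu_mhz * 1e6 * (TICK_MS / 1000.0) / clocks_per_loop
    loops_per_ms = loops_per_tick / TICK_MS
    return loops_per_ms > 0xFFFF     # the original code keeps this in a 16-bit quotient

for mhz in (100, 200, 266, 400):
    print(f"{mhz} MHz at 3 clocks/loop: overflow = {delay_calibration_overflows(mhz)}")
for mhz in (266, 400):
    print(f"{mhz} MHz at 6 clocks/loop (Vortex-ish guess): "
          f"overflow = {delay_calibration_overflows(mhz, 6)}")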

The Vortex SoC traditionally contained a dual-channel parallel IDE controller. This is nowadays still useful for ATA Flash drives of various form factors (including CompactFlash), but to attach some new off-the-shelf spinning rust, you need an active SATA/IDE converter... The new DX2 SoC features a native SATA port. Since the Vortex86DX (I guess), the second IDE channel can alternatively be configured as an SD controller.

Since Vortex86SX, the SoC has about 40 GPIO pins - the boards by ICOP typically have 16 GPIO pins on a connector (with ground and a power rail). The DX/MX/DX2 SoC can even run HW-generated PWM on the GPIO pins (each pin has its own individual PWM config). The only thing it's missing for general tinkering is possibly an on-chip multichannel ADC.
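
The HW PWM lives in SoC-specific registers that I won't guess at here, but just to illustrate the kind of tinkering those GPIO pins invite, here's a crude software-PWM sketch using the plain (legacy) Linux sysfs GPIO interface. The pin number is made up and depends on the board's pin mapping, and userspace software PWM is of course jittery - which is exactly why the on-chip HW PWM is a nice thing to have.

import time

GPIO = 16                         # made-up GPIO number; the real mapping is board-specific
SYSFS = "/sys/class/gpio"

def setup(pin):
    # Export the pin and set it as an output via the legacy sysfs GPIO interface.
    try:
        with open(f"{SYSFS}/export", "w") as f:
            f.write(str(pin))
    except OSError:
        pass                      # probably already exported
    with open(f"{SYSFS}/gpio{pin}/direction", "w") as f:
        f.write("out")

def set_value(pin, value):
    with open(f"{SYSFS}/gpio{pin}/value", "w") as f:
        f.write(value)

def soft_pwm(pin, freq_hz=100, duty=0.25, seconds=5):
    # Crude software PWM: sleeps in userspace, so expect jitter.
    period = 1.0 / freq_hz
    end = time.time() + seconds
    while time.time() < end:
        set_value(pin, "1")
        time.sleep(period * duty)
        set_value(pin, "0")
        time.sleep(period * (1.0 - duty))

setup(GPIO)
soft_pwm(GPIO)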

Since Vortex86SX, the SoC has two EHCI controllers (four ports of USB 2.0 host).

The MX/DX2 have on-chip HDA (audio).

All the Vortex SoC's have an on-chip 100Base-TX Ethernet MAC+PHY (the RDC R6040).

The SX/DX/DX2 have 4+ COM ports, one of them with RS485 auto RX/TX steering capability. (the MX has only 3 COM ports.) All of them have a good old-fashioned LPT (parallel printer port) with EPP/ECP capability, whatever twisted use you may have for that nowadays. Note that all the COM ports and LPT are on chip in the SoC - yet the SoC also has LPC, should the board maker want to expand the legacy stuff with an added SuperIO chip...

In terms of "system architecture feel", the Vortex reminds me of the 486 era. Simple hardware. Might be useful for realtime control (think of RTAI). There's a full-fledged ISA and a basic PCI (able to serve about 3 external devices). The DX2 has two lanes of PCI-e. The SX/DX/MX (not sure about the DX2) doesn't contain an IO/APIC, which means that it's a bit short of IRQ's, considering all the integrated peripherials. Yet all the integrated peripherials work fairly well. I've seen an odd collision or two: the PXE ROM is defunct if you enable the second EHCI, but both the second EHCI and the LAN work fine in Linux if you leave them both enabled (= as long as you don't need to boot from PXE). The BIOS does't provide ACPI if memory serves. All the Vortex-based hardware uses AT-style power.

The SoC doesn't have an APIC, but there's an interesting twist to the (otherwise standard) AT-PIC: the SoC allows you to select edge-triggered or level-triggered logic individually for each interrupt channel. Not that I've ever had any use for that, but it might be interesting for some custom hacks (with a number of devices that need to share a single interrupt).

And, oh, the Vortex boards all have a BIOS, i.e. they can boot DOS, various indie bootloaders, and stand-alone bootable tools (think of Memtest86+). I've already mentioned PXE booting. You're free to insert your own bootable disk (SSD or magnetic) and some boards also contain an onboard SPI-based flash drive, which acts like a 4 meg floppy. The AMI BIOS in the ICOP boards allows you to configure a number of the SoC's obscure features, and can be accessed via a terminal on RS232 if the board is "headless" (no VGA).

In terms of features, compared to the Vortex, the Galileo board (the Quark?) seems underwhelming. Ahh yes, it's also cheaper... And I understand that it's a first kid in an upcoming family.

When I first read about the Quark, I immediately thought to myself "Vortex is in trouble". Looking at the Galileo, I think "not yet, maybe next time". We have yet to see how the Quark copes on the compatibility front etc., what novel quirks get discovered etc.

1
0
Frank Rysanek

Re: price of the Vortex-based PC's

Where I live, the Vortex-based MiniPC's cost around 200 USD (the SX variant is cheaper but less useful).

An industrial motherboard could cost about the same - maybe more. Depends on form factor, Vortex generation and the board's additional features.

0
0

'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs

Frank Rysanek

Re: Cut the blue wire

Actually the purple wire, for +5V standby power from an ATX PSU. Or just pull the mains cord (found outside the case).

In a laptop, remove the battery.

0
0

The future of PCIe: Get small, speed up, think outside the box

Frank Rysanek

Re: Intel is dropping PCIe by 2016.

PCIe in mobile devices? Why bother, if it amounts to unnecessary processing overhead. True, Linus has commented that ARM SoC designers should make all the busses enumerable (PnP fashion), which combined with low pin count points to PCIe, rather than PCI... but Linus has his specific background and angle. He's not exactly a mobile phone hardware developer.

Even SoC's for tablets are pretty much single-purpose.

Generic support for peripherals is needed in the industrial/embedded segments.

As for desktop / full-fledged notebook machines... if Intel thinks that its own GPU is strong enough, why not let it skip those 16 lanes of PCIe straight from the CPU?

As for servers, a beefy PCI-e x8 is certainly useful.

I'm sure Intel knows better than to shoot itself in the foot. They'll keep PCI-e around where applicable and useful: multiple x1 channels from the south bridge and maybe a couple of lanes straight from the CPU socket in servers and high-performance desktops. I haven't yet noticed any future PCI-e replacement for x86 peripheral expansion.

1
0
Frank Rysanek

Re: grain of salt

Thanks for your response - I don't have hands-on experience with IB, so I didn't know that. I did have a feeling that with IB being so omnipresent in HPC, "node hot swap" would probably work well.

PCI-e is also inherently hot-swap capable and so is the Windows driver framework handling it - just my theoretical matrix crossconnect thing makes node hot-swap a bit more interesting :-)

And yes I'd really love to know how a "homebrew" ccNuma machine would cope with a node outage. If this can be handled, what OS is production-capable of that etc. Except I guess I'm off topic here, WRT the original article...

1
0
Frank Rysanek

grain of salt

PCI-e over external cabling has been on the market for a couple of years - for external interconnect to DAS RAID boxes and for additional PCIe slots via external expansion boxes, the latter sometimes combined with PCI-e IOV. Besides IOV, there are also simple rackmount switches to connect multiple external expansion boxes to a single "external PCIe socket" on a host computer. Ever fancied an industrial PC with some 30 PCI-e slots? Well it's been available for a while... As for PCI-e generations, in the external DAS RAID boxes I've seen PCI-e 1.0 and 2.0 (Areca). Even the connectors seem to be somewhat standard (no idea about their name). The interface between a motherboard (PCI-e slot) and the cable is in the form of a tiny PCI-e expansion board - interestingly it doesn't carry a switch, it's just a dumb repeater, or a "re-driver" as Pericom calls the chips used on the board. Apparently the chips provide a signal boost / preemphasis / equalization for the relatively longer external metallic cabling.

As for HPC: apart from storage and outward networking, HPC typically requires low-latency memory-to-memory copy among the cluster nodes. The one thing that to me still seems to be missing, for bare PCI-e to successfully compete against IB in HPC, is some PCI-e switching silicon that would provide any-to-any (matrix style, non-blocking) host-to-host memory-to-memory DMA, combined with a greater number of host ports. IMO it wouldn't require a modification to the PCI-e standard: it would take some proprietary configurable switching matrix implemented in silicon, providing multiple MMIO windows with mailboxing and IRQ to each participating host, combined with OS drivers and management software that would interface to the HPC libraries, take care of addressing among the nodes, and maybe provide some user-friendly management of the cluster interconnect at the top end.

The switches currently on the market can do maybe 4 to 8 hosts of up to 16 lanes each, and the primary purpose is PCIe IOV (sharing of network and storage adapters), rather than direct host-to-host DMA. Check with PLX or Pericom. Perhaps it would be possible, with current silicon, to do that sort of matrix DMA interconnect in a single chip, to cater for about 8 hosts of PCI-e x8 or x16. That's not too many nodes for an HPC cluster. For greater clusters, it would have to be cascadable. Oh wait - that probably wouldn't scale very well.

As for PCI-e huddling with the compute cores: the PCI-e actually has an interface called a "PCI-e root complex" or a "host bridge" to the host CPU's native "front side bus" or whatever it has. The PCI-e is CPU architecture agnostic - and has some traditional downsides, such as the single-root-bridge logical topology. No way for a native multi-root topology on PCI-e - that's why we need to invent some clumsy DMA matrix thing in the first place. And guess what: there's a bus that's closer to the CPU cores than the PCI-e. On AMD systems, this is called HyperTransport - AMD actually got that from Alpha machines, but that was probably before PCI-e even existed. Intel later introduced a work-alike called QPI. The internal interconnects between the cores in a CPU package (such as the SandyBridge ring) are possibly not native HT/QPI, but these cannot be tapped, so they don't really count. So we have HT/QPI to play with: these are the busses that handle the talks between CPU sockets on a multi-socket server motherboard. Think of a cache-coherent NUMA machine on a single motherboard. And guess what, the HyperTransport can be used to link multiple motherboards together, to build an even bigger NUMA machine. There are practical products on the market for that: a company called NumaScale sells what they call a "NumaConnect" adaptor card, which plugs into an HTX slot (Hypertransport in a connector) on compatible motherboards. Interestingly, there is no switch, but the NumaConnect card has 6 outside ports, that can be used to create a hypercube or multi-dimensional torus topology of a desired dimension.
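
Just to make the "6 outside ports" bit concrete: six links per node is exactly what a 3-dimensional torus needs - one neighbour in the plus and minus direction along each of the three axes. A trivial sketch (the node coordinates and dimensions are of course arbitrary, nothing NumaConnect-specific):

from itertools import product

def torus_neighbours(coord, dims):
    # Neighbours of a node in an n-dimensional torus: +/-1 along each axis, with wrap-around.
    neighbours = []
    for axis in range(len(dims)):
        for step in (-1, +1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]
            neighbours.append(tuple(n))
    return neighbours

DIMS = (4, 4, 4)                  # a 64-node 3D torus: 6 links per node
for node in list(product(*(range(d) for d in DIMS)))[:2]:
    print(node, "->", torus_neighbours(node, DIMS))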

The solution marketed by NumaScale uses HyperTransport to build a ccNuma machine = it keeps cache coherence across the NUMA nodes. There's a somewhat similar solution called HyperShare that seems to use a cache-incoherent approach... Either way it seems that memory-to-memory access between the nodes is an inherent essential feature.

I've never heard of Intel making its QPI available in a slot. PCI is originally an Intel child, if memory serves. Maybe that's a clue...

Makes me wonder how much sense all of this makes in terms of operational reliability and stability. Are the NumaScale and HyperShare clusters tolerant to outages? Can nodes be hot-swapped in and out at runtime? One part of the problem sure is support for CPU and memory hotplugging and fault isolation in the OS (Linux or other) - another problem may be at the bus level: how does the HT hypercube cope when a node or link goes out to lunch? Makes me wonder how my theoretical PCI-e "matrix DMA" solution would cope with that (perhaps each peer would appear as a hot-pluggable PCI device to a particular host, with surprise removal gracefully handled). Ethernet sure doesn't have a problem with that. Not so sure about IB.

3
0

David Attenborough warns that humans have stopped evolving

Frank Rysanek

Re: Evolution? Devolution!

I don't think it will ever be possible to correct the DNA in a fetus that's already started developing (cells splitting). Once the cells start splitting, you can only compensate for genetic defects (some protein or hormone missing or some such) by supplementing the missing bit in some other way. Correct me if I'm wrong there - and please elaborate on technological details :-) "Make a virus that can cut and paste the DNA in every individual cell at a very specific place in a very specific way" - doesn't sound realistic, the virus would have to be too complicated (carry along too much tooling and data).

It would seem more realistic to me to engineer a "fertilised egg" (the single initial cell with a full set of chromosomes) with a desired genome, and let that start splitting/developing into a fetus. I'd almost suggest to have a few eggs fertilised in vitro in a semi-natural way, and then select one whose genome looks best - but that would imply a non-destructive reading of a genome of that initial single cell, which again doesn't seem technically likely/feasible. Maybe let the egg split once, separate the two cells, destroy one for DNA analysis and let the other one develop into a fetus (thus effectively keeping one twin of two). Even a more problematic method would be to have a few early fetuses develop enough material for DNA analysis, and kill those you don't like. Starts to sound like a horror story...

Well actually we do already screen fetuses pre-natally for known genetic defects, and those diagnosed with serious defects are suggested for abortion. Various countries approach this in different ways, depending on the level of their healthcare system and general public opinion about abortions (yes it has a lot to do with religion). Yet based on what I know, those defects are either life-threatening already in early childhood or often directly prevent future reproduction of the individual - so these generally wouldn't proliferate in the gene pool either, even if not aborted artificially.

Looking at the "removal of natural selection" (or some particular pressures thereof) in a statistical way, the future of our society looks like another horror story. We don't have to speak genetic-based conditions that are directly life threatening. Consider just some fairly harmless genetic traits that may e.g. make you less imune to a particular type of infections. Or may mean a stronger tendency to "auto-immune" / allergic responses (let's now abstract from the fact that some cases blamed vaguely on "auto-immune response" might actually be caused by undiscovered infections). Before modern medicine, even such "harmless" genetic features would statistically decrease your chances of survival. With modern medicine, many of this is treatable and gets passed on to future generations. Even genetic traits that might normally affect your survival *after* your successful reproduction, would traditionally still hamper your ability to rear and support your offspring, hence reducing your offspring's chance for further reproduction... With modern medicine (and social support), this pressure is removed.

Modern medicine is expensive - depending on a particular country's social arrangement, modern healthcare either burdens the whole society by a special healthcare tax (e.g. many countries in Europe), possibly making doctors work a bit like mandatory conscripts for sub-prime wages (post-commie eastern Europe), or it's individually expensive and unavailable to lower-wage classes (many U.S. states and other countries).

Imagine a population of people who mostly wouldn't be able to reproduce in a natural way for one reason or another (infertility, babies growing too big to get born naturally, various lighter/treatable conditions in pregnancy that would mean trouble without modern healthcare) and permanently suffer from various non-lethal but onerous conditions throughout their childhood and especially adult life (it's likely to get worse with age).

A population of permanently suffering people, dependent on modern expensive healthcare. I fear that gradually, even with modern healthcare, the balance of natural selection -based dieoff could be restored. So that a great percentage of individuals born alive will die of disease or other medical conditions before getting "old", despite having the luxury of modern healthcare.

For how long have we had modern healthcare? Since 1900? Maybe more like since WW2, if you count antibiotics. That's just a few generations. In some respects, we're already less healthy than our ancestors. Take respiratory diseases, take fertility for instance. Some of this used to be explained by industrial pollution, but here where I live, many of the population health problems persist, even though industrial pollution has been greatly reduced over the last two decades or so. How long will it take, till the public health will degrade catastrophically, due to minor genetic-based imperfections getting accumulated due to the removal of "natural selection pressure"? A couple more human generations?

I recall a study on a particular species of butterflies, showing how a dark variant (mutation) has become prevalent in an area affected by some industry, in just a couple of years, just because the original lighter colour became better visible to its predators... and how the ratio turned back in a couple years, after the polluting industry was removed. That was also just a few insect generations.

I've noticed someone in this forum mention that people are getting gradually more intelligent. Never heard this opinion before. Educated, maybe. On the contrary, there's a popular opinion (too lazy to google for sources) that the most intelligent humans evolved during the ages of "natural selection pressure" - such as during the last ice age. And that indeed, since then, there's an evolutionary plateau in that respect - that pressure got removed, and the average IQ of the population is getting diluted (as much as I otherwise hate the IQ variable and having it individually measured and compared). It does make perfect sense. Life has still been a struggle for those 8000 years since the last ice age, but I guess it's become a lot less of a struggle in the last century or two - with industrialization, modern healthcare, modern agriculture.

I'm struggling not to get started about the growing concentration of production resources in the hands of global enterprises. About the abundance of and lack of use for human labour, college graduates etc. Heheh - and about how fragile such a society is.

What happens to modern agriculture and food supplies, when the oil runs out? How much more expensive will freight and horsepower become?

What happens if the modern society collapses for some other reason (perhaps just social events such as popular unrest, a series of revolutions) and the modern healthcare gets withdrawn, a couple generations down the road?

My answer: a more natural selection pressure will apply once again...

It's plenty of material for a couple more dystopian science fiction movies, with a socialist or radically capitalist background :-)

6
0

Reports: NSA has compromised most internet encryption

Frank Rysanek

clean OS and hardware is possible

I believe Linux is generally pretty safe against spyware. That would be a good platform for an endpoint OS, getting rid of keyloggers and the like. As for clean hardware... suppose that Intel's on-chip IPMI/AMT is compromised. Suppose that the AMT-related autonomous backdoor exists even in Intel CPU and chipset variants that do not openly support AMT (for the sake of sales segmentation). There are other brands of CPU's, without inherent support for IPMI/AMT. And, based on what I've seen so far, I don't think such a backdoor would be very useful and reliable, given how buggy IPMI/AMT is...

1
0

6Gbps is for FOOLS! Now THIS is what we call a SAS adapter - LSI

Frank Rysanek

Revamping metallic SAS one more time

Amazing. Technological development is still blazing past. I haven't been watching the news for some time - and suddenly 12Gb SAS is here. Makes me wonder if 12Gb SAS is going to be the last SAS version running on metallic interconnects (just like U320 was the last parallel SCSI generation).

12Gb SAS sure is a desperately needed update to the disk drive interconnect - otherwise the SSD's would all migrate to direct PCI-e attachment (and the whole SAS market would vanish in a couple of years). Support for 12 Gb from LSI is important in that LSI is a key traditional SAS chipset supplier - for HBA's/initiators and targets (RAID controllers and disk drives), also providing SAS expanders and switches. But speaking of SAS chipset makers, Marvell and PMC Sierra (ex-Avago/ex-Agilent) have also announced 12Gb products.

2
0

Microsoft's summer update will be called Windows 8.1

Frank Rysanek
Thumb Up

Re: Numbering

And, Windows 2003 Server is 5.2. Makes a hell of a difference from XP in some drivers (and no, just changing that version string in the INF file often doesn't get the job done - some kernel API's really are slightly different).

Makes me wonder what the 2008 one reports (no live machine at hand).
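
If anyone has a box handy: a quick way to ask from a script is e.g. Python's standard library (just a rough sketch - the 'ver' command or winver tell you the same thing):

import platform, sys

# Marketing name vs. the underlying NT version number,
# e.g. XP -> 5.1, Server 2003 -> 5.2, Vista / Server 2008 -> 6.0, 7 / 2008 R2 -> 6.1.
print(platform.win32_ver())                 # something like ('XP', '5.1.2600', 'SP3', ...)
if hasattr(sys, "getwindowsversion"):       # only present on Windows builds of Python
    v = sys.getwindowsversion()
    print("NT %d.%d build %d" % (v.major, v.minor, v.build))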

0
0

Are the PCs all getting a bit old at your office? You're not alone

Frank Rysanek
Thumb Up

C2D came along with solid-polymer caps (=> capacitor plague was over)

Yes, Core2Duo with 2 GB of RAM is quite good enough for general office work, web browsing, movie playback and the like. And the 45nm generation of Core2 doesn't even eat all that much power. Any CPU's before that, back to say the Pentium 4 heating radiators, have somewhat less performance and eat more electricity. But the C2D is quite okay. And there was another major change, that came along with the C2D: it was the solid-polymer electrolytic caps. This ended the "capacitor plague" era when motherboards only lasted like 2-4 years. With solid-polymer caps and C2D, motherboards do survive the warranty, do survive twice the warranty, do survive much longer. I work for an IPC assembly shop (which is a somewhat conservative industry) and in terms of desktop-style and PICMG motherboards, there are hardly any boards RMA'ed with solid-polymer caps on them.

4
0

Power-mad HPC fans told: No exascale for you - for at least 8 years

Frank Rysanek
Devil

20W for that much neural computing horsepower...

To me, it seems pretty wasteful to emulate a rather fuzzy/analog neural network on a razor-sharp vaguely von-Neumannian architecture (albeit somewhat massively parallel). Perhaps just as wasteful as trying to teach a human brain (a vast neural network) to do razor-sharp formal logic - with all its massive ability to create fuzzy and biased associative memories and search/walk the associations afterwards, with all its "common sense" being often at odds with strictly logical reasoning ;-)

1
0
Frank Rysanek
Joke

HPC = using electricity to produce heat... co-generate heat and crunching horsepower?

Not my invention, I admit - was it here, where I read about a supercomputer that had a secondary function of a heat source for a university campus?

Now if you needed to build a supercomputer with hundreds of MW of heat dissipation... you could as well use it to provide central heating to a fairly big city. Or several smaller cities. Such as, there's a coal-fired 500MW power plant about 40 km from Prague, with a heat pipeline going all the way to Prague. The waste heat is used for central heating. Not sure if the pipeline still works that way - it was built back in the commie era, when such big things were easier to do...

The trouble with waste heat is that it tends to be available at relatively low "lukewarm" temperatures. Computers certainly don't appreciate temperatures above say 40 degrees. Then again, there are heating systems that can work with about 30 degrees Celsius of temperature at their input. Probably floor heating sums it up - not much of a temperature, LAAARGE surface area. No need to heat the water up to 70 degrees or so.

The obvious implication is: generate heat locally, so that you don't need to transport it over a long distance, which is prone to thermal losses and mechanical resistance (for a given thermal power throughput, the lower the temperature, the larger the volume of media per second).
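
To put a number on that last remark: thermal power is roughly flow rate times specific heat times temperature difference, so halving the usable delta-T doubles the amount of water you have to pump for the same megawatts. A quick back-of-the-envelope in Python (the 500 MW figure is just the power-plant example from above):

# Q = m_dot * c_p * dT   =>   m_dot = Q / (c_p * dT)
C_P_WATER = 4186.0               # J/(kg*K), specific heat of water

def water_flow_kg_per_s(power_w, delta_t_k):
    return power_w / (C_P_WATER * delta_t_k)

POWER = 500e6                    # 500 MW of heat to move
for dt in (40, 20, 10):          # usable temperature drop in kelvins
    flow = water_flow_kg_per_s(POWER, dt)
    print("delta-T %2d K: %8.0f kg/s (~%.1f m^3/s of water)" % (dt, flow, flow / 1000.0))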

The final conclusion: sell small electric heat generators, maybe starting with a few kW of continuous power consumption, electric-powered and interconnected by fiber optics, where the principle generating the heat is HPC number crunching. Build a massive distributed supercomputer :-) Yes I know there are some show-stopping drawbacks - but the concept is fun, isn't it? :-)

1
1

Build a BONKERS test lab: Everything you need before you deploy

Frank Rysanek
Gimp

Re: Asus mainboards?

I recall trying to max out 10Gb Eth several years ago. I had two Myricom NIC's in two machines (MYRI-10G gen."A" at the time), back to back on the 10Gb link. For a storage back-end, I was using an Areca RAID, current at that time (IOP341 CPU) and a handful of SATA drives. I didn't bother to try random load straight from the drives - they certainly didn't have the IOps.... They had enough oomph for sequential load. I used the STGT iSCSI target under Linux. Note that STGT lives in the kernel (no copy-to-user). The testbed machines had some Merom-core CPU's in an i945 chipset. STGT can be configured to use buffering (IO cache) in the Linux it lives within, or to avoid it (direct mode). I had jumbo frames configured (9k). On the initiator side, I just did a 'cp /dev/sda /dev/null' which unfortunately runs in user space...

For sequential load, I was able to squeeze about 500 MBps from the setup, but only in direct mode. Sequential load in buffered mode yielded about 300 MBps. That is simplex. The Areca alone gave about 800 MBps on the target machine.

Random IO faces several bottlenecks: disk IOps, IOps throughput of the target machine's VM/buffering, IOps throughput of the 10Gb cards chosen, vs. transaction size and buffer depth...
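
By the way, that 'cp /dev/sda /dev/null' was a pretty blunt instrument. If I were redoing the test, even a trivial userspace read loop with a timer gives a cleaner MB/s figure - here's a rough sketch; the device path and block size are whatever fits your setup, and it still goes through the page cache unless you bother with O_DIRECT:

import sys, time

def sequential_read_mbps(path, block_size=1024 * 1024, total_mb=1024):
    # Read total_mb megabytes sequentially and report the average MB/s.
    # Note: this goes through the page cache; use O_DIRECT for raw device speed.
    read_bytes = 0
    start = time.time()
    with open(path, "rb", buffering=0) as dev:
        while read_bytes < total_mb * 1024 * 1024:
            chunk = dev.read(block_size)
            if not chunk:
                break
            read_bytes += len(chunk)
    elapsed = time.time() - start
    return read_bytes / (1024.0 * 1024.0) / elapsed

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    print("%.1f MB/s from %s" % (sequential_read_mbps(dev), dev))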

1
0

Deep, deep dive inside Intel's next-generation processor

Frank Rysanek
Meh

Did I hear a rumor about "moving the VRM on-chip" ?

A few days ago, there were rumours that Haswell would have more say in how its input rail power is regulated - beyond the VID pins of today. Some even suggested that the VRM would move into the CPU package. Does anyone know any further details? This would be more of a revolution than skin-tone enhancements...

0
0

NASA’s new lander CRASHES AND BURNS

Frank Rysanek
Thumb Up

Re: Armadillo?

It was an interesting and insightful comment nonetheless - thanks for that.

0
0

IT pros lack recent skills

Frank Rysanek
Meh

Re: Isn't that many "recent technologies" are crap?

My point exactly. I'm a geek by profession, and I've had a self-perception of being pretty conservative since my twenties, maybe. I try to understand and leverage the underlying basics, rather than adopt any shiny new toy (after a few such toys, it gets old). A lot of the "new" technologies *are* crap. New things introduced for the sake of novelty / eyewash / sales pitch, rather than utility / progress / improvement. How many times can you sell an Office suite, with just a new version sticker on it? A new version of a windowing OS? A "radically novel" user interface? Yet another software development environment? Increasingly nasty licensing schemes and vendor lock-in? Some of the new stuff feels increasingly regressive...

I have the luxury of working in a small company, where I'm free to study and try whatever I want.

I also meet IT and "embedded system integration" pro's in other companies. Speaking of training, in my experience, many of them could use training in "the underlying basics". Stuff as basic as Ethernet, TCP/IP, dynamic behavior of disk drives, disk partitioning and file systems (UNIX/Linux angle), vendor-independent basic OS and networking concepts - just to get some common sense. But maybe it's indeed down to everyone's personal eagerness to "peek under the hood", or ability to take a distance from the product you've just purchased. Or down to chance - down to the opportunity to work with different technologies...

Most of the training commercially available is heavily vendor-specific and brainwashy (Cisco, Microsoft). The other side of things is undoubtedly "freedom to starve to death", as Sir Terry would put it... Once you reach the "intermediate sorcerer" level, you can become a freelancer, pretty much on your own.

In my part of the world, there are a number of "training products" apparently developed with one key goal in mind: to get an EU grant from the "training/education funds of the EU". Hardly any IT training in there, or any other rigorous professional training. Mostly soft skills. The non-IT colleagues attend those trainings voluntarily, even happily. I have other, more entertaining or useful ways of wasting time / procrastinating - or studying :-)

4
0

Page: