65 posts • joined 2 Oct 2007
USB connector that fits either way up? That's on the market already...
I was shocked a couple of months ago by the USB ports on this hub:
You can insert your peripherals either way up. It feels like you have to apply a bit of violence, but we're using it in a PC repair workshop and it's been working fine for several months now.
Re: approaching Windows 8 "the old way"
When it comes to Windows, I'm a bit of a retard... I always try to approach it based on common sense and generic principles from the past, which probably hints at a lack of specific education on my part in the first place... I've never tried to use the Windows built-in backup/restore. The tool I tend to prefer for offline cloning is Ghost - the DOS flavour of Ghost. I've made it work under DOSEMU in Linux (PXE-booted), and recently my Windows-happy colleagues have taught me to use Windows PE off a USB stick... guess what: I'm using that to run Ghost to clone Windows the way *I* want it. With Windows 8 / 8.1 (and possibly 7 on some machines), there's an added quirk: after restoring Windows from an image onto the exact same hardware, you still have to repair the BCD store, which is your boot loader's configuration file. Which is fun if it's on an EFI partition, which is hidden in Windows and not straightforward to get mounted... but once you master the procedure, it's not that much trouble; I'd almost say it's worth it. Symantec has already slit the throat of the old cmdline Ghost, but I'm told that there are other 3rd-party tools to step into its place... I haven't tested them though.
I've been forced to go through this on a home notebook that came with Windows 8 preloaded. Luckily I have the cloning background - as a pure home user, I'd probably be lost, at the mercy of the notebook maker's RMA dept if the disk drive went up in smoke. Well, I found the needed workarounds. And I tried to massage Windows 8.1 into a usable form, close to XP style. I've documented my punk adventure here:
A few days later, I had an opportunity to re-run the process following my own notes, and I had to correct a few things... and I noticed that I couldn't get it done in under 3 days of real time!
Yes I did do other work while the PC kept crunching away, doing a backup/restore or downloading Windows updates. On a slightly off topic note, the "hourglass comments" after the first reboot during the Windows 8.1 upgrade are gradually more and more fun (absurd) to read :-)
I've read elsewhere that before upgrading to 8.1, you'd better download all the updates available for Windows 8, otherwise the upgrade may not work out.
To me, upgrading from Windows 8 to 8.1 had a positive feel. Some bugs (around WinSXS housekeeping, for example) have vanished. But I'm also aware of driver issues, because Windows 8.1 is NT 6.3 (= an upgrade from Windows 8 = NT 6.2). So if some 3rd-party driver's INF file hardcodes the precise version (6.2), you're out of luck in Windows 8.1, as the INF file also appears to be covered by the signature stored in the .CAT file... Without the signature, with many drivers (and a bit of luck), you could work around the "hardcoded version" by modifying the INF file. Hello there, Atheros... On this particular Acer notebook it was not a problem; Intel and Broadcom apparently have working drivers for Windows 8.1.
I actually did the repartitioning bit as a fringe bonus of creating an initial Ghost backup. I just restored from the backup and changed the partitioning while at it.
...did I already say I was a retard?
Windows 8 appears to be capable of *shrinking* existing NTFS partitions, so perhaps it is possible to repartition from the live system without special tools. Not sure, haven't tried it myself.
For corporate deployments of Windows 8, I'd probably investigate the Microsoft Deployment Toolkit.
That should relieve you of the painstaking manual maintenance of individual Win8 machines and the garbage apps preloaded by the hardware vendor. It might also mean that you'd have to buy hardware without preloaded Windows, which apparently is not so easy...
Secret thermocouple compound
Perhaps with the "extreme edition" they'll return to soldering the heatspreader on, the way it was in the old days (I guess). Or at least use a "liquid metal" thermocoupling stuff (think of CoolLaboratory Pro or Galinstan) rather than the white smear that they've been using since Ivy Bridge...
Myself I'm not fond of number crunching muscle. Rather, I drool over CPU's that don't need a heatsink (and are not crap performance-wise). I like the low-end Haswell-generation SoC's (processor numbers ending in U and Y), and am wondering what Broadwell brings in that vein.
Re: With 8.1 you barely have to use the "touch interface" if you don't want to
> With 8.1 you barely have to use the "touch interface" if you don't want to
Actually... a few weeks ago I purchased an entry-level Acer notebook for my kid, with Windows 8 ex works. It was in an after-Xmas sale and was quite a bargain. A Haswell Celeron with 8 GB of RAM... I'm a PC techie, so I know exactly what I'm buying.
Even before I bought that, I knew that I would try to massage Windows 8 (after an upgrade to 8.1) into looking like XP.
The first thing I tried to solve was... get rid of Acer's recovery partitions (like 35 GB total) and repartition the drive to be ~100 GB for the system and the rest for user data. I prefer to handle system backup in my own way, using external storage - and I prefer being able to restore onto a clean drive from the backup. So it took me a while to build an image of WinPE on a USB thumb drive, as a platform for Ghost... from there it was a piece of cake to learn to rebuild the BCD on the EFI partition (typically hidden). Ghost conveniently only backed up the EFI and system partition, and ignored the ACER crud altogether :-)
Not counting the learning process, it took me maybe 3 days almost net time to achieve my goal = to have lean and clean Win 8.1 with XP-ish look and feel. The steps were approximately:
1) uninstall all Acer garbage (leaving only the necessary support for custom keys and the like)
2) update Windows 8 with all available updates
3) clean up other misc garbage, the most noteworthy of which was the WinSXS directory. I did this using DISM.EXE while still in Windows 8, which was possibly a mistake. The "component install service" in the background (or whatever it's called) tended to eat a whole CPU core doing nothing... but after several hours and like three reboots it was finally finished. I later found out that it probably had a bug in Win8 and was a breeze if done in Windows 8.1... BTW, I managed to reduce WinSXS from 13.8 GB down to 5.6 GB (in several steps)... and the system backup size dropped from 12 GB down to 6 GB :-)
4) upgrade to Windows 8.1. This also took surprisingly long. It felt like a full Windows reinstall. The installer asked for several reboots, and the percentage counter (ex-hourglass) actually wrapped around several times... it kept saying funny things like "finishing installation", "configuring your system", "registering components", "configuring user settings", "configuring some other stuff" (literally, no kidding!) but finally it was finished...
5) more crapectomy (delete stuff left over from Win8 etc.)
6) install Classic Shell, adjust window border padding, create a "god mode" folder (only to find out that it's actually pretty useless), install kLED as a soft CapsLock+NumLock indicator (the Acer NTB lacks CapsLock+NumLock LEDs), replace the ludicrous pre-logon wallpaper, get rid of some other user interface nonsense...
Somewhere in between I did a total of three backups: one almost ex works, another with a clean install of Windows 8.1 (after basic post-install cleanup), and one last backup of the fully customized install - just a snapshot of the system partition stored on the data partition (for a quick rollback if the kids mess up the system).
It looks and even works (at a basic level) like Windows XP. Some aspects of the user interface work slightly differently - for instance, windows now dock to screen edges. No problem there. Even when I install some software whose installer expects the old-style start menu, the installer still creates its subfolders in the ClassicStartMenu (technically alien to Windows 8) - great job there.
But: the control panels are still Windows 8 style = bloated and incomprehensible if you're looking for something that was "right there" in Windows XP. The search tool is still absent from Explorer's context menus - you have to use the global search box at the top of the Win8 sidebar. The dialogs that you need to deal with when occasionally fiddling with file privileges are just as ugly as they ever have been (they weren't much nicer in XP, before the UAC kicked in in Vista).
I'm wondering if I should keep the Windows 8.1 start button, only to have that nifty admin menu on the right mouse button. The left button = direct access to the start screen (even with smaller icons) is of little use to me.
There's one last strange quirk, apparently down to the hyper-intelligent touchpad: upon a certain gesture, possibly a straight horizontal finger sweep across the touchpad, the Win8 sidebar jumps out and the big floating date and time appears too - and they just glare at you. This typically happens to me unintentionally - and whatever I was doing at the moment gets blocked by this transparent Win8 decoration. It is disturbing - I have to switch my mental gears and get out of that Windows 8 shrinkwrap to get to work again... I hope it will be as easy as disabling all the intelligence in the touchpad control panel. For the moment I cannot do away with the Win8 sidebar entirely (even if that were possible) because I still need it now and then...
Some of the control panels are Metro-only - and THEY ARE A MESS! There's no "apply" button... it's disturbing to me that I cannot explicitly commit the changes I make, or roll back in a harmless way. Typically when I happen to launch some Metro panel by mistake, I immediately kill the ugly pointless beast using Alt+F4. Thank god at least that still works.
The new-generation start screen with mid-size icons is not a proper Start menu replacement. For one thing, the contents are not the same. Legacy software installs into the classic start menu, but its icons don't appear in the 8.1 start screen. And vice versa. The new start screen with small icons is better than the endless Metro chocolate bar of Windows 8, but still a piece of crap.
I hope my trusty old Acer that I use daily at work (XP-based) survives until Windows 9 - by then I'll have a chance to decide for myself whether Windows 9 is back on track in the right direction, or what my next step is. If this is everybody's mindset, it's not surprising at all that Windows 8 doesn't sell.
If he's a server man...
If Nadella is a "server" man, he might actually understand much of the dislike that power-users have been voicing towards Windows 8. He might be in mental touch with admins and veteran Windows users.
If OTOH he's a "cloud" buzzword hipster evangelist, that doesn't sound nearly as promising.
What does Microsoft's humongous profit consist of these days? Is it still selling Windows and Office? If that's the case, it has seemed to me lately that they've been doing their best to kill the hen laying the golden eggs... They've always been capitalizing on the sheer compatibility and historical omnipresence of their Win32 OS platform and office suite. In recent years though, they've done a good job of scaring their once-faithful customers away with counter-intuitive UI changes, software bloat and mind-boggling licensing :-(
PC104 is a form factor, rather than a CPU architecture thing - though it's true that I've seen a number of x86 CPU's in a PC104 form factor, but only a few ARM's...
PC104 is a relatively small board format, whose special feature is the PC104 and/or PCI104 connector, making it stackable with peripheral boards in that same format. Other than that, it's also relatively expensive. And it's easy to forget about heat dissipation in the stack.
If you need a richer set of onboard peripherals or a more powerful CPU (Atom in PC104 has been a joke), you may prefer a slightly bigger board, such as the 3.5" biscuit. There are even larger formats, such as the 5.25" biscuit or EPIC, which is about as big as Mini-ITX. The bigger board formats allow for additional chips and headers on board, additional connectors along the "coast line", and additional heat dissipation.
If OTOH you need a very basic x86 PC with just a few digital GPIO pins, and you don't need an expansion bus (PCI/ISA), there are smaller formats than PC/104 - such as the "Tiny Module" (from ICOP, with Vortex) or the various SODIMM PC's.
The Arduino format is special in that it offers a particular combination of discrete I/O pins, digital and analog - and not much else... and I agree with the other writers who point out that it's a good prototyping board for Atmega-based custom circuits.
Re: 400 Mhz?
Oh, it's got CMPXCHG8B? Means it can run Windows XP? 'cept for the missing graphics :-)
Re: Yes yes yes! Vortex86 rules!
Speaking of chip-level and board-level product lifetime, the boards by ICOP with Vortex chips by DMP have a lifetime of quite a few years. I believe I've read 12 years somewhere, but I don't think the Vortex86DX is *that* old :-) During the 6 years or so that I've been watching ICOP, hardly any motherboard has vanished (except maybe 3rd-party peripheral boards), new products are being added to the portfolio, and there have been maybe 2 minor updates (revisions) across the portfolio to reflect some bugfixes / general improvements - while the form factors and feature sets were kept intact.
In terms of chip-level datasheets and source code, ICOP and DMP are somewhat secretive about selected marginal corner areas (the I2C controller comes to mind). Some chip-level documentation seems to be seeping from board-level 3rd-party manufacturers... But overall the state of support for the key features is pretty good - docs, drivers for key OS'es (including open-source drivers in vanilla Linux). Board-level support in terms of human responsiveness from ICOP and DMP feels like miles ahead of Intel.
Re: finger-burningly hot = well designed for passive cooling
> > And the Quark chip runs finger-burningly hot.
> Presumably it is engineered to do so. As were Atoms before.
agreed! :-) My words exactly. Many ATOM-based fanless designs are a joke.
Compare that to the Vortex86. You can typically hold your finger on that, even without any kind of a heatsink if the board is large enough. On tiny boards, it takes a small passive heatsink that you can still keep your finger on after some runtime. That's for the Vortex86DX clocked at 800 MHz at full throttle. With some power saving and underclocking, it doesn't take a heatsink at all.
> And any chip well-designed for passive cooling
> (because you need a fairly large delta-T before convection gets going).
Thanks for explaining the mindset of all the nameless Chinese R&D folks.
I'm on the other side of the barricade - I'm a troubleshooter with a small industrial/embedded hardware distributor, effectively paid by system integrator people (our customers) to soothe their "burned fingers".
Imagine that you need an embedded PC board at the heart of some book-sized control box. That box will be mounted in a cabinet. The folks at Westermo used to say that every "enclosure" adds about 15 degrees Celsius. And you have maybe 2-3 enclosures between your heatsink and the truly "ambient" temperature. In my experience, that 15 degrees applies to very conservatively designed electronics, with sub-1W-class ARM MCU's on the inside. For computers worth the name, where you aim for some non-trivial compute power, those 15 degrees are a gross underestimation. You have to calculate with Watts of power consumption = heat loss, thermal conductivity at surfaces and thermal capacity of the coolant medium (typically air) - even in the "embedded PC" business, far from the server colocation space.
Note that there are electrolytic capacitors on the motherboard, surrounding the CPU or SoC, the VRM etc. They're not necessarily the solid-polymer variety. With every 10 °C down, the longevity of these capacitors doubles. For low-ESR capacitors, it's typically specified as 2000 hours at 105 °C. Twice that at 95 °C, etc.
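That rule of thumb is easy to turn into a quick calculation. A minimal sketch (the 2000 h / 105 °C baseline is the datasheet figure mentioned above; the ambient temperatures in the usage lines are just illustrative guesses):

```python
def cap_life_hours(ambient_c, rated_hours=2000, rated_temp_c=105):
    """Arrhenius rule of thumb: electrolytic capacitor lifetime
    doubles for every 10 degrees C below the rated temperature."""
    return rated_hours * 2 ** ((rated_temp_c - ambient_c) / 10)

# A cap sitting at 65 C inside a warm fanless box:
print(cap_life_hours(65))   # 32000.0 hours - roughly 3.7 years
# The same cap at 45 C, with a little airflow:
print(cap_life_hours(45))   # 128000.0 hours - roughly 14.6 years
```

Which is exactly why shaving 20 degrees off the internals with a cheap fan pays for itself many times over.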
Now... back to the mindset of our typical customer: it's fanless, right? so we can mount this in a tiny space in an unventilated box, run some cables in the remaining tight space on the inside... and sell that as a vehicle-mounted device... and remain unpunished, right? Let's go ahead with that...
(Where do you get convection around the CPU heatsink in that?)
The typical mindset of our suppliers' R&D is: let's sell this with the minimum possible heatsink that will allow our bare board to survive a 24-hour burn-in test in open air, without a stress-load application running on the CPU (just idling).
Some particular fanless PC models are built the same way. The most important point is to have an aluminium extrusion enclosure with some sexy fins on the outside. It doesn't matter if only half of them are actually thermocoupled to the CPU and chipset, never mind the RAM and VRM and all the other heat sources on the inside (they'll take care of themselves). The enclosure needs to have fins and some cool finish, for eyewash effect - make it gold anodized or harsh matte. If it looks real mean, all the better - you can put the word "rugged" in your PR datasheets. Never mind if the surface of the fins is clearly insufficient to dissipate 15 W of heat, on the back of an envelope (or just by the looks, to a seasoned hardware hacker). Perhaps the computer maker's assembly team also add a cherry on top by "optimizing" the thermocoupling a bit: you can relax your aluminium milling tolerances if you use 1 mm of the thermocouple chewing gum. Never mind that the resulting thermal coupling adds 20 Kelvin of temperature gradient. Even better, if the internal thermal bridge block designed by R&D is massive enough, you can probably skip the thermocouple paste or chewing gum altogether, to accelerate the seating step on the assembly line... It takes maybe 20 minutes at full throttle before the CPU starts throttling its clock due to overheating, and the QC test only takes 2 minutes and is only carried out on every 20th piece of every batch sold.
Customer projects (close to the end user) that go titsup get later settled between purchasing and RMA departments and various project and support bosses and maybe lawyers - no pissed off troubleshooter techie has ever managed to wrap his shaking fingers around the faraway R&D monkey's throat :-)
If anyone is actually interested in practical advice, to get long-lived embedded PC's in production operation, I do have a few tips to share:
If you can keep your fingers on it, and it doesn't smell of melting plastic, it's probably okay. Do this test after 24 hours of operation, preferably in the intended target environment (enclosure, cabinet, ambient temp).
If you insist on playing with high-performance fanless stuff, do the heat math. You don't need finite-element 3D modeling, just back-of-the-envelope math: divide the wattage by your enclosure's surface area times its heat transfer coefficient. What gradients can you come up with? Pay attention to all the heat-producing and hot components on your PCB's. All the parts inside your fanless enclosure necessarily run hotter than the "thermocoupled envelope". Putting sexy inward-facing heatsinks on hot components doesn't help much inside a fanless enclosure. Consider that adding a tiny fan will turn this "roast in your own juice" feature of a fanless enclosure inside out.
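The envelope math looks roughly like this - a sketch only, where the ~10 W/m²·K heat transfer coefficient for still-air natural convection is my assumed ballpark (real values depend heavily on fin geometry and orientation), as are the wattage and area in the example:

```python
def enclosure_delta_t(watts, area_m2, h_w_per_m2k=10.0):
    """Steady-state temperature rise of an enclosure surface over
    ambient: delta-T = P / (h * A). For still-air natural convection,
    h is somewhere around 5-15 W/m^2.K."""
    return watts / (h_w_per_m2k * area_m2)

# A 15 W fanless box with ~0.05 m^2 of effective radiating surface:
print(enclosure_delta_t(15, 0.05))   # 30.0 K above ambient
# ...and that's just the first "enclosure"; the cabinet around it
# adds its own gradient on top, computed the same way.
```

Run the same formula for each nested enclosure and sum the gradients - that's how a "40 degrees ambient" spec quietly turns into 90-degree capacitors.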
If you intend to purchase third-party off-the-shelf fanless PC's for your project (complete with the finned enclosure), take a few precautionary measures: Take a look inside. Look for outright gimmicks and eyewash in the thermal design, and for assembly-level goof-ups (missing thermocouple paste). Install some OS and run some burn-in apps or benchmarks to make the box guzzle the maximum possible power. If there are temperature sensors on the inside, watch them while the CPU is busy - lm_sensors and SpeedFan are your friends. Some of the sensors (e.g. the CPU's coretemp) can be tricky to interpret in software - don't rely on them entirely; try opening the box and quickly touching its internal thermocoupling blocks and the PCB's close around the CPU.
Single-board setups should generally be more reliable than a tight stack of "generic CPU module (SOM/COM) + carrier board" - considering the thermal expansion stresses between the boards in the stack. In fanless setups, the optimum motherboard layout pattern is "CPU, chipset, VRM FET's and any other hot parts on the underside" = easy to thermocouple flat to the outside heatsink. Note that to motherboard designers this concept is alien; it may not fit well with package-level pinout layouts for easy board routing.
Any tall internal thermal bridges or spacers are inferior to that design concept.
Yet unfortunately the overall production reliability is also down to many other factors, such as soldering process quality and individual board-level design cockups... so that, sadly, the odd "big chips on the flip side" board design concept alone doesn't guarantee anything...
If you're shopping for a fanless PC, be it a stand-alone display-less brick or a "panel PC", notice any product families where you have a choice of several CPU's, say from a low-power model to a "performance mobile" variety. Watch for mechanical designs where all those CPU's share a common heatsink = the finned back side of the PC chassis. If this is the case, you should feel inclined to use the lowest-power version. This should result in the best longevity.
If you have to use a "closed box with fins on the outside" that you cannot look inside, let alone modify its internals, consider providing an air draft on the outside, across its fins. Add a fan somewhere nearby in your cabinet (not necessarily strapped straight onto the fins).
Over the years, I've come to understand that wherever I read "fanless design", it really means "you absolutely have to add a fan of your own, as the passive heatsink we've provided is barely good enough to pass a 24hour test in an air-conditioned lab".
If your outer cabinet is big enough and closed, use a fan for internal circulation. Use quality bearings (possibly ball bearings or VAPO), possibly use a higher-performance fan and under-volt it to achieve longer lifetime and lower noise. Focus on ventilation efficiency - make sure that the air circulates such that it blows across the hot parts and takes the heat away from them.
Even an internal fan will cut the temperature peaks on internal parts that are not well dissipated/thermocoupled, thus decreasing the stress on electrolytic caps and the thermal-expansion mechanical stresses on bolts and solder joints. It will bring your hot parts on the inside to much more comfortable temperature levels, despite the fact that on the outer surface of your cabinet, the settled temperature will remain unchanged!
If you merely want a basic PC to display some user interface, with no requirements on CPU horsepower, and for some reason you don't like the ARM-based panels available, take a look at the Vortex. Sadly, Windows XP is practically dead and XPe is slowly dying, and that's about the ceiling of what the Vortex can run. Or you can try Linux. You get paid back with 3-5 Watts of power consumption and hardware that you can keep your finger on.
Examples of a really bad mindset: "I need a Xeon in a fanless box, because I like high GHz. I need the absolute maximum available horsepower." Or "I need high GHz for high-frequency polling, as I need sub-millisecond response time from Windows and I can't design proper hardware to do this for me." Or "I need a speedy CPU because I'm doing real-time control in Java and don't use optimized external libraries for the compute-intensive stuff". I understand that there *are* legitimate needs for big horsepower in a rugged box, but they're not the rule on my job...
Yes yes yes! Vortex86 rules!
Mind the EduCake thing at 86duino.com - it's a "shield" in the form of a breadboard.
You get a Vortex-based 86duino (= the Arduino IDE applies) with a breadboard strapped on its back.
I'm still waiting for some docs from DMP about the Vortex86EX's "motion controller" features. It should contain at least a rotary "encoder" = quadrature counter input for 2-phase, 90-degree-shifted signals; I'm not sure if it's capable of hardware-based pulse+dir, or "just" PWM.
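For readers who haven't met quadrature counting: here's a software model of the 2-phase decoding logic that such a counter implements in hardware. Purely illustrative on my part - this has nothing to do with DMP's actual silicon, which is exactly what I'm waiting for docs on:

```python
# Map (previous AB state, new AB state) to a count delta. A and B are
# square waves 90 degrees apart; the direction of each Gray-code
# transition tells you which way the shaft turned.
_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_quadrature(samples):
    """Accumulate position from a sequence of sampled (A<<1 | B) states."""
    pos = 0
    for prev, cur in zip(samples, samples[1:]):
        pos += _DELTA.get((prev, cur), 0)  # illegal 2-bit jumps are ignored
    return pos

# One full cycle forward (4 edges), then one edge backward:
print(count_quadrature([0b00, 0b01, 0b11, 0b10, 0b00, 0b10]))  # 3
```

Doing this in software means sampling fast enough to never miss an edge - which is precisely why you want it done in hardware on a 5 W SoC.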
The Vortex SoC's tend to have 40 or more GPIO pins straight from the SoC, capable of ISA(bus)-level speed. Plus an almost complete legacy PC integrated inside. All of that in about 5 W of board-level consumption (3 W without a VGA). The number of GPIO pins is obviously limited by a particular board design, and some GPIO pins are shared with configurable peripherals (LPT, UART and many others) - but generally on Vortex-based embedded motherboards you get 16 dedicated GPIO pins on a dual-row 2mm header; on more modern SoC versions these are capable of individual per-pin PWM.
I seem to have heard that the 86duino is using DOS (FreeDOS?), rather than Linux, as its OS on the inside. Which might be interesting for real-time work, if you're only interested in the Arduino-level API and don't need the "modern OS goodies". While Intel tends to stuff ACPI and UEFI everywhere they can (oh, the joys of stuff done in SMI, which hampers your real-time response), the Vortex from DMP is principally a trusty old 486, where you still know what to expect from the busses and peripherals.
But other than that, you can run Linux on any generic Vortex-based embedded PC motherboard, or a number of other traditional RTOS brands. I agree that when shopping for hardware for your favourite RTOS, the x86 PC baggage may not appeal to you :-)
As for Linux on the Vortex, I believe Debian is still compiled for a 386 (well, maybe 486 nowadays) - definitely Debian 6 Squeeze is. You can install Debian Squeeze straight away on a Vortex86DX and above. On a Vortex86SX (to save a few cents), you need a kernel with FPU emulation enabled (the user space stays the same). All the other major distros rely on i686 hardware, so you cannot use them on the Vortex without a recompilation from source.
To me, the only possible reason to use the 86duino (rather than a more generic Vortex-based board) is cost. The 86duino is cheaper. And then there's the breadboard :-) Other than that, the full-blown Vortex-based boards are better equipped with ready-to-use computer peripherials, such as RS232 or RS485 on DB9 sockets, USB ports, disk drive connectors and such. It really feels like lego blocks - an impression supported by the colourful sockets used by ICOP on their boards :-)
Price is not the only aspect...
It's one of the first wave of 802.11ac routers, which typically cost around 200 USD around here. As far as I know, the OpenWRT "supported hardware" page lists none of the existing -ac models (e.g. ASUS) as supported. I can see in the OpenWRT forums that some people have just managed to make the new Atheros chips with -ac support (ath10k driver, qca988x hardware) work in OpenWRT at a basic level: driver + hostapd. This happened shortly before X-mas 2013. If the supposed support and assistance from Linksys helps push Atheros ath10k 802.11ac into the mainstream, including proper configuration methods in the UCI subsystem and proper documentation, kudos for that. If someone wants that guarantee of OpenWRT support in newly purchased hardware, that's fine.
As for the price... at the moment, for my needs = basic indoor coverage, I'll stick to TP-Link. The basic model TL-WR741ND costs about 27 USD around here. I can spend another 10 USD for an extra mains adaptor. I can also run the router for a while, then say "who cares about warranty" and replace the cheap capacitors inside with solid polymer and MLCC. The latest generation of TP-Link AP's is significantly cleaner and emptier on the inside: there are fewer chips, electrolytics and buck converters, and the current Atheros chipset runs pretty cool. Once the capacitors are beefed up, this is likely to have a pretty long service life. And all the recent TP-Links, including the higher-end dual-band WDR models (802.11n), are supported by OpenWRT.
The one thing that I don't like about the top-end dual-radio SoHo AP's (including TP-Link) is that the two radios (2.4 and 5 GHz) share common antenna ports for the two bands - so you can have simultaneous traffic on both radios (NIC's visible on the PCI bus inside), but via a single set of shared dual-band antennas. Dual-band antennas are expensive and technically almost impossible to make right - it comes down to basic wavelength physics. Splitters are also difficult and expensive to make.
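The wavelength physics is easy to see on the back of an envelope. A sketch (assuming the simplest case of a quarter-wave monopole element; real antenna designs are of course more elaborate, but the scale mismatch stays):

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_mm(freq_hz):
    """Resonant quarter-wavelength of a simple monopole element, in mm."""
    return C / freq_hz / 4 * 1000

print(round(quarter_wave_mm(2.4e9), 1))  # ~31.2 mm for the 2.4 GHz band
print(round(quarter_wave_mm(5.5e9), 1))  # ~13.6 mm for the 5 GHz band
```

An element resonant at one band is more than twice the wrong length for the other - which is why a single shared antenna is always a compromise in at least one band, and why separate per-band ports would be such good news.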
If the new WRT54G-AC is going to have 4 antenna ports, that might as well mean separate ports for 2.4 and 5 GHz (times 2 per band for MIMO) - that would be excellent news, as it would allow you to use decent single-band antennas for either frequency band.
I actually used to consider hacking some current TP-Link to add separate antenna outputs on my own - the radio paths can be clearly identified on the PCB from the radio power stages to the passive crossover that mixes them into the shared antenna port. If the separate per-band antenna ports should take off across the industry, that would be good news.
The separate per-band antenna ports might well be worth the money for some users - they are a way to achieve best-in-class and customizable radio coverage in both bands.
And if these were really four dual-band antenna ports, standing for 4T4R MIMO, that would also be a first in its class. Although 4T4R MIMO is theoretically supported by 802.11n already, I've never seen actual hardware that supports it.
P.S.: I just wish anyone would kill the CRDA...
69 Factorial on a TI-25 - those were the days...
When I was a pre-school kid, in the early eighties in then-communist Czechoslovakia, my father (a mechanical engineer by education) used to have a TI-25. God knows how he got it - probably as a gift from some foreign supplier. I still remember how I was attracted to the magical green button on the otherwise black keyboard, while I couldn't count at all yet. I guess it was even before digital wrist-watches and colour TV sets (in our household anyway). I knew where my dad kept the calculator, but the shelf was too high for me to reach (and tampering was forbidden). Then gradually, as I got my wits together, my father used to let me use it a bit. And I had to protect it from my younger sisters. Then I used to carry it along to school every day, during the later eighties and throughout the nineties (after the wall came down). Even throughout the nineties its all-black design (now noticeably battered) looked slim and cool compared to the grey Japanese mainstream that had flooded our market by then. I don't recall exactly anymore how and when I lost it; I guess it was in about 2004, when I lost my briefcase on the job to a random thief... Brings back a lot of childhood memories. 69 factorial took about 6 seconds.
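Incidentally, 69! was the classic calculator stress test because it's the largest factorial that fits in a display capped at 9.999e99 - easy to verify (in Python, which the TI-25 obviously never ran):

```python
import math

f69 = math.factorial(69)
f70 = math.factorial(70)
print(len(str(f69)))  # 99 digits: about 1.711e98, still under 1e100
print(len(str(f70)))  # 101 digits: 70! overflows a 2-digit exponent
```

So 70! would have given that green-buttoned machine an error, no matter how long it crunched.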
Acer in trouble? sad news
I like Acer exactly for selling cheap notebook PC's with "no frills" - just the right set of features. I prefer Intel-based notebooks with chipset-integrated Intel graphics and Intel or Atheros WiFi. 1280x800 used to be a plausible display resolution, before 1366x768 plagued that market segment. Acer traditionally uses a very basic BIOS, with no proprietary "addon MCU" garbage on the motherboard. Compared to that, I've seen several design-level cockups of that category in IBM/Lenovo machines, and generally all sorts of twisted addons or counter-ergonomic "improvements" in Compaq/HP et al. I like the vanilla / "quality no-name" feel of the Acer machines. Whether they're actually made by Compal, Wistron, Foxconn or whoever doesn't seem to make too much of a difference.
In recent years I've ushered maybe 5 or 6 Acer notebooks into our extended family, and as far as I know all of them work to this day; the oldest has been in service for 5 years and I've been dragging it to work and back home every day. I'm on my third carrying bag, second power adaptor and second disk drive, and the notebook still works fine. No broken hinges or whatever, despite the case looking like "cheap plastic". When the CCFL backlight tubes wear out, which should be soon now, I'm considering replacing just the tubes...
I recall one minor display glitch on a particular Acer notebook model, where some power decoupling capacitors on the display PCB got optimized away; combined with poor 3.3V power rail transmission (two tiny pins in the internal LVDS connector), this resulted in unreliable display startup that was difficult to reproduce. But I fixed that, and otherwise they're pretty reliable.
Hard drives are a notorious pain, but that's down to the HDD brands and their developments - the notebook makers are hardly to blame. I may prefer Seagate over other brands, but that's just my personal opinion.
All "my" Acer notebooks so far had the classic "beveled" keyboard. It's sad that the whole notebook market has shifted to the ugly flat "chiclet" keyboards - looks like another counter-ergonomic twist of fashion, following an apparent general PC hardware marketing trend that mandates something like "users can't really type anymore, so they won't appreciate a real keyboard". First the displays, now the keyboards...
Makes me wonder if the XP "end of updates" finally improves the PC sales numbers :-) Since solid polymer caps and LED-backlit displays, a well-made PC can last forever...
Re: yes I do feel targetted
Yesterday I went googling for some milling cutter tools for a hand-held woodwork router. In my mother language, which is not English. Guess what - as I was typing my earlier litany on AI at El Reg, the Google Ad bar on top of the page was flashing some hobby cutter sets at me - pointing me to e-shops in my country. Later yesterday I went googling for a somewhat specific sleeping mattress. Guess what the ad bar shows now... Well at least it's not showing my own employer's ads anymore (which used to be the case for the last half a year).
How long till consciousness
There's a growing body of research and knowledge on internal brain functioning and organization: composition of cortical columns, the various neuron species in a biological brain, a coarse global wiring schematic, knowledge of specialized subsystems, knowledge that in some areas the columns "switch purposes", plasticity of the brain, influences from the physical level (various firing/detection thresholds influenced by levels of chemicals, diseases etc.), control of and feedback from endocrine glands.
In terms of computer-based modeling, some scientific teams with origins in biology and neuro-medicine approach this by trying to simulate the biological brain as precisely as possible, i.e. computationally simulate the transfer functions and behaviors of the neurons at the maximum level of detail, as it is recognized that the pesky low-level details *do* matter - they do have an impact on overall brain functioning at the macro perspective.
Other teams (with a knowledge-engineering angle) seem to be more focused on computational performance and cunning topology (with cognitive functionality in mind), taking some inspiration from biology (the introduction of spiking neurons a few years ago) but not necessarily wasting effort on "maximum-fidelity" modeling of the biological brain...
Google has taught its neural network to classify objects based on their visual and linguistic descriptions combined. It's a neural network, not an old-school AI search term classification engine (which was essentially a network database). This artificial neural network has an inherent neural-style memory with links to external data and BLOB storage, it can classify fresh input data and can retrieve search results based on queries...
The neural network does not yet have a "flow of thought", a sense of goal or purpose to actively follow, a will, or even a possibility to take autonomous action. Or so I hope...
Makes me wonder if it would be possible to implement something resembling "flow of thought" without an active will / survival instinct or some such. There is a rudimentary neural engine, capable of sorting and searching visual+linguistic objects and concepts. Perhaps abstract concepts are not so far away. Next, implement associative search capability in that long-term "neural object storage" (maybe it's already there), add some short-term memory (for "immediate attention point"), maybe a filter of some sort (able to limit the "focus" to an object or area) and chain them in a feedback loop. Suppose the "immediate attention" is on a particular object. The associative memory offers a handful of associations, of which the filter/combiner picks a particular area/concept/object. This gets fed back into the "immediate attention" cell. Flow of thought anyone?
Makes me wonder if this would work without sensory input. Maybe add some relevant input channels to the "filter" stage in the loop (call it a "combiner", op-amp style). Or turn it inside out, and consider it a Kalman filter made of neural building blocks... Not sure about the purpose or use of this arrangement. Perhaps to extract a model of the sensed reality in terms of objects and concepts, and suggest relevant "mental associations" and possible future developments of the current situation? A mind is probably much more complicated than that...
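As a toy illustration of the feedback loop described above - associative memory, a filter picking one candidate, and the choice fed back into the "immediate attention" cell - here's a minimal Python sketch. All concepts and the association table are invented for the example, and the "filter" is just a seeded random pick:

```python
import random

# Made-up associative memory: each concept links to related concepts.
ASSOCIATIONS = {
    "street": ["car", "tree", "sidewalk"],
    "car": ["wheel", "street", "driver"],
    "tree": ["leaf", "street"],
    "sidewalk": ["pedestrian", "street"],
    "wheel": ["car"],
    "driver": ["car", "pedestrian"],
    "leaf": ["tree"],
    "pedestrian": ["sidewalk", "driver"],
}

def think(start, steps, seed=0):
    """Walk the association graph: the 'immediate attention' cell holds one
    concept, the associative memory offers candidates, a (here: random)
    filter/combiner picks one, and the pick is fed back as the new focus."""
    rng = random.Random(seed)
    attention = start
    trail = [attention]
    for _ in range(steps):
        candidates = ASSOCIATIONS[attention]
        attention = rng.choice(candidates)  # the "filter/combiner" stage
        trail.append(attention)
    return trail

print(think("street", 6))
```

A real implementation would replace the random pick with something informed by sensory input and context, but even this trivial loop produces a meandering "train of associations" rather than a static lookup.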
Perhaps a simple "flow of thought demonstrator" could be built with much less computational power and "inter-neuron communications bandwidth" than traditionally quoted for a human-level brain. If some biological baggage got "optimized away", perhaps some interesting functionality could be reached "cheaper".
Scary thoughts. A terminator face fits the topic even better than black helicopters.
Re: The real inspector gadget? (classification of live video)
At the moment the system could probably recognize some objects in a static way, looking at the video stream as a slide show of static images.
A proper implementation of "recognizing a crime just happening" would require the system to recognize and classify motions / actions happening in time, in a video stream, preferably in real time. Probably not implemented yet. It would be the next level, for performance reasons if nothing else. A very logical next step...
Makes me wonder how much information this image classification system can extract from a photo. Break down the photo into a collection or a tree of objects: there's a street with some trees, people on the sidewalk, some cars, and there's a guy swinging a baseball bat (note: try that as a query to Google Images). A human brain would automatically pop out the eyeballs: what? Right there on a street? What or whom is he targeting? Does it look like aggression? ... there are a lot of inherent defensive reflexes, experience-based context, attention to motion and emotional aspects in a human brain, which a relatively spartan neural network trained for automated classification of static images may not possess. Not right now, at least...
Will spammers be able to manipulate that?
Now... when this goes live in production in the Google Images back end, would it be possible to google-bomb it to return pr0n images for some harmless queries?
Or, maybe Google could use it to *detect* such google bombing attempts :-)
Mixed feelings... am I missing something here?
This sounds odd. Simply advertising someone else's prefix would point the whole world (or a big part thereof) to *you*. If you were a "stub network" with no other connectivity, you wouldn't be able to forward the traffic to its actual destination (unless you were able to tunnel it to another AS, unaffected by your BGP injection attack).
Target a single website and present your own mockup, say for phishing purposes? Maybe. You'd get caught and/or disconnected soon, owing to the havoc you'd cause.
Cause havoc by making lots of servers inaccessible? Piece of cake. Good for DoS attacks.
After inspection, redirect traffic to its rightful destination? That's difficult. You'd need secondary connectivity able to take the load. For a small target network with little traffic, a tunnel to someplace else might cut it. In order to re-route some high-volume network, you'd need a thick native link - effectively you'd need to be a transit operator. And you'd probably want to fool just a relatively limited perimeter of your peers (based on distance metric) into thinking that you are the actual origin - in principle, if you fooled the whole internet, you wouldn't be able to forward the traffic to its rightful destination. You need a carefully crafted local routing anomaly, which might be difficult to achieve.
And, in general, you wouldn't be able to hijack traffic flowing in both directions (such as to wiretap a phone call in full duplex), unless you did the BGP hijacking trick in *both* directions simultaneously: against both ends of the sessions you try to wiretap. Hijacking a single BGP prefix gives you just one direction of the traffic flow.
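One mechanical detail worth spelling out: routers select routes by longest-prefix match, with no notion of legitimacy, which is why a hijacker announcing a more-specific prefix attracts the traffic even while the legitimate origin keeps announcing. A minimal sketch (prefixes and AS numbers are made up, from documentation/private ranges):

```python
import ipaddress

# Two overlapping announcements in a simplified routing table:
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
    ipaddress.ip_network("203.0.113.128/25"): "AS64666 (hijacker)",
}

def best_route(dst):
    """Pick the matching entry with the longest prefix, as a router would."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if dst in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.200"))  # falls into the hijacker's /25
print(best_route("203.0.113.5"))    # still reaches the legitimate /24
```

So a hijack of a more-specific is "self-propagating" as long as it's heard at all; the only defenses are filtering and origin validation, not the protocol itself.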
Doesn't sound like something very useful for anything except a massive and short-lived DoS attack.
Unless you have your hijacking gear installed in a big transit operator's backbone routers.
Who would you have to be, to be in that position :-)
Considering the need for a "local routing anomaly", how could the attacker's target network, somewhere in the global internet, usefully check BGP for its own routing advertisements? A single check at an available nearby point wouldn't do. You'd have to check your prefix at a number of routers worldwide and analyze the "spatial propagation" for anomalies in the distance metric... hardly feasible, unless you're Google.
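That multi-vantage-point check could in principle be automated: compare the origin AS of your prefix as seen from several collectors and flag disagreements. A toy sketch with invented collector data - a real implementation would pull snapshots from public route collectors (RIPE RIS, RouteViews) rather than a hardcoded dict:

```python
# Hypothetical best-path snapshots for the victim's prefix (made-up data):
observations = {
    "collector-us":   {"prefix": "203.0.113.0/24", "origin_as": 64500},
    "collector-eu":   {"prefix": "203.0.113.0/24", "origin_as": 64500},
    "collector-asia": {"prefix": "203.0.113.0/24", "origin_as": 64666},
}

EXPECTED_ORIGIN = 64500  # the AS that legitimately announces the prefix

def anomalous_vantage_points(obs, expected):
    """Flag collectors whose best path originates from an unexpected AS -
    the 'spatial propagation' check described above, in miniature."""
    return sorted(name for name, o in obs.items()
                  if o["origin_as"] != expected)

print(anomalous_vantage_points(observations, EXPECTED_ORIGIN))
```

A localized hijack shows up exactly like this: a subset of vantage points disagreeing about the origin, while the rest of the world still sees the legitimate announcement.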
Then again the threat is probably real, as a number of people worldwide apparently work towards a more secure BGP. There is a decade-old standard called S-BGP... which probably hasn't reached universal use, if BGP hijacking is nowadays still (or ever more) in vogue...
ReactOS or Linux
Maybe the Indian banks should fund some ReactOS developers... or just migrate to Linux. The pain might be on par with migrating to Win8. Linux can run an office suite, Linux can run an SSH session to the back-end mainframe, Linux can run a browser, Java apps run just fine in Linux... so unless the bank has lots of software written in MS .NET or ActiveX, migration shouldn't hurt all that much.
The one area where the Win32 work-alikes (ReactOS and Wine) lag behind true MS Windows is all sorts of crypto/security services of the OS. This is a major drawback even for simple business apps written for the MS environment.
Re: I just bought myself one of these : (Vortex)
Exactly. For the few days since the Quark was announced, I've been itching with curiosity, how it compares to the Vortex. And I'll keep on itching for a few more weeks (months?) till I put my hands on the Quark and run nbench on the critter. For the time being, could anyone please publish the contents of /proc/cpuinfo ?
The Vortex boards typically eat something between 2.5 and 5 Watts (between 500 mA and 1 A from a 5V adaptor) depending on Vortex generation, additional chips on the board, the SSD used and CPU load (and OS power saving capabilities). The 5 W figure is for a well-equipped Vortex86DX board including Z9s graphics at full throttle. The MX/DX2 reportedly need less power. The Vortex SoC contains a programmable clock divider, so you can underclock it to 1/8th of the nominal clock - but the underclocking doesn't achieve much more than what Linux can achieve at full clock, merely by properly applying HLT when idle.
I'd expect the Galileo board to have a similar power consumption.
With switch-mode power supplies (the general cheap stuff on today's market), it's not a good idea to use a PSU or adaptor whose specced wattage exactly matches your device's consumption. It's advisable to use a PSU rated at two to three times the expected load. Hence, perhaps, the recommendation to use a 3 A adaptor - Intel knows that these adaptors are crap. You may know them from SoHo WiFi routers/APs: the router comes with a 2 A adaptor, likely extremely cheap, which only lasts for a year or two running 24/7. Then the electrolytic caps bid you farewell. Buy a 3 A adaptor for 10 USD and it will last forever.
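The sizing rule above is trivial arithmetic, but for completeness, here's the calculation spelled out (the 2.5x derating factor is just the middle of the two-to-three-times range suggested above):

```python
def recommended_adaptor_amps(load_watts, volts=5.0, derating=2.5):
    """Pick an adaptor current rating with generous headroom over the load,
    so the cheap switch-mode electronics never run near their limit."""
    return round(load_watts * derating / volts, 1)

# A well-equipped Vortex86DX board at full throttle draws about 5 W:
print(recommended_adaptor_amps(5.0))  # -> 2.5 (amps), so buy a 3 A adaptor
```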
Also note that Intel may be hinting that you need to reserve some PSU muscle for some "Arduino IO shields".
The DX vortex is made using 90nm lithography, not sure about the MX and DX2 (possibly the same). Makes me wonder what Intel could do, with all its x86 know-how, using a 32/22nm process. Run a 386 at 10 GHz maybe? I've been wondering about this for years before the Quark got announced, and now I'm puzzled - "so little so late".
I've been a Vortex fanboy for a few years - specifically, I'm a fan of the boards made by ICOP ( www.icop.com.tw ). Interestingly, I've seen other Vortex-based boards that are not as good, although using the same SoC. BTW, I don't think even the MX Vortex has MMX - it's more like an overclocked Cyrix 486, but with a well-behaved TSC and CMPXCHG, so it can run Windows XP (not Windows 7, sadly).
Vortex86SX and DX didn't have on-chip graphics, but the ICOP portfolio contains boards with or without VGA. ICOP uses a SiS/XGI Volari Z9s with 32 megs of dedicated video RAM; other board makers use different VGA chips, such as an old Lynx3D with 4 megs of video RAM. The Vortex86MX SoC (and the new Vortex86DX2) does have some VGA on chip, possibly not as powerful as the Z9s. The on-chip VGA uses shared memory (steals a few megs of system RAM). I understand that the system RAM on the Vortex chips is only 16 bits wide, which might be a factor in the CPU core's relatively poor performance.
The Geode has significantly better performance per clock tick than the Vortex86DX. The new DX2 should perform better than the older DX/MX cores (closer to the Geode). I expect the Quark at 400 MHz to be about as fast as an 800MHz Vortex86DX. The "Pascal CRT 200 MHz bug" occurs at around 400 MHz on the Vortex86DX.
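About that "Pascal CRT 200 MHz bug", for the curious: as commonly described, Borland's CRT unit calibrated its Delay() routine at startup by counting busy-loop iterations during one ~55 ms BIOS timer tick; the resulting iterations-per-millisecond quotient had to fit in 16 bits, and on fast CPUs the DIV overflows, killing the program with Runtime Error 200. A back-of-the-envelope model (the clocks-per-iteration figures are rough assumptions, not measured values):

```python
# Model of the Runtime Error 200 calibration overflow in Borland's CRT unit.
def crt_calibration_overflows(cpu_mhz, clocks_per_iter=3):
    """Does the Delay() calibration quotient overflow 16 bits on this CPU?"""
    iters_per_ms = cpu_mhz * 1_000_000 // clocks_per_iter // 1000
    return iters_per_ms > 0xFFFF  # quotient must fit in a 16-bit register

print(crt_calibration_overflows(100))                     # False: safe
print(crt_calibration_overflows(200))                     # True: RTE 200
print(crt_calibration_overflows(400, clocks_per_iter=6))  # slower core, same bug
```

At ~3 clocks per loop iteration the overflow lands right around 200 MHz, which is where the bug's nickname comes from; a core retiring the loop at roughly half that rate would hit it near 400 MHz, consistent with the Vortex86DX observation above.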
The Vortex SoC traditionally contained a dual-channel parallel IDE controller. This is nowadays still useful for ATA Flash drives of various form factors (including CompactFlash), but to attach some new off-the-shelf spinning rust, you need an active SATA/IDE converter... The new DX2 SoC features a native SATA port. Since the Vortex86DX, I believe, the second IDE channel can alternatively be configured as an SD controller.
Since Vortex86SX, the SoC has about 40 GPIO pins - the boards by ICOP typically have 16 GPIO pins on a connector (with ground and a power rail). The DX/MX/DX2 SoC can even run HW-generated PWM on the GPIO pins (each pin has its own individual PWM config). The only thing it's missing for general tinkering is possibly an on-chip multichannel ADC.
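For a feel of what GPIO tinkering on such a board looks like, here's a sketch of bit-banging one pin through x86 port I/O. The port addresses below are placeholders, NOT the real Vortex registers - check the DM&P/ICOP datasheet for the actual GPIO data/direction port map; the read/write callables abstract the actual port access (e.g. `os.pread`/`os.pwrite` on `/dev/port` as root):

```python
GPIO_DIR_PORT = 0x98   # hypothetical direction register address
GPIO_DATA_PORT = 0x78  # hypothetical data register address

def set_pin(write_byte, read_byte, pin, value):
    """Drive a single GPIO pin high or low via read-modify-write:
    first make the pin an output, then set or clear its data bit."""
    dirs = read_byte(GPIO_DIR_PORT)
    write_byte(GPIO_DIR_PORT, dirs | (1 << pin))  # pin -> output
    data = read_byte(GPIO_DATA_PORT)
    if value:
        data |= 1 << pin
    else:
        data &= ~(1 << pin) & 0xFF
    write_byte(GPIO_DATA_PORT, data)

# Fake port space so the sketch can be exercised without hardware:
ports = {GPIO_DIR_PORT: 0, GPIO_DATA_PORT: 0}
set_pin(ports.__setitem__, ports.__getitem__, 3, 1)
print(hex(ports[GPIO_DATA_PORT]))  # -> 0x8 (bit 3 set)
```

The SoC's hardware PWM would make the timing-critical part unnecessary for things like LED dimming or servo control - software only has to program the duty cycle, not toggle the pin.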
Since Vortex86SX, the SoC has two EHCI controllers (four ports of USB 2.0 host).
The MX/DX2 have on-chip HDA (audio).
All the Vortex SoC's have an on-chip 100Base-TX Ethernet MAC+PHY (the RDC R6040).
The SX/DX/DX2 have 4+ COM ports, one of them with RS485 auto RX/TX steering capability. (the MX has only 3 COM ports.) All of them have a good old-fashioned LPT (parallel printer port) with EPP/ECP capability, whatever twisted use you may have for that nowadays. Note that all the COM ports and LPT are on chip in the SoC - yet the SoC also has LPC, should the board maker want to expand the legacy stuff with an added SuperIO chip...
In terms of "system architecture feel", the Vortex reminds me of the 486 era. Simple hardware. Might be useful for realtime control (think of RTAI). There's a full-fledged ISA and a basic PCI (able to serve about 3 external devices). The DX2 has two lanes of PCI-e. The SX/DX/MX (not sure about the DX2) doesn't contain an IO/APIC, which means it's a bit short of IRQs, considering all the integrated peripherals. Yet all the integrated peripherals work fairly well. I've seen an odd collision or two: the PXE ROM is defunct if you enable the second EHCI, but both the second EHCI and the LAN work fine in Linux if you leave them both enabled (= as long as you don't need to boot from PXE). The BIOS doesn't provide ACPI, if memory serves. All the Vortex-based hardware uses AT-style power.
The SoC doesn't have an APIC, but there's an interesting twist to the (otherwise standard) AT-PIC: the SoC allows you to select edge-triggered or level-triggered logic individually for each interrupt channel. Not that I've ever had any use for that, but it might be interesting for some custom hacks (with a number of devices that need to share a single interrupt).
And, oh, the Vortex boards all have a BIOS, i.e. they can boot DOS, various indie bootloaders, and stand-alone bootable tools (think of Memtest86+). I've already mentioned PXE booting. You're free to insert your own bootable disk (SSD or magnetic), and some boards also contain an onboard SPI-based flash drive, which acts like a 4 meg floppy. The AMI BIOS in the ICOP boards allows you to configure a number of the SoC's obscure features, and can be accessed via a terminal on RS232 if the board is "headless" (no VGA).
In terms of features, compared to the Vortex, the Galileo board (the Quark?) seems underwhelming. Ahh yes, it's also cheaper... And I understand that it's the first kid in an upcoming family.
When I first read about the Quark, I immediately thought to myself "Vortex is in trouble". Looking at the Galileo, I think "not yet, maybe next time". We have yet to see how the Quark copes on the compatibility front etc., what novel quirks get discovered etc.
Re: price of the Vortex-based PC's
Where I live, the Vortex-based MiniPC's cost around 200 USD (the SX variant is cheaper but less useful).
An industrial motherboard could cost about the same - maybe more. Depends on form factor, Vortex generation and the board's additional features.
Re: Cut the blue wire
Actually the purple wire, for +5V standby power from an ATX PSU. Or just pull the mains cord (found outside the case).
In a laptop, remove the battery.
Re: Intel is dropping PCIe by 2016.
PCIe in mobile devices? Why bother, if it amounts to unnecessary processing overhead. True, Linus has commented that ARM SoC designers should make all the busses enumerable (PnP fashion), which combined with low pin count points to PCIe, rather than PCI... but Linus has his specific background and perspective. He's not exactly a mobile phone hardware developer.
Even SoC's for tablets are pretty much single-purpose.
Generic support for peripherals is needed in the industrial/embedded segments.
As for desktop / full-fledged notebook machines... if Intel thinks its own GPU is strong enough, why not skip those 16 lanes of PCIe straight from the CPU?
As for servers, a beefy PCI-e x8 is certainly useful.
I'm sure Intel knows better than to shoot itself in the foot. They'll keep PCI-e around where applicable and useful: multiple x1 channels from the south bridge, and maybe a couple lanes straight from the CPU socket in servers and high-performance desktops. I haven't yet noticed any future PCI-e replacement for x86 peripheral expansion.
Re: grain of salt
Thanks for your response - I don't have hands-on experience with IB, so I didn't know. I did have a feeling that with IB being so omnipresent in HPC, "node hot swap" would probably work well.
PCI-e is also inherently hot-swap capable and so is the Windows driver framework handling it - just my theoretical matrix crossconnect thing makes node hot-swap a bit more interesting :-)
And yes I'd really love to know how a "homebrew" ccNuma machine would cope with a node outage. If this can be handled, what OS is production-capable of that etc. Except I guess I'm off topic here, WRT the original article...
grain of salt
PCI-e over external cabling has been on the market for a couple of years - for external interconnect to DAS RAID boxes and for additional PCIe slots via external expansion boxes, the latter sometimes combined with PCI-e IOV. Besides IOV, there are also simple rackmount switches to connect multiple external expansion boxes to a single "external PCIe socket" on a host computer. Ever fancied an industrial PC with some 30 PCI-e slots? Well it's been available for a while... As for PCI-e generations, in the external DAS RAID boxes I've seen PCI-e 1.0 and 2.0 (Areca). Even the connectors seem to be somewhat standard (no idea about their name). The interface between a motherboard (PCI-e slot) and the cable is in the form of a tiny PCI-e expansion board - interestingly it doesn't carry a switch, it's just a dumb repeater, or a "re-driver" as Pericom calls the chips used on the board. Apparently the chips provide a signal boost / preemphasis / equalization for the relatively longer external metallic cabling.
As for HPC: apart from storage and outward networking, HPC typically requires low-latency memory-to-memory copy among the cluster nodes. The one thing that to me still seems to be missing, for bare PCI-e to successfully compete against IB in HPC, is some PCI-e switching silicon that would provide any-to-any (matrix style, non-blocking) host-to-host memory-to-memory DMA, combined with a greater number of host ports. IMO it wouldn't require a modification to the PCI-e standard: it would take some proprietary configurable switching matrix implemented in silicon, providing multiple MMIO windows with mailboxing and IRQ to each participating host, combined with OS drivers and management software that would interface to the HPC libraries, take care of addressing among the nodes, and maybe provide some user-friendly management of the cluster interconnect at the top end.
The switches currently on the market can do maybe 4 to 8 hosts of up to 16 lanes each, and the primary purpose is PCIe IOV (sharing of network and storage adapters), rather than direct host-to-host DMA. Check with PLX or Pericom. Perhaps it would be possible, with current silicon, to do the sort of a matrix DMA interconnect in a single chip, to cater for about 8 hosts of PCI-e x8 or x16. That's not too many nodes for an HPC cluster. For greater clusters, it would have to be cascadable. Oh wait - that probably wouldn't scale very well.
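To make the "MMIO windows with mailboxing and IRQ" proposal above a bit more concrete, here's a toy software model of the idea: each host owns a window into which peers DMA a descriptor and then ring a doorbell that would, in real silicon, raise an IRQ on the owning host. All class and method names are invented for illustration:

```python
class MailboxWindow:
    """Toy model of one host's MMIO aperture in the hypothetical DMA matrix."""

    def __init__(self, owner):
        self.owner = owner
        self.slots = []            # stands in for the MMIO aperture contents
        self.doorbell_rung = False

    def post(self, src_host, payload):
        """Peer side: DMA a message into the window, then ring the doorbell."""
        self.slots.append((src_host, payload))
        self.doorbell_rung = True  # hardware would raise an IRQ here

    def drain(self):
        """Owner side (IRQ handler): collect pending messages, clear doorbell."""
        msgs, self.slots = self.slots, []
        self.doorbell_rung = False
        return msgs

window = MailboxWindow(owner="node0")
window.post("node3", b"block of matrix data")
print(window.drain())  # -> [('node3', b'block of matrix data')]
```

The hard parts the toy skips are exactly where the silicon and driver work would be: address translation between hosts, flow control on the windows, and delivering the doorbell as a real interrupt.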
As for PCI-e huddling with the compute cores: the PCI-e actually has an interface called a "PCI-e root complex" or a "host bridge" to the host CPU's native "front side bus" or whatever it has. The PCI-e is CPU architecture agnostic - and has some traditional downsides, such as the single-root-bridge logical topology. No way for a native multi-root topology on PCI-e - that's why we need to invent some clumsy DMA matrix thing in the first place. And guess what: there's a bus that's closer to the CPU cores than the PCI-e. On AMD systems, this is called HyperTransport - reportedly with some roots in the Alpha lineage, and it predates PCI-e. Intel later introduced a work-alike called QPI. The internal interconnects between the cores in a CPU package (such as the SandyBridge ring) are possibly not native HT/QPI, but these cannot be tapped, so they don't really count. So we have HT/QPI to play with: these are the busses that handle the talks between CPU sockets on a multi-socket server motherboard. Think of a cache-coherent NUMA machine on a single motherboard. And guess what, HyperTransport can be used to link multiple motherboards together, to build an even bigger NUMA machine. There are practical products on the market for that: a company called NumaScale sells what they call a "NumaConnect" adaptor card, which plugs into an HTX slot (HyperTransport in a connector) on compatible motherboards. Interestingly, there is no switch; the NumaConnect card has 6 outside ports, which can be used to create a hypercube or multi-dimensional torus topology of a desired dimension.
The solution marketed by NumaScale uses HyperTransport to build a ccNuma machine = it keeps cache coherence across the NUMA nodes. There's a somewhat similar solution called HyperShare that seems to use a cache-incoherent approach... Either way it seems that memory-to-memory access between the nodes is an inherent essential feature.
I've never heard of Intel making its QPI available in a slot. PCI is originally an Intel child, if memory serves. Maybe that's a clue...
Makes me wonder how much sense all of this makes in terms of operational reliability and stability. Are the NumaScale and HyperShare clusters tolerant to outages? Can nodes be hot-swapped in and out at runtime? One part of the problem sure is support for CPU and memory hotplugging and fault isolation in the OS (Linux or other) - another problem may be at the bus level: how does the HT hypercube cope when a node or link goes out to lunch? Makes me wonder how my theoretical PCI-e "matrix DMA" solution would cope with that (perhaps each peer would appear as a hot-pluggable PCI device to a particular host, with surprise removal gracefully handled). Ethernet sure doesn't have a problem with that. Not so sure about IB.
Re: Evolution? Devolution!
I don't think it will ever be possible to correct the DNA in a fetus that's already started developing (cells splitting). Once the cells start splitting, you can only compensate for genetic defects (some protein or hormone missing or some such) by supplementing the missing bit in some other way. Correct me if I'm wrong there - and please elaborate on technological details :-) "Make a virus that can cut and paste the DNA in every individual cell at a very specific place in a very specific way" - doesn't sound realistic, the virus would have to be too complicated (carry along too much tooling and data).
It would seem more realistic to me to engineer a "fertilised egg" (the single initial cell with a full set of chromosomes) with a desired genome, and let that start splitting/developing into a fetus. I'd almost suggest to have a few eggs fertilised in vitro in a semi-natural way, and then select one whose genome looks best - but that would imply a non-destructive reading of a genome of that initial single cell, which again doesn't seem technically likely/feasible. Maybe let the egg split once, separate the two cells, destroy one for DNA analysis and let the other one develop into a fetus (thus effectively keeping one twin of two). Even a more problematic method would be to have a few early fetuses develop enough material for DNA analysis, and kill those you don't like. Starts to sound like a horror story...
Well actually we do already screen fetuses pre-natally for known genetic defects, and those diagnosed with serious defects are suggested for abortion. Various countries approach this in different ways, depending on the level of their healthcare system and general public opinion about abortions (yes it has a lot to do with religion). Yet based on what I know, those defects are either life-threatening already in early childhood or often directly prevent future reproduction of the individual - so these generally wouldn't proliferate in the gene pool either, even if not aborted artificially.
Looking at the "removal of natural selection" (or some particular pressures thereof) in a statistical way, the future of our society looks like another horror story. We don't even have to speak of genetic conditions that are directly life-threatening. Consider just some fairly harmless genetic traits that may e.g. make you less immune to a particular type of infection. Or may mean a stronger tendency to "auto-immune" / allergic responses (let's now abstract from the fact that some cases blamed vaguely on "auto-immune response" might actually be caused by undiscovered infections). Before modern medicine, even such "harmless" genetic features would statistically decrease your chances of survival. With modern medicine, much of this is treatable and gets passed on to future generations. Even genetic traits that might normally affect your survival *after* your successful reproduction would traditionally still hamper your ability to rear and support your offspring, hence reducing your offspring's chance for further reproduction... With modern medicine (and social support), this pressure is removed.
Modern medicine is expensive - depending on a particular country's social arrangement, modern healthcare either burdens the whole society by a special healthcare tax (e.g. many countries in Europe), possibly making doctors work a bit like mandatory conscripts for sub-prime wages (post-commie eastern Europe), or it's individually expensive and unavailable to lower-wage classes (many U.S. states and other countries).
Imagine a population of people who mostly wouldn't be able to reproduce in a natural way for one reason or another (infertility, babies growing too big to get born naturally, various lighter/treatable conditions in pregnancy that would mean trouble without modern healthcare) and permanently suffer from various non-lethal but onerous conditions throughout their childhood and especially adult life (it's likely to get worse with age).
A population of permanently suffering people, dependent on modern expensive healthcare. I fear that gradually, even with modern healthcare, the balance of natural-selection-based die-off could be restored. So that a great percentage of individuals born alive will die of disease or other medical conditions before getting "old", despite having the luxury of modern healthcare.
For how long have we had modern healthcare? Since 1900? Maybe more like since WW2, if you count antibiotics. That's just a few generations. In some respects, we're already less healthy than our ancestors. Take respiratory diseases, take fertility for instance. Some of this used to be explained by industrial pollution, but here where I live, many of the population health problems persist, even though industrial pollution has been greatly reduced over the last two decades or so. How long will it take, till the public health will degrade catastrophically, due to minor genetic-based imperfections getting accumulated due to the removal of "natural selection pressure"? A couple more human generations?
I recall a study on a particular species of butterflies, showing how a dark variant (mutation) has become prevalent in an area affected by some industry, in just a couple of years, just because the original lighter colour became better visible to its predators... and how the ratio turned back in a couple years, after the polluting industry was removed. That was also just a few insect generations.
I've noticed someone in this forum mention that people are getting gradually more intelligent. Never heard this opinion before. Educated, maybe. On the contrary, there's a popular opinion (too lazy to google for sources) that the most intelligent humans evolved during the ages of "natural selection pressure" - such as during the last ice age. And that indeed, since then, there's an evolutionary plateau in that respect - that pressure got removed, and the average IQ of the population is getting diluted (as much as I otherwise hate the IQ variable and having it individually measured and compared). It does make perfect sense. Life has still been a struggle for those 8000 years since the last ice age, but I guess it's become a lot less of a struggle in the last century or two - with industrialization, modern healthcare, modern agriculture.
I'm struggling not to get started about the growing concentration of production resources in the hands of global enterprises. About the abundance of and lack of use for human labour, college graduates etc. Heheh - and about how fragile such a society is.
What happens to modern agriculture and food supplies, when the oil runs out? How much more expensive will freight and horsepower become?
What happens if the modern society collapses for some other reason (perhaps just social events such as popular unrest, a series of revolutions) and the modern healthcare gets withdrawn, a couple generations down the road?
My answer: a more natural selection pressure will apply once again...
It's plenty of material for a couple more dystopian science fiction movies, with a socialist or radically capitalist background :-)
clean OS and hardware is possible
I believe Linux is generally pretty safe against spyware. That would be a good platform for an endpoint OS, getting rid of keyloggers and the like. As for clean hardware... suppose that Intel's on-chip IPMI/AMT is compromised. Suppose that the AMT-related autonomous backdoor exists even in Intel CPU and chipset variants that do not openly support AMT (for the sake of sales segmentation). There are other brands of CPU's, without inherent support for IPMI/AMT. And, based on what I've seen so far, I don't think such a backdoor would be very useful or reliable, given how buggy IPMI/AMT is...
Revamping metallic SAS one more time
Amazing. Technological development is still blazing past. I haven't been watching the news for some time - and suddenly 12Gb SAS is here. Makes me wonder if 12Gb SAS is going to be the last SAS version running on metallic interconnects (just like U320 was the last parallel SCSI generation).
12Gb SAS sure is a desperately needed update to the disk drive interconnect - otherwise the SSD's would all migrate to direct PCI-e attachment (and the whole SAS market would vanish in a couple of years). Support for 12 Gb from LSI is important in that LSI is a key traditional SAS chipset supplier - for HBA's/initiators and targets (RAID controllers and disk drives), also providing SAS expanders and switches. But speaking of SAS chipset makers, Marvell and PMC Sierra (ex-Avago/ex-Agilent) have also announced 12Gb products.
And, Windows 2003 Server is 5.2. Makes a hell of a difference from XP in some drivers (and no, just changing that version string in the INF file often doesn't get the job done - some kernel API's really are slightly different).
Makes me wonder what Windows Server 2008 reports (no live machine at hand).
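For anyone else without a live machine at hand, the marketing-name-to-NT-kernel-version mapping is easy to jot down as a lookup table (a quick sketch; the version numbers themselves are well documented):

```python
# Marketing names vs. the NT kernel versions reported by e.g. "ver"
# or GetVersionEx() - the thing some INF files and kernel APIs key on.
NT_VERSIONS = {
    "Windows 2000": (5, 0),
    "Windows XP": (5, 1),
    "Windows Server 2003": (5, 2),
    "Windows Vista": (6, 0),
    "Windows Server 2008": (6, 0),    # same kernel as Vista
    "Windows 7": (6, 1),
    "Windows Server 2008 R2": (6, 1),
    "Windows 8": (6, 2),
    "Windows 8.1": (6, 3),
}

major, minor = NT_VERSIONS["Windows Server 2008"]
print(f"Server 2008 reports NT {major}.{minor}")   # NT 6.0
```

So 2008 sits on the Vista kernel, just as 2003 sat on 5.2 next to XP's 5.1 - same pattern, one generation later.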
C2D came along with solid-polymer caps (=> capacitor plague was over)
Yes, Core2Duo with 2 GB of RAM is quite good enough for general office work, web browsing, movie playback and the like. And the 45nm generation of Core2 doesn't even eat all that much power. Any CPU's before that, back to say the Pentium 4 heating radiators, have somewhat less performance and eat more electricity. But the C2D is quite okay. And there was another major change, that came along with the C2D: it was the solid-polymer electrolytic caps. This ended the "capacitor plague" era when motherboards only lasted like 2-4 years. With solid-polymer caps and C2D, motherboards do survive the warranty, do survive twice the warranty, do survive much longer. I work for an IPC assembly shop (which is a somewhat conservative industry) and in terms of desktop-style and PICMG motherboards, there are hardly any boards RMA'ed with solid-polymer caps on them.
20W for that much neural computing horsepower...
To me, it seems pretty wasteful to emulate a rather fuzzy/analog neural network on a razor-sharp vaguely von-Neumannian architecture (albeit somewhat massively parallel). Perhaps just as wasteful as trying to teach a human brain (a vast neural network) to do razor-sharp formal logic - with all its massive ability to create fuzzy and biased associative memories and search/walk the associations afterwards, with all its "common sense" being often at odds with strictly logical reasoning ;-)
HPC = using electricity to produce heat... co-generate heat and crunching horsepower?
Not my invention, I admit - was it here, where I read about a supercomputer that had a secondary function of a heat source for a university campus?
Now if you needed to build a supercomputer with hundreds of MW of heat dissipation... you could just as well use it to provide central heating to a fairly big city, or several smaller ones. For example, there's a coal-fired 500MW power plant about 40 km from Prague, with a heat pipeline going all the way to Prague - the waste heat is used for central heating. Not sure if the pipeline still works that way; it was built back in the commie era, when such big things were easier to do...
The trouble with waste heat is that it tends to be available at relatively low "lukewarm" temperatures. Computers certainly don't appreciate temperatures above say 40 degrees. Then again, there are heating systems that can work with about 30 degrees Celsius at their input. Floor heating probably sums it up - not much of a temperature, LAAARGE surface area. No need to heat the water up to 70 degrees or so.
The obvious implication is: generate heat locally, so that you don't need to transport it over a long distance, which is prone to thermal losses and mechanical resistance (for a given thermal power throughput, the lower the temperature, the larger the volume of media per second).
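To put some rough numbers on that implication (a back-of-the-envelope sketch, assuming plain water as the medium and textbook constants):

```python
# Volume of water per second needed to move a given thermal power at a
# given temperature difference:  P = rho * c * flow * dT
RHO_WATER = 1000.0      # kg/m^3
C_WATER = 4186.0        # J/(kg*K), specific heat of water

def water_flow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric flow [m^3/s] to transport power_w at delta_t_k."""
    return power_w / (RHO_WATER * C_WATER * delta_t_k)

# The 500 MW pipeline at classic central-heating temperatures (dT ~ 40 K)
hot = water_flow_m3s(500e6, 40)
# The same power at "lukewarm" computer-grade water (dT ~ 10 K)
lukewarm = water_flow_m3s(500e6, 10)
print(f"dT=40K: {hot:.1f} m^3/s, dT=10K: {lukewarm:.1f} m^3/s")
```

Quartering the temperature difference quadruples the volume you have to pump - which is exactly why lukewarm waste heat doesn't travel well.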
The final conclusion: sell small electric heat generators, maybe starting with a few kW of continuous power consumption, electric-powered and interconnected by fibre optics, where the principle behind generating the heat is HPC number crunching. Build a massive distributed supercomputer :-) Yes, I know there are some show-stopping drawbacks - but the concept is fun, isn't it? :-)
Re: Asus mainboards?
I recall trying to max out 10Gb Eth several years ago. I had two Myricom NIC's in two machines (MYRI-10G gen."A" at the time), back to back on the 10Gb link. For a storage back-end, I was using an Areca RAID, current at that time (IOP341 CPU) and a handful of SATA drives. I didn't bother to try random load straight from the drives - they certainly didn't have the IOps.... They had enough oomph for sequential load. I used the STGT iSCSI target under Linux. Note that STGT lives in the kernel (no copy-to-user). The testbed machines had some Merom-core CPU's in an i945 chipset. STGT can be configured to use buffering (IO cache) in the Linux it lives within, or to avoid it (direct mode). I had jumbo frames configured (9k). On the initiator side, I just did a 'cp /dev/sda /dev/null' which unfortunately runs in user space...
For sequential load, I was able to squeeze about 500 MBps from the setup, but only in direct mode. Sequential load in buffered mode yielded about 300 MBps. That is simplex. The Areca alone gave about 800 MBps on the target machine.
Random IO faces several bottlenecks: disk IOps, IOps throughput of the target machine's VM/buffering, IOps throughput of the 10Gb cards chosen, vs. transaction size and buffer depth...
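As a sanity check on the sequential figures, here's a rough sketch of the theoretical TCP payload rate of 10GbE with 9k jumbo frames (header sizes are standard; the exact iSCSI PDU overhead would shave off a bit more):

```python
# Was the 500 MBps figure network-limited? Compute the theoretical
# TCP payload rate of 10GbE with 9000-byte jumbo frames.
LINE_RATE = 10e9 / 8            # 1250 MB/s raw
MTU = 9000                      # jumbo frame
IP_TCP_HDR = 20 + 20            # IPv4 + TCP headers (no options)
ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap

payload = MTU - IP_TCP_HDR
wire_bytes = MTU + ETH_OVERHEAD
efficiency = payload / wire_bytes
max_payload_mbps = LINE_RATE * efficiency / 1e6
print(f"efficiency {efficiency:.3f}, max ~{max_payload_mbps:.0f} MB/s")
# ~1239 MB/s of TCP payload - so 500 MBps was nowhere near the wire
# limit; the bottleneck was on the storage / software side.
```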
Did I hear a rumor about "moving the VRM on-chip" ?
A few days ago, there were rumours that Haswell would have more say in how its input rail power is regulated - beyond the VID pins of today. Some even suggested that the VRM would move into the CPU package. Does anyone know further details? This would be more of a revolution than skin-tone enhancements...
It was an interesting and insightful comment nonetheless - thanks for that.
Re: Isn't that many "recent technologies" are crap?
My point exactly. I'm a geek by profession, and I've had a self-perception of being pretty conservative since my twenties, maybe. I try to understand and leverage the underlying basics, rather than adopt any shiny new toy (after a few such toys, it gets old). A lot of the "new" technologies *are* crap. New things introduced for the sake of novelty / eyewash / sales pitch, rather than utility / progress / improvement. How many times can you sell an Office suite, with just a new version sticker on it? A new version of a windowing OS? A "radically novel" user interface? Yet another software development environment? Increasingly nasty licensing schemes and vendor lock-in? Some of the new stuff feels increasingly regressive...
I have the luxury of working in a small company, where I'm free to study and try whatever I want.
I also meet IT and "embedded system integration" pro's in other companies. Speaking of training, in my experience, many of them could use training in "the underlying basics". Stuff as basic as Ethernet, TCP/IP, dynamic behavior of disk drives, disk partitioning and file systems (UNIX/Linux angle), vendor-independent basic OS and networking concepts - just to get some common sense. But maybe it's indeed down to everyone's personal eagerness to "peek under the hood", or ability to take a distance from the product you've just purchased. Or, down to chance, down to opportunity to work with different technologies...
Most of the training commercially available is heavily vendor-specific and brainwashy (Cisco, Microsoft). The other side of things is undoubtedly "freedom to starve to death", as Sir Terry would put it... Once you reach the "intermediate sorcerer" level, you can become a freelancer, pretty much on your own.
In my part of the world, there are a number of "training products" apparently developed with one key goal in mind: to get an EU grant from the "training/education funds of the EU". Hardly any IT training in there, or any other rigorous professional training. Mostly soft skills. The non-IT colleagues attend those trainings voluntarily, even happily. I have other, more entertaining or useful ways of wasting time / procrastinating - or study :-)
Uh oh, whatever that means for CFast ?
A while ago, I was delighted to test-drive my first CFast card. I even managed to find some card readers, sold in Germany and elsewhere under the DeLock brand... CFast seems like a neatly open, future-proof and fairly obvious successor to CompactFlash. I'm starting to wonder if CFast is going to survive, or if it turns out to be a dead end... (owing to big-name camera vendors' marketing decisions). It would certainly be wonderful to have CFast as a ubiquitous boot drive form factor in embedded PC's for the years to come (instead of CompactFlash, now that parallel IDE is finally dead in new PC chipsets).
The SATA interface spec is nowadays capable of 600 MBps. The CFast card that I held in my hands (SLC-based, by Innodisk) was capable of about 90 MBps sequential (Linux dd-like test), featuring a 1st-gen 150 MBps SATA interface.
A couple questions come to mind... do the machines contain a Flash-based data logger, keeping track of temperatures over the service life / warranty period of the machine? As part of AMT 12.0 maybe? :-)
High-temp computing is interesting stuff. If you pay attention to board-level design, there is a lot you can do to help your design survive longer in operation at higher (broader) temperatures. And there's a lot you can *spoil* by careless board-level and system-level design.
MLCC capacitors (for power blocking) are made of several dielectric materials with quite different sensitivity to temperature - even among "comparable" models in the range of tens of uF per unit, typically used to block low-volt high-amp CPU power. Some drop to ~40% of their capacity at -20 °C, some are much more stable.
Cheap aluminium electrolytic capacitors also lose capacity at low temperatures and their ESR increases maybe tenfold, but even in conventional Al elyts the chemistry can be slightly modified (alcohol added?) to make them perform better at low temperatures. (I hope the capacitor plague is over by now.) More importantly nowadays, solid-polymer elyts don't seem to have that low-temp problem at all, and they don't dry out at higher temperatures either - they last much longer. The downside is that solid-polymer elyts are not made for voltages above say 30V - so you cannot use them at mains PSU primaries :-( So the PSU may well be the weakest spot in any computer, especially the PSU primary, which must contain conventional Al elyts and is typically a point of hot air exhaust, which certainly doesn't help the Al caps' longevity.
Next, in order to compensate for low-temp effects and gradual ageing, there may be room for designing in more capacity on a motherboard, just to be on the safe side, to have some headroom. Connecting more caps in parallel may bring the added bonus of decreasing the actual ripple current per capacitor, which decreases the capacitors' internal heating -> allows for operation in a higher ambient temperature. (The effect of cap addition -> ESR decrease might actually translate quadratically into the temperature difference / derating, which then translates into the cap's service life along some vaguely exponential curve.)
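The quadratic effect of paralleling caps can be sketched in a few lines (hypothetical numbers: 20 A RMS ripple and 10 mOhm ESR per cap, chosen purely for illustration):

```python
# Each of n identical parallel caps sees ripple/n of the current, so
# per-cap dissipation (I^2 * ESR) drops quadratically with n.
def per_cap_heating_w(ripple_a: float, esr_ohm: float, n_caps: int) -> float:
    """Dissipation in one of n identical caps sharing ripple_a RMS."""
    return (ripple_a / n_caps) ** 2 * esr_ohm

# Hypothetical VRM output rail: 20 A RMS ripple, 10 mOhm ESR per cap
one = per_cap_heating_w(20, 0.010, 1)    # a single cap cooking itself
four = per_cap_heating_w(20, 0.010, 4)   # four caps, each running cool
print(one, four, one / four)             # 16x less heat per cap with 4 caps
```

Sixteen times less internal heating per cap just from quadrupling the count - which is where the derating headroom (and the longer service life) comes from.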
Effectively it all translates into attention to detail, and into board space occupied by the caps and by on-PCB heat dissipation space. Any additional heatsinks (e.g. on VRM FET's) mean mechanical design, which means substantial added cost (apart from designer headache) - so the board designers typically let the FET's run rather hot, because they're relatively insensitive... And space is always at a premium, especially in the ever-more-compact datacenter gear.
Let me suggest an interesting concept: if you could let your gear run at hotter ambient temperatures, you wouldn't have to use air conditioning (artificial freezing) all that much, you could use plain heat transfer more of the time. As far as I can tell, higher ambient temperatures are prevented by relatively sharp "temperature gradients" in the hardware = poor heatsinking.
Heatsinking seems to be a nasty can of worms for PCB and case designers. Especially fanless designs are highly suspicious in principle... It's interesting to see how different hardware vendors deal with this, deduce who's willing to take radical + effective + systemic steps, and who resorts to eyewash... (put shiny galvanized heatsinks on chipset + VRM, run some pointless heatpipes among them, then cover the biggest heatsink by a company logo badge).
I have to admit that in this respect, the top three name-brand hardware makers are generally in a higher league, compared to the noname market - and have been in a higher league for many years back.
One last observation maybe: even noname motherboards that started coming with solid-polymer elyts in the VRM, last much longer. The transition from the "plagued" Al elyts to solid-polymer elyts also coincided with a transition from P4 Netburst to Core2Duo (I'm Intel-based, sorry), which overall ran much cooler. My favourite way of building a long-lasting computer used to be: take an LGA775 motherboard that has enough VRM oomph to support the 130W P4's, and slot in a 45nm low-end C2D or Celeron :-) It tends to take a BIOS update, if the board has some older chipset.
agreed, for the most part
The one advantage of a hardware RAID (if integrated with a proper/compatible chassis) is failure LED's. This is somewhat difficult to get from a software RAID on a plain HBA.
At the low end, you need a disk enclosure/backplane with either SGPIO (combine with Adaptec, Areca or just about anything recent with SFF-8087) or with discrete "failure" signals, one per drive (combine with any Areca controller).
As for firmware continuity, Adaptec AACRAID used to be my favourite during several years, but in the recent years Areca has taken over their crown. Replace a controller with something you find in your dusty stock of spare parts - well that's where the fun begins :-)
Swapping cables around has never been a problem on any HW RAID. One recent experience, with a SAS-based Areca: I built an array in a 24bay external box (attached to an Areca RAID). Then I powered down the box, added another external JBOD, and scattered the drives between the two enclosures. Powered up, and voila, no problem - Areca combined all the drives correctly, from the two enclosures. Or another example: build a RAID in one 24bay enclosure, and then plug in another external SAS enclosure at runtime. No problem - enclosure detected, drives enumerated, ready to configure another RAID volume or whatever...
A quiz question: suppose you buy a new server with two drives in an Adaptec (AACRAID) mirror. Before installing your production OS, you try some recovery exercises, to see how the firmware works. You set up a mirror in the Adaptec firmware, you install an OS maybe, you remove a drive from the mirror and insert another one, to see what it takes to rebuild the mirror. The rebuild goes on just fine. You go ahead with the OS install and turn that into a production machine. You remember to "erase" the drive that you initially pulled, when testing the hot-swap: to be precise, you plug it alone into the Adaptec RAID controller once again, and remove the degraded array stump. Then you plug back your two production drives (the mirror), and put that "cleared" drive aside. After two years, one of the production drives fails - so you fumble in your drawer, produce the spare drive, plug it in, maybe a powercycle... and voila: *the production mirror array is gone* ! Explanation: the recentmost configuration change, logged on the drives, happened to be the array removal on your "spare" drive...
Otherwise I agree that for Linux it doesn't make much sense to buy a HW RAID just to mirror two drives to boot from. If you know your way through the install on a mirror, and maybe to install grub manually from a live CD, and especially if you don't plan to spend money on a proper hot-swap enclosure (so that failure LEDs are not an issue either), the Linux native SW RAID will prove similarly useful as any HW RAID firmware. For Windows users willing to spend some money on hot-swap comfort, I tend to suggest the dual-port ARC-1200 with some SAS series enclosure by Chieftec = the ones coming with workable failure LED's (the ARC-1200 is SATA-only).
As for parity-based SW RAID on Linux: if you can find it in dmesg, the MD RAID module does print a simple benchmark of several alternative parity calculation back-ends (plain CPU ALU, MMX, SSE etc) and picks the speediest one. And the reported MBps figure has been well into the GBps area for ages (since the PIII times). 3 GBps on a single core are not a problem - corresponding to 100% CPU utilization for that core.
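The parity math itself is nothing more exotic than XOR across the data chunks - a toy sketch of how a RAID5 rebuild recovers a lost chunk:

```python
# RAID5 parity is a plain XOR across the data disks' chunks; XOR-ing
# the parity with the surviving chunks reproduces a lost one.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(blocks[0])
    for b in blocks[1:]:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xaa\xbb"   # three data chunks
parity = xor_blocks(d0, d1, d2)          # what goes to the parity chunk
recovered = xor_blocks(parity, d0, d2)   # pretend "disk 1" has failed
assert recovered == d1
print("rebuilt:", recovered.hex())
```

The kernel's real back-ends do the same thing over MMX/SSE-wide registers, which is why the benchmark figures come out in GBps.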
For most practical purposes though, you'll be limited by your spindles' (drives') random seek capability. This is about 75 IOps for the basic desktop SATA drives. That is typically the bottleneck with FS-oriented operations. In such a scenario, you won't get anywhere near a HW RAID's CPU throughput limit. And yes, OS-based buffers / disk cache can sometimes help there - provided that you can configure the kernel's VM+iosched to make use of all the RAM (speaking of Linux that is).
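A quick sketch of why the seek bound dominates (assuming the ~75 IOps figure and 4 KiB random IO):

```python
# Random IO throughput of a seek-bound spindle vs. the parity engine.
DESKTOP_SATA_IOPS = 75          # typical 7200rpm desktop drive
BLOCK = 4 * 1024                # 4 KiB random IO

random_bps = DESKTOP_SATA_IOPS * BLOCK    # per-spindle random throughput
parity_bps = 3 * 1024**3                  # ~3 GBps single-core XOR
spindles = parity_bps / random_bps
print(f"{random_bps / 1024:.0f} KiB/s random per drive; "
      f"~{spindles:.0f} such drives to saturate one parity core")
```

About 300 KiB/s of random IO per drive - you'd need roughly ten thousand desktop spindles doing random IO to keep a single parity core busy.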
-20% power; nearest competitor
Based on past reading, I believe they compare themselves to the 45nm Atom cores (possibly N270 still with FSB), rather than the ION's video acceleration capability :-) I have to say that the current Nano yields quite some neat number-crunching oomph, considering the 2W (or so) power envelope.
Regarding the open-source woes, actually it's not that bad - I believe their disk controllers are supported by the mainline Linux kernel, the S3 graphics subsystem is basically supported by X.org - correct me if I'm wrong here, I haven't checked for a while. In the low-power segment, DM&P Vortex has perhaps better open-source support (quite a bit of open documentation) and even lower absolute TDP - but also significantly less crunching power (which is no problem in many control applications).
22nm planar vs. 22nm tri-gate
Exactly my point :-) It really seems to me that they've merely found a way to make the die-shrink work out once again, i.e. once again somewhat in proportion to the basic geometry, before they finally have to give up any hope of dragging Moore's law any further using just silicon and lithography. 32nm->22nm = 0.6875^2 =~ 0.47 . By boasting just a 50% cut in power consumption, they're in fact admitting that they've *almost* made the die-shrink work out up to the theoretical expectations :-D
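The arithmetic, spelled out (ideal scaling assumed, which real processes never quite reach):

```python
# Ideal die-shrink scaling: both dimensions shrink by the linear ratio,
# so area (and, to first order, power at constant performance) drops
# with the square of it.
linear = 22 / 32                 # 0.6875
area = linear ** 2               # ~0.47 of the original area
power_cut = 1 - area             # the "theoretical expectation"
print(f"linear {linear:.4f}, area {area:.4f}, ideal cut {power_cut:.0%}")
```

An ideal shrink would buy a ~53% cut; Intel's claimed 50% is indeed *almost* the full theoretical dividend.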
Thanks for the _deep_ explanation BTW - if it wasn't for El Reg, I'd wade through Intel's PR fog blindfolded till the end of my days :-)
Single point of failure => what about fault tolerance?
Okay, suppose the software can cope with NUMA on this scale... the next question is, how does RHEL or SLES or Windows Server cope with individual component failures? Is RAM and CPU hot-swap there already? What if a whole multi-socket blade goes down?
(Small hardware = small problems. It's so obviously soothing. Unless you sell too many of them, and there's a systematic design flaw... Have to pet the smartphone in my pocket just to feel basically sane again...)
Wobbly CMOS clock to blame for garbled playback? Not likely...
In a particular generation of SuperMicro dual-Xeon boards (Nocona/Irwindale = socket 604), some devices in the RAM VRM's were dying. Cannot say if it was the old elyt caps or the switching FET's combined with some thermal design omission... multiple different motherboards of that generation showed higher RMA rates. Since those days, I've heard no complaints about SuperMicro - I'm pretty sure they've learned from that historical experience :-)
As for system clock distribution: that's a relatively complex issue. You have multiple levels of clocks in the system (some of them in hardware, some of them in software) and multiple levels of audio data buffers (again HW/SW). Makes me wonder if you were facing buffer underruns, or indeed wobbly playback clock (as in sampling rate). All audio cards that I know of have their own Xtal oscillators for the sampling rate clock - so the system-wide PCI clock should have little effect.

BTW, the CMOS RTC clock can hardly be the culprit - the PCI clock and the various hardware timers' clocks (=> also your OS system clock) tick along some other master reference crystal, different from the non-volatile CMOS RTC. And, that multi-output clock synthesizer for the various busses and chipset subsystems can employ a technique called "spread spectrum" on purpose - to duck some EMI radiation limits simply by making the radiated "frequency poles" broader / softer. In some BIOSes, the "spread spectrum feature" can be disabled (in others, it cannot). This "spread spectrum" thing is quite common and perfectly legitimate in modern chipsets/motherboards.
However, I still don't think a little bit of added jitter in the CPU+PCI clocks would hamper your audio playback. Rather, I have a different explanation: IRQ and general bus transfer latencies, resulting in buffer underruns. "A few years back" could quite as well correlate to the transition from the old-fashioned discrete interrupt delivery over dedicated signals, to the new+hip message-signaled delivery, in-band over the "hub link" or whatever the chipset backbone link is called. The change has come in the form of chipsets such as i815 / i845. Previous Intel chipsets and contemporary chipsets from cheaper competition still used the old "out of band" IRQ delivery and were therefore showing better "interrupt latencies" under load. Another factor might be that, at about the same time, motherboard vendors (BIOSes) started to use SMI more extensively for software emulation of some missing features (such as, to emulate legacy keyboard / floppy on top of USB devices) - again resulting in occasional excessive latencies. The RTAI project even had some standard test utilities for this. I recall that some telco voice processing boards for the PCI bus did have a problem with that - and a feasible workaround at the time was to replace the Intel-based mobo's with something SiS-based.
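The buffer-underrun hypothesis is easy to put in numbers - the playback buffer depth fixes the worst-case latency the system may add before the audio glitches (the sample figures below are illustrative):

```python
# A playback buffer of N frames at a given sample rate is the time the
# hardware can keep playing before the driver must refill it - i.e. the
# worst-case IRQ/SMI latency budget before an audible underrun.
def underrun_budget_ms(frames: int, rate_hz: int) -> float:
    """Time the hardware can play from the buffer before starving."""
    return frames / rate_hz * 1000

small = underrun_budget_ms(256, 48000)    # a multi-ms SMI stall glitches
big = underrun_budget_ms(4096, 48000)     # survives quite bad latencies
print(f"{small:.1f} ms vs {big:.1f} ms budget")
```

A ~5 ms budget is easily blown by a busy SMI handler; a ~85 ms one rarely is - which is why the same hardware can sound fine or terrible depending purely on driver buffering.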
I mean to say that none of this is a problem on part of SuperMicro - it's evolution, and it's common to a particular generation of system chipsets. Blame the chipset makers...
@"one BIOS to rule them all"
Good, thanks for that link :-) ARM has joined the "UEFI Forum" in March 2008. It's about time for some open hardware with ARM+UEFI to start to appear on the shelves.
In the PC world, I haven't been aware of UEFI very much. Some name-brand BIOSes do contain that interface, but as far as I can tell, mostly I still walk the PC BIOS side of things... Or maybe it's just that I'm not aware that Windows actually calls the UEFI stuff, rather than legacy BIOS. Maybe when drives >2TB become common as system drives, I will notice :-)
As for Mac hardware (EFI), that's a bit of a special case - the hardware is legcuffed to MacOS, and it takes a bit of tweaking to get e.g. Windows running on it - provided that the proud Mac owner would ever want to dual-boot into Windows, which somehow counters the purpose of buying an Intel Mac :-)
Anyway I don't mean to argue with your good point that attempts at "ARM BIOS" have been there for quite some time...
one BIOS to rule them all
Do you know when I'd start to be afraid, in the shoes of Intel? When ARM publishes a comprehensive, uniform and open BIOS-like interface, for the ARM architecture = a software compatibility standard that would allow you to boot any operating system on a broad range of ARM machines, without you having to customize a bootstrap loader for the particular hardware model (i.e. read HW docs, modify obscure C/ASM code, compile, flash over JTAG or some other hardware probe). Another condition for being afraid would be: it would have to get adopted by the gadget makers.
*that* would be competition to the x86 PC's - to their core virtues: universal compatibility and openness.
Is that the kind of feature any of the gadget makers want? No no, quite the opposite :-) Security (vendor lock-in) by obscurity and "architecture fragmentation" reign supreme. ==> the uniform commodity x86 market is still safely in the hands of Intel (and its formal x86 competition).
Re: new = less reliable => disk drives :-)
The recentmost disk drives on the market, at any given time, are bleeding edge, and tend to have lower reliability. Highest possible areal data density, four double-sided platters, quite a bit of heat produced... In recent history, especially around 1 TB (3.5"), the vendors were pulling their best cunning tricks to cover up physical defects at runtime, to compensate for poor reliability of the platter surface.
The most reliable drives tend to be the lowest-capacity model still being manufactured at any given time.
As for long-term durability (years, up to a decade or more): on several occasions in the last few years, while diagnosing some RMA'ed drives, or rather drives long over warranty, I've noticed an interesting "syndrome" or phenomenon. During an initial full-surface sequential reading test, the drive reports a couple of bad sectors, scattered across the surface of the drive. On repeated sequential reading tests, it's always the same sectors. Next, I tend to write the drive with all zeroes - to test if it fails when writing as well. The write test gets completed just fine. Next, I try another sequential read - lo and behold, the drive reads just fine! Even upon many repeated sequential readings, e.g. looping the full-surface read test for a week, the drive acts just fine.
My hypothesis: the payload data tend to wear off in the sectors. After years of sitting on the platter surface, the recording fades out - difficult for me to say if this is due to natural properties of the material, or due to writing/deflection magnetic field activity all around during long runtime hours. Note that this fading out does not impact the track alignment marks, comprising the skeleton of the drive's low-level format - those are made on bare platters by the disk vendor using a special machine - those tracking marks are much more durable, and "track not found" is a much more serious error than "error reading this sector".
This might have an interesting implication. To keep your data safe, you may as well want to "refresh" the recording every year or so. Just read the whole drive sector by sector, and write back the sector contents immediately to the same place, as you go along. It could keep the recording alive for many more years. As far as I know, no one does this. RAID firmwares can check the surface periodically in a read-only fashion, looking for sectors that have already failed - but as far as I know, no one has ever tried *refreshing* the recording on the platters just in case.
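A sketch of what such a refresh pass could look like - here a scratch file stands in for the block device, since pointing this at a real /dev/sdX is left to the brave (and to people with backups):

```python
# Read each sector and write it straight back in place - same data,
# freshly recorded. A temp file stands in for the block device here.
import os
import tempfile

SECTOR = 512

def refresh(path: str) -> int:
    """Rewrite every sector of the device/file in place."""
    n = 0
    with open(path, "r+b") as dev:
        while True:
            pos = dev.tell()
            sector = dev.read(SECTOR)
            if not sector:
                break
            dev.seek(pos)
            dev.write(sector)     # re-magnetise the very same sector
            n += 1
    return n

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * SECTOR))
print("refreshed", refresh(f.name), "sectors")
os.unlink(f.name)
```

On a real drive you'd want to go in bigger chunks than one sector, and handle read errors (that's where a RAID rebuild of the bad sector would kick in) - but the principle really is this simple.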
Good idea = use HDDs instead of tapes
Now that's an interesting idea - use cheap notebook drives instead of tape cartridges. Maybe the tape cartridge survives a few more G of mechanical shock, but the disk drive, owing to its internal magnetic heads and bearings (dust-tight environment), should OTOH survive many more overwrites and longer hours of continuous operation. Which may also compensate for the higher cost of disk drives vs tape cartridges (and maybe tape drives).
Why use some mechanical or electrical multiplexing of the drive lanes? For a smaller number of drives=cartridges, you can just as well use an expander, it may be cheaper than a robot + tape drives. If you have the know-how, you can build your own virtual library along that principle - buy some cheap hot-swappable case (such as SuperMicro SC216 or SC417, or SC847 for 3.5" drives) and build a virtual library on top of that. It's not much of a problem to shut down or spin up a drive in software, or keep it in some shallower stand-by state, and even to watch out for hot-swaps. An important part remains to be solved though: the management software candy on top, and some backup client software. Something to keep track of your "tape-style disk cartridges" and maybe provide a virtual tape interface on demand to the clients. Someone in the open-source camp with plenty of free time could as well start coding something like that :-) Okay, once you run out of drive bays and you need a robot, it's back to the library makers...
There are potential fields of application where it's intriguing to deploy a big robotised tape library, with several tape drives, to perform long-term archival of some data - such as from continuous video surveillance systems (big brother kind of thing) or medical Xray/CT data. I've been told by practitioners who have attempted that kind of thing, that tapes have downsides in this application. The tape drives wear out much too fast - not up to 24x7 continuous operation in video systems. And in the medical systems, the users tend to get addicted to the possibility of having a patient's past history always at their fingertips, so that the library again just keeps humming all the time and the users are disappointed about the access time ("hey it's all in the computer somewhere anyway, so why does it take so long"). Plus, in the medical imaging technology, the data volume just EXPLODES every time a new machine is installed in the hospital (having a higher resolution, being 3D rather than 2D etc). And, somewhere inbetween all that, the tape cartridges are not very reliable after all... It's a crazy world...
ReiserFS ; spindles
The one Linux filesystem, notorious for its capability to work with myriads of small files, is ReiserFS. There are downsides though: the stable ReiserFS v3, included in the vanilla Linux kernel, has a volume size limit of 16 TB. ReiserFS v4 is not in the mainline kernel (in some part due to "functionality redundancy" reasons = inappropriate code structure) and its future is somewhat uncertain - but it is maintained out of tree and source code patches (= "installable package") for current Linux kernel versions are released regularly. Both versions also have other grey corners, just like everything else...
When working with a filesystem that large, I'd be concerned about IOps capability of the underlying disk drives (AKA "spindles"). The question is, how often you need to access those files, i.e. how many IOps your users generate... This problem is generic, ultimately independent of the filesystem you choose.
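A back-of-the-envelope sketch of that spindle-bound worst case (the per-file IO count and drive count below are illustrative guesses, not measurements):

```python
# With seek-bound spindles, touching a small file on a cold cache costs
# a few IOs each (dentry, inode, data) - so "myriads of files" turn
# directly into hours of seeking.
def scan_time_hours(files: int, ios_per_file: int, spindles: int,
                    iops_per_spindle: int = 75) -> float:
    """Rough time to touch every file once, cold cache, random layout."""
    total_ios = files * ios_per_file
    return total_ios / (spindles * iops_per_spindle) / 3600

# 100 million files, ~3 IOs each, striped over 12 desktop drives
print(f"~{scan_time_hours(100_000_000, 3, 12):.0f} hours for a full scan")
```

Days of wall-clock time just to walk the tree once - regardless of whether it's ReiserFS or anything else on top.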
even just different x86 chipsets?
Someone has previously reported that the code worked fine on one x86 "chipset", but not on another one. And that it was a piece of onboard hardware in the drones... yes it goes against the supposed "data harvesting" categorization of the software product. That reference to two x86 chipsets may be related to another problem, rather than the originally described FP precision issue.
Difficult to say what the word "chipset" is supposed to mean here - whether just the system bridges, or if the CPU is implied as well. E.g., a mobile Core2 certainly has some more oomph than an Atom.
Either way: I recall that the people making the RTAI nanokernel (for hard-realtime control under Linux) have statistics and testing tools to evaluate a particular chipset's IRQ latency, and it's true that even among "x86 PC compatibles", some brands and models of chipsets show reasonably deterministic responses, while others show quite some interesting anomalies in interrupt service response time. This was real news with the introduction of the Intel 8xx series of chipsets, where interrupts were delivered in-band over the HubLink for the first time in x86 history, rather than out-of-band via discrete signals or a dedicated interrupt bus - so interrupt messages were competing for bus bandwidth (time) with general bulk payload. At that time, competing "old-fashioned" chipsets such as the SiS 650 had much better IRQ delivery determinism than the early Intel Pentium 4 chipsets. Some cases of high IRQ latency are attributed to liberal use of SMI by the BIOS, e.g. for software emulation of features missing in hardware... but that again tends to go hand in hand with a particular chipset, stems from BIOS modules provided by the chipmaker etc. I don't know what the situation is now - how the later generations of Intel hardware behave, how the Geode compares, for instance...
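The RTAI folks measure this properly from kernel space; the closest you can get from plain userspace is something like the sketch below, which records how late a requested 1 ms sleep actually wakes up. That captures scheduler+timer jitter rather than raw IRQ latency, so treat the numbers as illustrative only:

```python
import time

def wakeup_jitter(samples: int = 200, interval_s: float = 0.001):
    """Request a short sleep repeatedly and record the wakeup overshoot.

    Returns (min, avg, max) overshoot in seconds. A crude userspace
    stand-in for a latency histogram - NOT a substitute for the RTAI
    measurement tools mentioned above.
    """
    lat = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(interval_s)
        overshoot = time.perf_counter() - t0 - interval_s
        lat.append(max(overshoot, 0.0))
    return min(lat), sum(lat) / len(lat), max(lat)

if __name__ == "__main__":
    lo, avg, hi = wakeup_jitter()
    print(f"wakeup overshoot min/avg/max: "
          f"{lo * 1e6:.0f}/{avg * 1e6:.0f}/{hi * 1e6:.0f} us")
```

On a chipset (or BIOS) with SMI trouble, it's the max column that blows up: the average looks fine, but occasional samples land milliseconds late.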
Heheh, were (GP)GPUs involved by any chance? That's another area where floating-point maths can creak at the hinges...
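The usual culprit isn't the GPU getting the arithmetic "wrong" - it's that float addition is not associative, so a GPU's parallel reduction, which regroups a long sum, can legitimately produce a different result than a serial CPU loop. A minimal stdlib-only demonstration:

```python
import math

# Regrouping the same three terms gives two different doubles:
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False
print(left, right)     # 0.6000000000000001 0.6

# A long serial accumulation drifts, too:
vals = [0.1] * 10
serial = 0.0
for v in vals:
    serial += v
print(serial == 1.0)           # False (0.9999999999999999)
print(math.fsum(vals) == 1.0)  # True - correctly-rounded summation
```

So code that compares FP results against hard-coded expected values can "work on one chipset and fail on another" purely because the two machines evaluate the sum in a different order (or with different intermediate precision, x87 vs. SSE).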
RAID software quality from Intel etc
Several issues come to mind.
Historically, Intel has had soft-RAID "support" in several generations of their ICHs - on top of the SATA HBAs, up to six drive channels. A few years ago it was called the "Application Accelerator", then it was renamed to "Matrix Storage". I don't know for sure if there's ever been a RAID5/XOR accelerator in there, or if the RAID feature consisted of an ability to change the PCI IDs of the SATA HBA at runtime + dedicated BIOS support (= RAID support consisting of software and PR/advertising, on top of a little chipset hack). Based on the vague response in the article, I'd guess that there's still no RAID5 XOR (let alone RAID6 Reed-Solomon) acceleration in the PCH hardware - what they said means that they're looking at the performance and trying to squeeze as much as possible out of the software side. Looks like not much is new on the software part (RAID BIOS + drivers) - the only news is SAS support (how many HBA channels?), which gives you access to some swift and reliable spindles (desktop-grade SATA spindles are neither). If the ports support multi-lane operation, they could be used for external attachment to entry-level HW RAID boxes, and if the claim about expander support is true, you could also attach a beefy JBOD enclosure with many individual drives (unless the setup gets plagued by expander/HBA/drive compatibility issues, which are not uncommon even with the current "discrete" SAS setups). I'm wondering about "enclosure management" - something rather new to Intel soft-RAID, but otherwise a VERY useful feature (the per-drive failure LEDs especially are nice to have).
The one safe claim about Intel on-chip SATA soft-RAID has always been "lack of comfort" (lack of features). The Intel drivers + management software, from Application Accelerator to Matrix Storage, have been so spartan that they were not much use, especially in critical situations (a drive fails and you need to replace it). I've seen worse (onboard HPT/JMicron, I believe), but you can also certainly do much more with a pure-SW RAID stack - take Promise, Adaptec HostRAID or even the LSI soft-RAID, for example. It's just that the vanilla Intel implementation has always lacked features (not sure about bugs/reliability, I never used it in production). Probably as a consequence, some motherboard vendors used to supply (and still do) their Intel ICH-R-based boards with a 3rd-party RAID BIOS option ROM (and OS drivers). I've seen Adaptec HostRAID and the LSI soft-stack. Some motherboards even give you a choice in the BIOS setup of which soft-stack you prefer: e.g., Intel Matrix Storage or Adaptec HostRAID. Again, based on one note in the article, this practice is likely to continue. I just wish Intel did something to improve the quality of their own vanilla software.
One specific chapter is Linux (FOSS) support. As the commercial software-RAID stacks contain all the "intellectual property" in software, they are very unlikely to get open-sourced. And there's not much point in writing an open-source driver from scratch on top of a reverse-engineered on-disk format. There have been such attempts in history, and they led pretty much nowhere: any tiny change in the vendor's closed-source firmware / on-disk format would "break" the open driver, and open-source volunteers will never be able to write plausible management utils from scratch (unless supported by the respective RAID vendor). Linux and FreeBSD nowadays contain pretty good native soft-RAID stacks, and historically the natural tendency has been to work on the native stacks and ignore the proprietary ones. The Linux/BSD native soft-RAID stacks can run quite fine on top of any Intel ICH, whether it has the -R suffix or not :-)
People who are happy to use a soft-RAID hardly ever care about a battery-backed write-back cache. Maybe the data is just not worth the additional money, or maybe it's easy to arrange regular backups in other ways - so that the theoretical risk of a dirty server crash becomes a non-issue. Power outages can be handled by a UPS. It's always a tradeoff between your demands and your budget.
As far as performance is concerned:
Parity-less soft-RAIDs are not limited by the host CPU's number-crunching performance (XOR/RS). If you rule out a sub-par soft-RAID stack implementation, the only potential bottleneck that remains is bus throughput: the link from north bridge to south bridge, and the SATA/SAS HBA itself. In the old days, some Intel ICH on-chip SATA HBAs used to behave as if two drives shared a virtual SATA channel (just like IDE master+slave) - not sure about the modern-day AHCI incarnations. Also, the HubLink used to be just 256 MBps thick. Nowadays the DMI is 1 GBps+ (full duplex), which is plenty good enough for six modern rotating drives, even if you only care about sequential throughput. Based on practical tests, one thing's for sure: Intel's ICH on-chip SATA HBAs have always been the best performers around in their class - the competition was worse, sometimes much worse.
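The "plenty good enough" claim is easy to sanity-check. Assuming a ballpark 150 MBps of sequential throughput per modern rotating drive (an assumption, not a datasheet figure):

```python
# Bus-headroom sanity check for six spindles behind a 1 GBps DMI link.
# Both figures are ballpark assumptions, not datasheet values.
drives = 6
per_drive_mbps = 150   # MBps, optimistic sequential rate per drive
dmi_mbps = 1000        # MBps, per direction (DMI is full duplex)

aggregate = drives * per_drive_mbps
print(aggregate, "MBps aggregate,", dmi_mbps - aggregate, "MBps headroom")
# -> 900 MBps aggregate, 100 MBps headroom
```

On the old 256 MBps HubLink the same six drives would have been throttled to well under half their aggregate rate, which is why the bus used to matter more than the HBA.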
As for parity-based RAID levels (5, 6, their derivatives and others): a good indicator may be the Linux native MD RAID's boot messages. When booting, the Linux MD driver benchmarks the (potentially various) number-crunching subsystems available - such as the plain x86 ALU XOR vs. MMX/SSE XOR, or several software algorithm implementations - and picks the one that performs best. On a basic desktop CPU today (Core2), the fastest result usually says something like 3 GBps, and that's for a single CPU core. I recall practical numbers like 80 MBps of RAID5 sequential writing on a Pentium III @ 350 MHz in the old days.
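What the MD driver is benchmarking there is conceptually trivial: RAID5 parity is a byte-wise XOR across the data chunks of a stripe, and any single lost chunk is rebuilt by XOR-ing the survivors with the parity. A naive sketch of the principle (the kernel does this with SSE over whole pages, of course):

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equally-sized byte strings - the RAID5 parity op."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Three data chunks of one stripe, plus their computed parity chunk:
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing chunk 1 and rebuilding it from the rest + parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)
```

Because XOR is its own inverse, the same routine serves for both parity generation and reconstruction - which is also why the boot-time benchmark figure applies to degraded-mode reads as well as to writes.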
The higher-end internal RAID cards, containing an IOP348 CPU at ~1GHz, tend to be limited to around 1 GBps when _not_ crunching the data with XOR (appears to be a PCI-e x8 bus limit). They're slower when number-crunching.
In reality, for many types of load I would expect the practical limit to be set by the spindles' seeking capability - i.e., for loads that consist of smaller transactions and random seeking. A desktop SATA drive can do about 60-75 random seeks per second, enterprise drives can do up to about 150. SSD's are much faster.
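Those per-drive figures follow straight from the mechanics: one random I/O costs roughly one average seek plus half a platter rotation. The seek times below are assumed typical values, not measurements of any particular model:

```python
def random_iops(seek_ms: float, rpm: int) -> float:
    """Rough random-IOps ceiling of a rotating drive.

    One random access ~= average seek + half a rotation
    (rotational latency). Transfer time of a small block is ignored.
    """
    half_rotation_ms = 0.5 * 60000.0 / rpm
    return 1000.0 / (seek_ms + half_rotation_ms)

if __name__ == "__main__":
    print(round(random_iops(8.5, 7200)))    # desktop SATA  -> 79
    print(round(random_iops(3.5, 15000)))   # 15k rpm SAS   -> 182
```

An SSD skips both terms entirely, which is why its random-I/O figures sit in a different league altogether.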
The one thing I've recently been wondering about is this: where did Intel get their SAS HBA subsystem from? The IOP348 already contains an 8-way SAS HBA. Now the Sandy Bridge PCH should also contain some channels. Are they the same architecture or not? Is it Intel's in-house design? Or is it an "IP core" purchased from some incumbent in the SCSI/SAS chipmaking business? (LSI Fusion MPT or Agilent/Avago/PMC Tachyon come to mind.) The LSI-based HBAs tend to be compatible with everything around. Most complaints about SAS incompatibility that I've noticed tend to involve an Intel IOP348 CPU (on boards from e.g. Areca or Adaptec) combined with a particular expander brand or drive model / firmware version... Sometimes it was about SATA drives hooked up over a SAS expander etc. The situation gets hazy with other less-known vendors (Broadcom or Vitesse come to mind) producing their own RoCs with on-chip HBAs...