* Posts by Frank Rysanek

108 posts • joined 2 Oct 2007

Google's neural network learns to translate languages it hasn't been trained on

Frank Rysanek

follow-up questions

1) Is Ray Kurzweil still at the helm? He's not mentioned in the article, nor in the referenced sources, but this is pretty much in the vein of what he was hired for, and closely related to his past work.

2) on the title photo, what does the finger-puppet with the red hat actually say? Is it equivalent to Hello or is it some prank? :-)

0
0
Frank Rysanek

Re: human translator jobs

Have an upvote for mentioning Linguee :-)

For the last couple of years, I've been trying to gradually improve my German by writing in German to our German suppliers. I tend to combine several dictionaries: one from my own (non-English) mother tongue, which has some common cultural background/history with German, then the Beolingus at TU Chemnitz, and I combine that with Linguee and general Google to check that the meanings and phrases I come up with sound remotely plausible in German. The downside is that speaking still hurts, because I can't use the dictionaries in real time...

1
0
Frank Rysanek

Re: Not bad

Impressive. Considering how much trouble it gives me to translate "precisely" from English to my mother tongue or vice versa, I probably wouldn't get anywhere near this precision on a "back and forth" double translation (as a check) - separated by a couple of days so that I don't remember the original in any great detail.

BTW... "da sie auf der gleichen Architektur arbeitet" (because it works on the same architecture): if I were to nitpick, the "sie" (she) seems to depart from the original's gender. In German, "Modell" is neuter, but the translation engine chose to refer to it as a "she"... or maybe it picked up the gender from the broader context of the article? (Wrong subject on my part?) Doesn't seem so: the previous paragraph contains "the system" as a subject, which is also neuter in German...

0
0
Frank Rysanek

Re: Swedish/Danish/Norwegian

...I seem to recall that Norwegian and Danish are closer to each other than Swedish is to either of the two... some historical reasons. But it's just a faint memory of some tour guide's explanation. (Myself coming from a Slavic background.)

0
0
Frank Rysanek

Re: *the researchers found evidence that....*

> Nor do you know where "the logic" is when it's finished

Actually... if you know what to look for, and you equip your ANN engine with the right instrumentation, and/or you structure the ANN deliberately in particular ways, in the end you can get a pretty good insight into what the ANN does, how it does it, how fine-grained the learned model is, etc. I'm judging by what Ray Kurzweil has to say about his speech models, and the recent "deep" image recognition projects/startups also tend to produce visualizations of what the upper layers have learned...

Matt Zeiler has some nice videos on YouTube: https://www.youtube.com/watch?v=ghEmQSxT6tw
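The simplest form of that instrumentation is just reading out intermediate activations - Zeiler's deconvnet visualizations go further and project them back to pixel space, but this is where you'd start. A minimal sketch, assuming a trained Keras convnet and a made-up layer name:

```python
# Probe an intermediate layer of a trained convnet: build a second model that
# shares the weights but stops at the layer of interest, then run inputs
# through it. "conv2d_3" is a placeholder - use whatever your model names it.
import numpy as np
from tensorflow.keras.models import Model

def layer_activations(model, layer_name, batch):
    probe = Model(inputs=model.input, outputs=model.get_layer(layer_name).output)
    return probe.predict(batch)    # shape: (batch, h, w, filters)

# acts = layer_activations(trained_model, "conv2d_3", np.expand_dims(img, 0))
```

Plotting those per-filter activation maps already tells you a lot about how fine-grained the learned features are.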

You may object that giving some a priori "structure" to the network is cheating. Well, without some a priori architecture, borrowed from natural brains, our ANN's would take geological timescales to learn something useful - using GA's to start with, and then some "conventional" learning...

This is actually where I see quite some room for further progress: adding more "architecture" to the ANN's. Not just more layers on top for more abstraction - maybe some loopy structures? Denser cross-connects to link multiple sensory and motor control subsystems? Reciprocating structures vaguely resembling control by feedback loop, but applied to mental reasoning tasks, attention, symbol manipulation... driven by goals/motives, maybe stratified by priority. I would hazard a guess that a cunning ANN macro-architecture could bring some radical results (at a technology demo level) even without "throwing even more raw crunching horsepower at the problem". Ray Kurzweil hints at how our brain is structured in "How to Create a Mind" - someone should start playing with a coarse structure along those lines and extrapolate from there... Kurzweil himself merely concludes that we need orders of magnitude more compute horsepower and "the mind will somehow emerge on its own". I would not be so sure :-)

0
0

Leaked paper suggests EM Drive tested by NASA actually works

Frank Rysanek
Thumb Up

Re: It's a scam.

Instant peer review, at TheRegister fora :-) I'm always amazed what kind of beasts lurk here.

3
0

Robot babies fail in role as teenage sex deterrents

Frank Rysanek
Joke

Re: association with procreation

Good point... set my mind racing, the thread of thought being "how to train the two distinct situations in a realistic fashion, properly touchy feely, in close succession, to augment a cognitive association of a causal relationship" :-)

On a slightly more serious note, it would probably be difficult to evoke an authentic emotional experience for the first situation too, even if real live subjects were asked to "simulate" the activity... down to the choice of subjects?

0
0
Frank Rysanek

Re: Useless for the intended purpose

Thanks for that comment :-)

(joke alert) I would add a simulation of acute otitis media, when the baby starts weeping and yelling for no apparent reason, and possibly the playback of a recording of a myringotomy/paracentesis being performed on a live kid (along with an explanation that episodes of otitis media are often repetitive throughout the years of childhood). And yes, I know that this therapy is far from commonplace in the developed world. A popular mothering discipline is "what the hell is it this time" - why the baby yells and refuses food. There can be several other causes beyond those mentioned above, and only a few of them are accompanied by a fever or some other outside hint. I.e. a proper simulation should include "panic-level yelling for no apparent reason for at least an hour", combined with a deliberately impossible task of measuring the baby's temperature...

How do you simulate a week of "coughing all night", coupled with the ever-present possibility of it progressing into serious bronchitis or pneumonia, which would require a stay in hospital? Heh, I wouldn't dare suggest a simulated lactational psychosis. And all that is for an infant/toddler/child that's still pretty much normal and healthy, with the knowledge that with our modern-day medical care the risk of actual death is near zero (until the relieved pressure on natural selection has its inevitable consequences on our average health, a couple dozen generations down the road).

There are babies who start refusing breastfeeding after a few days or weeks, in spite of the mother being desperate to breastfeed, for the sake of responsibility and proper natural nourishment. That, combined with lactational psychosis... "splendid". Difficult to simulate properly in doll play.

The choice of code brown / code yellow / burp / fart is a piece of cake to master. It could be pretty entertaining to a girl, actually - possibly attractive to a girl in her late teens, with the right natural hormonal setup to think about motherhood. Reality is much more difficult than playing with dolls. In reality you have the acute knowledge that "this is for real", you feel alone in it, you feel responsibility, whipped up by the hormonal developments in a fresh mother... The reality is much more of a shock compared to playing with dolls that you know are just dolls.

Also, depending on your family background, as a young parent you may have a problem feeding your family - how do you simulate that? What if the teenager whom you're trying to "educate" is used to that sort of environment?

Then again... do we want to scare our adolescent population with the actual brutality of life? Some are scared enough without our deliberate effort, others would just shrug it off anyway, the way they always do. Responsibility is down to individual personality traits... For some, even the actual reality shocker of having their own baby is not shocking enough to prevent them from smoking and drinking during pregnancy or breastfeeding...

13
0

Microsoft has open-sourced PowerShell for Linux, Macs. Repeat, Microsoft has open-sourced PowerShell

Frank Rysanek

PowerShell... ah well...

On my current job, I've encountered PowerShell - for a couple of things in W2k12 and SQL 2k12 that should've been easily configurable but, for some reason, were not... I downloaded some other people's scripts and wrote one or two (simple ones) of my own.

Seems to me that the syntax of PowerShell is a little loose - not strict in any one particular style - and confusing to me.

PowerShell is more confusing to me than Perl syntax - and yes, I do mind Perl's multiple ways of doing the same thing, and the blurry boundary between Perl 5 "canonical best practices" and the mess supported for backward compatibility with Perl 4 and older...

I'm actually using Perl on Windows for slightly more complicated scripting, things that exceed the capabilities of cmd.

Sarcasm off this time - I only dare to be sarcastic as an AC (not when logged in).

2
1

Microsoft's Windows Phone folly costs it another billion dollars

Frank Rysanek

Re: spontaneous upgrade to win 10

Happened to two sane people in our small biz last week. Both were royally pissed off. No data loss though - so far so good. It does seem like the GWX thing didn't even ask for permission this time. It just went ahead.

6
0

Building a fanless PC is now realistic. But it still ain't cheap

Frank Rysanek

Re: Cheating ...

In a chimney I would be suspicious of conductive carbon dust and maybe rainfall, but if those two factors were taken care of, your friend's got my thumbs up, all four of them :-)

As for separating yourself from the PC using a long cabling trunk: yes this is perfectly possible.

USB: 5 meters max per cable segment (per the USB 2.0 spec), can be extended with hubs

Gb Ethernet: 100 m over CAT6 material

HDMI/DVI/DP: this depends very much on your screen resolution. I recall someone reporting success transporting DVI (electrically the same as HDMI) at full HD resolution at 60 Hz over a 10 m extension cable. Must've been some pretty good cabling material. Today's highest resolutions (4K) may limit your video cable to 2-3 meters from the PC to the monitor.
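A back-of-the-envelope sketch of why 4K is so much pickier about cable length than full HD - the figures (25% blanking overhead, 24 bpp, 8b/10b TMDS encoding) are rough assumptions, not spec values:

```python
# Approximate on-the-wire TMDS bit rate for a given video mode.
def tmds_gbps(w, h, hz, blanking=1.25, bpp=24, encoding=10 / 8):
    return w * h * hz * blanking * bpp * encoding / 1e9

for name, (w, h) in {"1080p60": (1920, 1080), "4K60": (3840, 2160)}.items():
    print(f"{name}: ~{tmds_gbps(w, h, 60):.1f} Gbit/s")
# 1080p60: ~4.7 Gbit/s, 4K60: ~18.7 Gbit/s -- four times the rate,
# so cable attenuation and skew eat your headroom much sooner.
```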

BTW, attics tend to suffer from heat during the summer (unless thermally insulated).

2
0
Frank Rysanek

fanless with high performance is moot

My first silenced PC was my home 486 @ 120 MHz, back in 1995 or so. This was at the time when CPU fans started to appear - I removed the flimsy active CPU heatsink and used a larger passive heatsink body. In addition to that, I undervolted the 80mm PSU fan from 12 to 5 V. At that point, the AT PSU already had some 4 years of service behind it, catering for a 386DX - and it survived maybe 6 more years, until the 486 PC finally went to the scrapyard out of sheer obsolescence.

Nowadays I work as a troubleshooter in a shop selling industrial PC's, both classic 19" machines and also some fanless models. We don't make fanless PC's, we import them from TW. There are maybe 5 famous brands of such fanless IPC's in TW (famous in Europe and the U.S.). It's not something you cobble together in a miditower ATX case by unplugging all the fans. We sell tightly integrated x86 machines with a die-cast or extruded aluminum outer shell, with the outside surface consisting of fins everywhere. Yes, surface is the keyword. But what's even more complicated is proper thermal coupling of the power-hungry components on the inside to the outer shell. If the PC maker is not pedantic enough, he cheats by not thermally coupling everything inside properly to the outer shell. Whatever lives inside and consumes electricity runs hotter than the outside shell... As there's no forced air flow, and telepathic heat transfer doesn't work, any "uncoupled" heat sources on the inside have to rely on natural convection... not a very good prospect. Especially the first-generation fanless PC's, using Banias / Dothan / Core2 "notebook" CPU platforms with about 35 W total TDP, and with botched thermal coupling, were plagued by overheating problems. A modern Haswell or BayTrail SoC in a generous finned enclosure, that's a very different story :-)

I've seen people build fanless PC's by taking a desktop ATX case and stuffing all fanless components inside: a fanless CPU heatsink, a fanless GPU card, a fanless PSU... such a PC is rather short-lived :-) No matter how big your heatsinks are, if you keep them closed in a miditower case without any fan, they don't have much effect, as it's the PC case's outside surface that matters for heat dissipation - and that surface isn't very big. Without airflow, everything inside roasts in its own heat. Even if you just use a tall CPU heatsink with heatpipes (it can even be active), or water cooling with an outside radiator, pay attention to the fact that CPU and memory VRM's (point-of-load buck converters) on ATX motherboards often *rely* on the toroidal vortex around the CPU socket, caused by a conventional CPU heatsink fan.
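To put a rough number on the "surface is the keyword" point: for natural convection, dissipated power is roughly P = h × A × ΔT, with h somewhere around 5-10 W/m²K for still air. A sketch with assumed figures:

```python
# Rough convective dissipation estimate, P = h * A * dT.
# h ~ 5-10 W/m^2K for still air; an order of magnitude more with forced airflow.
def convective_watts(area_m2, delta_t_k, h=6.0):
    return h * area_m2 * delta_t_k

print(convective_watts(0.35, 30))         # finned fanless box: ~63 W at +30 K
print(convective_watts(0.35, 30, h=25))   # same surface, gentle draft: ~260 W
```

Which is why a generously finned shell can just about carry a 35 W mobile platform, while the same components sealed in a plain miditower cook themselves.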

For small internal use, I tend to build low-power servers in retired 19"/2U IPC server cases. I use a new silent-ish high-efficiency industrial PSU (with a rear 80mm fan), I adapt the old 19" case to accept a MicroATX motherboard, I remove all the noisy 80mm chassis fans and insert a single low-RPM radial 120mm blower over the CPU heatsink, with its exhaust channeled through the rear wall. The silence is marvellous...

Generally, in home-cobbled ATX PC's and small servers, I like to use large "passive" heatsinks, but only combined with a slow fan that creates a very basic draft of fresh air through the case.

4
0

Google wants new class of taller 'cloud disk' with more platters and I/O

Frank Rysanek

Re: Wishing wells are nice things

Exactly my feeling. "I must be missing something, or the paper was written by someone from PR."

What would you achieve by having two different platter sizes per drive, on a common spindle? (The smaller platters would be small AND slow in terms of IOps AND slow in terms of MBps - but let's not start with that.) On Google's scale, if you know beforehand what data is hot and what is not, you can sort it in software and store it on different storage volumes built with different optimizations (slow big drives vs. fast smaller drives vs. flash or whatever). How is the *drive* supposed to know which LBA sector is going to be hot and which one not? Also, map LBA sectors to physical sectors in some hashed way, other than "as linear as possible"? Really, at the spinning drive's discretion?

Even if the drive did have two sorts of platters, fast and slow, considering that it has no a-priori knowledge of what data is going to be hot, perhaps the idea is that it could move the data to the fast vs. slow region "afterwards", based on how hot vs. cold it actually turned out to be... thus wasting quite a few of its precious IOps on housekeeping! Also, it would have to infer the FS layout happening at several layers *above* LBA, to relocate whole files rather than individual sectors, as otherwise it would ruin any chance of enjoying a sequential read ever again ;-) And oh, by the way, we'll take your RAM cache - you can't really use it efficiently anyway, it's more use to our FS-level caching, thank you.

Seems to me that complexity-wise, having several categories of simple disk drives (of different sizes and IOps rates) is obviously more manageable than having mechanically complex drives with hallucinogenic firmware managing nonlinear sector mapping and a fiendish tendency to try to guess your FS-level allocations and occasionally get them wrong...
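A toy sketch of the objection, with made-up names and thresholds: a drive-level tiering scheme has no a-priori knowledge, so it can only promote a block *after* observing that it's hot - and every migration burns IOps the host never asked for:

```python
# Toy model of "afterwards" hot/cold tiering inside a drive.
from collections import Counter

class ToyTieringDrive:
    def __init__(self, promote_after=100):
        self.tier = {}              # LBA -> "fast" or "slow"
        self.hits = Counter()       # observed access counts
        self.housekeeping_ios = 0   # IOps spent on migrations, not host work
        self.promote_after = promote_after

    def read(self, lba):
        self.hits[lba] += 1
        self.tier.setdefault(lba, "slow")
        if self.tier[lba] == "slow" and self.hits[lba] > self.promote_after:
            self.tier[lba] = "fast"
            self.housekeeping_ios += 2   # one read + one write to move the block

drive = ToyTieringDrive()
for _ in range(150):
    drive.read(42)                       # a block that *turns out* to be hot
print(drive.tier[42], drive.housekeeping_ios)   # 'fast' 2 -- paid after the fact
```

And that's before the sketch even tries to keep whole files together, which the drive can't see at the LBA level anyway.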

There's an old Soviet Russian children's book series by Nikolai Nosov, sharing a lead character called Neznaika. Never mind the plots with their often leftist outcomes/morals... I recall that in one of the books there was a lazy lunatic cook, wasting a lot of time trying to invent "dry water" in his head... The point of dry water was that you could wash dishes without getting wet :-) That's a hell of a metaphor for many concepts and situations...

1
1

EU could force countries to allocate 700 MHz band to mobile by mid-2020

Frank Rysanek

Re: retune

Exactly.

Where I live, we have like 6 DVB-T MUXes (carriers) in the air, from about 4 directions, with varying signal strength (RX). Years ago when I moved in, I started to look after the "shared terrestrial aerial" serving the small apartment block I live in... I made it through the first retune (analog darkness) and a roof reconstruction with relatively modest investment, and by sheer luck the two bandpass antenna preamps are just about right for the mix of frequencies and levels... Even if we did buy a proper blade chassis with channel cards, the channel amp cards tend to have a limited tuning range = another retune into a different band would cost us another hefty sum...

When the time comes again, I'll see what I can do :-)

Interestingly, as the analog TV channels were squatting in the lower TV bands, the analog phase-out has resulted in most of the DVB-T occupying the upper channels in band V - which I didn't like very much, as the upper channels are a pig to catch if you don't have direct line of sight to the transmitter, and they have shorter reach on the legacy coax cabling I inherited in the building. Another retune back to the lower channels (in our case, it might correlate with the shift to DVB-T2) might improve signal levels at the wall sockets in our house :-) We probably won't be moving back to band III, but even band IV would be nicer to work with than channel 58 in band V.
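For the curious, channel 58 translates to frequency like this (European 8 MHz UHF raster, channel 21 centred at 474 MHz):

```python
# UHF channel -> centre frequency, 8 MHz raster starting at ch 21 = 474 MHz.
def uhf_centre_mhz(channel):
    return 474 + (channel - 21) * 8

print(uhf_centre_mhz(58))   # 770 MHz - high up, lossy on old coax
print(uhf_centre_mhz(30))   # 546 MHz - band IV, much friendlier
```

The higher the carrier, the worse the attenuation per metre of legacy coax - hence the wish to move back down.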

1
0

While we weren't looking, the WAN changed

Frank Rysanek

Same old same old

It's still the last mile that matters, first and foremost. While reading the article, I was itching with curiosity all along, wondering if telepathic broadband transfer has finally been invented, or if the article would culminate with carrier pigeons over MPLS or something... none of that, actually.

In terms of weird stuff over MPLS, the weirdest I've read about was something like SDH over MPLS.

Back to last mile.

Even if you have a local cable co. / ISP wiring the whole neighborhood (around your offices) with dense FTTB at very friendly consumer prices, and you let him enter your building with a plastic pipe (the extra trench is like 15 m), once you ask for an actual proposal - for some modest symmetric bandwidth over fiber, with a /29 block of public static IPv4 - if there's no competition, he may well quote you an outrageous sum for that fairly basic service. I was in that situation as a small business admin and kept using two microwave links (for redundancy) for several years, until the local optical ISP finally gave in and proposed something sensible (and the sales guy got fired shortly afterwards).

Here in CZ, in a mid-size town (100k people), the real news in recent years has been that local optical startups have started trenching across our post-commie residential areas (highrise condos with lots of grass in between). Actually, in our very town it's not that optimistic - it's a nation-wide cable co. vs. a local optical competitor. The nation-wide behemoth doesn't bother to offer better pricing, hence the local cable/optical company (in the business for some 25 years now) is winning most of the new consumer customers for its symmetric optical (FTTB) Ethernet... but they're not really a new startup, they're more like a local incumbent. Next to the incumbent telco, selling DSL over 20-year-old copper, which was (in the mid-nineties) totally overhauled using govt subsidies...

The midsize and bigger cities tend to be barricaded against "trenching optical startups" by local incumbents with political connections. I keep hearing about even smaller towns (~10-20k residents) where wireless ISP's turned optical startups are busy trenching consumer broadband and selling it cheap, with the support of an enlightened local authority. Excellent places to live, less excellent for finding a job, apparently...

Hell, I'm told that many locations in Prague are absolutely hopeless in terms of modern broadband, consumer or business-class. And it's always about the last mile. No one bothers to lay new optical cables in the densely cobbled urban areas. I used to work for two ISP's in Prague for several years around Y2k, and I remember very well the numerous sales opportunities where there simply was no last mile transmission line to use... Where I work now, we have an office in Prague as well, on the outskirts of the city (a residential area with highrises and lots of grass), and our office still uses a microwave link!

I work for an admittedly small business. We don't care about MPLS. Most of our sales people are scattered throughout the country anyway, and the business software has to be usable for them from anywhere they stop for a while, so it wouldn't matter if some bigger "remote offices" had MPLS or some L2 VPN... It's OpenVPN for all of them, and RDP on top of that, and the database client running against a local RDBMS on an RDP desktop is throttled mostly by ODBC latency, much more than by RDP screen refresh.

Once you get a good last mile, VPN can be quite a breeze. Perhaps we're lucky that we have a good local (national level) peering arrangement: the independent peering point (called NIX.CZ) now actually runs a distributed infrastructure with nodes in several cities... and I haven't heard about bilateral peering skirmishes among ISP's in the last 15 years or so.

As for the firewalls... if you know the necessary basics, a good basic firewall can consist of a Linux PC with OpenVPN for the tunnels and Quagga (Zebra) to do some internal routing of your private subnets. Dual uplinks to two ISP's (with a double NAT) have their inherent limits for outbound internet traffic, but can be pretty nifty for a redundant VPN - if combined with redundant VPN tunnels and some dynamic routing on top (I prefer iBGP over OSPF, as BGP does *not* require a clear "link state" from the lower layers and keeps checking the connectivity on its own). You don't even need a PC for this, you can run OpenWRT on some SoHo router hardware, and theoretically Mikrotik HW/FW should also be capable of this.
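If you don't want to run a routing daemon, the poor man's version of what those iBGP keepalives buy you looks roughly like this - interface and peer names are made up, and it needs root, Linux iproute2 and the usual OpenVPN tun interfaces:

```python
# Naive failover: probe the remote end through each tunnel, point the remote
# private subnet at whichever tunnel still answers. BGP does this properly.
import subprocess, time

TUNNELS = [("tun0", "10.8.0.1"), ("tun1", "10.9.0.1")]   # (iface, peer) - assumed
REMOTE_SUBNET = "192.168.50.0/24"

def alive(iface, peer):
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-I", iface, peer],
        stdout=subprocess.DEVNULL).returncode == 0

while True:
    for iface, peer in TUNNELS:
        if alive(iface, peer):
            subprocess.run(["ip", "route", "replace", REMOTE_SUBNET, "dev", iface])
            break
    time.sleep(10)
```

BGP's advantage over this kind of script is exactly what's noted above: the keepalives are part of the protocol, per neighbor, with well-defined hold timers.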

Yeah right - I'm at the lowest end in terms of headcounts and bandwidth. It only starts to get interesting when you struggle with bandwidth and complexity (imagine multiple sites linked together in a massive VPN mesh).

I am told that there are off-the-shelf firewall boxes (no, not Cisco) that are miles ahead of my homebrew cobbled gateways. For the lazy folks it must be an excellent solution.

"Local Internet Breakout" - hell, I never knew it's got a dedicated name :-)

Outsourced VPN, outsourced security? God forbid, as long as I have a word... I used to work for the other side.

7
0

Uni of Manchester IT director resigns after chopping 68 people

Frank Rysanek

Re: Would love to hear some context...

Oh, actually it's 60 000 students, 80 000 people total = even leaner IT.

1
0
Frank Rysanek

Re: Would love to hear some context...

Excellent, thanks for the insight :-) 200 IT staff to cater for 60 000 people actually sounds pretty lean to me... and indeed a good candidate for some uniform printing solution :-) among many other technical challenges...

2
0
Frank Rysanek

Would love to hear some context...

How big is the organisation in question? How many in-house IT staff (before the layoffs), how many staff in total, how many students? What does university IT mean nowadays, exactly? In the old days, it was a few computer "labs" (a couple dozen desktop PC's networked in a room), some file servers, some printing... The school I went to had a nice in-house client/server app keeping track of the students, lectures, allocation of seats in the courses to students... coded and supervised by a single guy. That was in the second half of the nineties. I am told that much of the in-house DB software (then native MS-DOS apps) was later ported to a web-based environment. I can imagine that a big enough university might appreciate something like an ERP business package... And then some departments might manage servers of their own, for special apps, HPC and whatnot. Technical schools are likely to have more of this arcane specialized stuff - but the chairs/departments also tend to have post-grad people / lecturers who take care of the high-end stuff as part of their highly specific jobs.

So... 68 people laid off. What percentage is that of the total IT staff? What professions were laid off? Are they gonna outsource the grunts replacing broken keyboards and stolen mice, fixing broken Ethernet links, taking care of toner cartridges...? 68 people in university IT sounds like quite a lot... then again, my post-commie school catered for just about 1000 students per year, which might be a relatively low count compared to universities of the western world...

1
1

Met Police: Yes, outsourcing IT to Steria has 'risks'

Frank Rysanek

headcounts

So... along with the outsourcing deal, they're downsizing in-house IT from 800 to 100 people. How big is the Metropolitan Police? How many people actually doing police work? How many PC's = user seats, how many user accounts in the system? How large is the "server back end" to take care of? With proper police IT, there tends to be a centralized system for filing the records by individual cases. There will also be some in-house bookkeeping, purchasing, HR etc. - not unlike an ERP system, if the organization is big enough. Plus some printing, maybe e-mail... all of that with a police-style focus on security. And special systems and arrangements for access to various 3rd-party databases / registries that might be of interest to police investigations. 800 people to manage all of that... The number alone sounds like a lot, but was it in fact enough? Or was it too many?

2
0

Chip company FTDI accused of bricking counterfeits again

Frank Rysanek

Re: fed up with FTDI

[replying to my own comments is a sign of mental disorder ;-) ]

I recall one other encounter with FTDI - this one had a driver release engineering angle. I once bought a programming dongle for Lattice CPLD's. You guessed it: the dongle contains an (original) FTDI chip. Next, I needed XP drivers for the dongle. This was in February/March 2015 = admittedly pretty late for XP, but that's what I still run on some computers in the lab (and am happy that way). To this day, FTDI still list their driver 2.10 as compatible with XP - while in reality, 2.08 already failed to load on XP SP3. I managed to find 2.06 somewhere, and that did work on XP.

As for the opinion that "FTDI is no longer needed": in the "industry" and in the tinkerer community, RS232/422/485 are actually far from extinct. Note how simple the interface is - I wish USB were so simple to use and debug, so universally compatible. And RS232 doesn't force you to write your own USB driver (or to work around writing your own driver by using some generic framework, libusb or some such). Writing drivers for Windows especially is a tad complicated by the required MS signature (and that's apart from the general complexity of driver writing). Even the user-space software authoring tools are more restrictive nowadays than they were in the days of my optimistic youth... Corporations are helping each other to erect a wall between how far you can get with DIY vs. what's possible with technology only available to corporations. Security measures against malware proliferation? Malware authors always find a way...
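That application-side simplicity is the whole point: once the FTDI/PL2303 driver has bound the device, your program just talks to a serial port, no USB knowledge required. A minimal sketch, assuming the pyserial library and a Linux device node:

```python
# With a USB-serial converter, the application never sees USB at all -
# it opens an ordinary serial port. (pip install pyserial)
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
port.write(b"*IDN?\r\n")       # poke whatever instrument is on the other end
print(port.readline())         # and read one line of response
port.close()
```

Try writing the equivalent against raw USB bulk endpoints with libusb, plus a signed Windows driver, and the appeal of a dumb UART becomes obvious.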

1
0
Frank Rysanek

fed up with FTDI

I've seen USB gadget chips - in-house designs from TW/CN companies - that were so crap that they just didn't work. Some of them were LPT and serial converters. What I mean to say is that some of the counterfeit FTDI chips (the ones carrying a fake logo etc.) possibly don't need any deliberate bricking :-)

OTOH, I've been in contact with someone who purchased an RS422 converter board *straight from FTDI*. You know, one of those advertised right on FTDI's website.

http://www.ftdichip.com/Products/Cables/USBRS422.htm

And the board didn't work! FTDI's tech support admitted that they had some problems with a past batch...

Possibly the only famous alternative to FTDI is the Prolific PL2303. Not exactly a shining star. And, they also have a problem with counterfeiters.

YMMV.

It would be neat to have a "USB serial device class", with a class-based driver from Microsoft. There's the CDC ACM class, but it doesn't seem to be a perfect match...

Then again, we have the USB printer device class, and many products of that class "just don't work" anyway :-( I mean - as printers, just taking print jobs from the spooler...

1
0

Eight budget-friendly 1TB SSD data packers for real people

Frank Rysanek

Re: Ever had an SSD fail?

Sure :-) We sell SSD's mostly as boot drives in "industrial process control" PC's. The endurance of an SSD depends greatly on how you configure your OS and apps. An SSD in read-only mode can last for ages. I have some firewalls booting Linux from read-only CF cards, running for almost a decade. Same thing for simple DOS-based systems that never (or scarcely ever) need to write to the drive. Same thing for Windows Embedded with EWF locked all the time.

But I also know cases where a SCADA app (configured to log data or keep a persistent image on disk) can thrash a decent 2.5" SSD in three months. Spinning rust still has its merits. Yes it can fail too - but it's not *prone* to fail in some deployments where SSD's *are* prone to fail pretty soon. And, in terms of spinning rust, you'd better shop for the *lowest* capacity currently available on the market = the simplest and proven construction, the lowest data density. The terabyte race is not a nice prospect in that context.
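A back-of-the-envelope endurance estimate for that SCADA case - every figure below is assumed, plug in your own:

```python
# Rated NAND endurance vs. a logging workload with heavy write amplification.
TBW_RATING  = 100e12           # drive rated for 100 TB of NAND writes (assumed)
HOST_WRITES = 0.5e6 * 86400    # app logs 0.5 MB/s around the clock -> bytes/day
WRITE_AMP   = 10               # small sync writes -> high amplification (assumed)

days = TBW_RATING / (HOST_WRITES * WRITE_AMP)
print(f"~{days:.0f} days to rated wear-out")   # ~231 days
```

Double the logging rate or the write amplification and you're right at the "three months" figure - while the same workload barely registers on spinning rust.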

Ironically, most people still think that strictly nothing beats the endurance of an SSD in the role of a Windows boot drive... any SSD, in unmodified stock Windows, running Windows Update, an antivirus, a dozen self-updating apps etc.

You know - you install Windows on your shiny new expensive SSD, oh the joy of how *fast* it is, then you go entertain yourself with something else... and a couple of months down the road, when the SSD slows down noticeably, or fails outright, you tend to blame that particular SSD, or the early SSD model, or the brand... "Gosh, SSD's were *crap* a year ago... must've been a bad batch or something... let me have a new one, that will surely last longer!" ;-)

It hardly crosses your mind that maybe the SSD is *fundamentally* wrong for the role.

7
2

128GB DDR4 DIMMs have landed so double your RAM cram plan

Frank Rysanek

Re: Errr?

AFAICT all Intel single-socket desktop CPU's so far support 32 GB max. total, in 4 DIMM slots. Thus, you don't really need anything over 8 GB per DIMM at the moment, in the consumer segment = non-ECC/unbuffered :-( (Does AMD support more RAM?)

1
1

Printer drivers ate our homework, says NSW Dept of Education

Frank Rysanek

PostScript isn't all bad

Generic PostScript appears to be the vehicle that allows for large-scale networked printing. There are several vendors of centrally managed / networked / distributed printing systems for large corporations that depend on PostScript as the "common denominator" = common printing language.

Using Ghostscript and its companion RedMon (a redirected printer port monitor), you can turn a Windows-only GDI printer into a PostScript printing backend. Provided that there are still drivers for your old and cheap printer in your current Windows version - which may turn out to be your ultimate hard cheese :-) But if you do have a native Windows driver for the printer, the rest (Ghostscript+RedMon) is subject to some IT automation / scripting, if this is to be deployed on a more massive distributed scale. Yes, there would be pitfalls if the strategy so far has been "bring your own printer" :-)

Windows 8.1 / 2012 R2 still contain workable generic PS and PCL5e drivers. Ghostscript can produce PCL5 (and the GhostPCL flavour can take PCL as *input*, I believe). Typically, the "second cheapest" laser printer from a given vendor will take PCL5e (or at least PCL3). Not sure if inkjets are considered in the edu.au SAP project... they're a plague in their own right anyway.
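The Ghostscript half of that pipeline is one command - roughly what you'd configure RedMon to invoke, sketched here as a Python wrapper; paths and filenames are illustrative:

```python
# Render the virtual printer's PostScript into PCL5e for the real device.
# 'ljet4' is a stock mono PCL5e output device in Ghostscript;
# the binary is 'gswin64c' on Windows, 'gs' on Linux.
import subprocess

def ps_to_pcl5e(ps_file, pcl_file, gs_binary="gswin64c"):
    subprocess.run([gs_binary, "-dBATCH", "-dNOPAUSE", "-dSAFER",
                    "-sDEVICE=ljet4", f"-sOutputFile={pcl_file}", ps_file],
                   check=True)

ps_to_pcl5e("job.ps", "job.pcl")   # ship job.pcl to the printer's raw port
```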

I suspect that printers are just an excuse though.

1
0

Microsoft has developed its own Linux. Repeat. Microsoft has developed its own Linux

Frank Rysanek

Network switches have been running Linux-based firmware for ages

Ethernet switches have been using Linux-based firmware on the inside for ages - especially the lesser known brands / switch vendors. Cisco traditionally had their own in-house IOS, but I seem to recall that some more modern IOS strains on some HW platforms are actually Linux-based too... Other popular operating systems to use for firmware are the various BSD flavours and various RTOS'es (QNX, VxWorks and the like).

The CPU cores used in switching hardware (= what actually runs the firmware code) are typically PowerPC, ARM, or MIPS - Linux supports all of them. If the Ethernet switch chipset makers provide some reference firmware platform, it will most likely be Linux. So if someone like Microsoft decides to develop their own firmware for some 3rd-party OEM switch hardware, Linux is a very logical choice. That's where they're likely to get the best technical support, needed to bootstrap Linux on the management CPU core, and in terms of drivers and API's for the specific hardware (L2 switch matrices, L3+ accelerators, DMA engines, individual MAC/PHY blocks, various misc IO such as I2C/SPI/GPIO).

But I still consider it a little unlikely that they're going all the way from bare metal (Linux from scratch). I would find it more natural if they took whatever reference firmware (Linux) the chipset maker provides, and ported Microsoft's own user-space tools / API's to it, while possibly bugfixing and modifying the reference firmware a bit in the process.

13
0

Does Linux need a new file system? Ex-Google engineer thinks so

Frank Rysanek

any improvements for "a myriad threads reading slowly in parallel" ?

There's one use case which has traditionally been a problem for cheap spinning rust in servers: multiple (many dozens, maybe hundreds of) slow parallel threads, each reading a file from the disk drives sequentially. For optimum throughput, the FS and the OS block layer should minimize the seek volume required. With enough memory for read-ahead, it should theoretically be possible to squeeze almost the full sequential rate out of a classic disk drive. A couple of years ago, Linux wasn't there. Too many out-of-sequence seeks for metadata, read-ahead not aligned on stripe boundaries in a RAID (there were other unaligned things, if memory serves), no I/O scheduler really tuned for this use... There was allegedly some per-flow read-ahead magic in the works, but I have no news. Not sure if a new FS even has a chance of improving this. Not that anyone has claimed any such thing, regarding bcachefs or otherwise.
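One knob that does exist today, for what it's worth: each slow reader can at least tell the kernel its access pattern really is sequential, which on Linux enlarges that file's readahead window. A sketch (Linux-only, Python 3.8+):

```python
# Hint the kernel that this descriptor will be read sequentially, so it can
# run aggressive per-file readahead instead of guessing from access patterns.
import os

fd = os.open("big_media_file.bin", os.O_RDONLY)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)   # offset 0, len 0 = whole file
while chunk := os.read(fd, 64 * 1024):                 # a deliberately slow reader
    pass                                               # ... feed it to the client
os.close(fd)
```

Whether the block layer then schedules those readahead batches smartly across a hundred such files is exactly the part that (as noted above) used to fall apart.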

1
0

Because the server room is certainly no place for pets

Frank Rysanek

Re: IRQ asmnts?

I only know the hardware side of this, never actually tried to use them in a host/hypervisor... so I cannot tell you how it's done.

The "virtualization" support in hardware comes in several degrees.

1) VT-x - this facilitates the virtualization of the CPU core. I understand that the host/hypervisor must provide the "virtual" devices (storage, network, KVM) via its own guest-side device drivers (decoupled from actual hardware). In other words, the hypervisor SW must mediate all I/O to the guests, the guest OS merely lives happily under the impression of having a dedicated CPU.

2) VT-d - essentially this allows you to assign/dedicate a physical PCI device (or even a PCI dev function) to a particular VM guest instance. The secret sauce seems to have several ingredients, and IRQ's are just one part (the easiest one, I would say). I've recently found some notes on this (by no means exhaustive) in the Intel 7-series PCH datasheet and in the Intel Haswell-U SoC datasheet (vol. 1). Interestingly, each doc explains it in a slightly different way. I recall reading about the possibility to invoke a selective reset of a single physical PCI device (actually a PCI dev function), about delivering interrupts to a particular VM, about making DMA (several flavours) virtualization-aware (compliant) - and I must've forgotten a few more.

Only some selected on-chip peripherals lend themselves to VT-d (they're listed in the chipset datasheet).

3) SR-IOV - allows you to "slice" a physical device (peripheral) into multiple "logical partitions", where each "partition" appears as a dedicated physical device to its own assigned VM instance. It's like VLAN's on PCI-e, where SR-IOV aware peripherals (NIC's, RAID controllers) know how to work with a "VLAN trunk". SR-IOV can not only cater for multiple VM/OS instances through a single PCI-e root complex, it can actually cater for multiple PCI root complexes as well - allowing multiple physical host machines to share a PCI-e NIC or RAID for instance (or a shelf of legacy PCI/PCI-e slots).

VT-x has been there for ages, in pretty much any modern CPU.

VT-d has been a somewhat exclusive feature, but it's becoming more common with newer generations of CPU's and chipsets.

SR-IOV needs VT-d in the host CPU and chipset, and most importantly, the peripheral must be capable of these "multiple personalities". Only a few select PCI-e peripherals are capable of SR-IOV. Some NIC's by Intel, for instance. Likely also FC and IB HBA's. As for the multi-root-complex capability, this requires an external PCI-e switch (a chip in a box) that connects to multiple host machines via native PCI-e. Or the multi-root switch can be integrated in the backplane of a blade chassis. A few years ago, multi-root PCI-e for SR-IOV seemed to be all the rage. I recently tried to google for some products, and it doesn't seem to be so much in vogue anymore - or maybe it's just so obvious (implicit in some products) that it doesn't make headlines anymore...
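On a Linux host you can see the SR-IOV plumbing directly: since kernel 3.8, each capable PCI function exposes sriov_totalvfs / sriov_numvfs in sysfs. A quick enumeration sketch:

```python
# List SR-IOV capable PCI functions and how many virtual functions are enabled.
import glob, os

for vf_file in glob.glob("/sys/bus/pci/devices/*/sriov_totalvfs"):
    dev_dir = os.path.dirname(vf_file)
    total = open(vf_file).read().strip()
    in_use = open(os.path.join(dev_dir, "sriov_numvfs")).read().strip()
    print(f"{os.path.basename(dev_dir)}: {in_use}/{total} VFs enabled")
```

Writing a number into sriov_numvfs is also how the host spawns the VFs that then get handed to the guests.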

As for IRQ's... IRQ's alone are nowadays message-signaled for the most part (for most of the chipset-integrated peripherals). PCI-e devices are by definition MSI compliant (MSI = typically one interrupt vector per device) and most of them actually use MSI-X, where one device can trigger several interrupt vectors (ISR's), such as "one for RX, one for TX, and one global" with modern Intel NIC's. Even before PCI-e MSI's, the IO(x)APIC present in most machines since maybe the Pentium 4 could route any IRQ line to any CPU core (any CPU core's local APIC). Considering all this, I'm wondering what the problem is with assigning a particular IRQ to a particular CPU core (running a VM instance). Perhaps the IRQ's are the least problem. Perhaps the difference with VT-d is that the mechanism is more opaque/impenetrable to the guest OS (the guest OS has less chance of glimpsing the host machine's full physical setup and maybe tampering with it). That's my personal impression.
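Steering an IRQ to a core really is the trivial part - on a Linux host it's a single write of a CPU bitmask; the IRQ number and mask below are illustrative, and it needs root:

```python
# Pin one interrupt vector to CPU 2. Pick the vector's number for your device
# from /proc/interrupts; the mask is hex, one bit per CPU.
IRQ = 42
CPU_MASK = "4"   # binary 100 -> CPU 2 only

with open(f"/proc/irq/{IRQ}/smp_affinity", "w") as f:
    f.write(CPU_MASK)
```

The hard parts of VT-d (DMA remapping, per-function reset, interrupt remapping into the guest) have no such one-liner.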

IRQ's on PCI are, by definition, PnP (except for some *very* exotic exceptions, where you can specify in the BIOS which GSI input on the chipset triggers which interrupt number in the IO/APIC, or where you can jumper a PCI-104 board to trigger one of the PCI interrupt lines, one of your own choice). In a virtualized setup, however, the IRQ routing must follow the admin-configured setup of "which VM owns which PCI device". PnP with human assistance, I would say.

1
0
Frank Rysanek

Re: Sustainable push forward

That upvote is from me, thanks for your response.

In my case, it's indeed a matter of being somewhat inertial and lazy. The scale is relatively small and hasn't pushed me very hard in that direction. I'm not a full-time admin, and the 4 servers that we have in the cabinet (2x Windows, 2x Linux) are not much of a problem to cope with. An upcoming migration of an important app onto new Windows (2003 end of support) will raise that to 6, temporarily (read: until everybody moves their small internal projects to the new infrastructure, read: possibly forever). So far I've been approaching all this by keeping the hardware uniform and keeping the Linux distros hardware-agnostic. I do any revamps of the Windows hardware in "waves", to save a bit on the spare parts. We're a hardware shop ourselves, so I always have some old stuff lying around - all I have to hoard is an extra motherboard in every wave. There's a server or two elsewhere in the building - machines that I prefer to keep physically separate from the main cabinet.

Other than the small scale, I'm a classic case for virtualization - I have Windows and Linux, and I'm too young to be a conservative old fart (which is how I actually feel in many respects :-) = I hardly have an excuse for my laziness...

Regarding potential virtualization, one consideration is the organizational environment. I'm ageing along with a stable gang of merry colleagues who are less conservative than I am, but more in the way of "if it's not click'n'done, like Apple, there's something wrong with it". On the job they're all well versed in managing Windows across different pieces of physical hardware, and are even ahead of me in terms of virtualization on a small scale (for testing Windows Embedded images etc.), but they're not very good at debugging firewalls and heterogeneous tech. I'm wondering what an additional layer of indirection would mean to them if I get hit by a car someday... it's indeed a matter of documentation and deliberate internal sharing of knowledge. Or of outsourcing the whole darn internal IT thing (in a house full of computer geeks).

After your comments, my impression of virtualization boils down to approximately "within a time span of several years, it will decouple your OS versions from the physical hardware and its failures, you will only have one physical machine to cater for, and yours will be the choice when to migrate the VM's to less archaic OS versions (which you have to do anyway, no escaping that) = at a time when it suits you."

2
1
Frank Rysanek

Sustainable push forward

So you've virtualized all your legacy boxes. You haven't just imaged the old versions of Windows, Netware or what have you - you've even installed modern Windows versions in the VM partitions, reinstalled/upgraded/replaced the apps etc. Instead of a 42U rack cabinet, you now have a pair of modern quad Xeon servers (because if it was only one server, that would be a single point of failure, right?). Now finally you can juggle the system images at a whim and Ghost has become a fading memory.

Oh wait - for a proper virty orgasm, you need an external storage box to centralize your storage of system images and data volumes. Heheh - or two storage units, to avoid the single point of failure... because disk drives, RAID controllers and power supplies are all eventually perishable. Fortunately the storage attachment technology doesn't matter much (SAS/FC/IB/PCI-e/iSCSI?) as long as you have a way of getting your data out of the old storage box a couple of years down the road. To the hypervisor, the guest images are just files - so you only need a way of moving the files around (actually, forward).

Next question... will your system images of today be compatible with a future hypervisor release 5 years down the road? What about 10 years? Will your colleagues 10 years down the road be able to maintain that old hypervisor, to restore the host OS from backups onto bare metal? Ahh yes - you can upgrade the host/hypervisor OS regularly / incrementally through the years. If you have a myriad OS images with non-trivial virtual network interconnects between them (just a LAN and a DMZ with some servers in each, plus a firewall in another partition) - will your colleagues 10 years down the road be able to find their way around this? Yes of course - it's a matter of proper documentation, and of passing the wisdom on... Will the virtualization make it any easier for your successors? Isn't it a matter of replacing one problem (supporting an old OS on old bare metal) with the same problem in a more layered and obfuscated reincarnation (supporting your host OS / hypervisor on the ageing bare metal, and supporting the old guest VM's on potentially new host OS / hypervisor releases)?

To me, the article is pretty disturbing. I do feel the old git taking over in my veins...

10
1

Post-pub nosh neckfiller: Bryndzové halušky

Frank Rysanek

Halušky with cabbage

Regarding the alternative recipe with cabbage - yes, that's the less radical version, making Halušky more accessible to non-Slovaks :-) The cabbage is supposed to be pickled/fermented/sour (Sauerkraut), definitely not fresh and crisp. Not sure at what stage the cabbage gets mixed in - it's definitely not served separately and cold.

2
0
Frank Rysanek

Bryndza

Without Bryndza, you cannot say you ate the real deal. The gnocchi-like "carrier", although some may like it alone (I do :-), is just a dull background to the incredible and breathtaking flavour of genuine Bryndza. Not sure if any British sheep cheese can rival the raw animal energy of the Slovak Bryndza. Unforgettable. I'm not a Slovak - to me, once was enough.

1
0

BAE points electromagnetic projectile at US Army

Frank Rysanek

the one thing I don't get...

How do you fire this, without nuking your own onboard electronics?

1
0

Gates and Ballmer NOT ON SPEAKING TERMS – report

Frank Rysanek

Re: to buy a failing company

To buy a company in trouble can be a successful strategy for some investors.

If it wasn't for the fact that Nokia was a technology giant, it might be a classic choice of Warren Buffett.

The Nokia phone business did have several steady revenue streams, several development teams working on some interesting projects, and several good products just launched or close to launch (which could have been refined in following generations, but weren't). As far as I can tell from the outside, they might well have kept going at a profit, if they had had a chance to selectively shed some fat in terms of staff and internal projects, get more focused, and "stop switching goals halfway there".

Microsoft's only plan with Nokia is to have its own vehicle for Windows Phone, which means that much of Nokia's bread-and-butter technology legacy has been wasted, and many legacy Nokia fans left in a vacuum.

21
2

Business is back, baby! Hasta la VISTA, Win 8... Oh, yeah, Windows 9

Frank Rysanek

why upgrade; OS maintenance over the years

In terms of the underlying hardware architecture, for me the last true HW motive that would warrant an upgrade was the introduction of message-signaled interrupts. MSI has the potential to relieve us all of shared IRQ lines. It required a minor update of the driver framework - and I'm sure this could've been introduced in Windows XP with an SP4. Well, it got introduced as part of Vista (or let's discard Vista and say "seven") - and was part of a bigger overhaul of the driver programming model, from the earlier and clumsier WDM to the more modern and supposedly "easier to use" WDF. Along came a better security model for user space. With all of these upgrades in place, I believe that Windows 7 could go on for another 20 years without changing the API for user space. No problem to write drivers for new hardware. USB3 and the like don't bring a paradigm shift - just write new drivers, and that change stays "under the hood". Haswell supposedly brings finer-grained / deeper power management... this could stay under the hood in the kernel too, maybe catered for by a small incremental update to the kernel-side driver API.

Linux isn't inherently long-term unmanned/seamless either. An older distro won't run on hardware that's years newer, as the kernel doesn't have drivers for the new hardware, and if you replace an ages-old kernel with something much more recent, you'll have to face more or less serious discrepancies in the kernel/user interfaces. Specifically, the graphics driver frameworks between the kernel and X Windows have been gradually developing, and e.g. some "not so set in stone" parts of the /proc and /sys directory trees have also changed, affecting marginal stuff such as hardware health monitoring. Swapping kernels across generations in some simple old text-only distro can be a different matter (it can work just fine within some limits), but that's not the case for desktop users. Ultimately it's true that in Linux, the user has much more choice between distros, and between window managers within a distro, and gradual upgrades to the next major version generally work. And your freedom to modify anything, boot over a network etc. is much greater than in Windows. Specifically, if it wasn't for Microsoft pushing the UEFI crypto-circus into every vendor's x86 hardware, you could say that Linux is already easier to boot / clone / move to replacement hardware than Windows 7/8 (the boot sequence and partition layout are easier to control in Linux, with fewer artificial obstacles).

I'm curious about Windows 9. It could be a partial return to the familiar desktop interface with a start menu, and legacy x86 win32 API compatibility. Or it could be something very different. I've heard suggestions that Microsoft is aiming to unify the kernel and general OS architecture across desktop / mobile / telephones - to unite Windows x86 and Windows RT. From that, I can extrapolate an alternative scenario: Windows 9 (or Windows 10?) could turn out to be a "Windows CE derivative", shedding much of the legacy Win32 NT API compatibility, legcuffed to your piece of hardware using crypto-signed UEFI, and leashed to the MS Cloud (no logon without IP connectivity and a MS cloud account). All of that, with a "traditional desktop" looking interface... You don't need much more from a "cloud terminal". I wouldn't be surprised.

1
0

Moon landing was real and WE CAN PROVE IT, says Nvidia

Frank Rysanek

radiosity rendering? HDR?

When was the first time I read about "radiosity rendering"? Deep in the nineties maybe? Though at that time, it was mentioned as "the next level after raytracing"... This seems more like texture mapping (not proper raytracing), but with an additional voxelized radiosity-style upgrade to "hardware lighting". There are probably several earlier "eye candy" technologies in the mix - objects cast shadows, and did I see reflections on the lander's legs? Not sure about some boring old stuff such as mip mapping, bump mapping etc.

I.e. how to make it look like a razor-sharp raytraced image with radiosity lighting, while in fact it's still just a texture-mapped thing: the incredible number-crunching horsepower (needed for raytracing+radiosity) has been worked around, approximated by a few clever tricks. Looks like a pile of optimizations. Probably the only way to render this scene in motion in real time, on today's gaming PC hardware. BTW, does the "lander on the moon" seem like a complicated scene? It doesn't look like a huge number of faces, does it?

I forgot to mention... that bit about "stars missing due to disproportionate exposure requirements for foreground and background" might point to support for "high dynamic range" data (in the NVidia kit). The picture gets rendered into an HDR raw framebuffer, and the brightness range of the raw image is then clipped to that of a PC display (hardly even 8-bit color depth). To mimic the "change of exposure time", all you need to do is shift the PC display's dynamic range over the raw rendering's dynamic range... Or it could be done in floating point math. Or it could get rendered straight into 8 bits per color (no RAW intermediate framebuffer needed), just using a "scaling coefficient" somewhere, in lighting or geometry...
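A minimal sketch of that "shift the display's range over the raw rendering" idea - the radiance values and exposure figure are made up for illustration:

```python
# Simulate camera exposure on a float HDR framebuffer: scale by 2^stops,
# clip to the display's [0,1] range, gamma-encode to 8 bits.
import numpy as np

def expose(hdr, stops, gamma=2.2):
    ldr = np.clip(hdr * (2.0 ** stops), 0.0, 1.0)
    return (255 * ldr ** (1 / gamma)).astype(np.uint8)

scene = np.array([1.5e4, 1e-5])    # [sunlit lander surface, faint star] - assumed
print(expose(scene, -14))          # bright lander survives, star quantizes to 0
```

Exactly the effect in the Apollo photos: with an exposure short enough for the sunlit foreground, star-level radiances round to zero.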

Seems that buzzwords like HDR, radiosity or raytracing are not enough of an eyewash nowadays. The NVidia PR movie is clearly targeted at a more general audience :-)

BTW, have you ever flown in a passenger jet at an altitude of 10+ km, during the day? Most of us have... at those altitudes, you typically fly higher than the clouds. There's always the sun and enough of your normal blue sky towards the horizon... but have you tried looking upward? Surprise - the sky looks rather black! And yet there's not a hint of stars.

1
0

Three photons can switch an optical beam at 500 GHz

Frank Rysanek

Re: Awsome.

At this switching speed and gain... wouldn't it be an interesting building block for all-optical processors? Actually I can imagine why NOT: no way to miniaturize this to a level competitive with today's CMOS lithography.

1
0

Intel's Raspberry Pi rival Galileo can now run Windows

Frank Rysanek

The Galileo has no VGA

No VGA = no point in installing Windows on the poor beast.

Well you could try with a MiniPCI-e VGA, or a USB VGA... both of which are pretty exotic, in one way or another.

3
0

OpenWRT gets native IPv6 slurping in major refresh

Frank Rysanek

Re: So much better than original FW

A switch from original firmware to OpenWRT has improved signal quality and reach? Not very likely, though not entirely impossible...

Other than that, TP-Link hardware of the recent generation is a marvellous basis for OpenWRT. It runs very cool and has very few components apart from the Atheros SoC - this looks like a recipe for longevity. Only the 2-3 electrolytic caps would better be solid-polymer (they're not) - I haven't found any other downside.

For outdoor setups I prefer Mikrotik gear (HW+FW) in a watertight aluminum box. And even the RB912 has classic aluminum electrolytics... so I cannot really scorn TP-Link for not using solid-polymer caps in their entry-level SoHo AP's.

1
0

Dell exec: HP's 'Machine OS' is a 'laughable' idea

Frank Rysanek

Re: no need for a file system

IMO the abstraction of files (and maybe folders) is a useful way of handling opaque chunks of data that you need to interchange with other people or other machines. Keeping all your data confined to your in-memory app doesn't sound like a sufficient alternative solution to that "containerized data interchange" purpose.

4
0
Frank Rysanek

Re: a game-changer

That's a good idea for handhelds. Chuck RAM and Flash, just use a single memory technology = lower pin count, less complexity, no removable "disk drives". Instant on, always on. A reboot only ever happens if a software bug prevents the OS from going on. Chuck the block layer? Pretty much an expected evolutionary step after Facebook, MS Metro, app stores and the cloud...

3
0
Frank Rysanek

Ahem. 'Scuse me thinking aloud for a bit

The memristor stuff is essentially a memory technology. Allegedly something like persistent RAM - not sure if memristors really are as fast as today's DRAM or SRAM.

The photonics part is likely to relate to chip-to-chip interconnects. Not likely all-optical CPU's.

What does all of this boil down to?

The Machine is unlikely to be a whole new architecture - not something massively parallel or anything like that. I would expect a NUMA with memristors for RAM. Did the article's author mention DIMMs? The most straightforward way would be to take an x86 server (number of sockets subject to debate), run QPI/HT over fibers, and plug in memristors instead of DRAM. Or use Itanium (or ARM or Power) - the principle doesn't change much.

Is there anything else to invent? Any "massively parallel" tangent is possible, but not new - take a look at the GPGPU tech we have today. Or the slightly different approach that Intel has taken with Larrabee and its successors. Are there any gains to be had in inventing a whole new CPU architecture? Not likely, certainly not unless you plan to depart from the general von Neumann style NUMA. GPGPU's are already as odd and parallel as it gets, while still fitting the bill for some general-purpose use. Anything more "odd and parallel" would be in the territory of very special-purpose gear, or ANN's.

So... while we stick to a NUMA with "von Neumann style" CPU cores clustered in the NUMA nodes, is it really necessary to invent a whole new OS? Not likely. Linux and many other OS'es can run on a number of CPU instruction sets, and are relatively easy to port to a new architecture. Theoretically it would be possible to design a whole new CPU (instruction set) - but does the prospect sound fruitful? Well, not to me :-) We already have instruction sets, CPU and SoC flavours within each particular family, and complete platforms around the CPU's, suited for pretty much any purpose that the "von Neumann" style computer can be used for, from tiny embedded things to highly parallel datacenter / cloud hardware.

You know what Linux can run on. A NUMA with some DRAM and some disks (spinning rust or SSD's). Linux can work with suspend+resume. Suppose you have lots of RAM. Would it be any bottleneck that your system is also capable of block IO? Not likely :-) You'd just have more RAM to allocate to your processes and tasks. If your process can stay in RAM all the time, block IO becomes irrelevant, does not slow you down in any way. Your OS still has to allocate the RAM to individual processes, so it does have to use memory paging in some form.
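For the record, "persistent RAM, still with paging in some form" already has a programming model today: map a file (or, with real NVDIMMs, a DAX region) and treat it as memory, letting the MM subsystem do the rest. A sketch with a made-up filename and layout:

```python
# State that survives the process - and reboots - without explicit block IO
# in the hot path: an mmap'ed 8-byte counter, incremented on every run.
import mmap, os, struct

fd = os.open("state.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 8)                          # reserve room for one uint64
buf = mmap.mmap(fd, 8)
runs = struct.unpack_from("<Q", buf, 0)[0]   # read the persisted value
struct.pack_into("<Q", buf, 0, runs + 1)     # plain memory write
buf.flush()                                  # ask the kernel to persist it
print(f"run number {runs + 1}")
```

With memristor-grade persistent RAM, the flush() is the part that would shrink to a cache writeback - the allocation and paging bookkeeping stays.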

You could consider modifying the paging part of the MM subsystem to use coarser allocation granularity. Modifications like this have been under way all the time - huge pages implemented, debates about what would be the right size of the basic page (or minimum allocation) compared to the typical IO block size, possible efforts to decouple the page size from the optimum block IO transaction size and alignment... Effectively, to optimize Linux for an all-in-memory OS, the developers managing the kernel core (and MM in particular) would possibly be allowed to chuck some legacy junk, and they'd probably be happy to do that :-) if it weren't for the fact that Linux tries to run on everything and stay legacy compatible with 20-year-old hardware. But again, block IO is not a bottleneck if it's not in the "active execution path".
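
(The coarser granularity is nothing hypothetical either - a minimal sketch using the huge-page support Linux already has. MAP_HUGETLB only works if the admin has reserved huge pages beforehand, e.g. via /proc/sys/vm/nr_hugepages.)

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        /* One 2 MB huge page instead of 512 little 4 kB ones:
           coarser allocation granularity, fewer TLB entries,
           less page-table bookkeeping. */
        size_t len = 2UL * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");   /* no huge pages reserved? */
            return 1;
        }
        munmap(p, len);
        return 0;
    }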

It doesn't seem likely that the arrival of persistent RAM would remove the need for a file system. That would be a very far-fetched conclusion :-D Perhaps the GUI's of modern desktop and handheld OS'es seem to be gravitating in that direction, but anyone handling data for a living would have a hard time imagining his life without some kind of files and folders abstraction (call them system-global objects if you will). This just isn't gonna happen.

Realistically I would expect the following scenario:

as a first step, ReRAM DIMM's would become available someday down the road, compatible with the DDR RAM interface. If ReRAM was actually slower than DRAM, x86 machines would get a BIOS update, able to distinguish between classic RAM DIMM's and ReRAM (based on SPD EEPROM contents on the DIMMs) and act accordingly.

There would be no point in running directly from ReRAM if it was slow, and OS'es (and applications) would likely reflect that = use the ReRAM as "slower storage". This is something that a memory management and paging layer in any modern OS can take care of with fairly minor modification.

If ReRAM was really as fast as DRAM, there would probably be no point in such an optimization.
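
(Either way, "persistent RAM as storage without the block layer" could look something like this from userspace - a sketch under my own assumptions: the /dev/pmem0 device name is invented, and I'm assuming the ReRAM bank would be exposed as a memory-mappable device node at all.)

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical: the ReRAM bank exposed as a device node.
           mmap() makes it byte-addressable - no block layer, no
           read()/write() syscalls in the data path. */
        int fd = open("/dev/pmem0", O_RDWR);   /* invented name */
        if (fd < 0) return 1;

        size_t len = 1UL << 20;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        strcpy(p, "survives a power cycle");   /* persistent by nature */
        msync(p, len, MS_SYNC);                /* flush it for real */

        munmap(p, len);
        close(fd);
        return 0;
    }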

Further down the road, I'd expect some deeper hardware platform optimizations. Maybe if ReRAM was huge but a tad slower than DRAM, I would expect another level of cache, or an expansion in the volumes of hardware SRAM cache currently seen in CPU's. Plus some shuffling in bus widths, connectors, memory module capacities and the like.

So it really looks like a subject for gradual evolution. If memristors really turn out to be the next big thing in memory technology, we're likely to see a flurry of small gradual innovations to the current computer platforms, spread across a decade maybe, delivered by a myriad of companies from incrementally innovating behemoths to tiny overhyped startups - rather than one huge leap forward delivered with a bang by HP after a decade of secretive R&D. The market will take care of that. If HP persists with its effort, it might find itself swamped by history happening outside its fortress.

BTW, the Itanium architecture allegedly does have a significant edge in some very narrow and specific uses, in the category of scientific number-crunching (owing to its optimized instruction set) - reportedly, with correct timing / painstakingly hand-crafted ASM code, Itanium can achieve performance an order of magnitude faster than what's ever possible on an x86 (using the same approach). This information was current around 2008-2010; not sure what the comparison would look like against a 2014-level Haswell. Based on what I know about AVX2, I still don't think the recent improvements are in the same vein where the Itanium used to shine... Itanium is certainly no advantage for general-purpose internet serving and cloud use.

As for alternative architectures, conceptually departing from "von Neumann with NUMA" and deterministic data management... ANN's appear to be the only plausible "very different" alternative. Memristors and fiber interconnects could well be a part of some ANN-based plot. Do memristors and photonics alone help solve the problems (architectural requirements) inherent to ANN's, such as truly massive parallelism in any-to-any interconnects, organic growth and learning by rewiring? Plus some macro structure, hierarchy and "function block flexibility" on top of that...

I haven't seen any arguments in that direction. The required massive universal cross-connect capability in dedicated ANN hardware is a research topic in itself :-)

Perhaps the memristors could be used to implement basic neurons = to build an ANN-style architecture, where memory and computing functions would be closely tied together, down at a rather analog level. Now consider a whole new OS for such ANN hardware :-D *that* would be something rather novel.
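
(For the unfamiliar: the usual memristor-crossbar pitch is that a matrix of conductances does the synaptic multiply-accumulate in the analog domain - Kirchhoff's current law sums I[j] = sum over i of V[i] * G[i][j] for free. A toy simulation of that idea, with made-up numbers:)

    #include <stdio.h>

    #define ROWS 3   /* input lines, driven with voltages */
    #define COLS 2   /* output lines summing currents = "neurons" */

    int main(void)
    {
        /* Each memristor's conductance G[i][j] acts as a synaptic
           weight; the crossbar wiring itself performs the weighted sum. */
        double G[ROWS][COLS] = { {0.5, 0.1}, {0.2, 0.9}, {0.7, 0.3} };
        double V[ROWS]       = { 1.0, 0.5, 0.2 };
        double I[COLS]       = { 0.0, 0.0 };

        for (int j = 0; j < COLS; j++)
            for (int i = 0; i < ROWS; i++)
                I[j] += V[i] * G[i][j];

        for (int j = 0; j < COLS; j++)
            printf("neuron %d output current: %.3f\n", j, I[j]);
        return 0;
    }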

What would that be called - "self-awareness v1.0"? (SkyOS is already reserved...)

Or, consider some hybrid architecture, where ANN-based learning and reasoning (on dedicated ANN-style hardware) would be coupled to von Neumann-style "offline storage" for big flat data, and maybe some supporting von Neumann-style computing structure for basic life support, debugging, tweaking, management, allocation of computing resources (= OS functions). *that* would be fun...

Even if HP were pursuing some ANN scheme, the implementation of a neuron using memristors is only a very low-level component. There are teams of scientists in academia and corporations trying to tackle the higher levels of organization/hierarchy: wiring, macro function blocks, operating principles. Some of this research gets mentioned at The Register. It would sure help to have general-purpose ANN hardware miniaturized and efficient to the level of natural grey matter - that would allow the geeks to try things that they so far haven't been able to, for simple performance reasons.

3
0

PCIe hard drives? You read that right, says WD

Frank Rysanek

Re: Whatever next? Direct Fibre Channel connections?

FibreChannel disk drives have been around for a very long time (though perhaps no longer).

This question twisted my brain into a "back to the future" déjà vu.

http://forums.storagereview.com/index.php/topic/3331-fc-al-interface/

http://www.hgst.com/tech/techlib.nsf/techdocs/439F4FF2F546AE4F86256E4400673C67/$file/10K300_FC-AL_Functional_v.6.pdf

Ahh right - don't expect a duplex LC optical socket on the drive, that "direct to drive" flavour of FC-AL was wired into an SCA connector and ran over a copper PHY...

1
0

Everything you always wanted to know about VDI but were afraid to ask (no, it's not an STD)

Frank Rysanek

Pretty good reading

I live at the other end of the spectrum - in a small company, with barely enough employees to warrant some basic level of centralized IT; most of the employees are techies who prefer to select their PC's for purchase and manage them... It's a pretty retro environment: the centralized services are nineties-level file serving and printing, plus some VPN remote login, plus a Windows terminal server set up to cater for our single application that runs best in an RDP console on a remote server (a databasey thingy). A major PITA is how to back up the OS on notebooks with preinstalled Windows in a somewhat orderly fashion. With the demise of DOS-based Ghost and with the recent generations of Windows, the amount of work required is staggering - massaging the preinstalled ball of crud into a manageable, lean and clean original image suitable for a system restore, should the need arise - with a separate partition for data, for instance. But it's less pain than trying to force a company of 20 geeks into mil-grade centralized IT.

To me, as a part-time admin and a general-purpose HW/OS troubleshooter, the article by Mr. Pott has been fascinating reading. There's a broad spectrum of IT users among our customers, and it certainly helps to "be in the picture" towards the upper end of centralized IT, even if it's not our daily bread and butter.

1
0

BEHOLD the HOLY GRAIL of TECH: The REVERSIBLE USB plug

Frank Rysanek

USB connector that fits either way up? That's on the market already...

I was shocked a couple months ago by the USB ports on this hub:

http://www.czc.cz/connect-it-ci-141-usb-2-0-hub-4-porty/130887/produkt?q-category-id=cep0kaggl8jm4aejnad83vui25

You can insert your peripherals either way up. It feels like you have to apply a bit of violence, but we're using it in a PC repair workshop and it's been working fine for several months now.

2
0

Windows 8 BREAKS ITSELF after system restores

Frank Rysanek

Re: approaching Windows 8 "the old way"

When it comes to Windows, I'm a bit of a retard... I always try to approach it based on common sense and generic principles of the past, which probably hints at a lack of specific education on my part in the first place... I've never tried to use the Windows built-in backup/restore. The tool I tend to prefer for offline cloning is Ghost - the DOS flavour of Ghost. I've made it work under DOSemu in Linux (PXE-booted), and recently my Windows-happy colleagues have taught me to use Windows PE off a USB stick... guess what: I'm using that to run Ghost to clone Windows the way *I* want it. With Windows 8 / 8.1 (and possibly 7 on some machines), there's an added quirk: after restoring Windows from an image onto the exact same hardware, you still have to repair the BCD store, which is your boot loader's configuration file. Which is fun if it's on an EFI partition, which is hidden in Windows and not straightforward to get mounted... but once you master the procedure, it's not that much trouble - I'd almost say it's worth it. Symantec has already slit the throat of the old cmdline Ghost, but I'm told that there are other 3rd-party tools to step into its place... I haven't tested them though.

I've been forced to go through this on a home notebook that came with Windows 8 preloaded. Luckily I have the cloning background - as a pure home user, I'd probably be lost, at the mercy of the notebook maker's RMA dept if the disk drive went up in smoke. Well I've found the needed workarounds. And I tried to massage Windows 8.1 into a useable form, close to XP style. I've documented my punk adventure here:

http://www.fccps.cz/download/adv/frr/ACER_initial_cleanup.htm

A few days later, I had an opportunity to re-run the process following my own notes, and I had to correct a few things... and I noticed that I couldn't get it done in under 3 days of real time!

Yes I did do other work while the PC kept crunching away, doing a backup/restore or downloading Windows updates. On a slightly off topic note, the "hourglass comments" after the first reboot during the Windows 8.1 upgrade are gradually more and more fun (absurd) to read :-)

I've read elsewhere that before upgrading to 8.1, you'd better download all the updates available for Windows 8, otherwise the upgrade may not work out.

To me, upgrading from Windows 8 to 8.1 had a positive feel. Some bugs (around WinSXS housekeeping, for example) have vanished. But I'm also aware of driver issues, because Windows 8.1 is NT 6.3 (an upgrade from Windows 8 = NT 6.2). So if some 3rd-party driver has a signature for NT 6.2, you're out of luck in Windows 8.1 if the respective hardware+driver vendor embedded the precise version (6.2) in the INF file, as the INF file also appears to be covered by the signature stored in the .CAT file... Without the signature, with many drivers (and a bit of luck), you can work around the "hardcoded version" by modifying the INF file. Hello there, Atheros... On the particular notebook from Acer it was not a problem; Intel and Broadcom apparently have drivers in Windows 8.1.

I actually did the repartitioning bit as a fringe bonus of creating an initial Ghost backup. I just restored from the backup and changed the partitioning while at it.

...did I already say I was a retard?

Windows 8 appears to be capable of *shrinking* existing NTFS partitions, so perhaps it is possible to repartition from the live system without special tools. Not sure, haven't tried it myself.

For corporate deployments of Windows 8, I'd probably investigate the Microsoft Deployment Toolkit.

That should relieve you of the painstaking manual maintenance of individual Win8 machines and garbage apps preloaded by the hardware vendor. It might also mean that you'd have to buy hardware without preloaded windows, which apparently is not so easy...

1
0

Intel details four new 'enthusiast' processors for Haswell, Broadwell

Frank Rysanek

Secret thermal compound

Perhaps with the "extreme edition" they'll return to soldering the heatspreader on, the way it was in the old days (I guess). Or at least use a "liquid metal" thermal interface material (think of CoolLaboratory Pro or Galinstan) rather than the white smear that they've been using since Ivy Bridge...

Myself I'm not fond of number crunching muscle. Rather, I drool over CPU's that don't need a heatsink (and are not crap performance-wise). I like the low-end Haswell-generation SoC's (processor numbers ending in U and Y), and am wondering what Broadwell brings in that vein.

1
0

The UNTOLD SUCCESS of Microsoft: Yes, it's Windows 7

Frank Rysanek

Re: With 8.1 you barely have to use the "touch interface" if you don't want to

> With 8.1 you barely have to use the "touch interface" if you don't want to

Actually... a few weeks ago I purchased an entry-level Acer notebook for my kid, with Windows 8 ex works. It was in an after-Xmas sale, and was quite a bargain. A Haswell Celeron with 8 GB of RAM... I'm a PC techie, so I know exactly what I'm buying.

Even before I bought that, I knew that I would try to massage Windows 8 (after an upgrade to 8.1) into looking like XP.

The first thing I tried to solve was... get rid of Acer's recovery partitions (like 35 GB total) and repartition the drive to be ~100 GB for the system and the rest for user data. I prefer to handle system backup in my own way, using external storage - and I prefer being able to restore onto a clean drive from the backup. So it took me a while to build an image of WinPE on a USB thumb drive, as a platform for Ghost... from there it was a piece of cake to learn to rebuild the BCD on the EFI partition (typically hidden). Ghost conveniently only backed up the EFI and system partition, and ignored the ACER crud altogether :-)

Not counting the learning process, it took me maybe 3 days of almost net time to achieve my goal = to have a lean and clean Win 8.1 with XP-ish look and feel. The steps were approximately:

1) uninstall all Acer garbage (leaving only the necessary support for custom keys and the like)

2) update Windows 8 with all available updates

3) clean up other misc garbage, the most noteworthy of which was the WinSXS directory. I did this using DISM.EXE while still in Windows 8, which was possibly a mistake. The "component install service" in the background (or whatever it's called) tended to eat a whole CPU core doing nothing... but after several hours and like three reboots it was finally finished. I later found out that it probably had a bug in Win8 and was a breeze if done in Windows 8.1... BTW, I managed to reduce WinSXS from 13.8 GB down to 5.6 GB (in several steps)... and the system backup size dropped from 12 GB down to 6 GB :-)

4) upgrade to Windows 8.1. This also took surprisingly long. It felt like a full Windows reinstall. The installer asked for several reboots, and the percentage counter (ex-hourglass) actually wrapped around several times... it kept saying funny things like "finishing installation", "configuring your system", "registering components", "configuring user settings", "configuring some other stuff" (literally, no kidding!) but finally it was finished...

5) more crapectomy (delete stuff left over from Win8 etc.)

6) install Classic Shell, adjust window border padding, create a "god mode" folder (only to find out that it's actually pretty useless), install kLED as a soft CapsLock+NumLock indicator (the Acer NTB lacks CapsLock+NumLock LEDs), replace the ludicrous pre-logon wallpaper, get rid of some other user interface nonsense...

Somewhere in between I did a total of three backups: one almost ex works, another with a clean install of Windows 8.1 (after basic post-install cleanup), and one last backup of the fully customized install - just a snapshot of the system partition stored on the data partition (for a quick rollback if the kids mess up the system).

It looks and even works (at a basic level) like Windows XP. Some aspects of the user interface work slightly differently - for instance, windows now dock to screen edges. No problem there. Even when I install some software whose installer expects the old-style start menu, the installer still creates its subfolders in the ClassicStartMenu (technically alien to Windows 8) - great job there.

But: the control panels are still Windows 8 style = bloated and incomprehensible, if you're looking for something that was "right there" in Windows XP. The search tool is still absent from the explorer's context menus - you have to use the global search box in the upper end of the Win8 sidebar. The dialogs that you need to deal with when occasionally fiddling with file privileges are just as ugly as they ever have been (they weren't much nicer in XP before the UAC kicked in in Vista).

I'm wondering if I should keep the Windows 8.1 start button, only to have that nifty admin menu on the right mouse button. The left button = direct access to the start screen (even with smaller icons) is little use to me.

There's one last strange quirk, apparently down to the hyper-intelligent touchpad: upon a certain gesture, possibly by sweeping your finger straight across the touchpad horizontally, the Win8 sidebar jumps out and also the big floating date and time appears - and they just glare at you. This typically happens to me unintentionally - and whatever I was doing at the moment gets blocked away by this transparent Win8 decoration. It is disturbing - I have to switch my mental gears and get out of that Windows 8 shrinkwrap to get to work again... I hope it will be as easy as disabling all the intelligence in the touchpad control panel. For the moment I cannot do away with the Win8 sidebar entirely (even if this was possible) because I still need it now and then...

Some of the control panels are Metro-only - and THEY ARE A MESS! There's no "apply" button... it's disturbing to me that I cannot explicitly commit the changes I make, or roll back in a harmless way. Typically when I happen to launch some Metro panel by mistake, I immediately kill the ugly pointless beast using Alt+F4. Thank god at least that still works.

The new-generation start screen with mid-size icons is not a proper Start menu replacement. For one thing, the contents are not the same. Legacy software installs into the classic start menu, but its icons don't appear in the 8.1 start screen. And vice versa. The new start screen with small icons is better than the endless Metro chocolate bar of Windows 8, but still a piece of crap.

I hope my trusty old Acer that I use daily at work (XP-based) survives until Windows 9 - by then I'll have a chance to decide for myself whether Windows 9 is back on track in the right direction, or what my next step is. If this is everybody's mindset, it's not surprising at all that Windows 8 doesn't sell.

9
0

Satya Nadella is 'a sheep, a follower' says ex-Microsoft exec

Frank Rysanek

If he's a server man...

If Nadella is a "server" man, he might actually understand much of the dislike that power-users have been voicing towards Windows 8. He might be in mental touch with admins and veteran Windows users.

If OTOH he's a "cloud" buzzword hipster evangelist, that doesn't sound nearly as promising.

What does Microsoft's humongous profit consist of these days? Is it still selling Windows and Office? If that's the case, it has seemed to me lately that they've been doing their best to kill that hen laying the golden eggs... They've always been capitalizing on the sheer compatibility and historical omnipresence of their Win32 OS platform and office suite. In recent years though, they've done a good job of scaring their once-faithful customers away with counter-intuitive UI changes, software bloat and mind-boggling licensing :-(

8
2

The other end of the telescope: Intel’s Galileo developer board

Frank Rysanek

Re: PC104

PC104 is a form factor, rather than a CPU architecture thing - though it's true that I've seen a number of x86 CPU's in the PC104 form factor and only a few ARM's...

PC104 is a relatively small board, whose special feature is the PC104 and/or PCI104 connector, making it stackable with peripheral boards in that same format. Other than that, it's also relatively expensive. And it's easy to forget about heat dissipation in the stack.

If you need a richer set of onboard peripherals or a more powerful CPU (Atom in PC104 has been a joke), you may prefer a slightly bigger board, such as the 3.5" biscuit. There are even larger formats, such as the 5.25" biscuit or EPIC, which is about as big as Mini-ITX. The bigger board formats allow for additional chips and headers on board, additional connectors along the "coast line", and additional heat dissipation.

If OTOH you need a very basic x86 PC with just a few digital GPIO pins, and you don't need an expansion bus (PCI/ISA), there are smaller formats than PC/104 - such as the "Tiny Module" (from ICOP, with Vortex) or the various SODIMM PC's.

The Arduino format is special in that it offers a particular combination of discrete I/O pins, digital and analog - and not much else... and I agree with the other writers who point out that it's a good prototyping board for Atmega-based custom circuits.

2
0
Frank Rysanek

Re: 400 Mhz?

Oh, it's got CMPXCHG8B? Means it can run Windows XP? cept for the missing graphics :-)
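
(If you wanted to check that from code - a minimal sketch using GCC's <cpuid.h>; the CX8 bit (EDX bit 8 of CPUID leaf 1) signals CMPXCHG8B support, one of the things the XP-era NT kernel insists on.)

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        /* bit_CX8 = EDX bit 8 = CMPXCHG8B support */
        printf("CMPXCHG8B: %s\n", (edx & bit_CX8) ? "yes" : "no");
        return 0;
    }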

1
0
