55 posts • joined 19 Apr 2007
Re: Fishtanks are useless
My next-door neighbour would disagree - when on holiday his main stress factor is whether the fish in his tank are doing OK - and would love to get notifications on his mobile phone to confirm the fact!
Re: What OS for Apollo
Well the last one out was Domain/OS SR10.4 or 10.5, IIRC?
BTW there were rumours that HP had Domain/OS running on the 9000/700 series but chose not to release it....
Son of Clippy?
That Office Upload Centre dialog sounds like a chip off the old block...
Big thumbs up for CiviCRM.
It does take a while to get into, but as there is never enough time to do everything, it is better to invest the time that there is into high-level work that benefits the entire organisation than into one-by-one PC-shop jobs.
Some other things we have been using:
* email - Zimbra (not perfectly happy with it)
* files - OwnCloud (just getting into it; if it works as advertised it should be possible to sync the desktop and My Docs of each PC)
* server backup - BoxBackup
All of these things could run on a local server or in a data centre, depending on the size & sophistication of the organisation (and its Internet links!)
The biggest problem has been rollout, training, and subsequent hand-holding.
Sounds over-complicated and high-risk, I'm afraid...
There are some additional risks that others have not mentioned - mostly related to the fact that you are looking at a customer-specific solution. Firstly, even as a volunteer your time is valuable, and you are likely to spend many hours getting a setup based on creaky old desktops to work. Then there is the problem of the *sysadmin* becoming the single point of failure - i.e. it will be really hard for someone else to support a bespoke system that has been cunningly constructed to minimise costs, especially as it may be complex/creative and therefore time-consuming to document fully. And of course creaky old desktop hardware remains creaky old hardware.
This is a "Been there, done that, got the T-shirt" comment - I personally have made all the above mistakes (though not with VDI), and in the long run really regretted taking that approach.
If I were in your situation I'd look at replacing the desktops with three year old ex-corporate machines (Windows 7 licensed). If you can get Windows 7 licenses from CTX for a couple of pounds each, you could look for early Core2 machines that are contaminated with Vista licenses. The key thing here will be to get machines that are similar enough that you can support them with a single image. I think you have a fighting chance of finding *free* machines of this era - what you would have to do is upgrade the RAM and if possible the disk drive (which is the main performance bottle-neck in these systems). For older machines I'd also look at replacing the fans.
The standard-image desktop approach works well for us, as we can have a couple of machines on the shelf "ready to roll" so that if a machine dies when I'm not around, it can be swapped for a working one and no-one's work is affected (assuming they have played ball and kept their stuff on the server).
BTW for servers our entire setup is open-source, using DRBD for server-to-server replication (ask if you want more info). Servers for the users are virtualised using KVM, and virtual disks are mirrored by the VM hosts - so the users' servers don't need any funky configuration for data redundancy.
The challenge in scoping a technically complex system is that it is very hard for the business decision maker to understand the proposal. What has worked for me is to prepare 3 proposals - high road, middle road, low road - with a description of how they vary in cost and benefit (finding the "best cheap option" makes for a stimulating challenge).
Then it's up to the leaders to choose an option - and the compromises inherent in the chosen option become their responsibility, not that of the IT guys.
What is a killer (learned the hard way) is to try to do things as cheaply as possible, because that *will* cost time and ultimately need re-doing. However sometimes it is necessary because until the benefit of some new technology is perceived, the funds for a proper implementation will not be forthcoming. Probably what is key here is that the "quick cheap option" must be accompanied by clear written caveats expressed in business terms, e.g. "this system will support at most 10 users and the equipment will need to be replaced in 3 years' time".
This is OT, but I've just been through the pain of reloading my laptop - XP suffered fatal internal disintegration and was bluescreening. So I bit the bullet and installed a Win 8 upgrade. Cue several hours of faffing around with Classic shell etc. OK, finally all working... except for a USB to serial converter for PICAXE programming.
I'm completely with you on the distro confusion, but I followed the herd and went for Mint with MATE - simple install alongside Win8, everything worked out of the box. I've not needed to drop to the command line at any point. Although the intention was to only boot into Linux when wanting to play with the PICAXEs, I haven't bothered to go back to Windows.
Knowing a smidgen more about these things (maybe?)
A few weeks ago we bought a Nook HD for £119 at JL.
As an ereader it is a bit of a loss (I would definitely buy a different product, probably the Kobo one), but the "great enabler" for this device is that it can boot from SD.
The first step was to get the Google apps onto the Nookified Android. A 4GB microsd card is needed - instructions here:
The result is pretty good, the only problem is that some Play Store apps are listed as not available - e.g. Evernote had to be installed from the B&N shop thing.
However there is an even better plan - get a fast microSD card (e.g. UHS type) and install vanilla Android on it, run the thing from the SD card:
That's working really well for me, all apps install happily (Evernote and Skitch included) and the device has a nice hi-res screen that's bright enough to be usable in sunshine.
As a device it *does* have e-reader type limitations - no cameras, no GPS...
in due course there will be frustration at the reliability (or not) of those solutions, and training gaps, and the fact that different and incompatible solutions are used in various departments. Some bright spark will say "why don't we hire someone to take charge of this chaos". A couple of hiring cycles and a smidgen of empire building later, and the result will be indistinguishable from the much reviled IT dept of old.
And some groaning user will exclaim "there must be a way that involves less red tape"...
Re: The music industry: @Mark Honman
> On the other hand, the sound quality was dreadful, commercial pressings of the most atrocious quality
You must be a fellow South African, then... one of the big benefits of visiting the UK in the early 1990s was to buy some decent LPs. On the other hand classical LPs were of brilliant quality (N.O.T. pressed in SA, obviously) and any warps were my own fault, really. What used to drive me nuts was end-of-side distortion...
There is the whole quasi-religious thing you describe, and it's hard to even guess at how much that affects the perception of sound quality (or one might put it, the index of overall satisfaction gained from playing an album).
Re: The music industry: Still late for their own funeral
Although I'm something of a vinyl luddite, you still deserve an upvote.
But where are these cheap Chinese styli you speak of? Any self-respecting vluddite will spend as much on a stylus as the rest of the world spends on an iThing, and probably for the same reasons (an elevated life-form has descended to their plane and offers a tangible object thickly encrusted with magic pixie dust).
While like you I'd still tend to buy CDs for convenience and rippability, the weird thing with vinyl is that while objectively speaking it cannot match CD for sound quality, for most types of music the deficiencies of vinyl are less annoying than those of CD.
Re: PDP-8 any good?
Don't think so, I worked with PDP-8 derived HP1000s which had a totally different instruction set/register architecture to the PDP-11. Much as I loved the HP1000s, I have to admit that the PDP-11 had a much more elegant architecture.
Re: Why under?
The closer the vehicle is to the loop, the more efficient the energy transfer is.
What I want to know is why the electric trolley buses were done away with all those years ago...
Re: So it's partly the communications overhead and the synchronisation that's the problem.
You're thinking of Transputers. Which were certainly the right idea for CFD, not so much on account of the architecture but because of the good balance between compute and communication speeds, and especially the very low communication latency. Low latency meant that relatively little time was wasted hanging around at the end of an iteration, waiting for data to arrive from neighbouring processors. But they had their problems too - especially absence of any kind of global/broadcast communication.
I don't know how things have changed since those days, but at that time there was a tension between algorithmic efficiency and parallel processing - the more efficient CFD algorithms coupled cells across the whole domain, which was generally OK to parallelise on a shared-memory system but was no-go on a distributed-memory architecture where less efficient algorithms could be used.
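The compute/communication balance described above can be put in a toy model. None of the numbers below are real Transputer (or Sequoia) figures - they are invented purely to show why latency and the compute-to-communication ratio dominate scaling:

```python
def iteration_time(cells, flops_per_cell, flops_per_sec,
                   halo_bytes, bandwidth, latency):
    """One solver iteration: local compute plus a boundary (halo) exchange."""
    compute = cells * flops_per_cell / flops_per_sec
    comm = 0.0 if halo_bytes == 0 else latency + halo_bytes / bandwidth
    return compute + comm

# Serial: the whole 1M-cell domain on one node, no communication.
t1 = iteration_time(1_000_000, 100, 1e9, 0, 1e8, 0)

# 100 nodes: 1/100th of the cells each, plus a halo exchange per iteration.
tn = iteration_time(10_000, 100, 1e9, 40_000, 1e8, 5e-6)

speedup = t1 / tn            # ~71x rather than the ideal 100x
efficiency = speedup / 100   # the rest is time spent waiting for neighbours
```

Shrink the latency or fatten the compute per cell and the efficiency climbs back towards 1 - which is exactly the trick the Transputer's low-latency links pulled off.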
So... real kudos to the guys & gals, both system architects and software developers, who have pulled off the feat of building a system and a real-world application that scale across 10^6 processing elements.
Windows Phone 8 is free (beer) already... just about
Due to platform support payments from MS to Nokia, WP licenses for Nokia phones are effectively paid for by Microsoft. I'm not sure how that will change as the number of phones sold by Nokia goes up or down, but as Nokia produces the vast majority of WP phones, the average Windows phone has a free OS.
Once the tablet sales breakdown - Surface vs. the others - becomes available it will be possible to determine whether the average Windows RT tablet also has a free OS (i.e. MS hardware effectively has a free OS, even if it is paid for by internal funny money).
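The "average Windows phone has a free OS" claim is just a weighted average over shipments. A quick sketch with invented numbers (the real split and licence fee weren't public at the time):

```python
# Invented shipment figures and licence fee - for illustration only.
shipments = {"Nokia": 4_000_000, "others": 1_000_000}
licence_cost = {"Nokia": 0.0,     # offset by MS platform-support payments
                "others": 15.0}   # assumed per-unit WP licence fee

units = sum(shipments.values())
avg = sum(shipments[k] * licence_cost[k] for k in shipments) / units
# With Nokia at 80% of shipments, the average licence cost is 3.0 -
# i.e. the "average Windows phone" OS is mostly paid for by Microsoft.
```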
Re: Comment wisdom
The 20 lines bit comes from the terminals of old - that's how much you can see at one go (60 lines on a printout page).
I've got to agree with the "wrapper" type of function that you've described. I guess my opinion comes from having done a lot of maintenance, and the frustration of following the flow of control in OO code that passes from one inconsequential little method to another. It's a maze of twisty little methods, all alike.
On the other hand there were some shockers in the old days: 1000-line Fortran IV routines (don't get me started on COMMON blocks), and a supervisor who, having discovered the Pascal CASE statement, considered that it made IF... THEN... ELSE obsolete.
OO shockers seem to be small, e.g. a simple "pure" function (no state, global references or side effects) that was implemented as a method, and the caller of this 8-line function was expected to instantiate an object of this class, call the method, and then throw the object away.
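That shocker, sketched in Python (the names and the 20% rate are invented for illustration):

```python
# The anti-pattern: a stateless computation dressed up as a class.
class VatCalculator:
    def add_vat(self, net, rate=0.20):
        return net * (1 + rate)

# Every caller pays the ceremony tax: build an object, use it once, bin it.
gross = VatCalculator().add_vat(100.0)

# The old-fashioned way: a plain function, no object lifecycle at all.
def add_vat(net, rate=0.20):
    return net * (1 + rate)

gross = add_vat(100.0)  # same answer, less machinery
```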
In fact there's a pattern here... it's so easy, when one has discovered something new (COMMON blocks, CASE statements, object-orientation), to unthinkingly use this new shiny (or organisationally mandated tool/method) to solve every imaginable problem, including those that can be more effectively solved the old-fashioned way.
As another commenter said, that is a common factor in poor-quality code.
It's so long ago that I can't give credit where it is due, but the best approach to commenting that I've found is:
* If you need to comment a line of code, that code is obscure - do it differently (not always possible, as it may be a toolset restriction). Line-by-line comments should be exceptional and are there to say "here be dragons"
* every module and routine should have a comment header that explains its purpose, inputs, outputs and side-effects, plus an easy-to-read outline of the steps that occur in processing
* write the header comments before writing any code
The rationale is that line-by-line comments generally make code harder to read because one can't see the totality of a routine on-screen. On the other hand the header comments are an expression of the programmer's understanding of the requirements and allow one to clarify how the problem is going to be solved before getting into the nuts-and-bolts of language and toolset specifics.
BTW the old rule on function size probably still holds - if it is less than 20 lines, it's probably too small, more than 60 definitely too big (yes I know OO results in lots of dinky methods... but too much of that results in lousy readability as one has to skip from one routine to another to follow the logic).
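The header-first style might look like this in Python (the routine itself is invented, purely to show the shape of the header):

```python
def unpaid_invoices(invoices, payments):
    """Find invoices that have not been fully paid.

    Inputs:       invoices - list of (reference, amount) pairs
                  payments - list of (reference, amount) pairs
    Output:       list of references still owing money
    Side effects: none

    Outline:
      1. Total the payments per reference.
      2. Compare each invoice against the amount paid.
      3. Collect the references that fall short.
    """
    paid = {}
    for ref, amount in payments:
        paid[ref] = paid.get(ref, 0) + amount
    return [ref for ref, amount in invoices if paid.get(ref, 0) < amount]
```

The point is that the header can be written (and reviewed) before a line of the body exists, and a maintainer can read the outline without wading through the code.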
No time to refactor
and it was ever so....
One thing that works for me is to allocate some time for "while you're in there" refactoring when estimating development times. Then there is a bit of buffer when changes need to be made to some particularly manky old piece of code - by the time one has understood it it's not that much extra work to refactor it.
The biggest problems are that (a) I'm an optimist and (b) managers always try to compress the development schedule, so there isn't much in the way of buffer.
Hmm, if every family member gave me a Pi for Xmas that would be a decent start.
from the enquiring minds want to know (and are presently too lazy to read the blog) dept:
* can the graphics part of the Pi processor be used as a floating-point vector processor?
* what is the computation/communication performance ratio?
* Linpack performance?
I guess it will all become clear in time...!
Re: @Mark H
Android isn't free to OEMs who want to offer the Google stuff - I guess the store is particularly important here.
Probably still cheap compared to the official price for WP licenses.
What I'm getting at with that is that maybe WP8 is technically better than iOS or Android (as the MS fans on here would have it), but my guess is that the three will be very much of a muchness in terms of quality/stability.
I chose the OS/2 analogy because, back in the day, OS/2 was technically way ahead of the MS offerings, was well thought through, had the might of IBM (then the 800lb gorilla of the IT world) behind it, and still failed to gain traction in the market because Windows was "good enough" and already there (for a fairly small value of "good", admittedly). I wonder how OS/2 sales compared to Mac, though?
So even if WP8 turns out to be significantly superior to the competition, my guess is that the same combination of "good enough and everyone knows it" and "expensive comfort zone" competitors will not leave any room in the market for it.
But if WP8 is truly based on the traditional Windows kernel I have a horrible feeling that it will suffer from traditional Windows problems.
In the case of Nokia, Nokia are paying full price for the WP7 licenses and then getting some kind of "marketing support" from MS that is set up to magically counterbalance the cost of the licenses. So MS can report revenue on WP7 while hiding the subsidy as marketing expenses. (See Nokia financial results for a glimmer of how the deal works).
Win-win for both companies, MS gets shipments & revenue, Nokia gets free software. No wonder it was a no-brainer for Nokia to go for WP7 rather than Android.
A side-thought there - given the history of internal feuding at Nokia, it's very likely that any Nokia Android would have taken a lot longer to get to market than the WP7 phones - the hardware spec of the latter is so locked-down that it leaves no room for turf wars over what the hardware will be and how much of a classic Nokia personality it should have.
Ironically I think this time round WP8 is probably in the spot where OS/2 was in the PC operating system wars, with Android 2.x/4.x in the role of Windows 9x/NT respectively.
May well be positive if they are first going to put some effort into stabilising it
We have a lot of TB users here, and the main pain points are
* bugs that haven't been fixed for years
* the accursed upgrade treadmill confuses the users
* add-on functionality that really should be core
* the accursed upgrade treadmill breaks add-ons
IMO they should look at integrating the add-ons that provide collaboration functionality (Lightning, the IMAP permissions thing, Folder Account, etc.), fix the bugs, and NOT MESS WITH THE UI. After that, maintenance mode... it's a solved problem, no need to keep turd-polishing.
Ahem, please excuse the little scream of pain there.
Not forked, re-implemented
More accurately, the re-implementation is Apache's Harmony.
Re-implementation of Java is OK but you can't call it Java (see famous Sun-MS lawsuit).
Nicking other people's source code is wrong - and Oracle did find some test subsystem code that was present in Android distributions. That was removed as soon as it was brought to the Android people's attention, leaving the copyright focus of this case to be the Java API.
By definition a re-implemented API is going to look *very* much like the original (think SysV vs. BSD Unix header files) and it's hard to tell how much may have been copied vs. how much just has to be the same for that API to be useful.
Up til now nobody has thought it worthwhile to sue over APIs, so the rights and wrongs are not clearly established, but every dev who has worked on multiple platforms has probably seen the same basic API ingredients rehashed on each one - so probably not much hope for Oracle except where the APIs in question are uniquely Java-ish and not derived from something else.
Something like the time Apple sued Microsoft and HP over the appearance of Windows 3.1/NewWave - and lost, partly on the basis that Apple had themselves based their UI on Xerox's work.
Re: Misconceived idea of Lewis' work
It's sci-fi as written by a professor of ancient literature, i.e. quite bookish. The plot of 'Out of the Silent Planet' is not unlike 'Avatar' (i.e. greedy humans go and strip-mine an alien planet for its wealth), and similarly has a great deal on how the alien society works.
However the third book, 'That Hideous Strength' is a fantastic read with broad English caricatures - very funny with its observations of human nature and especially that of ladder-climbers in large organisations.
All three of these are definitely written from a Christian worldview.
Of Lewis' novels for adults, 'That Hideous Strength' is probably my favourite, but 'Till We Have Faces', while quite a difficult read, draws one into an other-world (based more on Greek mythology than anything else) which has the greatest emotional impact of any of his books.
Shannon's theorem applies to continuous signals
The sampling theorem applies to continuous signals - rather different from the complex shapes of musical notes, which have attack and decay: not just a bunch of frequencies, but frequencies contained within an 'envelope' shape.
There is also a practical problem with replay of digital audio streams, which is jitter - imperfections in the clock frequency of the digital stream.
Another lesser problem with signals resulting from D->A conversion is the sampling noise at frequencies above 22.05kHz - while inaudible to adults it may affect the performance of equipment that was designed on the assumption that the input signal contains only audible frequencies. Example: some metal-dome tweeters have resonances around 22-25 kHz.
But these three effects pale into insignificance when compared with the destruction wreaked by modern dynamic range compression.
Although the technical performance of CD is better than vinyl, the debate still rages - I think because the shortcomings of vinyl reproduction are less noticeable when concentrating on the music. Perhaps because static clicks are essentially random, and surface noise is concentrated in the lower frequency range and therefore less objectionable.
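To put a number on the Nyquist limit behind all this: sampled at 44.1 kHz, any tone above fs/2 = 22.05 kHz produces exactly the same sample values as an alias below it, which is why content above that limit has to be filtered out before sampling. A quick numerical check (frequencies chosen only for illustration):

```python
import math

fs = 44_100            # CD sample rate, Hz; Nyquist limit is fs/2 = 22.05 kHz
f_hi = 30_000          # a tone above the Nyquist limit
f_alias = fs - f_hi    # 14,100 Hz - the audible alias it folds down to

# At the 44.1 kHz sample instants, the two cosines are indistinguishable.
hi    = [math.cos(2 * math.pi * f_hi    * n / fs) for n in range(200)]
alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(200)]
assert max(abs(a - b) for a, b in zip(hi, alias)) < 1e-9
```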
the fuss is that up til now BT have not been taking responsibility for the problem or being pro-active about it. In Oct our FTTP modem failed, but we first had to go through a ritual with the ISP (PlusNet) and BT technicians who fiddled with wiring in the DP, thus changing an intermittent problem to a hard fault. Then, as our phone services are purchased through a 3rd party, we had to contact *them* to get BT to send a technician to fix the wiring - after which we were back to an intermittent broadband fault (well, more like intermittently working). Eventually we were lucky enough to be assigned a BT technician who was aware of the root cause and simply replaced the modem.
The whole thing took about 3 weeks from start to finish, and I'm very glad we had additional broadband links to fall back on (supplied by Be). It wasn't the nature of the fault, it was the run-around that was so galling - and we are now trialling a bonded Be service as a possible replacement.
Much better for the company and customers to send out a £20 router and save N site visits and unhappy customers.
Free remote wipe for Android 2.2 onwards. Also provides for wipe on SIM change, wipe on excessive "password" attempts.
As the other poster says, OpenVPN is at least in progress so AFAIK the only missing ingredient is local encryption.
yes, they can fail that often
out of 3 OCZ Onyx entry-level SSDs, we had one fail (total loss of data), had it replaced under warranty, and then had the replacement fail. The other two have been fine.
The bummer with SSD failures has been that each was a total loss of data - something that is very rare with 'real' disk drive failures.
Or there's the ZTE Skate
ZTE Skate (aka Orange Monte Carlo) has a nice 4.3 inch screen with an even better (and more useful) resolution of 800x480. Not as fast, of course - but fast enough.
Battery lasts 9 days with wi-fi off. Very surprised to see some posters claiming that Android needs regular reboots, app killing, etc... all that is alien to my experience & that of the Mrs (who has a ZTE Blade). I guess that must be astroturfing?
Another happy punter here - just got an ex-corporate Thinkpad with 1920x1200. It's 4 years old, but at £200 who's complaining? ... especially not after having fitted a 60GB SSD.
actually we flog it
SA does burn a lot of coal, but the best coal we export so that others can burn it... in excess of 60 million tons a year.
On the cheap
We're in the SMB bracket, and use a KVM cluster (using the Proxmox VE distribution).
Direct attached storage is used with 2-way disk replication (DRBD). So with a small number of VM hosts it is possible to get the speed advantage of direct attached disk, and still have live migration.
What we aren't able to do is have a pool of VM hosts that are effectively interchangeable.
Not virtualisation, I'm afraid...
The SysV and BSD syscalls were actually mapped to underlying Aegis calls - so it was really one OS with multiple personalities on the surface. And X was dreadfully inefficient when running in a DM window.
That said, it really was an amazing OS with particularly impressive features like the ability to host diskless clients having different CPU architectures (and hence executable files containing code for 2 different architectures). And the distributed networking stuff... and having 2 or more DM windows open on the same source file, all nicely synchronised...
@Sergie - computational error - reboot programmer
Uhh, the US price when converted to squids is 599 * 0.71 = £425 ish.
So the comparison is between 425 and 434 pre-VAT - not too bad. Just a pity that the pound has fallen from the days when a dollar cost only £0.50! The missing £9 is probably "forward cover", i.e. insurance against the pound getting worse.
You get what you pay for
Apple is an old-style company that sells vertically integrated systems (like old IBM, DEC, etc.).
Every company wants to "farm" its customers like a bunch of cattle, taking them for milking twice a day. Companies that sell a hardware-software system usually link everything to the hardware.
And it used to be that companies made or lost their reputation on the hardware.
Microsoft has no effective control over PC hardware, so it's important for their business model that there are no effective competitors in the software space... which is why they are so forceful in seeking a monopoly position.
With Xbox you can see that Microsoft employs a vertically integration model when it can - and it would be a lot more representative to compare Mac vs XBox in terms of openness.
The big difference between the two is that while Apple makes overpriced toys, Microsoft makes over-hyped buggy bloatware AND over the years has employed an astonishing number of illegal or at least highly unethical stunts to exclude competitors from the marketplace.
Microsoft tries to prevent other firms from making a business from selling the same kind of thing that they do... Apple tries to prevent others from entering their ecosystem. There's a world of difference.
No, I don't have a Mac (and probably never will) but I regularly recommend them to people who are incapable of being their own system administrator.
One of my lecturers once was involved in developing a compiler on Perkin Elmer hardware - not sure whether that was genuinely British or OEM'ed murrican hardware - circa 1983 I'd say.
Though I cut my teeth on HP minis, the VAXen were really very very good.
Need patriotic icons for long-gone computer systems.
Real life non-geek use
We have a non-geek (but smart) friend who ditched his smartphone in favor of an AA1 which he carries *everywhere*.
The neighbours' kids all want one for Christmas.
And the obligatory IT angle: the geek writing this is a bit clumsy and has put both a Clie PDA and the wife's AA1 through the involuntary drop-from-table-height test - of the two, the AA1 is the one still working.
Give the chipmakers a year and we'll be seeing Pentium-M performance levels from these toys, still at the £200 price point. Sweet... especially if one has 20" widescreens at home and office to plug into.
For those sufficiently well off, the future is something like that - carry your life around and plug in the usual peripherals wherever you stop. And a Mac Mini with a few terabytes of external storage at home, for doing the media stuff.
Keeping it simple
This emphasis on the command line does a disservice to Linux systems. As an old-timer I usually dive into the command line because it works consistently on just about any kind of Unix system (another big advantage of *nix).
However with the rise of netbooks the friends who just want to do a bit of internetting get the horrors when I explain the quick way of doing things.
So, any takers on how to do useful stuff on a netbook without command-line magic?
Here is the first step to AA1 bliss without the command-line
hit Alt-F2 to get a dialog in which you can specify the program you want to run.
Type xfce-setting-show and press Enter
Click on the Display icon and on the Advanced tab (I think... our AA1 has a bat flattery) tick "Show root menu on right-click".
You can now right-click on the AA1's desktop to get a menu of all installed programs.
From the System submenu, there is a Package Manager GUI that can be used to download and install programs.
Newly installed programs can be accessed from the right-click menu. Putting programs in the Acer menu is text editor stuff, unfortunately.
Other niggles with the article...
so what about no root login... that's so 20th century. Open a terminal window and type "sudo su" and the world's your ostrich.
SR10... sorry to say, much as I love Apollos I never noticed a virtualisation layer in there. But I'm ready to be educated, as there are so many technologies touted as "new" that I first encountered in the Apollo OS.
One I'm still waiting for someone to reinvent is the "just hook the new machine up to the network and it talks to all the others" idea.
And not to subvert the discussion into a totally Apollo-head direction:
* Which is the better, SR9.7 or 10.4?
* Anyone here seen the PA-RISC port of Domain/OS in operation?
(why is there no Apollo icon here?)
No guest OS??
Well, that would mean that all resource management would be handled by VMware... in which case it wouldn't be significantly different to a normal OS - especially as the guests would need some ways to communicate and share data.
In which case we would want a hypervisor to run multiple instances of VMware on the same hardware.
The other fallacy with this idea is that most of the big-beastie apps are either multithreaded or multiprogrammed - so inherently require some sort of task management. Not to mention the cloud of helper apps that usually accompany such a beastie - app specific data loading, backup, etc.
Actually it's funny to see this band-aid being applied to the axe wound of the Windows one-app-per-server mentality. That was certainly necessary in the days of Windows NT, but the sheer number of Windows boxes and accompanying CALs meant that at the end of the day the cost was similar to the single AS400/HP3000/Unix box they were supposed to replace.
So, in this modern day and age, is it still not possible to configure a Windows server to reliably fulfil more than one task? I ask this because we are happily running a linux box as a combined file/email/intranet server and we all know that proprietary software is supposed to be better.
I must admit that there are open-source apps out there which assume that they have a machine to themselves, or just take everything over anyway (Zimbra I'm looking at *you*). But that's the app rather than the platform.
you win some, you lose some
Was part of a team developing an in-house replacement for a wonky custom-built MES system. A "new MBA" was hired to lead the financial systems guys... and soon started poking his nose in everywhere. Didn't have a clue about technology, his rule of thumb was "if a corporation is big, their technology must be good". So he set about convincing the IT director that Windows was the way forward (this is in the days of NT 3.1/3.5).
Brought in some so-called consultant to look at everything and then say that "NT can do it". What a waste of productive time! This guy wasn't from the firm's regular fleet of consultants - he was cherry-picked to give the answers desired by Mr "new MBA".
Fortunately the software we were doing was going to use existing hardware (HP Unix boxes) - so just the operator stations went onto NT. Except they didn't - it couldn't cope with running Oracle Forms and an X server and two screens all at the same time.
So, often consultants are brought in to whitewash management's preconceptions.
There are also a lot of consultants that are a walking health risk - the dodginess of the system we were replacing could be directly traced to the inaccurate and incomplete specifications produced by consultants from a well known accounting firm that sounds like sanitaryware.
Stef Hits Nail On Head
"But the average person on the street shouldn't have to build their own machine just to run Vista.. ...Lets be clear: Vista Home is a piece of crap."
I've also had a specially built and tweaked Vista machine running sweetly. But it took just as much work to get into that state as Linux desktops were taking 5 years ago.
We now have a strange situation where "consumer" Linuxes like Ubuntu "just work" on most everyday machines, and the flagship Microsoft OS needs TLC for it to be usable.
I'd be interested to know whether anyone has bought an OEM machine (Dell, HP, whatever) where Vista runs wonderfully "out of the box" and hasn't needed any tweaking. If not, Microsoft is in really serious trouble.
War is wasteful
When there is an actual shooting war, it's amazing how much equipment goes to waste. If not destroyed by the enemy, the squaddies will put paid to it.
So I'm with Lewis on the "buy overseas" approach. Right now the UK has a war on its hands, and the goal should be to end it (preferably in victory) at the lowest cost in lives and treasure. That puts a priority on generous quantities of proven, low-cost equipment - not much of it will come back from Afghanistan whether the UK wins or loses.
Newly developed weapons are not going to make it out there in time to affect the outcome of the war - although the arms companies are using the war to justify development of new technologies.
It was George Orwell who said that the British always prepare for the last war. It does seem that too much effort is going into preparation for large-scale conventional wars (that is, apart from the small scale of the UK forces).
BTW I used to work in the tech side of the arms industry, and it is fun to work on military technology... but in our case (South Africa) it was the cheap and cheerful stuff that made the difference in the field.
pity about the client
Thanks to Windows update, the *client* computer can never have 99.9% uptime!
I love it when Windows decides to reboot overnight (I still haven't broken the Unix habit of leaving a bunch of programs running).
Obligatory disk crash story
Financials system goes down at a time of high pressure (month-end). HP engineer duly comes to attend the failed "washing machine" 400MB drive. Diagnoses likely head crash. Boss thinks maybe the disk pack is bad, persuades engineer to put backup pack in drive....
nope, looks like it was a head crash all along.
P.S. Last resort, let's see if there are any backup tapes that the operators bothered to verify....
P.P.S. Don't get me started on DN10000 X-bus terminators
Writing it isn't luck... getting rich off it is!
There are loads of brilliant products developed by gifted engineers that were before their time, or too late to make a difference in a crowded market.
That I think was Shuttleworth's point about being lucky to be rich - he had the right product at the right place and time... and the business sold to the right buyer at the right time, too.
Does having lots of money make your opinions valid?
Well, I'd argue a limited yes... if the money was made from developing a piece of software and then a business around it that was bought out by a megacorp for way too much money.
Then that person might have some useful opinions on software, megacorp acquisitions, and the ways in which they can go wrong.
Having got rich, that person might also have the humility to concede how much getting rich is basically down to luck.
Let's revive the Unix wars!
It'll make a change from PC and Mac kiddies bashing each other.
Now me, I'm in the Apollo Domain corner.
I wonder what it is that gives HP the technology dumping death-wish? Let's see... Apollo Domain, VMS, Alpha, PA-RISC... all brilliant technologies terminated before their time.
Particularly Alpha - that processor architecture was so far ahead of its competitors.
And there should be space on the HP cenotaph for HP's own slightly-bizarre-but-worked-well-in-practice HP1000s and HP3000s.
@ Don Mitchell
1) It's interesting that replacing NT with Win2K caused a significant decrease in memory parity errors.
2) Windows is so modular that graphics drivers have to run in Ring 0?
That registry corruption results in an unbootable system?
That you can't swap out a motherboard (to different chipset) and it will figure out the right drivers for itself?
That you can't run it without IE?
That until last year you couldn't run it without a GUI?
3) In 1990 Unix had not one but two RPC standards - ONC-RPC (originated by Sun) and DCE-RPC (originated by Sun's then-big competitor Apollo). At that time the best Microsoft could do was Windows 2.11 for '386, which certainly caused a lot of memory parity errors.
FWIW I consider both Windows and Linux to be second-tier operating systems and I'm waiting for the day that they are as stable as the HP and Apollo Unixes were in the early 90s. Sorry, I can't offer comment on the other leading Unixes - SunOS (and later Solaris), IRIX, and AIX - as I had no practical experience with those.
Comments - best on top
I wholeheartedly agree that comments mixed in with code are the stench of death (unless it is assembly code!!!).
If you need a comment to explain what a particular line of code is doing, it's very bad news for anyone who will be maintaining that code.
And in the days of 24-line terminals having comments interspersed with the code made it hard to get a routine onto one or two screens' worth of space.
*However* it is incredibly useful to have program heading comments, which can be laid out as follows:
2-4 line description of the routine's purpose in life.
description of the parameters - what they mean
if the routine is a bit hairy, a structured English (remember that?) description of the routine's logic.
It works really well to write all of this *before* beginning any coding - you have thought so much about what you want to do that the resulting code is high quality stuff. Not to mention that if you struggle to write the comments, the routine as designed is probably too complex & should be refactored *before* you start any coding.
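As a minimal sketch of that layout, here's what such a heading comment might look like in Python - the routine and all its names are hypothetical, purely to illustrate the purpose / parameters / structured-English structure described above:

```python
def allocate_payment(payment, open_invoices):
    """Apply a single payment across a customer's open invoices.

    Allocates the payment oldest-invoice-first until it is exhausted,
    returning the per-invoice allocations and any unapplied remainder.

    Parameters:
        payment       -- amount received (non-negative number)
        open_invoices -- list of (invoice_id, balance) tuples,
                         assumed sorted oldest first

    Logic (structured English):
        for each invoice, oldest first:
            apply the lesser of (remaining payment, invoice balance)
            reduce the remaining payment by the amount applied
        return the allocations and whatever payment is left over
    """
    remaining = payment
    allocations = []
    for invoice_id, balance in open_invoices:
        if remaining <= 0:
            break
        applied = min(remaining, balance)
        allocations.append((invoice_id, applied))
        remaining -= applied
    return allocations, remaining
```

Note that the code itself needs no interspersed comments - having written the structured-English logic first, the body is just a transcription of it, and the whole routine still fits on one screen.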
The modern idea of writing the tests before the code has a similar effect in making one think about the code you are about to write - but doesn't result in documentation of the "why" in the same way.
All in all, concise heading comments save more time than they cost... as long as one doesn't get too religious about them.