Yep, it's not just a river - it's a common name for the whole Amazon basin region, probably close to half the land area of South America.
Re: Are sanctions effective?
Sanctions against South Africa definitely resulted in the pullout from Angola. We could not compete with the Russian weapons systems used by the Angolans & Cubans.
I don't think sanctions contributed much to the end of apartheid. Rather, the end of the cold war meant that the superpowers were no longer interested in sponsoring our internal conflict. I would like to believe that Gorbachev's reforms in the USSR gave the SA government the idea that they too could open up; but I have no evidence!
Re: You do know that Moore’s law says nothing about speed?
Yup, it's more correct to say that the "golden era" of single-threaded computing is gone - a time when moving to the next process node would enable higher operating frequencies _and_ the doubled transistor count could be used for new architectural features - such as speculative execution - and improved performance through integration of functions that were previously off-chip.
Many of the architectural features that boost single-threaded performance are costly in area, and now that applications _must_ exploit parallelism to get improved performance there is a tipping point: if the applications scale well on highly parallel systems, then for a given chip size more system performance can be had from many simple cores than from a smaller number of more sophisticated cores.
That is, provided the interconnect and programming model are up to scratch!
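To make that tipping point concrete, here's a back-of-envelope sketch in Python. The scaling rules (per-core performance going as the square root of core area, Pollack's-rule style, combined with Amdahl's law) and all the numbers are illustrative assumptions, not measurements of any real chip:

```python
# Back-of-envelope: many simple cores vs. a few sophisticated ones on the
# same die. Assumes per-core performance ~ sqrt(core area) (Pollack's rule)
# and Amdahl's law for the application. Numbers are purely illustrative.

def throughput(die_area, core_area, parallel_fraction):
    """Relative throughput of a die filled with identical cores."""
    n_cores = die_area // core_area
    per_core = core_area ** 0.5        # Pollack's-rule approximation
    serial = 1 - parallel_fraction
    # Serial part runs on one core; parallel part runs on all of them.
    return 1 / (serial / per_core + parallel_fraction / (per_core * n_cores))

DIE = 64  # arbitrary area units

for frac in (0.5, 0.9, 0.99):
    simple = throughput(DIE, core_area=1, parallel_fraction=frac)    # 64 small cores
    complex_ = throughput(DIE, core_area=16, parallel_fraction=frac)  # 4 big cores
    winner = "many simple cores" if simple > complex_ else "few big cores"
    print(f"parallel fraction {frac}: {winner} win")
```

With these made-up numbers the big cores win until the application is very parallel indeed, which is exactly the "provided the programming model is up to scratch" caveat.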
Using the building analogy, it is possible to vary the construction along the way, as long as the building's foundations are sufficient. You can even add a second story if the original foundations are deep enough.
Extensions are a pain because their foundations have to be designed to tie in seamlessly with the original foundations.
In terms of software requirements, then, it's important to get the big picture of what the system might grow into, as a starting point for system design. That allows one to make appropriate platform and system architecture decisions that should prevent the system running into a brick wall as it grows.
There are 2 problems, of course:
The more capable the platform & more sophisticated the architecture, the longer it takes to get to something that stakeholders can see "turning its wheels".
Ultimately every successful system hits that brick wall where its requirements have outgrown the foundations. Succession planning is essential; we should be thinking about scheduling "moves to a new building" well before they become necessary.
What remains very difficult is making the case for the cost-effectiveness of a re-write: as a system's actual requirements diverge from the requirements on which its design was based, there is a crossover point where the cost of maintaining multiple levels of workaround becomes greater than the cost of a re-write.
Good to have Dominic back
"Neural Networks were a joke in the 1980s. I built one, for a given value of "built" since it never ever did anything useful or even go wrong in a particularly interesting way."
Some Transputer-using friends got a bit further than Dominic, then... they trained their neural net with photos of team members, using a wheelie bin as control. Despite their best efforts, the net never managed to distinguish the bin from the team's rugby-playing member.
Re: Prior Art
> The case is about associating payments directly with the meter number.
Yes, that is how STS works. IMO it is about the only sane way to handle credit transfer to prepayment meters.
STS was originally intended for developing-world markets, where many customers are illiterate. Typically the customer is given a magstripe card with the meter number on it. This is all that is needed at POS for the customer to make their purchase.
Re: Prior Art
Not just that, but in the STS specification for prepayment credit transfer the meter serial number is the unique ID of the meter. STS has been around since the mid-1990s.
As a side-note, one of the cute things about STS is that the serial number is part of the credit encryption scheme; therefore in a wireless system it is feasible to broadcast the credit tokens...
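A toy illustration of why that broadcasting trick works (this is NOT the real STS key management scheme - the names, key derivation and algorithm here are all invented): if each meter's key is derived from its serial number, a token broadcast to every meter only validates on the one it was issued for.

```python
# Invented sketch, not the STS specification: per-meter key derived from the
# serial number means broadcast tokens are harmless to every other meter.
import hashlib
import hmac

VENDING_SECRET = b"vending-system-master-key"   # invented for illustration

def meter_key(serial):
    """Derive a meter-specific key from the vending secret and serial number."""
    return hashlib.sha256(VENDING_SECRET + serial.encode()).digest()

def issue_token(serial, credit_units):
    """Produce a credit message plus a tag bound to one meter's key."""
    msg = str(credit_units).encode()
    return msg, hmac.new(meter_key(serial), msg, hashlib.sha256).digest()

def accept_token(serial, msg, tag):
    """A meter accepts only tokens tagged under its own derived key."""
    expected = hmac.new(meter_key(serial), msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg, tag = issue_token("04123456789", 50)
print(accept_token("04123456789", msg, tag))  # True  - the intended meter
print(accept_token("04999999999", msg, tag))  # False - any other meter
```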
Re: 100GFLops at what power?
Tom, sorry to say that those Adapteva numbers are "guaranteed never to exceed" ones; in this case even more so than usual because Adapteva didn't get enough good 64-core Epiphany-IV chips to fulfil the kickstarter orders.
102GFlops corresponds to all cores doing solely fused multiply-add operations, and ignores the problem of where the data is coming from (i.e. nothing like any real application or even a benchmark). On Parallella the DDR is attached to a Zynq ARM+FPGA hybrid meaning about 300MB/s maximum RAM bandwidth, and the Zynq uses about as much power as the Epiphany.
IIRC the 2W figure is for the 16-core Epiphany-III - but it is still a good GFlops/W figure.
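For what it's worth, the headline figure is just peak arithmetic: cores × flops per cycle × clock. Assuming an 800MHz clock (my assumption, chosen because it makes the arithmetic land on the quoted number):

```python
# Where a "guaranteed never to exceed" GFLOPS figure comes from:
# peak = cores x flops-per-cycle x clock. The 800 MHz clock is an assumption
# that makes the arithmetic match the quoted ~102 GFLOPS figure.
cores = 64
flops_per_cycle = 2      # one fused multiply-add counts as two flops
clock_hz = 800e6
peak_gflops = cores * flops_per_cycle * clock_hz / 1e9
print(peak_gflops)  # 102.4
```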
But Kudos to Adapteva for trying - I had high hopes for the Parallella when it came out, but to my surprise it led me into the world of FPGAs.
On the oil rig there will already be a centralised control system, and raw data feeds into that. Not an IoT scenario, I'm afraid.
The concept of very dumb "connected sensors" (Cortex-M0 at most!) is best for domestic use - with possible data concentration before upload to the processing centre.
By data concentration I mean elimination of no-change or insignificant-change data - massively reducing data volumes.
But that too is a unidirectional data flow, and security is best served by that edge box not being remotely accessible.
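A sketch of what I mean by data concentration: a simple deadband filter that only forwards readings representing a significant change. The threshold and sample values are invented for illustration:

```python
# Edge-side "data concentration": drop no-change and insignificant-change
# readings, massively reducing upload volume. Deadband and samples invented.

def concentrate(readings, deadband):
    """Yield only readings that differ from the last forwarded value by
    more than the deadband."""
    last = None
    for value in readings:
        if last is None or abs(value - last) > deadband:
            last = value
            yield value

samples = [20.0, 20.1, 20.0, 20.2, 23.5, 23.6, 23.4, 20.1]
print(list(concentrate(samples, deadband=1.0)))  # [20.0, 23.5, 20.1]
```

Eight readings in, three out - and a real sensor idling at a steady value would send almost nothing.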
IBM used to be the evil callous gorilla of the IT world; somewhere in the 1990s they realised their stranglehold was lost and the only way to stay in business was to build positive (as opposed to extortionate) relationships with their customers.
I hope & expect that Microsoft will do the same, probably retreating to being the world's leading provider of enterprise client computing software (yes, with the cloudy bits).
There are bits of the company that are showing signs of the new "give the customer what they want" behaviour, and I look forward to the "RS6000 moment" when Microsoft products can be recommended on their merits and without fear of lock-in.
[computing books] Re: apple wants to video about computer stuff ?
Another great computing book
"Dealers of Lightning" by Michael Hiltzik
And does anyone remember Tracy Kidder's "Soul of a new machine"?
On the deficit side, sometime in the early 1970s a book called "the Glitch" (IIRC) was published - this claimed that electronic circuits were frequency-limited by "glitches" that increase with frequency (probably referring to metastability) and that the computer industry was doomed, doomed...
Re: I'm a bit confused
> Microsoft has reached the point where optimizing and tuning is what they're focusing on.
Well I think that is what they *should* be focusing on.
Which is why massive changes in the user interaction model are not going down well with people.
They really don't need to do much: just provide UI continuity while, as you say, focusing on optimising and tuning what exists.
I would happily pay a subscription for ongoing OS support (security/technology upgrades) for that optimising and tuning.
In the 1990s I did some maintenance work on a process-control system (HP1000 minicomputers, all of 2MB main memory - running about 100 processes - amazing what one could do in the old days).
One of the original developers had passed into management, which was just as well because, while a very nice person, he had limited coding skills. He had done a lot of the Pascal code in the system, and had fallen in love with CASE - it was used everywhere, almost to the exclusion of IF, and the code was peppered with CASE constructs 100 lines long or more. In that system the assembly code was generally much better written, commented, and documented than the Pascal code.
One of the tasks in this project was to move to 50MB disk drives, which meant a different driver set & thus different OS (RTE) memory layout, requiring a complete rebuild of the system from source.
Except - one little time-difference calculation function was missing. So we (myself, another young'un, and the boss) looked at the code that was using it, reverse-engineered its functionality, and re-created it (all of 15 lines IIRC).
However... it was our bad luck that the program whose compilation had failed in the big rebuild was a G-code masterpiece, and when the system went live _every other_ program that used TDIFF was getting wrong date calculation results.
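To illustrate the failure mode (with invented details - this is not the real HP1000 TDIFF): a reconstruction can be "right" for the one caller you studied and wrong for everyone else, for example if the original returned seconds and the caller you examined happened to convert to days itself.

```python
# Invented illustration of a reverse-engineered time-difference routine that
# satisfies the one caller it was reconstructed from, and no-one else.
from datetime import datetime

def tdiff_original(a, b):
    """What the lost routine (hypothetically) did: difference in seconds."""
    return (a - b).total_seconds()

def tdiff_reconstructed(a, b):
    """What was (hypothetically) recreated from one caller that divided by
    86400 itself: difference in days."""
    return (a - b).total_seconds() / 86400

a = datetime(1995, 6, 2)
b = datetime(1995, 6, 1)
print(tdiff_original(a, b), tdiff_reconstructed(a, b))  # 86400.0 1.0
```

Both compile, both look plausible in isolation - only the callers you didn't read notice the difference.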
Re: Cores are cheap, it's how to use them...
Actor model - now that's interesting... from that brief description it looks like it's rather like the traditional RTOS message-passing model (now I will _really_ show my age - HP1000 RTE class I/O?).
Synchronous vs. asynchronous message passing is one of those wonderful design trade-offs. On the one hand the predictability of synchronous interactions makes it easier to reason about what is going on in parallel software, and the very simple rules make direct implementation in hardware feasible.
On the other hand in the real world one always seems to need a certain degree of decoupling between processes (easy enough to provide in a CSP environment via lightweight buffer processes, as long as the buffering depth is bounded).
The most useful form of buffering seems to be a depth of 1, on the data-producer side of the data path. Enough to release the producer to get on and produce more data, but avoids dynamic resource management overheads and still allows one to reason about where producer and consumer are in their respective control flows?
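In a language without CSP primitives, that depth-1 producer-side buffer can be approximated with a bounded queue - a Python sketch:

```python
# Depth-1 buffering on the producer side of a CSP-style channel: the
# producer can run exactly one item ahead of the consumer, with no
# unbounded (dynamically managed) buffering building up.
import queue
import threading

channel = queue.Queue(maxsize=1)   # the depth-1 buffer: put() blocks when full
results = []

def producer():
    for item in range(5):
        channel.put(item)          # blocks until the single slot is free
    channel.put(None)              # end-of-stream marker

def consumer():
    while True:
        item = channel.get()
        if item is None:
            break
        results.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 2, 3, 4]
```

Because the buffer is bounded at one, you can still reason about how far apart producer and consumer can ever be in their control flows.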
Re: Cores are cheap, it's how to use them...
> Transputer and Occam not worth mentioning in the context of efficient message-passing?
I wanted to emphasise that CSP is even more applicable now than 30 years ago. While Occam and the Transputer were so closely coupled that the CSP primitives had equivalents in the instruction set, the XMOS take on CSP is IMO very clever because it addresses the problems that limited adoption of the Transputer and Occam. Specifically, the computational performance advantage of a RISC architecture, and reluctance to write substantial software in new languages.
And £10 will get one an XMOS devkit to play with...
As for me, I'm very happy to be working for a "Transputer company" that _still_ has a modular multiprocessing vision; even more so that when we had a clear-out last year there were some TRAMs needing a new home...
Cores are cheap, it's how to use them...
Having taken the plunge into embedded software development, the unikernel concept doesn't seem very different from the standard embedded way of doing things.
Perhaps one difference is that in embedded-land specs are much "harder" and more clearly thought through, and testing is much more thorough than in web-service land.
However there is a better way, that (of course) was first developed in concept more than 25 years ago.
That is, a bunch of unikernels that are functionally separate and only interact by exchanging messages. This re-introduces isolation, both conceptual and physical, without corresponding overheads. The model is called Communicating Sequential Processes.
CSP also includes a concept of low-overhead fork/join and yes, works best with languages that are amenable to high-quality static analysis (so that resource requirements can be computed in advance). XMOS microcontrollers and their C-derivative language XC are the modern-day examples; how I wish they'd produce some proper _fast_ processors!
The key idea with the XMOS chips is that there is direct hardware support for all the CSP constructs, so message-passing overheads and task switching are very efficient.
Back in the land of normal processors these kind of facilities are not available; but cores are now so cheap that it is possible in concept to dedicate a core to managing each peripheral. I'm not thinking of the x86 architecture here, but devices like the lowrisc chip, Zynq MPSoC, and especially TI's DaVinci media processors.
What is still lacking is decent message-passing hardware in mainstream processors, so that hand-off of requests to lightweight coprocessors does not need to go through a memory-mapped interface (where the resource management handshaking always seems to get ugly).
Some more good reads
Robert Crisp: http://www.theguardian.com/sport/2013/mar/05/the-spin-bob-crisp-amazing-life
His books "Blazing Chariots" and "The Gods were Neutral" are brilliant reads.
And there's Deneys Reitz, who wrote "Commando" about fighting the Brits during the Boer War, and "Trekking On" about serving with them in WW1.
Real-life application: GbE smart switch
Have actually opened one up to change a fan, and noticed it had a Xilinx Spartan FPGA in it.
That's a good example of FPGA sweet-spot where you need significant bandwidth (16Gbps), some smarts (MAC filtering, VLANs, web GUI), updatability, and perhaps the ability for others to OEM the product with their own firmware branding and "secret sauce".
Re: TCP Offload?
Despite the cloudwashing, from the partner list this looks to be an embedded systems thing - so not much that can be offloaded.
It is probably more cost-effective to add another cheap processor core than integrate a high-performance NIC that can't do anything useful if there isn't TCP offload work going.
From embedded-land it is certainly interesting - the heavyweight TCP/IP processing in the Linux kernel was too much for one of our projects, and we had to use bare-metal software (and lwIP) to get full performance from the SoC's gigabit connection.
Re: No comprende
> Except when the device has to communicate its data elsewhere. Or talk to the smartphone app that controls it. Etc,
Those don't need complex layered protocols. If the functionality is simple, the protocol can be simple. e.g. a sensor application will be returning the same data over and over - so it can be as simple as encapsulating the binary data in a UDP packet & handing that over to the network hardware. (I won't go into the security questions here - the trade-offs involved would make for an essay - but suffice it to say that cheap microcontrollers like the XMEGAs have AES encryption support in hardware...).
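A minimal sketch of that UDP idea (the field layout, sensor ID and address are all invented for illustration, and no security layer is shown):

```python
# "Binary data in a UDP packet": pack a sensor ID, sequence number and one
# reading into a fixed little-endian layout and fire it off. Layout and
# destination are invented for illustration; no encryption shown.
import socket
import struct

def make_packet(sensor_id, seq, reading):
    # uint16 id + uint32 sequence + float32 reading -> 10-byte payload
    return struct.pack("<HIf", sensor_id, seq, reading)

payload = make_packet(sensor_id=7, seq=42, reading=21.5)
assert len(payload) == 10

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 9999))   # fire-and-forget; no listener needed
sock.close()
```

Ten bytes per reading versus hundreds for a verbose text protocol - which is the whole point on a battery-powered device.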
Bluetooth is handy for smartphone comms - and is increasingly integrated in microcontroller hardware and supporting bare-metal software stacks. e.g. Cypress PSoC BLE (which would be my favourite platform for hobbyist projects). So, don't need an OS for that.
lwIP is a free TCP/IP stack that can be integrated with an RTOS (e.g. FreeRTOS) or be used bare-metal. It doesn't force a requirement for an OS. There are also wi-fi modules available that allow one to offload the wi-fi and TCP stacks.
I'm still failing to understand why anyone would want to run a full-fat OS on an embedded sensor/controller device. Even a "concentrator" would do fine with a lightweight RTOS.
The point of an OS is sharing resources between functions; when the device is single-function there is no reason to have something to manage the hardware resources. And no need for complicated layered communication protocols when the device has a single function.
Having worked on bare-metal embedded products (including ethernet-connected) AND traditional Windows and Unix software, it's clear that each platform has its place. However hardware size and cost dominate all considerations (including development costs) in commodity embedded systems, and any processor that costs more than $2 (in volume) probably won't get a look-in.
I was going to end there, but there are 2 other dimensions to embedded computing that make Windows a non-starter.
First, support life - Microsoft just loves to hype up new tools, only to obsolete them a couple of years later. A device manufacturer wants to know that once the expensive development has been done, the product can be manufactured & sold for as many years as it remains competitive in the market.
Second, non-portability. Since the RPi design can be licensed it is feasible for device manufacturers to build the relevant bits of it into their designs; but the device manufacturer loses the negotiating power of being able to say "mr chip vendor, give us a better price or we build these million boards using someone else's processor". Every penny saved on components goes straight to the manufacturer's bottom line, and some firms have "value engineering" teams whose sole aim in life is to remove components from successful designs, or substitute cheaper alternatives.
To me, "success" is when the system is in use for 5+ years, "really good" is when it is successfully adapted to changing requirements during the maintenance phase.
And what's really nice is to hear via the grapevine that when the system is eventually replaced that it was due to changing fashions in technology and that the end-users want it back.
Projects that were delivered but could not be deployed due to business reasons (re-orgs, Marketing Menopause etc.) also count as successful.
Re: Everyone knows
There is also the precedent of classic SIMD machines such as the Connection Machines CM-2.
In its day it was always referred to as a 64K-processor machine (or maybe, to the pedants, 64K PEs), had one Weitek FPU per 32 processors, and being SIMD I'd assume there were no per-CPU instruction fetch/decode units.
Oh, and BTW each processor was 1 bit wide...
Agile works well when there is a single on-line system where the goalposts are always moving, and mistakes are not costly.
Waterfall is still the best for well-specced mission-critical systems or ones that cannot easily be upgraded.
And in between, where most systems fall, the best methodology is a blend: for example, use of prototypes (agile mode) to nail down requirements for a waterfall phase that is rapid and productive due to the elimination of unknowns. For each application there is an optimum point of change frequency and scope: customers are willing to wait a certain amount of time for changes in exchange for greater stability, a predictable release cycle, and the ability to schedule end-user training to avoid loss of productivity due to end-users having to work things out for themselves.
Stripping away the buzzwords, I'm highly entertained
As that's the way I, and most of my fellow South African devs, have always worked.
Although it should be more of a V than a T, i.e. it really helps to know a good deal about topics closely related to one's speciality. e.g. in my case it's the intersection of real-time and high-performance computing, but I can and will do GUI and database work if that helps get the product delivered.
It does make working in a multi-disciplinary team good fun, making problem-solving a collective exercise rather than a scrap about whose side of the wall has the root cause.
For as long as I've been developing software - 30 years now - there has been the problem of software projects being expensive and late. Over time the "industry" has tried to address this by providing frameworks of various kinds that are supposed to either take work off the developers' shoulders, allow more developers to tackle a given project, or allow cheaper (= less skilled) developers to be used.
That leads inevitably to a profusion of modules and interfaces (to prevent the multitude of developers tripping over each other), acres of crap code (due to hiring of "cheaper" staff), and frameworks that are themselves bloated as their feature-set grows to make them more all-encompassing.
And the net result is that the software is still late, but bloated and inefficient, wiping out the gains from Moore's law.
When you add to that the inevitable management push to deploy software before it is truly ready, and the massive organisational cost of end-user retraining (never mind hours lost due to bugs in the new software), the cost of an IT change is very much more than the development cost of a new system.
I should be the first to admit that I'm not at all keen to maintain someone else's manky old code. Even though when hunting for bugs in old code it feels that it would be so much easier to rewrite the whole thing from scratch than to get into the heads of the guys who wrote it, the reality is that by the time one is 2/3 of the way through a rewrite the "new" system has become too big for any of the team to grasp in its entirety, and the cycle is well on its way to repeating itself.
Re: Intern slave labour
Interns are not forced to work - it is their choice.
For my part, I benefited greatly from holiday jobs in industry that paid a token amount. It helped me decide what I wanted to do with my life, and when looking for real employment I could offer real experience and references.
Now that I'm an old fart, it has been my pleasure to have interns working with me and to develop their skills and see them move on to fulfilling paid employment.
Re: 3D XPoint is a new form of RAM, not SSD
Yes - if anything it reminds me of the merits of magnetic core, without the poor density and high power consumption.
The most interesting use, short-term, would be paging store. Systems already have a mechanism for paging stuff in and out of memory, but it's pretty useless these days as mechanical drives are too slow/thrashy and SSDs wear out.
If you view the paging mechanism as providing a RAM cache of the contents of the backing store (as if it were just a giant binary file that has been memory-mapped), the application is pretty obvious.
For HPC this means that the size of the data set can exceed the size of available RAM; and as RAM no longer limits the data set size, new RAM tradeoffs become possible. The amount of RAM can be chosen to suit the number of pages that are "hot" at any given time, thereby reducing cost/heat/size. The savings then become available for more processing power.
There is one element that would be a handy addition to the present virtual-memory model: a prefetch capability (analogous to that used in floating-point DSPs) that would extrapolate data-access patterns to identify data that should be brought into higher levels of the memory hierarchy before it is needed.
By "higher levels" I'm really thinking of processor caches rather than DRAM, which is pitifully slow in comparison to the performance of HPC compute engines.
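To illustrate the kind of extrapolation I mean, here's a toy stride predictor in Python. Real prefetchers live in hardware or the OS, not application code, but the principle is the same:

```python
# Toy access-pattern extrapolation for a paging prefetcher: watch the last
# few page numbers touched and, if they form a constant stride, predict the
# next pages to pull up the memory hierarchy. Purely illustrative.

def predict_next_pages(history, count=2):
    """If the last three accesses show a constant non-zero stride,
    extrapolate it; otherwise predict nothing (don't pollute the cache)."""
    if len(history) < 3:
        return []
    last3 = history[-3:]
    strides = [b - a for a, b in zip(last3, last3[1:])]
    if strides[0] == strides[1] and strides[0] != 0:
        stride = strides[0]
        return [history[-1] + stride * i for i in range(1, count + 1)]
    return []

print(predict_next_pages([100, 104, 108]))   # [112, 116]
print(predict_next_pages([100, 104, 107]))   # [] - no constant stride
```

This is essentially what the address generators in floating-point DSPs do for their on-chip memories, applied one level up.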
Re: Lone Wolf..
1. By the law of averages, at least some of the team will be either already competent or teachable.
2. So it is fair to say that the "toxic lone wolf" is one who hoards information and doesn't share it within the team, and actively works to prevent the others from getting anything done.
After all, if one really is the most competent person around, what harm is there in sharing information?
And the reality in a team is that people tend to be different, and each one has their own area of competence (or in the case of junior members, potential to become really good in one area or another).
Personally I've tried to stick it out in this kind of situation, hoping that the person in question would change, but it wore me down and in the end for my own sanity I had to quit.
Back when I were a lad, "IT" was mainframes and shadow IT was PCs and Sun workstations. Actually my first job was to look after some Apollo workstations that had been bought _after_ a run-in between a bunch of engineers and the corporate IT dept (corporate IT insisted on getting an IBM mainframe that was unsuited to number-crunching, and then charged royally for processor time).
It's one of the eternals of "the business" - IT depts need absolute control to run systems efficiently (= cheaply, really) but because it costs a lot of money to establish that much control, it only happens when technology is mature. So the empire of control is vulnerable to new technologies, especially from vendors who do not have a legacy money-spinner that is at risk of being killed by new & more nimble technology.
There's no point to whingeing about the situation; there is no solution, only mitigation. i.e. monitor the new technologies that are appearing and look at how they might be valuable to the business; and make a plan to work with early adopters to find out the best way of assimilating these technologies into the existing infrastructure. Resist the temptation to tell all the users to wait for the supposedly equivalent product from $INCUMBENT_BIG_VENDOR, for it will always be compromised to the advantage of their legacy products.
One of the biggest problems with "the thrill of the new" wasn't really mentioned in the article, namely the problem of divergence. e.g. the multiplicity of cloud storage offerings resulting in siloes of information and wasted end-user hours as users manually try to keep track of what information is where. That is probably the best reason of all for IT departments to engage with early adopters: at least get agreement on no more than two competing solutions that will be evaluated. Those early adopters can be your friends, as they have tremendous energy and power to influence other users.
Well it certainly was revolutionary - for quite a long list of reasons; but my favourite is that for once computer scientists and electronic engineers worked together to make a chip that was conceptually elegant and practical to use.
It was made by a smallish British semiconductor firm (Inmos), and they simply ran out of money to take the architecture further in the face of the onslaught from RISCs and Pentiums. And even though the Transputer made it easier to write and especially debug parallel software than any other architecture, the combination of Moore's law and superscalar processor architectures was enough to keep single-core performance advancing fast enough to keep everyone happy.
BTW Transputers found a niche in space applications, and especially the clever serial links have been standardised as SpaceWire.
The concepts live on in the XMOS multicore microcontrollers - for anyone wanting to play, XMOS offer a "startkit" for something like a tenner.
What I still find utterly beautiful is the predictable and low-latency communication model. And I'm _still_ waiting for someone to make a big bad floating point chip based on these principles, a T800 for the modern era.
Edit: back in the day we had a 40-Transputer box; saw one of those at TNMOC but not operating.
Re: Buy or shed? Gotta keep with the latest Wall St fashion
With both the FPGA behemoths (Altera & Xilinx) using ARM, that is certainly a threat to Intel. Not really the current Cortex-A9 dual-cores, which are just about good enough to run an OS, but the upcoming quad-core 64-bit ARM-based parts that also have a full spectrum of "proper" peripherals that would enable them to be used as the heart of a computer system.
Re: Altruism & Culture
> If you live in a desert you are overburdened with sunshine and you need to be very efficient to survive.
Pierre - you are right! I'm from Zululand, so associate sunshine with greenery and abundant tropical fruit - didn't think of deserts at all.
That said, the Sudanese are also very hospitable...
Altruism & Culture
Tim, you may have this one the wrong way around.
Many poorer societies (e.g. African, Latin American) are relationship- and group-oriented rather than task-oriented and individualistic.
In that kind of society there is an expectation that those who have an income will share it with less fortunate family or group members. That is surely a higher degree of altruism than in the "developed" world!
If the concept of "cutting off one's nose to spite one's face" is alien to the culture, it becomes perfectly acceptable to accept $1, because then another person is benefiting - and if you fall on hard times you know where to find someone who has a bit of money!
Something I've observed, but don't have the background to draw any conclusions from, is that these "warm south" cultures are quite entrepreneurial - lots of people making or selling things on a small scale. Not efficient, but if you live somewhere with plenty of sunshine you don't need to be efficient...
Back to the point - the $99/$1 split makes perfect sense in a "we" culture, where it would not be acceptable in a "me" culture. Perhaps the comparison is telling us more about negotiation styles than economics?
Re: What platform?
Not to mention that the protocols mentioned are verbose and text-processing intensive. Not what one wants in a sensor that should run on battery or harvested energy.
An earlier el Reg article (which I'm too lazy to look up) made a compelling argument for a 3-layer IoT.
1. Sensors and controllers continue to be low-power, using high-volume microcontrollers, bare-metal programming, and lightweight proprietary protocols.
2. A gateway product (embedded SoC with RTOS) provides the integration point at which sensor/controller functionality is exposed using standard protocols. This sits within the customer's network and supports direct access from customer devices (computing and mobile).
3. The "big cloud" is mostly useful for analytics or as a higher-level integration point. For example to forward traffic between the on-site sensor gateway and mobile devices when the user is off-site.
The key point made in that article is that bandwidth is more expensive than processing power; thus uploading untold millions of sensor readings to a data centre for hypothetical future data-crunching is neither cost-effective nor energy-efficient.
Big thumbs up for CiviCRM.
It does take a while to get into, but as there is never enough time to do everything it is better to invest the time that there is into high-level work that benefits the entire organisation than one-by-one PC-shop jobs.
Some other things we have been using:
* email - Zimbra (not perfectly happy with it)
* files - OwnCloud (just getting into it; if it works as advertised it should be possible to sync the desktop and My Documents of each PC)
* server backup - BoxBackup
All of these things could run on a local server or in a data centre, depending on the size & sophistication of the organisation (and its Internet links!)
The biggest problem has been rollout, training, and subsequent hand-holding.
Sounds over-complicated and high-risk, I'm afraid...
There are some additional risks that others have not mentioned - mostly related to the fact that you are looking at a customer-specific solution. Firstly, even as a volunteer your time is valuable, and you are likely to spend many hours getting a setup based on creaky old desktops to work. Then there is the problem of the *sysadmin* becoming the single point of failure - i.e. it will be really hard for someone else to support a bespoke system that has been cunningly constructed to minimise costs, especially as it may be complex/creative and therefore time-consuming to document fully. And of course the creaky old desktop hardware remains creaky old hardware.
This is a "Been there, done that, got the T-shirt" comment - I personally have made all the above mistakes (though not with VDI), and in the long run really regretted taking that approach.
If I were in your situation I'd look at replacing the desktops with three year old ex-corporate machines (Windows 7 licensed). If you can get Windows 7 licenses from CTX for a couple of pounds each, you could look for early Core2 machines that are contaminated with Vista licenses. The key thing here will be to get machines that are similar enough that you can support them with a single image. I think you have a fighting chance of finding *free* machines of this era - what you would have to do is upgrade the RAM and if possible the disk drive (which is the main performance bottle-neck in these systems). For older machines I'd also look at replacing the fans.
The standard-image desktop approach works well for us, as we can have a couple of machines on the shelf "ready to roll" so that if a machine dies when I'm not around, it can be swapped for a working one and no-one's work is affected (assuming they have played ball and kept their stuff on the server).
BTW for servers our entire setup is open-source, using DRBD for server-to-server replication (ask if you want more info). Servers for the users are virtualised using KVM, and virtual disks are mirrored by the VM hosts - so the users' servers don't need any funky configuration for data redundancy.
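For anyone wondering what the DRBD side of that looks like, a minimal two-node resource definition is along these lines - a sketch only, with made-up hostnames, devices and addresses (our real config obviously differs):

```
# /etc/drbd.d/r0.res - hypothetical two-node replicated volume
resource r0 {
  protocol C;                  # synchronous replication: writes complete on both nodes

  on server1 {
    device    /dev/drbd0;      # the replicated block device the VMs/host will use
    disk      /dev/sdb1;       # local backing storage
    address   192.168.0.1:7789;
    meta-disk internal;
  }
  on server2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}
```

Bring it up with `drbdadm up r0` on both nodes, then `drbdadm primary r0` on whichever node should serve the data; the other node holds a live block-level mirror.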
The challenge in scoping a technically complex system is that it is very hard for the business decision maker to understand the proposal. What has worked for me is to prepare 3 proposals - high road, middle road, low road - with a description of how they vary in cost and benefit (finding the "best cheap option" makes for a stimulating challenge).
Then it's up to the leaders to choose an option - and the compromises inherent in the chosen option become their responsibility, not that of the IT guys.
What is a killer (learned the hard way) is to try to do things as cheaply as possible, because that *will* cost time and ultimately need re-doing. However sometimes it is necessary because until the benefit of some new technology is perceived, the funds for a proper implementation will not be forthcoming. Probably what is key here is that the "quick cheap option" must be accompanied by clear written caveats expressed in business terms, e.g. "this system will support at most 10 users and the equipment will need to be replaced in 3 years' time".
This is OT, but I've just been through the pain of reloading my laptop - XP suffered fatal internal disintegration and was bluescreening. So I bit the bullet and installed a Win 8 upgrade. Cue several hours of faffing around with Classic shell etc. OK, finally all working... except for a USB to serial converter for PICAXE programming.
I'm completely with you on the distro confusion, but I followed the herd and went for Mint with MATE - simple install alongside Win8, everything worked out of the box. I've not needed to drop to the command line at any point. Although the intention was to only boot into Linux when wanting to play with the PICAXEs, I haven't bothered to go back to Windows.
Knowing a smidgen more about these things (maybe?)
A few weeks ago we bought a Nook HD for £119 at JL.
As an ereader it is a bit of a loss (I would definitely buy a different product, probably the Kobo one), but the "great enabler" for this device is that it can boot from SD.
The first step was to get the Google apps onto the Nookified Android. A 4GB microSD card is needed - instructions here:
The result is pretty good; the only problem is that some Play Store apps are listed as not available - e.g. Evernote had to be installed from the B&N shop thing.
However there is an even better plan - get a fast microSD card (e.g. UHS type) and install vanilla Android on it, run the thing from the SD card:
That's working really well for me - all apps install happily (Evernote and Skitch included) and the device has a nice hi-res screen that's bright enough to be usable in sunshine.
As a device it *does* have e-reader type limitations - no cameras, no GPS...
In due course there will be frustration at the reliability (or not) of those solutions, and training gaps, and the fact that different and incompatible solutions are used in various departments. Some bright spark will say "why don't we hire someone to take charge of this chaos?". A couple of hiring cycles and a smidgen of empire building later, and the result will be indistinguishable from the much reviled IT dept of old.
And some groaning user will exclaim "there must be a way that involves less red tape"...
Re: The music industry: @Mark Honman
> On the other hand, the sound quality was dreadful, commercial pressings of the most atrocious quality
You must be a fellow South African, then... one of the big benefits of visiting the UK in the early 1990s was to buy some decent LPs. On the other hand, classical LPs were of brilliant quality (N.O.T. pressed in SA, obviously) and any warps were my own fault, really. What used to drive me nuts was end-of-side distortion...
There is the whole quasi-religious thing you describe, and it's hard to even guess at how much that affects the perception of sound quality (or one might put it, the index of overall satisfaction gained from playing an album).
Re: The music industry: Still late for their own funeral
Although I'm something of a vinyl luddite, you still deserve an upvote.
But where are these cheap Chinese styli you speak of? Any self-respecting vluddite will spend as much on a stylus as the rest of the world spends on an iThing, and probably for the same reasons (an elevated life-form has descended to their plane and offers a tangible object thickly encrusted with magic pixie dust).
While like you I'd still tend to buy CDs for convenience and rippability, the weird thing with vinyl is that while objectively speaking it cannot match CD for sound quality, for most types of music the deficiencies of vinyl are less annoying than those of CD.