Brilliant insightful comment - el Reg should pay you to expand it into an article on the subject.
IBM used to be the evil callous gorilla of the IT world; somewhere in the 1990s they realised their stranglehold was lost and the only way to stay in business was to build positive (as opposed to extortionate) relationships with their customers.
I hope & expect that Microsoft will do the same, probably retreating to being the world's leading provider of enterprise client computing software (yes, with the cloudy bits).
There are bits of the company that are showing signs of the new "give the customer what they want" behaviour, and I look forward to the "RS6000 moment" when Microsoft products can be recommended on their merits and without fear of lock-in.
Another great computing book
"Dealers of Lightning" by Michael Hiltzik
And does anyone remember Tracy Kidder's "Soul of a new machine"?
On the deficit side, sometime in the early 1970s a book called "the Glitch" (IIRC) was published - this claimed that electronic circuits were frequency-limited by "glitches" that increase with frequency (probably referring to metastability) and that the computer industry was doomed, doomed...
> Microsoft has reached the point where optimizing and tuning is what they're focusing on.
Well I think that is what they *should* be focusing on.
Which is why massive changes in the user-interaction model are not going down well with people.
They really don't need to do much: just provide UI continuity while as you say, focusing on optimising and tuning what exists.
I would happily pay a subscription for ongoing OS support (security/technology upgrades) for that optimising and tuning.
In the 1990s I did some maintenance work on a process-control system (HP1000 minicomputers, all of 2MB main memory - running about 100 processes - amazing what one could do in the old days).
One of the original developers had moved up into management, which was just as well: while a very nice person, he had limited coding skills. He had done a lot of the Pascal code in the system, and had fallen in love with CASE - it was used everywhere, almost to the exclusion of IF, and the code was peppered with CASE constructs 100 lines long (or more!). In that system the assembly code was generally much better written, commented, and documented than the Pascal code.
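That CASE-happy style can usually be tamed with a dispatch table, so each arm of the giant CASE becomes a small, separately testable handler. A hypothetical sketch in Go (the message codes and handlers are invented for illustration):

```go
package main

import "fmt"

// Instead of one sprawling switch/CASE over message codes, map each
// code to a small handler function. Adding a message means adding a
// table entry, not growing a 100-line construct.

type handler func(payload string) string

var dispatch = map[int]handler{
	1: func(p string) string { return "started: " + p },
	2: func(p string) string { return "stopped: " + p },
	3: func(p string) string { return "alarm: " + p },
}

func handle(code int, payload string) string {
	h, ok := dispatch[code]
	if !ok {
		return "unknown message" // the old CASE's OTHERWISE arm
	}
	return h(payload)
}

func main() {
	fmt.Println(handle(3, "pump 2")) // alarm: pump 2
	fmt.Println(handle(9, "x"))      // unknown message
}
```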
One of the tasks in this project was to move to 50MB disk drives, which meant a different driver set & thus different OS (RTE) memory layout, requiring a complete rebuild of the system from source.
Except - one little time-difference calculation function (TDIFF) was missing. So we (myself, another young'un, and the boss) looked at the code that was using it, reverse-engineered its functionality, and re-created it (all of 15 lines IIRC).
However... it was our bad luck that the program whose compilation had failed during the big rebuild was a G-code masterpiece, and when the system went live _every other_ program that used TDIFF was getting wrong date calculation results.
Actor model - now that's interesting... from that brief description it looks like it's rather like the traditional RTOS message-passing model (now I will _really_ show my age - HP1000 RTE class I/O?).
Synchronous vs. asynchronous message passing is one of those wonderful design trade-offs. On the one hand the predictability of synchronous interactions makes it easier to reason about what is going on in parallel software, and the very simple rules make direct implementation in hardware feasible.
On the other hand in the real world one always seems to need a certain degree of decoupling between processes (easy enough to provide in a CSP environment via lightweight buffer processes, as long as the buffering depth is bounded).
The most useful form of buffering seems to be a depth of 1, on the data-producer side of the data path: enough to release the producer to get on and produce more data, but it avoids dynamic resource-management overheads and still allows one to reason about where producer and consumer are in their respective control flows.
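In Go, whose channels are descended from CSP, that depth-1 producer-side buffer is just a channel with capacity 1. A minimal sketch (the sum-of-squares workload is invented):

```go
package main

import "fmt"

// A buffered channel of depth 1 models CSP-style buffering on the
// producer side of the data path: the producer may run exactly one
// item ahead of the consumer, so some decoupling is gained while
// resource use stays bounded and known in advance.
func produceConsume(n int) int {
	ch := make(chan int, 1) // the depth-1 producer-side buffer

	go func() { // producer: freed to compute the next item immediately
		for i := 0; i < n; i++ {
			ch <- i * i // blocks only while the single slot is full
		}
		close(ch)
	}()

	sum := 0 // consumer drains the channel in the calling goroutine
	for v := range ch {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println("sum of squares 0..4:", produceConsume(5)) // 30
}
```

With capacity 0 the rendezvous would be fully synchronous; with capacity 1 the producer never waits for the consumer to be at its receive, yet there is still a static bound to reason about.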
> Transputer and Occam not worth mentioning in the context of efficient message-passing?
I wanted to emphasise that CSP is even more applicable now than 30 years ago. While Occam and the Transputer were so closely coupled that the CSP primitives had equivalents in the instruction set, the XMOS take on CSP is IMO very clever because it addresses the problems that limited adoption of the Transputer and Occam. Specifically, the computational performance advantage of a RISC architecture, and reluctance to write substantial software in new languages.
And £10 will get one an XMOS devkit to play with...
As for me, I'm very happy to be working for a "Transputer company" that _still_ has a modular multiprocessing vision; even more so that when we had a clear-out last year there were some TRAMs needing a new home...
Having taken the plunge into embedded software development, the unikernel concept doesn't seem very different from the standard embedded way of doing things.
Perhaps one difference is that in embedded-land, specs are much "harder" and more clearly thought through, and testing is much more thorough than in web-service land.
However there is a better way, that (of course) was first developed in concept more than 25 years ago.
That is, a bunch of unikernels that are functionally separate and only interact by exchanging messages. This re-introduces isolation, both conceptual and physical, without corresponding overheads. The model is called Communicating Sequential Processes.
CSP also includes a concept of low-overhead fork/join and yes, works best with languages that are amenable to high-quality static analysis (so that resource requirements can be computed in advance). XMOS microcontrollers and their C-derivative language XC are the modern-day examples; how I wish they'd produce some proper _fast_ processors!
The key idea with the XMOS chips is that there is direct hardware support for all the CSP constructs, so message-passing overheads and task switching are very efficient.
Back in the land of normal processors these kinds of facilities are not available; but cores are now so cheap that it is possible in concept to dedicate a core to managing each peripheral. I'm not thinking of the x86 architecture here, but devices like the lowrisc chip, Zynq MPSoC, and especially TI's DaVinci media processors.
What is still lacking is decent message-passing hardware in mainstream processors, so that hand-off of requests to lightweight coprocessors does not need to go through a memory-mapped interface (where the resource management handshaking always seems to get ugly).
Sorry, "Brazen Chariots"
Robert Crisp: http://www.theguardian.com/sport/2013/mar/05/the-spin-bob-crisp-amazing-life
His books "Blazing Chariots" and "The Gods were Neutral" are brilliant reads.
And there's Deneys Reitz, who wrote "Commando" about fighting the Brits during the Boer War, and "Trekking On" about serving with them in WW1.
Have actually opened one up to change a fan, and noticed it had a Xilinx Spartan FPGA in it.
That's a good example of an FPGA sweet spot, where you need significant bandwidth (16Gbps), some smarts (MAC filtering, VLANs, web GUI), updatability, and perhaps the ability for others to OEM the product with their own firmware branding and "secret sauce".
Despite the cloudwashing, from the partner list this looks to be an embedded systems thing - so not much that can be offloaded.
It is probably more cost-effective to add another cheap processor core than to integrate a high-performance NIC that can't do anything useful when there is no TCP offload work to do.
From embedded-land it is certainly interesting - the heavyweight TCP/IP processing in the Linux kernel was too much for one of our projects, and we had to use bare-metal software (and lwIP) to get full performance from the SoC's gigabit connection.
> Except when the device has to communicate its data elsewhere. Or talk to the smartphone app that controls it. Etc,
Those don't need complex layered protocols. If the functionality is simple, the protocol can be simple. e.g. a sensor application will be returning the same data over and over - so it can be as simple as encapsulating the binary data in a UDP packet and handing that over to the network hardware. (I won't go into the security question here; the trade-offs involved would make for an essay, but suffice it to say that cheap microcontrollers like the XMEGAs have AES encryption support in hardware...).
Bluetooth is handy for smartphone comms - and is increasingly integrated in microcontroller hardware and supporting bare-metal software stacks. e.g. Cypress PSoC BLE (which would be my favourite platform for hobbyist projects). So, don't need an OS for that.
lwIP is a free TCP/IP stack that can be integrated with an RTOS (e.g. FreeRTOS) or used bare-metal. It doesn't force a requirement for an OS. There are also wi-fi modules available that allow one to offload the wi-fi and TCP stacks.
I'm still failing to understand why anyone would want to run a full-fat OS on an embedded sensor/controller device. Even a "concentrator" would do fine with a lightweight RTOS.
The point of an OS is sharing resources between functions; when the device is single-function there is no reason to have something to manage the hardware resources. And no need for complicated layered communication protocols when the device has a single function.
Having worked on bare-metal embedded products (including ethernet-connected) AND traditional Windows and Unix software, it's clear that each platform has its place. However hardware size and cost dominate all considerations (including development costs) in commodity embedded systems, and any processor that costs more than $2 (in volume) probably won't get a look-in.
I was going to end there, but there are 2 other dimensions to embedded computing that make Windows a non-starter.
First, support life - Microsoft just loves to hype up new tools, only to obsolete them a couple of years later. A device manufacturer wants to know that once the expensive development has been done, the product can be manufactured & sold for as many years as it remains competitive in the market.
Second, non-portability. Since the RPi design can be licensed it is feasible for device manufacturers to build the relevant bits of it into their designs; but the device manufacturer loses the negotiating power of being able to say "mr chip vendor, give us a better price or we build these million boards using someone else's processor". Every penny saved on components goes straight to the manufacturer's bottom line, and some firms have "value engineering" teams whose sole aim in life is to remove components from successful designs, or substitute cheaper alternatives.
To me, "success" is when the system is in use for 5+ years, "really good" is when it is successfully adapted to changing requirements during the maintenance phase.
And what's really nice is to hear via the grapevine that when the system is eventually replaced that it was due to changing fashions in technology and that the end-users want it back.
Projects that were delivered but could not be deployed due to business reasons (re-orgs, Marketing Menopause etc.) also count as successful.
There is also the precedent of classic SIMD machines such as the Connection Machines CM-2.
In the day it was always referred to as a 64K processor machine (or maybe to the pedants, 64K PEs), had one Weitek FPU per 32 processors, and being SIMD I'd assume there were no per-CPU instruction fetch/decode units.
Oh, and BTW each processor was 1 bit wide...
Agile works well when there is a single on-line system where the goalposts are always moving, and mistakes are not costly.
Waterfall is still the best for well-specced mission-critical systems or ones that cannot easily be upgraded.
And in between, where most systems fall, the best methodology is a blend: for example, use of prototypes (agile mode) to nail down requirements for a waterfall phase, which is rapid and productive due to the elimination of unknowns. For each application there is an optimum point of change frequency and scope, where customers are willing to wait a certain amount of time for changes in exchange for greater stability, a predictable release cycle, and the ability to schedule end-user training to avoid the loss of productivity that comes from end-users having to work things out for themselves.
As that's the way I, and most of my fellow South African devs, have always worked.
Although it should be more of a V than a T, i.e. it really helps to know a good deal about topics closely related to one's speciality. e.g. in my case it's the intersection of real-time and high-performance computing, but I can and will do GUI and database work if that helps get the product delivered.
It does make working in a multi-disciplinary team good fun, making problem-solving a collective exercise rather than a scrap about whose side of the wall has the root cause.
For as long as I've been developing software - 30 years now - there has been the problem of software projects being expensive and late. Over time the "industry" has tried to address this by providing frameworks of various kinds that are supposed to either take work off the developers' shoulders, allow more developers to tackle a given project, or allow cheaper (= less skilled) developers to be used.
That leads inevitably to a profusion of modules and interfaces (to prevent the multitude of developers tripping over each other), acres of crap code (due to hiring of "cheaper" staff), and frameworks that are themselves bloated as their feature-set grows to make them more all-encompassing.
And the net result is that the software is still late, but bloated and inefficient, wiping out the gains from Moore's law.
When you add to that the inevitable management push to deploy software before it is truly ready, and the massive organisational cost of end-user retraining (never mind hours lost due to bugs in the new software), the cost of an IT change is very much more than the development cost of a new system.
I should be the first one to admit that I'm not at all keen to maintain someone else's manky old code. Even though when hunting for bugs in old code it feels that it would be so much easier to rewrite the whole thing from scratch than to get into the heads of the guys who wrote it, the reality is that by the time one is 2/3 of the way through a rewrite the "new" system has become too big for any of the team to grasp in its entirety, and the cycle is well on its way to repeating itself.
Interns are not forced to work - it is their choice.
For my part, I benefited greatly from holiday jobs in industry that paid a token amount. It helped me decide what I wanted to do with my life, and when looking for real employment I could offer real experience and references.
Now that I'm an old fart, it has been my pleasure to have interns working with me and to develop their skills and see them move on to fulfilling paid employment.
Yes - if anything it reminds me of the merits of magnetic core, without the poor density and high power consumption.
The most interesting use, short-term, would be paging store. Systems already have a mechanism for paging stuff in and out of memory, but it's pretty useless these days as mechanical drives are too slow/thrashy and SSDs wear out.
If you view the paging mechanism as providing a RAM cache of the contents of the backing store (as if it were just a giant binary file that has been memory-mapped), the application is pretty obvious.
For HPC this means that the size of the data set can exceed the size of available RAM; and as RAM no longer limits the data set size, new RAM tradeoffs become possible. The amount of RAM can be chosen to suit the number of pages that are "hot" at any given time, thereby reducing cost/heat/size. The savings then become available for more processing power.
There is one element that would be a handy addition to the present virtual-memory model: a prefetch capability (analogous to that used in floating-point DSPs) that would extrapolate data-access patterns to identify data that should be brought into higher levels of the memory hierarchy before it is needed.
By "higher levels" I'm really thinking of processor caches rather than DRAM, which is pitifully slow in comparison to the performance of HPC compute engines.
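The extrapolation idea can be sketched in software, though a real prefetcher would do this in hardware on cache-line addresses rather than indices. A toy Go model (the look-ahead depth of two is an arbitrary choice of mine):

```go
package main

import "fmt"

// A toy access-pattern extrapolator: watch the most recent accesses,
// and once a constant stride is confirmed, predict the next addresses
// so they could be pulled into a higher level of the memory hierarchy
// before they are actually needed.

type predictor struct {
	last   int // previously accessed index
	stride int // stride between the last two accesses
	seen   int // number of accesses observed so far
}

// access records an index and, if the stride is confirmed,
// returns the next two indices worth prefetching.
func (p *predictor) access(idx int) (predict []int) {
	s := idx - p.last
	if p.seen > 1 && s == p.stride {
		predict = []int{idx + s, idx + 2*s} // extrapolate two ahead
	}
	p.stride = s
	p.last = idx
	p.seen++
	return
}

func main() {
	p := &predictor{}
	for _, idx := range []int{0, 8, 16, 24} { // a constant stride of 8
		if next := p.access(idx); next != nil {
			fmt.Printf("after %d, prefetch %v\n", idx, next)
		}
	}
}
```

After two same-stride steps the pattern is trusted, so the accesses at 16 and 24 each trigger a prediction; a change of stride silently resets the confidence.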
1. By the law of averages, at least some of the team will be either already competent or teachable.
2. So it is fair to say that the "toxic lone wolf" is one who hoards information and doesn't share it within the team, and actively works to prevent the others from getting anything done.
After all, if one really is the most competent person around, what harm is there in sharing information?
And the reality in a team is that people tend to be different, and each one has their own area of competence (or in the case of junior members, potential to become really good in one area or another).
Personally I've tried to stick it out in this kind of situation, hoping that the person in question would change, but it wore me down and in the end for my own sanity I had to quit.
Back when I were a lad, "IT" was mainframes and shadow IT was PCs and Sun workstations. Actually my first job was to look after some Apollo workstations that had been bought _after_ a run-in between a bunch of engineers and the corporate IT dept (corporate IT insisted on getting an IBM mainframe that was unsuited to number-crunching, and then charged royally for processor time).
It's one of the eternals of "the business" - IT depts need absolute control to run systems efficiently (= cheaply, really) but because it costs a lot of money to establish that much control, it only happens when technology is mature. So the empire of control is vulnerable to new technologies, especially from vendors who do not have a legacy money-spinner that is at risk of being killed by new & more nimble technology.
There's no point to whingeing about the situation; there is no solution, only mitigation. i.e. monitor the new technologies that are appearing and look at how they might be valuable to the business; and make a plan to work with early adopters to find out the best way of assimilating these technologies into the existing infrastructure. Resist the temptation to tell all the users to wait for the supposedly equivalent product from $INCUMBENT_BIG_VENDOR, for it will always be compromised to the advantage of their legacy products.
One of the biggest problems with "the thrill of the new" wasn't really mentioned in the article, namely the problem of divergence. e.g. the multiplicity of cloud storage offerings resulting in siloes of information and wasted end-user hours as users manually try to keep track of what information is where. That is probably the best reason of all for IT departments to engage with early adopters: at least get agreement on no more than two competing solutions that will be evaluated. Those early adopters can be your friends, as they have tremendous energy and power to influence other users.
Well it certainly was revolutionary - for quite a long list of reasons; but my favourite is that for once computer scientists and electronic engineers worked together to make a chip that was conceptually elegant and practical to use.
It was made by a smallish British semiconductor firm (Inmos), and they simply ran out of money to take the architecture further in the face of the onslaught from RISCs and Pentiums. And even though the Transputer made it easier to write and especially debug parallel software than any other architecture, the combination of Moore's law and superscalar processor architectures was enough to keep single-core performance advancing fast enough to keep everyone happy.
BTW Transputers found a niche in space applications, and especially the clever serial links have been standardised as SpaceWire.
The concepts live on in the XMOS multicore microcontrollers - for anyone wanting to play, XMOS offer a "startkit" for something like a tenner.
What I still find utterly beautiful is the predictable and low-latency communication model. And I'm _still_ waiting for someone to make a big bad floating point chip based on these principles, a T800 for the modern era.
Edit: back in the day we had a 40-Transputer box; saw one of those at TNMOC but not operating.
With both the FPGA behemoths (Altera & Xilinx) using ARM, that is certainly a threat to Intel. Not really the current Cortex-A9 dual-cores, which are just about good enough to run an OS, but the upcoming quad-core 64-bit ARM-based parts that also have a full spectrum of "proper" peripherals that would enable them to be used as the heart of a computer system.
> If you live in a desert you are overburdened with sunshine and you need to be very efficient to survive.
Pierre - you are right! I'm from Zululand, so associate sunshine with greenery and abundant tropical fruit - didn't think of deserts at all.
That said, the Sudanese are also very hospitable...
Tim, you may have this one the wrong way around.
Many poorer societies (e.g. African, Latin American) are relationship- and group-oriented rather than task-oriented and individualistic.
In that kind of society there is an expectation that those who have an income will share it with less fortunate family or group members. That is surely a higher degree of altruism than in the "developed" world!
If the concept of "cutting off one's nose to spite one's face" is alien to the culture, it becomes perfectly acceptable to accept $1, because then another person is benefiting - and if you fall on hard times you know where to find someone who has a bit of money!
Something I've observed, but don't have the background to draw any conclusions from, is that these "warm south" cultures are quite entrepreneurial - lots of people making or selling things on a small scale. Not efficient, but if you live somewhere with plenty of sunshine you don't need to be efficient...
Back to the point - the $99/$1 split makes perfect sense in a "we" culture, where it would not be acceptable in a "me" culture. Perhaps the comparison is telling us more about negotiation styles than economics?
Not to mention that the protocols mentioned are verbose and text-processing intensive. Not what one wants in a sensor that should run on battery or harvested energy.
An earlier el Reg article (which I'm too lazy to look up) made a compelling argument for a 3-layer IoT.
1. Sensors and controllers continue to be low-power, using high-volume microcontrollers, bare-metal programming, and lightweight proprietary protocols.
2. A gateway product (embedded SoC with RTOS) provides the integration point at which sensor/controller functionality is exposed using standard protocols. This sits within the customer's network and supports direct access from customer devices (computing and mobile).
3. The "big cloud" is mostly useful for analytics or as a higher-level integration point. For example to forward traffic between the on-site sensor gateway and mobile devices when the user is off-site.
The key point made in that article is that bandwidth is more expensive than processing power; thus uploading untold millions of sensor readings to a data centre for hypothetical future data-crunching is neither cost-effective nor energy-efficient.
This is a leading question as I have some hardware pals who build this kind of stuff - how much would you think of spending on a 1U solution? The TI 66AK2H chips are pretty expensive on account of the 8 DSP cores - TI are asking about £1000 for their 66AK2H EVM.
My next-door neighbour would disagree - when on holiday his main stress factor is whether the fish in his tank are doing OK - and would love to get notifications on his mobile phone to confirm the fact!
Well the last one out was Domain/OS SR10.4 or 10.5, IIRC?
BTW there were rumours that HP had DomainOS running on the 9000/700 series but chose not to release it....
That Office Upload Centre dialog sounds like a chip off the old block...
Big thumbs up for CiviCRM.
It does take a while to get into, but as there is never enough time to do everything it is better to invest the time that there is into high-level work that benefits the entire organisation than one-by-one PC-shop jobs.
Some other things we have been using:
* email - Zimbra (not perfectly happy with it)
* files - OwnCloud (just getting into it; if it works as advertised it should be possible to sync the Desktop and My Documents folders of each PC)
* server backup - BoxBackup
All of these things could run on a local server or in a data centre, depending on the size & sophistication of the organisation (and its Internet links!)
The biggest problem has been rollout, training, and subsequent hand-holding.
There are some additional risks that others have not mentioned - mostly related to the fact that you are looking at a customer-specific solution. Firstly, even as a volunteer your time is valuable and you are likely to spend many hours getting a setup based on creaky old desktops to work. Then there is the problem of the *sysadmin* becoming the single point of failure - i.e. it will be really hard for someone else to support a bespoke system that has been cunningly constructed to minimise costs, especially as it may be complex/creative and therefore time-consuming to document fully. And of course the creaky old desktop hardware remains creaky old hardware.
This is a "Been there, done that, got the T-shirt" comment - I personally have made all the above mistakes (though not with VDI), and in the long run really regretted taking that approach.
If I were in your situation I'd look at replacing the desktops with three year old ex-corporate machines (Windows 7 licensed). If you can get Windows 7 licenses from CTX for a couple of pounds each, you could look for early Core2 machines that are contaminated with Vista licenses. The key thing here will be to get machines that are similar enough that you can support them with a single image. I think you have a fighting chance of finding *free* machines of this era - what you would have to do is upgrade the RAM and if possible the disk drive (which is the main performance bottle-neck in these systems). For older machines I'd also look at replacing the fans.
The standard-image desktop approach works well for us, as we can have a couple of machines on the shelf "ready to roll" so that if a machine dies when I'm not around, it can be swapped for a working one and no-one's work is affected (assuming they have played ball and kept their stuff on the server).
BTW for servers our entire setup is open-source, using DRBD for server-to-server replication (ask if you want more info). Servers for the users are virtualised using KVM, and virtual disks are mirrored by the VM hosts - so the users' servers don't need any funky configuration for data redundancy.
The challenge in scoping a technically complex system is that it is very hard for the business decision maker to understand the proposal. What has worked for me is to prepare 3 proposals - high road, middle road, low road - with a description of how they vary in cost and benefit (finding the "best cheap option" makes for a stimulating challenge).
Then it's up to the leaders to choose an option - and the compromises inherent in the chosen option become their responsibility, not that of the IT guys.
What is a killer (learned the hard way) is to try to do things as cheaply as possible, because that *will* cost time and ultimately need re-doing. However sometimes it is necessary because until the benefit of some new technology is perceived, the funds for a proper implementation will not be forthcoming. Probably what is key here is that the "quick cheap option" must be accompanied by clear written caveats expressed in business terms, e.g. "this system will support at most 10 users and the equipment will need to be replaced in 3 years' time".
This is OT, but I've just been through the pain of reloading my laptop - XP suffered fatal internal disintegration and was bluescreening. So I bit the bullet and installed a Win 8 upgrade. Cue several hours of faffing around with Classic shell etc. OK, finally all working... except for a USB to serial converter for PICAXE programming.
I'm completely with you on the distro confusion, but I followed the herd and went for Mint with MATE - simple install alongside Win8, everything worked out of the box. I've not needed to drop to the command line at any point. Although the intention was to only boot into Linux when wanting to play with the PICAXEs, I haven't bothered to go back to Windows.
A few weeks ago we bought a Nook HD for £119 at JL.
As an ereader it is a bit of a loss (I would definitely buy a different product, probably the Kobo one), but the "great enabler" for this device is that it can boot from SD.
The first step was to get the Google apps onto the Nookified Android. A 4GB microsd card is needed - instructions here:
The result is pretty good, the only problem is that some Play Store apps are listed as not available - e.g. Evernote had to be installed from the B&N shop thing.
However there is an even better plan - get a fast microSD card (e.g. UHS type) and install vanilla Android on it, run the thing from the SD card:
That's working really well for me; all apps install happily (Evernote and Skitch included) and the device has a nice hi-res screen that's bright enough to be usable in sunshine.
As a device it *does* have e-reader type limitations - no cameras, no GPS...
In due course there will be frustration at the reliability (or not) of those solutions, the training gaps, and the fact that different and incompatible solutions are used in various departments. Some bright spark will say "why don't we hire someone to take charge of this chaos". A couple of hiring cycles and a smidgen of empire-building later, the result will be indistinguishable from the much-reviled IT dept of old.
And some groaning user will exclaim "there must be a way that involves less red tape"...
> On the other hand, the sound quality was dreadful, commercial pressings of the most atrocious quality
You must be a fellow South African, then... one of the big benefits of visiting the UK in the early 1990s was to buy some decent LPs. On the other hand classical LPs were of brilliant quality (NOT pressed in SA, obviously) and any warps were my own fault, really. What used to drive me nuts was end-of-side distortion...
There is the whole quasi-religious thing you describe, and it's hard to even guess at how much that affects the perception of sound quality (or one might put it, the index of overall satisfaction gained from playing an album).
Although I'm something of a vinyl luddite, you still deserve an upvote.
But where are these cheap Chinese styli you speak of? Any self-respecting vluddite will spend as much on a stylus as the rest of the world spends on an iThing, and probably for the same reasons (an elevated life-form has descended to their plane and offers a tangible object thickly encrusted with magic pixie dust).
While like you I'd still tend to buy CDs for convenience and rippability, the weird thing with vinyl is that while objectively speaking it cannot match CD for sound quality, for most types of music the deficiencies of vinyl are less annoying than those of CD.
Don't think so, I worked with PDP-8 derived HP1000s which had a totally different instruction set/register architecture to the PDP-11. Much as I loved the HP1000s, I have to admit that the PDP-11 had a much more elegant architecture.
The closer the vehicle is to the loop, the more efficient the energy transfer is.
What I want to know is why the electric trolley buses were done away with all those years ago...
You're thinking of Transputers. Which were certainly the right idea for CFD, not so much on account of the architecture but because of the good balance between compute and communication speeds, and especially the very low communication latency. Low latency meant that relatively little time was wasted hanging around at the end of an iteration, waiting for data to arrive from neighbouring processors. But they had their problems too - especially absence of any kind of global/broadcast communication.
I don't know how things have changed since those days, but at that time there was a tension between algorithmic efficiency and parallel processing - the more efficient CFD algorithms coupled cells across the whole domain, which was generally OK to parallelise on a shared-memory system but was no-go on a distributed-memory architecture where less efficient algorithms could be used.
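The compute-vs-communication balance described above can be put into a back-of-envelope model. This is my own illustration with made-up numbers, not Transputer specs: per iteration, each processor computes its slab of cells and then waits for a halo exchange with its neighbours, so a large fixed latency eats the speedup as the per-processor slab shrinks.

```python
# Toy cost model for a 1D domain decomposition of a grid solver.
# All parameters are illustrative assumptions, not measured figures.

def iteration_time(cells, procs, t_cell, latency, halo_cells, t_per_halo_cell):
    """Per-iteration wall time: each processor computes its share of
    the cells, then exchanges halo cells with neighbours before the
    next iteration can start."""
    compute = (cells / procs) * t_cell
    exchange = latency + halo_cells * t_per_halo_cell
    return compute + exchange

def parallel_efficiency(cells, procs, t_cell, latency, halo_cells, t_per_halo_cell):
    """Speedup divided by processor count; 1.0 is perfect scaling."""
    serial = cells * t_cell
    parallel = iteration_time(cells, procs, t_cell,
                              latency, halo_cells, t_per_halo_cell)
    return serial / (procs * parallel)
```

With zero communication cost the efficiency stays at 1.0; crank the latency up and efficiency drops, which is exactly why the Transputer's low latency mattered more than raw architecture.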
So... real kudos to the guys & gals, both system architects and software developers, who have pulled off the feat of building a system and a real-world application that scale across 10^6 processing elements.
Due to platform support payments from MS to Nokia, WP licenses for Nokia phones are effectively paid for by Microsoft. I'm not sure how that will change as the number of phones sold by Nokia goes up or down, but as Nokia produces the vast majority of WP phones, the average Windows phone has a free OS.
Once the tablet sales breakdown - Surface vs. the others - becomes available it will be possible to determine whether the average Windows RT tablet also has a free OS (i.e. MS hardware effectively has a free OS, even if it is paid for by internal funny money).
The 20 lines bit comes from the terminals of old - that's how much you can see at one go (60 lines on a printout page).
I've got to agree with the "wrapper" type of function that you've described. I guess my opinion comes from having done a lot of maintenance, and the frustration of following the flow of control in OO code that passes from one inconsequential little method to another - a maze of twisty little methods, all alike.
On the other hand there were some shockers in the old days: 1000-line Fortran IV routines (don't get me started on COMMON blocks), and a supervisor who, having discovered the Pascal CASE statement, considered that it made IF...THEN...ELSE obsolete.
OO shockers seem to be small, e.g. a simple "pure" function (no state, global references or side effects) that was implemented as a method, and the caller of this 8-line function was expected to instantiate an object of this class, call the method, and then throw the object away.
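A hypothetical reconstruction of that shocker (my own example - the names and the distance calculation are invented, not from the original code) looks something like this:

```python
# The anti-pattern: a stateless helper forced into a class, so every
# caller has to build an object just to call one pure method.

class DistanceCalc:
    def distance(self, x1, y1, x2, y2):
        # note: no self.anything is used - this is a pure function in disguise
        return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# What every call site ends up doing: instantiate, call, throw away.
d = DistanceCalc().distance(0, 0, 3, 4)

# The old-fashioned fix: it's just a function.
def distance(x1, y1, x2, y2):
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
```

Same eight lines of logic either way; the class adds nothing but ceremony at every call site.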
In fact there's a pattern here... it's so easy when one has discovered something new (COMMON blocks, CASE statements, object-orientation) to unthinkingly use this new shiny (or organisationally mandated tool/method) to solve every imaginable problem, including those that can be more effectively solved the old-fashioned way.
As another commenter said, that is a common factor in poor-quality code.
It's so long ago that I can't give credit where it is due, but the best approach to commenting that I've found is:
* If you need to comment a line of code, that code is obscure - do it differently. (not always possible, as it may be a toolset restriction) - but line-by-line comments should be exceptional and are there to say "here be dragons"
* every module and routine should have a comment header that explains its purpose, inputs, outputs and side-effects, and an easy-to-read outline of the steps that occur in processing
* write the header comments before writing any code
The rationale is that line-by-line comments generally make code harder to read because one can't see the totality of a routine on-screen. On the other hand the header comments are an expression of the programmer's understanding of the requirements and allow one to clarify how the problem is going to be solved before getting into the nuts-and-bolts of language and toolset specifics.
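To make that concrete, here's a sketch of what such a header might look like. The routine itself (name, parameters, the rescaling task) is entirely hypothetical - the point is the header shape, written before any of the body code:

```python
def normalise_scores(scores, lo=0.0, hi=100.0):
    """
    Purpose : rescale raw scores linearly into the range [lo, hi].
    Inputs  : scores - non-empty list of numbers
              lo, hi - bounds of the target range
    Outputs : a new list, same length, rescaled
    Side effects : none (the input list is not modified)
    Outline : 1. find the min and max of the input
              2. if all inputs are equal, return lo for every entry
              3. otherwise map each value linearly onto [lo, hi]
    """
    smin, smax = min(scores), max(scores)
    if smax == smin:
        return [lo] * len(scores)
    span = smax - smin
    return [lo + (s - smin) * (hi - lo) / span for s in scores]
```

The body then needs no line-by-line comments at all - the whole routine fits on one screen, and the header carries the understanding of the requirement.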
BTW the old rule on function size probably still holds - if it is less than 20 lines, it's probably too small, more than 60 definitely too big (yes I know OO results in lots of dinky methods... but too much of that results in lousy readability as one has to skip from one routine to another to follow the logic).
and it was ever so....
One thing that works for me is to allocate some time for "while you're in there" refactoring when estimating development times. Then there is a bit of buffer when changes need to be made to some particularly manky old piece of code - by the time one has understood it, it's not that much extra work to refactor it.
The biggest problems are that (a) I'm an optimist and (b) managers always try to compress the development schedule, so there isn't much in the way of buffer.
Hmm, if every family member gave me a Pi for Xmas that would be a decent start.
from the enquiring minds want to know (and are presently too lazy to read the blog) dept:
* can the graphics part of the Pi processor be used as a floating-point vector processor?
* what is the computation/communication performance ratio?
* Linpack performance?
I guess it will all become clear in time...!
Android isn't free to OEMs who want to offer the Google stuff - I guess the store is particularly important here.
Probably still cheap compared to the official price for WP licenses.
What I'm getting at with that is that maybe WP8 is technically better than iOS or Android (as the MS fans on here would have it), but my guess is that the three will be very much of a muchness in terms of quality/stability.
I chose the OS/2 analogy because, back in the day, OS/2 was technically way ahead of the MS offerings, was well thought through, had the might of IBM (then the 800lb gorilla of the IT world) behind it, and still failed to gain traction in the market because Windows was "good enough" and already there (for a fairly small value of "good", admittedly). I wonder how OS/2 sales compared to Mac, though?
So even if WP8 turns out to be significantly superior to the competition, my guess is that the same combination of "good enough and everyone knows it" and "expensive comfort zone" competitors will not leave any room in the market for it.
But if WP8 is truly based on the traditional Windows kernel I have a horrible feeling that it will suffer from traditional Windows problems.
In the case of Nokia, Nokia are paying full price for the WP7 licenses and then getting some kind of "marketing support" from MS that is set up to magically counterbalance the cost of the licenses. So MS can report revenue on WP7 while hiding the subsidy as marketing expenses. (See Nokia financial results for a glimmer of how the deal works).
Win-win for both companies, MS gets shipments & revenue, Nokia gets free software. No wonder it was a no-brainer for Nokia to go for WP7 rather than Android.
A side-thought there - given the history of internal feuding at Nokia, it's very likely that any Nokia Android would have taken a lot longer to get to market than the WP7 phones - the hardware spec of the latter is so locked-down that it leaves no room for turf wars over what the hardware will be and how much of a classic Nokia personality it should have.
Ironically I think this time round WP8 is probably in the spot where OS/2 was in the PC operating system wars, with Android 2.x/4.x in the role of Windows 9x/NT respectively.