369 posts • joined 27 Dec 2008
Re: What? Dates and times still a problem?
In short, GMT is a time-zone, and UTC is a timekeeping system:
Err, no. GMT is the natural observed time as modified by the equation of time (hence the "mean", since it is averaged over the year to compensate for the Earth's elliptical orbit). UT1 is basically GMT with a slightly sharpened-up definition, because the original was a little vague by modern standards of precision.
UTC is the time as determined by atomic clocks and needs leap seconds inserted periodically to keep it within 0.9 seconds of UT1. For most everyday purposes you can consider GMT and UTC to be the same, but you don't get leap seconds in GMT.
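For a feel of how that 0.9-second rule plays out, here's a toy sketch in Python. The drift figure is purely my illustrative assumption; in reality DUT1 drift varies and the IERS announces leap seconds from observation, not from any fixed rule.

```python
# Toy model of leap-second scheduling: insert a second whenever a year's
# drift would push |UT1 - UTC| to 0.9 s or beyond. The ~0.7 s/year drift
# is an illustrative assumption, not a measured figure. Working in
# milliseconds keeps the arithmetic exact.

def schedule_leap_seconds(drift_ms_per_year, years):
    """Return the (1-based) years in which a leap second is inserted."""
    dut1_ms = 0                  # UT1 - UTC, in milliseconds
    inserted = []
    for year in range(1, years + 1):
        if abs(dut1_ms - drift_ms_per_year) >= 900:   # about to break 0.9 s
            dut1_ms += 1000                           # insert one leap second
            inserted.append(year)
        dut1_ms -= drift_ms_per_year                  # Earth lags atomic time
    return inserted

leap_years = schedule_leap_seconds(700, 10)
# seven insertions over the decade, and UTC never strays 0.9 s from UT1
```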
Re: 25% wasted
I might buy a DVD off the t'internet but no way would I consider banking with Bank of Uganda or buying a car from a website advert.
I don't think anyone is seriously suggesting that a big-ticket purchase such as a car is going to be made solely on the strength of a banner ad. However, the awareness it creates might push an option onto the short list when otherwise it wouldn't be there.
As a simple example I just typed "two seater convertible under £20,000" into Google. I got three adverts above the main results, mentioning the Mini Convertible, the Audi Cabriolet range and the Citroën DS3 Convertible. If I'm entering that specific search there's a very good chance I'm considering a purchase, even if it's only an idle daydream at that point. Getting those names into my mind is worth money to the advertiser, even if I never click through on those ads, since some kind of awareness of them is created as I conduct my research. After all, I can't buy a particular model of car if I don't know it exists.
Re: Only 44%?
Surely it must be more than that? The silent 44% are dummies set up solely to boost someone else's follower count and of that 56% who do tweet half exist purely for fake sycophantic replies and retweets to further "improve" the standing of the real user.
Twitter has always struck me as being an incredibly narcissistic premise to begin with, there must be a hell of a lot of self-flattery underlying their usage statistics.
Re: Read between the lines
This is a common mis-understanding - that the interests of the stockholders are determined on a quarter-by-quarter basis by only how high the stock price rises or how much can be paid out in dividends.
I never said that they were. Of course "shareholder value" is a vague concept and in theory at least allows for value to be attributed to non-monetary or even intangible benefits. However, the primary goal must always be to maximise those benefits to the shareholder, however they are judged. That means any spending on the greater good can only be justified to the extent there is an ultimate payback to the company or its shareholders, whether that be good PR, avoiding boycotts or improving staff retention. Directors don't have discretion to invest in the public (or indeed employee) good if there is no benefit to the company.
Re: Read between the lines
IBM remains committed to paying its employees and retirees as little as we can get away with so we can pay our shareholders more.
Well, yes. The primary duty of any company director is legally defined as to maximise shareholder value, after all it is the shareholders for whom they are ultimately working. Anything else - environmental friendliness, ethical practices, social goods or in this case employee compensation - can only be justified to the extent that it benefits the shareholder.
That isn't some particularly ruthless business attitude, it's the law. It has attracted plenty of criticism in the past for not allowing for companies set up in the pursuit of e.g. particular environmental goals, but that wouldn't cover IBM in any event. Like it or not IBM top brass are doing precisely what they are required to.
Re: This comment is totall Bullsh--!
Oh those hardware running with embedded chips? Not running XP - running XP embedded, which is STILL supported for free.
Don't believe it - there is an awful lot of oddball hardware out there doing things you'd never think of in a thousand years, and they could be running pretty much anything. One of my colleagues has an oscilloscope on his bench still running Windows 98. It's 15 years old so ancient by IT standards but only a bit past mid-life by instrumentation standards. Of course it's moved down from high-end to mid-range in that time (i.e. from the super-duper one-per-department scope to a personal bench scope) but it'll probably have another five years within the company and probably another decade at least in the hands of some amateur when it gets moved on. There's plenty of other examples, in fact I believe the later models of that very scope were indeed XP Pro powered.
It would have been a £10,000-£15,000 scope new, and probably even today it would be £1,500-£2,000 to replace - it's not the kind of thing you do on a whim without clear benefits.
Re: Linus for motivational speaker?
But coder's widget is fundamental to Linux and affects the kernel people fundamentally.
How do you get dis guy to fix his stuff?
Public shaming, if nothing else works ... ?
That isn't how open source works. The idea is that if you don't fix the bug someone else will. Too often of course the premise fails: perhaps 1% of the user base have the ability to fix the problem, and of those perhaps 1% have the time and inclination to do something about it. Before you dispute this, consider how many long-standing security bugs were recently found that can ultimately be traced back to the MIT X release. It didn't work there, did it?
The open source contract works both ways: essentially it reads as "Here is what I have done, knock yourself out with it". It doesn't mean "This is my baby, you must feed it, and if you don't have breasts you must grow them".
Don't get me wrong, I am generally pro-open source, but the quid pro quo is that no one has any duty to do anything, no matter how much you might like them to.
Re: minor nitpick correction
By line count it's roughly 2:1 in favour of C++ over C. By execution time C dominates because it's mainly at the lowest levels and innermost loops. How is describing the project as C/C++ incorrect?
With a track record of zero deaths or serious, permanent injuries since our vehicles went into production six years ago, there is no safer car on the road than a Tesla.
Am I the only one that instantly thinks of the claims made about Concorde prior to 2000? Concorde always proudly boasted a perfect safety record too, again if you overlook not-so-minor issues - fires for Tesla, rudders falling off for Concorde.
Of course in both cases the "record" is a statistical quirk of relatively small sample sizes and easily moved dramatically by a small number of events - in the case of Concorde of course a single crash was enough to revoke the airworthiness certificates of the entire fleet. Similarly it doesn't take a lot for Tesla's record even on those narrow criteria to go down the toilet.
I think you'll find that we don't have a constitution here...
Of course we do. The fact that it isn't a single document bound up and labelled "The Constitution" doesn't mean that it doesn't exist. Instead it is spread across myriad different acts and determined in part by historical practice and precedent. We've arguably had a constitution of some form going back at least as far as Magna Carta.
Go into any law library and you'll find shelves full of books on British constitutional law - that's an awful lot of coverage for something that doesn't exist.
I think the fact that 2 people took the time to mod parent down is the most damning piece of social commentary I've seen this year.
I've just given him another: look at what he said, think about it, and it is outright objectionable:
You could look at educating people into realising...
In other words, "This is what I think and therefore what all educated people think". It doesn't matter what the actual position is, the logic is dangerous and wide open to abuse. If the logic has any validity whatsoever we needn't bother educating those people, since the result is already known - we do whatever it is he wants.
That isn't a free democracy. It's a dictatorship ruled according to his own personal views.
UNIX is the one that has had remote management since its inception, which dates back to 1970 (probably earlier). Linux got it since it was born due to being a UNIX derivative as well. Windows had to have the remote management stuff added later, and even then it had to be changed at least once from the proprietary thing they had on NT4 and earlier to the LDAP/Kerberos5 thingy they made in Win2000.
That is rewriting history to a certain extent. The earliest Unix systems were strictly host based. Your only remote capability of any kind would have been by hooking a modem to a tty - hardly a feature of the operating system. The first Unix to include networking support was 4.2BSD in 1983, and it took until the late eighties for it to propagate around the various workstation/server variants. Xenix, which is where the volume was back then, got it in 1987, and even then it was an optional extra with additional licensing fees. That continued until the OS was phased out completely in the mid 90s.
Re: Run that by me again...
No, because light always travels at the same speed relative to the viewer. If space is expanding as it crosses then rather than the speed dropping the wavelength changes. So, if the galaxies appear to be 12b ly away today the light left them 12b years ago.
But it still takes longer because it has further to travel, and the expansion covers the ground already covered as well as that yet to cover. For a gross simplification reduce the continuous expansion to a single event: light is emitted from a source that is at that point 2 billion light years away. Halfway through its journey the Universe expands so that distance becomes 3 billion light years, meaning the light has another 1½ billion light years to travel. When it finally reaches us we see the object as it was 2½ billion years previously, even though the object is by then 3 billion light years away.
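To put numbers on that gross simplification, here's the same single-step toy model in a few lines of Python, working in billions of light years and billions of years so that c = 1 in those units:

```python
# Reproduce the single-step toy model above: light covers half the
# journey, the Universe then expands by 1.5x, and the light covers the
# stretched remainder at the same speed (c = 1 Gly/Gyr in these units).

initial_distance = 2.0                # Gly at emission
factor = 1.5                          # one-off expansion factor

travelled = initial_distance / 2                        # 1 Gly before the expansion
remaining = (initial_distance - travelled) * factor     # remainder stretched to 1.5 Gly
travel_time = travelled + remaining                     # total journey, in Gyr
final_distance = initial_distance * factor              # where the source is on arrival

# travel_time -> 2.5 Gyr, final_distance -> 3.0 Gly, as in the text
```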
"Shut up or piss off!"
Don't forget the big, bad, "I said NO!"
Re: Don't see how this helps any
presumably the proposed Nemesis is supposed to have a highly elliptical orbit? In which case - assuming it isn't currently at/near an extremum - the likelihood of spotting it might be higher...
Orbital mechanics have the effect that the closer a body is to whatever it is orbiting, the faster it travels, before slowing down as it moves further away again - if you think about it, it's simple conservation of energy, as kinetic energy is traded for potential and vice versa. The net effect is that arguing for a highly elliptical orbit pushes out the average distance at any given time quite considerably, since the body spends most of its time travelling slowly through the more distant part of its orbit, before quickly sweeping through the closer portion and returning to a greater distance.
In any event, for the theory to hold it almost needs the reverse: while the orbit doesn't have to be perfectly circular, it can't be highly eccentric. The projected orbit is huge - a radius of 1½ light years. The more eccentric it becomes, the further the hypothetical body moves away from the Sun at its outermost limit, and if it gets too far the Sun ceases to be gravitationally dominant. Even if in one particular pass it doesn't come close enough to anything else to be perturbed out of orbit, you would expect precession of the argument of perihelion over cosmological time, which would have the effect of flinging it out in a slightly different direction on each orbit. That increases the chances of an eccentrically orbiting body being lost forever.
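If you want to see the "spends most of its time far out" claim on paper, Kepler's equation gives a neat closed form for the fraction of the period spent beyond the semi-major axis distance: 1/2 + e/π. A quick numerical check in Python (my own sketch, nothing to do with the paper under discussion):

```python
import math

# From Kepler's equation M = E - e*sin(E) and r = a*(1 - e*cos(E)),
# the region r > a corresponds to eccentric anomaly E in (pi/2, 3pi/2),
# giving the closed form 1/2 + e/pi for the time fraction spent there.
# Verified here by integrating dM = (1 - e*cos(E)) dE numerically.

def outer_time_fraction(e, steps=100000):
    """Fraction of the orbital period with r > a (midpoint-rule integral)."""
    total = 0.0
    dE = math.pi / steps                      # integrate E from pi/2 to 3pi/2
    for i in range(steps):
        E = math.pi / 2 + (i + 0.5) * dE
        total += (1 - e * math.cos(E)) * dE   # dM = (1 - e cos E) dE
    return total / (2 * math.pi)

# at e = 0.9 the body spends ~79% of its period farther out than the
# semi-major axis distance; a circular orbit (e = 0) gives exactly 50%
```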
Re: Don't see how this helps any
Not at all. Small sample size and more noise than signal in the data suggests that the most likely cause was not some invisible gas giant that has now been observationally shown to not be within one light year.
Where's your evidence? As has already been pointed out, the very evidence you cite explicitly does not make that assertion but limits the distance to a much smaller region of space. Repeating your assertion doesn't make it true, it reveals a lack of basic comprehension.
The problem is small sample size and a lot of noise.
More data would be helpful but we have what we have. Even with that it meets the usual criteria to be regarded as statistically significant, i.e. less than a 5% chance of a random result. That doesn't rule out a freak chance result but it does rule out arguing statistical insignificance. Bear in mind that this pattern was noted before the hypothesis was advanced and independently of its proponents.
Yes, it's a theory that needs to be treated with a healthy degree of skepticism, but that doesn't mean you can simply make stuff up to "disprove" it. If the hypothesis wasn't taken seriously, why would the people running WISE take the trouble to even consider it? As Fiona pointed out, what do you know that these peer-reviewed scientists don't?
How about this for anecdotal evidence?
1) I ...
2) I ...
3) I ...
How is that not anecdotal?
How does it contradict our first stab at in-house testing? In 2008 we installed twelve 32GB drives in desktops from two different manufacturers. After two years 50% had failed. After four years 75% had failed. 100% failure was reached at 4 years 8 months. We are conducting a second experiment as we speak on admin workstations, but in the first twelve months the signs are not encouraging.
Our DBA's job is to protect the data at all costs. You don't have to convince me, you have to convince him.
Yeah SATA drives are cheaper but you don't get a dual ported interface or Data Integrity Field (DIF) 520 byte sector sizes. If you don't know why either of these might be important you are not qualified to comment.
It helps not one iota if the entire unit is bricked. I also think you'd be surprised just how much error checking and recovery is built into a standard hard drive even before RAIDing them - you never see the vast majority of read errors. In any case I don't think it's particularly relevant, since a lot of data storage bods are naturally very conservative and want to see evidence of long-term reliability over a couple of generations. SSDs have been around just about long enough to do that now, but the long-term figures from four or five year old units are far from encouraging.
Offer them the odd millisecond based on established and trusted technology and they'll take it. Offer them five milliseconds based on technology with an appalling reliability record less than a generation ago and they are much more circumspect. Anecdotal evidence of the "I've had an SSD in my desktop for eighteen months now and haven't had a hitch" is not going to sway them from that position.
You haven't really specified what you need in terms of number of disks/storage capacity.
To be honest I wanted to keep the emphasis squarely on remote management rather than get sidetracked into an endless list of more general server requirements. However, right now I'm looking at a little over 2.5TB of data, which in my book suggests 8TB of storage on day one - it doesn't make sense to provision for less than a couple of years' growth, at least at first.
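For what it's worth, the back-of-envelope behind that sizing looks something like this - the 25% annual growth rate is purely my assumption, plug in your own measured figure:

```python
# Capacity-planning sketch: how long does an 8TB box last starting from
# 2.5TB of data? The growth rate is an illustrative assumption.

def years_until_full(current_tb, capacity_tb, annual_growth):
    """Years of compound growth before current_tb exceeds capacity_tb."""
    years = 0
    while current_tb < capacity_tb:
        current_tb *= 1 + annual_growth
        years += 1
    return years

# at 25%/year, 2.5TB fills 8TB in about six years - comfortably more
# than the couple of years' headroom mentioned above
```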
However I suspect a HP ProLiant G7 N54L microserver may be a good choice as long as you don't need a lot of disks or a huge amount of cpu power. The HP iLO should meet all of your needs.
It's an interesting wild card option that initially looks very attractive until you start costing it against its likely lifespan. Server, remote access card and initial complement of drives is in the £500-£550 bracket. However, it's low spec and essentially fixed configuration bar memory and drives, so it's a write off after perhaps five years, and only four drive bays may mean junking sound drives early just to get capacity.
I don't rule out buying a pre-built server system but the sums have to add up: I was thinking more along the lines of a self-build with as many generic components as possible to maximize the scope for future upgrades. I'd expect to sweat such a system around 15 years, with additional or replacement drives as necessary and a replacement mobo/CPU around halfway through that life. I've costed that in the £650-£700 range up front. Sure, that's £150 more but a lot better spec to start with, much more upgradable in the future, and in a 4U rack enclosure (rack mount is nice but not essential) that allows up to 11 drives to be fitted. Those additional drive bays potentially save money by using pulls from primary systems as those get replaced, instead of having to fork out for new high-capacity drives each time you need a capacity bump.
Lights out management options
I'll start with a little context: this is my home network, where I run most machines (certainly the beefy ones) headless, either under the stairs or in the shed, so they are out of the way and the noise causes the least disturbance. Workstations are relatively thin clients in that they have some local processing on them, but are mainly used to remote in to another machine via X or RDP as appropriate.
The time has finally come to upgrade my backup server because it's out of disk space and can't be upgraded any further. That machine is a Sun T1-105, so it's certainly not the fastest machine around, but I've kept it going this long simply because the lights out management support has proved so convenient - I can power it up and down remotely, reboot it if it hangs, and easily netboot for a new OS installation, without ever having to go to it. In fact once you've physically installed it and connected it to power, network and console server you never need touch it again - you can take it from bare metal to a fully configured system remotely.
Your typical x86 system feels more like a toy than a "proper" computer in that respect. Depending on the OS you may be able to configure a serial console, but if you need to access the BIOS, install a new OS, or even if it simply gets wedged, you're going to have to physically interact with it and possibly hook up a monitor, keyboard etc. - basically a lot of hassle.
This is quite a bit away from my line of work, but it appears IPMI is the way to go here. I'm looking specifically at the Supermicro server boards - the octocore Atom ITX boards seem a good fit for the wider wishlist - and from the sketchy details I've managed to piece together they appear to offer IP-based KVM, remote power control and the ability to download a virtual USB boot image. That seems to tick all the boxes.
However, all this IPMI stuff is completely new to me. Will it do the kind of thing I want? Is there anything else I should be looking at? Bear in mind that this is a home system, and even that £300 CPU/mobo combo is enough to make me think "ouch". I'm certainly not going to drop £2,000 on a ready-made "server" machine. One final thing: my default OS is NetBSD unless there's a reason to run something else, so ideally the access software should run on that, essentially meaning either open source or possibly a Java application (preferably not an applet). I've seen what appears to be a Java-based KVM viewer which would fit the bill, but does anyone have thoughts on that side of things?
Re: CD... in DOS/CMD ect
As in go to the grandparent directory? IIRC that was a 4DOS extension; it never made it into MS-DOS, DR-DOS or CMD. If it's really that important to you, 4DOS is open source these days, although it appears to be dormant. Personally I don't think the saving over a ../.. is enough to be worth a third-party shell, although the other extensions may be - 4DOS batch files are infinitely more capable than their CMD equivalents, almost approaching Unix shells in terms of capability.
PPC hardware is still supported. By Linux...
68k Macs are still supported. By NetBSD...
Re: And people still don't think bitcoins are a scam?
A little bit more like the London stock Exchange or Nasdaq going bust and losing the record of who owns all those shares. Could happen and there is no Bank of England guarantee to refund all the shares you, or your pension fund, own.
No it couldn't. Listed companies are responsible for maintaining their own share registers.
Scrappies need to get with the plan and not pay up for scrap without a history. I'm not talking about Mrs Jones taking a fridge to the tip, this is for industrial waste which should have a license, supported by a chemical or molecular signature, all of which also mean more jobs for people who *want* to work.
Dealers already do as much as is practical: by law all purchases are photographed and ID is taken. They're generally a lot happier if they see some evidence you're a tradesman too if the amount or nature of the scrap isn't in keeping with a consumer. On the other hand, it's not realistic to ask for too much in the way of provenance since by definition the material is scrap with only residual value and often accumulated frequently but in small quantities.
For example my Dad is retired now but he was an electrician working on generators - they did a lot of work with very heavy gauge cables. If you have a metre-long offcut of say 300A cable (still quite small by their standards) that could easily be worth a couple of pints as scrap. He'd accumulate such offcuts in the course of his work and perhaps once a year take them to the merchant for perhaps £80, £90 or £100. Do you really expect him to be able to say "Oh yes, that particular one foot of cable is from cutting a run to length for a new installation at Coventry General Hospital"?
I don't understand why the wheels would sustain less damage when moving in reverse. Can anyone suggest reasons?
The rover's wheels are heavily ribbed, presumably to improve the level of grip. You can see from the tracks it leaves behind that those ribs do cut into the surface. It's possible that they have an asymmetric profile - e.g. more sawtooth-shaped than straight up and down - to cut into the surface and provide a positive key. On the other hand, if that level of grip isn't needed (or you can switch direction again if you get stuck), running the "saw" backwards may well allow smoother operation and avoid relentlessly cutting into the surface when it isn't needed.
No actual evidence to support that hypothesis but it seems eminently plausible.
Re: Let's hope this actually filters through
Out of curiosity, how do you expect them to use your CV to find work for you if they cannot access it?
By reading it perhaps? An encrypted PDF is still perfectly readable and printable, it just defeats the automatic scrapers.
This isn't imagined - if you read my CV you'd get a clear idea of the kind of person I am and the roles I might be interested in. You'd see that I'm primarily an embedded/systems C programmer with major sidelines in hardware design and network protocols and infrastructure. However, like anyone with a little experience I have the usual long tail of countless other odd skills developed to varying degrees well away from my main areas of expertise.
One of them is PHP, where I state I've done a few database front ends from time to time. It's clear from my CV that's just a casual ability rather than something developed and honed full time for many years. You wouldn't consider me for a PHP developer role; rather you'd consider it something in reserve for those odd little jobs that crop up from time to time.
Why, then, am I still constantly bombarded with mail for PHP developer roles from companies and agencies I've never heard of, located at the other end of the country? Anybody could see straight away that I wouldn't be interested; indeed I probably wouldn't be worth considering for such a role even if I wanted it. The problem is that no one has looked at it - instead there's been a spectacularly dumb keyword match.
If they've got my details, who else has? There's a difference between sending a specific person or company a copy of your CV in application for a vacancy, or even on spec, and it being automatically scraped and keyword-scanned by multiple agencies where it can be accessed by pretty much anyone with little to no checking of credentials first. A CV is an identity fraudster's wet dream and I'd rather keep mine well away from their grubby little mitts.
Let's hope this actually filters through
I was unemployed for a few weeks late last year and signed on. The Jobcentre insists you use its website for jobsearching, and it allowed you to upload your CV in Word format only. A suggestion that they'd therefore want to stump up a few grand for a machine to run Windows, copies of Windows and Office, and consultancy to secure the machine and keep it secure (why should I bother with that?) didn't get very far.
They then grudgingly allowed you to upload PDFs as well. However, if you try to upload an encrypted PDF it immediately claims the file is corrupt.
In other words, we insist we must be able to scrape your data. Any agency advertising through us must also be able to scrape your data and bombard you with keyword-matched but obviously inappropriate job suggestions. Forget any notion of anyone reading your CV and if you want to keep your personal information safe don't apply through us. It's not like we're here to help you find work after all.
PDF is a solution to a problem that was solved over a decade ago with Open Office and digital printers from home to industrial becoming able to accept any file type.
Over a decade ago? Wow. And PDF dates from what, 1993? Over two decades then. What was the unnecessary solution then?
PDF still fills a role. The real world still needs paper documents even if they are distributed electronically. PDF isn't perfect but is readable much more widely than ODF, is read only and can be encrypted and made tamper-proof very easily. Those are key attributes for a lot of scenarios away from this "HTML (or whatever) is good enough for everyone" la-la land.
Re: The title is too long.
If it is even a remote concern, a fix doesn't need to be complex, since there's effectively only a single point to heat. A few turns of nichrome wire or even a couple of resistors would do the job nicely. See e.g. http://www.skyandtelescope.com/howto/diy/3304231.html for a practical example of solving the same issue in a slightly different context.
You've got a plastic bezel around the lens which decreases efficiency, but of course the lens is far smaller to start with, so pluck a figure of 1W-ish power out of the air based on that article. From the Pi's 5V supply that'd be a couple of ½W 47R resistors wired in parallel, one each side of the lens assembly. However, the current draw is an additional 213mA, which may need to be accounted for.
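A quick sanity check of that arithmetic, if anyone wants to play with the values - note the per-resistor dissipation actually comes out a whisker over a half-watt rating, so in practice you'd want to derate or go slightly higher in resistance:

```python
# Check the dew-heater arithmetic: two 47 ohm resistors in parallel
# across the Pi's 5 V rail, one each side of the lens assembly.

v = 5.0
r = 47.0
per_resistor_w = v ** 2 / r      # ~0.53 W each: fractionally over a
                                 # half-watt rating, so derate (e.g. 56R)
total_w = 2 * per_resistor_w     # ~1.06 W of heating - the 1W-ish target
total_a = 2 * v / r              # ~0.213 A of extra draw on the supply
```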
Re: Margin of error / confidence intervals...
although here the statistics are a tad more fancy
They're not though. Hard drive failure rates follow the common "bathtub" curve - you get a comparatively high rate of infant mortality as the drives that weren't quite right to begin with fail. Then you get a long steady period of fairly low failure rates (the "bottom" of the bath) before it rises again as the sound drives wear out.
They admit their figures are for younger enterprise drives than consumer units, so both sets have gone through the early infant-mortality phase, but the consumer units have had longer to travel along the "bottom" to get their failure rates (in failures/year terms) down. I actually find the fact that the two figures are so close despite this gross distortion reassuring that it is worth spending the money on the pricier drives.
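For the curious, the bathtub shape is easy to sketch as the sum of a falling hazard, a flat random-failure rate and a rising hazard. All the parameters below are illustrative assumptions of mine, not fitted to anyone's actual drive data:

```python
import math

# The "bathtub" failure-rate curve as the sum of a decreasing Weibull
# hazard (infant mortality), a constant random-failure rate (the bottom
# of the bath) and an increasing Weibull hazard (wear-out). Time is in
# years; every parameter here is an illustrative assumption.

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub(t):
    infant = weibull_hazard(t, beta=0.5, eta=1.0)    # falling early-life rate
    random_rate = 0.02                               # flat mid-life rate
    wearout = weibull_hazard(t, beta=5.0, eta=8.0)   # rising wear-out rate
    return infant + random_rate + wearout

# high in the first months, low through mid-life, climbing again as the
# sound drives start to wear out around year eight
```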
Re: We expect you to be able to do arithmetic...
That knife cuts both ways. 0.1% of $160 is still 16¢. Thus a transposition typo of an intermediate figure rather than an arithmetic fail of the end result.
Re: [B]asic maths and literacy...
contrast a late sixties 'O' level (others are available but this was the first I found):
with a current GCSE, designed for innumerates to pass:
That isn't a valid comparison. GCSEs are split into three different level papers, which candidates are entered for based on their estimated grade. This allows for more testing at or around the level of ability of each candidate. The foundation level paper you cite is for students expected to get no more than an E (a D is doable but needs a very high mark). The intermediate level above that is aimed at C/D students, and the higher paper above that at B or above. Thus only the higher level paper is even intended to be an O level equivalent. The paper you cite isn't even CSE standard.
Oddly enough it was Furber's depiction I found the least convincing, although I suspect that's simply because I know him from when I was at Uni. In the film his portrayal had a bit of the stereotypical reserved geek quality about it, and while admittedly this was 15 years later, I never found him like that at all. On the contrary, he's probably one of the most naturally gregarious people I've ever met.
You could ask him a question at the end of a lecture and it was almost as if you were his new best friend. He'd start by asking you a question or two to clarify precisely what you were thinking, think about it for a second and then point out what the error was, suggest systems that did what you suggested, or speculate as to the outcome of following that direction. You always got the impression that he was enjoying the conversation. Not like most lecturers, who respond with an unstated but readily apparent "It's this, it's obvious, stop wasting my time with your undergrad stuff and let me get on with my research".
The PCW had a 23k bitmapped display that was arranged in a very specific fashion to facilitate the fast display of text at the expense of graphics capabilities. Even drawing something as simple as a pie chart was a programming chore.
When the first game turned up (Batman) the designers at Amstrad were said to be amazed as they didn't believe it was possible due to the complex nature of the display.
I don't see why that would be. The display was optimised for very fast vertical scrolling - not text per se, since as far as the hardware was concerned everything was graphics; the PCW lacked a hardware character generator. The top-level graphics structure was an array of pointers to screen lines, so lines could be moved up and down the screen simply by moving their references. OTOH there was nothing at all to stop you allocating each line to a contiguous memory region and dealing with the screen as a two-dimensional bitmap array.
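A sketch of that pointer-table trick in Python, for anyone who hasn't met the idea - the names are mine, not Amstrad's, and a handful of fake scanlines stand in for the real display:

```python
# Sketch of the PCW's "roller RAM" idea: the screen is a table of
# references to line buffers, so scrolling the whole display is just a
# rotation of that table - no pixel data moves at all.

SCREEN_LINES = 8                     # the real display had far more scan lines

lines = [bytearray([n] * 4) for n in range(SCREEN_LINES)]   # fake scanline data
roller = list(range(SCREEN_LINES))   # table of line references ("roller RAM")

def scroll_up():
    """Scroll the whole display one line by rotating the reference table."""
    roller.append(roller.pop(0))     # old top line recycles to the bottom

def visible_first_bytes():
    """First byte of each scanline in display order, top to bottom."""
    return [lines[i][0] for i in roller]

scroll_up()
# display order is now lines 1..7 with old line 0 recycled at the bottom,
# yet not a single byte of scanline data has been copied
```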
That's inevitable. Real programming is hard. Or at least takes a lot of knowledge and development of skills. It's not possible to cram all that in amongst everything else at a school environment.
That's precisely my point. It needn't be. For a start remember we are talking about a foundational level here - no one is suggesting high schools should be turning out CS graduates. Secondly, with an intelligent choice of tools and proper integration it could actually reinforce existing material. This is why I suggested functional programming: it's obviously not got so much commercial relevance but allows pupils to focus on what is essential rather than peripheral.
I mentioned SML in my previous post so we'll stick with that for the time being. "3 + 7;" is a complete and useful SML program - key it in at an interpreter and it gives you the result. No need for compilation or a containing program to get parameters or communicate the result. In that sense it's a basic calculator.
Go one step further to an early example of a useful algebraic equation: °F = 9/5 °C + 32. That translates more or less directly to an SML function, again with no need for a surrounding program:
fun CtoF c = 9.0/5.0 * c + 32.0;
A factorial is defined as
- The factorial of 0 is 1
- The factorial of n is n multiplied by the factorial of (n-1)
Again it translates directly:
fun factorial 0 = 1
| factorial n = n * factorial (n-1);
We've only scratched the surface, but we've already established the idea in the pupil's mind of programming as solving computations rather than making text scroll across the screen or something equally pointless. It also strongly reinforces the maths syllabus by showing real-world relevance. You can go on to discuss sorting algorithms or basic data structures as time permits.
Re: School Reform
1: Daily PE rather than weekly, with no BS excuses to get out of it.
That's a quarter of the school day gone already then. Forget adding anything else to the syllabus - are you going to get rid of maths, English or science to make way for it?
I don't see why kids can't stay on till four or half four instead of three or half three, especially in secondary school, but that is the world we live in.
But we know it's already going pear shaped. If you look at any of the media reporting it seems the whole concept is being diverted into things like basic HTML markup. I wouldn't call that computer science, more an extension of the existing ICT syllabus.
If you really wanted to push CS into schools the way to do it is not as some kind of bit part. Use one of the very high level functional languages, e.g. SML, to reduce the amount of red tape, and introduce it in lower high school alongside basic algebra. That way the two areas directly reinforce each other.
Re: Run on the bank? - read the wiki page
Your logic is conceptually flawed. Each iteration of the system results in the bank keeping more of the money - the sum to infinity is that they retain the full amount. They have external liabilities greater than the original cash amount, but equally they have claims on external debtors to match those liabilities. Your argument switches between the two sets of figures mid-flow: if you deal with it on a cash basis it makes sense; if you deal with it on an assets-and-liabilities basis it makes sense.
It doesn't make sense if you look at one side of the assets and liabilities equation but ignore the other. You're treating the deposits as if they were cash and treating the debtors as if they don't exist. The problem isn't a flaw in the model, it's a flaw in the analysis: the figures aren't being used consistently.
Re: Run on the bank? - read the wiki page
Based on the £100 initial deposit the bank can lend, through just 3 iterations, £244 (and, when all is said and done, about £900 total). This is known as the money multiplier. Like I said, they lend out a multiple of the original deposit.
But they have far more than £100 in deposits to cover it.
Consider the same 10% reserve level. Mr A deposits £1,000,000. B then borrows £900,000; the bank keeps the other £100,000 towards its reserves. B spends his loan with C, who redeposits it in the bank, which then lends out another £810,000, keeping £90,000 to add to its reserves. You can repeat this cycle as many times as you want. After ten iterations the bank has received £6,513,215.60 in deposits, lent out £5,861,894.04, and has £651,321.56 in its reserves.
In other words the 10% ratio is maintained. Each transaction is completely separate - there is no chaining of them together. It isn't as if Z defaulting means P doesn't have to pay his loan back. The amount out on credit thus remains the same proportion of the bank's liabilities. This isn't advanced finance, it's basic maths.
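The arithmetic is easy to check. A quick Python loop with the figures from the example above (£1,000,000 initial deposit, 10% reserve, each loan redeposited, ten iterations) reproduces the numbers exactly:

```python
# Reproduce the fractional-reserve example: £1,000,000 initial deposit,
# 10% reserve ratio, each loan redeposited, ten iterations.
RESERVE_RATIO = 0.10
deposit = 1_000_000.0
total_deposits = total_loans = reserves = 0.0

for _ in range(10):
    total_deposits += deposit             # customer deposits the cash
    reserves += deposit * RESERVE_RATIO   # bank keeps 10% back
    loan = deposit * (1 - RESERVE_RATIO)  # and lends out the rest...
    total_loans += loan
    deposit = loan                        # ...which comes back as the next deposit

print(f"deposits £{total_deposits:,.2f}, loans £{total_loans:,.2f}, "
      f"reserves £{reserves:,.2f}")
# deposits £6,513,215.60, loans £5,861,894.04, reserves £651,321.56
```

Note that reserves divided by total deposits stays at exactly 10% after every iteration, which is the whole point: the ratio is maintained, not eroded.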
But he's right.
Undoubtedly. Alleging that things like Google may be eroding our ability to think must be the understatement of the year - it clearly is, especially in the younger generation. How many technical forums have been utterly ruined as useful resources by the kids who think anything can be learned in two minutes flat? You know the ones - if their Googling doesn't hit the precise fact needed immediately, they ask someone to point them to that magical site that teaches you advanced 3D games programming in C++ from nothing in five minutes.
It seems to be breeding an utter disregard for any actual skills at all. A few months ago in a newsgroup I said something along the lines of "Take some brass sheet, drill holes here and here, cut with a scroll saw and clean up with files and Brasso". That triggered a response of "But that would look really rough and amateurish" - not from the OP, who took the suggestion, but from some kid somewhere. He couldn't make something that looked good, so the idea someone else could make something with a better appearance than a commercial product with a little time, effort and knack never entered his head.
We're heading for a generation who are nothing but consumers and define skills by what they have bought rather than what they can do. Of course in the long term they won't even be able to buy their shiny, because they have no skills of value in the job market.
Re: Judging by the list of equipment
your office is actually the Government's 'hi-tech' COBRA war-room
Don't even joke about that - it's closer to the truth than you realise. I recall reading a news feature on Cobra a couple of years ago. It's where the nuclear button is kept.
It's a Telex machine.
Re: those typewriters have monetary value, don't you know...
Rather difficult for the likes of GCHQ/NSA and Google to slurp data from a typewriter.
It's actually very easy, especially with single strike ribbons - unroll the ribbon and you can read off everything it has typed directly. No one ever thought about that when throwing out used ribbons.
He did not want to accept the fact that he had broken the law.
That's probably because based on the facts as you presented them he hadn't. There are no strict liability offences in the Computer Misuse Act so you have to establish mens rea (essentially deliberate intent) in order for it to be a crime. The scenario you describe falls far short of that.
Re: All the Arduino IO is connected by a single I2C port
I/O timing is here:
Essentially, I/O mediated via I2C can go at 230Hz (not kHz or MHz) maximum. The 2 pins that don't go via I2C can go at just under 3MHz (the jitter on this is unclear).
A ha! So that's where people are getting these silly numbers from. We were discussing the speed of the I²C bus, not the output speed of a GPIO module attached to that bus. It is akin to me noting that I have a serial card in one of my machines whose top speed is 115200 baud, and from that extrapolating that that is all the PCIe interface is capable of. Of course it's nonsense: you are referring to the limitations of a peripheral, not the bus it is connected to.
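A back-of-envelope calculation shows why an expander's toggle rate tells you nothing about the raw bus rate. The figures below are illustrative assumptions (a generic expander on a standard-mode bus), not measurements of the Galileo's actual driver path, which adds its own software overhead on top:

```python
# Why a GPIO expander's toggle rate says nothing about the raw I2C bus rate:
# every pin change costs a whole multi-byte bus transaction.
# Figures are illustrative, not Galileo-specific.
BUS_HZ = 100_000        # standard-mode I2C bit clock
BITS_PER_WRITE = 9 * 4  # assume ~4 bytes per write (address, register, data,
                        # plus overhead), each byte being 8 bits + ACK

transaction_s = BITS_PER_WRITE / BUS_HZ
max_toggle_hz = 1 / (2 * transaction_s)  # a full square wave needs two writes

print(f"{max_toggle_hz:.0f} Hz max toggle on a {BUS_HZ} Hz bus")
# 1389 Hz max toggle on a 100000 Hz bus
```

Even before any operating-system overhead, the expander is three orders of magnitude slower than the bus bit rate; layer a Linux userspace driver on top and figures in the low hundreds of hertz are entirely plausible. None of that says anything about what the bus itself can do.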
Re: All the Arduino IO is connected by a single I2C port
I²C has been operating at 3.4MHz for the last 15 years: once you start going much faster (especially for interconnects) you need careful impedance control and high-speed boards. If you're going that way, is PCIe fast enough for you?
No, not a total fail from Intel. A physics fail from a commentard.
I don't see what "problem" this product is supposed to "solve".
If you need small and embedded, you choose one of the proper Arduinos, and get microscopic power usage and direct control over the hardware.
If you need power and programmability, you can get a Pi, and still get something that can be battery powered in a pinch.
I work in precisely this sort of market and have come to precisely the opposite conclusion. The smaller Arduinos are hobbyist devices, nothing more. It doesn't matter what you use, you're going to have to program it, so the smaller boards are up against bare chips such as PICs or AVRs. For simple stuff with bare chips you can sketch out a schematic in five minutes, calculate its values in another ten, and lay out a PCB in another fifteen - no, those figures are not an exaggeration. The resulting circuit may cost under £1 in parts and might consume one square inch of board space. That is what the smaller boards are up against commercially - they can't compete even in very limited volumes. Sure, a more complex circuit might involve more resources, but in that instance you will be designing that complexity as add-on boards for the Arduino, so even then you're not gaining anything.
It's only going up the performance scale where modules start coming into their own, once you start needing high speed board layouts that are trickier to design and manufacture. RasPi superficially looks attractive for commercial use, but there are a number of issues with it that tend to make you very reluctant to consider it. The stated design aim was for something easily embeddable into other devices, and yet the first production versions left out something as fundamental as mounting holes. I mean, really, you call that a contender? Even now there are issues. USB ports for power are a dubious decision even from an electrical standpoint given that the Pi can draw more than 500mA, but there are more prosaic issues to consider. USB plugs are always moulded - they have to be, the spec demands it. How then are you supposed to connect it up to your wiring harness given you can't get a plug to fit it? Yes, these are little things to be sure, but there are any number of them. Each one simply adds to that gut feeling of "This board isn't really for me".
This board might not have the same toy value for the hobbyist buying them in ones and twos but it seems to have a lot more practical refinement for the integrators buying them by the 20 or 50.
Bad faith all around
I can still remember appeals for contemporary valves etc going around in the mid-90s - the rebuild was only possible because of the good-faith donations (money and parts) of a lot of interested folk. Does it cross the minds of the people involved that this kind of petty squabbling is not what those donations were to enable?
Same here in Preston. The annoying thing is the next street on one side can get it, so can the next but one on the other side, as can the rest of the neighbourhood. When they came round to install the network in the early nineties they saw our road and the next one had under-pavement service ducts fitted when the estate was built and so there was no need to dig them up.
The only problem was there was a restrictive covenant on them and they were BT-only for the first ten years or so (the estate itself dates from the early 80s). They explained at the time: "We're going to come back in a couple of years to cable up your street when that covenant runs out."
Well, that was twenty years ago. They've still not come back.
Missing the issue entirely
Google break the law, blame the advertisers. If the article is accurate then blame rests squarely and undeniably on Google. It is the collection of sensitive data that is restricted, not acting upon that collected data.
Therefore if a Canadian searches for "green penile wart" or similar and that is recorded Google are in breach right away - the law is broken the instant the user is put in the "searched on genital warts" box. You could legitimately dish out sponsored links in the search results themselves, even if they are of a sensitive nature - after all, no data has been collected (as in put into a collection) at that point. However, after that the data must be discarded.
Advertisers are then free to select based on whatever criteria they want. If it is for sensitive data they simply get no results. Blaming the advertisers for this is passing the buck for Google's own illegal activity.
Re: "It's not remotely possible...
"It's not remotely possible...
"...to provide any product "for life" that has recurring costs associated with it."
Of course it is. The lack of recurring revenue doesn't by itself make the model unsustainable. Somebody's already mentioned the annuity business which relies on accurately forecasting how long "for life" means on average. The other method simply depends on the average churn rate as people naturally move on to something else.
For example, I have a lifetime membership with sdf.lonestar.org for remote access - I use it as my primary news and email account, a stepping stone in various remote access cases (behind firewalls etc), and as a dumping ground for various small bits and pieces I need to access from anywhere. It cost something like £15 twenty years ago. I've had far more value than that since.
It's a non-profit with public accounts, though, and you can see that it is not in financial difficulty of any kind, even though the lifetime accounts make up the largest share of revenue. The majority of users play with it for six months or so and then move on to something else. Since it's a "lifetime" membership people don't feel the need to use it to get their money's worth and many simply never return after a while. Overall, the model is perfectly sustainable, even if some people like me are effectively subsidised by the short-term users.
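A toy model makes the churn point concrete. All the figures here are invented for illustration (they are not SDF's actual costs): a one-off fee is sustainable whenever it covers the cost of the *average* active lifetime, with the long-stayers subsidised by the drifters:

```python
# Toy model of a "lifetime" membership: a one-off fee is sustainable when
# the AVERAGE active lifetime is short. All figures are invented.
FEE = 15.0           # one-off lifetime fee, £
COST_PER_YEAR = 2.0  # hosting/support cost per *active* user per year

def sustainable(avg_active_years):
    """True if the fee covers the expected lifetime cost of an average user."""
    return FEE >= COST_PER_YEAR * avg_active_years

# Most users drift away within a year; a rare twenty-year user alone would
# be loss-making, but the average is what matters.
print(sustainable(0.5), sustainable(20))  # True False
```

With these numbers, as long as the average member is active for under seven and a half years the model washes its face, even if a handful of die-hards cost more than they ever paid.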