"excitedly splashing sand at it's balls".
Another more recent example. In the early Noughties, the BBC’s iPlayer was envisaged as a sophisticated P2P client, and at one stage had over 400 people involved in spec meetings. iPlayer only rolled out after the team had been reduced to around 15 – and the doors were bolted shut.
And all 15 of them had iPhones. And it was impossible to watch it on Android. And I spent years getting irritated at how anyone could have been so stupid (and still are?), before just giving up. And the news website is equally moronic.
So, just maybe, cutting a team down to 15 and letting them get on with it is not necessarily the right thing to do.
"Of course if you want to avoid support for your hardware going away, best bet seems to be running Linux. Strange how we got to that state".
Speaking as a lifetime Unix user, and an occasional Linux device driver writer, and as someone who recently had to take a hammer to his wife's computer after it announced that it was going to 'upgrade' to Windoze 10 in 5 minutes...
Not quite. Keeping up-to-date with kernel changes is a major, major PITA. I did a PCIe driver a few years ago, which was originally for 2.4.7. There were significant or major changes in so many kernel versions that I lost count - 2.4.10, 2.4.17, 2.4.22, 2.6, whatever, not to mention the whole v3 and v4 thing. The only way to keep on top of it is to select a major distro - something like RHEL6 - and try to support that.
The kernel people will update a few selected drivers (which I've never heard of) when they make a change, but the rest of us are on our own, with little or no usable documentation.
“Being a special unique snowflake works for art but not design. Design should be invisible… so you have die hards that love it, but you have the mainstream of the market that struggles with it, if they try at all”.
Now, if somebody could just tell that to the f***wits behind the Ribbon...
> Industry standard or Microsoft?
Or VMS... DEC... RiscOS... etc. RiscOS was a PITA - deleting *.c could wipe your disk. And MS has actually always supported '/', though I'm not sure to what extent.
Seriously, though, Cygwin and MSYS have file path conversion issues which make it difficult to do Makefiles, scripts, and so on. If MS have managed to sort this out so that the machine looks like it has native *nix file paths then it's probably worth trying out.
@DougS: different sort of customisation. If every processor off the fab line is identical, then running up a VM is trivial, as long as you can get your hands on a spec for the CPU.
The problem is when each processor on a wafer is individually etched with something like a serial number, which can be used as a secret key. This is what is expensive, and is what Intel used to do on x86. This is what you'd need the electron microscope for.
@Bucky2: no-one seems to have specifically answered your point.
*If* the processor on the iPhone board (or any other embedded system) is generic, in the sense that it doesn't have extra mask processing to give it a unique attribute of some sort, then you're probably right. You just use the ATE equipment which was used to test the boards to extract the ROM data, create a VM, and you're good to go. However, many ROM devices will have a security bit which may be blown after manufacture to prevent this. To get around this, you may (or may not) have to get the chip off the board and read it (normally, ie not with JTAG/ATE equipment) in your own test rig.
However, the processor may be customised. Older Intel x86 processors had a CPUID instruction which returned a unique serial number, for example. The problem with this sort of thing is that it involves an extra manufacturing mask and is therefore expensive. I don't know (or care, actually) whether Apple does this. If they do, the unlock algorithm presumably requires knowledge of both the 4-digit passcode and the processor ID. In these cases, you may have to resort to getting the top off the chip and examining it under an electron microscope to try to find the ID (which is not necessarily very expensive). If you have some knowledge of the algorithm you may instead be able to brute-force this in your VM.
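To make the search-space point concrete: once the device-unique ID is known, a 4-digit passcode leaves only 10,000 candidates. A minimal sketch, assuming a hypothetical single-round KDF (a real device would use a slow, hardware-bound derivation with many iterations; all names here are invented for illustration):

```python
import hashlib

def derive_key(device_id, passcode):
    # Hypothetical KDF for illustration only: a real device would run
    # thousands of PBKDF2/scrypt rounds tied to the hardware ID.
    return hashlib.sha256(device_id + passcode.encode()).digest()

def brute_force(device_id, target_key):
    # A 4-digit passcode gives only 10,000 candidates, so with the
    # device ID in hand the search is trivial.
    for n in range(10000):
        candidate = "%04d" % n
        if derive_key(device_id, candidate) == target_key:
            return candidate
    return None
```

Without the device ID the same search fails, which is the whole point of etching a per-chip secret.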
Anyway, having said all that, I've worked on various embedded devices and I would be very surprised (astonished) if Apple doesn't already have software that can boot up any iPhone without knowing the passcode.
@vector: I think you might have missed the point of JimmyPages' original post.
And it's curious that nearly 30% of Reg readers have either done the same, or are happy with selective principles.
> And the photos had all been FTP'd from various servers on the 'net.
> How times change. We would never think of doing it today, obviously.
We might not, but most of us are probably getting on a bit. About 6 years ago I had a short gig with a significant engineering company, doing electronic design. This was an all-male environment, but I replaced a girl who had recently left college. I personally have almost never worked with any females over the past 30 years. Anyway, turned out that this girl was frequently on youporn, completely openly, on her work computer, in an open plan office. She had her back to the window and the half-dozen or so guys around her all knew she was doing it, and came and watched occasionally.
These people need to get a dictionary and look up 'pragmatic'.
It's now only a matter of time before we can download everyone's tax data and find the MPs, fatcats, and so on who aren't paying any taxes.
This sort of thing really pisses me off. Why the **** would anyone want to start encrypting *everything*? I have a mail server that sends out automated non-sensitive messages (*not* spam), and I foresee lots of pointless dicking about coming up. Consider:
1 - Google is a prime mover behind 'TLS Everywhere';
2 - Google charges for TLS on inbound connections;
3 - Google is behind 'Let's Encrypt', which issues free TLS certificates, which are trivial to get (I have one myself, and I did the whole thing online in a few minutes, with no human intervention);
4 - The Let's Encrypt certificate proves exactly nothing except that I have control of the server for which the certificate was granted (I only had to post stuff on it to get the certificate);
5 - Phishers control their own servers anyway, so can trivially get their own certificates. There is *no* "protection".
6 - If you really want private email, you wouldn't do anything as stupid as attempting to encrypt the connection - you'd encrypt the *email*;
7 - The whole point of SPF records is to make sure that the email came from whoever it claims to have come from, and webmail providers do a good job of SPF validation. This adds exactly nothing;
8 - Conclusion: this is all about Google trying to make money.
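Point 4 is easy to see for yourself. A minimal sketch of the kind of HTTP challenge Let's Encrypt performs (the real protocol is ACME, RFC 8555, with key authorisations and nonces; the names and the `fetch` callback here are illustrative):

```python
# Simplified sketch of an HTTP-01 style domain-control check.
def challenge_url(domain, token):
    # The CA asks for a token to be published at a well-known path.
    return "http://%s/.well-known/acme-challenge/%s" % (domain, token)

def http01_validates(fetch, domain, token, expected_body):
    # 'fetch' stands in for the CA's HTTP GET. All the check proves is
    # that whoever requested the certificate can publish a file on the
    # server - exactly the "control of the server" point above.
    return fetch(challenge_url(domain, token)) == expected_body
```

Anyone who controls the web server passes; nothing about identity, trustworthiness, or intent is established.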
The only reason I had to get a certificate was because some pointless retards who run a public, non-sensitive and non-commercial website (ie. most sites) which I need automated access to decided to take TLS-only connections. Why?
I also run mailing lists where about 30% of recipients have gmail accounts, and another 35% have Microsoft webmail accounts. The emails are opt-in, non-commercial, non-spam, and are SPF- and DKIM-signed. About once a year Microsoft will silently cut off all outlook/live/hotmail/msn recipients, and I have to dick about for a day with some retard at Microsoft to get them re-enabled. I now suggest to new subscribers that they don't use Microsoft accounts. This never happens on gmail, aol, gmx/whatever. If Google starts popping up warnings for recipients who happen to be on gmail, they'll get the same treatment.
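For reference, the SPF check mentioned above boils down to matching the connecting IP against the sender domain's published record. A toy evaluator, supporting only plain `ip4` mechanisms and a final `-all`/`~all` (real validators implement RFC 7208, with `include:`, `a`, `mx`, CIDR ranges and redirects):

```python
def spf_allows(spf_record, sender_ip):
    # Walk the record's terms in order; first match wins.
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if sender_ip == term[4:]:
                return True
        elif term in ("-all", "~all"):
            # Explicit (or soft) fail for everything not matched above.
            return False
    return False
```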
There are other ways of detecting VPNs and proxies than playing whack-a-mole with IP addresses.
Some of these arguments don't really hold up. Geolocating isn't an issue, because (a) IPV4 addresses are scarce and are sold on, and it's not unusual to find IP addresses that trace to, for example, China where the block itself is registered in the US (though, granted, traceroute will do the job, but I believe that geolocation is normally done through registration and not tracing), and (b) the cheap proxy services are all in the US anyway. You can get a proxy in the US for less than a dollar a month per IP address, and this is where I'd start if I was connecting to Netflix.
On the user agent, I always put a plausible user agent in my (cURL) scraper, and I bet everyone else does. And a plausible referer, and cookies, and everything else.
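For what it's worth, a minimal sketch of that kind of request in Python's urllib (the header values are plausible placeholders, not anything a particular site requires):

```python
import urllib.request

def plausible_request(url, referer, cookie):
    # Build a request with browser-like headers; a server checking only
    # headers cannot distinguish this from an ordinary browser visit.
    req = urllib.request.Request(url)
    req.add_header("User-Agent",
                   "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/80.0.3987.132 Safari/537.36")  # placeholder UA
    req.add_header("Referer", referer)
    req.add_header("Cookie", cookie)
    return req
```

(urllib stores header keys with only the first letter capitalised, hence `get_header("User-agent")` if you inspect the result.)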
I wouldn't expect a commercial proxy service to distribute my traffic over the IP addresses I've paid for. I connect to address X, and expect my outgoing traffic to come from address X. If you're right, then that provider doesn't understand anonymous proxying. I automatically test a proxy before using it live, and that sort of address-hopping is easily detected.
But, at the end of day, I agree that you're vulnerable because you have to log in with a Netflix account, and all they have to do is log all the IP addresses on that account. Your best hope is to go through one clean/paid-for US proxy and hope that you don't have to change the address too often. Or you could get a life and stop watching Netflix.
(yet?) First off, I can't see that their examples are even "IoT". Jeeps and Boeings aren't part of the IoT. Somebody just (allegedly) screwed up their entertainment systems, and failed to separate them from the control systems. I don't need a paper on that. Somebody managed to gain access to a rifle targeting system because it had a WiFi connection; not even the Internet. And anyone who builds Linux and WiFi into a rifle deserves all they get. And somebody else built a drug infusion system so that it could be controlled over the Internet; I think I see what their problem was. This was the only example where there was a possible use case for external control, but I would like to see their justification for remote *control*, rather than *monitoring*. The place to control drugs is at the bedside.
Back in the real world, I get asked to monitor taps, for example, over the internet, to see how often they're used (really). They have a tiny micro and a GPRS connection. I might be asked to turn something on occasionally. I thought this was the "IoT", and the paper is pretty much irrelevant to that. It doesn't even mention TLS/SSL, and even that's a big deal on the electronics I've got. My #1 problem is ensuring that a request to turn on a tap comes from a trusted source, which isn't even mentioned. My interest in trusted hypervisors, having cryptographically signed boot software on the micro, chain of trust authentication, and all the rest of it, is exactly zero. Putting in all this overhead is far more likely to cause a problem than to cure it.
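One lightweight way to get that trusted-source property on a tiny micro is a shared-secret HMAC over the command plus a monotonic counter to block replays. A sketch (the framing and names are illustrative, not any particular product's protocol):

```python
import hmac
import hashlib

def sign_command(secret, counter, command):
    # Tag = HMAC-SHA256 over "counter:command"; the counter must only
    # ever increase, so a captured message can't be replayed later.
    msg = ("%d:%s" % (counter, command)).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_command(secret, counter, command, tag):
    expected = sign_command(secret, counter, command)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, tag)
```

SHA-256 HMAC is cheap enough for most small micros, with none of the overhead of a full chain-of-trust stack.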
*All* Joomla versions are unsupported. Seriously. And please don't down-vote me unless (a) you've attempted a site in all of versions 1, 2, and 3, and (b) you've been dicked about by completely incompatible "upgrades", and (c) you (very) occasionally get completely pointless "security updates" which contain no useful information whatsoever, and (d) at least one spotty adolescent has told you that it doesn't need documentation because it's Open Source, therefore you read the source and write your own documentation.
Then you probably don't know how many sides it has. I don't know what a year 12 student is, but anyone in the UK who couldn't answer this question at GCSE (16-year-olds) is unlikely to end up as a radio astronomer, or a statistician.
Your entire argument is nonsense. This is nothing to do with rote learning - it's a basic concept with almost-zero mathematics involved. Once you've got your head around this, you can move on to vectors and matrices. For a professional mathematician or scientist there's no such thing as an optional subset which can be ignored - would you have a problem writing articles if you weren't allowed to use the letters 'a', 's', or 'd'?
I wrote some software a few years ago to let GPs/PCTs/CCGs/etc (ie. UK family doctors and the people who pay them and fund medical care) identify anomalies in referral patterns, hospital admissions, length of stay in hospital, "GP performance", "over-referrals" (largely a myth, BTW) and so on. It was funded and used by local GPs - ie. the NHS itself - and the base dataset was the NHS Spine data.
The software was great, but it was useless for the first year or so, because no-one would let me (ie. the GPs) see the raw data with DOBs and gender in it, and you can't do the stats without them. It took a year to get the authorisations, but without postcode (or, equivalently, deprivation) data. Much better, but you can't really be sure what's going on without post/zip code, which makes the data identifiable. I spent about a year trying to get the additional clearance, but there was so much politics in the local NHS that it was next to impossible. The whole system then imploded with the PCT/CCG changeover, and everyone's access to the data was withdrawn, and the funding went, and the NHS disappeared up its own backside.
So, the software has been unused for 2 years, and no-one in this area (and probably any other area) has any statistically valid way of finding out what's going on in primary care. The govt has now apparently decided that this is important again, so other people are now going to spend a couple of years dicking about trying to get the Spine data, before losing it again. And the whole pointless cycle will repeat again in another 5 years. And the base Spine dataset cost going on for a *billion* to create, plus maintenance.
So, if you're worried about the privacy of your NHS data - don't be. Everyone in charge is so stupid and paranoid that no-one's ever going to see it anyway.
Why would anyone use their real name and address on a site like this? And, conversely, surely the email addresses *are* real, or AM couldn't send notifications/whatever to members?
The only personally identifiable data would be a name on a credit card, which is hardly unique, and possibly the last 4 digits of the CC number, which seems to be in the database. So, on the face of it, the database seems close to useless, unless you can match a name on a CC with an email address.
Looked like this was going to be an interesting article till I read the first line. What total bollox - "chalk long ago reached parity with cheese"?
I've been using Unix, variants and descendants, for 30-odd years, and I use Linux day in and day out. In all that time the only half-usable "desktop world" that I've had, and that I would seriously consider as a replacement for a modern working Windows desktop, was Solaris. I've spent a year with Unity on my laptop and every time I turn it on I remember that I need to get on to the internet to figure out how to remove it. 'Parity' my ass. And I'm still waiting for a usable file explorer.
Linux is great for what it does, which is by no stretch of the imagination the same as what Windows does.
Nothing to see here - move on.
Fasthosts in general: they went through a really bad time maybe 3 years ago. I signed up 2 years ago, without doing my homework. I've been running bare-metal Linux/Apache/stuff at Fasthosts ever since, with no problems that I can immediately remember. Their prices were (still are, I think) cheap, presumably because of their history. I wouldn't have a problem recommending them.
And on LAMP security/time and money: bollox. If you don't know how to keep a Linux box secure, then you're in the wrong business, and going for Windows 2003/anything isn't going to help you. The only problem I've had in 2 years was Shellshock, which I fixed in half an hour.
- If you're debugging JS, you'll need to use both Chrome and FF. They respond differently to errors, and one may report nothing, while the other might give you enough information to find the problem.
- the IE11 debugger looks like it might be pretty good. Not had the time or inclination to try it properly, though.
- Chrome and IE are way ahead of FF in app shortcuts and web apps. Mozilla has absolutely no idea what it's doing here, and dropped shortcut support 4 years ago.
How does that help (seriously?)
The server and client are asynchronous and completely unrelated, probably running on different platforms, connected only by a thousand miles of wet string. Use your browser/whatever to step through the JS, use your server dev tools to step through the server code. JS almost always runs asynchronously; there is no temporal correspondence between the client and server. Have I missed the point, or are you on too much MS Kool-Aid?
REAL men didn't stop learning how to do new things 20 years ago.
Just quoting for an NHS job:
Web based applications have to run on a minimum of Microsoft Internet Explorer 6.
29 comments and no-one's mentioned Polymer? Anyone used it? Just spent 20 minutes with it and it looks interesting. It only seems to have the Metro-like blocky theme, which I'm not keen on (looks crap, actually, too many colours, too big, too square, but might appeal to Metro users), and it doesn't look production-ready. The Bootstrap look-and-feel is much cleaner, IMHO. Other immediate reactions are that the UI designer looks like it might save a lot of trouble, and that it might currently be too network-heavy for general web apps on a mobile.
To be fair, Google's done a Microsoft. Chrome is the default browser on Android, so the vast majority of Android users (which means a significant majority of phone users) are using Chrome. So these stats don't reflect user choice.
No - StatCounter doesn't make it clear if 'Browser' means all browsers, or just desktop browsers, but it seems that it's just desktop browsers. In any event, you can generate the mobile browser stats at StatCounter, and 'Chrome' is not shown as a mobile browser; it shows 'Android'. And, second, 'Android' is buried in 'Others' on the Browser chart (the one in the article), at 0.47%.
HAS THERE BEEN any single "pellet" compressed all the way to fusion yet in practice? Yes or no?
No. It's just a proposal to generate $1B in funding. Here's a 30-page pdf which expands on the summary in the Reg link:
In short, fusion gain is required to get the necessary energy. The authors discuss Inertial Confinement Fusion (ICF), saying:
"This Inertial Confinement Fusion (ICF) approach has been actively pursued by the National Nuclear Security Administration (NNSA) of the DOE for decades as it represents essentially a nano-scale version of a fusion explosive device.... It appears that the most promising solution to accomplish this is with a large array of high power pulsed lasers focused down on to the D-T pellet....The National Ignition Facility (NIF) at Livermore National Laboratory is now in the process of testing a laser driven pellet implosion capable producing significant fusion gain for the first time....While the expected energy yield is in the range appropriate for propulsion (E ~ 20- 100 MJ), the scale and mass of the driver (lasers and power supplies) is not, as it requires an aerial photograph to image the full system."
Hence the cunning plan to use Magneto Inertial Fusion (MIF), where there's some theoretical work to show "that if the imploding shell on to the magnetized target were fully three dimensional, fusion gain could be achieved on a small scale with sub-megajoule liner (shell) kinetic energy. "
The authors say there are a number of challenges still to be resolved, but "The key to answering all four “hows” stems from current research being done at MSNW on the magnetically driven 3D implosion of metal foils on to an FRC target for obtaining fusion conditions."
No problem with the FryDay, as long as you have an OrlowskiDay. Only fair. My favourite is the "retired chemist in the audience" episode. I don't suppose you've got this on video, have you?
They also own the sites which host the adverts, or have an agreement with the owners. In other words, they publish the ads, and their botnet then clicks the ads. The advertiser pays the network (Google, for example) something in the region of 10c - 50c for the botnet's click, and Google pays the site owner some percentage of this. Great work if you can get it.
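Back-of-envelope numbers for the scheme described above. The 10c-50c cost-per-click is from the comment; the click volume and the publisher's revenue share are pure assumptions for illustration:

```python
# Rough click-fraud economics; volumes and the revenue share are assumed.
clicks_per_day = 100_000       # assumed botnet click volume
cost_per_click = 0.30          # dollars; mid-range of the 10c-50c figure
publisher_share = 0.68         # assumed cut the network pays the site owner
daily_payout = clicks_per_day * cost_per_click * publisher_share
```

On those (invented) numbers the botnet's owner pockets about $20,000 a day. Great work if you can get it, indeed.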
Well, no-one else has mentioned it, so I'll just point out that Drive is a lot more useful than just some place to put all your important data while you wait for someone else to lose it for you.
Of course, GAS is full of bugs, and is practically unsupported, and development seems to have stopped, and it's generally a bit of a pile of sh*te. Still, it's a great idea, and it's better than nothing.
PointDNS lost all its DNS servers for several hours on Tuesday morning. They blamed it on a DDOS attack, saying it "definitely originated in China" (with no details). My email and sites were down all morning.
I will forcefully argue that it IS a science if you can make cold-hard-provable statements like "you can't be faster than O(n*log2(n))".
That doesn't make it any more "Science" than, for example, Economics is. It's logic and the occasional application of mathematics. Perhaps someone should point this out to that nice Mr. Gove.
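The n log n claim is a good example of that pure logic: a comparison sort must distinguish all n! possible input orderings, and k yes/no comparisons distinguish at most 2^k outcomes, so k >= log2(n!), which grows like n·log2(n) by Stirling's approximation. The floor can be computed directly:

```python
import math

def min_comparisons(n):
    # Information-theoretic lower bound on worst-case comparisons for
    # sorting n distinct items: ceil(log2(n!)).
    return math.ceil(math.log2(math.factorial(n)))
```

For n=5 this gives 7, which is in fact achievable (merge-insertion sort), and for large n it sits a little below n·log2(n).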
Don't worry about it. I had to sign up to get support from Samsung (they very cleverly used FB's "technology" to avoid providing support, while making you think that your posts were visible and being read).
Anyway, after calming down, I signed up with the name of a well-known cartoon character, after some slight modification, to get it past their checkers. FB now sends me occasional mails telling me about people on the other side of the world who also went to "The University of Life". They think I'm really stupid, and it's great. I use it for logging onto other sites, and I get counted as an "active user", and I've done my own little bit to screw FB. I think I may go and click on some ads when I get bored.
I haven't read the paper, but I think it's pretty obvious that they are suggesting that current qubit experiments have not yet proven that "local realism" etc. (my emphasis), not that quantum mechanics doesn't exist. They're not physicists (I think), but they're not stupid either. And I didn't downvote you.
And I don't know what your beef with wavefunctions is. Maybe you did it another way, but it's a perfectly good model.
All three machines and others of the time, Johnson maintains, contain TV-out circuitry inspired by his pioneering work on the Microtan.
Cite? Everybody at the time just used an Astec modulator. The Microtan-65 photo even shows exactly that.
According to Paul Johnson, the ULA was originally created as a TTL (Transistor-Transistor Logic) integrated circuit.
Some confusion in this paragraph - the ULA emulator would have been a PCB with discrete TTL devices on it. The quote at the end of the paragraph doesn't make any sense.
I think Barry Muncaster was being a little disingenuous with this one:
“The 16K Oric [was] exactly the same design as the 48K Oric,” said Muncaster. “We would have had no problems if the specification of a particular chip had not altered just prior to manufacture. However, it did, which resulted in us having to change the 16K circuit board. This has meant a 12-week delay in production.”
The only thing that could have caused a 12-week delay would have been a ULA re-spin. So, was it Tangerine's own spec alteration that caused the problem?
Then of course the IBM PC came along and made everything generic, beige, slow and boring for a decade and a half.
Don't know why anyone would down-vote that - it's absolutely true. The PC was probably the major factor that killed off nearly everybody else in the 84/85 timeframe. The hardware was crap, and DOS was crap. Everyone bought it simply because of the IBM name. And, yes, we were stuck with crap for at least 15 years. I was using a Hercules graphics card till about 2000, to get second-rate mono bit-mapped graphics, when home computers had had it sorted in the early 80's.
Don't agree with your choice of 3, though.
CP/M had no graphical interface beyond the ability to change text colors with escape sequences.
CP/M had GEM, which we were running on bit-mapped hardware in '83 (or early '84?). Maybe they didn't put it on the Tiger, though.
what was the first OS that supported bitmapped graphics? windows or OS9?
Must be loads of OSes that predated graphics on OS/9. Unix, for one.
I wasn't objecting to your use of EMF; I just put it in quotes because I didn't want to repeat it.
The point of my reply was that free electrons are not a problem. They can't be, because metals are full of them, and you have no problem with mechanical systems. But, you do have a problem with electronic systems, precisely because of the free electrons.
Ok, you may have lots of valid reasons for not trusting electronic systems, but "free electrons" should not be one of them. Besides, in this case, it's Chemistry that's the problem, not electronics.
And I was also pointing out that electromagnetism is not "ephemeral and easily disrupted". And, quite apart from anything else, the actual movement of electrons has pretty much nothing to do with the operation of electronics. That's just something they teach kids in school. Electrons crawl through a conductor - I forget precisely how fast, but maybe a cm a second. But signals propagate across PCBs and ICs at close to the speed of light. Electronics is about electric fields, not electron movement.
After all, a mechanical system is one made up of atoms—atoms whose electrons are tightly bound to the nucleus and thus extremely stable. However, in an electronic system electrons are freed from atoms and are subject to the most ephemeral and easily disrupted of all the forces of nature—the electromotive force
Hope you didn't ask the aviation experts about this. Any conductor has free electrons; that's why it's an electrical conductor (and, to a lesser extent, a heat conductor). Mechanical systems tend to be metallic, and so have free electrons. Look up 'conduction band'. And there's nothing "ephemeral and easily disrupted" about electromagnetism ("the electromotive force"). Look around you - everything you see, every natural phenomenon, the form of every physical object - if it's not that way because of gravity, or the strong or weak nuclear forces, then it's that way because of electromagnetism.
The authors explain that their calculations suggest that of the 16 Terawatts of global energy consumption in 2006
Watts measure power, Joules measure energy, and Joules = Watts times seconds. '16TW' isn't an energy consumption, it's an instantaneous power draw. At a constant 16TW, the world consumes about 16TWh every hour; over an entire year that comes to roughly 140,000TWh, or 5 x 10^20 J, or about 505 exajoules.
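The conversion can be checked in a couple of lines:

```python
# Sanity-check the numbers: 16 TW sustained for a year.
watts = 16e12                    # 16 TW as an instantaneous power draw
hours_per_year = 365.25 * 24     # about 8,766 hours
twh_per_year = 16 * hours_per_year             # terawatt-hours per year
joules_per_year = watts * hours_per_year * 3600
exajoules = joules_per_year / 1e18
```

Which lands within a few percent of Wikipedia's 474EJ figure for 2008, so the order of magnitude checks out.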
Wikipedia gives the 2008 global energy consumption as 474 exajoules.
Let me guess. Historical article ratings had been showing significant dumbing-down over the years. This clearly couldn't be happening, so the ratings were erased.
I smell a fish - this is straight out of a Dilbert strip. Someone that smart wouldn't have let his subcontractor connect directly. And when did the Chinese subcontractor actually connect? In the middle of the night? And why did this "star programmer" actually bother going in to work to surf the web all day, presumably at the same time that "he" was connected externally? And is Verizon a bit like the US version of Vodafone? If so, what sort of moron would ask them to do a security audit?
Someone's taking you for a ride, Mr. Reg.
Sorry, got to add to the 86 comments...
Monitors don’t age very well; growing, as they do, dimmer and yellower as time passes.
Wrong. My 12-year-old near-perfect £600 CRT is going on the skip today, as it has just started to flicker occasionally (Mitsubishi Diamond Plus, 22", 1280x1024). This replaced my near-perfect Hitachi 21" CRT (also 1280x1024), which lasted about 10 years (cost about £1000 in ~1991).
The only replacement I've got lying around the office is a cheap 24" Philips 244E (1920x1080). It's pretty much unusable for development work. Lots of real estate, but it doesn't have the pin-sharp clarity of a good CRT.
And why are you reviewing affordable monitors? I thought this was a site for IT professionals? Your readers sit in front of a monitor all day, every day. Nobody in their right mind is going to save a couple of hundred quid on a second-rate monitor.
Another review, please: good monitors, for text-based work, that I can sit in front of all day long. We don't all sit around watching DVDs at work.
Wildly inaccurate, possibly.
Can I get the fiver first then write the book?
I detect an alternative career for you - crowdfunding. The requirement to actually produce a product is generally optional :)
This is a press release, not a detailed description of how they broke this attack. The last paragraph of the executive summary even ends with "The case study closes with an overview of how individuals can protect themselves against the Eurograbber attack, including specific insight to how Check Point products and Versafe products protect against this attack." The article is pathetic, and looks like it was written by teenagers. If there's any truth in the article, then they must have hacked at least two drop zones, but they give no details on how they did this. The article isn't even consistent about the URLs of the drop zones, saying that two were known, and listing 4 elsewhere, and blanking different parts of the URLs in different parts of the article.
I call very low-quality BS.
This is a great opportunity to gauge the current Reg reader demographic...
Err... no. It's a great opportunity to gauge the demographic of those who are willing to read 174 comments on a Stallman story. Nothing to see here.
I hear there's room for one more in the Ecuadorian embassy.
wget is broken and should DIE, dev tells Microsoft