"Proof of work"
Greenpeace (etc) should award a prize to whoever can come up with a cryptocurrency where the "proof of work" is proof of /useful/ work - not just burning CPU cycles (= heating the planet/wasting electricity).
Because the algorithms which are used to limit logins usually take into account the IP from which the attempt is made
Well don't use those algorithms then.
(I know, you could lock a legitimate user out of their account in that case, but maybe you could design some way to mitigate the impact of that, e.g. require a user to log in from a separate web system using decent 2FA or whatever to unlock their account in that case).
Why are limited login attempts not going to stop that happening?
Every time I change my work network password, I have to first stop my phone and email client auto-syncing with the server, otherwise I get locked out of my email for too many bad password attempts.
And if logins are automated, all the more reason for using long and complex passwords.
So? Messagelabs provide the first MTA, I expect mail passes through a bunch of other servers before it reaches the users.
In our Computer Science department at uni (2003-2007) they had some network printers which were connected such that every page printed cost a few pence in credit. Students were allocated a couple of quid of free credit each term and if you used that up you had to buy additional credit using a coin box.
They also had stuck fixings to the printer trays with epoxy and used a padlock and chain to stop people fiddling with the internals, or taking paper from the paper trays.
Some bright spark figured out that you could walk up to the printer and print a diagnostics page from the menu, and it showed the printer's IP address. The printer had an FTP interface enabled and any PostScript file transferred to it would be printed instantly. Of course this was all anonymous and bypassed the print credit charging system.
After that I don't think anyone seemed to run out of printer credits. One guy even took to printing a blank PostScript file whenever he wanted a blank piece of paper.
Worse still, at least at that time universities had huge IPv4 address blocks meaning every PC - and yes, the printers - in the department had real-world public IP addresses. Not sure if anyone tried but I reckon you could have logged in to that printer from anywhere in the world, without any authentication, and printed stuff off.
The OS provider who has been warning for years 'this OS is obsolete, it's unsafe, we no longer support it, stop using it'?
That looks like the result of 'randomly' mashing the keyboard, not truly random. Lots of substrings formed of letters that are close together on the keyboard.
Haigh also asked what assessment has been conducted of the consequences for (a) the UK economy and (b) national security of banning end-to-end encryption?
How could the government ban end-to-end encryption? Or rather, how could they enforce such a ban? Assuming they can inspect all internet traffic, encrypted content should be indistinguishable from random noise, so are they going to ban sending random data over the internet? Assuming not, an encryption tool could just hide genuine data in amongst a stream of random noise; there's no way they could force you to decrypt the random noise because it's impossible to do so, and no way for them to tell which of the packets contain encrypted content.
Of course they could prevent named service providers from offering messaging apps that use end-to-end encryption, but someone (probably abroad) would just create another one, or the terrorists will use PGP or something. We all know how well attempts at banning that went.
A single developer working on a contract won't get that much, though they'll probably make a bit over half of that, with the rest being overhead costs (premises, recruitment, management, etc...) and profit for the company you are working for. An employee will make less but gets job security, holiday pay, sick pay, maternity/paternity pay, employer's NI etc etc.
"But if I want something to reach a certain temperature, I want the rate of energy in to be as high as possible as all the time it's heating up, energy is also being lost to the surrounding area. So, in extremis, 100W for 10 hours won't get me to the same temperature as 10KW for 6 minutes, even though the total energy input is the same."
That's true if you're heating a fluid (e.g. boiling water for a cuppa) but not for a solid. If you cooked meat that way you'd end up burned on the outside and raw in the middle.
And gentler heating can be more efficient, for example you could cook a joint of beef in a big electric oven or a small slow cooker. The slow cooker would use less energy because it loses less heat to the surroundings, the cooker itself has lower thermal mass, it heats the beef directly by conduction rather than indirectly by convection, etc.
I work in embedded software - not smart meters - but I've worked on similar projects.
"Replacing one chip" is probably not as simple as it sounds.
From the hardware side - there will probably be board redesigns (operating in a different band may need new antennas, RF validation etc) which could result in several spins of hardware, refinements etc.
From the software side - we don't know what the interface is to this chip but quite possibly it will require entirely different drivers to the old chip. Maybe some kind of abstraction layer needs changing if the interface to the driver is different (we don't know if the new chip is from the same vendor as the old chip). Sure, drivers are often provided by the chipset vendor, but they also rarely work perfectly first time and need effort to integrate and test.
And then you have to test the thing, not only with the new bands but also with the old ones, and shake out all the weird edge cases, investigate, debug and fix them. And then there's the official certification process too, which I imagine is fairly onerous.
I'm not saying that this is definitely a £3.3m project, but in the absence of any other information, it certainly seems plausible that the engineering (and, yes, project management and QA) involved in such a project could come to that ballpark.
That works out at an average rate of £98 an hour, which is not crazy. Depends a lot on where they are based and the availability of people with the right skills.
So the question is, are they over-estimating the resourcing needs?
Based on everything involved in designing and validating a hardware change and the software changes to go with it (does it need new drivers? How well tested is the new chip? Do you end up finding bugs and weeks of back-and-forth with the chipset vendor to resolve them? etc) 18 months and 13 people seems kinda the right ballpark.
Considering the outrage there would be if the project was rushed and the resulting code under-tested and buggy, maybe they are rightly estimating cautiously in this case?
If you have a choice of paying by Paypal or by credit card directly, for purchases over £100, pay directly by card or you will lose the "Section 75" protection. http://www.moneysavingexpert.com/credit-cards/PayPal-Section75
Line rental is not "unnecessary". You still want some kind of piece of string between you and "the internet" and providing that string costs money. Doesn't really matter whether you use that string to make analogue phone calls or not.
They should just call it a "standing charge" like the gas and electric companies do and have done with it.
I'd much rather take a few hours off that I can spend at home the day before a long flight, knowing that I'll be able to get the work done while on the flight with nothing much better to do. There's not a lot you can do with your time on a long haul flight that is actually really relaxing, so might as well do work on the plane and relax at other times.
I used American Airlines' inflight WiFi on a business flight from London to LA - I think they charged something like £15 and while it was by no means "superfast" the ability to spend a few hours of an otherwise long and boring flight working (sending emails, using VPN etc) was great and definitely worth paying for.
Only weird thing was that the service seemed to be tunnelled back to T-Mobile somewhere in Germany so you kept getting sent to German versions of websites.
There isn't (and can't be, due to limited frequency space) the capacity though. Certainly in an urban area, if all the data that flies about through cables was replaced with mobile you wouldn't be getting those good speeds...
Read the first few paragraphs of this article and tell me what it's about.
"extending the reach of Wi-Fi" could mean anything, to me it sounds like some research to extend the range of a single access point.
But no, apparently it's to do with adding public Wi-Fi hotspots. Yet the word "hotspot" doesn't appear until paragraph 15.
As with most things, this is the classic trade-off.
Funnily enough industries such as aviation and nuclear spend a lot more money to find and fix bugs in their software than do people developing consumer grade software (desktop and mobile OSs, TVs, set top boxes etc). Consumers demand quickly-developed, latest and greatest software and it is neither possible nor necessary to deliver your mobile OS to the same standards of quality as you would the control software for a nuclear reactor. If you wanted your mobile phone to be as reliable as a warplane then (1) it would take decades to develop and (2) you wouldn't be able to afford it.
The same is true for other things, your house was not built to the same quality standards as the Channel Tunnel was because of the typical trade-off between time, cost and quality and the impact of failure. Software is no different.
And the IoT devices involved in this attack were bargain basement models made as quickly and cheaply as possible, therefore it comes as no surprise that the quality of their software is rock bottom (at least when it comes to security).
EE, Plusnet: both owned by BT but for the time being at least operated independently, somewhat better
Fibre cable, coax, headend equipment, ...
Upgrading to fibre is going to be one major way that customers will get a connection > 10 Mbps. Are you saying that if a customer has a 6 Mbps ADSL connection they can't be upgraded to a 50 Mbps FTTC connection until they and all of their neighbours have first been upgraded to an FTTC connection that has been throttled to exactly 10 Mbps?
bbc.com is the version of the BBC website for those *outside* the UK. bbc.co.uk is for those inside. The former has adverts and the latter does not.
Try to access either from the wrong location and you will be redirected.
A railway is relatively self-contained, sure there are occasional junctions/points and the like but basically trains can go forwards or stop, and maybe occasionally reverse direction. It's rare to find people or other obstacles blocking the lines.
Not so a car on a road.
Actually, it scums up the coffee pot.
"Well I would say that someone embarking on a CS degree should ALREADY know how to code......"
Well, that wasn't me. Sure I was "good with computers" at school, and learned a bit of HTML etc, and my A-level further maths included modules of discrete mathematics (algorithms and so on), but I didn't actually write my first "hello world" until I started my CS degree course in 2003. There were no programming courses at school, and nobody to encourage me to program.
I graduated with a 1st class MEng in 2007, and have done alright in software jobs since ;-)
I would imagine that most 17 year olds who think they know how to code, don't really. They may be able to hack together code from examples, but they probably don't understand the detail of *why* things are done the way they are (or better ways of doing things).
So maybe there really is a difference between "good" CS degrees and bad ones.
"A good way to test this is to give some example code that returns a pointer to a local variable and ask them to describe what can happen if you start to use that pointer."
We have a question based on exactly that problem in our interviews. Not only do I ask what happens if you use the pointer (it depends), but also how they would locate such a bug in code that someone else had written.
It's not universally true, but of those with a few years experience, it seems that those without a CS background struggle more with questions such as this than those who studied CS, but that's just the impression I've got from the candidates I've interviewed.
I can't remember the degree, but it was most likely computer science or something very similar (actually we've found at least a couple of unis have a course called Computer Games Programming which is actually basically Computer Science but made to sound sexier to 17 year olds who are applying for a degree).
Though, I'd still expect a Computing/IT graduate to be able to write a loop, maybe in Bash/Python/whatever scripting language they prefer but surely they're going to need to automate doing repetitive stuff at some point?!
The guy who was completely unable to write a for loop was an extreme example, but the sort that sticks in your head. Sure, he'd probably done it before and I'm sure his uni software project must have contained many loops, but sat in front of a computer he couldn't remember the syntax. I can't remember whether he claimed to know C, C++ or Java but as the loop syntax is the same and we'd given him a choice of languages, the fact he couldn't do even this basic thing from memory was rather concerning. (It wasn't the only thing where he failed to show knowledge or understanding, however).
As someone who does interview both graduates and more experienced developers (and a comp sci grad myself), in an embedded software business, I'd say I most value someone who has learned the fundamentals of CS (algorithms, complexity, computer architecture, logic) and some software engineering (design, testing, OO, design patterns) and can evidence applying both through their project work. The "Android/IOS/Linux/Oracle/Windows 10/Azure/AWS" stuff I really don't care about, provided their project work shows they have applied some knowledge in some domain areas and can pick it up quickly. (Though understanding the basics of the Linux/POSIX style command line is a big plus).
One reason why I'm always wary of "experienced" programmers who were self-taught and came from a hardware or physics background for instance is that they can bash out code based on tutorials they've learned etc, but they don't really understand basics like what a pointer is or what the difference is between a list and a vector, for example. Which can lead to writing buggy software, or being unable to debug such issues in other people's code...
We're a small/medium sized software consultancy (~60 or so employees in the UK). This year we advertised a vacancy for a software graduate. Many who applied who were either in the final year of their course or who had graduated with a Computer Science or similar degree failed our 10-question online multiple choice filter test. The test in question is open book and not time limited; the questions cover the basics of programming and CS theory, nothing complex; and our "pass" mark is only 6 out of 10. (Question 11 is "how many of the above answers did you look up online or ask for help with" - we wouldn't necessarily reject someone who looked up most of the answers, provided they got them all right!)
Of those who got to an interview (6 candidates if I remember correctly), none was up to standards (and our standards are not overly high for a graduate; we're talking basic failings like being unable to write a "for" loop in C/C++/Java). We left the graduate role unfilled this year. We do also take a "year in industry" student, who we interview about half way through their second year at uni, with the same questions and interview process, and universally the "year in industry" applicants were brighter and more capable than the graduate ones.
Which suggests that somehow we failed to attract the "good" graduates, and were left with a bunch who had somehow graduated or were on track to graduate in Computer Science but yet failed to understand the fundamentals of their chosen subject.
And actually, I don't need to control the DNS server, that just makes it easier. Since I can see and intercept all your traffic to my AP, I can look out for any initial non-HTTPS request and spoof a response, for example.
This also works with secure access points, if there is a common password I can get hold of (e.g. WPA2-PSK). If there's a hotel or pub that has a known WiFi password they provide to customers (maybe they stick it up behind the front desk/bar), for example, I could easily set up an AP using the same SSID and password and chances are at least some of the time (e.g. if your device has a stronger signal from my AP than from the hotel's) you will end up connecting to my network.
Avast obviously weren't being malicious.
Let's say I can convince you to connect to a WiFi access point (AP) I control.
Chances are you use the DHCP server in my AP to get an IP address *and DNS server address*.
So I configure my AP to point you at a DNS server I also control.
When you type www.facebook.com in the browser, I can deliver a DNS result that points you at a web server I also control, that provides a facebook lookalike login page.
You don't look close enough to notice that this particular connection to Facebook isn't redirected to HTTPS, you log in, I get your facebook password.
You can replace "facebook" with most other secure websites; the attack only fails if you've visited the site before, it uses HTTP Strict Transport Security, and your browser supports it. (Facebook actually do send HSTS headers, but many other secure sites, e.g. online banks, don't.)
“I have enough information at this point to open and close his bank accounts, or do whatever I want,” he says.
Er, really? Sure, he knows a fair amount about his "victim", but that still shouldn't be enough to do anything particularly lucrative to a criminal.
Last time I tried to close a bank account, I had to go into the branch (even though it was an "online" savings account), and show the bank card of my linked current account, and sign a form. That was for a dormant account with no money in it - had I actually wanted to withdraw money and close the account I'd have needed the card's PIN and also possibly some other photo ID if the amount in question was large enough. To steal money with online banking, from the two banks I use, I'd need (1) knowledge of logins, passwords etc and (2a) access to my card and PIN or (2b) access to my phone, depending on the bank. The attacker described here doesn't have ANY of that info.
Maybe this speaks more to the lax security policies of American banks than anything else?
And being able to gain root access to someone's web server (not really sure how that is related to "replicating" a web site) is entirely unrelated to learning anything about their home address, car registration etc, and more down to the fact they were running an old unpatched Linux distro.
"TfL claims 65,000 journeys a day are being made using contactless with 500,000 million journeys made using contactless since its introduction."
If that "500,000 million" is right then at 65k journeys a day it would imply it's been running for 21 thousand years. If it's really only 500,000 journeys then it's only been running for just over a week?!
"The EU does not allow origin discrimination of that nature on goods, hence the need for the "protected origin" scheme and the roomfuls of bureaucrats to administer it."
When has anyone ever worked on New Year's Day?
"ANY information they can glean from it can be used to reconstruct your identity, at least to the point they can employ social engineering to get more information and then eventually they have enough to compromise or steal your identity."
They *could*. But *would* they?
Your common-or-garden cybercriminal, much like your common-or-garden house burglar, will go for the easiest targets. They're after quick money not some convoluted identity theft.
In practice, my LinkedIn password is better than "password" or "12345678", but not as good as 12 truly random characters or whatever. Which is fine, as long as there are lots of people who have passwords worse than mine; just as my house isn't likely to get burgled as long as I have pretty good locks on the doors, and the guy down the street has crap ones.
I really don't want someone to get access to my bank account, or my email account, or root access to my servers, so I use secure passwords for them.
But LinkedIn, or for that matter some random forum such as this one, what's the worst that can happen if someone logs in as me?
The main risk if someone steals my login details from the likes of LinkedIn (or indeed this forum, which doesn't even use a HTTPS connection...) is if I use the same email and password combo for either this site and others, or for my email account, in which case they can get access to all the "forgotten password" emails and the like.
But if I don't, then what's the problem?
I have a better lock on the front door of my house than I do on my garden shed, for much the same reason. Get into the shed and at most you can steal some plant pots, potting compost, barbecue charcoal and a bit of garden furniture maybe.
Customer: "Why is X so expensive? Surely it doesn't cost you anywhere near that much to provide the service?"
Ryanair/Three: "Well, you don't have to use X."
It doesn't really answer the question, even if they are correct that you can generally avoid the charges by jumping through various hoops.
LOC was thrown out as a useful measurement for *coder productivity*.
It used to be assumed that the more LOC per day, the better the coder.
Now it is often believed that less is more, simpler is better, so actually writing negative LOC could be a very good day indeed. Hence the argument in the article that fewer LOC in non-British banking apps is a good thing.
Yes - depending on which article you read something like 30-45 million circulating pound coins are fake.
They actually seem to have quietly dropped that name since 2014 - the link from the article redirects to a different page. Can't think why.
The potential security features are intriguing though. Could the coins, rather than being just a lump of metal, actually contain some kind of chip?
A server that supports TLS 1.2 is only vulnerable if it also supports SSLv2, or if some other server that does support SSLv2 is using the same certificate.
Yes, every BBC Radio news bulletin I've heard in the past 2-3 days has had a brief "x says we should remain in the EU for y reason" from the newsreader, then some spokesperson for the UKIPs/Tories/other xenophobes of choice has been given a 30 second clip to spout complete tripe arguing about why we should leave.
I've not heard a single clip spoken by a correspondent from the "remain" camp.
Either always use braces, or use Python...
There are online "scanners" that work from webcams or uploaded images. You could have reproduced the QR code in MS Paint or similar if you really didn't have a way of photographing it.
Yes, but this is once you have a bank account. Setting up a bank account if you've never had one is a surprisingly difficult task. Especially for recent immigrants (e.g. refugees granted asylum) who have no identity history in the UK and may not even have ID documents from other countries.
It's these sorts of individuals DWP in particular will have difficulty identifying.
I imagine the Verify system would also struggle to identify many of those particularly of an older generation who even if they have a bank account may not have any debts (so no credit record), have no driving licence or passport, etc.
You are correct. But the URL is inside the HTTP request itself - not in the packet headers - and is encrypted for HTTPS. That's what I meant by "Or are they inspecting the contents of every HTTP request and logging that? In which case, what happens when the server is using HTTPS?"
Biting the hand that feeds IT © 1998–2017