Decent screen for once
OK, the MS product is not exactly cheap, but the 2160 x 1440 screen is a damn sight better than the majority of ultrabooks, even those above the ~£600 mark.
I think the official El Reg unit should be in kilowrists, the measure of simultaneous pr0n film streaming performance.
I think the main reason is there is SO MUCH silicon technology, experience, and fab facilities it is easy to make complex chips from it, whereas GaAs has been generally kept for the fastest of products where you don't have the same density of components but need higher speed. GaAs is more radiation-hard than silicon, but also less tolerant to heat. I'm sure others with more knowledge can provide a better informed answer though.
In the blue corner we have Cisco with the (alleged) NSA back doors.
In the red corner we have Huawei with the (alleged) PRC backdoors.
Ladies and gentlemen, pick your choice of partner now and place your bets! Complimentary tubes of KY will be provided...
Firstly, all these remote locations don't need access to a lot of data at any one time, so the database server ought to rate-limit requests and queries to a reasonable number per authorised machine/user.
Secondly, with something where a leak is this significant, questions really ought to have been asked about how many sites need to access it at all; for those that do, you could have deployed specific machines with dedicated hardware encryption in the network card (or a dedicated secure router) to tunnel the data to/from the server.
None of them would have any simple path to the outside world, so an attacker would need physical access at multiple points to begin hacking past the user-account and rate-limiting controls. Anyone needing to access the database would find those PC(s) in a reasonably secured room, log on, do their job, then go. The room could be CCTV'd so any attempted tampering would be on record, etc.
It is all perfectly possible, but it costs money to do (much less than the hack is going to cost, I'll bet) and adds some inconvenience, though still much less than the old days of paper files. So it's not really *that* inconvenient.
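The per-user rate limiting suggested above can be sketched as a simple token bucket, one per authorised machine/user. All the names and numbers here are hypothetical, just to show the shape of the idea:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: each authorised user gets
    `capacity` requests up front, refilled at `rate` requests/second."""
    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: deny (and log) the query

# One bucket per authorised machine/user.
bucket = TokenBucket(capacity=5, rate=0.5)
print([bucket.allow() for _ in range(7)])  # first 5 allowed, rest denied
```

The point is that a bulk exfiltration attempt hits the limit almost immediately and shows up in the logs, while legitimate one-record-at-a-time use never notices it.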
Again, you are looking at a much higher bar than plugging in a device to an unused port at the recreational area, etc.
Now you are actually tampering with the internal wiring and could easily install a keyboard logger, etc. But on an isolated network you would have to use a radio link out, and that could be picked up as part of a sweep for bugging anyway, if you are sufficiently paranoid or working to regulations that demand that degree of security. That is why the "red" cables in proper high-security installations have to be visible along their whole length and subject to regular inspections for tampering (or shielded fibre with some fibres used for tamper-detection, etc).
You are of course correct.
I was just thinking aloud about things that can be done for little physical cost on "normal" PCs & networks typically used in below-secret Gov, Business & Universities. OK, air-gapping is not common on those, but all the other features are pretty much standard on Cisco and similar kit, so having red/black networks for internal/external can be done and spare kit used for both.
Edited to add, worth a read:
Sadly it could get worse: the original hackers could post it on a torrent or similar to provide plausible deniability for the state about acting on the information in it, which could just say it got it from the hackers' public posting. That way other nations and every low-life scammer out there would have the treasure trove as well.
I feel sad for all of those US citizens now at risk, and angry that their government was so stupidly cavalier as to keep such an important database on a publicly connected system (probably?) with such a poorly thought-through approach to security.
They pay billions for the NSA and the least they could have done was to get them to give the whole system and its management a once-over. Scrap that, Snowden showed even they had not thoroughly thought through big-system security.
Maybe Snowden's documents were the source, or maybe this mega-hack. Who is to say the UK has not been popped (or was sharing with the US which clearly has been)?
If I were Russia/China it would make sense to say it was Snowden to disguise being in on this hack, for example.
Similarly if I were the USA/UK it would make sense to use Snowden as a stool pigeon to try and deflect public anger from the piss-poor security in place and/or the lack of appreciation of what such a massive database of all security-checked staff could mean when leaked.
"the night janiter plops a rasberry computer with a wireless modem"
If they are taking security seriously the switch would be configured to only allow specific MAC addresses on specific ports and even then only allowing the DHCP-supplied IP address to be used, so that trick won't work.
Also if they take security seriously they would put all the crappy never-patched network things like printers, web cameras, etc, on a separate VLAN/IP range (and without external access in the sad case they are not air-gapped) so their behaviour can be seen more clearly by intrusion monitoring systems, etc, and they can be blocked from initiating any connection to the "good range" machines (i.e. they only react to a print command and don't get to broadcast or probe the PCs).
A more likely physical attack is to plug 'evil USB' devices in to unguarded machines. OK, those systems should also be locked down so USB autorun is disabled on anything like it, but that may not be enough if the attackers have a zero-day exploit for the lower-level USB hardware/stack used. In the nation-state-with-insider-doing-dirty-work case that is, of course, possible.
Either way, it is much much harder to exploit a network not on-line, as exfiltrating the data needs some sort of access (USB or similar again) and there is a high risk of the person getting caught if the sysadmins have some regular checking of system logs for device attachment, etc, happening.
The sad thing is that this train wreck was seen coming for a long time, as you have:
1) Gov collecting data on its people like a fetishist
2) Gov cutting IT budgets and not holding anyone personally responsible, with power, to do anything about it.
3) Stuff being put on, or connected to, external networks because it's cheaper/easier/more productive that way.
4) Software / OS being so complex and hole-ridden with developers all running after "shiny and new" instead of simple and reliable.
5) Other nations realising 1-4 and the gains to be had from popping said data.
The USA may not be the first, but it sure as hell won't be the last nation to have its dirty laundry sent to China (or Russia, Israel, etc, etc)
If you collect it, it will get leaked eventually.
Well, there are old people and others with vision defects for whom an 11" retina screen is an utter waste of money.
A lot of folk use a laptop as a semi-permanent thing because it's smaller than a desktop + monitor and can be tidied away fairly easily. For them a 15" or 17" screen is just so much nicer to work on, and the size and weight are not the same issue as for those always travelling with it.
I think in most cases the issue is not that you could search for a person's past events using specific knowledge, but that Google would return all sorts of older stuff with just a person's name.
Surely a technical company such as Google could implement a filter that only returns recent results (say the past 12 months) when the search is a person's name, and only surfaces older material if you really go digging with specific searches and date ranges (like you once had to do before t'Internet came along)?
If you know the camera's point-spread function accurately, which I assume the guys who built the probe did, you can deconvolve the received image with the point-spread function.
It makes more sense in the frequency domain (plus phase) where the deconvolution process consists of dividing the image spectrum by the camera's "low pass filter" effect to restore the original image. This, of course, is not as easy as it sounds for various reasons:
1) There is noise in the image (both random and quantisation due to A/D converters). This gets magnified seriously wherever the camera has poor spatial resolution.
2) If there are nulls in the camera's response you have irrecoverably lost that information.
3) You might be trying to compensate for two effects - the camera's response and the movement of the system.
4) Errors in the above can become artefacts.
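The frequency-domain division described above can be illustrated with a toy 1-D example (all signals and parameters made up): blur a "scene" with a known PSF, then restore it by regularised division of the spectra. The small eps term is a stand-in for the noise problem in point 1, and this particular PSF has a null at the Nyquist frequency, so point 2 shows up too:

```python
import numpy as np

# Toy 1-D "image": two point sources, and a known point-spread function.
signal = np.zeros(64)
signal[20] = 1.0
signal[35] = 0.5
psf = np.array([0.25, 0.5, 0.25])        # simple 3-tap blur kernel
psf_padded = np.zeros_like(signal)
psf_padded[:3] = psf
psf_padded = np.roll(psf_padded, -1)     # centre the kernel at index 0

# Blur: convolution is multiplication in the frequency domain.
H = np.fft.fft(psf_padded)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Deconvolve: divide by the PSF's spectrum. The eps term regularises
# frequencies where the camera response is weak (point 1), at the cost
# of not restoring them perfectly. This PSF is exactly zero at Nyquist,
# so that bin is irrecoverably lost (point 2).
eps = 1e-6
restored = np.real(np.fft.ifft(
    np.fft.fft(blurred) * np.conj(H) / (np.abs(H)**2 + eps)))

print(np.max(np.abs(restored - signal)))  # small residual error remains
```

Shrinking eps sharpens the restoration but amplifies whatever noise sits in the weakly-passed frequencies, which is exactly the trade-off points 1 and 2 describe.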
See, this reason alone is why I won't choose to use Windows unless it's the only choice for some special job - they can dick around with my PC at any time of their choosing.
"Idiot sysadmins...greater risk to security than an unpatched Linux or Windows machine"
Often the unpatched machines are the result of said idiots.
Sure, you may find machines that can't be patched for various odd reasons (not supported and/or running special software that won't work on a newer OS, etc) but for $DEITY's sake you don't leave them Internet-facing or in use for email/web browsing...
"Don't believe only the luser blindly clicking on an exe is the culprit, sometimes the real luser is the syadamin"
For most corporate networks they should have all user-writeable space set to no-execute via Windows ACLs. Apart from software developers or sysadmins, who need to execute software that is not already installed in the proper (read-only) system locations?
"PTP eliminates Ethernet latency and jitter issues through hardware time stamping"
So you have an irrelevant comparison: PTP can't work on a WAN, and on your LAN (without WiFi use or woeful congestion meaning you should upgrade your routers) you get sub-ms accuracy which is smaller than the time-slice for most software/OS task scheduling.
Also having asymmetric delays of 100ms or so is quite poor, you really ought to be using NTP sources that are 'closer' to your machine (in a network sense).
But returning to my main point made elsewhere: using time stamps which are *assumed* accurate to re-order data over a wide system is simple but also prone to clock error. Should programmers not be looking at other hand-shake and event-counting methods to synchronise the *order* of events instead of trusting that everyone's clock is always sufficiently close in its time-keeping?
NTP is better than Windows SNTP by many orders of magnitude.
A typical Windows installation (thinking desktop here) has its time set once per week by default - so it can be out by minutes at times. Even if you set the frequency to once per hour (a registry setting) you are lucky to get better than 1 second.
NTP on a WAN typically gives you accuracies of 10ms or better (so around a 100 times improvement).
NTP on a LAN with decent time servers (e.g. machine with very good hardware clock or local GPS) gives you accuracies of the order of 0.1ms or better, so around 10k times better.
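Those accuracy figures come from NTP's standard on-wire calculation, which uses four timestamps so that (symmetric) network delay cancels out of the offset estimate. A minimal sketch with made-up timestamps:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP on-wire calculation.
    t0: client send time, t1: server receive time,
    t2: server send time, t3: client receive time.
    Assumes outbound and return delays are symmetric - the
    asymmetric-delay case mentioned above breaks this assumption."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # client clock error vs server
    delay = (t3 - t0) - (t2 - t1)            # round-trip network delay
    return offset, delay

# Hypothetical exchange: client clock 25 ms behind, 40 ms round trip
# split evenly, 1 ms server processing time.
offset, delay = ntp_offset_delay(10.000, 10.045, 10.046, 10.041)
print(offset, delay)  # ~0.025 s offset, ~0.040 s delay
```

Half of any *asymmetry* in the two path delays lands directly in the offset estimate, which is why 100 ms asymmetric paths give such poor results.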
Often the question programmers should be asking is why am I using time, and is that actually the best way of determining order and sequence?
No, the UNIX time_t follows UTC and so is not able to perform correct time duration calculations over the leap-second period as there are discontinuities at those points.
A more general problem with programmers is they use "clock time" as a substitute for "order of events".
This works well if all events are being recorded with consistent time stamps, say for conditional compilation on a local machine where you can check if the .o file in one location is older than your .c file in another, or when you pressed the "build" button in the GUI, etc.
Things break due to time faults: such as the same conditional process on a network file system where the time stamp of some files is due to the servers' clock, and others locally are from the client's clock which is different, or the file system's time resolution (e.g. 2 seconds on FAT32 as a worst case) is now greater than the interval between steps, etc.
Then we get in to all sorts of debates about keeping leap seconds to work around dumb programming. But really what the programmers & software architects should be asking is down to the ACID database situation - how do you guarantee the correct order of events in a process if the local clocks are not fully in sync?
Time-obsessives have two things:
1) Atomic time, which is precise and monotonic (the spherical cow).
2) Human/civil/UTC time that follows the Earth's rotation (upon which our concept of time and its units were based). And there are differing degrees of this (look up UT1 & UT2 if you want to know more). This is your real cow, and the equivalent choice of Friesian, Aberdeen Angus, etc...
NTP handles leap seconds in the "correct way" as far as it is defined, in that it makes UTC follow its defined values. The problem in the more general sense is you have two concepts of time, you have:
(1) The UTC/civil definition of days always being 24 hours of 60 minutes of 60 seconds, along with a formula for dates that makes up the Gregorian calendar (let's keep quiet for now about other calendars).
(2) For various reasons you also want to stay in approximate synchronisation with solar time - i.e. so that at, say, 0 longitude, 12:00 local is, on a yearly average, the time the sun is overhead.
Now the second is defined these days with extreme precision, but the Earth's rotation is variable and, worst of all, not quite predictable due to stuff moving around inside as well as tidal friction, etc.
The correct way to do all of this is, of course, already known and implemented in some systems that really matter, and that is to have your clock keeping "atomic" time that has no discontinuities, and then to apply a leap-second correction to get "civil" time. See:
That is exactly how the GPS satellites do it, and their own GPS time was in sync with UTC in 1980 and is now 16 seconds different.
What is a problem for most software when it comes down to second-level "accuracy" is that most computer libraries are based on (1) and:
a) They don't quite know how to deal with the 59-second or 61-second minutes that happen when you get a second removed/added.
b) Also, to perform the conversion to/from atomic time you need the offset values, and as they have to be updated as the Earth's motion is observed, it is hard to do correctly on anything stand-alone. You then need internet access, with the security problem that brings, and the grief caused when in a few years some web developer stupidly changes the URLs of important data for no obvious reason while tarting up sites.
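The atomic-to-civil conversion in (b) boils down to a lookup in a leap-second table. A toy sketch of the GPS-style scheme, with entirely made-up epoch values standing in for the real IERS data that a live system would have to keep updated:

```python
import bisect

# "Atomic" GPS-style time runs continuously; UTC inserts leap seconds.
# Each entry is (epoch_in_atomic_seconds, cumulative_leap_offset).
# The epochs below are MADE UP for the sketch; real values come from
# IERS bulletins, which is exactly the update problem noted above.
LEAP_TABLE = [(0, 0), (1000, 1), (2000, 2)]

def atomic_to_utc_offset(atomic_seconds, table=LEAP_TABLE):
    """Leap seconds to subtract from an atomic timestamp to get UTC."""
    epochs = [e for e, _ in table]
    idx = bisect.bisect_right(epochs, atomic_seconds)
    return table[idx - 1][1] if idx else 0

# Durations computed in atomic time are exact even across a leap second...
t_start, t_end = 995, 1005
atomic_duration = t_end - t_start                          # 10 real seconds
# ...whereas differencing the UTC-converted endpoints "loses" a second.
utc_duration = ((t_end - atomic_to_utc_offset(t_end)) -
                (t_start - atomic_to_utc_offset(t_start)))
print(atomic_duration, utc_duration)
```

This is why keeping the clock in continuous time and converting at display time is the clean design: the discontinuity lives in the table, not in the timeline you do arithmetic on.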
Finally, there is a project (which I have not checked/tried yet) to give you a local NTP "fluid time Wednesday" effect here:
NTP handles this correctly.
Most OS can handle it correctly as well, but from time to time (groan!) someone changes the time-handling code and then fails to test it on leap seconds and you get problems, like the Linux glitch a year or so ago.
You can get GPS simulators and create your own NTP servers that push out this sort of thing for testing, so it's quite possible to do, but people don't. And the results are predictable. Of course, you also get programmers doing dumb things to implement delays, etc, rather than using the proper OS calls, leading to more bugs.
I personally think they should step the second backwards and forwards every Wed for a couple of months - then we would get OS and application software tested and fixed. One can hope they would fix it...
Maybe if they spent less time in pointless GUI dicking around and fixed bugs they would have less need for this?
And what about non-security bugs, like the defaulting to US Legal paper for printing on every update on the *NIX version that has been open for more than a decade?
As for the Bundestag PC network being scrapped and replaced, maybe that is a final way to get rid of XP and force all users on to something more secure, reliable, and supported?
Let's just hope they check the PC suppliers are not using NSA-infected HDDs, should they decide to stick with Windows and its known boot loader...
The correct statement is "stopping the use of cheap crap programmers who don't understand what they are doing and fail to apply best practice when coding and the multitude of tools that already exist to help"
If you are really looking for a scapegoat for infosec woes, how about Office and VB plugins?
I wondered the same! OK, not quite as sinister as General Jack D Ripper, but a triumph of naming nevertheless.
Paris, as she could have my precious bodily fluids...
I don't want to blame the victim for being robbed, as that is a crime no matter where it takes place. But I do wonder why would you turn up in person to pay by bitcoin?
As far as I can see its main reason for existence is for electronic payment, and more so for use outside of the US-controlled credit card and Paypal corporations where it is hard to trace and hard to stop (e.g. wikileaks accounts being blocked by US gov pressure). Of course the "hard to trace" aspect appeals to criminals just like wads of cash, so maybe they pushed him for a bitcoin transaction knowing that.
I wish her all the best for a speedy recovery.
I still find it hard to believe that in my own lifetime that kiss was such a big deal, but then looking around the world today at some of the morons out there it is less of a surprise.
OK, you don't like sanctions.
Now how do you propose the West can impact on "the criminal government staff" without starting a war?
Sure, eventually something important will be hacked and people will die and, maybe then, organisations will finally wake up and stop putting critical stuff on the internet at all.
Hell, if you had to audit your whole system and get risk-based insurance for such design decisions we would hardly see any such risks, as systems would then be properly secured and so would take physical access as well as cyber skills to damage.
Just now there is a sporting chance of a few script kiddies taking a pop at critical stuff, because a many-years-old and unpatched (or unpatchable) system is now exposed to "save money" and "improve productivity" in an important piece of infrastructure or plant somewhere.
"If you yourself are not clever enough to use github in a secure manner"
So Sir, were you clever enough to notice the bad random number generator in Debian's OpenSSL? Did you in fact report it and help fix things?
If not then STFU and get on with something more useful. The call is not for GitHub to hand-hold users at any point, but to notice said compromised keys and warn users about them. Those keys, most likely, were generated years ago and then kept even when the user's OS was updated to something that has that bug fixed and they probably forgot which version of number generator was used to generate them originally.
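What's being asked of GitHub is essentially a set-membership check: fingerprint each uploaded public key and compare it against the published lists of keys the broken Debian PRNG could have produced. A toy sketch - the "blacklist" entry here is generated on the spot, not real data, and the fingerprint scheme is simplified:

```python
import hashlib

def fingerprint(pubkey_bytes):
    """Toy key fingerprint (real tools digest the SSH public key blob)."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

# In reality this set would be loaded from published weak-key lists
# (the broken PRNG could only ever produce a small, enumerable set of
# keys, which is what makes this check feasible at all).
weak_blacklist = {fingerprint(b"hypothetical weak key material")}

def is_compromised(pubkey_bytes):
    return fingerprint(pubkey_bytes) in weak_blacklist

print(is_compromised(b"hypothetical weak key material"))  # True: warn user
print(is_compromised(b"a key from a fixed OpenSSL"))      # False
```

The check is cheap and one-off per key, which is why scanning the existing key store and mailing affected users is a reasonable ask.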
"DHCP exchange) from being poisoned by a man in the middle?"
The DHCP exchange is on my LAN and so under my own control. Not perfectly immune to attacks, as they could p0wn the router, etc, but far, far harder to do than out on the WAN.
"What does DHCP have to do with HTTP/HTTPS?"
Don't certificates sign for a given IP address? What if that changes?
"Also, you do a dis-service to systems admins/engineers by repeatedly writing that only developers can manage redirects and handling the nuances of making SSL work"
OK so who patches old expensive colour A3 laser printers to add SSL support? Have you seen much sign of software patching/upgrades even for new/recent printers?
"they'll still prevent man-in-the-middle alteration"
If world & dog just ignores dodgy or revoked certs (like Google do in Chrome) when so many stink and/or change for no good reason, then what is to stop an ISP doing a proxy with some self-signed cert for everywhere?
Yes, like the shitty business of defaulting to US Legal paper size on every update on *NIX platforms. That bug has also been open for more than a decade. Maybe a small amount of time fixing stuff would bring more happiness to users than pointless dicking around with GUIs and pushing policies out that break things?
Vaccines have considerably less of a down-side than not being able to access old sites and local printers, router config pages, etc.
Forgot to mention network printers - how many of them support https? And how well is that going to work with DHCP?
Indeed, they tried to, with pointless GUI changes and a "let's copy Chrome" approach to hide and obscure menus and other features of interest.
Why stop there? Why not also break access to your local router, home NAS, and lots of old but interesting sites which the owners of have died, moved on, or otherwise given up maintenance on?
Really, some developers seem to spend their day in a circle-jerk reassuring each other that they are right and the rest of the world (i.e. the users of their product) are fools for not agreeing.
"but like real democracy, it only works if every person has one vote"
That aspect appeals to the idea of fairness - that the robber barons, etc, don't get to dictate over the views of the proletariat by virtue of money or connections.
But the problem with democracy in practice is the same one MS has: the voters are often ill-informed or idiots, and the choices on offer have been pre-selected by a few with vested interests.
However, it should also be stressed that GUI stupidity is not an MS exclusive, as we see various flavours of Linux desktops, browsers like Chrome and Firefox, and the anointed "master" of GUI design, Apple, all pushing unwanted and/or ugly and/or irritating changes out on long-suffering users.
Really what a lot of people don't want is just that - change for no good reason (as they see it). Would I be so pissed off with the Office ribbon if there was a small config option to put the menus back as they were? No. Same for changes Gnome has made, etc.
You seem to have ignored the part that it will be the service operators who hold the data, not the police. So they have to make a request, with justification I hope, as they currently do for phone billing records.
Also it might not be a case of you being a criminal, you might also be the victim and said short-term metadata might be of help.
I doubt very much that data retention has helped stop terrorist attacks, etc, in spite of all that Gov around the world claim as the justification.
However, I can see some use in normal policing for being able to access recent data (maybe several weeks, as the Germans appear to be proposing). For example, if someone goes missing in suspicious circumstances, to find the last place their mobile was seen, etc, or if someone is accused of committing a crime in a given location/time window.
The real issues most folk have with data retention are (a) the time-scale, and what long-term hoarding means for digging dirt on those who fall out of favour, (b) access by world+dog in government on the slightest pretext, and (c) feature-creep when it becomes useful for something else that is profitable, etc.
So personally I don't have a problem of short-term retention of several weeks and all access being by a properly justified court order.
"Nautilus, the default file browser which has about 30 per cent of the features it once had"
And that kind of sums up the whole GNOME 3 experience. In fact it seems to sum up the majority of GUI changes these days, pointless tinkering with eye-candy and the removal of features that some up-their-own-arse developer decided you didn't really need.
Sure it's not the Wirrn?
Thanks for an interesting article - certainly I had no idea of the definitions used, and how by-product elements are, by definition, ones without reserves.
I think it's an Igor you are looking for...
(don't worry one will appear behind you as if by magic)
Really, are people STILL falling for that one?
Another minor aspect of the Umbongo "soft drink" was it would not ferment.
In spite of it claiming to be a fruit-juice drink, there must have been some non-volatile preservative in there, as even boiling it up, then cooling it and adding yeast, etc, failed to produce any viable fermentation.
I'm sure you will understand how important this information is to your whole day...