Oh well, makes a change from toad licking I guess.
All hail the hypnotoad!!!
5659 publicly visible posts • joined 15 Mar 2007
That is a lofty goal, but I think the problem of slowing a probe down to get into orbit around a (relatively) light system is going to be a show-stopper in terms of fuel demands (as you have to get the probe+fuel up there and fast enough in the first place).
An atomic powered ion-engine craft might be possible...
On a more serious note, in engineering in particular there is a shortage of women entering the subject to study (compared to, e.g., biochemistry), no doubt due to various factors, but that in turn has an impact on the gender bias of typical engineering companies and university staff (who tend to reflect the entry stats from some 5-20 years previously).
Tackling the issues around that at school age would be a good start.
Or just giving us engineers all much more pay, THEN we would have more uptake :(
Even though it caused an upsetting event here by being a touch too sensitive, it is still much better than Clementine's computer, which lacked a watchdog and paid for that blunder with a serious loss of science after it got into trouble and wasted its fuel:
http://www.ganssle.com/watchdogs.htm
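The watchdog idea being praised is simple enough to sketch in a few lines. This is a hypothetical illustration (class and variable names are mine, and a real flight computer would reset the processor rather than set a flag): the code must "kick" the timer regularly, and if it hangs, the watchdog fires.

```python
import threading
import time

class Watchdog:
    """Toy watchdog: if kick() isn't called within 'timeout' seconds,
    the expired flag is set. Real hardware would reboot the CPU here."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.expired = False
        self._timer = None
        self.kick()

    def kick(self):
        # Cancel the pending timer and start a fresh one.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self._expire)
        self._timer.daemon = True
        self._timer.start()

    def _expire(self):
        self.expired = True

    def stop(self):
        self._timer.cancel()

wd = Watchdog(0.2)
for _ in range(3):      # healthy loop: kicks arrive well inside the timeout
    time.sleep(0.05)
    wd.kick()
healthy = not wd.expired
time.sleep(0.5)         # "hung" loop: no kicks, so the watchdog fires
hung = wd.expired
wd.stop()
print(healthy, hung)    # -> True True
```

Clementine's failure mode was exactly the second half of that demo, minus the recovery.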
A nice beer for the folks looking after the probe
Indeed. Once we step beyond the ethics of what this company does (did?) and those who exposed the data, it will be interesting to see some proper analysis of the techniques used and if they relied on zero-day bugs, or Trojan installs, or maybe even state-instigated installation by suppliers/ISPs, etc.
Best not to forget what happened in the UK. This article covers the background and is essential first reading.
http://arstechnica.com/tech-policy/news/2010/09/amounts-to-blackmail-inside-a-p2p-settlement-letter-factory.ars
Then enjoy this:
http://torrentfreak.com/acslaws-anti-piracy-downfall-sends-hitler-crazy-101004/
The CBBC lot have produced some genuinely good programmes in recent years; "Horrible Histories" and "Young Dracula" stand out just off the top of my head.
But all else on cable and broadcast has gotten shittier as more adverts are stuffed in, and more channels means less spent per channel on anything worthwhile.
That report about Toyota's software is truly shocking - so many mistakes at the "just out of Uni and never worked on something serious" level, and a corporate arrogance (or ignorance) such that the system fails on so many of the safety guidelines in the automotive industry's own MISRA standards.
I think the goal of NTP's guidelines is to stop a major supplier hard-coding "generic" pool servers into their product, as correcting any problems later is a major headache.
So what they are asking is that vendors create their own pool (maybe providing their own servers as well, but I don't know if they have to, as they could be aliases of other pool servers) so the hard-coded server addresses are unique to their product(s). That way, if there are any problems, it can be throttled or blocked, etc, without impacting on anyone else.
As for software projects defaulting to the generic "pool" of NTP servers, I kind of feel they should not - that anyone choosing to install such software should be made to configure it. Of course, when such software is part of an OS or application, you are back to the vendor issue again and it should be pre-configured with the vendor's pool of server names.
Incidentally, in this day and age why are ISPs not providing NTP servers and offering the address via DHCP?
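For what it's worth, the plumbing for that already exists: DHCP option 42 carries a list of NTP servers. A sketch for ISC dhcpd (all addresses here are made-up documentation ranges, not real servers):

```
# ISC dhcpd: hand out NTP servers alongside the lease (DHCP option 42)
subnet 192.0.2.0 netmask 255.255.255.0 {
    range 192.0.2.100 192.0.2.200;
    option ntp-servers 192.0.2.1, 192.0.2.2;
}
```

Whether the client actually uses the offered servers is another matter, of course.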
I prefer my own suggestion - make the insertion and removal of a leap second very frequent, say once per week, but arranged so they correct on average for the amount we need.
That way software developers will be forced to test their own damn code and the problems will be found and fixed.
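The arithmetic for that scheme is trivial - here is a rough sketch (function name and the interleaving choice are mine): every week gets a +1 or -1 second event, with the mix chosen so the yearly total equals whatever correction is actually needed.

```python
def weekly_leap_schedule(needed_per_year, weeks=52):
    """Every week gets a +1 or -1 leap second; the split is chosen so
    the yearly sum equals needed_per_year (which must have the same
    parity as weeks for an exact match)."""
    plus = (weeks + needed_per_year) // 2   # number of inserted seconds
    minus = weeks - plus                    # number of removed seconds
    sched = []
    # Alternate signs while both remain; the surplus sign bunches up at
    # the end, so consumers also get tested on back-to-back same-sign events.
    while plus or minus:
        if plus:
            sched.append(+1); plus -= 1
        if minus:
            sched.append(-1); minus -= 1
    return sched

s = weekly_leap_schedule(2)   # say Earth needs +2 s this year
print(len(s), sum(s))         # -> 52 2
```

52 events a year and every code path gets exercised - no more "it only happens once a decade" excuses.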
Interesting. But more disturbing is Lennart Poettering's attitude to all of this.
For a start, WTF is systemd doing controlling the time when ntpd has done so well over the years?
Secondly, his attitude is one of outstanding arrogance in that (a) they are defaulting to Google's time servers, which are not offered or guaranteed to work in the future and do not abide by official NTP/UTC time keeping, and (b) he seems not to care that such defaults put into systemd will most likely get used by others. If your defaults are not world-wide sane, DON'T PUT THEM IN AT ALL!
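At least the compiled-in default can be overridden per machine in systemd-timesyncd's config file. A sketch (the vendor pool names are illustrative, not real):

```
# /etc/systemd/timesyncd.conf -- override the baked-in defaults
# (server names here are made up; use your vendor's own pool zone)
[Time]
NTP=0.example-vendor.pool.ntp.org 1.example-vendor.pool.ntp.org
FallbackNTP=2.example-vendor.pool.ntp.org
```

But that only helps admins who know to do it, which is exactly why the shipped defaults matter.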
If you follow El Reg and have read other articles by Trevor Pott you would know he simply speaks his mind on what has worked for his business and that may, or may not, be a MS-based solution. He is certainly not a "pathetic shill" as you suggest.
You may not care about w2k3, and certainly I don't care as I have no responsibility for w2k3 machines, but there are a lot of businesses out there that are about to get their backsides bitten.
Most of it is down to a lack of forward planning, and some of it is down to changes MS have made. You know, like no new 32-bit server machines supporting 16-bit code, or updated security practices that bork badly written older software (like some of MS' own code from around 2000...)
They have to do something: whether it is crossed fingers and more care in firewalls, or migrating some off the physical w2k3 machine and leaving the troublesome code on a VM, or even totally re-thinking what they do and why. So while it may be tedious to hear repeatedly, it is also with a good reason.
Of course they do - they don't have problems with this by design.
It's only the ground-based software, implemented by folk who (a) don't know what they are doing, and (b) don't test things, that causes problems.
Fsck'em - why not have leap seconds +/- every week and occasionally do two in the same direction on consecutive events? That way stuff will be tested and fixed because the code monkeys can't argue "oh it only happens once per 2 years or so".
Chromium is the open-source part of the web browser project, and Chrome is Google's version with additional proprietary stuff built-in (flash, other spyware).
That is what has kicked off the storm: that Google had modified the open source part to download a closed-source (and pretty creepy) feature for voice recognition.
AFAIK it has nothing to do with the license, but that MS never attempted to port the ntvdm to 64-bit.
Most likely for the same reason that 64-bit dosemu is different to 32-bit, and that is down to the 64-bit mode of the CPU not supporting virtual-8086 (VM86) mode to make life easier.
However, as you say a VM will do for your remaining 32-bit Windows (provided you don't have hardware dependency).
Yes, DOSbox is also worth a turn but we had hardware I/O demands so it had to be dosemu.
dosemu also ships with a copy of FreeDOS, though you can use MS-DOS as well. You can configure the time keeping part to either follow the host time (so you get NTP accuracy, subject to the ~55ms tick of DOS time-keeping) or have it decoupled from the host, which is handy for testing applications with other dates & times.
I tried it beyond the 2038 point and on 64-bit it is fine. Puts off the date problems for long enough for most readers to be commentarding on St. Peter's book...
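For anyone who hasn't met the 2038 problem: it is just a signed 32-bit time_t running out. A quick illustration (the wrap is simulated with struct, since a 64-bit box won't do it for real):

```python
import datetime
import struct

Y2038 = 2**31   # first second count a signed 32-bit time_t cannot hold

# A 64-bit time_t sails straight past it:
ok = datetime.datetime.fromtimestamp(Y2038, datetime.timezone.utc)
print(ok)       # -> 2038-01-19 03:14:08+00:00

# But squeeze the same count into a signed 32-bit slot and it wraps:
wrapped = struct.unpack('<i', struct.pack('<I', Y2038))[0]
print(wrapped)  # -> -2147483648, i.e. back to 1901 on 32-bit systems
```

Hence "on 64-bit it is fine" - the epoch counter there won't overflow for billions of years.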
If you are unlucky enough to have 16-bit + 32-bit + specific hardware/driver + IE dependency then I really do pity you :(
However, if you have 16-bit DOS style stuff then you might also want to try dosemu for Linux. Beware it also has some oddities in terms of 32-bit versus 64-bit versions, but it might be an easier choice. Also, if you depend on hardware that assumes direct DOS-style I/O access (as we do), then this option might be a way of avoiding having real DOS or Win95/98 machines any more, since dosemu can be run with sudo (root) access and configured to permit specific hardware I/O in dosemu.conf
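From memory, the relevant dosemu.conf knobs look something like this - do check the comments in your own dosemu.conf for the exact syntax and option names on your version, as I'm recalling rather than quoting:

```
# dosemu.conf -- sketch from memory, verify against your version's docs
$_ports = "fast range 0x300 0x31f"   # grant direct I/O to a card at 0x300-0x31f
$_timemode = "linux"                 # follow the host (NTP-disciplined) clock
```

Needless to say, handing a DOS program raw port access is only sane on a machine you have locked down elsewhere.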
If all else fails, then identify what is not going to work on 64-bit systems and keep that on a dedicated machine/VM and really go out of your way to protect it from the big bad world by putting it on a separate VLAN, etc, and firewalling it to the hilt. Even if it has to print to a network printer, try to block the printer connection as much as possible as they are often never patched and probably contain vulnerable web servers for configuring them, etc.
Really I have to hand it to the German BOFH when they declared that a complete rebuild is the only (the final?) solution.
Think about it: you get to replace old routers and start with safe/sane settings for the firewalls, etc. You also dump all of the out of support 2k3 servers and XP machines, and give everyone a new desktop.
Personally I would go Linux with a Windows VM for special stuff, partly to reduce the COTS malware risks (a lot of malware refuses to run in a VM, to avoid analysis), but looking at a more realistic scenario you get to deploy new desktops with known configurations and can have the ACLs set so no user program can 'execute', only those the BOFH has installed in the correct locations.
Along with that you deploy only known, patched, and properly configured applications. Sure, you have to re-import user data, but that can all be scanned first and maybe even make users ask for what they actually *need* to have, further reducing the risk of p0wned stuff.
We salute you!
The real problem is if you have 16-bit Win95/DOS era software as that won't run on 64-bit Windows. OK you may also have driver problems as well for older hardware under 64-bit (remember how crappy 64-bit XP support was?). Sometimes it will run on Linux emulators (Wine, or dosemu, etc) but that is a significant gamble.
Now you might be saying "Who runs 16-bit any more?" without realising there is a lot of small speciality software from that era that works, and changing the software to a newer version is a major PITA for various reasons:
1) New software license costs
2) Maybe no longer compatible with old, special, and very expensive hardware
3) Different file formats so you can't read/write previous data
4) Different work-flow so you have to re-jig lots of scripts and re-train users.
5) All of the above often gets you nothing more than "supported OS" status as it will do exactly the same job as the old one (maybe better, maybe more buggy).
So while using old servers for general stuff is barely excusable, there are some VERY GOOD reasons why it won't happen for many. But as other commentards have pointed out, you should be working on the assumption that ALL systems can be p0wnd (old & new, Windows & Linux) and planning how you detect that and restore to a clean state when it happens, not IF it happens.
I think the main reason is there is SO MUCH silicon technology, experience, and fab facilities it is easy to make complex chips from it, whereas GaAs has been generally kept for the fastest of products where you don't have the same density of components but need higher speed. GaAs is more radiation-hard than silicon, but also less tolerant to heat. I'm sure others with more knowledge can provide a better informed answer though.
Firstly all these remote locations don't need access to a lot of data at any one time, so the database server ought to rate-limit requests and queries to a reasonable amount per authorised machine/user.
Secondly having something where the leak is so significant really ought to have raised the question about how many sites really need to access it, and for them you could have deployed specific machines with dedicated hardware encryption in the network card (or a dedicated secure router) to tunnel the data to/from the server.
None of them would have any simple path to the outside world, so an attacker would need multiple physical access steps to begin hacking past the user account and rate-limiting aspects. Anyone needing to access the database would find those PC(s) in a reasonably secured room, log on and do their job, then go. The room could be CCTV'd so any attempted tampering would be on record, etc.
It is all perfectly possible, but it costs money to do (much less than the hack is going to cost, I'll bet) and adds some inconvenience, but still much easier than the old days of paper files. So it's not really *that* inconvenient.
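The rate-limiting idea above is usually done as a token bucket - a rough sketch (class name and numbers are mine, not from any particular product): each authorised machine/user gets a sustained rate plus a small burst, so a bulk scrape of the database simply stalls.

```python
import time

class TokenBucket:
    """Per-user rate limiter: 'rate' requests/second sustained, with
    bursts up to 'burst'. A bulk scrape drains the bucket and further
    queries are refused until tokens trickle back."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, burst=10)
granted = sum(bucket.allow() for _ in range(1000))
print(granted)   # roughly 10: the burst is served, the scrape is not
```

A legitimate clerk never notices the limit; whoever pulled millions of records absolutely would have.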
Again, you are looking at a much higher bar than plugging in a device to an unused port at the recreational area, etc.
Now you are actually tampering with the internal wiring and could easily install a keyboard logger, etc. But on an isolated network you would have to use a radio link out, and that could be monitored as part of a sweep for bugging anyway, if you are sufficiently paranoid or working to regulations that demand that degree of security. That is why the "red" cables in proper high security installations have to be visible along their whole length and subject to regular inspections for tampering (or shielded fibre with some fibres used as tamper-detection, etc).
You are of course correct.
I was just thinking aloud about things that can be done for little physical cost on "normal" PCs & networks typically used in below-secret Gov, Business & Universities. OK, air-gapping is not common on those, but all the other features are pretty much standard on Cisco and similar kit, so having red/black networks for internal/external can be done and spare kit used for both.
Edited to add, worth a read:
http://www.gocsc.com/UserFiles/File/Ortronics/WhitePaperGovtv5AUG2011FINAL.pdf
Well, there are old people, and others with vision defects, for whom an 11" retina screen is an utter waste of money.
A lot of folk use a laptop as a semi-permanent thing because it's smaller than desktop + monitor and can be tidied away fairly easily. For them a 15" or 17" screen is just so much nicer to work on, and the size and weight are not the same issue as for those always travelling with it.
I think in most cases the issue is not that you could search for a person's past events using specific knowledge, but that Google would return all sorts of older stuff with just a person's name.
Surely a technical company such as Google could implement a filter that only returns recent results (say the past 12 months) when the search is a person's name, but only surfaces older material if you really go digging deeper with specific searches and date ranges (like you once had to do before t'Internet came along)?
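The logic I have in mind is about this complicated (function name and the example data are entirely made up):

```python
import datetime

def filter_results(results, query_has_date_range, now=None):
    """Name-search results default to the last 12 months; older items
    only appear when the searcher explicitly supplies a date range.
    'results' is a list of (title, date) pairs."""
    now = now or datetime.date.today()
    cutoff = now - datetime.timedelta(days=365)
    if query_has_date_range:
        return results          # explicit digging: return everything
    return [r for r in results if r[1] >= cutoff]

hits = [("old scandal", datetime.date(2009, 6, 1)),
        ("recent talk", datetime.date.today())]
print(filter_results(hits, query_has_date_range=False))  # recent only
print(filter_results(hits, query_has_date_range=True))   # both
```

The hard part is obviously deciding that a query *is* a person's name, not the filtering itself.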