Re: Captain Badmouth
No, they would need control over your email account to do that. Of course if you used the same password...
The problem is not the change period for any passwords.
The problem is people who use the same password for sites like Linkedin, Facebook, etc, and their work, bank accounts, etc
"In reality, both restrictions should have been subjected to competition law scrutiny long ago"
Really? I suspect we would still be waiting for the court verdict, and the last decade of phone development would have been at a snail's pace (unless Nokia or MS had really stepped up to challenge Apple, and they seemed to struggle at that due to bureaucracy).
Otherwise you are quite right, Sun seriously mis-stepped on mobile, and Oracle appear simply to want Java as a stick to sue Google with. Given the current piss-poor state of Java, after several years of Oracle's finest guidance and support, in terms of compatibility and security it manages to make Android's lack of patching seem almost benign. Almost.
"The ZFS issue is just an example of how difficult could be to develop kernel modules without giving IP away."
That shows a complete misunderstanding of the situation. Firstly, virtually no "applications" need any kernel modules; typically those are for special hardware and things like file systems. Secondly, you can develop a kernel module and make it available as a binary blob to be added to someone's Linux system if you want - after all, that is what Nvidia, etc, do for graphics drivers. The current argument is about distributing the GPL Linux kernel with a pre-compiled non-GPL driver, and whether that makes it "distributing a derivative" of the kernel (which seems a somewhat bizarre argument).
The lack of specialist applications for anything other than Windows is simply a historical artefact of 90+% of desktop computers being Windows-based: why would you bother with the other 10%? However, if a lot of folk move off Windows due to this, or other reasons, then software developers may start to see the value in using cross-platform tools (like Qt and similar) so they are not tied to MS's uncertain future roadmap.
Or just run stuff in a Win7 VM without email/web/external Internet access and forget about the future patching (or lack of) for the OS.
Actually you are quite right, it is perfectly within MS's legal rights to make the stable business version a premium price, and for their shareholders it is the obvious and reasonable way to get more value from the MS ecosystem (given the shift to phone-based use for most personal applications that MS failed to crack).
I leave it as an exercise for the reader to compute if following this route is better or worse than going to an alternative OS.
How long is your LTSB?
Are all of the OS things covered, or is Edge, etc, excluded?
No, but I could sit on a 5 year LTS version of Linux for the best part of that time.
But as you say, as soon as it's "as a service" you basically have to jump to their tune: OS change breaks some bespoke application? Tough shit, pay them to fix it. What, that updated version is not compatible with your archive of valuable data? Tough shit. Office 365 or Google Docs has played "hide the feature" again? Tough shit, retrain your staff or stop using it.
And who slurps your private data for profit?
Depends on where you start from, those still struggling to get rid of IE & ActiveX crap are in for a massive re-wire effort either way.
Sadly most people, including some IT-literate sorts, simply have no plan for data loss. It could be a HDD failure, some "gross administrative error" formatting something, a laptop being stolen, or a cryptolocker attack. Sooner or later it happens (couple of % per year for HDD, no idea how common cryptolocker is in comparison) and only then do most folk do anything about it.
When it's too late.
...for those without working, protected backup copies I guess.
Like RAID-6 it gives you an extra degree of redundancy during a rebuild. And for all of you out there who have seen RAID-5 rebuilds cough blood on sector errors only found during the rebuild and with no parity remaining to correct them, that is vital.
But if you are looking at a week's rebuild time on an 8TB disk under real-life conditions, you still have an uneasy window for something else to go wrong.
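To put a rough number on that uneasy window: a back-of-envelope sketch of the odds of hitting an unrecoverable read error (URE) while re-reading the surviving disks during a rebuild. The URE rate used here is the commonly quoted consumer-drive figure of 1 in 1e14 bits; real drives vary widely, so treat the output as illustrative only.

```python
def p_ure_during_rebuild(data_disks, disk_tb, ure_per_bit=1e-14):
    """Probability of at least one URE while reading the surviving disks."""
    bits_read = data_disks * disk_tb * 8e12   # TB -> bits (decimal TB)
    # P(at least one URE) = 1 - P(no URE on any bit read)
    return 1 - (1 - ure_per_bit) ** bits_read

# A 6-disk RAID-5 of 8TB drives: a rebuild re-reads the 5 surviving disks.
risk = p_ure_during_rebuild(data_disks=5, disk_tb=8)
```

With those assumed numbers the risk comes out well over 90%, which is exactly why a RAID-5 rebuild "coughing blood" on a sector error is not bad luck but the expected case at this capacity.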
They had all the bits to make a great and reasonably priced system, but pulled defeat from the jaws of victory by shipping a prototype version and then (largely due to the Oracle take-over) losing key staff and failing to invest enough in fixing it, instead adding tick-box features that the sales folk were asking for.
Now of course Oracle has no interest in the lower priced end of the market, or even of selling storage as an item instead of part of a large profitable database deal. Others have stepped in with the same idea of a ZFS based appliance, but have any of them really sorted out the management and recovery aspects to make it reliable and painless to use?
Also we are seeing longer and longer rebuild times on bigger and bigger HDDs, which are still your best bet for GB/£, and ZFS has not got anything like the Dell "data pools" where in effect your RAID stripes are randomly spread over disks in a much bigger pool. There a failed HDD results in a parallel rebuild of all affected RAID stripes to other HDDs, so you don't have the single spare/replacement HDD bottleneck of write speed versus capacity.
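The arithmetic behind that bottleneck is simple enough to sketch: a single hot-spare is limited by its own write speed, while spreading the reconstructed stripes over many disks parallelises the writes. The 150 MB/s sustained write speed below is an illustrative assumption, not a vendor spec.

```python
def rebuild_hours(capacity_tb, writers, mb_per_s=150):
    """Hours to rewrite capacity_tb of data across `writers` disks in parallel."""
    total_mb = capacity_tb * 1e6          # decimal TB -> MB
    return total_mb / (writers * mb_per_s) / 3600

single_spare = rebuild_hours(8, writers=1)    # classic hot-spare rebuild
pooled       = rebuild_hours(8, writers=20)   # stripes spread over 20 disks
```

Under these assumptions the single spare takes around 15 hours of pure writing (real-world contention makes it far worse), while the pooled rebuild divides that by the number of participating disks.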
I guess you have tried umount -f already?
1) Don't use de-dupe unless you have absolutely masses of RAM and something like multiple VMs that share a lot in common.
2) Fail over - just don't go there.
So far we have used the Oracle fail-over feature, which sucked donkey balls big time. Others have said of other fail-over software that it causes as much down-time as it is supposed to prevent. Stopping the "split brain" risk is very hard to do.
You might be better served by having a small separate arbiter (like a Raspberry Pi, etc) whose sole job is to spot an unusable system and power it down (ILOM command, or network-controlled power strip) and bring up the 2nd head. Syncing the 2nd head's status is another area of pain; again, maybe it is best if the arbiter acts to configure both machines on boot from a central configuration. Yes, you just got a difficult job to implement - form your own start-up...
Of course you could buy an Oracle storage appliance and pay for a system where the management interface is buggy and locks up during problems (not fixed over 5 years of support), where the documentation is incomplete (and then they move/withdraw Sun blogs that answered some of this), the disks have interface problems (oh dear, yes, the SATA ones are like that, no fix provided) and the power supplies and other hardware show phantom faults that are, once again, never really explained or fixed.
1) It gives you someone else to blame for any TITSUP events
2) You (naively) thought you would get professional support with it
So it kind of comes down to scale, budget and belief in yourself.
This is nothing to do with stopping "pirated" software. Simply that if you wish to use the given code legally you follow the terms. Basically it is the right to offer your code with the proviso that anyone benefiting from it returns the favour by offering the derivative as usable source code.
The GPL & ZFS argument is not that simple: both code sources and modifications are available. What it comes down to is whether loading a kernel module makes it a "derivative" of the kernel, so that the GPL licence terms are enforceable, or just some blob you wish to offer (with code) for use like a closed-source video driver.
To stop others using it commercially without any need to provide worthwhile modifications such as bug-fixes or improvements in return.
To stop anyone else claiming another license on it to your detriment.
There may be other reasons, but in principle you are asking for "support" instead of money in return for your acknowledged work.
"Seems like Google are being penalized for making good products and a search engine that people want to use."
No, they are being penalised for promoting their own business above competitors by deliberately rigging the search results. That is the point: it is no longer simply an algorithm that finds the best match to what you asked for, but one with another fudge-factor that promotes their own stuff.
Didn't you ever wonder how paid search promotion worked?
Or the fact you can still embed shit in an Office document?
"hadn't realized how sensitive the grid was to ad-break synchronized kettle usage!"
Not just that - millions of toilets flushing also pushes up demand for electricity for water supply pumping.
The point is you can't trust anything that:
1) has closed components
2) has known data slurping components
3) has limits on how *YOU* grant permissions
4) has enough value for subtle flaws in open parts to be engineered
But people do trust phones, and really should not. Maybe it is best to use the one with the least on-going cooperation with your own government and/or corporate interests as the least likely to screw you over outside of actual espionage?
Damn this AC business! Must have been another AC whose mom I was doing last night, she is definitely alive and well.
"Add in the fact that Poisson arrival rates are only an assumption, and that clusters of disk drive failures can happen more frequently than the model suggests"
Like when the power goes off and then an hour later you try to power up an array from cold that has been spinning for 4 years?
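That power-cycle scenario is exactly what an independent-failure model misses: a common-cause shock hits many aged drives at once. A toy simulation with made-up per-drive probabilities (1% baseline, +5% cold-start shock - illustrative numbers, not measured rates) shows how the clustered case dwarfs the Poisson-style expectation.

```python
import random

def failures_after_power_cycle(drives=100, base_p=0.01, cycle_p=0.05, seed=42):
    """Compare independent failures vs the same drives after a cold power-up."""
    rng = random.Random(seed)
    draws = [rng.random() for _ in range(drives)]
    independent = sum(u < base_p for u in draws)            # Poisson-ish baseline
    clustered = sum(u < base_p + cycle_p for u in draws)    # correlated shock added
    return independent, clustered

ind, clu = failures_after_power_cycle()
```

Using the same random draws for both counts makes the comparison fair: every drive that fails independently also fails in the shocked scenario, plus the extras the shock pushes over the edge - the clustering the model "suggests" cannot happen.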
Well one hopes they might have a moment of revelation on the road to Damascus...
Not for him, but she does for the rest of us!
The only choice for serious nut-gripping!
Upvote for most points.
I also use Halifax with Firefox & Linux but only occasionally and had no big problems. But sticking to paper statements...
This assumes that any decent pirate is going to use a site under USA control in the first place.
In fact, I am surprised that the pirate bay has not yet got a distributed web site going, sort of a bit torrent of the site as locally accessible web pages, but with some crypto key to allow updates as needed. No central address/registry to get whacked, no need for backups when it is spread over 10M computers...
Now if it was not for the UK's shitty connectivity to so many remote areas...
"Belt & suspenders" has a slightly different meaning this side of the pond!
Let's face it, a large number of the recurring vulnerabilities in software written in C, due to buffer overruns and misuse of printf()-like format strings, are ALREADY flagged by compilers like gcc if you use -Wall.
Problem is, your code monkeys have to give a monkey's and (a) use those options, and (b) fix them when found.
I noticed that when I visited NY years ago, the cashier didn't even look at my signature. I think the most likely reason is the one given by @JBolwer above:
"When a US debit card is run *without* the PIN it is billed as a credit card (for the store) and lots of steel-rice-bowl types (as my Chinese wife would refer to them) get humongous amounts of cut out of this. And the store gets Ripped."
1) First step, always, is to fully backup/image your working Windows PC.
2) Second step is to spend a short while going through each bit of software you use (not always what is installed!) and create a list of it, why you use it, and any special catches with that (e.g. you must have V1.1 because V2 broke XYZ...etc). Make sure you can find the installation media/files, and any licence keys, etc.
3) From step 2, consider how critical EXACT compatibility is, and how much compatibility you really need. From this you can decide if there are Linux versions that are good/better substitutes. Generally for email & web you will find Thunderbird & Firefox are shipped with most distros and work just fine, as long as you are not tied to Exchange and/or crappy IE-only Intranet services.
4) Decide if you want to dual-boot, or try creating a Windows VM from your current PC. Both have slight risk, and to be perfectly honest, if you can create a clean VM of Windows, patch it, and install only the software you really need, it will be faster and more reliable. Pros & cons:
Dual-boot - gives you Windows native speed for games, etc, but you lose out on disk space and risk some dumb-ass Windows update breaking the grub boot-loader (some shitty old software, like certain Adobe things, would also break grub booting by putting DRM stuff just after the MBR and outside of the Windows file system assuming nobody ever needed that...).
VM - allows simultaneous Linux (e.g. web/email safely) and Windows (specialist software) but is more memory-heavy and you lose out on fancy graphics speed.
Also it allows "rootmydeviceyoubunchofuselessfuckers" to match...
I don't mean the read/write permissions of individual /proc entries, I mean the lack of sand-boxing of all user processes to mask such FAIL! cases as reported here.
It is not a real file, but part of the device driver's internal memory that is presented as if it were a file.
That is how the UNIX model works, everything is a "file". You can access the keyboard and terminal as stdin and stdout, etc (same as Windows there). Hard disks appear as /dev/sda and the partitions on them as /dev/sda1 and so on, serial ports as /dev/ttyS0, etc, etc.
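You can see the "everything is a file" model in action from any scripting language: /proc entries open and read like ordinary files, but the kernel synthesises their contents on demand (Linux-only, naturally).

```python
import os

def read_own_status():
    """Read this process's kernel-generated status 'file'."""
    # /proc/self is a symlink the kernel points at /proc/<our pid>
    with open("/proc/self/status") as f:
        return f.read()

status = read_own_status()
# status contains plain-text lines like "Name:\tpython3" and "Pid:\t1234"
pid_line = [line for line in status.splitlines() if line.startswith("Pid:")][0]
```

Note nothing on disk backs that read: the bytes are generated by the procfs driver at open/read time, which is exactly why the entry discussed above is "not a real file".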
Why are user processes, presumably in some sort of sandbox for protection against dodgy stuff, allowed to *write* to /proc? Often disabling access to /proc except for your own process ID is one of the standard AppArmor settings.
Just trying to stand back from this massive FAIL and look at the bigger picture of system protection. Oh, and while we are at it, can someone beat the Chrome and Firefox dev teams until they start using and maintaining a tight AppArmor profile as well?
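For the curious, the kind of AppArmor rule meant here looks roughly like this. A hand-written illustrative fragment (the profile name and paths are examples, not from any shipped profile); since AppArmor profiles are default-deny, simply granting access only to the process's own /proc entries leaves everyone else's blocked:

```
# Illustrative fragment - profile name and binary path are examples.
profile example_app /usr/bin/example-app {
  # Default-deny: anything not listed below is refused, so other
  # processes' /proc entries are simply never granted.
  owner @{PROC}/@{pid}/status r,   # read our own status only
  owner @{PROC}/@{pid}/fd/ r,      # and our own fd listing
}
```

The `owner` keyword additionally restricts the rule to files owned by the same user as the confined process.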
Of course in a similar vein:
Targeted hacking is much less of a concern to me than the hoovering of EVERYTHING you access via your ISP "just in case".
Should be curtains for the prosecutor
Indeed, people do dumb things, people make mistakes.
The issue here is it's the 2nd time it's happened, and it's a known risk, so someone high up needs a total bollocking for not putting in place technical measures to stop stupid abuse of To/CC fields. Really, having a limit of 5 or so (maybe with an override button, an "Are you really sure?" prompt, and a list of personal actions that *will* be applied if abused) would make little difference to sane email use, and having other configured options, like email lists for any internal or external groups that need large updates, would deal with the rest.
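The guard described is a few lines of code on the sending path, which is what makes the lack of it worth a bollocking. A minimal sketch; the threshold and wording are illustrative choices, not anyone's actual policy.

```python
MAX_VISIBLE_RECIPIENTS = 5   # illustrative limit, per the "5 or so" above

def check_recipients(to, cc):
    """Refuse sends that expose too many addresses in To/CC."""
    visible = len(to) + len(cc)
    if visible > MAX_VISIBLE_RECIPIENTS:
        return (False, f"{visible} visible recipients - use BCC or a "
                       "mailing list, or confirm the override")
    return (True, "ok")

ok, msg = check_recipients(to=["a@example.com"], cc=[])
```

A real deployment would hang this off the mail client or the outbound MTA, with the override logged against the sender so the "personal actions" threat has teeth.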
I would be more worried by an increase in mercury poisoning from said low-energy bulbs being dumped in landfill and leaching in to the water table over a few decades. But then I don't know the facts so could be talking out of my arse for all anyone knows...
I noticed that as well, Windows 7 VM was "checking for updates" for a couple of hours before I turned it off as not needed.
I used to dual-boot Windows and Ubuntu, but now it's Ubuntu with a couple of Windows VMs for stuff that needs it. Much more flexible and works fine with most things, though not much good for games that need top graphics performance, or for special hardware that needs drivers for PCI connections (OK for USB, etc).
Or coat the tips with some insect repellent every so often?
What is actually changing? The MS web page does not say what these "modern synchronization technologies" are that are needed. POP or IMAP, are they really modern?
Or are they the ones being deleted to force poor Outlook users into a web interface, to spam you with adverts more effectively?
Biting the hand that feeds IT © 1998–2017