Re: Who's their Lawyer?
The article was wrong. It has been fixed after it was pointed out. It is Blackberry that is getting a nice pile of cash.
It sounds like they had a brilliant design where the local ReadyNAS you can hold in your hand would wipe itself if the ReadyCLOUD account was marked as closed. So if the cloud server mistakenly decides the account is closed, wipes the cloud data, and then tells the local NAS the account is closed, the data is wiped everywhere. If it really does work that way, it is a mindbogglingly stupid design. Essentially you are not in control of your local NAS at that point, and it can only be considered a backup device, not primary storage, which is what a lot of people seem to have thought it was, with cloud backup.
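The failure mode can be sketched in a few lines of Python (all names here are hypothetical illustration, not Netgear's actual code):

```python
# Sketch of the dangerous pattern described above: the local NAS
# blindly trusts a cloud-side account flag, so one cloud-side mistake
# destroys the only remaining copy of the data.
class Nas:
    def __init__(self):
        self.wiped = False
    def wipe(self):
        self.wiped = True

def sync_account_state(local_nas, cloud_state):
    if cloud_state == "closed":
        local_nas.wipe()               # dangerous: no local confirmation

def safer_sync(local_nas, cloud_state, owner_confirmed):
    # A saner design: destructive actions need out-of-band confirmation
    # from the actual owner, not just a flag pushed from the cloud.
    if cloud_state == "closed" and owner_confirmed:
        local_nas.wipe()

nas = Nas()
sync_account_state(nas, "closed")
print(nas.wiped)          # the blind design wipes on a cloud flag alone

nas2 = Nas()
safer_sync(nas2, "closed", owner_confirmed=False)
print(nas2.wiped)         # the safer design does not
```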
The rotary has a much larger mass rotating than the radial, which is not a good thing in an airplane. Tends to make things want to turn when you don't want them to, especially when changing power settings. Some of the high-power WWII planes had enough trouble with just the mass of the prop when increasing power quickly. Doing that with essentially the entire engine spinning is a bad idea. But that is what they did early on.
It does seem terribly wrong to put a 4 cylinder in a Fokker DR I though.
The DR I replica that my dad flies sometimes in Brampton, Canada at least has the right type of engine in it.
Based on the endurance numbers intel is providing, XPoint isn't looking a whole lot better than NAND, which is rather contrary to the original claims about the technology.
So either intel's specs are wrong, or they are having early production problems causing endurance problems.
Certainly 30 complete writes per day is not a lot more than the 17 complete writes per day that the intel NAND SSD is rated for, especially when it is rated for 3 years versus 5 years. Based on intel's hype I was expecting something like 1000 complete writes per day instead.
The latency is rather impressive though.
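For the endurance comparison above, a quick total-lifetime-writes calculation makes the point (the 375 GB capacity is an assumption for illustration; only the DWPD and warranty figures come from the specs being discussed):

```python
# Rough lifetime-writes comparison for the rated drive-writes-per-day
# (DWPD) and warranty periods mentioned above. The 375 GB capacity is
# an illustrative assumption.
def lifetime_writes_tb(dwpd, capacity_gb, warranty_years):
    """Total data written over the warranty period, in TB."""
    return dwpd * capacity_gb * 365 * warranty_years / 1000

xpoint = lifetime_writes_tb(30, 375, 3)   # 30 DWPD for 3 years
nand   = lifetime_writes_tb(17, 375, 5)   # 17 DWPD for 5 years

print(f"XPoint: {xpoint:.0f} TB, NAND: {nand:.0f} TB, "
      f"ratio: {xpoint / nand:.2f}x")
```

At the same capacity the two warranties come out within a few percent of each other in total writes, which is why the headline DWPD number is less impressive than it looks.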
The openssl advertising clause is so obnoxious that if you even say:
Our product has secure connections.
You actually have to do something like:
Our product has secure connections (using OpenSSL copyright x, y, and z, blah blah blah).
Every single time you talk about any feature that relies on what OpenSSL provides. Does everyone do that? Well no, but the license does appear to say exactly that. It is very hard to comply correctly with that license.
It is not just the documentation that has to list the copyrights, or the about info for the application. It's documentation, advertising, discussions of product features, etc.
Except as the last few comments there say, it isn't actually fixed for many linux users.
Wait, you mean there is a step to the process after fast and cheap?
So 40% had bad security and 40% were written in Java EE. Just a coincidence, right?
DRM does not stop piracy.
What would actually reduce piracy is to make buying it legitimately more convenient than pirating.
If you could buy it easily, in a format that would be yours forever (so no server shutdowns to worry about), that you could play on whatever device you wanted when you wanted, could re-encode into a format needed for your device easily, then that would be what one would do. As long as a legitimately bought version is less functional than the pirated version, people will be willing to go through the hassle of finding a pirated version.
Better email a correction request.
Strong encryption already exists. So no matter what new encryption you invent with a backdoor (ignoring for the moment that you can't do that while also making it secure enough to be worth using), there is nothing stopping the criminals from just continuing to use the strong encryption, leaving the new backdoored garbage for the rest of us. So no help for law enforcement, just harm for everyone else.
Security is primarily a software problem.
A lot of arm chips are designed to support signed code execution all the way from the initial boot process, so they do provide a lot of features that allow software to be done securely. Of course you still have to make bug free secure software on top of that, but not much arm can do about that problem.
I did both. :)
Which kind of billion are we talking? I thought they were pretty much doing that already each year, so unless we are talking the 10^12 type of billion, this doesn't seem like news. Certainly for the 10^9 type of billion, I believe they were at 12 billion in 2014 alone, and probably higher now, so a couple of decades at that rate with a bit of growth might very well hit the 10^12 type of billion (although that meaning seems uncommon in English these days from what I can tell).
Reading the Reuters article in fact says Trillion, not Billion, which seems to clarify things.
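For what it's worth, the arithmetic supports the "couple of decades" guess above: starting from 12 billion a year, reaching the 10^12 kind of total takes decades (the 15% growth rate below is an assumption for illustration):

```python
# How long 12 billion messages a year takes to accumulate to a
# trillion total, with and without growth. The 12e9/year starting
# rate is the figure mentioned above; the growth rate is assumed.
def years_to_total(rate_per_year, target, growth=0.0):
    total, years = 0.0, 0
    while total < target:
        total += rate_per_year
        rate_per_year *= 1 + growth
        years += 1
    return years

flat = years_to_total(12e9, 1e12)          # no growth
grow = years_to_total(12e9, 1e12, 0.15)    # assumed 15%/year growth
print(flat, grow)                          # 84 years flat, 19 with growth
```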
The DIMM ones are very non-standard and I haven't actually seen OS support for them (Diablo seems to be totally failing to provide sources for any drivers as far as I can tell, so no idea how they are supposedly working with linux at this point). At least Optane appears like it might be NVMe compliant, although the claim that you have to have the NVMe device mapped through the PCH is a potentially bad sign for it actually being standard. I suspect they are saying that so it can be used as a cache for another drive, which is one of the RST features intel offers for windows users, and if the Optane drives are only 16 or 32GB, then caching might in fact be their best (only?) use, rather than primary storage.
In fact, based on what I have read elsewhere, an Optane drive will work in an NVMe slot on a Z170 board, but it won't work with RST as a cache drive, which is the new feature of the Z270. So if that is actually true, the Optane drive looks like any other standard NVMe drive and you can use it as such, although since it is rather small, you probably don't want to use it as a standalone drive. Linux users can probably use it as a cache for another disk using bcache or lvmcache.
I don't think so. From what I have seen, something that had to be in the design simply got left out, which is why a workaround involving resistors is possible.
The pentium was socketed and easy to replace. This one is soldered on the board.
These are chips for embedded systems with long term supply promises. This is very much not a chip that is end of life yet. It was supposed to be available for at least 5 years I suspect, maybe more.
Not rebooting is not good enough. The clock signal is used for quite a few things inside the CPU.
Having the system sleep when not working will reduce wear on the clock and make it last longer. Makes sense that things that are off last longer than things that are on. :)
They are not saying they will all fail, they are saying that the rate of failure starts to go up more than normal for intel's chips, due to a design mistake on the LPC signals.
So you might have a system that fails in 18 months, or you might have one that fails in 36 months, or one that never fails. Intel almost certainly has statistics on how much the failure rate is expected to increase after a given amount of time, but they aren't likely to share that. Could be the failure rate is 50% higher than normal, or 5000% higher (I have no idea what the normal failure rate for intel chips is, although based on the ones I have dealt with over the years, I have not seen very many fail). If the normal failure rate was 0.1% and it is now 1% or 5%, well that's certainly a problem, although it might still mean that most systems will be OK. Unfortunately intel isn't likely to share that level of detail, although I am sure they have done the calculations and hence determined it was bad enough that they had to admit to it.
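To put those hypothetical numbers side by side (purely illustrative; the 0.1% baseline is the made-up figure from the paragraph above, not a real intel statistic):

```python
# What "50% higher" versus "5000% higher" means in absolute terms,
# starting from an assumed 0.1% baseline failure rate.
baseline = 0.001                            # assumed 0.1% normal rate
increases = (0.5, 50.0)                     # +50% and +5000%
elevated = [baseline * (1 + inc) for inc in increases]

for inc, rate in zip(increases, elevated):
    print(f"+{inc:.0%} -> {rate:.2%} of units failing")
```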
I suspect their customers might in some cases want to continue to sell products even if they later have to replace them and swap the CPU on the ones they sell now. It is their choice to take the risk after intel tells them about the problem (and intel will probably insist on them signing something to continue receiving chips with the known problem, to reduce intel's risk at that point). So I would think it is still shipping, although probably not in the same quantities as before.
New chip revisions take time to make and validate, so certainly not fixed yet.
The only fix so far is to change your own board design to add the workaround. New chips don't exist yet, so no one is getting those until they do. So everyone is at the mercy of how long it takes to change the board design and get new boards made, or they can wait for the new chips and hope for the best in the meantime. Doesn't matter if you are Cisco or some tiny company. Of course I suspect Cisco might very well be able to get a new board revision designed a lot faster than the little guys.
The workaround means adding some resistors to the design, which on most boards is not something you can just do, since this is a clock line. You can't just tack on wires and resistors, since that would mess with the clock signal. So it is either change the board design to add the resistors, or wait for the next version of the chip (which will probably take months). And since the chips are soldered to the board rather than socketed, they are not easy to replace either.
So one month into his trip and he made 500 km? At that rate he would need 4 months to get home. Was he only biking 1 hour a day and very slowly?
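The arithmetic behind that quip (the 500 km and one month are from the comment; the 15 km/h cruising speed is an assumed figure for illustration):

```python
# Back-of-envelope pace check for 500 km covered in 30 days.
km_covered, days = 500, 30
km_per_day = km_covered / days            # roughly 16.7 km/day
hours_per_day = km_per_day / 15           # at an assumed 15 km/h
print(f"{km_per_day:.1f} km/day, about {hours_per_day:.1f} h in the saddle")
```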
Because electric cars of that era were considered "not manly enough" for motorists. Sure, women and maybe doctors making house calls (where reliable, simple, and clean was acceptable, unlike for real men driving cars with hand cranks and oil everywhere) could drive electric cars, but not men.
As for road trips, who really cares? Most people don't do road trips. Go rent a car if you need to make a road trip then. Take the train, or a plane or a bus. There are options for the special cases.
Of course what people seem to be forgetting is that electric cars can be charged at your own house. You don't have to go to a gas station for a fill, just plug it in at night when you are at home. As long as it has the range to handle your normal drive, you don't need a charging station.
Only people doing longer trips away from home would need to stop at a charging station. It really is not directly comparable to your current gas powered car.
The demand at charging stations will be much lower than current gas stations.
Making hydrogen from water is a very inefficient use of energy. Almost all current hydrogen production comes from fossil fuels and generates CO2 as a result.
So given the safety issues of handling and storing hydrogen, and how inefficient it is to make, pure electric makes a whole lot more sense, and we already have a distribution network for electricity, unlike hydrogen.
If you want to use sugar, make biodiesel instead. We already have vehicles that can run on that, a distribution system that can handle it, and it is quite an efficient and safe way to store energy.
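As a rough sketch of the efficiency point above, here is the round-trip arithmetic with ballpark assumptions (none of these are measured figures, they are illustrative only):

```python
# Very rough round-trip comparison of "electricity -> hydrogen -> wheels"
# versus "electricity -> battery -> wheels". Every efficiency figure
# here is an assumed ballpark value for illustration.
electrolysis  = 0.70   # electricity to hydrogen via electrolysis
compress_ship = 0.90   # compression, storage and transport losses
fuel_cell     = 0.55   # hydrogen back to electricity in the vehicle
hydrogen_path = electrolysis * compress_ship * fuel_cell   # ~35%

battery_path  = 0.85   # assumed charge/discharge round trip for a BEV

print(f"hydrogen path: {hydrogen_path:.0%}, battery path: {battery_path:.0%}")
```

Even with generous assumptions, the hydrogen path throws away well over half the input energy before it reaches the wheels.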
No debian release has ever had the problematic version. Your production system would hence not be affected.
How is Debian affected? I see version 232 in unstable and testing, and version 230 in backports and 215 in stable. No version appears to be running 228 and hasn't been for at least 6 months. Version 229 was released almost a year ago, so at this point I doubt there are really any vulnerable systems out there.
Apparently Babestation has a bunch of 098... numbers, probably so people can call the different things they advertise (or so I figure based on reading some other articles about the problem). So since they have a block of numbers, they were able to hit a block of people in the unfortunate town.
I certainly use IPv6 a fair bit at home since my ISP supports it.
My modem happens to have a rather odd bug, where once in a while (every few months) it suddenly stops sending packets for IPv4, while my IPv6 packets get through fine. I don't know how it does this, since all it sees is PPPoE packets from my router, so the modem should have no clue, but somehow it does. Rebooting the modem fixes it.
The funny thing is that it takes hours for me to realize the problem has happened, since google stuff (gmail, etc) all works fine and facebook works fine, but links to other things stop opening and I start to wonder what is going on, before finally remembering the stupid modem problem, rebooting it, and suddenly regaining access to the IPv4 world. Quite a bit of the internet does work with IPv6 only these days, it seems.
The limit may have come from there, but since the limit exists everything was designed and built around it. You can't just change it. Jumbo frames only work on network segments where every single device on that segment supports it and all have it enabled. It's not easy to get right.
At least IPv6 mandated that the minimum was 1280, which is much better than the minimum allowed by IPv4, so at least when using IPv6 you could use a pretty decent size and not worry about MTU discovery at all.
But too many routers and switch chips and network cards (wired and wireless) and all sorts of tunneling protocols, etc all know 1500 or so is the standard, and most have hard limits that don't allow you to exceed that by very much.
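Those frame sizes translate directly into usable payload; a quick calculation using the standard fixed header sizes (20-byte IPv4, 40-byte IPv6, 20-byte TCP, ignoring options) shows why the MTU matters, with jumbo 9000 included for comparison:

```python
# Usable TCP payload (MSS) per frame for common MTUs, using the
# standard minimum header sizes: 20 bytes IPv4 or 40 bytes IPv6,
# plus 20 bytes of TCP header (options ignored).
def tcp_mss(mtu, ipv6=False):
    ip_header = 40 if ipv6 else 20
    return mtu - ip_header - 20        # minus the TCP header

for mtu in (1280, 1500, 9000):
    print(mtu, tcp_mss(mtu), tcp_mss(mtu, ipv6=True))
```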
Well I couldn't find any links on the front page about how to report problems. Maybe a link to webmaster would be in order somewhere at the bottom. Too many places don't have a webmaster@ address working, so it never occurred to me to use that.
I see about 6 stories and that's it. The rest going down is just lots of white emptiness. Mobile version seems to be fine however.
Opera shows broken image icons for the blank space, but still no stories.
Speaking of bugs: The front page is only showing about 6 stories, with the rest being blank space (some browsers show broken image icons throughout the remaining space, but that's it). Mobile version is fine though.
Oh but there is an excuse: The ad serving infrastructure is shit and doesn't do https yet. That seems to be the standard excuse for not doing https on sites these days.
It is trivial to get USB adapters for IDE, parallel, serial and other things, so at least any hard disk from the last 25 years is easy to connect to a modern machine and access. If your DVD drive can't write CD-R, then it is junk. All mine can, as can my BD writer. SCSI adapters still exist (including USB to scsi, although pretty expensive), so scsi devices can be accessed too, although with a bit more effort.
Things are actually surprisingly good when it comes to dealing with old stuff.
Any modern efficient vehicle will not warm up while just idling. So remote start would only be helpful if you have a crappy car that is inefficient.
And yes fortunately it is also illegal to have your car idling in many places.
The insurance terms are interesting, since remote start does not involve the keys being left in the car, so you can't take the car out of park (remote start is for automatics only of course).
At my previous job we had wanted to buy a couple for 3 years now, and nothing ever announced was ever apparently actually able to be bought. Lots of press releases, pictures, etc, but nothing actually for sale.
Step one in getting a market is clearly to actually bloody well let people buy the damn stuff.
Yes the split /usr is an old unix leftover, not a linux invention.
Also, since debootstrap is a script and not compiled, --merged-usr is a command line option, not a compile time option (which would also be rather stupid even if debootstrap were a compiled program: you want things optional at runtime, not chosen at compile time).
Except if you wanted to keep compatibility, adding a symlink of /usr -> / would give you /usr/proc, /usr/boot, /usr/root, etc, which is a mess and not nice. Adding symlinks for /lib, /bin and /sbin to /usr/lib, /usr/bin and /usr/sbin on the other hand does not give you the mess.
So as far as cleaning up by merging /usr/lib, bin and sbin with /lib, bin and sbin, the way it was done was the cleaner option while keeping compatibility with all the scripts that assume where things will be located.
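The direction of the merge can be demonstrated in a sandbox (a minimal sketch in a temp directory, nothing like the real migration tooling):

```python
# Demonstrate why the merge went /bin -> /usr/bin rather than the
# other way: with that symlink, old paths like /bin/sh and new paths
# like /usr/bin/sh resolve to the same file. Done in a temp directory
# so it doesn't touch the real filesystem.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "usr", "bin"))
os.symlink(os.path.join("usr", "bin"), os.path.join(root, "bin"))

with open(os.path.join(root, "usr", "bin", "sh"), "w") as f:
    f.write("#!/bin/sh\n")

old_path = os.path.realpath(os.path.join(root, "bin", "sh"))
new_path = os.path.realpath(os.path.join(root, "usr", "bin", "sh"))
print(old_path == new_path)   # True: both names reach the one file
```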
I am not convinced it had to be done at all, but I am not going to argue with people over it.
No. That was used to tell if it had been backed up yet or not. Every time you changed a file, the A bit got set. When it was backed up, it was cleared. Nothing more than that.
In this case Lenovo removed access to features that were still in the BIOS code which is what caused the problem. Those features were useful and needed by some people. If Lenovo had done nothing to the code they bought, this problem would not have existed. They put actual effort into making the product worse for their users.
If they wanted to try a new dynamic strip for extra keys, then sure fine. But don't remove the existing function keys to do it. Add a new row above the function keys and see if people start using them. A lot of people do know and use shortcut keys and taking them away is NOT going to be popular. But yes they are probably a minority and Apple clearly doesn't care about any minority user bases.
I thought users already hated touch panel buttons instead of function keys when Lenovo tried it. I guess Apple thinks they can just do what they want and their users will thank them for it. No thanks. Keyboard keys need to be real keys, not stupid touch panels.
Where is the evidence of Windows 1 having stolen code?
You could claim they "stole" the idea of the GUI and the look and feel from the Lisa, but Apple "stole" that from Xerox PARC.
I highly doubt Apple let Microsoft have the OS or GUI source code in the first place.
Is 99pc some new weird way of writing 99% (which is of course way more readable)?
Whoever said they were yahoo webmail accounts? Lots of people have yahoo accounts for yahoo messenger, yahoo groups, and many other things. Is it perhaps that list of user accounts that was stolen? Yahoo accounts does not equal yahoo webmail.
I guess the judges forgot to read the rules.
The rules also say there won't be two competitors with the same score, so not sure why they talk about the behavior of programs when that happens.
Biting the hand that feeds IT © 1998–2017