Re: What is the market for these
I thought the 960 EVO was a conventional SSD, in which case the 640MB/s claim is nonsense, given that a conventional SSD clearly outperforms the Intel.
So producing it releases yet more of the stuff we are trying to make less of.
This sounds like a far from clean method for making clean energy.
So far I have no hope at all for hydrogen as a fuel source because there simply is no good way to make it, store it, or transport it.
No, IBM makes PowerLinux 7Rx machines that run only Linux, not AIX or IBM i. I have never heard of Lenovo having anything to do with those.
There are other non-IBM systems out there though. The Talos workstation, for example.
Certainly none of what they listed is required and many people would rather not have it.
Sure, Microsoft wants you to use Secure Boot, and it does have some good features. So why is it in the same environment as remote management, which clearly is not required or desired in most cases? Do not combine useful local stuff with optional, risky, remotely accessible stuff.
Actually they don't. Some AMD chips have something similar. IBM POWER uses a separate chip outside the CPU for management, and it has its own network interface that you are not required to connect if you don't want to. No idea what SPARC has (are they still doing anything?). ARM supports running TrustZone code, although not all of them use it, so it is certainly possible to buy ARM systems that don't have it enabled at all.
Intel's big mistake is putting optional stuff and essential stuff together in the same place. The essential system startup code has no reason to have network access at all, and the optional stuff that does need network access should be something you can turn off, so it should have been an independent device from the essential startup stuff. Secure Boot and remote system management have no reason to share the same CPU and OS.
Well, the article is wrong. You will need a new cable to use HDMI 2.1's new higher resolutions and refresh rates. Your existing cables are fine for VRR, eARC, dynamic HDR, and the other features that do not use the new higher resolutions. Only the higher resolutions require the new 48Gbps capability, which is the only thing that requires new cables. Since every feature in HDMI 2.1 is optional, a device that implements just one of them can call itself HDMI 2.1, and it doesn't have to be the 48Gbps feature.
So for the features that might actually be relevant anytime soon for most of us, existing 2.0 compatible cables are fine.
A cable meant for 1.4 might handle 2.0 in some cases, and it might also fail at the edge cases (when you push the full 18Gbps, not just 12 or 15), such as 10-bit HDR at 4K 60Hz. Fortunately, Premium Certified cables can be bought for about $5 and work great for HDMI 2.0 stuff.
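The arithmetic behind that edge case is easy to check. A minimal sketch, assuming the standard CTA-861 4K60 pixel clock of 594 MHz and HDMI's TMDS 8b/10b coding (10 line bits per 8 data bits, across three lanes):

```python
# Rough HDMI bandwidth check: does a given mode fit under HDMI 2.0's 18 Gbps?
# Assumes the CTA-861 4K60 timing (594 MHz pixel clock) and TMDS 8b/10b
# coding, which turns every 8 data bits into 10 line bits on each of 3 lanes.

def tmds_gbps(pixel_clock_mhz, bits_per_channel):
    lanes = 3  # R, G, B (or Y, Cb, Cr) TMDS channels
    line_rate = pixel_clock_mhz * 1e6 * (bits_per_channel / 8) * 10 * lanes
    return line_rate / 1e9

# 4K60 at 8 bits per channel: 594 MHz * 3 lanes * 10 bits = right at the limit
print(tmds_gbps(594, 8))   # 17.82
# 4K60 at 10 bits per channel, full 4:4:4, would exceed HDMI 2.0's budget
print(tmds_gbps(594, 10))  # 22.275
```

That is why 4K60 HDR on HDMI 2.0 gear typically ends up using chroma subsampling to squeeze under the limit, and why a marginal cable shows its weakness exactly there.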
I suspect Qualcomm is very correct about regulators. Broadcom and Qualcomm together own the wifi AP chipset market. Everyone else combined makes up almost nothing in that market. I am sure there are other markets where they would be totally dominant.
So at least some parts of the business would need to be split out, and much of Qualcomm's advantage is that it provides all the bits needed, so splitting wifi from cellular or processors just doesn't make sense.
Oh yes, the Intel i740. Let's offload all video memory to system memory over the AGP bus. Sure it made geometry move faster, but it sure didn't help texture performance.
Things have been radioactive forever. It's part of reality.
Some things are just more radioactive than other things.
For example: https://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor
Maybe someone is assuming Tesla will actually make 300,000 Model 3s in 2018 and sell them all in the US. Or they expect the Bolt to sell in large numbers.
I believe the requirement is that the electric range has to be larger than the non-electric range (this affects the BMW i3 with range extender, for example, which has an artificial fuel tank limit in the US). Regular hybrids (like a Prius and such) do not qualify since the electric range is much shorter than the non-electric range.
Actually there is, though you haven't been able to do one for about 25 years now. It is not something done on modern drives, but it used to be required on old drives a long time ago.
Yes hashes are clearly not unique (and hence the article is just plain wrong about that).
Any dedupe system that assumes they are unique and doesn't verify the data is the same after getting a hash match is insane and should not be used. It doesn't matter how low the probability is, I do not want to risk my data getting destroyed because it just happens to have the same hash as some other data.
And yes there are programmers out there dumb enough to assume the hashes are effectively unique because the probability is so insanely low of them having a collision and then treat them as if they were actually unique. Some day their customer is going to pay for that mistake and they might not know it for a long time after it happens.
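For what it's worth, verify-on-match is only a few extra lines. A minimal sketch in Python (the names `dedupe_write` and the in-memory `store` are purely illustrative, not from any real dedupe product):

```python
import hashlib

# Sketch of block-level dedupe that byte-compares content on a hash match
# instead of trusting the hash alone. `store` maps digest -> stored bytes.

store = {}

def dedupe_write(block: bytes) -> str:
    digest = hashlib.sha256(block).hexdigest()
    if digest in store:
        # Hash matched: verify the bytes before treating it as a duplicate.
        if store[digest] == block:
            return digest  # true duplicate, reuse the existing copy
        raise RuntimeError("hash collision: must store this block separately")
    store[digest] = block
    return digest

a = dedupe_write(b"hello")
b = dedupe_write(b"hello")  # verified duplicate, same digest returned
assert a == b and len(store) == 1
```

The verification read costs extra I/O, which is exactly why the lazy implementations skip it; the code makes clear how little logic that corner actually saves.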
No, he meant the keys to hit to get the login prompt in Windows NT, not to reboot the machine. It happened to do that when running DOS because that is what the BIOS did with that interrupt. Windows NT and newer do something else with it.
And amazingly control-shift-esc did the same thing 20 years ago too.
No, on UEFI systems they start in 16-bit mode, then the firmware rather quickly switches to 64-bit mode, and that's the mode it starts the OS in, unless you enable legacy boot mode, in which case it switches back to 16-bit for booting.
The article was wrong. It has been fixed after it was pointed out. It is Blackberry that is getting a nice pile of cash.
It sounds like they had a brilliant design where the local ReadyNAS you can hold in your hand would wipe itself if the ReadyCLOUD account was marked as closed. So if the cloud server makes a mistake, decides the account is closed, and wipes the data, and then the local NAS gets told the account is closed, the data is wiped everywhere. If it really does work that way, it is a mindbogglingly stupid design. Essentially you are not in control of your local NAS at that point, and it can only be considered a backup device, not primary storage, which is how a lot of people seem to have been treating it, with the cloud as the backup.
The rotary has a much larger rotating mass than the radial, which is not a good thing in an airplane. It tends to make things want to turn when you don't want them to, especially when changing power settings. Some of the high-power WWII planes had enough trouble with just the mass of the prop when increasing power quickly. Doing that with essentially the entire engine spinning is a bad idea. But that is what they did early on.
It does seem terribly wrong to put a 4-cylinder in a Fokker Dr.I though.
The Dr.I replica that my dad sometimes flies in Brampton, Canada at least has the right type of engine in it.
Based on the endurance numbers Intel is providing, XPoint isn't looking a whole lot better than NAND, which is rather contrary to the original claims about the technology.
So either Intel's specs are wrong, or they are having early production problems causing endurance problems.
Certainly 30 complete writes per day is not a lot more than the 17 complete writes per day that the Intel NAND SSD is rated for, especially over 3 years versus 5 years. Based on Intel's hype I was expecting more like 1000 complete writes per day.
The latency is rather impressive though.
The OpenSSL advertising clause is so obnoxious that if you even say:
Our product has secure connections.
You actually have to do something like:
Our product has secure connections (using OpenSSL copyright x, y, and z, blah blah blah).
Every single time you talk about any feature that relies on what OpenSSL provides. Does everyone do that? Well, no, but the license does appear to say exactly that. It is very hard to comply correctly with that license.
It is not just the documentation that has to list the copyrights, or the about info for the application. It's documentation, advertising, discussions of product features, etc.
Except as the last few comments there say, it isn't actually fixed for many linux users.
Wait, you mean there is a step to the process after fast and cheap?
So 40% had bad security and 40% were written in Java EE. Just a coincidence, right?
DRM does not stop piracy.
What would actually help reduce piracy is making it more convenient to buy legitimately than to pirate.
If you could buy it easily, in a format that would be yours forever (so no server shutdowns to worry about), that you could play on whatever device you wanted when you wanted, could re-encode into a format needed for your device easily, then that would be what one would do. As long as a legitimately bought version is less functional than the pirated version, people will be willing to go through the hassle of finding a pirated version.
Better email a correction request.
Strong encryption already exists. So no matter what new encryption you invent with a backdoor (ignoring for the moment that you can't do that while also making it secure enough to be worth using), there is nothing stopping the criminals from just continuing to use the strong encryption, leaving the new backdoored garbage for the rest of us. So no help for law enforcement, just harm for everyone else.
Security is primarily a software problem.
A lot of ARM chips are designed to support signed code execution all the way from the initial boot process, so they do provide a lot of features that allow software to be secured. Of course you still have to build bug-free secure software on top of that, but there is not much ARM can do about that problem.
I did both. :)
Which kind of billion are we talking? I thought they were pretty much doing that already each year, so unless we are talking the 10^12 type of billion, this doesn't seem like news. Certainly for the 10^9 type of billion, I believe they were at 12 billion in 2014 alone, and probably higher now, so a couple of decades at that rate with a bit of growth might very well hit the 10^12 type of billion (although that meaning seems uncommon in English these days from what I can tell).
Reading the Reuters article in fact says Trillion, not Billion, which seems to clarify things.
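The back-of-the-envelope math checks out. A quick sketch, using the 12-billion-per-year 2014 figure cited above; the 10% annual growth rate is an illustrative assumption, not a number from the article:

```python
# How many years of shipments until cumulative chips pass 10**12 (a
# short-scale trillion, i.e. the 10^12 kind of "billion")? Starting rate is
# the 12 billion/year figure from 2014; 10% growth is an assumption.

rate = 12e9   # chips shipped per year
total = 0.0   # cumulative shipments
years = 0
while total < 1e12:
    total += rate
    rate *= 1.10  # assume 10% year-on-year growth
    years += 1
print(years)  # 24 -- a couple of decades, as guessed above
```

With flat shipments it would take about 83 years, so the "couple of decades" figure really does depend on sustained growth.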
The DIMM ones are very non-standard and I haven't actually seen OS support for them (Diablo seems to be totally failing to provide sources for any drivers as far as I can tell, so no idea how they are supposedly working with Linux at this point). At least Optane appears like it might be NVMe compliant, although the claim that you have to have the NVMe device mapped through the PCH is a potentially bad sign for it actually being standard. I suspect they are saying that so it can be used as a cache for another drive, which is one of the RST features Intel offers for Windows users. If the Optane drives are only 16 or 32GB, then caching might in fact be their best (only?) use, rather than primary storage.
In fact, based on what I have read elsewhere, an Optane drive will work in an NVMe slot on a Z170 board, but it won't work with RST as a cache drive, which is the new feature of the Z270. So if that is true, the Optane drive looks like any other standard NVMe drive and you can use it as such, although since it is rather small you probably don't want to use it as a standalone drive. Linux users can probably use it as a cache for another disk using bcache or lvmcache.
I don't think so. From what I have seen it is simply something got left out of the design that had to be there, and is why there is a workaround involving resistors possible.
The Pentium was socketed and easy to replace. This one is soldered to the board.
These are chips for embedded systems with long term supply promises. This is very much not a chip that is end of life yet. It was supposed to be available for at least 5 years I suspect, maybe more.
Not rebooting is not good enough. The clock signal is used for quite a few things inside the CPU.
Having the system sleep when not working will reduce wear on the clock and make it last longer. Makes sense that things that are off last longer than things that are on. :)
They are not saying they will all fail; they are saying that the failure rate starts to go up more than normal for Intel's chips, due to a design mistake on the LPC signals.
So you might have a system that fails in 18 months, or you might have one that fails in 36 months, or one that never fails. Intel almost certainly has statistics on how much the expected increase in failures is after a given amount of time, but they aren't likely to share that. Could be the failure rate is 50% higher than normal, or 5000% higher (I have no idea what the normal failure rate for Intel chips is, although based on the ones I have dealt with over the years, I have not seen very many fail). If the normal failure rate was 0.1% and it is now 1% or 5%, well, that's certainly a problem, although it might still mean that most systems will be OK. Unfortunately Intel isn't likely to share that level of detail, although I am sure they have done the calculations and hence determined it was bad enough that they had to admit to it.
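To put those hypothetical rates in perspective (all numbers here are the made-up examples from the paragraph above, not Intel figures):

```python
# Expected failures in a fleet of 10,000 units at the hypothetical rates
# discussed above: a jump from 0.1% to 1% or 5% is a 10x-50x increase in
# failures, yet the vast majority of systems still never fail.

units = 10_000
for rate in (0.001, 0.01, 0.05):  # 0.1% baseline, then 1% and 5% elevated
    print(round(units * rate))    # 10, then 100, then 500 expected failures
```

That gap between "catastrophic for a fleet operator" and "most individual boxes are fine" is exactly why Intel can admit the defect without a mass recall.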
I suspect their customers might in some cases want to continue to sell products even if they later have to replace them and swap the CPU on the ones they sell now. It is their choice to take the risk after Intel tells them about the problem (and Intel will probably insist on them signing something to continue receiving the chips with the known problem, to reduce Intel's risk at that point). So I would think it is still shipping, although probably not in the same quantities as before.
New chip revisions take time to make and validate, so certainly not fixed yet.
The only fix so far is to change your own board design to add the workaround. New chips don't exist yet, so no one is getting those until they do. So everyone is at the mercy of how long it takes to change the board design and get new boards made, or they can wait for the new chips and hope for the best in the meantime. Doesn't matter if you are Cisco or some tiny company. Of course, I suspect Cisco might well be able to get a new board revision designed a lot faster than the little guys.
The workaround means adding some resistors to the design, which on most boards is not something you can just do, since this is a clock line: you can't just tack on wires and resistors, as that would mess with the clock signal. So it is either change the board design to add the resistors, or wait for the next revision of the chip (which will probably take months). And since the chips are soldered to the board (not socketed), they are not easy to replace either.
So one month into his trip and he made 500 km? At that rate he would need 4 months to get home. Was he only biking 1 hour a day and very slowly?
Because electric cars of that era were considered "not manly enough" for motorists. Sure, women and maybe doctors making house calls (where reliable, simple, and clean was acceptable, unlike for real men driving cars with hand cranks and oil everywhere) could drive electric cars, but not men.
As for road trips, who really cares? Most people don't do road trips. Go rent a car if you need to make a road trip then. Take the train, or a plane or a bus. There are options for the special cases.
Of course what people seem to be forgetting is that electric cars can be charged at your own house. You don't have to go to a gas station for a fill, just plug it in at night when you are at home. As long as it has the range to handle your normal drive, you don't need a charging station.
Only people doing longer trips away from home would need to stop at a charging station. It really is not directly comparable to your current gas powered car.
The demand at charging stations will be much lower than current gas stations.
Making hydrogen from water is a very inefficient use of energy. Almost all current hydrogen production comes from fossil fuels and generates CO2 as a result.
So given the safety issues of handling and storing hydrogen, and how inefficient it is to make, pure electric makes a whole lot more sense, and we already have a distribution network for it, unlike hydrogen.
If you want to use sugar, make biodiesel instead. We already have vehicles that can run on it, a distribution system that can handle it, and it is quite an efficient and safe way to store energy.
No Debian release has ever had the problematic version. Your production system would hence not be affected.
How is Debian affected? I see version 232 in unstable and testing, version 230 in backports, and 215 in stable. Nothing appears to be running 228, and hasn't been for at least 6 months. Version 229 was released almost a year ago, so at this point I doubt there are really any vulnerable systems out there.
Apparently Babestation has a bunch of 098... numbers, probably so people can call the different things they advertise (or so I figure based on reading some other articles about the problem). So since they have a block of numbers, they would be able to hit a block of people in the unfortunate town.
I certainly use IPv6 a fair bit at home since my ISP supports it.
My modem happens to have a rather odd bug, where once in a while (every few months) it suddenly stops sending packets for IPv4, while my IPv6 packets get through fine. I don't know how it does this, since all it sees is PPPoE packets from my router, so the modem should have no clue, but somehow it does. Rebooting the modem fixes it.
The funny thing is that it takes hours for me to realize the problem has happened, since Google stuff (Gmail, etc.) all works fine and Facebook works fine, but links to other things stop opening, and I start to wonder what is going on before finally remembering the stupid modem problem, rebooting it, and suddenly regaining access to the IPv4 world. Quite a bit of the internet does work IPv6-only these days, it seems.
The limit may have come from there, but since the limit exists everything was designed and built around it. You can't just change it. Jumbo frames only work on network segments where every single device on that segment supports it and all have it enabled. It's not easy to get right.
At least IPv6 mandated that the minimum was 1280, which is much better than the minimum allowed by IPv4, so at least when using IPv6 you could use a pretty decent size and not worry about MTU discovery at all.
But too many routers and switch chips and network cards (wired and wireless) and all sorts of tunneling protocols, etc all know 1500 or so is the standard, and most have hard limits that don't allow you to exceed that by very much.
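A quick illustration of what those MTU numbers buy you in practice, assuming a plain UDP-over-IPv6 packet with no extension headers (the 40 and 8 bytes are the fixed IPv6 and UDP header sizes):

```python
# Payload math for the MTU discussion: what fits in one packet at IPv6's
# mandated 1280-byte minimum MTU versus the common 1500-byte Ethernet MTU.
# Assumes only the fixed IPv6 header (40 bytes) and a UDP header (8 bytes);
# extension headers or tunnel overhead would shrink the payload further.

IPV6_HEADER = 40
UDP_HEADER = 8

def max_udp_payload(mtu: int) -> int:
    return mtu - IPV6_HEADER - UDP_HEADER

print(max_udp_payload(1280))  # 1232 bytes at the IPv6 minimum MTU
print(max_udp_payload(1500))  # 1452 bytes at standard Ethernet MTU
```

This is why applications that cap their datagrams at 1232 bytes can skip path MTU discovery entirely on IPv6: every conforming link is guaranteed to carry them.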
Well I couldn't find any links on the front page about how to report problems. Maybe a link to webmaster would be in order somewhere at the bottom. Too many places don't have a webmaster@ address working, so it never occurred to me to use that.
Speaking of bugs: The front page is only showing about 6 stories, with the rest being blank space (some browsers show broken image icons throughout the remaining space, but that's it). Mobile version is fine though.
Biting the hand that feeds IT © 1998–2018