I thought ...
... that A5 was S7 lite, so a new release of A... could take the place of "S9 lite"
Oh, so that's where URL came from
AMD CPUs are not affected by Meltdown, only by Spectre. The workaround for the former is really expensive in terms of CPU overhead on Intel (and exactly 0 cost on AMD), especially if your workloads involve lots of IO, which is why I am planning my next upgrade to be AMD Epyc (from Xeon Ivy Bridge). The workarounds for Spectre are still appearing, but so far all are pretty cheap.
... run Meltdown exploits ?
Yes, well, assuming it is actually removed from users' computers ... especially those whose owners never bother with patches anyway.
I recently installed a new server and migrated a friend's website from Ubuntu 14.04. While doing so, I also installed a letsencrypt certificate and it was very easy, thanks to "apt-get install letsencrypt". A bit of learning of nginx configuration was required, but learning is what I do. Setting up a timer to refresh the certificate bimonthly was trivial, too. One point of note: the certificate will store all the alternative host names from the -d parameter(s) passed to letsencrypt, but the first -d parameter also becomes the CN= field of the certificate. So make sure you pass the right name first.
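The timer part can be sketched with a pair of systemd units. This is a sketch only: the unit names, paths and the `renew` subcommand are assumptions based on a typical letsencrypt/certbot install, not taken from the actual server.

```ini
# /etc/systemd/system/letsencrypt-renew.service  (hypothetical name)
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/letsencrypt renew

# /etc/systemd/system/letsencrypt-renew.timer  (hypothetical name)
[Unit]
Description=Refresh certificates roughly every two months

[Timer]
# Assumed OnCalendar spec: 1st day of every second month at midnight.
OnCalendar=*-01/2-01 00:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now letsencrypt-renew.timer`; `renew` only replaces certificates that are close to expiry, so running it more often than needed is harmless.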
Why the boasting? Just to show there is absolutely no reason to stick with a dinosaur CA like Symantec. If the only thing you want the certificate to attest is the host name (rather than the organisation name), then you do not need expensive verification and letsencrypt is your friend. If you do need verification, there is plenty of competition to choose from.
Say what you will, but you have to admire the consistency (screwing the rest of the population so that a select few can have it all)
I don't see how adequacy of security could be used as an argument by the defence. It's a bit like saying that because you only had a Yale lock
Good point, but looking at OPM hack it appears plausible that the US side would not be able to point to any security features (which would have to be hacked) at all. Like, no lock and the door left ajar for anyone to enter at will.
That's no moon.
I think Russians are used to seeing worse, with all these oligarchs ...
(sadly, not a joke)
You mean, Time Machine snapshots are not immutable?
I am starting to feel lucky, as I do not use Apple products ...
I am using BBM for family too, it works great and is simple to both install and use. However, it is not available on Windows or Linux, so I was looking at alternatives - Signal received serious consideration.
To be truly pedantic about such things, you get 0 years (rounded down, as per the rules of conversion from floating point to integral types) until the midnight before the first anniversary of whatever event you are counting from. So, 1023 years would be either forever (if the counter does not have enough bits and hence keeps rolling over at some smaller value, like 255 + 1) or 1024 years less some arbitrary, usually small, quantum of time.
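Both failure modes fit in a few lines of Python (the 8-bit counter width is just an assumed example):

```python
# Age truncates toward zero, like float-to-int conversion:
years_elapsed = 0.997           # the day before the first anniversary
print(int(years_elapsed))       # 0 - still "0 years old" until midnight

# A counter without enough bits rolls over instead of ever showing 1023:
WIDTH = 8                       # assumed 8-bit counter, wraps at 255 + 1
print(1023 % (2 ** WIDTH))      # 255 - what the counter would actually show
```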
vCPU pinning is well known, but it makes load balancing difficult. Regular load balancing is based on the assumptions that you can pin more than one vCPU to a single core, and that you can pin vCPUs from multiple VMs to cores on one physical CPU. These assumptions need to go out of the window now.
I expect Amazon AWS, Azure etc. will start offering a new tier of services where they indeed guarantee that only your VMs run on any single physical CPU, but this is going to be expensive (you pay for more vCPUs than strictly needed), or slow (poor load balancing), or both.
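At the host level, pinning comes down to the kernel's CPU-affinity API, which is the same mechanism hypervisors use when tying vCPU threads to host cores. A minimal sketch (Linux-only, hence the guard):

```python
import os

# Restrict this process to a single host core, the way a hypervisor
# restricts a pinned vCPU thread. sched_setaffinity is Linux-only.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})        # pin the calling process to core 0
    pinned = os.sched_getaffinity(0)    # read the pinning back
    print("allowed cores:", pinned)
else:
    pinned = None
    print("no sched_setaffinity on this platform")
```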
That's what GCHQ wants you to think ...
I am not under the impression that my computer is invulnerable to Spectre v1. But there is very little I can do about it. I am simply happy that living on the bleeding edge of both kernel and compiler development has, at least once, given me some real benefits. Few distributions make this easy and most are lagging behind, sometimes quite significantly.
Thanks to a really, really small patchset in the distribution itself, it is trivially simple to build my own Linux kernel straight from www.kernel.org and with my own configuration. Even better, thanks to following GCC releases really fast, my kernel is now reporting this: "Mitigation: Full generic retpoline". And I am running 4.14.15 - but I bet the distribution will make kernel 4.15 available in the next few weeks (or I can just roll my own - not tempted though, just yet)
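That mitigation string comes from sysfs, so anyone can check their own kernel's view the same way. A small sketch (the directory only exists on kernels carrying the reporting patches, hence the guard):

```python
from pathlib import Path

# Kernels with the mitigation-reporting patches expose one file per issue
# here, e.g. spectre_v2 -> "Mitigation: Full generic retpoline".
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
reports = {}
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        reports[entry.name] = entry.read_text().strip()
for name, status in reports.items():
    print(f"{name}: {status}")
```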
Well, let's see ... processors from the competition are catching up - better on some benchmarks and worse on others, but clearly getting there. Then comes the news about the Meltdown bug (let's put Spectre aside - all are vulnerable to that one) which only affects Intel processors, and the mitigation comes at a cost of at least 5% of performance, sometimes more than 20%. Surely that is going to impact the benchmarks, and hence the sales figures. The problem is systemic, and it will take Intel a long time to fix it in hardware - time which the competition can use to improve their designs, which are not impacted by the Meltdown bug
Judging by the moves of the INTC and AMD share prices right after the El Reg article, shareholders think similarly.
This, but what should we expect when everyone - including headhunters, regulators and journalists - thinks that CODERS is a synonym for ENGINEERS.
Sometimes it feels as if these two were seriously considered to be engineers, and the whole of the profession held responsible for the inevitable mayhem.
I think you got it right here "If something goes wrong, we don't know how to fix it, but we may still be held responsible. Most IT practitioners simply won't be able to get over this."
The thing is, IT practitioners make the decisions. Their job is to control the risks and take the responsibility. Until "serverless" evangelists find a way to put that in a black-box, too, it will not take off. And we are far from it.
Why would a VFS plug-in need a 1GB memory allocation? I guess you mean file buffers, which are maintained by the kernel anyway and shared by all processes which need access to the cached files.
It started its life this way, but that was long ago. Since then it has become a central authentication authority based on standard Kerberos (now with both MIT and Heimdal implementations available) in the local network, with integrated directory services for both humans and machines, based on standard LDAP. Also, it is a go-to solution for making the enterprise scale distributed filesystems available to Windows machines, thanks to CTDB - for example see page 12 in Lustre Architecture whitepaper. Not everyone needs distributed filesystem; I will grant you that. But that does not mean that Samba is less useful as an authentication authority or directory service.
@David Roberts - it is a perfectly good analogy, for a non-technical person like Lee has proven himself to be.
@ST you are right and I am right - I was referring to Meltdown (not to Spectre), so we agree on this. Speculative execution on its own can also cause the Spectre "class" of bugs, which are cheaper (performance-wise) to work around than the Meltdown one. The numbers I keep seeing on lwn.net for Spectre are consistently under 5% (usually around 1%), but numbers for Meltdown easily exceed 10% if your system is doing more than a little IO or other kernel-related activity. This is why I believe that AMD (not being affected by Meltdown) now has a huge performance win against Intel, which is not reflected by old benchmarks at all. On a related note - I wonder if GPU-intensive applications (i.e. games) need a context switch to communicate with the GPU. If so, then gaming benchmarks are going to be affected a lot, too.
You are also right in explaining that the speculative execution issue is not just "implementation", it is more of a design issue. Just let me have that simplification, ok?
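The IO sensitivity is easy to get a feel for: time a tight loop of cheap syscalls on a kernel with and without the mitigation. A rough sketch (absolute numbers depend entirely on the machine and kernel, so none are claimed here):

```python
import os
import time

# Each os.stat() is a full user -> kernel -> user round trip; with KPTI
# (the Meltdown fix) every such crossing pays for the page-table switch,
# which is why syscall-heavy workloads are hit hardest.
N = 200_000
start = time.perf_counter()
for _ in range(N):
    os.stat("/")
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e9:.0f} ns per syscall round trip")
```

Run it before and after toggling `pti=off` on the kernel command line (on an affected Intel machine) and the difference is the Meltdown tax.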
AMD is known not to be affected by the Meltdown bug, which was also the most expensive one (in terms of performance penalty) to fix. This means that, suddenly, all the performance benchmarks comparing Intel processors (with crappy, buggy speculative execution which allows user code to cross the kernel boundary) against AMD ones (whose own, slightly less buggy, implementation of speculative execution would not allow such a violation) are no longer valid.
(typo, should be 4.14.14 - that's what I'm running today). Always follow kernel.org - that's where upstream is.
Honestly, I am very annoyed at distributions refusing to use something close to the upstream LTS and insisting on applying hand-picked patches to old kernels instead. I can understand the reasoning for RedHat doing that, but everyone else?
@Destroy All Monsters
... ouch, this is so sexist. And brilliant, too
I read elsewhere that MS has added an ssh server (not production ready yet) to Windows. I am trying to guess what shell that ssh server is going to offer its users - could it be PS rather than cmd? If a Linux admin tried to do some remote administration of a Windows machine over ssh and landed at a PS prompt, then perhaps it would help to be able to run (and learn the basics of) PS on a Linux machine? It wouldn't be great and it would not convince anyone to switch from bash or zsh, but it would serve an educational purpose, I think.
Given that Equation Editor was a 3rd-party tool, sublicensed by Microsoft, I suspect they never had a copy of the source files in the first place. Even if they had access to the source code, that is not the same as having a copy of it which you can keep "just in case the original author loses it".
Another question: how much bandwidth do you want for this 20TB of data? With small-form-factor storage directly attached to the PCIe bus (the M.2 discussed here, or its older brother, 2.5" U.2 NVMe) you get some balance between capacity and bandwidth. On the other hand, a single SAS connector is not really that much, and there is no 3.5" form factor directly attached to the PCIe bus.
I was thinking about it. For Lustre you would normally rely on a failover cluster of two MDSes with a single shared high-performance disk system (a SAN, e.g. fibrechannel) used as the actual storage backing the MDT. There is no room for such an arrangement if you have one server with all the disks inside, directly attached to the PCIe buses of the CPUs. Unless the servers in the cluster were virtual, running inside that one machine - but that is not much added resiliency, is it?
Another possible scenario is a ZFS volume, shared as an iSCSI target for a cluster (i.e. two more machines for the MDSes, without such outrageous storage). However, then you lose a large part of the potential performance gains from NVMe and flash, so perhaps not so good either.
On top of that, it would have to be a very, very large filesystem to need 200TB of MDT (i.e. metadata only). Still, I would be very happy to play with such storage, for an experimental Lustre setup, just to see how fast it is :)
I do not think it is really justified. Hillingdon borough covers a very large area and both Heathrow airport and its approach are only a small part of it. Also, the approach on the east side of the airport (which, judging by the noise outside my window, is the most commonly used) is not over Hillingdon at all, it is over Hounslow, i.e. the neighbouring borough.
Good question. My take is that some filesystems (notably ZFS) make heavy use of memory, which is fine if user space and kernel share the address space (little impact on the caches) but pretty bad if the cache needs to be flushed on every disk IO (the cache in question being the page translation tables, i.e. the TLB).
Turns out the bug was initially found in kernel 4.11, in June 2017
Also, the bug does not actually "brick" the computer. Looking at the fix, it appears the problem is in module initialisation code. This section of code gets hit on every start, and it borks the BIOS anew on every start. However, following the original kernel thread, it appears that as soon as the module is updated not to flip the "writeable" flag in the BIOS on startup, the machine is back to normal.
Well, as far as new functionality (new drivers etc.) in the Linux kernel is concerned ... sometimes you do end up with beta quality for some time after the release. I know it may sound like heresy to some, but there is a reason why RHEL is running old kernels (with a very long list of in-house maintained patches).
However, since I like living on the bleeding edge, I use fresh upstream kernels, which is how I also know that bugs are quickly fixed. Usually within days (or short weeks) from the first report.
As for malware writers - a kernel module can do a lot, and damaging hardware has been possible for a very long time. But in order to load a new module, you need to root the OS first. So, nothing new really.
These machines are not really permanently borked. It is possible to reflash them, which restores normal BIOS functionality. The difficulty is that Lenovo only supplies reflashing tools which work under Windows, and in order for these to work you need to boot Windows. Which is tricky, if your only OS available on disk is Linux, and you cannot boot anything else from USB.
Some affected users managed to attach CDROM via USB and proceed from there. Ideally Lenovo should provide BIOS reflashing tool which works under Linux :-(
OK, I promise to pay a visit the next time I am in Texas. Can I have that downvote removed or balanced now?
I am shocked.
... and at the bottom.
Yes, I know the way out.
Road and rail networks: 0.05%
For me, this is a surprise.
I do not know about Irish law, but there is a possibility that the executive branch is prohibited from making such a decision, and has to pass it to the courts instead.
I do not agree with the "meaningless" portion, even though you are correct on the first part. The problem is that some people will not realize it.
@JulieM do not forget that 1) the decompiled source will reflect all the optimizations that the original compiler applied, hence it will be far removed from the original programmer's intent, 2) it will not have any of the symbolic names that the original programmer used, and finally 3) it will not reflect the design of the original source, since all the static program constraints (things like encapsulation etc.) will have been optimized away.
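A toy illustration of points 1) and 2) - the "decompiled" version below is hypothetical, written by hand in the style such tools emit, not actual decompiler output:

```python
# What the programmer wrote: names and intent are obvious.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# What a decompiler typically gives back: the compiler has folded 9/5
# into the constant 1.8, and every symbolic name has been replaced by an
# auto-generated placeholder (sub_401050, a1 are made-up examples).
def sub_401050(a1):
    return a1 * 1.8 + 32.0

# Behaviour survives the round trip; the design does not.
print(celsius_to_fahrenheit(100), sub_401050(100))
```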
The tool is meant to provide a more readable form of what the program actually does, which is very useful in itself. However, I would not put collaboration between projects without appropriate language bindings in this bucket, because collaboration implies a statement of intent, which is next to impossible if the design is hidden.
Open sourcing hopefully also means that a community will build around it, improving the overall quality of the tool.
@Potemkine! I think you are onto something, but you are also missing an important part: 5% of complaints upheld out of a large number is still something to consider. Also, the way the Administrative Council of the EPO works, it does not really take much interest in the workings of the EPO (as it should). Unfortunately.
Create a unique email address (i.e. a user name that is hard to guess even by brute force, as if it were a good password) and use an easily guessable password for that one. Create another unique email address, but with a strong password. If the first account is breached, that means the email leaked (or the email plus an easy password hash). If the second is breached, that means the plain-text password leaked. I would be interested if such monitoring of websites was standard and users were informed of the results.
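A sketch of that scheme in Python - the domain is made up, and `secrets` supplies the hard-to-guess parts:

```python
import secrets

def canary_pair(domain="example.com"):
    """Build the two canary accounts described above:
    - 'weak': hard-to-guess address + deliberately weak password; if it
      gets hijacked, the address list (or easy hashes) leaked.
    - 'strong': hard-to-guess address + strong password; if it gets
      hijacked, plain-text credentials leaked."""
    return {
        "weak": (f"{secrets.token_urlsafe(12)}@{domain}", "password123"),
        "strong": (f"{secrets.token_urlsafe(12)}@{domain}",
                   secrets.token_urlsafe(24)),
    }

pair = canary_pair()
print(pair["weak"][0], pair["strong"][0])
```

One pair per website, and which account gets abused tells you what kind of data the site actually lost.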
DTrace under the GPL