Perhaps they did it on purpose?
Obligatory xkcd reference.
... Iran? Sudan?
I think every developer worth their salt wrote some truly awful software in their youth. Bill Gates was unlucky to have that software used by virtually everyone, for a very long time. If people all around the world started using code I wrote 25 years ago, I would probably die of embarrassment.
If I were in her place I would keep that visible. The fault is on the board of directors for putting her in a position she was not qualified for. Since that information was public, there is no suspicion of her trying to claim qualifications she did not have.
I have not seen the document in question, but can offer an educated guess. The document contains a recommendation that the breached company should buy ID protection services for the affected customers. From Equifax, of course.
I do not think Aspera would be able to transfer the initial snapshot "really fast". This technology seems to only send incremental changes, based on real-time (or near real-time) monitoring for local changes. When adding any off-site data storage service (it does not have to be cloud - replication to a newly added second datacentre is a possible example), the initial snapshot could take a long time, and sending it in a "hard format" instead makes sense.
Right, yes .... (scooting off)
Hi Goeffrey, from my experience you would probably find ready documentation using any of these extra "magic" words, i.e. the names of distros which have plenty of useful documentation: arch, gentoo, debian, ubuntu, redhat. There are also, of course, stackexchange and serverfault.
It matters not how many other people are using any OS, as long as the applications you need are available for it.
You are right - Linux rules in the pocket and in datacentres exactly because of the abundance of "applications" (for lack of a better word). My point was that the limiting factor in all cases (pocket, desktop, datacentre, IoT etc.) over the past years was not the kernel (nor the drivers, which are in the kernel source tree). These factors are elsewhere - with the developers whose business model does not play well with open source. There are more than a few developers who release (and earn their revenue this way) closed source on Linux, but still not enough. FWIW I would prefer them all to go the "Red Hat way", i.e. release open source and sell support instead. However, let's not deceive ourselves - that would leave Linux without any AAA games or other "no contracts needed" software products, since developers need to earn their keep. It so happens that this kind of "no contracts needed" software is crucial on desktops, but now the borders are blurring.
Anyway, back to topic: Linux kernel improvements change nothing in this space. Hence the contention that the discussion of "Linux on the desktop" does not belong here (I am actually surprised that this is contentious, judging by the number of downvotes I received, but whatever ...)
So, the Linux overlord announces the arrival of significant new features in Linux kernel version 4.14, which will also be the next (after 4.9) Long Term Support kernel. This is actually great news for those (few?) of us who prefer to use a self-configured, self-built vanilla kernel, as opposed to a distribution-patched binary kernel. The features announced are also quite significant for datacentres, where (if you look beyond operating system sales figures) Linux absolutely dominates.
And what do we get from this announcement? Lame complaints that Mint GUIs do not appear to provide equivalent functionality to Windows firewall management. Really?
Just to compensate, here is a much nicer list of new features in kernel 4.14 to ponder, summarized below:
* GPU driver improvements for all of AMD, Nvidia and Intel
* many new features in the memory subsystem, as mentioned in the article
* ARM64 improvements, with support added for Raspberry Pi Zero, Banana Pi and other boards
* updates for both the KVM and Xen hypervisors, and (as mentioned in the article) for Microsoft Hyper-V guests
* a bunch of improvements and fixes in the btrfs, ext4, xfs and f2fs filesystems, and (as mentioned) zstd compression
* new drivers, EFI boot improvements etc. etc.
This is going to be a large release and I will not be surprised if we see it going to 4.14-rc8 this time round.
Could this possibly mean "the year of Linux on the desktop"? This question is really missing the point. It does not have to - Linux already dominates in the pocket and in datacentres. If we want to carry on the discussion about the role of Linux on the desktop, then a much more appropriate context for it would be new KDE, Gnome or perhaps XFCE releases, or maybe LibreOffice or some major distribution. The Linux kernel is more than ready for taking on desktops; it is these other projects which lead it there. However, in the context of new kernel releases, that discussion is entirely off topic. Please refrain next time!
... or simply because all these starts and landings leave lots of soot anyway
Actually, locking the color scheme of the LIVE version would be even more important - imagine the mayhem if a user decided the TEST version's scheme is so nice he wants it on the LIVE one. Then a few weeks down the line he proceeds to use LIVE as if it were TEST, because it looks like one.
The prerequisite here is that you need to trust both the network and the server, because otherwise your OS image becomes untrusted. I am not entirely sure I would trust mobile networks more than I (have to) trust Google or Apple.
... but surely they will not make anyone think worse of them ? I mean, surely that's impossible ?
@Peter Getherhole exactly, you have won bullshit bingo with the word "compartmentalization". The sad truth is that many C-level executives only know the words, but do not know their meaning.
I do not have a silver bullet, and I do not claim this to be a simple problem. The list of possible underlying causes is, by necessity, short.
The one common factor I found, reading about this and other failures, is C-level executives being oblivious to, or otherwise ignoring, the implied requirements of a sane IT infrastructure, and focusing only on explicitly stated business needs.
Dysupgraditis - a permanent inability to deploy required upgrades or patches on time, typically caused by fear of breaking infrastructure. Underlying causes: lack of understanding of the existing infrastructure within the organization, lack of infrastructure to perform pre-deployment testing of patches or upgrades, lack of skill to minimize the downtime or risk of a deployment. Known cure: none.
"I thought not."
It might be worth giving it a shot. For one, I would be interested to read tech articles written by an old curmudgeon like yourself. "Old is new again" etc. - that might be not only entertaining but also enlightening.
Go for it, and who knows?
You served well, thank you for all the science and nice pictures!
Imagine you are a novice investor in some research. You eventually get your research results, and they appear to provide an innovative solution to the problem you paid to research in the first place. You go to patent your solution and then move on to production, happy that your investment will bear fruit, only to discover there are many related patents and there is no way to make a viable product without encroaching on someone else's "do it with a computer" patent or some such.
So, the next time you invest your money in research, you spend more and more money researching patents in the related areas, rather than the science involved. As a result you spend much more money, but at least you get to profit from the science you invested in - so it is money well spent!
Or perhaps I am entirely wrong about it.
Very appropriate, too
... or you could just buy a new Android phone with a built-in physical keyboard
Mine's the one with a BlackBerry in the pocket.
Well, you cannot deny that Adobe has obviously committed resources to issuing all these releases and bugfixes. I said "commitment", not "quality". The first is a prerequisite for the latter, but is not sufficient.
I am not speaking for Java - I have not used the thing for many years. However, regarding the question "why hasn't there been a new update" - updates are seen as a sign of commitment from developers.
This is a good thing, and sometimes necessary, because there is no such thing as "software without bugs". Well, maybe in fables, or on nice-looking "software development cycle" charts (which are the same thing as fables). In reality, software development is messy and entirely dependent on the human ability to comprehend both the means (i.e. the code) and the goal (i.e. the domain), and also to communicate both with other humans (team and users) and with compilers (i.e. by writing code).
This is sometimes very tricky and the results are never perfect. Demanding a constant stream of updates is simply a very human way of dealing with it, demonstrated by users who (at least intuitively) understand these things - for example because they are used to seeing multiple "fixed ..." items in every set of release notes.
You also need to direct the input to the appropriate machine, and how do you know which machine that is? For this you need the concept of input focus and the switching of that focus. This can be tricky and requires some OS-level smarts. Also, there is the old problem of compatibility with a wide range of inputs - for example, the WEY-TEC USB Deskswitch II does not work with Topre Realforce keyboards, even though both devices appear to be entirely standard/generic stuff. This is because a "dumb" KVM is unable to negotiate the keyboard layout with this particular model, something an OS will normally do with ease.
EDIT: Just found there is a very nice document explaining how this thing works, at http://people.eng.unimelb.edu.au/tobym/papers/cddc-acsac2016.pdf . It appears at least one of the authors is an avid El Reg reader, hence icon ->
... I am glad that the bureaucrats are taking their time with this one.
In the spirit of the previous reply - if you use "specialists" to describe people who feel they no longer need to learn new things and are unwilling to contribute to their own future development, then IMO they deserve everything they get.
I was told by a doctor that beer helps to avoid this very painful issue. Perhaps you do not drink enough of it?
I bet that IPv6 opponents will continue to insist that NAT improves security.
You can tell them that your head predates 1980 (hey, I'm making an assumption here, ok?) and you are still using it and, in fact, you consider it better than some newer models of human heads.
... or, if you do not like VPN, you can run ZeroTier on your laptop, with another machine with IP routing enabled on the other end, in a nice controlled environment. Like your home. Or at least someone's cloud which you feel is more trustworthy than the environment your laptop is currently attached to.
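For illustration, a minimal sketch of that setup (the network ID and the addresses below are made-up placeholders, and it assumes ZeroTier is already installed on both machines):

```shell
# On both the laptop and the trusted box: join the same ZeroTier network
# (replace the placeholder with your 16-character network ID)
sudo zerotier-cli join <network-id>

# On the trusted box only: enable IP forwarding so it can route packets
# between its ZeroTier interface and its local network
sudo sysctl -w net.ipv4.ip_forward=1

# On the laptop: send traffic for the trusted network via the box's
# ZeroTier-assigned address (both addresses here are examples)
sudo ip route add 192.168.1.0/24 via 10.144.0.1
```

This way the traffic leaves the untrusted local environment inside ZeroTier's encrypted overlay and only hits the open network at the end you control.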
Don't attribute to malice that which is adequately explained by stupidity
I think this applies to AI training even more - unless the inputs are sanitized (who does that, and on what basis, exactly?), the training will reflect all the usual biases like racism or misogyny, often with some unexpected twist or emphasis (for example, failing to recognize some faces)
www.theregister.co.uk www.theregister.com forums.theregister.co.uk m.theregister.co.uk
... but it does not mean that this user is an avid El Reg reader! Or at least, that's how Mozilla would have it.
@Just Enough you make a good point, but I think this does not apply to internet ads.
@veti - you forget survivorship bias. These people tend to forget failures, because their future money does not depend on them. Instead, their money depends on an optimistic outlook for future ad campaigns. Unless the money tap is closed, they learn nothing, as there is no incentive to do so.
I have had an unpleasant experience with D-Bus failing and I'd rather keep it away from the filesystems I use, because it was impossible to troubleshoot properly and even a clean system shutdown was difficult (I have enabled SysRq on my system since then). Also, D-Bus is a higher abstraction than a filesystem, so making it a critical dependency in the management of a filesystem turns the system's dependencies upside down, making it more difficult to recover when things go wrong. I think this is a very rational evaluation.
This is why I'm using Arch - my Linux kernels follow upstream kernel releases exactly and are easy to build with my own configuration, with a fresh version picked straight from www.kernel.org.
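For the curious, the vanilla-kernel workflow boils down to something like this (the version number is just an example, and reusing /proc/config.gz assumes the running kernel was built with CONFIG_IKCONFIG_PROC, as Arch's stock kernels are):

```shell
# Fetch and unpack a vanilla kernel straight from kernel.org
curl -LO https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.14.tar.xz
tar xf linux-4.14.tar.xz
cd linux-4.14

# Start from the running kernel's configuration, accepting defaults
# for any options that are new in this release
zcat /proc/config.gz > .config
make olddefconfig

# Build, then install the modules and the kernel itself (as root)
make -j"$(nproc)"
sudo make modules_install install
```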
"if during the write something goes wrong and the data gets corrupted" - that's one case I do not really care much about, because both the software stack and the controllers are pretty good at avoiding these kinds of errors (as long as the hardware can be trusted - but then ECC memory is not really that expensive and no one really has to overclock)
The kind of bitrot I care about is storing my personal videos, pictures, ripped CDs or other data worth archiving on magnetic media, which then silently get corrupted a few years down the line. If the data is stored on ZFS with redundancy, then not only will the error be detected, but the original data will also be silently restored from the redundant copies. With filesystems measured in terabytes (like your usual archive of DSLR RAW pictures and a small library of ripped CDs), this kind of bitrot is all but inevitable. Which is why I'm using ZFS with mirrored disks (and my offsite backups are also on ZFS, although my filer is Linux and the backup box is FreeBSD)
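As a sketch, that setup amounts to something like the following (the pool and device names are placeholders, not my actual layout):

```shell
# Create a mirrored pool: every block is written to both disks and
# stored with a checksum, so silent corruption on one disk can be
# detected and repaired from the intact copy on the other
sudo zpool create tank mirror /dev/sda /dev/sdb

# Periodically walk the whole pool, verifying every block against
# its checksum; mismatches are repaired from the mirror automatically
sudo zpool scrub tank

# Report any detected (and repaired) checksum errors
sudo zpool status tank
```

A regular scrub is the important part - without it, a corrupted block may sit undetected until you finally read the file, possibly after the other copy has rotted too.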
OpenZFS is natively available on FreeBSD. It is the same OpenZFS which is also available for Linux, under the "ZFS on Linux" project, and which has been included in Ubuntu since version 16.04. The parts of OpenZFS which interact with the Linux kernel do so via modules called the Solaris Porting Layer, which are licensed under the GPL (and not the CDDL).
ZFS provides strong checksums for protection against bitrot - which is one of the main reasons why people use it. Neither LVM nor XFS provides such protection, hence I do not see that combination as a viable replacement for ZFS. With increasing data storage needs (but without a corresponding increase in medium reliability), protection against bitrot is only going to get more important, so the whole direction seems a bit of a non-starter to me.
hmm .... this filesystem has a dependency on D-Bus. I will ignore it. I do not want to wake up in a world where 1) it gets integrated into systemd and 2) distributions agree that it's the only filesystem their users will need.
As you surely know, the combination of ZFS+Linux is currently being challenged. It may yet turn out to be a valid and legal combination and, frankly, I see nothing wrong with such an outcome.
Wrong. They created synthetic DNA which, when sequenced, produced a dataset which in turn allowed them to pwn the computer doing the processing. Admittedly it was due to a bug they had inserted into the software themselves - so more like a backdoor, to which the actual strand of DNA was the key.
Hacking your own DNA to hack the sequencing machine when you are swabbed
Surely there is nothing to worry about. Mine is the one with 3 sleeves, thank you
I want to know: 1) how do telemetry controls work in this version, 2) can I pick and choose updates and when they are applied, 3) does it have the Linux Subsystem, 4) can I disable Cortana and all Store Apps (per user), 5) does it have any stupid limitations compared to the Pro version, 6) what's the price of an upgrade from Pro, 7) is a free trial available? I will consider buying if (and only if) I like the answers.
US prosecutors seem very keen on charging teenagers (remember Aaron Swartz) under the pretext of "unauthorized access to a computer network". Well, this:
". . . and communicates them to our servers through a secure, undetectable channel"
... seems very much to be the definition of unauthorized access.
Not holding my breath, though.
yeah, pull the other one
It was probably the same rogue developer who wrote the "acoustic" emission control code for VW's Bosch diesel engines.