Who came up with the list of OSs? And in what decade?
Chrome OS is now as common as Mac OS, and *way* ahead of any BSD, let alone the Amiga.
I am not doubting the findings that Ireland and Luxembourg granted excessive tax reductions to Amazon and Apple.
However, I am disturbed by this idea that a commission can just retroactively impose taxes, even if the bills are being given to companies who can afford it.
If you paid your taxes in year X and the government you paid them to agreed that you had paid in full, then collecting more later should require a trial in which the taxpayer is found guilty of having deliberately misled the government in some way.
If the EU wants to find Luxembourg guilty and assess Luxembourg for the taxes that should have been collected, that would make sense. But this is telling all companies doing business in the EU that they
can never know what their true tax rate is; the EU can change the rules ex post facto.
When you listen to the breached companies lament, you would get the impression that cyber-security is impossible.
But consider for a moment that we do not have routine electronic looting of bank accounts, or of the confidential files held by law firms for their clients. Somehow *those* records can be kept secure.
Nobody designs their bank accounts so that a single password lets a thief abscond with the entire assets of a company, but apparently that is all it takes to steal all of the data held about consumers. But that's understandable; cash has real value that needs protecting.
That's not allegedly misleading, that's theft plain and simple.
A one time $20 cancellation fee might have been reasonable, and something a consumer
might reasonably expect. But no customer would reasonably expect that cancelling a service
would not reduce their bill.
What was so urgent that they had to arrest him at the airport?
Did they have any reason to doubt that Britain would extradite him after they issued a
thoroughly vetted indictment?
This at the minimum suggests to me that the prosecutors are not confident in their case.
The justification for Common Carrier regulations is that it would never make sense to build two or more parallel replicas of physical structures. The "Free Market" is not dynamic enough to build a second railroad into a town when the first railroad owner could just end their overpricing as soon as the second railroad was in place. Rather than letting these common carriers have a natural monopoly we regulate them.
The left has been contending that ISPs have a natural monopoly on providing broadband service, which therefore should be regulated as a common carrier. Perhaps Mr. Bannon has forgotten that the Republican party is against this classification. I think it makes sense, but I can see this as an area of legitimate debate.
I can even construct an argument that the Internet has created a new type of natural monopoly, where the advantages of having over 50% of the market make it impossible for anyone to compete with you to ever get a larger share. The argument is that the value of the data that Amazon/Google/Facebook can gather cannot be replicated.
But whether this creates a monopoly, as opposed to an oligopoly, is still open to debate. In any case there is absolutely no rationale for protecting the ISPs from being regulated as common carriers while regulating the Googles/Facebooks as common carriers - other than the political parties they choose to donate to.
That is exactly the grounds on which some lawyer will demand the right to review the NIT code.
There is a very simple test that judges should be applying to these cases. If law enforcement is
doing something that would get someone else arrested then they need a warrant.
I don't have the right to sabotage any web client that visits my site with malware. Neither does
the FBI. Requiring a warrant limits the scope and volume of such infections, and therefore cuts
the risk of collateral damage to the computers of law-abiding citizens.
I believe the correct response is to return the email to the sender, indicating that you had only read
enough to identify that this was not work related.
Then remind the sender that personal messages should be sent using personal email.
Nobody will believe that you didn't read the whole email, but the message that you don't
appreciate having personal stuff like this get misdirected to you will have been delivered.
Small clusters can use centralized metadata to manage storage very quickly.
Centralized metadata can support both POSIX and object semantics.
But when you scale up the metadata must be distributed, making efficient support of POSIX untenable.
That said, there is no inherent reason for "file servers" to be faster than "object servers".
The metadata operations required to put a new version of a 10 MB document are actually
less demanding with object storage than with file storage. The only advantage file APIs have
is their familiarity to developers and the maturity of the code base implementing POSIX APIs.
Neither the existing code nor the APIs will scale to the degree that object APIs allow, however.
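To make the claim about the 10 MB document concrete, here is a toy model (my own illustration, not any specific product's code) contrasting the metadata work needed to replace a document through a POSIX-style path against a whole-object PUT. The operation counts are illustrative assumptions, not measurements.

```python
# Toy model of metadata operations for replacing a 10 MB document.

def posix_replace_ops(path, num_writes):
    """Each path component must be resolved, then every write() touches
    inode metadata (size, mtime) plus block-allocation state."""
    components = [c for c in path.split("/") if c]
    lookup_ops = len(components)      # directory lookups, root to leaf
    per_write_ops = 2 * num_writes    # inode update + allocation per write
    return lookup_ops + per_write_ops

def object_put_ops(key):
    """A whole-object PUT needs one key->location mapping update; the new
    version becomes visible when that single record is swapped."""
    return 1

# Replacing /projects/q3/report.doc, 10 MB written in 1 MB chunks:
print(posix_replace_ops("/projects/q3/report.doc", num_writes=10))  # 23
print(object_put_ops("projects/q3/report.doc"))                     # 1
```

The exact counts depend on caching and the file system in question; the point is that the object path touches a constant amount of naming metadata while the file path's cost grows with path depth and write count.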
RDMA security relies on network level security *and* the local interface only enabling access to memory
as explicitly granted by the user. Both InfiniBand and iWARP APIs require this. The RDMA device does
not have access to any portion of physical memory it wants.
I would be surprised if someone designed a new API which did not match this security.
Applications can of course make application layer security mistakes. An RDMA interface is only
more vulnerable to the extent that eliminating bottlenecks would allow the applications to do their
work, and their mistakes, faster.
The real question is whether Apple has carelessly left a hole in their security.
You should not be able to update the firmware without entering the password or doing a full
factory reset first. If Apple's firmware allows itself to be bypassed then the "guess limit" never
really did any good anyway. Apple should have been allowing longer PINs to make brute force
attacks infeasible even without this firmware assist.
But forcing Apple to disclose this detail about potential flaws in its designs on the theory
that this *might* unlock information that is useful to the FBI strikes me as a real stretch.
If Apple confirms that such an attack is possible then hackers will inevitably figure out how it is done.
Meanwhile, I doubt that an iPhone has unbreakable *physical* security. The FBI, on its own dime,
should be able to *clone* the memory and then run a series of 10-try batches until they've tried
all 10,000 PINs. They do not need Apple's help to do that.
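The clone-and-retry attack can be sketched as follows (a hypothetical model of my own, not working forensic code): snapshot the flash, burn through the guess limit, restore the snapshot to reset the counter, and repeat until the PIN space is exhausted.

```python
import itertools

def brute_force_with_clone(secure_check, guess_limit=10, pin_space=10_000):
    """secure_check(pin) models the device: it would wipe after guess_limit
    tries, but restoring a memory clone resets the counter each batch."""
    pins = (f"{i:04d}" for i in range(pin_space))
    attempts = 0
    for batch in iter(lambda: list(itertools.islice(pins, guess_limit)), []):
        # "restore the clone": try counter is reset before each batch
        for pin in batch:
            attempts += 1
            if secure_check(pin):
                return pin, attempts
    return None, attempts

correct = "4821"  # arbitrary example PIN
pin, tries = brute_force_with_clone(lambda p: p == correct)
print(pin, tries)  # finds 4821 on attempt 4822, despite the 10-try limit
```

At worst this is 1,000 restore cycles; even a slow restore process makes 10,000 PINs tractable, which is why longer PINs matter more than the guess limit.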
The slickest piece of evasion here is talking about files as though they were the same thing as Objects, just with different metadata attributes.
The fundamental difference is that files are updated very differently than how objects are updated.
Objects are put, as a whole, in a conceptual instant.
Files have a series of writes applied at different offsets, each conceptually at one instant. But there are
complex rules governing when each of those updates may become visible to others reading the file.
If you access files as though they were objects, which most file access does, then it is easy to map a file to an object. The only difficulties come when you play games like renaming hierarchical directories.
Figuring out exactly how much of the file system semantics to support with objects is a tricky issue. Most end users do not yet fully understand the tradeoffs so it is not surprising that we do not know their preferences yet.
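A toy sketch (my own illustration) of both points: whole-file access maps cleanly onto object GET/PUT, but with path-as-key naming, a directory rename is the game that hurts, since every key under the old prefix must be rewritten.

```python
class ObjectStore:
    def __init__(self):
        self.objects = {}  # key -> bytes; each put replaces the whole value

    def put(self, key, data):   # whole-file write == single object PUT
        self.objects[key] = data

    def get(self, key):         # whole-file read == single object GET
        return self.objects[key]

    def rename_dir(self, old, new):
        """The expensive case: every object under the prefix is re-keyed."""
        moved = [k for k in self.objects if k.startswith(old + "/")]
        for k in moved:
            self.objects[new + k[len(old):]] = self.objects.pop(k)
        return len(moved)  # cost grows with file count, not O(1)

store = ObjectStore()
store.put("docs/a.txt", b"hello")
store.put("docs/b.txt", b"world")
print(store.rename_dir("docs", "archive"))  # 2 objects re-keyed
print(store.get("archive/a.txt"))           # b'hello'
```

A POSIX file system makes the rename O(1) by updating one directory entry, which is exactly the hierarchical naming metadata an object cluster does not keep.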
This sort of double-talk from SwiftStack does not help the process.
Promoting the placement of its clumsy ring on flash is particularly pathetic. The ring is a data structure that is not needed; its main impact is to delay when the cluster permanently adapts to changed membership. Optimizing your Achilles heel does not make it a feature.
A file or object storage system stores both metadata and data.
Having the storage server, or even a "smart" drive, store the metadata related to precise locations makes sense. But it is hardly a new concept.
But a full local file system has a lot of naming metadata, which is handled by the object cluster. Writing that redundant metadata wastes CPU power, disk space and disk writes. It also creates more links that can be broken, with no way to use the redundant information for repair. It's a lose-lose proposition.
Using a pre-existing local file system may be a development trade-off worth considering, but it is far from something to boast about.
I've already posted a comment that I think that the IETF's tone is far superior to other standards bodies and especially to most open source projects. So it really isn't a high priority target for improving things.
But, I have to respond to this. Implying that being listed as an RFC author implies merit is nonsense.
And yes, I am an RFC author.
But my work on RFCs represents a small part of my technical work. What percentage of your work ends up published in an RFC depends on a wide variety of factors, such as who your employer is. None of these factors are correlated with how good a designer and/or writer you are.
People (both men and women) are included or excluded from being listed as an author for political reasons. I know I was included as an author on some I-Ds from relatively light contributions, but only acknowledged on the RDMA MPA draft despite making major contributions. This is probably all very similar to the process of who gets listed as an author on a research paper. There are many factors, and the number of RFCs is still a bit too small to be proving things by statistical analysis.
While no technical body is perfect on these issues, my experience has been that the IETF gets closest to functioning as a true meritocracy. I've been involved in several standards groups, including IETF, and several open source projects. The IETF is far ahead on the tone of discussions and fairness.
There is some amount of "Boy's Club", yes. But compared to open-source communities it is nothing.
If we could get the discussions on most open-source projects to be as respectful and professional as those in the IETF we could declare victory for the decade.
Internal traffic within a storage cluster is carried over a VLAN, frequently configured to use no-drop Ethernet. This is how FCoE works. Everything that works for FCoE works for UDP.
You still need to confirm successful transfer, but that is true with connection-oriented transports as well.
16 or 32 bit protection is totally inadequate when trying to manage petabytes of data.
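Some back-of-the-envelope arithmetic supports this: an n-bit checksum lets roughly 1 in 2**n corrupted frames through undetected. The frame size and corruption rate below are illustrative assumptions of mine, not measurements.

```python
# Expected silent corruptions when moving data past an n-bit checksum.

def expected_undetected(bytes_moved, frame_size, corruption_rate, checksum_bits):
    frames = bytes_moved / frame_size
    corrupted = frames * corruption_rate
    return corrupted / 2 ** checksum_bits  # expected undetected corruptions

petabyte = 10 ** 15
# Assume 1500-byte frames, one corrupted frame per million (illustrative):
print(expected_undetected(petabyte, 1500, 1e-6, 16))  # ~10 silent errors
print(expected_undetected(petabyte, 1500, 1e-6, 32))  # ~0.00016
```

Under these assumptions a 16-bit check passes about ten corruptions per petabyte silently, which is why large-scale storage needs stronger end-to-end checks.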
Custom congestion control building upon negotiated reservations can start the transfer at full wire speed. You cannot do that with any connection-oriented transport.
The bottom line is that there is very little that can be done to optimize WAN transfers that cannot be done with an improved TCP congestion algorithm.
Within a datacenter, or an in-house corporate intranet, multicast can indeed be a very useful solution.
Not only do you get the obvious multiplier effect when replicating, multicast also enables dynamic load balancing when selecting targets.
When dedup is integrated into the file system (such as in ZFS, or apparently now Windows 8) there is no "dedup equipment" to be separated from the files. Any disk array is meaningless if you don't have a copy of the file system that filled it anyway.
When you are talking about "one corrupt database" ruining your entire set of stored data you are assuming that there is a "dedup database" that is somehow separate from and not integrated with the file system directories themselves.
Of course your file system directories need to be sufficiently robust, and that is something you should kick the tires on with any FS vendor.
The other issue is whether dedup is necessary. The justification for saving raw disk space loses some appeal every year. But in the long run, dedup will also reduce network traffic. This is already true to the degree that backup traffic can be reduced, and will be more as distributed dedup algorithms become more widely deployed.
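A minimal content-addressed sketch (my own illustration) shows both effects: duplicate chunks cost no extra space, and a backup client that learns the server already holds a chunk's hash need not resend it.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> chunk bytes

    def put(self, chunk):
        digest = hashlib.sha256(chunk).hexdigest()
        was_new = digest not in self.chunks
        self.chunks[digest] = chunk  # duplicates land in the same slot
        return digest, was_new       # was_new == False: nothing to transfer

store = DedupStore()
_, new1 = store.put(b"monday backup block")
_, new2 = store.put(b"monday backup block")  # unchanged block next day
print(new1, new2, len(store.chunks))  # True False 1: stored once, sent once
```

Distributed versions of this idea exchange digests before data, which is where the network-traffic savings come from.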
Biting the hand that feeds IT © 1998–2019