Re: Why write bandwidth matters
40 hours to write - that means you can't actually reach its wearout limit in the warranty period. So basically the warranty is back to time-based - 5 years.
Round and round we go...
Au contraire monsieur.
Sir Loin of Beef had a 3Par 7250 model QZF rev 1 with seven UFS400 rev B drive trays cabled in configuration A.
The ATO had a 3Par 7250 model QZF rev 2 with nine UFS400 rev C drive trays cabled in configuration B.
These are totally different and in no way related, so the failures are completely unrelated and unique, have never happened before and HPE are totally telling the truth.
The first thing that came to the minds of SWMBO and Daughter was the song from the movie. The one parodied nicely by the Simpsons (Burns: "See my vest, see my vest, made from real gorilla chest...").
But perhaps I am the only one who remembers them shoving it down everyone's throats.
I can't surely be the only one thinking it's not a coincidence?
Technically they're still not wrong...
Not necessarily. http://www.ntp.org/ntpfaq/NTP-s-time.htm#Q-TIME-LEAP-SECOND is prescriptive, but basically when the extra second is inserted you end up with 23:59:59 --> 23:59:60 --> 00:00:00 (that's the leap second) instead of the normal 23:59:59 --> 00:00:00. After that it's up to the kernel (if it knows how to handle a leap second); otherwise NTP will slowly slew back into sync.
It's simple, time never goes backwards. I mean seriously, this is a solved problem, and someone didn't understand enough about time.
Come now, SURELY you'd have to call them Crunch bars.
(https://upload.wikimedia.org/wikipedia/en/thumb/f/f3/Nestle-crunch-small.jpg/330px-Nestle-crunch-small.jpg for those who don't know of them)
This is the most egregious doublespeak I've seen for some time.
Advertisement says you can order a self-driving Uber. Company turns around and says "Nah cuz, not self-driving because someone is in the driver's seat".
Bet some of the Americans are begging for their equivalent of the Advertising Standards Council (or Board, or whatever they are this month).
If it stops and starts by itself, changes lanes by itself and uses sensors and cameras to identify what to do, it's a bloody self driving car.
You all know what I mean. Where's the emails (on either side) showing that they are telling the truth?
My employer keeps everything for 7 years. If this involved them, there'd be plenty of people involved in the project saying "Well, $Boss, here's the emails where we told them and they told us we were wrong" or similar.
The lack of any evidence on either side produced by any of the parties involved just SCREAMS incompetence or collusion.
IBM, ABS, et al: Which will it be?
Agreed. We can only hope the ACCC says "Well, you're acting like a monopoly by refusing to let people see the T&C's without a confidentiality agreement, so we're going to assume you ARE a monopoly, and that your T&Cs entrench your monopoly, oh and by the way, since you're a monopoly you can no longer enforce regulated pricing on iPhails by taking advantage of the loophole for selling on consignment."
What rubbish. When you don't buy any servers your lack of purchase is already hidden (and the servers which weren't purchased would have been not-purchased many months ago anyway).
Um, no, a cluster consists of eight hundred and eighty (multi-socket, hundreds-of-GB of RAM and stack of disks) hosts.
At least, as far as I understand it.
So conceptually at least an Azure cluster is potentially 10,560 cores, 440TB of RAM and a few PB of flash and disk, all probably connected with something like 10GbE for networks and 56Gb Infiniband for HA and storage interconnects.
The above is conjecture, I have no direct knowledge.
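For anyone checking my back-of-the-envelope maths, here's how I got to those totals - the per-host figures below are purely my assumptions, nothing published:

$hostCount    = 880
$coresPerHost = 12     # assumed: dual-socket, 6 cores per socket
$ramPerHostGB = 512    # assumed: "hundreds of GB" per host
"{0:N0} cores, {1:N0} TB RAM" -f ($hostCount * $coresPerHost), ($hostCount * $ramPerHostGB / 1024)

Plug in different per-host numbers and the totals move accordingly, obviously.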
Of course it's exposed to PATRIOT. The US has proven it doesn't care where the data is, just that it's stored on equipment managed by a USanian company (see also Microsoft Ireland vs US Govt where it's not even the US company, but an international subsidiary of the US company and the USG still claims access).
Oh well then it's fairly simple, you're just "wrong". See, there's no scenario that a one-size-fits-all solution (like Cloud) doesn't suit perfectly.
Want offsite and archived backups? No, you don't. You don't have a need for them. They're archaic - just replicate the data, it's the same thing, because no-one would ever accidentally delete something, and you never have a regulatory or legal requirement to show "the way it was" 20 years ago.
Want the ability to work when your office Internet connection is on the fritz for a week because someone cut 2,000 fibers tearing up the main road to replace it with a tramway? Pfft. That could never happen.
Work somewhere you don't have 1000Mbps Internet? No, no way - there's no such place on Earth!
Oh, you think you have a critical application that can't be licensed for the Cloud? Just throw it out - you don't need it even if it runs your entire business!
Besides - you aren't one of the high priests of IT, so bow to your betters. "Mainframes in space" was never a more apt description; we're right back in the 70s with longer console lines to someone else's computer.
Symmetric plans were on the NBN roadmap for 2014 - and would have been no problem had we stuck to the FTTP rollout (which has also reduced costs as predicted it would, though I'm not 100% sure the numbers match perfectly).
No disagreement from me, although I wasn't certain that upgrades to the fiber on that scale wouldn't need changes inside the ... FSA? FDA? FSAM? rather than on the borders.
I'm with you 100%, but we won't win the war if we're not honest - while it's true there are no DOCSIS upgrades, there will still be upgrades to the fiber infrastructure - we can't deliver 10Gbps+ otherwise :)
Allow me to scare you then.
What if the next version of $EncryptMalware has functionality to set and change the encryption password for your backup?
So now all your offsite tapes are encrypted with a password you don't know. Want that data back, do you?
Jesus that even scares me. No, no forget I said it.
And the latest GWX disables those policy settings "if they've been manually set" (I believe that's the term that was used).
Now I'm guessing that means if you use gpedit.msc to set them, you're fine; but if you set them with a script, or a GPP, or manually with regedit, they'll be disabled - and your administrative changes get overridden by some dickwad manager at Microsoft.
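For anyone wondering which settings we're talking about - these are (if I'm remembering KB3080351 correctly) the registry values behind the policy, and this PSH snippet sets them directly without gpedit, i.e. exactly the kind of "manually set" change that apparently now gets overridden:

# DisableGwx suppresses the nagware icon; DisableOSUpgrade blocks the upgrade via WU
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Gwx' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Gwx' -Name 'DisableGwx' -PropertyType DWord -Value 1 -Force | Out-Null
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -Name 'DisableOSUpgrade' -PropertyType DWord -Value 1 -Force | Out-Null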
Even large orgs are going to get hit with this, as there will doubtless be non-domain-joined PCs in weird parts of the network, probably with direct WU configuration rather than reaching through firewalls etc for WSUS. Yes, they COULD do it another way, but when you have to allow a DMZ-connected PC doing some weird task to reach in through multiple firewalls to your WSUS infrastructure, or just let it hit the net (as it's doing for $Task) for updates - which do you think many will do?
I supported the first GWX - and my support disappeared as soon as it became clear that MS has completely lost the plot (reissuing new ways to nag). It's now BS of the highest order.
This doesn't make sense to me. You already have management of devices that Apple freely and willingly push as "personal, not for business" yet won't consider managing devices that are intended to be managed?
Either you have the ability to push out Windows updates (which includes Surface updates and BIOS), or you don't. Since I'll guess you do, you can update these tablets already. Also, you probably have some form of OS and app deployment (SCCM, Zen, ?????) which can push to any Windows machine. Because this is just another laptop/Windows machine. You manage it like anything else Windows in your fleet.
Also, I know several non-technical people who have purchased SP4 specifically because it's Windows and a tablet (in addition to the techies). No, it's not for everyone. But blanket disregard for change is antithetical to professional IT.
Oh, no I've worked with Cisco kit before - just not anything that requires a separate license to enter the "enable" command (and presumably to create a user with privilege level 15 or its equivalent).
Is it just me, or is the more surprising story here that getting root access (is this just full administration rights?) to these routers "legally", as the device owner, also requires payment of a license!?
Would you be more or less outraged if it was the unions who had engaged LM to investigate Walmart so that the protests could be more successful? Is it fair for Walmart to boost sales and profits by undermining union action, if the unions are unable to reciprocate? Is it fair for the unions to boost the chances of a protest, sit-in or flash mob succeeding and stopping sales by undermining the company actions (which do of course employ thousands of people)?
I don't know if the allegations of poor staff payment are true, but it seems to be consistent with other anecdotes I have heard about both Walmart and other US companies. Does that make the actions of either organisation more right or more wrong?
It seems to me both sides cry foul if their "free speech" rights are impinged - regardless of whether those rights are actually rights at all (interpretation of the constitution is a art [sic]). It would seem to be rather self evident that unions really have improved worker conditions, regardless of whether those same unions have perhaps gone on to feather, gild and diamond-plate their own nests afterwards.
The Big T can at least claim to be the biggest improver for October.
It therefore comes as no surprise that Optus' ... improved the most in October.
Oh, if only it were that easy. But apps (I'd say "real enterprise" if it wasn't so blatantly ridiculous) such as SalesForce don't even have a standard installer - just a web download that puts itself in %Temp% and installs into the user profile.
Central distribution? We can't have that, we need to be able to update our cloud app whenever we want, customer testing be damned.
And so we're all still allowing execution from %Temp% and AppData, which is effectively dropping daks, bending over a nearby table and being handcuffed to it with a sign saying "Please roger me with the spiked baseball bat behind you". Because the alternative is for no-one to be able to sell stuff and keep the business running.
But hey, what else can you do as a developer when those bloody admins insist on attempting to secure a machine? We have to do things the easiest way we can, security be damned!
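If you want a quick look at how deep that hole already is on a given box, here's a 30-second sketch (mine, adjust the paths to taste) listing the executables squatting in the user profile - i.e. everything a sane SRP/AppLocker path rule would break:

# Everything executable living under %LocalAppData% and %Temp% for the current user
Get-ChildItem -Path $env:LOCALAPPDATA, $env:TEMP -Recurse -Include '*.exe' -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty FullName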
I've narrowed it down to El Reg. Something is playing M Burns from the Simpsons, very badly, trying to say "Decisions, decisions". Comes out more like "De ... cisions ... dec ... is ....ions". Never heard what comes after, but I've got no clue where it is.
Actually - just heard more, for the first time ever - Decisions, decisions, so many gourmet ingredients f... <cut>. It was even on this "new topic" page, before.
Seriously guys, I understand you need to pay the bills, but this is exactly the sort of stupid ad that makes people find and install ad blockers. Random sounds? Auto-playing poor quality ... supermarket ads (I'm guessing here)?
Further, the implementation of (effectively) random words as root zones has no doubt caused extra costs to unrelated companies. I'll bet good money that there are hundreds or thousands of cases of internal domains being "duplicated" beyond just my own network at home. When I built it, 15? 18? years ago, .earth was a perfectly reasonable choice as it wasn't public. Now I can't even tell (programmatically) if I'm inside or outside the network by looking up nameservers for what was my internal space - it's no longer NXDOMAIN outside the walls.
And each one will require either some internal migration work (to something real that is "owned" or at the least to something that can never BE owned - good luck guessing today what ICANN will not decide to make available as a root domain over the coming decades). Or, "sorry no you can't browse that site at work because our internal network clashes with the Internet".
I'm not saying the individual costs are huge but when there are thousands of small impacts it adds up.
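By way of illustration, this is the sort of check that used to work and now doesn't (the zone name is changed to protect the guilty - treat it as hypothetical):

# Pre-gTLD-land-grab: NXDOMAIN for the internal zone reliably meant "off the network".
# Now the same query can succeed from anywhere, so the test is worthless.
try {
    $ns = Resolve-DnsName -Name 'mynet.earth' -Type NS -ErrorAction Stop
    "NS records found: $($ns.NameHost -join ', ') - but are they mine, or the registry's?"
}
catch {
    "NXDOMAIN - which used to reliably mean 'outside the walls'"
}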
I'd be surprised if the Xeon E3-1500M is anything other than one of the following:
* Rebranded Skylake cores (4C/8T), just like the past few generations of Xeon E3s (E3-12x0 and E3-12x5, V1, V2, V3 ...) have been rebranded i7s of their respective generations, with a 32GB ECC UDIMM ceiling
* Rebranded Xeon D Broadwell cores like the Xeon D-1520 (25W of 4C/8T CPU, capable of 128GB of RAM using RDIMMs), or the Xeon D-1540 (45W of 8C/16T goodness). Both of these are soldered-to-the-board solutions and there are supposedly a few new variants on the way.
ServeTheHome has some reasonable info here (among other articles) - they've been covering Xeon D in detail for weeks:
Well, no Trev, it's not a joke. Because it all comes down to how resources are allocated and used. I'm sure you've seen this post (http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx) explaining that with bigger servers, the .NET framework (which underpins much of Exchange nowadays) allocates memory and CPU threads inefficiently. After all, it was posted the same day as the updated calculator to which the article refers.
5000 x 5GB users on 3-4 servers is STILL not enough to hit more than about 15 cores (even on slightly older kit) but might need 128GB RAM. Or, you scale it out ONE more VM/node and get within the recommended guidelines. How many users do you want in a fault domain anyway?
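To put trivial numbers on that (these are just the figures from my example above, not output from the MS calculator):

$users = 5000; $mailboxGB = 5; $servers = 4
"Total mailbox data : {0:N1} TB" -f ($users * $mailboxGB / 1024)            # ~24.4 TB
"Users per server   : {0:N0} in one fault domain" -f ($users / $servers)
"With one more node : {0:N0} in one fault domain" -f ($users / ($servers + 1))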
Alternatively, if you're going to go to 20 and 50GB mailboxes, and a small number of massive servers, you need to understand that you'll run out of databases in the DAG before you run out of CPU and RAM resources (you're probably looking at SMALLER servers being needed but still with 100TB+ of disk each). Why allocate 64 CPUs and 256GB of RAM to a server that will end up running at 2% CPU and 5% RAM usage? And who would purchase those servers in preference to smaller/cheaper?
Even MS says to virtualize Exchange if you're planning to deploy massive hardware platforms. It's right there in the article. It's also very clear that bare-metal 2U commodity servers are the way they deploy at (much larger) scale, far beyond 99.9% of organisations' internal deployments.
But hey, the software's architects, the support teams who troubleshoot this stuff day in and day out, and the guys who have deployed that system for multiple millions of users - they don't know what they're talking about. But some VMware guy - I guess he must be an _expert_.
Seriously, what is it with settlements - every time I see one I see something like "BlechCo will be required to bend over and spread them, pay $thousands in fines, tug the forelock whenever the wind blows, yet admit no wrongdoing".
If they're that adamant there was no wrongdoing, why on EARTH would they agree to fines and constant monitoring; and if the Govt/Attorney General/Regulator was that sure there WAS wrongdoing (and hence the fines and useless controls) why would they agree to allow the company to escape admitting they did the wrong thing?
What am I missing here?
NET USER Bob * /DOMAIN
<enter password twice>
Or if you insist on PSH:
Set-ADAccountPassword -Identity Bob -Reset
<enter password twice>
And the way you get first line to work with it is you provide processes and/or your own toolset (e.g. a central website that audits and logs resets, runs PSH in the background, and does all the other first level stuff).
You could run up a .HTA app with jQueryUI in a couple of days that does this sort of thing - I know, I've done it recently to provide a front-end for USMT plus data backup and restore, plus password resets and computer moves.
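For the curious, the "runs PSH in the background" part is only a few lines. A minimal sketch - the function name, log path and the unlock step are all my own invention; the audit trail is the point:

Import-Module ActiveDirectory

function Reset-HelpdeskPassword {
    param(
        [string]$User,
        [string]$Operator,
        [securestring]$NewPassword
    )
    # Log who reset whom, then do the reset and clear any lockout
    Add-Content -Path '\\server\share\reset-audit.log' -Value "$(Get-Date -Format s) $Operator reset password for $User"
    Set-ADAccountPassword -Identity $User -Reset -NewPassword $NewPassword
    Unlock-ADAccount -Identity $User
}

The front-end just collects the inputs, runs that, and shows the result - nothing first line can break.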
I wonder if one way to attack the problem is to take all of a given conglomerate's financial reports into account as follows:
* US Parent Company reports total gross profit of $30B for a financial year (pre EBITDA)
* US Parent Company reports "$Country operations were fantastic" in producing 10% of turnover (probably hard to enforce/deduce - you'd need SEC etc to require this - but that doesn't sound impossible to get happening)
* Resulting deemed profit for taxation purposes is $3B for $Country at $Country tax rate, less only purely in-$Country costs (i.e. no overseas transactions - this is hard to enforce too)
I know there are wrinkles and problems there, and it wouldn't be easy to police (I'm sure the first thing $corp would do is set up several dozen "arms-length" suppliers to whom it pays $X, and who buy "stock" from $corp at 99.99% * $X). It would permit a company making that $3B of profit to still deduct its local expenses before paying tax on the final "local profits". It lets local businesses (who employ other people who spend money, generating the economy!) compete on a more level playing field.
It takes all the actual real costs to the business into account and taxes only (approximately) the profit created by a given country. Faked ... er, I mean, totally legitimate real reasonable expenses for "IP Rights", "Redistribution Rights", "Marketing", "Support", "Brand Awareness" etc are all taken into account before taxation occurs (in each country). And on the surface it seems like the tax burden in a given country would be ... if not perfectly aligned, at least close to the level of profit. Which ensures that the corp is paying the taxes that, for example, build roads and provide public transport which allow people to get to their stores and buy their stuff.
I'd be interested to hear all the holes that no doubt exist in this approach too :)
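To show the shape of the calculation with toy figures (mine alone, not anyone's actual accounts):

$globalGrossProfit  = 30e9    # parent company's reported gross profit
$localTurnoverShare = 0.10    # "$Country operations were fantastic" = 10% of turnover
$localCosts         = 0.5e9   # purely in-$Country costs, deducted before tax
$localTaxRate       = 0.30
$deemedProfit = $globalGrossProfit * $localTurnoverShare   # the $3B attributed to $Country
"Tax payable locally: {0:N0}" -f (($deemedProfit - $localCosts) * $localTaxRate)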
Yes, you read that right - the government has admitted that overseas service providers, cafes and many other organisation types won't be required to store any metadata nor make what they do store available without a warrant.
So if you want to continue your nefarious plans, you have to ... make sure you don't use your ISP email, and log on to your TerryWrist198109@gmail.com account at Starbucks et al (or at work).
This needs the "Genius" meme applied liberally. Or perhaps the heavy-handed and repeated application of a hammer to the privates of each person proposing it.
I wonder if this will be the typical "Here's an update to your app requesting you drop your pants, no you cannot decline the permissions only the update" request for access?
Google's permissions system sounds great, but the inability to tell an app to go f... er ... to reconsider its requests for access and just fail without them, or be provided with fake data, just cements the view that Google is only pretending to give a you-know-what about privacy.
Paris because even SHE knows more about privacy than Google.
Where are you seeing those 10Gb switches (and what are they?) I've not seen any 10Gb kit for less than 80% of current retail except for the odd PCI-e card here and there.
MS is even jumping on the convergence bandwagon with the next Server platform supporting something like this (can't comment yet on exactly how, I'm still downloading the ISO). But it looks like MS wants to eat some of Nutanix and their competitions' lunches.
See this for info http://technet.microsoft.com/en-us/library/dn765475.aspx (Storage Replica). I'm guessing it'll be file-based and synchronous commit, but until I build one...
AC: It _was_ set to the secure setting by default! MS are admitting they never should have allowed the mentally incompetent to de-secure it.
And there's no valid reason for it to be an option. Learn to do servers securely (both devs and admins) or not at all, and just bloody deal with it properly. About the only thing left to do is scour Stack Exchange / Server Fault for all references to it, and down-vote to hell any answer which says to set the option.
I see SAN-free in the title of the story but no mention of block access (only file). I'll bet it's not SMB3 either. So it's not going to be killing off the SAN any time soon, and probably not the NAS either since you're talking about getting the data off PCs with local disks that are already using many (most?) of their IOPS to support the local user.
So this is a low IOPS store with 4 replicas per file according to the FAQ. With 25 workstations per store, 200GB each, you get just over 1TB of NAS which is continually restriping as people turn machines off for the night or restart, can't be backed up effectively, is slow in use - and depending on how it's done, could be useful only for small files?
Yeah, that'll take off.
Quote: If enough people dump Chrome and answer the exit questionnaire thoughtfully provided by Google, maybe things will get fixed.
Hi there, hope you're well. My name is Billy Bob Joe. I saw your comment and I have some excellent waterfront property going for a steal in Florida. I need to sell ASAP for personal reasons.
But seriously, I have exactly zero confidence that those results go anywhere but the bit bucket. I strongly suspect they don't want intelligent, thoughtful users - those guys are a pain! They keep pointing out our privacy and security gaffes, and then worse - they tell ORDINARY PEOPLE! We can't have ORDINARY people understanding our data slurping!
It probably should be, but the transcript does say 17TB. Meh, I prefer to think of EMC going away.
OK I'm going to ask the question here.
A stack of experienced VMware guys (above) don't see any problem with the 2TB VMDK limits. How do you do host-based backups and VM snapshots? Don't you need to have synchronicity between all of the vdisks? How do you ensure the SAN admin doesn't nuke the RDM snap before the VM snap (if that's even possible)?
And you'd lose Storage vMotion as well, right? Isn't that supposed to be a killer feature? It seems to lock VMs pretty thoroughly to the specific array.
And yet, no sign of a replacement for P2V, removed from VMM 2012 R2. Which was removed, as I recall, so that it could be updated and released on an accelerated schedule.
Apparently the intended acceleration vector points backwards.
So I'll buy one for my datacenter and copy files to that filesystem with Explorer, right? What do you mean it's only in the Cloud? I want it local! I don't want to wait hours for data copies over this pathetic long distance WAN link! What about my backups (do NOT try to tell me that any form of resiliency = backup).
And what the dickens do you mean you can't just save to it with a shared drive - a REST API!? Oh, wait, no you want to sell me a NAS gateway too - for goodness sake, man, if it's a filesystem let it store files my way!
Stupid Cloud. It is NOT the be-all and end-all of IT (or if it is, I'm taking up flower arranging).
Evan ... I think your calculator is broken.
32 x 96 = 3072 (if you prefer, 2^5 x (3 x 2^5) = 3 x 2^10 = 3 x 1024 = 3072)
That's 3TiB in my book.
Well despite any disadvantages, there is one significant advantage to the MS-contributed designs - they fit in a standard 19" rack. So those companies with HP, Dell, IBM servers and blade chassis racked today in HP, Dell, APC and IBM racks can switch out those old servers for a "MS Blade Chassis" without also ripping out the racks themselves.
Yes, obviously there are potentially downsides (2 sleds per 45mm 1U, rather than [IIRC] 3 per 48mm OpenRU in the existing OCP designs) but at least it can be up to the customers which size to choose.
Of course, we won't know if the designs use the same or similar power interconnects, nor how the network and storage interconnects are handled, until the designs are available. But it certainly seems to me (and obviously Microsoft) that there could be value in sticking to existing standards for the rack rather than going the whole hog for the new OpenRack design.
No it's not even a NAS - not really. After all a NAS generally exposes a filesystem onto which you place named files with SMB/NFS/etc. A NAS is ... useful! These don't even have that - unless you have no directories on your filesystem and you name all your files like this:
In which case yes, it's close to being a single-drive NAS.
It's more like a single-drive SAN, incompatible with the rest of the world. With at most 1GBps (that's 10Gbps!!) per device, the network ports will cost more than the drive itself. It's about 50-50 with a 1Gbps NIC on the device, but then a drive will deliver 100MBps max (less than SATA). And you better hope the switch never fails: you'll temporarily lose access to data but won't know which data until you try to access it.
I'm sure this is a solution. But it seems someone forgot to investigate whether there was a problem in the first place.
OK I hear what you're saying, and yes:
Q. what is the first thing you do after you carve up a LUN ? ...
A. Put a filesystem on it
True enough, but I have a followup question. Which one?
Because the one thing I don't see addressed here is heterogeneous environments. Which filesystem can my Win2012 box running Exchange share with a Win2012 box running SQL and a RHEL box running Oracle?
Or is everyone assuming the storage should be presented to a set of hosts running an intermediary layer and carved up there?
Funnily enough, though, most of my arguments with storage engineers have revolved around them wanting to give me little 3 and 5 disk RAID 5 sets for each Exchange database, or insisting that RAID 5 is better in every way than RAID 10 (spindle utilisation yes, performance .... not always despite what three-letter-vendor claimed). Not sure the LUN argument has even reached those cretins yet.
I just want to point out that a DNS blocklist as described (and though I do work for Telstra, I have no direct visibility of what's happening here) won't block sites that share an IP address with a C&C site.
As described the "filter" looks for the DNS query to badguy.domain.com and either blocks or ignores those queries. So when you look up "goodguy.mysite.com" it won't match the bad site DNS name, and your query (and connection attempt) proceeds.
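A toy illustration of the point (the names and the address are invented; 203.0.113.0/24 is documentation space):

$blocklist = @('badguy.domain.com')
$dns = @{
    'badguy.domain.com'  = '203.0.113.10'
    'goodguy.mysite.com' = '203.0.113.10'   # same shared server, different hostname
}
foreach ($name in $dns.Keys) {
    $verdict = if ($blocklist -contains $name) { 'BLOCKED' } else { 'resolves as normal' }
    '{0,-20} -> {1} : {2}' -f $name, $dns[$name], $verdict
}

Same box, same IP - but only the query for the listed name ever trips the filter.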
I'm not a fan of filtering/blocking etc; be it whitelisting, blacklisting, or using a black box list of "stuff someone claimed was bad". But let's argue about the right stuff :)
So for me the thing that stands out with all of the VNXe specs is the continued specification of 2TB maximum LUN size. In all of my (increasingly) meagre experience, the LUN is the unit of storage presented to the server by the array.
So what I obviously don't understand is: in this day and age, why is 2TB still the maximum? MBR vs GPT for some reason? Or am I completely clueless (it wouldn't be the first time)?
I have SMB customers with larger shared datastores than 2TB - let alone larger enterprises (in which a 10TB fileserver is the small one in the branch office). Is there some unit of storage that aggregates 2TB LUNs together at the array level, or is one expected to present 5x 2TB LUNs to the server and aggregate them there (with mdadm / Storage Spaces / etc)?
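If the answer is "aggregate on the host", then on the Windows side it'd look roughly like this sketch (pool and disk names are mine; resiliency stays Simple because the array is already doing the RAID):

# Pool up every un-pooled LUN the array has presented, then carve one big virtual disk
$luns = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'ArrayLUNs' -StorageSubSystemFriendlyName '*Storage Spaces*' -PhysicalDisks $luns
New-VirtualDisk -StoragePoolFriendlyName 'ArrayLUNs' -FriendlyName 'BigDatastore' -UseMaximumSize -ResiliencySettingName Simple

(Then initialise, partition and format as usual.) But that still feels like working around the array rather than with it.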