Hogwash! Hogwash I say!
Symmetric plans were on the NBN roadmap for 2014 - and would have been no problem had we stuck to the FTTP rollout (which has also reduced costs as predicted it would, though I'm not 100% sure the numbers match perfectly).
No disagreement from me, although I wasn't certain that upgrades to the fiber on that scale wouldn't need changes inside the ... FSA? FDA? FSAM? rather than on the borders.
I'm with you 100%, but we won't win the war if we're not honest - while it's true there are no DOCSIS upgrades, there will still be upgrades to the fiber infrastructure - we can't deliver 10Gbps+ otherwise :)
Allow me to scare you then.
What if the next version of $EncryptMalware has functionality to set and change the encryption password for your backup?
So now all your offsite tapes are encrypted with a password you don't know. Want that data back, do you?
Jesus that even scares me. No, no forget I said it.
And the latest GWX disables those policy settings "if they've been manually set" (I believe that's the term that was used).
Now I'm guessing that means if you use gpedit.msc to set them, you're fine; but if you set them with a script, or a GPP, or manually with regedit, they'll be disabled - and your administrative changes get overridden by some dickwad manager at Microsoft.
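For reference, as I recall from Microsoft's own guidance (KB3080351), these are the policy values in question - double-check the KB rather than trusting my memory:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Gwx]
"DisableGwx"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"DisableOSUpgrade"=dword:00000001
```

Whether you import that with regedit, push it via GPP, or set the same values through gpedit.msc, the end state in the registry is identical - which is what makes the "manually set" distinction so absurd.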
Even large orgs are going to get hit with this, as there will doubtless be non-domain-joined PCs in weird parts of the network, probably with direct WU configuration rather than reaching through firewalls etc for WSUS. Yes, they COULD do it another way, but when you have to allow a DMZ-connected PC doing some weird task to reach in through multiple firewalls to your WSUS infrastructure, or just let it hit the net (as it's doing for $Task) for updates - which do you think many will do?
I supported the first GWX - and my support disappeared as soon as it became clear that MS had completely lost the plot (reissuing new ways to nag). It's now BS of the highest order.
This doesn't make sense to me. You already have management of devices that Apple freely and willingly push as "personal, not for business" yet won't consider managing devices that are intended to be managed?
Either you have the ability to push out Windows updates (which includes Surface updates and BIOS), or you don't. Since I'll guess you do, you can update these tablets already. Also, you probably have some form of OS and app deployment (SCCM, Zen, ?????) which can push to any Windows machine. Because this is just another laptop/Windows machine. You manage it like anything else Windows in your fleet.
Also, I know several non-technical people who have purchased SP4 specifically because it's Windows and a tablet (in addition to the techies). No, it's not for everyone. But blanket disregard for change is antithetical to professional IT.
Oh, no I've worked with Cisco kit before - just not anything that requires a separate license to enter the "enable" command (and presumably to create a user with privilege level 15 or its equivalent).
Is it just me or is the more surprising story here that getting root (is this just full administration rights?) access "legally" to these routers, as the device owner, also requires payment of a license!?
Would you be more or less outraged if it was the unions who had engaged LM to investigate Walmart so that the protests could be more successful? Is it fair for Walmart to boost sales and profits by undermining union action, if the unions are unable to reciprocate? Is it fair for the unions to boost the chances of a protest, sit-in or flash mob succeeding and stopping sales by undermining the company actions (which do of course employ thousands of people)?
I don't know if the allegations of poor staff payment are true, but it seems to be consistent with other anecdotes I have heard about both Walmart and other US companies. Does that make the actions of either organisation more right or more wrong?
It seems to me both sides cry foul if their "free speech" rights are impinged - regardless of whether those rights are actually rights at all (interpretation of the constitution is a art [sic]). It would seem to be rather self evident that unions really have improved worker conditions, regardless of whether those same unions have perhaps gone on to feather, gild and diamond-plate their own nests afterwards.
The Big T can at least claim to be the biggest improver for October.
It therefore comes as no surprise that Optus' ... improved the most in October.
Oh, if only it were that easy. But apps (I'd say "real enterprise" if it wasn't so blatantly ridiculous) such as SalesForce don't even have a standard installer - just a web download that puts itself in %Temp% and installs into the user profile.
Central distribution? We can't have that, we need to be able to update our cloud app whenever we want, customer testing be damned.
And so we're all still allowing execution from %Temp% and AppData, which is effectively dropping daks, bending over a nearby table and being handcuffed to it with a sign saying "Please roger me with the spiked baseball bat behind you". Because the alternative is for no-one to be able to sell stuff and keep the business running.
But hey, what else can you do as a developer when those bloody admins insist on attempting to secure a machine? We have to do things the easiest way we can, security be damned!
I've narrowed it down to El Reg. Something is playing Mr Burns from the Simpsons, very badly, trying to say "Decisions, decisions". Comes out more like "De ... cisions ... dec ... is ... ions". Never heard what comes after, but I've got no clue where it is.
Actually - just heard more, for the first time ever - Decisions, decisions, so many gourmet ingredients f... <cut>. It was even on this "new topic" page, before.
Seriously guys, I understand you need to pay the bills, but this is exactly the sort of stupid ad that makes people find and install ad blockers. Random sounds? Auto-playing poor quality ... supermarket ads (I'm guessing here)?
Further, the implementation of (effectively) random words as root zones has no doubt caused extra costs to unrelated companies. I'll bet good money there are far more cases of internal domains being "duplicated" than just my own network at home. When I built it, 15? 18? years ago, .earth was a perfectly reasonable choice as it wasn't public. Now I can't even tell (programmatically) if I'm inside or outside the network if I look up nameservers for what was my internal space - it's no longer NXDOMAIN outside the walls.
And each one will require either some internal migration work (to something real that is "owned" or at the least to something that can never BE owned - good luck guessing today what ICANN will not decide to make available as a root domain over the coming decades). Or, "sorry no you can't browse that site at work because our internal network clashes with the Internet".
I'm not saying the individual costs are huge but when there are thousands of small impacts it adds up.
I'd be surprised if the Xeon E3-1500M is anything other than one of the following:
* Rebranded Skylake cores (4C/8T), just like the past few generations of Xeon E3s have been rebranded i7s of the current generation (E3-12x0 and E3-12x5, V1, V2, V3 ...), with a 32GB ECC UDIMM ceiling
* Rebranded Xeon D Broadwell cores, like the Xeon D-1520 (25W of 4C/8T CPU, capable of 128GB of RAM using RDIMMs) or the Xeon D-1540 (45W of 8C/16T goodness). Both of these are soldered-to-the-board solutions, and there are supposedly a few new variants on the way.
ServeTheHome has some reasonable info here (among other articles) - they've been covering Xeon D in detail for weeks:
Well, no Trev, it's not a joke. Because it all comes down to how resources are allocated and used. I'm sure you've seen this post (http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx) explaining that with bigger servers, the .NET framework (which underpins much of Exchange nowadays) allocates memory and CPU threads ineffectively. After all it was posted the same day as the updated calculator to which the article refers.
5000 x 5GB users on 3-4 servers is STILL not enough to hit more than about 15 cores (even on slightly older kit) but might need 128GB RAM. Or, you scale it out ONE more VM/node and get within the recommended guidelines. How many users do you want in a fault domain anyway?
Alternatively, if you're going to go to 20 and 50GB mailboxes, and a small number of massive servers, you need to understand that you'll run out of databases in the DAG before you run out of CPU and RAM resources (you're probably looking at SMALLER servers being needed but still with 100TB+ of disk each). Why allocate 64 CPUs and 256GB of RAM to a server that will end up running at 2% CPU and 5% RAM usage? And who would purchase those servers in preference to smaller/cheaper?
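To put rough numbers on that (the per-mailbox factors below are invented for illustration - use the actual MS calculator for real sizing):

```python
# Back-of-envelope Exchange sizing sketch. The per-mailbox CPU figures
# are illustrative assumptions, NOT output from the Microsoft calculator.
USERS = 5000
CPU_MCYCLES_PER_USER = 6.0   # assumed megacycles per active mailbox
CORE_MCYCLES = 2000.0        # assumed usable megacycles per core

def sizing(servers: int):
    """Fault-domain size and total core demand for a given server count."""
    users_per_server = USERS / servers
    total_cores = USERS * CPU_MCYCLES_PER_USER / CORE_MCYCLES
    return users_per_server, total_cores

for n in (3, 4, 5):
    per, cores = sizing(n)
    print(f"{n} servers: {per:.0f} users per fault domain, ~{cores:.0f} cores total")
```

The point being: total CPU demand barely moves, but one extra node shrinks the fault domain considerably.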
Even MS says to virtualize Exchange if you're planning to deploy massive hardware platforms. It's right there in the article. It's also very clear that bare-metal 2U commodity servers are the way they deploy at (much larger) scale, far beyond 99.9% of organisations' internal deployments.
But hey, the software's architects, the support teams who troubleshoot this stuff day in and day out, and the guys who have deployed that system for multiple millions of users - they don't know what they're talking about. But some VMware guy - I guess he must be an _expert_.
Seriously, what is it with settlements - every time I see one I see something like "BlechCo will be required to bend over and spread them, pay $thousands in fines, tug the forelock whenever the wind blows, yet admit no wrongdoing".
If they're that adamant there was no wrongdoing, why on EARTH would they agree to fines and constant monitoring; and if the Govt/Attorney General/Regulator was that sure there WAS wrongdoing (and hence the fines and useless controls) why would they agree to allow the company to escape admitting they did the wrong thing?
What am I missing here?
NET USER Bob * /DOMAIN
<enter password twice>
Or if you insist on PSH:
Set-ADAccountPassword -Id Bob
<enter password twice>
And the way you get first line to work with it is you provide processes and/or your own toolset (e.g. a central website that audits and logs resets, runs PSH in the background, and does all the other first level stuff).
You could run up a .HTA app with jQueryUI in a couple of days that does this sort of thing - I know, I've done it recently to provide a front-end for USMT plus data backup and restore, plus password resets and computer moves.
I wonder if one way to attack the problem is to take all of a given conglomerate's financial reports into account as follows:
* US Parent Company reports total gross profit of $30B for a financial year (pre EBITDA)
* US Parent Company reports "$Country operations were fantastic" in producing 10% of turnover (probably hard to enforce/deduce - you'd need SEC etc to require this - but that doesn't sound impossible to get happening)
* Resulting deemed profit for taxation purposes is $3B for $Country at $Country tax rate, less only purely in-$Country costs (i.e. no overseas transactions - this is hard to enforce too)
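Sketching the arithmetic in those bullets (all figures hypothetical, per the example):

```python
# Hypothetical deemed-profit calculation from the bullets above.
# The tax rate and local costs are invented numbers for illustration.
global_gross_profit = 30e9    # US parent's reported gross profit
local_turnover_share = 0.10   # $Country's share of turnover
local_tax_rate = 0.30         # assumed $Country corporate rate
local_only_costs = 0.5e9      # assumed purely in-$Country costs

deemed_profit = global_gross_profit * local_turnover_share   # ~$3B
taxable = deemed_profit - local_only_costs
tax_payable = taxable * local_tax_rate

print(f"Deemed profit: ${deemed_profit/1e9:.1f}B, tax payable: ${tax_payable/1e9:.2f}B")
```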
I know there are wrinkles and problems there, and it wouldn't be easy to police (I'm sure the first thing $corp would do is set up several dozen "arms-length" suppliers to whom it pays $X, and who buy "stock" from $corp at 99.99% * $X). It would permit a company making that $3B of profit to still deduct its local expenses before paying tax on the final "local profits". It lets local businesses (who employ other people who spend money, generating the economy!) compete on a more level playing field.
It takes all the actual real costs to the business into account and taxes only (approximately) the profit created by a given country. Faked ... er, I mean, totally legitimate real reasonable expenses for "IP Rights", "Redistribution Rights", "Marketing", "Support", "Brand Awareness" etc are all taken into account before taxation occurs (in each country). And on the surface it seems like the tax burden in a given country would be ... if not perfectly aligned, at least close to the level of profit. Which ensures that the corp is paying the taxes that, for example, build roads and provide public transport which allow people to get to their stores and buy their stuff.
I'd be interested to hear all the holes that no doubt exist in this approach too :)
Yes, you read that right - the government has admitted that overseas service providers, cafes and many other organisation types won't be required to store any metadata nor make what they do store available without a warrant.
So if you want to continue your nefarious plans, you have to ... make sure you don't use your ISP email, and log on to your TerryWrist198109@gmail.com account at Starbucks et al (or at work).
This needs the "Genius" meme applied liberally. Or perhaps the heavy-handed and repeated application of a hammer to the privates of each person proposing it.
I wonder if this will be the typical "Here's an update to your app requesting you drop your pants, no you cannot decline the permissions only the update" request for access?
Google's permissions system sounds great, but the inability to tell an app to go f... er ... to reconsider its requests for access and just fail without them, or be provided with fake data, just cements the view that Google is only pretending to give a you-know-what about privacy.
Paris because even SHE knows more about privacy than Google.
Where are you seeing those 10Gb switches (and what are they?) I've not seen any 10Gb kit for less than 80% of current retail except for the odd PCI-e card here and there.
MS is even jumping on the convergence bandwagon with the next Server platform supporting something like this (can't comment yet on exactly how, I'm still downloading the ISO). But it looks like MS wants to eat some of Nutanix and their competitors' lunches.
See this for info http://technet.microsoft.com/en-us/library/dn765475.aspx (Storage Replica). I'm guessing it'll be file-based and synchronous commit, but until I build one...
AC: It _was_ set to the secure setting by default! MS are admitting they never should have allowed the mentally incompetent to de-secure it.
And there's no valid reason for it to be an option. Learn to do servers securely (both devs and admins) or not at all, and just bloody deal with it properly. About the only thing left to do is scour Stack Exchange / Server Fault for all references to it, and down-vote to hell any answer which says to set the option.
I see SAN-free in the title of the story but no mention of block access (only file). I'll bet it's not SMB3 either. So it's not going to be killing off the SAN any time soon, and probably not the NAS either since you're talking about getting the data off PCs with local disks that are already using many (most?) of their IOPS to support the local user.
So this is a low IOPS store with 4 replicas per file according to the FAQ. With 25 workstations per store, 200GB each, you get just over 1TB of NAS which is continually restriping as people turn machines off for the night or restart, can't be backed up effectively, is slow in use - and depending on how it's done, could be useful only for small files?
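Working the FAQ's numbers:

```python
# Usable capacity of the workstation-backed store: raw contributed
# space divided by the replica count (4 replicas per file, per the FAQ).
workstations = 25
gb_each = 200
replicas = 4

raw_gb = workstations * gb_each       # 5000 GB contributed in total
usable_gb = raw_gb / replicas         # what's actually presentable
print(f"~{usable_gb/1000:.2f} TB usable")
```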
Yeah, that'll take off.
Quote: If enough people dump Chrome and answer the exit questionnaire thoughtfully provided by Google, maybe things will get fixed.
Hi there, hope you're well. My name is Billy Bob Joe. I saw your comment and I have some excellent waterfront property going for a steal in Florida. I need to sell ASAP for personal reasons.
But seriously, I have exactly zero confidence that those results go anywhere but the bit bucket. I strongly suspect they don't want intelligent, thoughtful users - those guys are a pain! They keep pointing out our privacy and security gaffes, and then worse - they tell ORDINARY PEOPLE! We can't have ORDINARY people understanding our data slurping!
It probably should be, but the transcript does say 17TB. Meh, I prefer to think of EMC going away.
OK I'm going to ask the question here.
A stack of experienced VMware guys (above) don't see any problem with the 2TB VMDK limits. How do you do host-based backups and VM snapshots? Don't you need consistency across all of the vdisks? How do you ensure the SAN admin doesn't nuke the RDM snap before the VM snap (if that's even possible)?
And you'd lose Storage vMotion as well, right? Isn't that supposed to be a killer feature? It seems to lock VMs pretty thoroughly to the specific array.
And yet, no sign of a replacement for P2V, removed from VMM 2012 R2. Which was removed, as I recall, so that it could be updated and released on an accelerated schedule.
Apparently the intended acceleration vector points backwards.
So I'll buy one for my datacenter and copy files to that filesystem with Explorer, right? What do you mean it's only in the Cloud? I want it local! I don't want to wait hours for data copies over this pathetic long distance WAN link! What about my backups (do NOT try to tell me that any form of resiliency = backup).
And what the dickens do you mean you can't just save to it with a shared drive - a REST API!? Oh, wait, no you want to sell me a NAS gateway too - for goodness sake, man, if it's a filesystem let it store files my way!
Stupid Cloud. It is NOT the be-all and end-all of IT (or if it is, I'm taking up flower arranging).
Evan ... I think your calculator is broken.
32 x 96 = 3072 (or if you prefer, 2^5 x (3 x 2^5) = 3 x 2^10 = 3 x 1024 = 3072)
That's 3TiB in my book.
Well despite any disadvantages, there is one significant advantage to the MS-contributed designs - they fit in a standard 19" rack. So those companies with HP, Dell, IBM servers and blade chassis racked today in HP, Dell, APC and IBM racks can switch out those old servers for a "MS Blade Chassis" without also ripping out the racks themselves.
Yes, obviously there are potentially downsides (2 sleds per 45mm 1U, rather than [IIRC] 3 per 48mm OpenRU in the existing OCP designs) but at least it can be up to the customers which size to choose.
Of course, we won't know if the designs use the same or similar power interconnects, nor how the network and storage interconnects are handled, until the designs are available. But it certainly seems to me (and obviously Microsoft) that there could be value in sticking to existing standards for the rack rather than going the whole hog for the new OpenRack design.
No, it's not even a NAS - not really. After all, a NAS generally exposes a filesystem onto which you place named files with SMB/NFS/etc. A NAS is ... useful! These don't even have that - unless you have no directories on your filesystem and you flatten every path into the file name, something like "2015-holiday-photos-IMG_1234.jpg" (to invent an example):
In which case yes, it's close to being a single-drive NAS.
It's more like a single-drive SAN, incompatible with the rest of the world. With at most 1GBps (that's 10Gbps!!) per device, the network ports will cost more than the drive itself. It's about 50-50 with a 1Gbps NIC on the device, but then a drive will deliver 100MBps max (less than SATA). And you'd better hope the switch never fails - you'll temporarily lose access to data, but won't know which data until you try to access it.
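The raw numbers, roughly (decimal units, and assuming ~100MBps sustained for a single spindle):

```python
# Rough per-device throughput comparison for the networked-drive idea.
ten_gbe_MBps = 10_000 / 8    # 10 Gbps link: ~1250 MB/s of port capacity
one_gbe_MBps = 1_000 / 8     # 1 Gbps link: ~125 MB/s
drive_MBps = 100             # assumed sustained rate of one spindle

print(ten_gbe_MBps, one_gbe_MBps, drive_MBps)
# A lone drive can't even saturate the 1 Gbps port, let alone 10 Gbps -
# so the expensive port is the bottleneck's *opposite*: wasted capacity.
assert drive_MBps < one_gbe_MBps < ten_gbe_MBps
```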
I'm sure this is a solution. But it seems someone forgot to investigate whether there was a problem in the first place.
OK I hear what you're saying, and yes:
Q. what is the first thing you do after you carve up a LUN ? ...
A. Put a filesystem on it
True enough, but I have a followup question. Which one?
Because the one thing I don't see addressed here is heterogeneous environments. Which filesystem can my Win2012 box running Exchange share with a Win2012 box running SQL and a RHEL box running Oracle?
Or is everyone assuming the storage should be presented to a set of hosts running an intermediary layer and carved up there?
Funnily enough, though, most of my arguments with storage engineers have revolved around them wanting to give me little 3 and 5 disk RAID 5 sets for each Exchange database, or insisting that RAID 5 is better in every way than RAID 10 (spindle utilisation yes, performance .... not always despite what three-letter-vendor claimed). Not sure the LUN argument has even reached those cretins yet.
I just want to point out that a DNS blocklist as described (and though I do work for Telstra, I have no direct visibility of what's happening here) won't block sites that share an IP address with a C&C site.
As described the "filter" looks for the DNS query to badguy.domain.com and either blocks or ignores those queries. So when you look up "goodguy.mysite.com" it won't match the bad site DNS name, and your query (and connection attempt) proceeds.
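To illustrate (all names and addresses below are invented):

```python
# Toy DNS-name blocklist: it matches on the queried name, not the
# resolved address, so a "good" name that shares an IP with a C&C
# host sails straight through the filter.
BLOCKLIST = {"badguy.domain.com"}

HOSTS = {  # pretend resolver: both names resolve to the same address
    "badguy.domain.com": "203.0.113.7",
    "goodguy.mysite.com": "203.0.113.7",
}

def filter_query(qname: str):
    """Return None if the query is blocked, else the resolved IP."""
    if qname in BLOCKLIST:
        return None               # query blocked by name match
    return HOSTS.get(qname)       # query (and connection) proceeds

assert filter_query("badguy.domain.com") is None
assert filter_query("goodguy.mysite.com") == "203.0.113.7"  # same IP, not blocked
```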
I'm not a fan of filtering/blocking etc; be it whitelisting, blacklisting, or using a black box list of "stuff someone claimed was bad". But let's argue about the right stuff :)
So for me the thing that stands out with all of the VNXe specs is the continued specification of 2TB maximum LUN size. In all of my (increasingly) meagre experience, the LUN is the unit of storage presented to the server by the array.
So what I obviously don't understand is why, in this day and age, 2TB is still the maximum. MBR vs GPT for some reason? Or am I completely clueless (it wouldn't be the first time)?
I have SMB customers with larger shared datastores than 2TB - let alone larger enterprises (in which a 10TB fileserver is the small one in the branch office). Is there some unit of storage that aggregates 2TB LUNs together at the array level, or is one expected to present 5x 2TB LUNs to the server and aggregate them there (with mdadm / Storage Spaces / etc)?
I liked this piece the best:
"Through discussions with the ACCC with a view to resolving the legal proceedings brought against HP, HP has voluntarily consented to Federal Court orders."
Voluntarily consented? Ah yes, the big international company (which IIRC has 10K staff in Aus) doesn't HAVE to comply with the Federal Court, it's just the easy way out this time.
IMO, the whole "apology" (like most, it's fair to say) is just another generic, self-serving, half-hearted, condescending, steaming pile of ... er ... codswallop, from a company who most certainly doesn't give two bits about anyone, or anything, other than screwing anyone and everyone it thinks it can while still getting away with it.
Actually SATA is supported with Storage Spaces, but it cannot be shared / multiple-connected like SAS can. Basically the drives need to be "dual-ported" (connected to both controllers). It's a small thing, but we like accuracy, don't we?
Storage Spaces then manages having the disks online on one node or t'other.
The text says it's the DGS-3420-28TC but the picture seems to be DGS-3620-28TC - and both model numbers appear to be valid and quite similar?
I didn't intend to say macro rewrites - if that's what you read then I apologise. The reality, though, is that templates written in Office 95 and Office 97 continue to work for the most part in Office 2010 and Office 2013. So that's a pretty good compat record. (And yes, there ARE things that break, and those things are documented.) Plus you get a minimum of 5 years' support for Office (Office 2007 only just left support). How long is that version of LO/OO/SO (collectively $NEWOFFICE) going to be supported before you're told "Oh, that's in $NEWVERSION, you'll have to upgrade"? And how much is that pro support going to cost compared to the free MS cases that enterprises get?
The problem is when you move to $NEWOFFICE it's not a tweak, it's a complete rewrite - assuming that the new product even has the right hooks and triggers for macros.
As for throwing away ... er ... replacing the DMS - let's look at that option.
1. Find a new multi-million $ DMS that works with $NEWOFFICE.
2. Write or port all functionality from $OLDDMS to $NEWDMS
3. Migrate all content, version info etc from $OLDDMS to $NEWDMS
4. Run both side by side for 10 years because no-one has the cojones to decomm the old one.
5. Rewrite macros and integration for each new version of $NEWOFFICE.
It's really not that much better is it - even assuming you can find this near-mythical $NEWDMS?
Because OpenOffice is a complete and perfect replacement for Microsoft Office, right? It never gets formatting wrong, understands all the existing 15 year old templates (across Word, Excel, PPT), includes a mail program people actually like (Outlook replacement) which talks Exchange, does offline caching and integrates with the email archive? And Excel formulas are all present with the same names, parameters and results?
And there's no chance LibreOffice or OpenOffice would be unable to create and edit documents using Rights Management Services? And it also works with the appropriate DMS (which is usually a fragile set of poorly written and undocumented macros which often needs updates even for Office service packs, let alone completely different products)?
Sorry, but while OO and LO are great for some people, they're not there yet for everybody.
Even Paris knows the above...
I'm serious - I have a customer here in Sydney with a pair of earlier Dell switches. After about 10 weeks of uptime, something goes wonky and they stop forwarding packets at or near line rate (think 20kB/s for a GigE port). So now we have a monthly switch reboot for this customer.
I believe a magnet program is a program for more advanced learners - the idea being that you attract the smart and/or enthusiastic kids and the rest will follow. So it's also somewhat of a "comply or go back to the less advanced classes and be bored/lose opportunities".
Vodafone has double the spectrum of either Optus or Telstra, as I recall. ISTR that the 700MHz spectrum was allocated 25% to each of Telstra, Optus, Voda and 3; then Voda bought 3 (and all the assets, including spectrum). Given the lacklustre performance and the resulting mass exit of subscribers, I don't see them being short on spectrum right now.
While that's absolutely true, there are a couple of factors that many (including me) thought would force adoption faster.
First and foremost, virtualisation. When you have 30 VMs on a single host (not hard to do whether you're running ESX, Xen, KVM or Hyper-V), even a 4Gbps channel averages out at under 135Mbps per VM (and you're hoping the peaks on one VM cancel the troughs on another for higher throughput).
Secondly, iSCSI storage, where 4-8Gbps total from the array might be OK for smaller environments but isn't enough for larger ones, especially during backup;
Thirdly, server backup. Aggregation only helps so much, as many (most?) aggregations split traffic across the links based on source and/or destination IPs - so a single stream is limited to 1Gbps.
All those scenarios would fare better with 10Gbps, especially if all the vendors start doing the funky network/bandwidth splitting like HP - where each 10GbE can be logically separated into 4 different virtualised adapters for the OS.
In those cases, 2 x 10Gbps connections provide similar connectivity and throughput to 12 x 1Gbps connections. If 10Gb is only 5x the price per port, it's CHEAPER than doing 1Gbps.
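The back-of-envelope version (the 5x price multiplier is the hypothetical from above, not a quote):

```python
# Per-VM average bandwidth on a shared 4 Gbps channel, plus the
# 2x10GbE vs 12x1GbE cost comparison using a hypothetical 5x
# per-port price premium for 10GbE.
vms = 30
channel_mbps = 4000
per_vm_mbps = channel_mbps / vms        # average share per VM

ports_10g, ports_1g = 2, 12
price_1g = 1.0                          # arbitrary cost unit per 1G port
price_10g = 5 * price_1g                # "only 5x the price"
cost_10g = ports_10g * price_10g        # 10 units buys 20 Gbps
cost_1g = ports_1g * price_1g           # 12 units buys 12 Gbps

print(f"{per_vm_mbps:.0f} Mbps per VM; 10G build: {cost_10g}, 1G build: {cost_1g}")
assert per_vm_mbps < 135                # "under 135Mbps per VM"
assert cost_10g < cost_1g               # the 10GbE build comes out cheaper
```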
The other key advantage the older CPUs have over their current brethren is the power required to operate them. For example, the 386 required 400mA at 5V - just 2W - and the radiation-hardened version 460mA at 5V (about 2.3W at 100% usage). See this doc: http://datasheets.chipdb.org/SEI/space-elec-80386.pdf. And that's the 1995 version, so it's probably less now.
Even the lowest-power Intel x64 chips require more than that (17W).
And apart from generating the power, in space we have only radiation to shed heat. No convection, evaporation, sublimation, etc; forced cooling with air or water just moves that heat somewhere else to be radiated.
Slightly OT I know, but try installing drivers for the laptops from other download sites (e.g. Aus or UK), or set the driver properties to a foreign country (again, Aus is a reasonable choice). Or tell the AP it's in Aus.
You should get the full range of 2.4GHz channels then.
They all do it. They claim they've selected certain drives and made them differently, in an "enterprise" ready state. Frankly I figure the extra bucks are twofold - one, to get enough dosh to cover the extra warranty claims from running the drives 24x7 for 5 years instead of 8x5 for 3 years (it's 44000 hours instead of 6280), and two, some extra margin for the resellers to make some money off the enterprise drives.
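The duty-cycle maths, give or take (I'm assuming 52 working weeks, so my figures land slightly off the ones quoted above):

```python
# Powered-on hours over the warranty period: 24x7 enterprise duty
# over 5 years vs an 8x5 desktop duty cycle over 3 years.
enterprise_hours = 24 * 365 * 5     # ~43,800 - the "44,000 hours"
desktop_hours = 8 * 5 * 52 * 3      # ~6,240 - close to the quoted figure
print(enterprise_hours, desktop_hours)
print(f"Roughly {enterprise_hours / desktop_hours:.0f}x the powered-on time")
```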
I really do doubt the quality is that different (of course having said that, I'll get home and I'll have lost 4 of the 20 disks in my array of consumer drives).
Sherlock would know whether or not the drives are different ...
I'm not sure what you're trying to say with "adding switch capacity with every enclosure" - since even with UCS, you need to add fabric modules to each new chassis. And you need to plan for resiliency there, too, just as much as you would with a bog-standard switch.
And looking at the Cisco modules, depending on your point of view, they're either switches themselves (just switching something other than plain Ethernet) or they're dumb devices for aggregation of bandwidth. One view has them adding switch capacity with every chassis, the other says they're a reasonably dumb and inefficient piece of circuitry for which Cisco charges as much as a small car.
Regardless of whether the person has committed a crime (or breached the law in another way) or not, nothing has yet been proven.
Near as I can see from the article, and thinking of each individual case, there is a threat against the person, accompanied by a demand for money to make the threat go away.
Since the law firm is quite obviously a private concern, how is this not extortion (demanding money with menaces if you prefer)?