Re: This is Dell or EMC?
Long-term Plex Pass user here who never used their cloud, as it was obviously unsustainable. Too good to be true always is.
Well AMD Epyc has 128 PCIe lanes in either single or dual CPU configs.
32 lanes for external comms (4 x 100GbE or 16 x 25GbE ports) and 96 lanes for 32 storage devices (3 per ruler) makes sense. That is nearly 3GB/sec per device (2,955MB/sec).
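The per-device figure falls straight out of the lane arithmetic. A quick sketch, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding; real-world throughput varies with protocol overhead):

```python
# Lane-budget sanity check; the 985 MB/s per-lane figure is an
# approximation for PCIe 3.0, not a measured number.

TOTAL_LANES = 128       # AMD Epyc, single- or dual-socket
NETWORK_LANES = 32      # reserved for external comms
DRIVES = 32             # ruler-format storage devices
MB_PER_LANE = 985       # approx. PCIe 3.0 per-lane throughput, MB/s

storage_lanes = TOTAL_LANES - NETWORK_LANES   # 96 lanes left for storage
lanes_per_drive = storage_lanes // DRIVES     # 3 lanes per device
mb_per_drive = lanes_per_drive * MB_PER_LANE  # per-device bandwidth

print(lanes_per_drive, mb_per_drive)   # prints: 3 2955
```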
Of course Intel will probably not license the EDSFF form factor for AMD based systems so it might all be for naught.
Very few people can tell a real-world difference between a SATA SSD and NVMe; this is especially true in a single-user (not server or shared storage) workload.
Unless you are editing 4K video in real time or similar then a 6Gbps SATA SSD is more than fast enough 99% of the time.
Two things stop it going into a laptop.
12.5mm Z-height
These will end up in servers and storage arrays.
It does mean that when my vendor qualifies these I will be able to get 3PB raw into 5 rack units. At a monstrous price, but that density: 3PB used to be 5+ racks; now it is 5RU.
But without the licensing advantages of Windows 10 Enterprise, managing those VMs and their licenses in a 30,000-seat enterprise would be a nightmare of proportions you have never dreamed of, and the risk and cost exposure would make any corporate guy run for cover.
What works in an SMB environment rarely scales to 30,000 seats.
As a DellEMC Solution Architect in the Distribution Channel in the ANZ region I can categorically state that VXRail is *NOT* limited to 8 or 12 nodes.
Also the support for VXRail is *NOT* Dell standard or even Dell ProSupport but is instead supported by the VCE support organisation which also supports vBlock/vxBlock/vxRack.
BTW unlike a lot of you on here not hiding behind AC, happy to post and be held accountable for my posts.
I am the first to accept that VXRail is not perfect, but in a VMware shop that wants to lower its OPEX and free up resources for future projects, VXRail is hard to beat.
And two SD card slots just next to it. What are they likely to be for? Are they normal in a server (I have very little experience of servers). As installed in the case shown they are inaccessible.
They are used in x86 servers for hypervisor boot (VMware); you could also install Linux on RAIDed SDHC cards and then use all of the disk slots for data drives.
So the sweet spot for 1 drive is probably 3TB.
When installing 120 of those into 12-drive shelves, with each shelf needing rack space, SAS cables and cooling in a datacentre environment, density is king. 10RU vs. 20RU when going from 3TB to 6TB drives pays for the $/GB difference anyway. For the likes of Google/Amazon/Azure/Netflix/Dropbox, who run disks in the tens or hundreds of thousands, this makes a major difference.
So while for a home PC with 1-2 disks 3TB is the sweet spot, at scale the higher the density the lower the associated costs will be and that is what drives the research.
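The shelf arithmetic can be sketched as a back-of-the-envelope check. The 2U-per-12-drive-shelf figure and the 360TB target (120 x 3TB) are assumptions inferred from the numbers in the post, used only to illustrate how drive density drives rack-space cost:

```python
# How many rack units a fixed capacity target needs at two drive sizes.
# Shelf size (12 slots, assumed 2U each) and the 360TB target are
# illustrative assumptions, not vendor figures.

import math

TARGET_TB = 360        # fixed capacity target, e.g. 120 x 3TB
SHELF_SLOTS = 12       # drives per shelf
RU_PER_SHELF = 2       # assumed 2U per 12-drive shelf

def rack_units(drive_tb):
    """Rack units needed to hit TARGET_TB with drives of a given size."""
    drives = math.ceil(TARGET_TB / drive_tb)
    shelves = math.ceil(drives / SHELF_SLOTS)
    return shelves * RU_PER_SHELF

print(rack_units(3), rack_units(6))   # prints: 20 10
```

Doubling the drive size halves the shelf count, and with it the rack space, cabling and cooling, which is the whole argument for density at scale.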
I had a war going on with one of my customers for about 8 months back in 1996.
They were a high school and they insisted that they needed 2 "Multimedia" machines in the library with CD-ROMs and SoundBlaster cards, running Windows 3.11 with apps like Microsoft Encarta and its ilk. The rest of the customer's network ran in a NetWare environment with BootPROM-equipped PCs running Win 3.11 off the network, locked down and well managed, but we could not use that for the library for a variety of reasons.
It got to the point where I was replacing all of the Windows/DOS configuration files on NetWare login via the login script to stop the kids installing games under Windows or DOS, and adding a "Shutdown" button that removed key Windows and DOS system files on Windows exit so that they could not use the machine outside of the controlled Windows environment.
Worked well but was quite time consuming to install new apps.
A few points.
1. VXRail is available today and it is NOT on Dell hardware.
2. The Dell/EMC deal, while being on track, has NOT yet closed and there is no hardware partnership between EMC and Dell at this point in time.
3. I am not 100% sure which ODM is being used for the VXRail kit but it will PROBABLY be Quanta, who are the ODM for the VXRack and ScaleIO nodes.
*I work as an EMC Solution Architect in the Distribution Channel in APJ.
Once you hit 8 controllers, 4+PB of raw capacity, 16TB of memory, 384 CPU cores and 256 16Gbps front-end ports, do you really need more scale in a single system?
There comes a time where the complexity of the scaling is more than it is worth.
If you need monster scaling then look at a solution like ScaleIO, Isilon or ECS depending on the data type.
*Disclaimer - I am an EMC channel pre-sales guy working in the distribution channel.
Yep I agree with you on a 2 controller SAN vs. Scale Out maintenance risks.
However, on a dual-controller array a firmware upgrade cycle comes around every 12 months on average, assuming a tier 1 vendor with a mature product. If we look at an NDU on an EMC VNX, for instance, total exposure is 2 x 30-minute windows every 12 months (one per controller).
vSphere 5.5 had 15 patch release cycles in 25 months, an average of one every 1.6 months. Assuming 3 hosts and an average of 30 minutes each to evacuate the VMs, apply the patches, reboot (if required) and rejoin the cluster, we are talking 7-8 x 90-minute windows in those same 12 months.
Not quite the same risk profile.
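The exposure comparison works out as follows. This is just the post's own averages (controller windows, patch cadence, per-host window) run through the arithmetic, not measured data:

```python
# Annual maintenance-window exposure: dual-controller array NDU vs.
# a 3-host vSphere cluster on the vSphere 5.5 patch cadence.

MONTHS_PER_YEAR = 12

# Dual-controller array: one 30-minute window per controller, yearly.
array_minutes_per_year = 2 * 30

# vSphere: 15 patch cycles in 25 months, ~30 minutes per host, 3 hosts.
cycles_per_year = 15 / 25 * MONTHS_PER_YEAR        # 7.2 cycles/year
cluster_minutes_per_year = cycles_per_year * 3 * 30

print(array_minutes_per_year, round(cluster_minutes_per_year))  # prints: 60 648
```

Roughly an hour of exposure a year on the array versus nearly eleven hours on the cluster, hence the different risk profiles.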
As for any scale-out solution having the same limits, I agree, and with the ScaleIO solution you get that 4th host for the cost of the hardware and the hypervisor license: no per-processor costs, you just license the space you consume. In little old NZ that equates to between $15-20K in savings depending on support level.
Duncan I have a lot of respect for you but here you are being disingenuous at best.
Look at the first paragraph of Cormac's design guide that is on the VSAN product page....
The minimum configuration required for Virtual SAN is 3 ESXi hosts. However, this smallest environment has important restrictions. In Virtual SAN, if there is a failure, an attempt is made to rebuild any virtual machine components from the failed device or host on the remaining cluster. In a 3-node cluster, if one node fails, there is nowhere to rebuild the failed components. The same principle holds for a host that is placed in maintenance mode. One of the maintenance mode options is to evacuate all the data from the host. However this will only be possible if there are 4 or more nodes in the cluster, and the cluster has enough spare capacity.
3-node VSAN clusters, while supported, do not have any resilience for node maintenance. How are you going to patch your 3-node vSphere cluster running VSAN when you cannot put one into maintenance mode? Also, 2-node will only support branch offices and needs an external (vCloud Air) failover manager - no thanks.
Best one was a NetWare 3.12 customer; their server had two network cards in it to support the two ThinNet network runs around the office, each about 150m-ish in length.
Got a support call one day that the clients kept dropping, so I grabbed our network kit (BNC tester, 2 spare NE2000 cards, 3 spare terminators, a few T's and a couple of 5m lengths of ThinNet) and headed out to the customer's site.
Upon arriving I notice that the server has moved to the other side of the office, and both segments are connected to ONE card.
I ask why this is the case and they admitted to losing one of the terminators when moving the server, and of course they did not see the issue. I measured the network before I fixed it and sure enough, total length 327m, well past the 185m segment limit for 10BASE2 :)
Re-terminated the second card, split the segments and of course the network issues disappear immediately.
Easiest emergency callout fee we ever earned.
Quote: Windows XP was the first PC operating system to drop the MS-DOS Prompt and change it to Command Prompt, due to a change to the NT kernel. The Windows NT family has used the newer Command Prompt since it started with Windows NT 3.1, so it was nothing new on that side of the fence.
Umm Windows NT 3.5 Workstation, Windows NT 3.51 Workstation, Windows NT 4.0 Workstation and Windows 2000 Professional would like to talk to you behind the bike shed :)
Exactly. Look at the local launch of NetFlix in NZ. Something like 1500 items in the library compared to the nearly 9000 in the US service and we are being charged a 30% premium for access to that reduced library.
Sky's Neon service is even more expensive at $20NZD a month and Spark's Lightbox is the same price as NetFlix @ $12.99NZD a month (Well actually 30 days).
So to get the same coverage as NetFlix and Hulu Plus in New Zealand I need to spend $46NZD a month and get maybe 4000 items in the library. In the US $22NZD gets you access to NetFlix and Hulu Plus with a combined library of ~19,000 items.
And the content providers wonder why Aus/NZ is a hotbed of torrents and stolen content.
The same can be said for Johnson. Chad's point is that a true AFA is not the same as a 3Par 7450 or XP7 populated purely with flash. There is a measurable latency difference between most of the current startup/ex-startup AFAs and a hybrid array that happens to contain only flash.
Even HP's tools model approx 1.5ms of latency in an all-flash 7450 with MLC/cMLC disks.
Compare that to 500us for XtremIO / Violin etc. A 3x improvement in latency is significant.
As a 20+ year IT veteran who supported NetWare from 2.15c up to v6 including Groupwise 4.1 -> 5.5 migrations and 5.5 -> 6.0 as well I really enjoyed the stability and performance of NetWare against Winblows.
I had NetWare 3.12 servers with 400+ days of uptime; try that on NT 3.1/3.51, it just did not happen.
Compare Windows 2000 AD to NDS on NetWare 5.1 and there was NO comparison, NDS was so far ahead MS needed a telescope to see it.
But marketing and ownership of the desktop triumphed over a superior product, and here we are today with MS still not caught up in some areas; there are still limits to how you design an AD for 20K users, while NetWare and NDS were just getting out of second gear at 20K users.
Unfortunately I see a lot of similarities between Novell and VMware :(
Biting the hand that feeds IT © 1998–2019