Re: This is Dell or EMC?
Long term Plex Pass user here who did not use their cloud as it was obviously unsustainable. Too good to be true always is.
Well, AMD EPYC has 128 PCIe lanes in either single or dual CPU configs.
32 lanes for external comms (4 x 100GbE or 16 x 25GbE ports) and 96 lanes for 32 storage devices (3 per ruler) makes sense. That is nearly 3GB/sec per device (2,955MB/sec).
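The per-device figure checks out if you assume PCIe 3.0 lanes; a quick back-of-envelope sketch (the ~985MB/sec-per-lane figure is my assumption, implied by but not stated in the post):

```python
# Back-of-envelope check of the EPYC lane budget above.
# Assumption (not stated in the post): PCIe 3.0 at 8 GT/s per lane
# with 128b/130b encoding, i.e. ~984.6 MB/s usable per lane.
PCIE3_MB_PER_LANE = 8e9 * (128 / 130) / 8 / 1e6

total_lanes = 128                             # single or dual socket EPYC
network_lanes = 32                            # 4 x 100GbE or 16 x 25GbE
storage_lanes = total_lanes - network_lanes   # 96 lanes left for storage
devices = 32                                  # ruler SSDs, 3 lanes each

lanes_per_device = storage_lanes // devices
mb_per_device = lanes_per_device * PCIE3_MB_PER_LANE
print(lanes_per_device, round(mb_per_device))  # 3 lanes, ~2954 MB/s
```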
Of course Intel will probably not license the EDSFF form factor for AMD based systems so it might all be for naught.
You will not find a traditional RAID card that can keep up.
Core CPU driven Scale Out Software Defined Storage will be the norm for these.
Jets are actually simpler to maintain though, as a turboprop is 90% of a jet plus a gearbox and a prop.
Modern turbofans will be lower maintenance cost per mile.
Very few people can tell a real world difference between a SATA SSD and NVMe, this is especially true in a single user (Not server or shared storage) workload.
Unless you are editing 4K video in real time or similar then a 6Gbps SATA SSD is more than fast enough 99% of the time.
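For reference, the usable ceiling behind that 6Gbps figure, accounting for SATA's standard 8b/10b line encoding (the encoding detail is general SATA knowledge, not from the post):

```python
# Usable bandwidth of a 6Gbps SATA link after 8b/10b encoding overhead.
sata_bits_per_sec = 6e9
usable_mb_per_sec = sata_bits_per_sec * (8 / 10) / 8 / 1e6
print(round(usable_mb_per_sec))  # 600
```

Real-world SATA SSDs top out around 550MB/sec after further protocol overhead, which is still ample for most single-user workloads.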
An article on Ryzen Gen 1 is available at Tom's Hardware.
The obvious is they took the eye off of the ball during transition and now that is behind them.
After a P9+ that did not get updates in my market for over 15 months Huawei will never again get money from me for a phone.
When the flagship phone is over 12 months behind on security patches there is an issue.
Two things stop it going into a laptop.
12.5mm Z Height
These will end up in servers and storage arrays.
It does mean that when my vendor qualifies these I will be able to get 3PB raw into 5 rack units. At a monstrous price but that density. 3PB used to be 5+ racks, now it is 5RU.
So the exploit was proved on a Mac with macOS 10 on Intel. AMD is vulnerable, ARM is vulnerable and so are most versions of Linux.
But well done with the Intel and Microsoft hate.
XIV wants you to hold its beer. :)
Large SSDs are available, if you have the coin.
Samsung have a 15TB PM1633a in a 2.5" form factor.
The issue is not technology but cost: that 15TB drive is over $10K USD.
But without the licensing advantages of Windows 10 Enterprise, managing those VMs and their licenses in a 30,000 seat enterprise would be a nightmare of proportions you have never dreamed of, and the risk and cost exposure would make any corporate guy run for cover.
What works in a SMB environment does not scale to 30,000 seats very often.
I sense 3.2TB or 3.84TB SSDs in this box.
As a DellEMC Solution Architect in the Distribution Channel in the ANZ region I can categorically state that VXRail is *NOT* limited to 8 or 12 nodes.
Also the support for VXRail is *NOT* Dell standard or even Dell ProSupport but is instead supported by the VCE support organisation which also supports vBlock/vxBlock/vxRack.
BTW, unlike a lot of you on here I am not hiding behind AC; I am happy to post and be held accountable for my posts.
I am the first to accept that VXRail is not perfect, but in a VMWare shop who want to lower their OPEX costs and free resources for future projects VXRail is hard to beat.
Look at the AMD Ryzen 3 1200 mate, much better option compared to a current gen APU.
If you want an APU wait for the Zen based ones towards the end of the year.
SAS SSDs are already shipping at 15.3TB, and 32TB will be out either late this year or early next year, so unless this sat for 12+ months the article is wrong.
My take on that now is they look even better.
Intel keep playing silly games.
It was adding the Skype for Business functionality to Google that made O365 cheaper.
Not unusual with Microsoft.
Nah I use the much more secure P@ssw0rd!
And two SD card slots just next to it. What are they likely to be for? Are they normal in a server (I have very little experience of servers). As installed in the case shown they are inaccessible.
They are used in x86 servers for hypervisor bootup (VMware); you could install Linux on RAIDed SDHC cards and then use all the disk slots for data drives.
Also have a good look at the new DellEMC VXRail V Series nodes for VDI.
In my admittedly biased opinion they are one of the better ways of deploying VDI.
I could just say Avro Arrow and leave it at that.
But then the same fate befell the TSR.2 so the UK did not do any better so....
The last Block 60's only rolled off of the line in 2014 IIRC and the add on order from the UAE has restarted production so another 200 would not be an issue.
So the sweet spot for 1 drive is probably 3TB.
When installing 120 of those into 12 drive shelves, each shelf needing rack space, SAS cables and cooling in a datacentre environment then density is king. 10RU vs. 20RU when going from 3TB to 6TB pays for the $/GB difference anyway. For the likes of Google/Amazon/Azure/Netflix/Dropbox who run disks in the tens or hundreds of thousands this makes a major difference.
So while for a home PC with 1-2 disks 3TB is the sweet spot, at scale the higher the density the lower the associated costs will be and that is what drives the research.
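The shelf arithmetic behind the 10RU vs. 20RU comparison, assuming 2U per 12-drive shelf (the RU-per-shelf figure is my assumption, not stated above):

```python
import math

# Shelves and rack units needed for a fixed raw capacity at two drive sizes.
# Assumption: each 12-drive shelf occupies 2 rack units.
def shelves_needed(capacity_tb, drive_tb, drives_per_shelf=12):
    drives = math.ceil(capacity_tb / drive_tb)
    return math.ceil(drives / drives_per_shelf)

target_tb = 360  # 120 x 3TB drives, as in the example above
for drive_tb in (3, 6):
    shelves = shelves_needed(target_tb, drive_tb)
    print(f"{drive_tb}TB drives: {shelves} shelves, {shelves * 2}RU")
```

Halving the shelf count also halves the SAS cabling, power and cooling footprint, which is where the savings compound at hyperscale.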
He COULD be working for Intel....
Why? Its protecting a single 2U server crammed with PCIe flash. 1500VA will be fine.
Having worked for both, albeit 15 years ago, they deserve each other.
Quick someone dust off the FC over Token Ring standard.
It is interesting that DataCore was always pitched as the virtualisation engine to use; I always thought it was a better match to IBM's SVC/Storwize platform.
With its per enclosure licensing it was quite a cost effective option.
I was expecting a Windows ME disc not NT Workstation.
There was nothing wrong with NT at the time, it was a good desktop OS. Rather lacking in server chops back then and well behind the competition in features but a reasonably competent desktop OS for the period.
I had a war going on with one of my customers for about 8 months back in 1996.
They were a high school and they insisted that they needed 2 "Multimedia" machines in the library with CD-ROMs and Sound Blaster cards, running Windows 3.11 with apps like Microsoft Encarta and its ilk. The rest of the customer's network ran on a NetWare environment with BootPROM-equipped PCs running Win 3.11 off of the network, locked down and well managed, but we could not use that for the library for a variety of reasons.
It got to the point where I was replacing all of the Windows/DOS configuration files on NetWare login via login script to stop the kids installing games under Windows or DOS and having a "Shutdown" button that removed key Windows and DOS system files on Windows exit so that they could not use the machine outside of the controlled Windows environment.
Worked well but was quite time consuming to install new apps.
A few points.
1. VXrail is available today and it is NOT on Dell hardware.
2. The Dell/EMC deal, while being on track, has NOT yet closed and there is no hardware partnership between EMC and Dell at this point in time.
3. I am not 100% sure which ODM is being used for the VXrail kit but it will PROBABLY be Quanta who are the ODM for the VXRack and ScaleIO nodes.
*I work as an EMC Solution Architect in the Distribution Channel in APJ.
Once you hit 8 controllers, 4+PB of raw capacity, 16TB of memory, 384 CPU cores and 256 16Gbps front end ports, do you really need more scale in a single system?
There comes a time where the complexity of the scaling is more than it is worth.
If you need monster scaling then look at a solution like ScaleIO, Isilon or ECS depending on the data type.
*Disclaimer - I am an EMC channel pre-sales guy working in the distribution channel.
I was just going to mention that.
There is nothing new just old ideas rehashed and tried again. :)
Try the Windows 95 beta on an AMD 386 at 40MHz with 4MB of memory.
My first introduction to the Win9x stack.
Or installing Windows NT 3.5 Workstation from floppies and finding a corrupted disk (36 of 38, from memory).
Yep I agree with you on a 2 controller SAN vs. Scale Out maintenance risks.
However, on a dual controller array a firmware upgrade cycle is on the order of 12 months on average, assuming a tier 1 vendor with a mature product. If we look at an NDU on an EMC VNX for instance, total exposure is 2 x 30 minute windows every 12 months (one per controller).
vSphere 5.5 had 15 patch release cycles in 25 months, an average of one every 1.6 months. Assuming 3 hosts and an average of 30 minutes to evacuate the VMs, apply the patches, reboot (if required) and re-join the cluster, we are talking 7-8 x 90 minute windows in those same 12 months.
Not quite the same risk profile.
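A sketch of that exposure arithmetic, using the figures quoted above (the per-window durations are the assumed averages stated in the comparison, not measurements):

```python
# Annual maintenance exposure: dual-controller array vs. 3-host vSphere cluster.
# All durations are the assumed averages from the comparison above.
array_minutes_per_year = 2 * 30          # one 30-min NDU window per controller

patch_interval_months = 25 / 15          # 15 vSphere 5.5 patch cycles in 25 months
cycles_per_year = 12 / patch_interval_months
cluster_minutes_per_year = cycles_per_year * 3 * 30  # 3 hosts, 30 min each

print(round(cycles_per_year, 1), array_minutes_per_year,
      round(cluster_minutes_per_year))   # ~7.2 cycles: 60 vs ~648 minutes/year
```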
As for "any scale out solution has the same limits", I agree, and with the ScaleIO solution you get that 4th host for the cost of the hardware and hypervisor license; no per-processor costs, just license the space you consume. In little old NZ that equates to between $15-20K in savings depending on support level.
Duncan I have a lot of respect for you but here you are being disingenuous at best.
Look at the first paragraph on Cormac's design guide that is on the VSAN product page....
The minimum configuration required for Virtual SAN is 3 ESXi hosts. However, this
smallest environment has important restrictions. In Virtual SAN, if there is a failure,
an attempt is made to rebuild any virtual machine components from the failed
device or host on the remaining cluster. In a 3-node cluster, if one node fails, there is
nowhere to rebuild the failed components. The same principle holds for a host that
is placed in maintenance mode. One of the maintenance mode options is to evacuate
all the data from the host. However this will only be possible if there are 4 or more
nodes in the cluster, and the cluster has enough spare capacity.
3 node VSAN clusters, while supported, do not have any resilience for node maintenance. How are you going to patch your 3 node vSphere cluster running VSAN when you cannot put one into maintenance mode? Also 2 node will only support branch offices and needs an external (vCloud Air) failover manager, no thanks.
Tokyo Institute of Technology Supercomputer
There you go, problem solved :)
Best one was a Netware 3.12 customer, their server had two network cards in it to support the two network runs around the office on Thin Net, each about 150m ish in length.
Got a support call one day that the clients kept dropping, so I grabbed our network kit (BNC tester, 2 spare NE2000 cards, 3 spare terminators, a few T's and a couple of 5m lengths of thinnet) and headed out to the customer's site.
Upon arriving I notice that the server has moved to the other side of the office, and both segments are connected to ONE card.
I ask why this is the case and they admitted to losing one of the terminators when moving the server, and of course they did not see the issue. I measured the network before I fixed it and sure enough, total length 327m, well over the 185m 10BASE2 segment limit. :)
Re-terminated the second card, split the segments and of course the network issues disappear immediately.
Easiest emergency callout fee we ever earned.
Quote: Windows XP was the first PC operating system to drop the MS-DOS Prompt and change it to Command Prompt, due to a change to the NT kernel. The Windows NT family has used the newer Command Prompt since it started with Windows NT 3.1, so it was nothing new on that side of the fence.
Umm Windows NT 3.5 Workstation, Windows NT 3.51 Workstation, Windows NT 4.0 Workstation and Windows 2000 Professional would like to talk to you behind the bike shed :)
Exactly. Look at the local launch of NetFlix in NZ. Something like 1500 items in the library compared to the nearly 9000 in the US service and we are being charged a 30% premium for access to that reduced library.
Sky's Neon service is even more expensive at $20NZD a month and Spark's Lightbox is the same price as NetFlix @ $12.99NZD a month (Well actually 30 days).
So to get the same coverage as NetFlix and Hulu Plus in New Zealand I need to spend $46NZD a month and get maybe 4000 items in the library. In the US $22NZD gets you access to NetFlix and Hulu Plus with a combined library of ~19,000 items.
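Put another way, the cost per library item implied by those numbers (all figures as quoted above):

```python
# NZD cost per title implied by the library sizes and prices quoted above.
nz_monthly, nz_titles = 46.0, 4000     # combined NZ services, ~4000 items
us_monthly, us_titles = 22.0, 19000    # NetFlix + Hulu Plus in the US, NZD

ratio = (nz_monthly / nz_titles) / (us_monthly / us_titles)
print(round(ratio, 1))  # NZ pays roughly 10x more per title
```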
And the content providers wonder why Aus/NZ is a hotbed of torrents and stolen content.
The same can be said for Johnson. Chad's point is that a true AFA is not the same as a 3Par 7450 or XP7 with pure flash. There is a measurable latency difference between most of the current startup/ex-startup AFAs and a hybrid array that is not hybrid.
Even HP's tools model approx 1.5ms of latency in an AFA 7450 with the MLC/cMLC disks.
Compare that to 500us for XtremIO/Violin etc. A 3x improvement in latency is significant.
Except that HP 3Par and HDS HUS VM are NOT AFAs.
They are hybrid arrays without disk; there are significant differences in latency between a hybrid box with just SSD (VNX/3Par/V7000/HUS VM) and a true AFA (XtremIO/IBM FlashSystem/Pure etc.)
Can run both natively on partitions of the hardware.
As a 20+ year IT veteran who supported NetWare from 2.15c up to v6, including GroupWise 4.1 -> 5.5 migrations and 5.5 -> 6.0 as well, I really enjoyed the stability and performance of NetWare against Winblows.
I had NetWare 3.12 servers with 400+ days of up time, try that on NT 3.1/3.51, it just did not happen.
Compare Windows 2000 AD to NDS on NetWare 5.1 and there was NO comparison, NDS was so far ahead MS needed a telescope to see it.
But marketing and ownership of the desktop triumphed over a superior product, and here we are today with MS still not caught up in some areas: there are still limits to how you design an AD for 20K users, when NetWare and NDS was just getting out of second gear at 20K users.
Unfortunately I see a lot of similarities between Novell and VMware :(
Also driving the low price is last time I checked about 80-90% of traffic never leaves the nation.
Compare that to NZ where I live and approx. 60% of traffic is international.
SMB Array - EMC has VNXe
HP has a similar capability to VPLEX in Peer Persistence on the 3Par platform, although it is less flexible and more feature limited; it is, however, a good match for VPLEX Metro when used with VMware or Hyper-V.
Biting the hand that feeds IT © 1998–2018