Re: It only makes it easier to crack...
Nah I use the much more secure P@ssw0rd!
136 posts • joined 9 Jan 2008
And two SD card slots just next to it. What are they likely to be for? Are they normal in a server (I have very little experience with servers)? As installed in the case shown they are inaccessible.
They are used in x86 servers for hypervisor boot (VMware); you could install Linux on RAIDed SDHC cards and then use all the disk slots for data drives.
Also have a good look at the new Dell EMC VxRail V Series nodes for VDI.
In my admittedly biased opinion they are one of the better ways of deploying VDI.
I could just say Avro Arrow and leave it at that.
But then the same fate befell the TSR.2, so the UK did not do any better, so....
The last Block 60s only rolled off the line in 2014 IIRC, and the add-on order from the UAE has restarted production, so another 200 would not be an issue.
So the sweet spot for 1 drive is probably 3TB.
When installing 120 of those into 12-drive shelves, with each shelf needing rack space, SAS cables and cooling in a datacentre environment, density is king. 10RU vs. 20RU when going from 3TB to 6TB pays for the $/GB difference anyway. For the likes of Google/Amazon/Azure/Netflix/Dropbox, who run disks in the tens or hundreds of thousands, this makes a major difference.
So while for a home PC with 1-2 disks 3TB is the sweet spot, at scale the higher the density the lower the associated costs will be and that is what drives the research.
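A quick sketch of that shelf arithmetic, assuming 2RU, 12-bay shelves (the RU-per-shelf figure is my assumption; adjust for the enclosure in question):

```python
import math

def rack_units(capacity_tb, drive_tb, bays_per_shelf=12, ru_per_shelf=2):
    """Rack units needed to hold capacity_tb of raw disk at a given drive size."""
    drives = math.ceil(capacity_tb / drive_tb)
    shelves = math.ceil(drives / bays_per_shelf)
    return shelves * ru_per_shelf

# 360TB raw: 120 x 3TB drives fill ten 12-bay shelves (20RU),
# while 60 x 6TB drives fit in five shelves (10RU).
print(rack_units(360, 3))  # 20
print(rack_units(360, 6))  # 10
```

Halving the shelf count also halves the SAS cabling, power feeds and cooling load, which is where the $/GB premium on the bigger drives gets paid back.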
He COULD be working for Intel....
Why? It's protecting a single 2U server crammed with PCIe flash. 1500VA will be fine.
Having worked for both, albeit 15 years ago, they deserve each other.
Quick someone dust off the FC over Token Ring standard.
It is interesting that DataCore was always pitched as the virtualisation engine to use; I always thought it was a better match for IBM's SVC/Storwize platform.
With its per-enclosure licensing it was quite a cost-effective option.
I was expecting a Windows ME disc not NT Workstation.
There was nothing wrong with NT at the time. Rather lacking in server chops back then and well behind the competition in features, but a reasonably competent desktop OS for the period.
I had a war going on with one of my customers for about 8 months back in 1996.
They were a high school and they insisted that they needed 2 "multimedia" machines in the library with CD-ROMs and SoundBlaster cards, running Windows 3.11 with apps like Microsoft Encarta and its ilk. The rest of the customer's network ran on a NetWare environment with BootPROM-equipped PCs running Win 3.11 off the network, locked down and well managed, but we could not use that for the library for a variety of reasons.
It got to the point where I was replacing all of the Windows/DOS configuration files on NetWare login via the login script to stop the kids installing games under Windows or DOS, and having a "Shutdown" button that removed key Windows and DOS system files on Windows exit so that they could not use the machine outside of the controlled Windows environment.
It worked well but was quite time consuming when installing new apps.
A few points.
1. VxRail is available today and it is NOT on Dell hardware.
2. The Dell/EMC deal, while on track, has NOT yet closed, and there is no hardware partnership between EMC and Dell at this point in time.
3. I am not 100% sure which ODM is being used for the VxRail kit, but it will PROBABLY be Quanta, who are the ODM for the VxRack and ScaleIO nodes.
*I work as an EMC Solution Architect in the Distribution Channel in APJ.
Once you hit 8 controllers, 4+PB of raw capacity, 16TB of memory, 384 CPU cores and 256 16Gbps front-end ports, do you really need more scale in a single system?
There comes a point where the complexity of the scaling costs more than it is worth.
If you need monster scaling then look at a solution like ScaleIO, Isilon or ECS depending on the data type.
*Disclaimer - I am an EMC channel pre-sales guy working in the distribution channel.
I was just going to mention that.
There is nothing new, just old ideas rehashed and tried again. :)
Try Windows 95 beta on an AMD 386 at 40MHz with 4MB of memory.
My first introduction to the Win9x stack.
Or installing Windows NT 3.5 Workstation from floppies and finding a corrupted disk (36 of 38, from memory).
Yep, I agree with you on 2-controller SAN vs. scale-out maintenance risks.
However, on a dual-controller array a firmware upgrade cycle is on the order of 12 months on average, assuming a tier 1 vendor with a mature product. If we look at an NDU on an EMC VNX, for instance, total exposure is 2 x 30-minute windows every 12 months (one per controller).
vSphere 5.5 had 15 patch release cycles in 25 months, an average of one every 1.6 months. Assuming your 3 hosts and an average of 30 minutes to evacuate the VMs, apply the patches, reboot (if required) and re-enter the cluster, we are talking 7-8 x 90-minute windows in those same 12 months.
Not quite the same risk profile.
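As a sanity check, the exposure arithmetic above can be sketched out (figures straight from the post; 90 minutes per cycle = 3 hosts x 30 minutes each):

```python
# Dual-controller array: one NDU per year, 2 controllers x 30 min each.
san_minutes_per_year = 2 * 30

# vSphere 5.5: 15 patch cycles in 25 months, roughly one every 1.6 months,
# i.e. ~7.2 cycles per year; 3 hosts x 30 min per host = 90 min per cycle.
cycles_per_year = 12 / (25 / 15)
cluster_minutes_per_year = cycles_per_year * 3 * 30

print(san_minutes_per_year)             # 60
print(round(cluster_minutes_per_year))  # 648
```

Roughly an hour of controller exposure a year on the array versus ten-plus hours of host maintenance windows on the cluster, which is the point about differing risk profiles.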
As for "any scale-out solution has the same limits", I agree, and with the ScaleIO solution you get that 4th host for the cost of the hardware and hypervisor licence: no per-processor costs, you just license the space you consume. In little old NZ that equates to $15-20K in savings depending on support level.
Duncan I have a lot of respect for you but here you are being disingenuous at best.
Look at the first paragraph on Cormac's design guide that is on the VSAN product page....
The minimum configuration required for Virtual SAN is 3 ESXi hosts. However, this smallest environment has important restrictions. In Virtual SAN, if there is a failure, an attempt is made to rebuild any virtual machine components from the failed device or host on the remaining cluster. In a 3-node cluster, if one node fails, there is nowhere to rebuild the failed components. The same principle holds for a host that is placed in maintenance mode. One of the maintenance mode options is to evacuate all the data from the host. However this will only be possible if there are 4 or more nodes in the cluster, and the cluster has enough spare capacity.
3-node VSAN clusters, while supported, do not have any resilience for node maintenance. How are you going to patch your 3-node vSphere cluster running VSAN when you cannot put one into maintenance mode? Also, 2-node will only support branch offices and needs an external (vCloud Air) failover manager; no thanks.
Tokyo Institute of Technology Supercomputer
There you go, problem solved :)
Best one was a NetWare 3.12 customer; their server had two network cards in it to support the two network runs around the office on thinnet, each about 150m-ish in length.
Got a support call one day that the clients kept dropping, so I grabbed our network kit (BNC tester, 2 spare NE2000 cards, 3 spare terminators, a few T's and a couple of 5m lengths of thinnet) and headed out to the customer's site.
Upon arriving I noticed that the server had moved to the other side of the office, and both segments were connected to ONE card.
I asked why this was the case and they admitted to losing one of the terminators when moving the server, and of course they did not see the issue. I measured the network before I fixed it and sure enough, total length 327m, well past the 185m 10BASE2 segment limit :)
Re-terminated the second card, split the segments and of course the network issues disappear immediately.
Easiest emergency callout fee we ever earned.
Quote: Windows XP was the first PC operating system to drop the MS-DOS Prompt and change it to Command Prompt, due to a change to the NT kernel. The Windows NT family has used the newer Command Prompt since it started with Windows NT 3.1, so it was nothing new on that side of the fence.
Umm Windows NT 3.5 Workstation, Windows NT 3.51 Workstation, Windows NT 4.0 Workstation and Windows 2000 Professional would like to talk to you behind the bike shed :)
Exactly. Look at the local launch of Netflix in NZ: something like 1,500 items in the library compared to the nearly 9,000 in the US service, and we are being charged a 30% premium for access to that reduced library.
Sky's Neon service is even more expensive at $20NZD a month, and Spark's Lightbox is the same price as Netflix at $12.99NZD a month (well, actually per 30 days).
So to get the same coverage as Netflix and Hulu Plus in New Zealand I need to spend $46NZD a month and get maybe 4,000 items in the library. In the US, $22NZD gets you access to Netflix and Hulu Plus with a combined library of ~19,000 items.
And the content providers wonder why Aus/NZ is a hotbed of torrents and stolen content.
The same can be said for Johnson. Chad's point is that a true AFA is not the same as a 3Par 7450 or XP7 with pure flash. There is a measurable latency difference between most of the current startup/ex-startup AFAs and a hybrid array populated only with flash.
Even HP's own tools model approx. 1.5ms of latency in an all-flash 7450 with MLC/cMLC disks.
Compare that to ~500us for XtremIO/Violin etc. A 3x improvement in latency is significant.
Except that HP 3Par and HDS HUS VM are NOT AFAs.
They are hybrid arrays without disk; there are significant differences in latency between a hybrid box with just SSDs (VNX/3Par/V7000/HUS VM) and a true AFA (XtremIO/IBM FlashSystem/Pure etc.).
Can run both natively on partitions of the hardware.
As a 20+ year IT veteran who supported NetWare from 2.15c up to v6, including GroupWise 4.1 -> 5.5 and 5.5 -> 6.0 migrations as well, I really enjoyed the stability and performance of NetWare against Winblows.
I had NetWare 3.12 servers with 400+ days of uptime; try that on NT 3.1/3.51, it just did not happen.
Compare Windows 2000 AD to NDS on NetWare 5.1 and there was NO comparison, NDS was so far ahead MS needed a telescope to see it.
But marketing and ownership of the desktop triumphed over a superior product, and here we are today with MS still not caught up in some areas; there are still limits on how you design an AD for 20K users, when NetWare and NDS were just getting out of second gear at 20K users.
Unfortunately I see a lot of similarities between Novell and VMware :(
Also driving the low price is last time I checked about 80-90% of traffic never leaves the nation.
Compare that to NZ where I live and approx. 60% of traffic is international.
SMB Array - EMC has VNXe
HP has a similar capability to VPLEX in Peer Persistence on the 3Par platform although less flexible and more feature limited, however a good match for VPLEX Metro when used with VMware or Hyper-V
OK, so you quit 5 years ago. There is an easy way to solve that: cross-realm LFD has fixed your issue. I levelled a Paladin from 10-50 in dungeons via LFD almost exclusively in a few short days (I do not get long play sessions).
Blizzard do work to solve issues as they arise, they may not fix them in a few days or weeks but they DO get solved, or at least worked on.
I do not get the hateorade on this, I really do not.
So they spent a bunch of time and money and are not happy with the result. They could do 3 things.
1. Keep pouring time and money into a dog with fleas (See Duke Nukem Forever for the outcome of this)
2. Polish the turd a bit and release it with much hype and never fix the problems (The EA option)
3. Call it off, junk the work and say "What's Next"
The third option requires a large amount of discipline and guts; no shareholder will be happy with that approach, but it is about quality at the end of the day.
I respect Blizzard immensely and I say good on them.
Purple are surveillance drives with firmware optimised for multistream writes.
So you want to make it BIGGER?
See what happens when you compress a 15GB MPEG-4 HD video file.
10's of TB is a VERY low estimate.
I would put it in the Petabyte range at least.
A quick look at "Recent Releases" on one of the DVD sites lists 1,081 movies released in the last 90 days. At a VERY conservative 15GB-per-movie average, that is 15.8TB in 90 days, or approx. 65TB a year, which is, if we say the industry has done HD for 4 years, a quarter of a PB right there. Add historical digitised content going back 20+ years and amateur/cam girl/RedTube-type content, and I would say the far side of a PB easy.
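The back-of-envelope maths, using the figures above (15GB per title; the binary GB-to-TB conversion is my choice):

```python
movies_90_days = 1081
gb_per_movie = 15

tb_90_days = movies_90_days * gb_per_movie / 1024   # ~15.8 TB per quarter
tb_per_year = tb_90_days * 365 / 90                 # ~64 TB per year
tb_4_years = tb_per_year * 4                        # ~257 TB, a quarter of a PB

print(round(tb_90_days, 1), round(tb_per_year), round(tb_4_years))
```

That is new HD releases alone; the back catalogue and amateur content multiply it from there, hence the petabyte-range estimate.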
VNX can use flash as both Cache and a Data Tier at the same time.
Not looked at Nimble enough to know what it can do TBH.
Not sure about the UK but even down here in NZ 20Mbps national is not that hard to acquire.
Any DOCSIS3, VDSL2 or Fiber connection should suffice.
They cannot even get the comparison configuration right.
VSAN disk groups are 1 SSD device and up to 7 magnetic devices, with up to 5 groups in a host. To support 24 x 1.2TB magnetic disks would require 4 disk groups and therefore 4 SSD devices. You cannot chop up the FusionIO card into 4 LUNs and stay supported, and you cannot at this time use external disk shelves.
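That disk-group arithmetic, per the VSAN limits described above (the helper name is mine):

```python
import math

MAX_HDD_PER_GROUP = 7    # one flash device plus up to 7 magnetic disks per group
MAX_GROUPS_PER_HOST = 5

def ssds_required(hdd_count):
    """Flash devices needed to front a given number of magnetic disks in one host."""
    groups = math.ceil(hdd_count / MAX_HDD_PER_GROUP)
    if groups > MAX_GROUPS_PER_HOST:
        raise ValueError("too many magnetic disks for a single host")
    return groups  # one flash device per disk group

print(ssds_required(24))  # 4 disk groups, so 4 SSDs
```

So a fair comparison build needs 4 separate supported flash devices per host, not one FusionIO card carved into 4 LUNs.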
If they cannot get the baseline configuration right, I am not sure I trust their analysis.
Funny most of my V7000 boxen are sitting in traditional HP sites.
I am running an Iomega IX2-200D with 2 x 2TB disks (free at an industry event, or it would have been a Synology 4-bay).
Plex Server running on my gaming PC sharing the content.
Currently using the Plex app on the Samsung 2013 Series 5 TV (UA50F5500AM), but the Samsung WiFi implementation is a PITA, so I am looking at mounting an Intel NUC (i3, 4GB, SSD) on the back via the VESA mount and running Windows 7 + the Plex client, giving me another Windows client with a real keyboard and some big-screen gaming (Intel HD graphics will be fine for what I play).
You mean the same system HP have used in EVA and now 3Par for years?
Except that if you issued a PXE boot message to 1,000 desktops, they would be booted in 30-60 seconds from a SATA magnetic disk and 10-20 seconds from low-cost consumer SSDs.
IBM's storage strategy in the entry/mid-range is Storwize and Storwize. :)
They do modular in XIV but no real virtual SAN offering at the moment.
There have always been 2 GT releases per platform (GT and GT2 on PS1, GT3 and GT4 on PS2 and now GT5 and GT6 on PS3)
As for the AI cars they have pissed me off since GT1, making no allowance for player cars at all.
XtremIO is a tool in EMC's kit bag. Strategically it is not an individual offering (not that they will turn down sales, I am sure, but even EMC are pitching it at big VDI right now) but part of a complete "Big Picture" play, with ViPR providing the "Software Defined Storage" control layer, XtremIO, VMAX, VNX, Isilon, ScaleIO and Data Domain as storage resources to be managed, and VPLEX/RecoverPoint providing the availability piece.
It is an impressive story that really only IBM can compete with, but even IBM have some work to do in this space.
The entertainment system in the Air New Zealand 777 I flew to SFO in 2010 ran Windows CE on the endpoints and my take is that these things do not change so fast.
I know this because I had to wait 30 minutes while my screen rebooted, loading the WinCE image over a serial connection.
Newer systems are probably Linux based I agree but the vast bulk of the install base is probably still Windows powered.
EMC buying ScaleIO will pave the way for VMware to have a storage hypervisor that is worth deploying.
As with ViPR it makes sense to have the storage company develop the storage tech and then transfer it to VMware rather than transferring the engineers to VMware.
I think ViPR, or a subset of, will end up at VMware. But EMC will keep a version of it in their camp as a hedge against Microsoft/OpenStack/<Insert next big thing in server virtualisation>.
The others that you forgot to mention are Vblocks from VCE and FlexPods from NetApp/Cisco/VMware.
Will be interesting to see what Oracle are doing for storage, will it use a ZFS based system, a Nutanix like system or Pillar Axiom type tech?
The SSD provides up to 98,000/90,000 random read/write IOPS, with 540MB/sec sequential read and 520MB/sec sequential write. These numbers vary with product capacity.
While Active Directory is not the best LDAP implementation it DOES have the largest installed base by a long margin.
There are a LOT of admins out there with AD skills and CxO's are comfortable with the technology.
Betting against Microsoft tends to be a bad idea in everything but game consoles.
Biting the hand that feeds IT © 1998–2017