* Posts by Nate Amsden

2005 posts • joined 19 Jun 2007

The Palm Palm: The Derringer of smartphones

Nate Amsden Silver badge

Re: Oh look, they re-invented the HTC Touch Diamond.

Quite surprised, though perhaps I shouldn't be, at the lack of comparison to the HP (Palm) Veer. Almost an identical form factor and target market. It was the last WebOS phone to fully launch (I don't count the Pre3, as it was cancelled quite suddenly).


The Veer was quite cute. It had no built-in headphone jack, though it did have a magnetic attachment that provided one (and micro USB). It did have a slide-out keyboard, which this new Palm lacks, and it also had wireless charging. No WebOS devices had expandable storage, unfortunately.

I don't know what the Veer's sales were like, but I remember the comments at the time: with the market going to bigger and bigger screens, it seemed crazy to go in the exact opposite direction, though I appreciate the risk they took trying something different. I bought a Veer myself; I think it was basically free when I renewed my AT&T contract at the time (I was using a Pre3). Never really had a use for the Veer beyond a toy, and I ended up giving it to a WebOS developer a couple of years later.

Still have my AT&T Pre3 sitting in a box, along with a French-language version of the Pre3. My nearly six-year-old Galaxy Note 3 remains my daily driver, and it was my first Android device.

Here are another 45,000 reasons to patch Windows systems against old NSA exploits

Nate Amsden Silver badge

how about

turning off UPnP


I've never had a router that supported UPnP myself; my home broadband connections have always had bridged modems with either Linux or OpenBSD (the past 12 years or so) as my gateway.

Is Google's Pixel getting better, or just more expensive?

Nate Amsden Silver badge

Re: RE: Topperfalkon

Curious how many reboots counts as "a lot" these days? My primary phone has been a Note 3 for 5+ years, and it was last rebooted 112 days ago. I think that reboot was me pulling the battery out to get to the SD card to copy about 45GB of files over to it directly before a two-week trip (instead of through the phone, which is a lot slower).

Groundhog Day comes early as Intel Display Drivers give Windows 10 the silent treatment

Nate Amsden Silver badge

Re: Office 2010

Not sure what version of Exchange-like functionality Office 365 has, but I have Outlook 2010 on Windows 7 connected to Office 365 with no issues, and have been since at least 2013 I think. Prior to that the company I am at was using hosted Exchange at Rackspace, and it worked fine there too.

Official support looks to conclude in October 2020 (updates, etc.). I haven't heard or seen anything saying when, or if, Outlook 2010 will stop working with Office 365's Exchange.

Most of my email is done via Outlook Web Access from Linux, but I do keep Outlook running 24/7 in my Windows VM; sometimes it is useful.

Woke Linus Torvalds rolls his first 4.20, mulls Linux 5.0 effort for 2019

Nate Amsden Silver badge

Re: just mewling quims.

I'm curious what most people might find more "offensive" if they were given a choice

1) swear words or 2) USING ALL CAPS

I remember at my first job back in the 90s they used an old IBM system over serial ports, and it only accepted input in all caps. One day the CEO of the company (~120 people) sent me an email saying "NATE PLEASE COME SEE ME IN MY OFFICE". I was shaken up at the time, but quickly learned he wasn't upset; he had just forgotten to turn off caps lock after switching out of that app.

I think if I had to choose, CAPS would be more strongly felt than swear words. Hell, in many cases people use swear words for comic effect.

Oracle 'net-watcher agrees, China Telecom is a repeat offender for misdirecting traffic

Nate Amsden Silver badge

low priority traffic?

Routing something that far away will of course kill latency, and short of something like residential broadband or mobile connections, customers would see that drop in performance/throughput and report it.

I was involved with one such incident, in 2004, that I kept the traceroute from. The story I was told was that there was a fiber cut in the midwest of the U.S. that caused some previously unused BGP announcements to route traffic for the company I was with (an AT&T customer at an AT&T facility near Seattle) through Russia. Packet loss got as high as 98%, and since we were an online application we had to take our site down, as it was wreaking havoc with transactions. It took AT&T and friends 6-7 hours to put the filters in place to resolve the issue.

Traceroute from the time (source and destination addresses both in Seattle area)


32 hops, 98% packet loss and 280ms later arriving at the destination.

It's been a week since engineers approved a new DNS encryption standard and everyone is still yelling

Nate Amsden Silver badge

Re: where are the implementations?

.local I think is mainly used in Windows and Mac environments? I've never used it in Linux, anyway. At the org I work for we have 47 domains hosted internally, some of which are both internal and external. All are .com, or maybe .co.uk, .de, .fr, etc.

Used to have over 100 but cut it down quite a bit last year.

So saying .local isn't affected does nothing for what may be my situation at some point.

Nate Amsden Silver badge

where are the implementations?

I'll admit I haven't followed this closely. Just looking now at DNS over HTTPS, for example: other than some web browsers and Cloudflare (and maybe some other public recursive systems), where are the other implementations of this? I'm specifically looking for server implementations, not stuff that is locked away in a service-provider black box like Cloudflare. Speaking as someone who has run recursive and authoritative name servers for 22 years now (and still does today).
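For anyone else wondering what the protocol actually looks like on the wire: an RFC 8484 DoH GET request is just an ordinary DNS wire-format query, base64url-encoded into a `dns=` parameter on an HTTPS URL. A minimal sketch in Python (stdlib only; the Cloudflare endpoint shown in the comment is just an example):

```python
import base64
import struct

def build_doh_query(name: str, qtype: int = 1) -> str:
    """Build the base64url 'dns=' parameter for an RFC 8484 DoH GET.

    Uses DNS ID 0 (the RFC's suggestion, so responses are HTTP-cacheable)
    and sets only the RD (recursion desired) flag. qtype 1 = A record.
    """
    # 12-byte DNS header: ID, flags, QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    wire = header + question
    # base64url with padding stripped, per the RFC
    return base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")

# A client would then fetch, e.g.:
#   https://cloudflare-dns.com/dns-query?dns=<value>
# with an "Accept: application/dns-message" header.
```

The response body is likewise a plain DNS wire-format message, which is part of why server-side support is mostly an HTTP-frontend problem rather than a new DNS one.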

Wikipedia's article on it is pretty devoid of info: https://en.wikipedia.org/wiki/DNS_over_HTTPS

Recently I looked into DNS over TLS (just curious), specifically for BIND, and came across this page https://kb.isc.org/docs/aa-01386 , which talks about using stunnel in front of BIND for DNS over TLS. To me that is just a hack. I'd expect to see native TLS support in something like BIND, at least so you have full visibility into the IPs that are sending requests (with a proxy like stunnel that information gets lost).

Myself, I am fine with DNS as is; I have no need for TLS or HTTPS. Though I can certainly understand there are people with a much higher need for privacy, for whom a VPN may not work for whatever reason.

SSL/TLS connections are difficult enough now to debug with encryption protocols and ciphers and versions and generally crappy logging on behalf of the applications.

DNS in browsers is already a pain, with the browser often caching DNS responses. Probably hundreds of times over the past decade I've had to tell users to restart their browser, or to use another browser, to clear the browser's DNS cache.

My bigger concern with the likes of Firefox (and probably other browsers) wanting to use DNS over HTTPS is how that might affect my services. E.g. users connect to a VPN, which provides DNS resolvers for internal names. I would expect/fear that if Firefox and others default to a public DNS over HTTPS provider, it would break DNS queries for basically everything internal the user is trying to connect to. Time will tell, I guess.

Also, more broadly along the same lines, there is the inconsistent DNS behavior depending on whether the app is using DNS over HTTPS to a public resolver or the operating system's resolver.

(and yeah not a fan of IPv6 either)

IBM sits draped over the bar at The Cloud or Bust saloon. In walks Red Hat

Nate Amsden Silver badge


sad to see such big companies splash such massive amounts of cash while simultaneously practically strangling themselves for profits.

I mean, with that kind of cash IBM could invest $1 billion more in their cloud every quarter for the next 8 years, and I'd hope they'd be able to get 10X out of it relative to what they may get from Red Hat in that time frame. Maybe they just aren't capable of that kind of thing (if not, that is sad too).

If I were an IBM customer, especially one looking to use their cloud, I'd certainly be more impressed by an 8-year big investment push for their cloud than by a one-off purchase of a services company whose software is pretty much all open source anyway. Everything I have seen claims IBM's cloud abilities are a distant way behind the big guys'. I can't imagine how Red Hat can change that, with their software readily available to IBM this whole time.

I haven't been a Red Hat customer in 13 years (though I used CentOS more recently than that; the last time was about 2010). I could see Red Hat being bought for $5B or something, but $30B+? Just insane, such a wasted opportunity. I haven't been a customer of IBM's stuff for about 16 years.

Not that HP (whose gear I have used a lot) is a whole lot different. If they had taken the $10B they dumped on the Autonomy fraud and put it into mobile/WebOS, that could have been something great.

The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box

Nate Amsden Silver badge

Re: Meh

Count me in the camp that hates systemd. (More precisely, I hate it being "forced" on just about every distro; otherwise I wouldn't care about it. And yes, I am moving my personal servers to Devuan. I thought I could go Debian 7 -> Devuan, but it turns out that may not work, so I upgraded to Debian 8 a few weeks ago and will go to Devuan from there in a few weeks; one Debian 8 box is on Devuan already, 3 more to go. Debian user since 1998.) Reading this article reminded me of


Flash price-drop pops Western Digital's wallet: Surprise revenue fall with worse to come

Nate Amsden Silver badge

I just bought two 8TB WD Gold drives myself. I was tempted to go to 12TB, though 8TB is double what I have on my storage system these days (4x2TB RAID 10). How much do 8TB SSDs cost now?

Microsoft promises a fix for Windows 10 zip file woes. In November

Nate Amsden Silver badge

how is MS warning users?

Through some document buried on their website that you only see if you're specifically looking for a solution to that issue? Or are they being proactive and giving users a message every time they open a zip file? Somehow I think it's the former, in which case 99.985% of people will never see it until it's perhaps too late.

Not that this affects me; I'm mostly on Linux, and my Windows stuff is 7 only. I find it quite depressing how all of these complaints about Windows 10 quality are falling on deaf ears. Not unique to Windows, of course. At least with Linux it was possible, for example, for a group of folks to fork GNOME 2 into MATE (which I use) and maintain it ever since.

For me anyway very little(if anything) has come out of Windows or even Linux in the past decade that has gotten me excited.

AMD's shares get in a plane, take off and soar to 12,000 ft – then throw open the door, and fall into the cool rushing air

Nate Amsden Silver badge

quite a hit

So many times I've seen people say a stock tanked or crashed, or some other extreme term, only to see it down 0-8%. 24% is pretty crazy though. I haven't followed anything stock-market related in 4 or 5 years; I think the Nasdaq was just over 2000 at that time, or something.

Epyc hasn't impressed me, though with all of Intel's manufacturing difficulties (and seemingly everyone running into performance or power walls), they certainly have a window of opportunity.

I'll certainly look at replacing my Opteron 6100 and 6200 systems with newer Epyc next year. I'm quite shocked I was just able to renew support on those systems with HP for another year (they're entering their 8th year of service, for me anyway). I was expecting HP to EOL them a while ago.

Zip it! 3 more reasons to be glad you didn't jump on Windows 10 1809

Nate Amsden Silver badge

For me it was NT4 that drove me to Linux. I liked Win95 for a bit, got an NT 3.51 Server CD from a friend at MS back in the day, and liked that (more stable). NT4 was neat, though I guess moving more shit into the kernel made it less stable. Quite a few crashes, and seemingly having to reinstall every 6-12 months, made me jump to Linux (Slackware 3.x), then Debian 2.0.

I still have Win7 at home, and even an XP box (really for games, though I don't play many games). My main laptop dual-boots to Win7 but doesn't spend more than a dozen or so hours per year there on average. I have a Win7 VM for work stuff that works fine.

So glad I never jumped on Win10. I never did see the popups from MS offering free upgrades. Win7 (and Win2k8/R2 for the few Windows servers I have) do what I need. Win2k12 is quite annoying (I have a half dozen of those systems).

Main interface for me though has been linux since about 1998.

GitHub.com freezes up as techies race to fix dead data storage gear

Nate Amsden Silver badge

Re: If Only....

What are you protecting against? Snapshots certainly are backups, as is RAID. Neither protects against everything, certainly.

Just today on an Isilon cluster I was deleting a bunch of shit and wasn't paying close attention. I realized I had deleted some incorrect things (the data had been static for months). I just restored the data from a snapshot in a few minutes.

I've been through 3 largish-scale storage outages (12+ hours of downtime) in the past 15 years. It all depends on what you're trying to protect, and understanding that.

In my ideal world I'd have offline tape (LTFS+NFS) backups of everything critical, stored off site (offline being the key word, not online where someone with compromised access can wipe out your backups). This is in addition to any offsite online backups. It's something I've been requesting for years; managers didn't understand, but did once I explained it. It's certainly an edge case for data security, but something I'd like to see done. Maybe next year..

Understand what you're protecting against and set your backups accordingly.

Nate Amsden Silver badge

Re: storage

I remember Nexenta's ZFS solution to that problem was to corrupt the ZFS file system and then get into a kernel-panic reboot loop until the offending block device (and filesystem) was removed. Support's suggestion was "restore from backup". I tried using ZFS debug tools (2012 time frame) to recover the FS, and was shocked how immature they were. I wanted a simple "zero out the bad shit and continue"; it didn't exist at the time. I disabled Nexenta HA after that, and left Nexenta behind a while later.

Our processor tech's got legs, says Arm: 'One million' data center servers will ship in 2018

Nate Amsden Silver badge

Re: A million what?

Sounds like 1 million boxes with ARM something inside them. The number would be more useful if we had an indication of the number of boxes shipped in previous years.

With Qualcomm, AMD, and at least one or two others having bailed on ARM-powered servers (ones that otherwise have no x86 in them), it seems the only thing ARM is replacing in the data center is other embedded processors. Where there was once entirely custom silicon, in many cases it may be simpler/cheaper to put the logic on a general-purpose ARM core (maybe a custom ARM processor).

Love Microsoft Teams? Love Linux? Then you won't love this

Nate Amsden Silver badge

Re: "Vanishingly Small"

Not many developers use Linux, in my experience, and that's as someone who has supported Linux-based applications (mainly web stuff) for the past 18 years. I have been primarily Linux on my desktop since 1998, though I do keep a Win7 VM running 24/7 for some work things.

OS X seems to have pulled 98%+ of the developers I have worked with away from Linux over that time. It was sad to see, but understandable, I guess, for their use cases. Over the past decade I don't need more than one hand to count the folks at the organizations I've worked at, whether developers writing software that will run on Linux or support staff running the Linux systems, who run Linux as their primary desktop (I think the actual number may be as high as 3, maybe 4). At the same time, probably fewer than 20 people use Windows for the same things (guesstimate). OS X dominates.

I tried OS X for a few weeks at one point, but it was not my thing. I didn't even like the hardware, and ended up buying my own laptop so I wouldn't have to use the Mac trackpad (I don't like (any) trackpads; I want the TrackPoint). Linux with GNOME 2, mouse-over activation, desktop-edge flipping, and virtual desktops is what I like the most, so I currently run Mint 17 MATE with 16 virtual desktops and the built-in display (not a fan of multi-display) on a Lenovo P50 laptop.

I was a believer in Linux on the desktop up until maybe 2004-2005, when I accepted that the Linux kernel devs will likely never provide a stable driver ABI, which would have addressed a good chunk of the issues with desktops and wide ranges of hardware.

I do use Slack on a daily basis (Linux and Android), just for chat at work; I've never touched any audio or video capabilities it might have. I preferred (past tense) Skype, which we were using before, but MS killed that generation of Skype years ago and now it's as bad as or worse than Slack for chat (I don't care about audio/video).

I was die-hard IRC back in the 90s; the dot-com bust caused the communities I was involved with on IRC to mostly evaporate, and I stopped going. IRC is good too, though the integrated ability to store messages while the user is not connected is something I never saw in IRC (outside of maybe bots; I ran eggdrop bots for years).

Someone's in hot water: Tea party super PAC group 'spilled 500,000+ voters' info' all over web

Nate Amsden Silver badge

as long as you bothered to read the doc

Haven't you gotten the memo? People haven't read the docs for just about anything for a long time now, ESPECIALLY the kind of folks who put data like this on a service like S3.

As someone who has been told time and time again that they write awesome documentation, I can't even begin to count the times someone asked me "what about X?" only for me to point them to documentation (easily searchable) written, usually, years earlier. I could understand "I browsed document X but it was last updated in 2014; is the info still valid?" kinds of questions.

Party like it's 1989... SVGA code bug haunts VMware's house, lets guests flee to host OS

Nate Amsden Silver badge

Re: A standard dating back to 1987? -- Backward

I prefer a lower (standard) resolution on the console myself, whatever the default has been forever 320x240? I don't know.

My P50 laptop has Nvidia in it, of course, and in X11 I have it fixed at 1080p (it is a 4K display). On the GRUB boot menu, as well as the Linux console, the characters are the size of a pen tip, if that. And GRUB takes about 3 seconds to refresh the screen when choosing another OS to boot.

Fortunately I don't need the console often on my laptop, only to recover from the very rare issue affecting X11, but it would still be nice to have a normal resolution for the console.
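For what it's worth, on GRUB-based distros a common workaround is pinning the menu and console mode in /etc/default/grub (a sketch only; exact behavior varies by distro, GPU, and EFI framebuffer):

```
# /etc/default/grub -- force a readable mode instead of the native 4K one
GRUB_GFXMODE=1024x768        # resolution for the GRUB menu
GRUB_GFXPAYLOAD_LINUX=keep   # carry the same mode into the Linux console
# then regenerate the config, e.g.: sudo update-grub
```

`GRUB_GFXMODE` and `GRUB_GFXPAYLOAD_LINUX` are standard GRUB 2 settings; whether the kernel later re-modesets the console (e.g. via KMS) depends on the driver in use.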

The march of Amazon Business has resellers quaking in their booties

Nate Amsden Silver badge

Re: "once Amazon has destroyed their competitors the prices will surely rise"

I haven't intentionally bought anything from Amazon since March 2011. I say intentionally: it wasn't until 2013 that I discovered that woot.com, whom I had bought some stuff from, was owned by Amazon (at the time that info was buried within one of their pages), so of course I immediately stopped going there. I was surprised, even shocked, to see a couple of recent purchases this year from Newegg arrive in Amazon packaging. If there had been any indication it went through Amazon, I wouldn't have bought it.

I moved away from the Seattle region in 2011 as well, having seen ex-Amazon folk spread just about everywhere there and try to make everything like Amazon. Conversely, when I first moved to the Seattle region (2000) it was a lot of ex-MS people going around, and at least the companies I worked at did not do things the way MS did (the biggest one being they were using Linux, not Windows).

I never really was a Walmart customer but since I saw some documentary on them at least 15-17 years ago I made it a point never to shop there either.

The amazon effect is felt even stronger in the IT realm with their cloud crap.

very unsettling times.

Samsung: Swanky hardware alone won't save a phone maker

Nate Amsden Silver badge

Re: Samsung bait and switch

Same for me, though with a Note 3 (my first Android device). Not every hour, but usually 2 or 3 times a day it asks me to accept their new terms. I don't know if it's been 1 year or 2, but I have no need to accept their terms; I am not using any of their services, so I just clear the notification. It would be nice, of course, to be able to permanently decline so they stop asking.

AT&T is not quite as bad at pestering me to upgrade my OS, though their pestering comes in waves: for a few weeks they may ask me once or twice a day to upgrade, then they stop asking for a month or more. I have the latest Android from AT&T on another Note 3 (5.0, from 2016) and I see no reason to upgrade my main phone, especially because I'd lose a critical feature: being able to mute the phone from the lock screen using the power button (which in itself was a downgrade from previous WebOS phones), something I do regularly as my phone is also a pager for my on-call duties.

I've kept wifi on my main phone mostly disabled for the past two years to prevent automatic upgrades (at least on this vintage of phone, upgrades require a wifi connection). In the early days I would turn wifi on when I needed it, then off again; I forgot a couple of times and caught the phone mid-automatic-upgrade (sneaky bastards), and killed wifi so the upgrade aborted. Now (well past ~18 months at least), with unlimited data, I don't really need wifi.

About to enter its 6th year of service, the only issue with my main Note 3 is that the gyroscope and light sensors (probably clustered together) started failing maybe a year ago. I replace the battery once a year, and maybe the screen protector every 2 years. I have a 2nd Note 3 that gets pretty heavy usage as well (wifi only, unless I'm traveling overseas); I bought that maybe four years ago and it has no issues, other than that I prefer Android 4.4 to Android 5 (I tried downgrading it one time but couldn't get past KNOX).

The highest end Note 9 has me interested, though not in a rush at this time. I'd love an updated Note in the same form factor/design as the Note3 or Note 4(I have one of these too though rarely use it), but I guess that'll never happen.

In the two years since Dyn went dark, what have we learned? Not much, it appears

Nate Amsden Silver badge

Re: Bind/Named

One use case at least is geographical performance. One company I was at that switched to Dyn (2008/2009 time frame) had a high performance requirement. They were using F5 Global Traffic Manager before Dyn, hosted active-active out of two data centers, one on each coast of the U.S. Apparently their customers were complaining that DNS lookups were too slow. Part of the (then; not sure if it is still true now) F5 DNS architecture for routing traffic to different geo locations required an additional DNS lookup (I forget why), so going to www.mydomain.com resolved to one CNAME, which then resolved to a 2nd CNAME, and only then did you get the IP of the geo source.

Dyn's setup removed one of those CNAME lookups, and combined with more geo-diverse locations, allowed DNS query times to drop by maybe 20-30ms (maybe more, I forget now). The lower response times made their customers happy, though I thought it was stupid, just because nobody can perceive that difference ("but it shows up in their monitoring" was the response). Whatever.
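The chain described above is easy to picture with a toy resolver (purely illustrative; the names and address below are made up). Each CNAME is one more record to chase before the client finally gets an A record, and on a cold cache each chase can mean another round trip:

```python
# Toy model of the double-CNAME geo-DNS chain. Hypothetical records only.
RECORDS = {
    "www.mydomain.com": ("CNAME", "gtm.mydomain.com"),
    "gtm.mydomain.com": ("CNAME", "us-west.geo.mydomain.com"),
    "us-west.geo.mydomain.com": ("A", "203.0.113.10"),
}

def resolve(name: str) -> tuple[str, int]:
    """Follow CNAMEs until an A record; return (address, lookups used)."""
    lookups = 0
    while True:
        rtype, value = RECORDS[name]
        lookups += 1
        if rtype == "A":
            return value, lookups
        name = value  # chase the CNAME
```

Dropping one CNAME from the chain takes this from three lookups to two, which is roughly the saving Dyn's flatter setup delivered.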

Nate Amsden Silver badge

Re: Redesign

Just look at IPv6 to see how well that approach has worked.

To me, the issue people are really complaining about WRT DNS is the centralization of DNS services: so many customers concentrated with so few providers.

I really don't see anything wrong with DNS as it is.

It certainly doesn't have to be that way; nothing in DNS prevents people from running their own, though bigger companies are probably best off with a Dyn or Neustar to better absorb those DDoS attacks. Obviously pretty much any internet provider has a DNS service available, and in many cases they may not even charge for it, since for the most part it doesn't cost much to run, unless you need very regular updates and they don't have a UI to manage DNS.

I've run my own personal authoritative DNS since 1996 myself (still do today).

Nate Amsden Silver badge

multiple CDN as well

As a Dyn customer of about 9 years now (across 3 different companies): after the attack, some providers tried to get me to go multi-provider. To me it does make sense, but only really when it is also paired with a multi-CDN deployment, and for whatever reason I don't see nearly as many people talking about that as about multi-DNS deployments.

DNS has also made it easy to use multiple providers forever now, with slaves and longer TTLs. Though if you're all up in their APIs and such, that may make it more difficult.
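The slave arrangement is about one stanza per zone in BIND; something like this (a sketch with a hypothetical zone and transfer address; the second provider just pulls the zone via standard zone transfer):

```
// named.conf fragment on the second provider's server
zone "example.com" {
    type slave;                  // called "secondary" in newer BIND 9
    masters { 198.51.100.53; };  // the primary provider's transfer address
    file "secondaries/example.com.db";
};
```

With NOTIFY plus IXFR the secondary stays current within seconds of a change, with no proprietary API involved.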

For the org I am with, and probably many others, this one outage was not nearly enough to switch off Dyn (or go dual-provider). It's still the most reliable service I have ever used, and as a bonus their UI hasn't changed in as long as I can remember (probably the full 9 years I've been a customer). Which is refreshing, for once, to see something stable.

If only other cloud providers could tout 1 outage in 9 years (it's certainly possible Dyn has had more than that, though nothing that registered on my monitors long enough to be detected). They seem to be very proactive about alerting customers via https://www.dynstatus.com/ -- one of the first public status pages I can recall coming across.

All of the DDoS attacks that have affected me have been collateral damage: either the attack on Dyn in that one case, or in several other cases attacks on an upstream ISP (which itself has probably 8 different load-balanced providers). The CDN we currently use hasn't had an attack big enough for us to notice, but they aren't a big-name player either. I have seen more than one article in the past about big outages due to attacks or something at Cloudflare, for example (I don't recall anything recent though).

Happy with your Surface Pro 3's battery? Well, here's a setting that will cut the charge by half

Nate Amsden Silver badge

Re: I thought this was a solved problem

I have a Toshiba Tecra A11 from 2010 ($1878 before accessories+support), which has probably spent greater than 99% of its operating hours plugged in. I bought 2 extra batteries for it in the beginning; I don't recall how often I rotated them, but the current battery has about 40-50% of its capacity left. The other two batteries appear to be in good condition according to a tool I have called HWMonitor (https://cpuid.com/softwares/hwmonitor.html - I mainly use it for looking at temperatures). It's probably been 2+ years since either of them was used.

I only say this because I thought that was "normal"; I didn't expect to read about batteries dying (completely) from being plugged in too long, short of the occasional faulty battery, I suppose.

Recently rebuilt the laptop to be a gaming system for older games. Decided to keep the bad battery in it since it's hooked to a UPS anyway.

Even on a good day when it was new, under Linux anyway and with the Nvidia graphics, a full charge wouldn't go much past 2 to 2.5 hours. I didn't buy it for good battery life, though. One battery that is faulty on the Toshiba is the CMOS battery: if the laptop stays unplugged for too long (a few days?) it loses its settings. So even when it's turned off I keep it plugged in.

Currently I'm using a Lenovo P50; it spends 95%+ of its hours stationary on my desk, plugged into a fancy double-conversion UPS. Battery life on this, on a good day in Linux with Nvidia, is maybe 3-3.5 hours.

I replace the batteries in my main phones about once a year (Galaxy Note 3s), they spend a lot of time on wireless chargers which probably impacts the battery life. After a year they still work fine though it seems they lose 20-30% of the capacity.

Chinese Super Micro 'spy chip' story gets even more strange as everyone doubles down

Nate Amsden Silver badge

I believe bloomberg myself

Though I am obviously biased I suppose as I have had a small fear about this exact kind of thing since Lenovo bought IBM's Thinkpad line.

Fortunately I don't have anything of value that the Chinese would want. After being a die-hard ThinkPad fan for many years, when Lenovo bought them I swore off them for 11 years; I used Toshiba in between. I am on ThinkPad again after I accepted that whatever could happen to a Lenovo ThinkPad is just as likely to happen to a Toshiba (that, and Toshiba didn't have the hardware I was looking for at the time).

I've read conflicting comments on whether this kind of thing is possible, and to me, based on the history of other surveillance activities by other countries, I have to be on the side that it is probable this happened, given the resources of a country like China. I'm just as likely to believe something similar could happen in the U.S. with the NSA/CIA or whomever. I also totally believe the intelligence community is pissed off at the report for revealing that they knew what China was doing; they'd rather keep that secret so they can continue monitoring and quietly contain it.

I'm just hoping some day to see another Snowden-style leak of internal documents that says yes, this did in fact happen, and those paranoid folks were right all along. It reminds me of the early reveals about the taps the NSA had at AT&T facilities. As an AT&T data center customer at the time I joked with their staff about it, but it really didn't surprise me; I continued as their customer until I moved to another job.

Some folks ask why more places didn't encounter this. The answer seems obvious: they targeted the attacks to lessen the likelihood of detection, like any good APT.

Certainly sucks for Supermicro right now, though I'd suspect the vast, vast majority (99.99%) of their customers have nothing to worry about (they are not juicy targets). I run one Supermicro server myself in a colocation in the bay area. I was thinking about getting a new one, as that one is 7 years old; this report does nothing to sway my opinion either way.

However, I wouldn't be caught dead running Supermicro in mission-critical production (again, this report has absolutely nothing to do with that; it's just based on ~18 years, off and on, of using their hardware). I do realize, of course, that some 3rd-party appliances I have may very well have Supermicro hardware inside, but at least those are managed by the vendor, as in I don't have to worry about diagnosing strange hardware faults, asking fortune tellers what changes are in the latest firmware, or resetting all configurations to defaults when flashing said firmware (with the obvious negative implications of doing so from a remote location -- my critical servers are 2,400 miles from my home).

To me, at the end of the day, this is hopefully a good thing in that it raises awareness. I think it's totally possible for similar things to happen to other manufacturers, even the big guys like HP and Dell. The race to the bottom on pricing really puts pressure on companies' willingness to be extra vigilant.

Oracle? On my server? I must have been hacked! *Penny drops* Oh sh-

Nate Amsden Silver badge

Re: bleh...

One of my co-workers likes to rant about Juniper (whenever the topic comes up). I don't use either Cisco or Juniper myself (I've been an Extreme Networks customer for ~19 years now). Anyway, his rant mainly revolved around this feature you mention, which I had also heard touted by another friend a long time ago. It had to do with the JunOS software not giving errors when the configuration was incorrect (syntax errors or whatever): it would go along like everything was fine, things wouldn't work, but it gave no indication as to why. He evaluated Juniper at his previous company (a Cisco shop) and it didn't get past the eval stage, I think because of that; it drove them crazy.

SUSE punts SES v5.5 out door, says storage is going software-defined and open source

Nate Amsden Silver badge

how is this different from red hat?

Maybe pricing?

Red hat seems to offer the same sort of thing: https://www.redhat.com/en/technologies/storage/ceph

Given Red Hat bought the company that created Ceph (https://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph), the article is not clear on why someone would want to choose this platform over another based on Ceph.

But I guess the article is not alone; looking at the SuSE Ceph site, it makes no attempt to say how or why they may be better than Red Hat (or any other Ceph-based solution), it only compares itself to non-Ceph options.

Obviously I'd expect much of the underlying core code to be the same across systems but they can differentiate with their pricing and their monitoring/management tools(vs other Ceph solutions), if they do differentiate they don't do a good job at communicating that.

Of course if you are already a SuSE Linux customer then it would make sense to use them if you were interested in a Ceph storage option.

(Not a Red Hat nor SuSE customer, never used Ceph)

I was a brief SuSE customer, I guess you could say, maybe 13 to 15 years ago; bought several versions of their desktop Linux distro at the time. It was pretty slick, though my sister used it more than me. I kept to Debian, until at some point switching to Ubuntu on desktop/laptop (only) and now Mint (MATE). I remember one time looking at my sister's computer with SuSE and she had installed Yahoo Messenger on it, the Windows version. I was shocked that Wine worked so well she could do it herself without asking me (she knew very little about computers).

IBM won't grow, says analyst firm while eyeing flatlining share price

Nate Amsden Silver badge

Re: Good News

am confused - why would anyone care to run that architecture on a desktop or laptop? I think even PowerPC, which should be much easier to manage, doesn't appear to have had a laptop (the Mac of course was, I think, the only PPC laptop) or desktop version for many years now.

if anything I'd expect an emulator to be made(looks like there is at least one already - https://en.wikipedia.org/wiki/Hercules_%28emulator%29 ) for those that need to develop/test software on x86 systems.

You dirty DRAC: IT bods uncover Dell server firmware security slip

Nate Amsden Silver badge

used to be a time

Where people shunned the idea of dealing with signed code with hardware as it limited your ability to mess with it (one big example which was more of a licensing thing than a signing thing was Tivo).

I also remember in several communities years ago the whole concept of TPM was quite scary (I include myself in the list of people that feared TPM - and I still don't like it; fortunately in most cases it can be kept disabled without any issue (AFAIK I've never had a system with it enabled), though I think perhaps in some products like MS Surface (guessing) it may be forcibly enabled).

Nowadays it seems sad that everything that allows you to install unsigned code - which previously was a good thing - is now an evil thing because it's not "secure".


Reanimated Violin returns to scene with flashy XVS 8 array, and, er, AR app

Nate Amsden Silver badge

Re: Really?

back in 10? For me it would mean booking a flight to the other side of the country (~4-5 hrs each way air time alone, plus at least a 90 min drive to the airport with no traffic). I typically make two such trips to our main data center per year, staying for 10 days on each trip. I do wish it was closer..

I too was quite confused when I saw mention of scanning a QR code to get info.

Cloudflare ties Workers to distributed data storage

Nate Amsden Silver badge

Re: It's not the sort of thing you'd want handling millions of rapid-fire financial transactions

I suppose you could take that question and ask folks that run things like redis, or memcache, or similar types of key value store systems.

The most popular use case that I see anyway is for caching and holding session data. Given the distributed nature of this feature it seems like it could do a lot for performance of caching more complex things closer to the client in a simpler fashion.
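To make the pattern concrete, here's a hedged sketch of the cache-aside approach described above. A plain dict with TTLs stands in for redis or memcache, and `load_profile_from_db` is a made-up placeholder for whatever the real backing query would be:

```python
import time

# TTLCache is an illustrative stand-in for a key-value store such as
# redis or memcache; keys expire after a fixed time-to-live.
class TTLCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}          # key -> (expiry_time, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None          # miss or expired: caller goes to the DB
        return entry[1]

    def set(self, key, value):
        self.store[key] = (time.monotonic() + self.ttl, value)

def load_profile_from_db(user_id):
    # Made-up placeholder for the expensive database query being cached.
    return {"id": user_id, "name": f"user-{user_id}"}

cache = TTLCache(ttl_seconds=60)

def get_profile(user_id):
    key = f"profile:{user_id}"
    hit = cache.get(key)
    if hit is not None:
        return hit               # served from the cache
    value = load_profile_from_db(user_id)   # miss: fetch and populate
    cache.set(key, value)
    return value

print(get_profile(42))   # first call goes to the "DB"
print(get_profile(42))   # second call is a cache hit
```

The same read-through shape applies whether the store is local, a redis instance, or something distributed at the edge; only the latency and consistency trade-offs change.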

I'm not a developer and have no use for this myself, and am not, and never have been a cloudflare customer. I have worked with many apps that use memcache and redis over the years though (usually in addition to a database of some kind).

One developer I knew (but didn't like, and he didn't like me either) years ago thought he was smart and figured he needed just a couple of weeks to code a database caching layer in memcache to replace the MySQL query cache. When I heard that I just laughed. He wasn't at the company much longer so I never got to see what he might have come up with. (Yes, I know the MySQL query cache is frowned upon; at the time it was required for our core application.)

Heimdall is a MySQL database load balancer, analytics tool, and cache accelerator, something I have been using off and on for the past year or two. As an example, they generally suggest using redis for (their) cache, though they can use Hazelcast and maybe one other tech too. Just mentioned them as an example of caching with MySQL anyway.

But it seems strange to me for someone to compare a key value store to a traditional database.

Perfect timing for a two-bank TITSUP: Totally Inexcusable They've Stuffed Up Payday

Nate Amsden Silver badge

Make sure those branches aren't dependent upon the same online services that keep failing.

TLS proxies? Nah. Truthfully Less Secure 'n' poxy, say Canadian infosec researchers

Nate Amsden Silver badge

Re: lesser threat

Not quite the same I think. Well it could be I'm not sure how browsers behave in the background when this happens. What I'm referring to is the big warning dialog that pops up saying "this cert is not trusted", and says why. Then you can override the connection if you wish (unless the site uses HSTS or whatever that thing is called) and continue connecting. I don't expect browsers to submit data until that exception is granted but they might, I haven't checked myself.

I recall back to 2004 or so time frame the company I was at had tons of SSL certs, so many that we had a special portal to Verisign's site where I could issue certs without ordering them each time and they would invoice us(something like $90,000 a year in certs). It was also my first (and probably only) experience using client side SSL certs for authentication to a website.

Anyway, in one case we had a cert error that I saw, and one of the support folks wasn't seeing it. He wasn't the smartest guy in the company but he was a good support person. But he was conditioned, I guess you could say, to just click past SSL errors (in this case I think it was IE with a pop-up dialog box, a one-step click to bypass the error). I went to his desk and was talking him through the steps to get to the error. The error popped up and he instinctively clicked the "continue" (or whatever it was called) button; the error didn't even register with him. I laughed and said STOP, the error was RIGHT THERE. Went back again and he realized it at that point.

So certainly people can be conditioned to click past the errors, but as long as "untrusted" certs can be allowed in browsers (and if browsers some day decide to stop that I'll just get off the internet entirely, perhaps), the risk of an untrusted cert intercepting data is far greater than that of a MITM decrypting data because of weak(er) encryption.

But at the end of the day the whole SSL CA stuff is flawed security wise anyway since the list of CAs that are trusted seem to go on forever and there doesn't seem to be good enough controls on how certs are issued. Obviously there's been several incidents over the years where certs were issued to the "wrong" people for big domains..

But go beyond browsers and think of all of the server side applications that use SSL - I'm talking server to server communications, whether it is API endpoints, email services, or other proprietary protocols that use SSL. Maintenance of SSL versions and settings is honestly what I'd call black magic in many cases. Something as simple as the ordering of the ciphers can throw everything off.

A few months ago I upgraded some of our internal systems, and when we hit production a critical external endpoint was simply failing. It worked fine prior to the upgrade, but not after. It had been working in test only because they had configured it to use http there; in test, https would fail because the vendor's cert had expired years ago and failed validation, while in production http was not allowed (on their end). After some investigation I determined they were using ciphers on their site that were now considered very insecure, and OpenSSL (or GnuTLS, I forget which) refused to connect to the site no matter what. Strangely enough, whichever of OpenSSL or GnuTLS refused to connect, the other one worked fine (so if OpenSSL was failing, GnuTLS would work, or vice versa; I forget which worked and which did not). I ran an SSL Labs diagnostic on the site and it was given a grade of "F". Ended up building an older OS system for that API call until the vendor could fix their stuff.

Fortunately for HTTPS-based sites there is the SSL Labs testing site; without that I don't know what I'd do myself.
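The kind of client-side strictness that tripped up that endpoint can be sketched with Python's stdlib ssl module (a sketch, not tied to any particular product; the cipher string here is an arbitrary modern-AEAD-only example):

```python
import ssl

# Pin down the protocol floor and cipher list on the client side - the
# kind of setting whose strictness/ordering can make one endpoint fail
# while another works.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 peers
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")    # AEAD suites only

# A server still offering only old, weak suites now fails the handshake
# outright rather than negotiating down.
enabled = [c["name"] for c in ctx.get_ciphers()]
print(enabled)
```

Flipping the `minimum_version` or cipher string and retrying the connection is a quick way to tell whether a mysterious handshake failure is a protocol-floor problem or a cipher-overlap problem.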

As for BEAST, I don't recall the details of it much, but I do recall putting an easy workaround in on my Netscaler load balancers a few years ago, back when we were prevented from upgrading the code on the load balancers to something that supported newer than TLS 1.0 due to an unrelated bug in the platform which took a good 2 years to get a resolution on.

The whole dumbing down of the internet is quite annoying to me. Present the user with choices and let them choose which they want to do (I have no problem with default choices, just let them override that if they desire). Browser vendors in particular Chrome and Firefox have been absolutely positively terrible in this regard(I say this running the Pale moon browser, I clung to firefox for as long as I could).

Nate Amsden Silver badge

lesser threat

I don't use these appliances (I haven't done corp IT work since 2002) but can certainly understand security folks seeing whatever vulnerabilities they have as a lesser threat than letting people connect externally to things they can't inspect. I'd wager many companies would even be fine with SSL termination on the proxies and just providing http internally (that is quite common in application load balancing setups anyway, something I have been doing for about 15 years now).

Now it'd certainly be good if these vulnerabilities were fixed - though I don't agree with disabling support for older protocols not without a graceful failure mode of some kind. It drives me insane that browsers and people push to completely disable stuff without any sort of graceful failure mode. I've been saying for years now treat those sites as if they were using self signed certs -- provide a warning, and a way for the user to continue past the warning if they deem the risk is acceptable, or if they just don't care. Because the same level of threat exists with self signed certs as it does with weak(er) encryption. Well I guess technically non trusted certs are much easier to deploy and so a much greater risk than weaker encryption.

But as it stands the things in the article seem to be far less of a threat than if the companies removed the appliances altogether as an example.

I had discussions with one guy who is really good at security, who at his previous company didn't allow any outbound communications from the servers to the internet unless it went through a proxy. Which on paper sounds fine, but unless you're doing SSL interception on that proxy there's still a very wide open door for stuff to get out that you can't see (in the earlier days at his company it was long before widespread https adoption). With more and more external services dependent upon large crappy clouds that have wide swaths of IP addresses that change often (without notice), it's not quite practical to try to lock down communications to IPs (at least to ones based in those clouds), and it's often even more difficult to determine what is on a given IP.

Recently had to diagnose a network issue in this situation and fortunately the remote IPs were serving regular https and their ssl certs were specific enough that it allowed me to identify the organization(a provider the company I am with does business with so I recognized it) that was running the service on those ips.

Guys, geez... finally 5Gs: AT&T grows super-fast mobile net city rollout

Nate Amsden Silver badge

waste of time

5G sounds like it may be useful for things like fixed wireless communications. AT&T struggles to get their 4G LTE stuff working most of the time. I can recall two situations in the past 5 years where I got above 20Mbit on LTE. Most of the time it is below 5Mbit. One was in a San Jose hotel (another time I was at another San Jose hotel and the LTE was sub 1Mbit). The other time was at a Las Vegas convention center where they obviously had LTE repeaters or whatever you call them in the room.

Too bad fixing coverage doesn't sound flashy like 5G.

It's not my phone either - I have tried at least 3 different phones side by side and the coverage is quite similar. I have seen many times where I have "good" LTE signal strength (as measured by an app that looks at the numbers), but not enough bandwidth to resolve any DNS entries.

(AT&T customer since about 2010 or so, I switched from Sprint in order to use Palm/HP Pre GSM phones at the time currently have Galaxy note 3s and Sony XZ1 on their network). When I was on Sprint of course it was far worse at the time anyway, their Wimax 4G was slower than their 3G (I had a Sprint mifi hotspot at the time and despite unlimited 4G Wimax it was so slow I configured the device to stay on 3G even though it was no longer unlimited). Changing to (then) AT&T's HSPA+ it was easily 4-6X faster than Sprint. I'm sure Sprint has improved a bit since that time, bad performance wasn't the only reason I left 'em.

Windows Server 2019 Essentials incoming – but cheapo product's days are numbered

Nate Amsden Silver badge

Re: Is Cloud computing Smart Meters for IT ?

"demand management" has been an issue with cloud since day 1. With the crappier public IaaS clouds such as MS, Amazon, google it's even worse as you can't provision into pools of resources(as in being able to over subscribe CPU/memory/disk -- which in itself is demand management as well but it can simplify things quite a bit depending on your workload). Conversely if you want to have much better resource utilization then you have to pick a cloud provider that provides that, though the costs typically go up even more in that situation.

SaaS clouds in theory should abstract that aspect of management away if managed correctly, but as big SaaS clouds like Google and MS have shown time and time again it's far from a mature process.

Who needs quality when you can just slowly numb your customers into a lower level of service without them realizing it?

Web cache poisoning just got real: How to fling evil code at victims

Nate Amsden Silver badge

Re: So non-core services offered by a SaaS supplier likely to be less secure thatn core

Sort of ironically, in many cases a website is not an IP but a combination of IP + host name. As a side effect rather than by design, I credit name-based virtual hosting on my load balancers for protecting many of the websites from casual drive-by scanners. There may be dozens of different websites behind that IP (and in some cases those sites are meant for "internal" use and have no externally published DNS), but without specifying the right Host: header, at most you will hit only one of them (per IP).
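The effect is easy to demonstrate with a toy name-based virtual host in Python's stdlib (a sketch; the site names and bodies are made up): the Host: header, not the IP, selects the site, so a scanner probing the bare address sees none of them.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# One listening IP/port; the Host: header picks the "site".
SITES = {
    "shop.example.com": b"public shop",
    "admin.internal": b"internal-only admin console",
}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        body = SITES.get(host)
        if body is None:
            # Bare-IP or wrong-name probe: none of the real sites answer.
            self.send_response(404)
            self.end_headers()
            self.wfile.write(b"no such site")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), VHostHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch(host):
    # Send a request to the same IP/port with an arbitrary Host: header.
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/", headers={"Host": host})
    try:
        return urllib.request.urlopen(req).read()
    except urllib.error.HTTPError as e:
        return e.read()

print(fetch("shop.example.com"))  # right Host: header reaches the site
print(fetch("203.0.113.5"))       # bare-IP style probe gets nothing
```

Real load balancers and web servers do the same dispatch, just with far more machinery around it.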

Bitcoin backer sues AT&T for $240m over stolen cryptocurrency

Nate Amsden Silver badge

Re: So much for the "what you have" 2nd factor...

You seem to be focusing too much on hacking your account via their website because you use fancy 2FA, and not mentioning hacking the account via social engineering through the phone lines or in person.

For example I have a pass code on one of my bank accounts, if I call in they are supposed to ask me for the pass code before they can do anything. Though there is a way around that pass code if you provide enough personal information about yourself to verify you are who you say you are.. I can imagine without this the companies would be bombarded with complaints from users who do forget their shit. Can be a tough balancing act.

In the case of a bank account, or even a bitcoin site with millions of dollars of your own funds.. I have to believe there are ways around fancy 2FA in the event such tokens are lost/stolen/something. I mean I can't imagine an organization saying "sorry we can't authorize you because you lost your token(s) so your $24M gone forever".

Had this money been stolen from a FDIC insured account (knowing there are limits to the $/account that are insured) would FDIC and/or the bank cover the losses (at least to the limit of the insured value)? Or is FDIC only used for things like in person bank robberies?

Google bod wants cookies to crumble and be remade into something more secure

Nate Amsden Silver badge

Re: Zero understanding of cookies

how are cookies not stored on the server side ? Any cookie associated with a site would be transmitted to the site and the site can store that data if it wishes(but it probably already has that data in other forms, e.g. session info, items in your shopping cart). Back when I worked for an ad targeting company many years ago we collected probably 40TB of log data per day, most of that was cookie stuff from the tracking pixels.

It's pretty trivial to configure most web servers to log the contents of the cookies.
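In Apache httpd, for instance, it's roughly this (a sketch; the format name `cookielog` and the log path are arbitrary, and `%{Cookie}i` logs the raw Cookie request header):

```apache
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Cookie}i\"" cookielog
CustomLog logs/access_cookies.log cookielog
```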

Of course I could be misunderstanding what you are saying as well.

Oracle: Run, don't walk, to patch this critical Database takeover bug

Nate Amsden Silver badge

could disable java

Though it is enabled by default, I remember disabling it on my last installs since the app that I use oracle with (vCenter) doesn't need it.

SQL> select comp_name from dba_registry;

Oracle Enterprise Manager
Oracle XML Database
Oracle Text
Oracle Workspace Manager
Oracle Database Catalog Views
Oracle Database Packages and Types

6 rows selected.

You can probably do it on the fly (as in, without having to reinstall) as well, assuming you don't need it.


Phased out: IT architect plugs hole in clean-freak admin's wiring design

Nate Amsden Silver badge

I deployed some new PDUs recently (as in 0U rackmount PDUs, not large scale datacenter PDUs). They are pretty neat as they alternate the phase on every outlet (and the outlets are color coded). Pretty convenient, though the locations of the outlets could use some improvement; presumably because of the extra hardware for the alternating-outlet arrangement, the 208V 30A 3-phase unit has 36 outlets but a good two and a half feet with no outlets on the bottom part of the PDU.

Extreme Networks? Extreme Share Price Crash, more like

Nate Amsden Silver badge

loyal customer for ~19 years

Been a small but loyal customer of Extreme's for about 19 years now (always felt a bit bad seeing Extreme hardly ever mentioned on El Reg, though more so now since they acquired the Brocade data center assets for whatever reason), though technically I guess they wouldn't count me as a paying customer until about 2004; purchases before that were made off of eBay. Not super excited myself about their recent acquisitions. I did like the "one platform" (XOS) story (much like HP 3PAR's one-platform small-to-big story) that they had before that (my first XOS box was a BlackDiamond 10K in 2004). I went to a conference of theirs earlier this year and didn't like what I heard, mostly "ignore what we've been saying for the past 8+ years, we like this new stuff better". Though I had been ignoring what they had been saying for the past 8+ years anyway, since it was about M-LAG, which sounds nice though I never had a real interest in that approach either.

But whatever; as long as XOS keeps chugging along. There hasn't been a feature they have added that I've cared about in a long, long time; my network architectures haven't changed since 2005 (ESRP + virtual routers), so as long as that stuff keeps going - hopefully stable since it is mature - I'll be happy and ignore the other messaging about their new Brocade data center kit (never been a fan of TRILL anyway myself). Also not sold on FCoE; am happy with isolated Fibre Channel networks for storage.

People have been telling me Extreme is going out of business since at least 2004 (back then it was Foundry Networks who was spouting that, ironically enough look where Foundry is now), then when Extreme turned down the Juniper acquisition, then several other things along the way.. somehow they manage to keep going though.

The pitch is they will be able to better grow these multiple businesses now that they are a billion $ company, rather than the several smaller ones that struggled before. Seems like a tall order, I hope they didn't take on too much debt and other bad things as a result of this stuff.

Their new security and intelligence story really reminded me of the same story back in 2005 with the BlackDiamond 10K (with FPGAs I assume, though they called them "Programmable ASICs" at the time) and their Sentriant technology along with ClearFlow. Of course things are better technology wise now, but I found it amusing that all of the stuff they were touting at this conference I literally heard them tout 13 years ago. It sounds cool for sure; my org's got no budget for that stuff though, so it doesn't matter anyway.

I tell people if you want to make a career out of networking then go the Cisco/Juniper route (much more complicated), but if networking is only one of the things you do, choose something else. Other than ESRP, the ease of use of the Extreme platform is what has kept me happy. Certainly have had issues here and there over the years, but at least not the constant headaches of dealing with a Cisco (or Cisco-like) CLI. Extreme believes CLIs will go away entirely in favor of fancy SDN. Maybe they will some day... companies have been promising such things for a long time and so far it hasn't happened. Not holding my breath.

For all the excitement, Pie may be Android's most minimal makeover yet – thankfully

Nate Amsden Silver badge

can you get updates only yet?

I'm still on my first Android phone, which is a Galaxy Note 3 running Android 4.4 (I refuse to let it upgrade to 5); before that I was on WebOS. I have a Note 4 (Android 5) as well (and another Note 3 on Android 5), which I'm typing this on (wifi only).

My Q is: can you opt for JUST security updates in modern Android, with no feature upgrades? I see the July 2018 security bulletin still supports Android 6, so the patches are there for older OSes. I haven't noticed a single headline feature addition to Android since 5 came out (including 5) that looked interesting to me, only annoying UI changes. The most frustrating of those, other than the Material design, is the removal of the mute option from the pop-up menu shown when pressing the power button, which happened in Android 5. I use this feature constantly; it would be even better if there was a physical switch to mute like I had on WebOS, or the ability to mute the ring tone instantly with a quick tap of the power button.

I'm assuming not, but curious anyway.

Sitting pretty in IPv4 land? Look, you're gonna have to talk to IPv6 at some stage

Nate Amsden Silver badge

Re: Overly Gloom and Doom 90's Predictions

So a better solution to "breaking" a few things with NAT is to break *everything* with IPv6, right? (Because back then, what really supported IPv6?) Then everyone can be forced to update everything because everything is broken, and then everyone will be happy. Yeah, I can see why that didn't happen.

I've been doing networking stuff since the late 90s(not really my primary role), these days load balancers, firewalls, vpn, layer 3 switching, though no dynamic routing protocols etc, and even I have zero interest in ipv6(along with a lot of others I'm sure). In fact I don't recall ever even having a conversation/chat with anyone outside of toy(home tunnel) deployments who was excited about IPv6.

I go out of my way where I can to disable IPv6 on systems because it can still cause issues (perhaps mainly when there is no IPv6 network). One example that came up again recently is BIND, which by default will query IPv6 name servers unless IPv6 is explicitly disabled on the service itself (having it disabled at the operating system level is not sufficient), resulting in many query timeouts.
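For the curious, the service-level switch I mean is running named with the `-4` flag; on Debian-family systems that's a one-liner in the defaults file (path and existing options may differ by distro and version):

```sh
# /etc/default/bind9 -- Debian-style location, may differ elsewhere.
# -4 makes named use IPv4 only, so it stops timing out against IPv6
# name servers even when IPv6 is already disabled at the OS level.
OPTIONS="-4 -u bind"
```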

I do remember being "excited" I suppose that the big core switches I purchased in 2004 supported IPv6 in hardware, though other than a bullet point on a spec sheet my interest stopped there.

IPv6, much like SDN still seems to me firmly only beneficial in the service provider/large enterprise space at this time. For most folks I think running out of IPs isn't a critical issue.

It was much more of an issue back before SNI -- I was at one company about 13 years ago where we had a couple hundred SSL certs (many different domains too) that had to be exposed externally -- so of course each required its own IP. Getting those IPs wasn't difficult at the time, but these days such a setup could easily be consolidated, even down to a single IP address with SNI.
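For illustration, with SNI that consolidation looks roughly like this in nginx (hostnames and cert paths made up): two certs for two domains, one listening address.

```nginx
server {
    listen 443 ssl;                    # same IP and port for both sites
    server_name shop.example.com;
    ssl_certificate     /etc/ssl/shop.example.com.crt;
    ssl_certificate_key /etc/ssl/shop.example.com.key;
}

server {
    listen 443 ssl;
    server_name blog.example.org;      # different domain, same address
    ssl_certificate     /etc/ssl/blog.example.org.crt;
    ssl_certificate_key /etc/ssl/blog.example.org.key;
}
```

The client sends the desired hostname in the TLS ClientHello, so the server can pick the matching cert before the handshake completes.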

For the stuff I do (managing production e-commerce infrastructure), if the time comes where we NEED inbound IPv6 then my strategy would be as the article suggests - though I would just have our CDN do the conversion for us. If the time comes where we NEED outbound IPv6 for something then I imagine my strategy would be to do IPv4->v6 NAT (never looked into it before). Though if either of those situations appear in the next 5 years I'll be quite shocked.

'Prodigy' chip moonshot gets hand from Arm CPU guru Prof Steve Furber

Nate Amsden Silver badge

reminds me of itanium

"[..]Power efficiencies are gained by moving out-of-order execution capability to software, Danilak said. “All the register rename, checkpointing, seeking, retiring, which is consuming majority of the power, is basically gone, replaced with simple hardware. All the smartness of out-of-order execution was put to compiler."

and then from wikipedia


"[..]With EPIC, the compiler determines in advance which instructions can be executed at the same time, so the microprocessor simply executes the instructions and does not need elaborate mechanisms to determine which instructions to execute in parallel. The goal of this approach is twofold: to enable deeper inspection of the code at compile time to identify additional opportunities for parallel execution, and to simplify processor design and reduce energy consumption by eliminating the need for runtime scheduling circuitry. "

How to (slowly) steal secrets over the network from chip security holes: NetSpectre summoned

Nate Amsden Silver badge

Re: Yup

if a nation state is after you this is the least of your worries.

Windows 10 IoT Core Services unleashed to public preview

Nate Amsden Silver badge

Re: February 30th?

When you want 10 years of support? When you don't want systemd?

(Debian user since 1998 - wondering when support for Debian 7 will run out and I will have to do another round of upgrades, probably going to Devuan)

Xen 4.11 debuts new ‘PVH’ guest type, for the sake of security

Nate Amsden Silver badge

high availability is nice though not really required for a bunch of workloads. I mean, I ran vSphere from about 2006 to about 2010 with nothing other than Standard edition (no HA, no vMotion, nothing). Ran everything from Oracle DB servers to web and app servers, etc (all of the VMs, if I recall right, were Linux). For the first couple of years (first company) we didn't even have vCenter. For a while in 2009 at least I was able to buy vSphere Essentials packs and get the hosts managed by vCenter Standard edition (VMware closed that license hole a couple of years later).

I did have SAN storage though so if I needed to move a VM to another host I could do it(VM had to be powered off of course).

Back in 2008, before I left that one company, my (new) manager at the time wanted to switch to Xen. He didn't even like paying the lowball standard vSphere pricing we were paying - I think it was $3k for a 2 socket server for Standard edition (excluding support, I think). We got into a big argument about it at one point. After I left the company he directed my remaining teammates to start working on Xen (on CentOS 5.x, I think, at the time, which along with CentOS 4.x, RHEL and Fedora we used as guest OSes). They spent about a month trying to get it to work, then gave up and went back to VMware. The core issue they were having at the time was the need to run both 32 and 64-bit CentOS guest OSes, and one of those (the 32-bit, I assume; it was a long time ago) simply wouldn't even boot (mailing lists etc provided no solution). Didn't talk to that manager again for years, but we have since made up - he apologized to me, which was nice, and said yes, (at least at the time) Xen sucked and VMware was better.

For the past 7 years or so at the current org everything is enterprise+, though I think the only real features of e+ that I use are VDS, DRS and host profiles. I'm probably a mix of a customer vmware would love and hate -- been using their stuff for 19 years now, very loyal customer(because of consistently good experiences) but at the same time not excited about any of their stuff other than the basics. vSphere 4.0 was the last product I was super excited about.

Biting the hand that feeds IT © 1998–2019