* Posts by Nate Amsden

2437 publicly visible posts • joined 19 Jun 2007

Flash price-drop pops Western Digital's wallet: Surprise revenue fall with worse to come

Nate Amsden

I just bought two 8TB WD Gold drives myself. Was tempted to go to 12TB though 8TB is double what I have on my storage system these days(4x2TB RAID 10). How much do 8TB SSDs cost now?

Microsoft promises a fix for Windows 10 zip file woes. In November

Nate Amsden

how is MS warning users?

Through some document buried on their website that you only see if you're specifically looking for a solution to that issue? Or are they proactive and giving users a message every time they open a zip file? Somehow I think it's the former, in which case 99.985% of people will never see it until it's perhaps too late.

Not that this affects me, mostly Linux and my Windows stuff is 7 only. I find it quite depressing how all of these complaints about windows 10 quality are falling on deaf ears. Not unique to Windows either of course. At least with Linux it was possible for(example) a group of folks to fork Gnome 2 to make MATE(which I use), and have been maintaining it ever since.

For me anyway very little(if anything) has come out of Windows or even Linux in the past decade that has gotten me excited.

AMD's shares get in a plane, take off and soar to 12,000 ft – then throw open the door, and fall into the cool rushing air

Nate Amsden

quite a hit

So many times I've seen people say a stock tanks or crashes or some other extreme term only to see it down 0-8%. 24% is pretty crazy though. I haven't followed anything stock market related in 4 or 5 years. I think the nasdaq was just over 2000 at that time or something.

Epyc hasn't impressed me, though with all of Intel's manufacturing difficulties(and seemingly everyone running into performance or power walls) AMD certainly has some window of opportunity.

I'll certainly look to replacing my opteron 6100 and 6200 systems with newer epyc next year. Am quite shocked I was just able to renew support on those systems from HP for another year (entering 8th year of service soon for me anyway). Was expecting HP to EOL them a while ago.

Zip it! 3 more reasons to be glad you didn't jump on Windows 10 1809

Nate Amsden

for me it was NT4 that drove me to linux. I liked win95 for a bit, got a NT 3.51 server cd from a friend at MS back in the day, liked that(more stable). NT4 was neat though I guess moving more shit into the kernel made it less stable. Quite a few crashes and seemingly having to reinstall every 6-12 months made me jump to Linux (Slackware 3.x) then Debian 2.0.

I still have Win7 at home and even an XP box(really for games, though I don't play games much). My main laptop dual boots to win7 but doesn't spend more than a dozen or so hours per year in win7 on average. I have a Win7 VM for work stuff that works fine.

So glad I never jumped on win10. I never did see the popups from MS offering free upgrades to win10. Win7 (and win2k8/r2 for the few windows servers I have) do what I need. win2k12 was quite annoying(have a half dozen of those systems).

Main interface for me though has been linux since about 1998.

GitHub.com freezes up as techies race to fix dead data storage gear

Nate Amsden

Re: If Only....

What are you protecting against? Snapshots certainly are backups. As is RAID. It doesn't protect against everything certainly.

Just today on an isilon cluster I was deleting a bunch of shit and I wasn't paying close attention. Realized I deleted some incorrect things(data was static for months). I just restored the data from a snapshot in a few minutes.

I've been through 3 largish scale storage outages(12+hrs of downtime) in the past 15 years. It all depends on what you're trying to protect and understanding that.

In my ideal world I'd have offline tape(LTFS+NFS) backups of everything critical stored off site(offline being key word not online where someone with a compromised access can wipe out your backups etc). This is in addition to any offsite online backups. Something I've been requesting for years. Managers didn't understand but did once I explained it. It's certainly an edge case for data security but something I'd like to see done. Maybe next year..

Understand what you're protecting against and set your backups accordingly.

Nate Amsden

Re: storage

I remember nexenta's zfs solution to that problem was to corrupt the zfs file system and then get into a kernel panic reboot loop until the offending block device(and filesystem) was removed. Support's suggestion was "restore from backup". I tried using zfs debug tools (2012 time frame) to recover the fs. Was shocked how immature they were. Wanted a simple "zero out the bad shit and continue", didn't exist at the time. Disabled nexenta HA after that. Left nexenta behind a while later.

Our processor tech's got legs, says Arm: 'One million' data center servers will ship in 2018

Nate Amsden

Re: A million what?

sounds like 1 million boxes with ARM something inside of them. The number would be more useful if we had indications as to the number of boxes shipped in previous years.

With Qualcomm, AMD and at least one or two others having bailed on ARM powered servers (that otherwise have no x86 in them), it seems the only thing ARM is replacing in the data center is other embedded processors. Perhaps what was once entirely custom silicon may in many cases be simpler/cheaper to implement on a general purpose ARM processor(maybe a custom ARM processor).

Love Microsoft Teams? Love Linux? Then you won't love this

Nate Amsden

Re: "Vanishingly Small"

Not many developers use linux in my experience -- as someone who has worked supporting linux-based applications (mainly web stuff) for the past 18 years. I have been primarily linux on my desktop since 1998. I do keep a win7 VM running 24/7 for some work things though.

OS X seems to have killed 98%+ of the developers I have worked with from using linux over that time. It was sad to see but understandable I guess for their use cases. I don't need more than one hand to count the number of folks at organizations I have worked at over the past decade (either developers writing software that will run on linux, OR support staff running the linux systems) who use linux as their primary desktop (I think the actual number may be as high as 3, maybe 4). At the same time probably less than 20 people using Windows to do the same things(guesstimate). OS X dominates.

I tried OS X for a few weeks at one point but it was not my thing. Didn't even like the hardware; ended up buying my own laptop so I wouldn't have to use the Mac trackpad (don't like (any)trackpads, I want the TrackPoint). Linux with Gnome2 + mouse over activation + desktop edge flipping + virtual desktops is what I like the most, so I currently run Mint 17 MATE with 16 virtual desktops and the built in display(not a fan of multi display) on a Lenovo P50 laptop.

I was a believer in Linux on the desktop up until maybe 2004-2005, when I accepted that the linux kernel devs will likely never have a stable driver ABI, which would have addressed a good chunk of issues with desktops and wide ranges of hardware.

I do use slack on a daily basis(Linux and Android) just for chat at work, never touched any audio or video capabilities it might have. I preferred(past tense) Skype which we were using before, but MS killed that generation of skype years ago and now it's as bad or worse than Slack for chat (don't care about audio/video).

I was die hard irc back in the 90s; the dot com bust caused the communities I was involved with on irc to mostly evaporate and I stopped going. irc is good too, though the integrated ability to store messages while the user is not connected is something I never saw in irc (outside of maybe bots or something; I ran eggdrop bots for years).

Someone's in hot water: Tea party super PAC group 'spilled 500,000+ voters' info' all over web

Nate Amsden

as long as you bothered to read the doc

Haven't you got the memo? People haven't read the docs for just about anything for a long time now. ESPECIALLY the kind of folks that put data like this on a service like S3.

As someone who has been told they write awesome documentation time and time again I can't even begin to count the times when someone asked me "what about X?" only to point them to documentation (that is easily searchable) written (usually) years earlier. I could understand "I browsed document X but it was last updated in 2014 is the info still valid?" kind of questions.

Party like it's 1989... SVGA code bug haunts VMware's house, lets guests flee to host OS

Nate Amsden

Re: A standard dating back to 1987? -- Backward

I prefer a lower (standard) resolution on the console myself, whatever the default has been forever 320x240? I don't know.

My P50 laptop has Nvidia in it of course, and in X11 I have it fixed to 1080p (it is a 4k display). On the grub boot menu as well as the linux console the characters are the size of a tip of a pen, if that. And grub takes about 3 seconds to refresh the screen for choosing another OS to boot from.

Fortunately I don't need the console often on my laptop only to recover from the very rare issue affecting X11, but still would be nice to have a normal resolution for the console.

The march of Amazon Business has resellers quaking in their booties

Nate Amsden

Re: "once Amazon has destroyed their competitors the prices will surely rise"

I haven't intentionally bought anything from amazon since 3/2011. I say intentionally. It wasn't until 2013 that I managed to discover woot.com, whom I bought some stuff from, was owned by amazon(at the time that info was buried within one of their pages), so of course I immediately stopped going there. I was surprised, or even shocked, to see a couple of recent purchases this year from Newegg arrive in amazon packaging. If there was an indication that it went through amazon I wouldn't have bought it.

I moved away from Seattle region in 2011 as well having seen ex-amazon folk spread to just about everywhere there and try to make it like amazon. Conversely when I first moved to the Seattle region(2000) it was a lot of ex MS-people going around, and at least the companies I worked at did not do things like MS was (biggest one being they were using Linux not windows).

I never really was a Walmart customer but since I saw some documentary on them at least 15-17 years ago I made it a point never to shop there either.

The amazon effect is felt even stronger in the IT realm with their cloud crap.

very unsettling times.

Samsung: Swanky hardware alone won't save a phone maker

Nate Amsden

Re: Samsung bait and switch

Same for me, though a Note 3(my first Android device). Not every hour but usually 2 or 3 times a day it asks me to accept their new terms. I don't know if it's been 1 year or 2 years but I have no need to accept their terms I am not using any of their services, so I just clear the notification. Would be nice of course to be able to permanently decline so they stop asking.

AT&T is not quite as bad at pestering me to upgrade my OS, though their pestering comes in waves; for a few weeks they may ask me once or twice a day to upgrade, then they stop asking for a month or more. I have the latest Android from AT&T on another Note 3(5.0 from 2016) and I see no reason to upgrade my main phone, especially because I'd lose a critical feature of being able to mute the phone from the lock screen using the power button(which in itself was a downgrade in abilities from previous WebOS phones), something I do on a regular basis as my phone is also a pager for my on call stuff. I've kept wifi on my main phone for the most part disabled for the past two years to prevent automatic upgrades (at least on this vintage of phone upgrades require a wifi connection). In the early days I would turn wifi on when I needed it then off again, forgot a couple times and caught the phone mid automatic upgrade(sneaky bastards), till I killed wifi and the upgrade aborted. Now(well past ~18 months at least) with unlimited data I don't really need wifi.

About to enter its 6th year of service, the only issue with my main Note 3 is that the gyroscope and light sensors(probably clustered together I assume) started failing maybe a year ago. I replace the battery once a year, and maybe new screen protector every 2 years. I have a 2nd Note 3 that gets pretty heavy usage as well (Wifi only unless I'm traveling overseas); I bought that maybe four years ago and it has no issues other than I prefer Android 4.4 to Android 5(tried downgrading it one time, couldn't get past KNOX).

The highest end Note 9 has me interested, though not in a rush at this time. I'd love an updated Note in the same form factor/design as the Note3 or Note 4(I have one of these too though rarely use it), but I guess that'll never happen.

In the two years since Dyn went dark, what have we learned? Not much, it appears

Nate Amsden

Re: Bind/Named

One use case at least is geographical performance. One company I was at that switched to Dyn (2008/2009 time frame) had a high performance requirement. They were using F5 Global Traffic Manager prior to Dyn, and it was hosted active-active out of two data centers, one on each coast of the U.S. Apparently their customers were complaining that DNS lookups were too slow; part of the (then, not sure if it still is true now) F5 DNS architecture when routing traffic to different geo locations was that it required an additional DNS lookup (I forget why), so going to www.mydomain.com resolved to one CNAME which then resolved to a 2nd CNAME, and then you got the IP of the geo source from there.

Dyn's setup removed one of those CNAME lookups and, combined with more geo diverse locations, allowed DNS query times to drop by maybe 20-30ms (maybe more, I forget now). The lower response times made their customers happy. Though I thought it was stupid just because nobody can tell that difference in performance ("but it shows up in their monitoring" was the response). Whatever.
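That chain is easy to picture with a toy resolver sketch (the names, records and IP below are all invented for illustration, not the actual F5/Dyn setup); each CNAME the client has to chase is another lookup before it finally gets an A record:

```python
# Toy record table and resolver illustrating the cost of a CNAME chain.
# All names and the IP are invented for illustration.
RECORDS = {
    "www.mydomain.com": ("CNAME", "gtm.mydomain.com"),
    "gtm.mydomain.com": ("CNAME", "us-west.gtm.mydomain.com"),
    "us-west.gtm.mydomain.com": ("A", "203.0.113.10"),
}

def resolve(name):
    """Follow CNAMEs to an A record, counting lookups along the way."""
    lookups = 0
    while True:
        rtype, value = RECORDS[name]
        lookups += 1
        if rtype == "A":
            return value, lookups
        name = value  # CNAME: chase the target with another lookup

ip, hops = resolve("www.mydomain.com")
print(ip, hops)  # three lookups for a two-CNAME chain
```

In practice a resolver can sometimes get the whole chain in a single response, but when each CNAME target lives in a different zone or with a different provider, each hop really is another round trip.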

Nate Amsden

Re: Redesign

Just look to IPv6 to see how well that approach has worked?

To me the issue people are raising WRT DNS is the centralization of DNS services: so many customers concentrated with so few providers.

I really don't see anything wrong with DNS as it is.

Certainly doesn't have to be that way, nothing in DNS prevents people from running their own DNS, though bigger companies are probably best off with a Dyn or Neustar to be able to absorb those DDoS attacks better. Obviously pretty much any internet provider has a DNS service available, and in many cases they may not even charge for it, since for the most part it doesn't cost much to run unless you need very regular updates and they don't have a UI to manage DNS.

I've run my own personal authoritative DNS since 1996 myself (still do today).
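For what it's worth, slaving a zone from a provider is a one-stanza job in BIND; a sketch with placeholder domain, transfer IP and file path:

```
zone "example.com" {
    type slave;
    masters { 192.0.2.53; };          // provider's zone transfer (AXFR) source
    file "secondaries/example.com.db"; // local copy, refreshed per the SOA timers
};
```

The slave keeps answering from its local copy between transfers, which is part of why longer TTLs plus a secondary ride out a provider outage so well.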

Nate Amsden

multiple CDN as well

As a dyn customer for about 9 years now(across 3 different companies), after the attack some providers tried to get me to go multi provider. To me it does make sense, but only really when it is also paired with a multi CDN deployment, and for whatever reason I don't see nearly as many people talking about that as about multi DNS deployments.

DNS also has made it easy to use multiple providers forever now with slaves and longer TTLs. Though if you're all up in their APIs and stuff then that may make it more difficult.

For the org I am with and probably many others this one outage was not nearly enough to switch off of Dyn(or to go dual provider). It's still the most reliable service I have ever used, and as a bonus their UI hasn't changed in as long as I can remember (probably in the 9 years as a customer). Which is refreshing for once to see something stable.

If only other cloud providers could tout 1 outage in 9 years (certainly is possible dyn has had more than that though nothing that has registered on my monitors long enough to be detected). They seem to be very proactive about alerting customers with https://www.dynstatus.com/ -- one of the first public status pages I can recall coming across.

All of the DDoS attacks that have affected me have always been collateral damage, either attacking Dyn in that one case, or in several other cases attacking an upstream ISP (which in itself has probably 8 different load balanced providers). The CDN we currently use hasn't had an attack big enough for us to notice but they aren't a big name player either. I have seen more than one article in the past about big outages due to attacks or something at Cloudflare for example (don't recall anything recently though).

Happy with your Surface Pro 3's battery? Well, here's a setting that will cut the charge by half

Nate Amsden

Re: I thought this was a solved problem

I have a Toshiba Tecra A11 from 2010 ($1878 before accessories+support), which has probably spent greater than 99% of its operating hours plugged in. I bought 2 extra batteries for it in the beginning, I don't recall how often I rotated them, the current battery has about 40-50% of capacity left. The other two batteries appear to be in good condition according to this tool I have called hwmonitor(https://cpuid.com/softwares/hwmonitor.html - mainly use it for looking at temperatures). Probably been 2+ years since either of them were used.

I only say this because I thought that was "normal", didn't expect to read about batteries dying(completely) from being plugged in too long, short of the occasional faulty battery I suppose.

Recently rebuilt the laptop to be a gaming system for older games. Decided to keep the bad battery in it since it's hooked to a UPS anyway.

Even on a good day when it was new, under Linux anyway and with the Nvidia graphics, a full charge wouldn't go much past 2 to 2.5 hours. I didn't buy it for good battery life though. One battery that is faulty on the Toshiba is the CMOS battery: it dies if the laptop stays unplugged for too long (a few days?). So even when it's turned off I keep it plugged in.

Currently using a Lenovo P50, it spends 95%+ of its hours stationary on my desk plugged into a fancy double conversion UPS. Battery life on this on a good day in Linux with Nvidia is maybe 3-3.5 hours.

I replace the batteries in my main phones about once a year (Galaxy Note 3s), they spend a lot of time on wireless chargers which probably impacts the battery life. After a year they still work fine though it seems they lose 20-30% of the capacity.

Chinese Super Micro 'spy chip' story gets even more strange as everyone doubles down

Nate Amsden

I believe bloomberg myself

Though I am obviously biased I suppose as I have had a small fear about this exact kind of thing since Lenovo bought IBM's Thinkpad line.

Fortunately I don't have anything of value that the Chinese would want. After being a die hard Thinkpad fan for many years when Lenovo bought them I swore off of them for 11 years - I used Toshiba in between. I am on Thinkpad again after I guess I accepted whatever could happen to Lenovo Thinkpad is just as likely to happen to Toshiba (that and Toshiba didn't have the hardware I was looking for at the time).

I've read conflicting comments on whether or not this kind of thing is possible, and to me, based on the history of other sorts of surveillance activities from other countries, I absolutely have to be on the side that it is probable this happened given the resources of a country like China. I'm just as likely to believe something similar could happen in the U.S. as well with NSA/CIA whomever. I also totally believe that the intelligence community is pissed off at the report for revealing that they knew what China was doing. They'd rather keep that secret so they can continue monitoring and quietly contain it.

I'm just hoping some day to see another Snowden-style leak of internal documents that say yes this did in fact happen, and those paranoid folks were right all along. Sort of reminds me of the early days of the reveals about the taps that the NSA had at AT&T facilities. As a AT&T data center customer at the time I joked with their staff about it, but really didn't surprise me, I continued as their customer until I moved to another job.

Some folks ask why didn't more places encounter this; well, the answer seems obvious: they targeted the attacks to lessen the likelihood of being detected, like any good APT.

Certainly sucks for Supermicro right now though I'd suspect the vast vast majority(99.99%) of their customers have nothing to worry about(as they are not juicy targets). I run (1) supermicro server myself in a colocation in the bay area. I was thinking about getting a new one as that one is 7 years old. This report does nothing to sway my opinion either way.

However I wouldn't be caught dead running supermicro in mission critical production (again, this report has absolutely nothing to do with that either, just based off of ~18 years off and on of using their hardware). I do realize of course some 3rd party appliances I have may very well have supermicro hardware on the inside, but at least those are managed by the vendor, as in I don't have to worry about diagnosing strange hardware faults or asking fortune tellers what changes are in the latest firmware, and don't have to worry about resetting all configurations to defaults when flashing said firmware(and the obvious negative implications of doing so from a remote location -- my critical servers are 2,400 miles away from my home).

To me at the end of the day this is hopefully a good thing in that it will raise awareness. I think it's totally possible for similar things to happen to other manufacturers as well, even the big guys like HP and Dell. The trend of racing towards the bottom on pricing really puts pressure on companies' ability to be extra vigilant.

Oracle? On my server? I must have been hacked! *Penny drops* Oh sh-

Nate Amsden

Re: bleh...

one of my co-workers likes to rant about Juniper (whenever the topic comes up). I don't use either Cisco or Juniper myself(Extreme networks customer for ~19 years now). Anyway his rant mainly revolved around this feature you mention, which I had heard touted by another friend a long time ago. It had to do with the JunOS software not giving errors when the configuration was incorrect/had syntax errors or whatever. It would go along like everything is fine, shit wouldn't be working but gave no indication as to why. He evaluated Juniper at his previous company (they were a Cisco shop) and it didn't get past the eval stage because of that I think; it drove them crazy.

SUSE punts SES v5.5 out door, says storage is going software-defined and open source

Nate Amsden

how is this different from red hat?

Maybe pricing?

Red hat seems to offer the same sort of thing: https://www.redhat.com/en/technologies/storage/ceph

Given Red hat bought the company that created Ceph (https://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph), the article is not clear on why someone would want to choose this platform over another based on Ceph.

But I guess the article is not alone; looking at the SuSE Ceph site, it makes no attempt to say how or why they may be better than Red hat (or any other Ceph-based solution), it only compares itself to non-Ceph.

Obviously I'd expect much of the underlying core code to be the same across systems but they can differentiate with their pricing and their monitoring/management tools(vs other Ceph solutions); if they do differentiate, they don't do a good job of communicating it.

Of course if you are already a SuSE Linux customer then it would make sense to use them if you were interested in a Ceph storage option.

(Not a Red hat nor SuSE customer, never used Ceph)

I was a brief SuSE customer I guess you could say, maybe 13 to 15 years ago; bought several versions of their desktop linux distro at the time. It was pretty slick, though my sister used it more than me. I kept to Debian, until at some point switching to Ubuntu on desktop/laptop(only) and now Mint(MATE). I remember one time looking at my sister's computer with SuSE and she had installed Yahoo messenger on it, the windows version. I was shocked that wine worked so well she could do it herself without asking me(she knew very little about computers).

IBM won't grow, says analyst firm while eyeing flatlining share price

Nate Amsden

Re: Good News

am confused - why would anyone care to run that architecture on a desktop or laptop? I think even PowerPC, which should be much easier to manage, doesn't appear to have had a laptop(mac of course I think was the only PPC laptop) or desktop version for many years now.

if anything I'd expect an emulator to be made(looks like there is at least one already - https://en.wikipedia.org/wiki/Hercules_%28emulator%29 ) for those that need to develop/test software on x86 systems.

You dirty DRAC: IT bods uncover Dell server firmware security slip

Nate Amsden

used to be a time

Where people shunned the idea of dealing with signed code with hardware as it limited your ability to mess with it (one big example which was more of a licensing thing than a signing thing was Tivo).

I also remember in several communities years ago the whole concept of TPM was quite scary (I include myself in the list of people that feared TPM - and I still don't like it; fortunately in most cases it can be kept disabled without any issue(AFAIK I've never had a system with it enabled)), though I think perhaps in some products like MS Surface (guessing) it may be forced enabled.

Nowadays it seems sad that everything that allows you to install unsigned code, which previously was a good thing, is now an evil thing because it's not "secure".

sad.

Reanimated Violin returns to scene with flashy XVS 8 array, and, er, AR app

Nate Amsden

Re: Really?

back in 10? For me it would be having to book a flight to the other side of the country(~4-5 hrs each way air time alone, at least a 90min drive to the airport with no traffic). I typically make two such trips to our main data center per year, staying for 10 days on each trip. I do wish it was closer..

I too was quite confused when I saw mention of scanning a QR code to get info.

Cloudflare ties Workers to distributed data storage

Nate Amsden

Re: It's not the sort of thing you'd want handling millions of rapid-fire financial transactions

I suppose you could take that question and ask folks that run things like redis, or memcache, or similar types of key value store systems.

The most popular use case that I see anyway is for caching and holding session data. Given the distributed nature of this feature it seems like it could do a lot for performance of caching more complex things closer to the client in a simpler fashion.
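That session-caching pattern can be sketched minimally with a plain dict standing in for memcache/redis (the class and names here are invented for illustration, not any particular client library):

```python
import time

class SessionCache:
    """Tiny TTL key-value cache standing in for memcache/redis."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self.store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self.store[key]  # lazy expiry on read
            return None
        return value

cache = SessionCache(ttl_seconds=300)
cache.set("session:abc123", {"user_id": 42})
print(cache.get("session:abc123"))  # {'user_id': 42}
```

The lazy expire-on-read is roughly how memcache-style TTLs behave; real deployments add eviction under memory pressure and, of course, network round trips.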

I'm not a developer and have no use for this myself, and am not, and never have been a cloudflare customer. I have worked with many apps that use memcache and redis over the years though (usually in addition to a database of some kind).

One developer I knew(but didn't like, and he didn't like me either) years ago thought he was smart and thought he needed just a couple of weeks to code a database caching layer in memcache to replace the mysql query cache. When I heard that I just laughed. He wasn't at the company much longer so never got to see what he might have come up with. (yes I know the mysql query cache is frowned upon, at the time it was required for our core application)

Heimdall is a MySQL database load balancer, analytics tool, and cache accelerator, something I have been using off and on for the past year or two. As an example they generally suggest using redis for (their) cache, though they can use Hazelcast and maybe one other tech too. Just mentioned them as an example of caching with MySQL anyway.

But it seems strange to me for someone to compare a key value store to a traditional database.

Perfect timing for a two-bank TITSUP: Totally Inexcusable They've Stuffed Up Payday

Nate Amsden

Make sure those branches aren't dependant upon the same online services that keep failing.

TLS proxies? Nah. Truthfully Less Secure 'n' poxy, say Canadian infosec researchers

Nate Amsden

Re: lesser threat

Not quite the same I think. Well it could be I'm not sure how browsers behave in the background when this happens. What I'm referring to is the big warning dialog that pops up saying "this cert is not trusted", and says why. Then you can override the connection if you wish (unless the site uses HSTS or whatever that thing is called) and continue connecting. I don't expect browsers to submit data until that exception is granted but they might, I haven't checked myself.

I recall back to 2004 or so time frame the company I was at had tons of SSL certs, so many that we had a special portal to Verisign's site where I could issue certs without ordering them each time and they would invoice us(something like $90,000 a year in certs). It was also my first (and probably only) experience using client side SSL certs for authentication to a website.

Anyway in one case we had a cert error that I saw, and one of the support folks wasn't seeing it. He wasn't the smartest guy in the company but he was a good support person. But he was conditioned, I guess you could say, to just click past SSL errors (in this case I think it was IE with a pop up dialog box, one click to bypass the error). I went to his desk and was talking him through the process to get to the error. The error popped up and he instinctively clicked the "continue" (or whatever it was called) button; the error didn't even register to him. I laughed and said STOP, the error was RIGHT THERE. Went back again and he realized it at that point.

So certainly people can be conditioned to go past the errors, but as long as "untrusted" certs can be allowed in browsers (and if browsers some day decide to stop that I'll just get off the internet entirely perhaps), the risk of an untrusted cert intercepting data is far greater than that of MITM decryption of data because of weak(er) encryption.

But at the end of the day the whole SSL CA stuff is flawed security wise anyway, since the list of CAs that are trusted seems to go on forever and there doesn't seem to be good enough control over how certs are issued. Obviously there's been several incidents over the years where certs were issued to the "wrong" people for big domains..

But go beyond browsers, think of all of the server side applications that use SSL, I'm talking server to server communications whether it is API endpoints, email services, and other proprietary protocols that use SSL. Maintenance of SSL versions and stuff is honestly what I'd call black magic in many cases. Something as simple as the ordering of the ciphers can throw everything off.

A few months ago I upgraded some of our internal systems and when we hit production a critical external endpoint was simply failing. It worked fine prior to the upgrade, but not after. It was working in test only because they had configured it to use http. In test https would fail because the vendor's cert expired years ago so it failed validation. In production http was not allowed(on their end). After some investigation I determined they were using ciphers on their site that were now considered very insecure and OpenSSL (or gnuTLS, I forget which) refused to connect to the site(no matter what). Strangely enough, whichever of OpenSSL or gnuTLS refused to connect, the other one worked fine (I forget which worked and which did not). I ran an SSL Labs diagnostic on the site and it reported a grade of "F". Ended up building an older OS system for that API call until the vendor could fix their stuff.
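For the OpenSSL side of that black magic, Python's ssl module is a handy way to poke at cipher strings locally, without touching the network (the bogus cipher name below is deliberate):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# What the local OpenSSL build will actually offer, in priority order.
names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), names[:3])

# A cipher string that matches nothing fails immediately -- handy for
# sanity-checking a proposed cipher list before deploying it.
try:
    ctx.set_ciphers("NO-SUCH-CIPHER")
except ssl.SSLError as err:
    print("rejected:", err)
```

Of course this only tells you what your side supports; for what the remote end negotiates you still need something like the SSL Labs test mentioned below.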

Fortunately for HTTPS based sites there is ssllabs testing site, without that I don't know what I'd do myself.

As for BEAST, I don't recall the details of it much, but I do recall putting an easy workaround in on my Netscaler load balancers a few years ago, back when we were prevented from upgrading the code on the load balancers to something that supported newer than TLS 1.0 due to an unrelated bug in the platform which took a good 2 years to get a resolution on.

The whole dumbing down of the internet is quite annoying to me. Present the user with choices and let them choose what they want to do (I have no problem with default choices, just let them override those if they desire). Browser vendors, in particular Chrome and Firefox, have been absolutely, positively terrible in this regard (I say this running the Pale Moon browser; I clung to Firefox for as long as I could).

Nate Amsden

lesser threat

I don't use these appliances (I haven't done corp IT work since 2002) but I can certainly understand security folks seeing whatever vulnerabilities the appliances have as a lesser threat than letting people connect externally to things they can't inspect. I'd wager many companies would even be fine with SSL termination on the proxies and just plain http internally (that is quite common in application load balancing setups anyway, something I have been doing for about 15 years now).

Now it'd certainly be good if these vulnerabilities were fixed -- though I don't agree with disabling support for older protocols without a graceful failure mode of some kind. It drives me insane that browsers and people push to completely disable stuff without any sort of graceful failure mode. I've been saying for years now: treat those sites as if they were using self-signed certs -- provide a warning, and a way for the user to continue past the warning if they deem the risk acceptable, or if they just don't care. The same level of threat exists with self-signed certs as with weak(er) encryption. Well, I guess technically non-trusted certs are much easier to deploy and so a much greater risk than weaker encryption.

But as it stands the things in the article seem to be far less of a threat than if the companies removed the appliances altogether as an example.

I had discussions with one guy who is really good at security, who at his previous company didn't allow any outbound communications from the servers to the internet unless they went through a proxy. Which on paper sounds fine, but unless you're doing SSL interception on that proxy there's still a very wide open door for stuff to get out that you can't see (in the earlier days at his company it was long before widespread https adoption). With more and more external services dependent upon large crappy clouds that have wide swaths of IP addresses that change often (without notice), it's not practical to try to lock down communications by IP (at least to ones based in those clouds), and often even more difficult to determine what is on a given IP.

Recently I had to diagnose a network issue in this situation, and fortunately the remote IPs were serving regular https and their SSL certs were specific enough to let me identify the organization (a provider the company I am with does business with, so I recognized it) running the service on those IPs.

Guys, geez... finally 5Gs: AT&T grows super-fast mobile net city rollout

Nate Amsden

waste of time

5G sounds like it may be useful for things like fixed wireless communications. AT&T struggles to get their 4G LTE stuff working most of the time. I can recall two situations in the past 5 years where I got above 20Mbit on LTE. Most of the time it is below 5Mbit. One was in a San Jose hotel (another time I was at another San Jose hotel and the LTE was sub 1Mbit). The other time was at a Las Vegas convention center where they obviously had LTE repeaters or whatever you call them in the room.

Too bad fixing coverage doesn't sound flashy like 5G.

It's not my phone either: I have tried at least 3 different phones side by side and the coverage is quite similar. I have seen many times where I have "good" LTE signal strength (as measured by an app that looks at the numbers), but not enough bandwidth to resolve any DNS entries.

(AT&T customer since about 2010 or so; I switched from Sprint in order to use Palm/HP Pre GSM phones at the time, and currently have Galaxy Note 3s and a Sony XZ1 on their network.) When I was on Sprint it was of course far worse at the time anyway: their WiMAX 4G was slower than their 3G (I had a Sprint MiFi hotspot at the time, and despite unlimited 4G WiMAX it was so slow I configured the device to stay on 3G, even though 3G was no longer unlimited). Changing to (then) AT&T's HSPA+ was easily 4-6X faster than Sprint. I'm sure Sprint has improved a bit since that time; bad performance wasn't the only reason I left 'em.

Windows Server 2019 Essentials incoming – but cheapo product's days are numbered

Nate Amsden

Re: Is Cloud computing Smart Meters for IT ?

"Demand management" has been an issue with cloud since day 1. With the crappier public IaaS clouds such as MS, Amazon and Google it's even worse, as you can't provision into pools of resources (as in being able to oversubscribe CPU/memory/disk -- which is itself a form of demand management, and can simplify things quite a bit depending on your workload). Conversely, if you want much better resource utilization then you have to pick a cloud provider that offers that, though the costs typically go up even more in that situation.

SaaS clouds in theory should abstract that aspect of management away if managed correctly, but as big SaaS clouds like Google and MS have shown time and time again it's far from a mature process.

Who needs quality when you can just slowly numb your customers into a lower level of service without them realizing it.

Web cache poisoning just got real: How to fling evil code at victims

Nate Amsden

Re: So non-core services offered by a SaaS supplier likely to be less secure thatn core

Sort of ironically, in many cases a website is not an IP but a combination of IP + host name. As a side effect rather than "by design", I credit name-based virtual hosting on my load balancers with protecting many of the websites from casual drive-by scanners. There may be dozens of different websites behind an IP (and in some cases those sites are meant for "internal" use and have no externally published DNS), but without specifying the right Host: header you will hit at most one of them (per IP).
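The effect is easy to model. A toy Python sketch of the Host: header dispatch (hostnames invented for illustration):

```python
# Toy model of name-based virtual hosting: one IP, many sites, and the
# Host: header decides which one you reach.
vhosts = {
    "www.example.com": "public site",
    "intranet.example.com": "internal site with no external DNS",
}
DEFAULT = "default site (all a bare-IP scan ever sees)"

def route(host_header: str) -> str:
    # Normalize case and strip any :port suffix, as a server would.
    name = host_header.lower().split(":")[0]
    return vhosts.get(name, DEFAULT)

print(route("WWW.EXAMPLE.COM:443"))  # lands on the named site
print(route("203.0.113.10"))         # a scanner hitting the raw IP
```

A scanner that only knows the IP sends the IP (or nothing useful) in the Host: header, so it only ever sees the default site, and the other sites behind that IP stay effectively invisible.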

Bitcoin backer sues AT&T for $240m over stolen cryptocurrency

Nate Amsden

Re: So much for the "what you have" 2nd factor...

You seem to be focusing too much on hacking your account via their website because you use fancy 2FA, and not on hacking the account via social engineering through the phone lines or in person.

For example, I have a pass code on one of my bank accounts; if I call in they are supposed to ask me for the pass code before they can do anything. Though there is a way around that pass code if you provide enough personal information to verify you are who you say you are. I imagine without this the companies would be bombarded with complaints from users who forget their shit. It can be a tough balancing act.

In the case of a bank account, or even a bitcoin site with millions of dollars of your own funds.. I have to believe there are ways around fancy 2FA in the event such tokens are lost/stolen/something. I mean I can't imagine an organization saying "sorry we can't authorize you because you lost your token(s) so your $24M gone forever".

Had this money been stolen from a FDIC insured account (knowing there are limits to the $/account that are insured) would FDIC and/or the bank cover the losses (at least to the limit of the insured value)? Or is FDIC only used for things like in person bank robberies?

Google bod wants cookies to crumble and be remade into something more secure

Nate Amsden

Re: Zero understanding of cookies

How are cookies not stored on the server side? Any cookie associated with a site is transmitted to the site, and the site can store that data if it wishes (though it probably already has that data in other forms, e.g. session info, items in your shopping cart). Back when I worked for an ad targeting company many years ago we collected probably 40TB of log data per day, most of it cookie stuff from the tracking pixels.

It's pretty trivial to configure most web servers to log the contents of the cookies.

Of course I could be misunderstanding what you are saying as well.
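On the "trivial to log" point, here's an nginx sketch (the `$http_cookie` variable is real; the format line and file path are just illustrative):

```nginx
# $http_cookie exposes the raw Cookie: request header, so one custom
# log_format captures every cookie alongside the normal access log.
log_format with_cookies '$remote_addr [$time_local] "$request" '
                        '$status "$http_cookie"';
access_log /var/log/nginx/access.log with_cookies;
```

Apache can do the equivalent with a `%{Cookie}i` token in its LogFormat.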

Oracle: Run, don't walk, to patch this critical Database takeover bug

Nate Amsden

could disable java

Though it is enabled by default, I remember disabling it on my last installs, since the app that I use Oracle with (vCenter) doesn't need it.

SQL> select comp_name from dba_registry;

COMP_NAME
--------------------------------------------------------------------------------
Oracle Enterprise Manager
Oracle XML Database
Oracle Text
Oracle Workspace Manager
Oracle Database Catalog Views
Oracle Database Packages and Types

6 rows selected.

You can probably do it on the fly (as in, without having to reinstall) as well, assuming you don't need it:

http://fast-dba.blogspot.com/2014/04/how-to-remove-unwanted-components-from.html
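If you just want to check whether the JVM component is present before worrying about this bug, a query against the same dba_registry view should do it (the LIKE patterns are a guess at the component naming; adjust as needed):

```sql
-- If this returns no rows, the Java VM component isn't installed and
-- this particular attack surface isn't there.
SELECT comp_name, status
  FROM dba_registry
 WHERE comp_name LIKE '%JAVA%'
    OR comp_name LIKE '%JServer%';
```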

Phased out: IT architect plugs hole in clean-freak admin's wiring design

Nate Amsden

I deployed some new PDUs recently (as in 0U rackmount PDUs, not large-scale datacenter PDUs). They are pretty neat as they alternate the phase on every outlet (and the outlets are color coded). Pretty convenient, though the locations of the outlets could use some improvement: presumably because of the extra hardware for the alternating-outlet arrangement, the 208V 30A 3-phase model has 36 outlets but a good two and a half feet of the bottom of the PDU with no outlets at all.

Extreme Networks? Extreme Share Price Crash, more like

Nate Amsden

loyal customer for ~19 years

Been a small but loyal customer of Extreme's for about 19 years now (always felt a bit bad seeing Extreme hardly ever mentioned on El Reg, though more so now since they acquired the Brocade data center assets for whatever reason), though technically I guess they wouldn't count me as a paying customer until about 2004 -- purchases before that were made off of eBay. Not super excited myself about their recent acquisitions; I did like the "one platform" (XOS) story (much like HP 3PAR's one-platform small-to-big story) that they had before that (my first XOS box was a BlackDiamond 10k in 2004). I went to a conference of theirs earlier this year and didn't like what I heard, mostly "ignore what we've been saying for the past 8+ years, we like this new stuff better". Though I had been ignoring what they had been saying for the past 8+ years anyway, since it was about M-LAG, which sounds nice though I never had a real interest in that approach either.

But whatever, as long as XOS keeps chugging along. There hasn't been a feature they have added that I've cared about in a long, long time; my network architectures haven't changed since 2005 (ESRP + virtual routers), so as long as that stuff keeps going -- hopefully stable, since it is mature -- I'll be happy and ignore the other messaging about their new Brocade data center kit (never been a fan of TRILL anyway myself). Also not sold on FCoE either; am happy with isolated Fibre Channel networks for storage.

People have been telling me Extreme is going out of business since at least 2004 (back then it was Foundry Networks who was spouting that, ironically enough look where Foundry is now), then when Extreme turned down the Juniper acquisition, then several other things along the way.. somehow they manage to keep going though.

The pitch is they will be able to better grow these multiple businesses now that they are a billion $ company, rather than the several smaller ones that struggled before. Seems like a tall order, I hope they didn't take on too much debt and other bad things as a result of this stuff.

Their new security and intelligence story really reminded me of the same story they told back in 2005 with the BlackDiamond 10k (with FPGAs I assume, though they called them "programmable ASICs" at the time) and their Sentriant technology along with ClearFlow. Of course things are better technology-wise now, but I found it amusing that all of the stuff they were touting at this conference I literally heard them tout 13 years ago. It sounds cool for sure; my org's got no budget for that stuff though, so it doesn't matter anyway.

I tell people: if you want to make a career out of networking then go the Cisco/Juniper route (much more complicated), but if networking is only one of the things you do, choose something else. Other than ESRP, the ease of use of the Extreme platform is what has kept me happy. Certainly have had issues here and there over the years, but at least not the constant headaches of dealing with a Cisco (or Cisco-like) CLI. Extreme believes CLIs will go away entirely in favor of fancy SDN. Maybe they will some day; companies have been promising such things for a long time and so far it hasn't happened. Not holding my breath.

For all the excitement, Pie may be Android's most minimal makeover yet – thankfully

Nate Amsden

can you get updates only yet?

I'm still on my first Android phone, which is a Galaxy Note 3 running Android 4.4 (I refuse to let it upgrade to 5); before that I was on webOS. I have a Note 4 (Android 5) as well (and another Note 3 on Android 5), which I'm typing this on (wifi only).

My question is: can you opt for JUST security updates in modern Android, with no feature upgrades? I see the July 2018 security bulletin still supports Android 6, so the patches are there for older OSes. I haven't noticed a single headline feature addition to Android since 5 came out (including 5) that looked interesting to me, only annoying UI changes. The most frustrating of those, other than the Material design, is the removal of the mute option from the pop-up menu on pressing the power button, which happened in Android 5. I use this feature constantly. It would be even better if there was a physical switch to mute like I had on webOS, or the ability to mute the ringtone instantly with a quick tap of the power button.

I'm assuming not, but curious anyway.

Sitting pretty in IPv4 land? Look, you're gonna have to talk to IPv6 at some stage

Nate Amsden

Re: Overly Gloom and Doom 90's Predictions

So a better solution than "breaking" a few things with NAT is to break *everything* with IPv6, right? (Because back then, what really supported IPv6?) Then everyone can be forced to update everything because everything is broken, and everyone will be happy. Yeah, I can see why that didn't happen.

I've been doing networking stuff since the late 90s(not really my primary role), these days load balancers, firewalls, vpn, layer 3 switching, though no dynamic routing protocols etc, and even I have zero interest in ipv6(along with a lot of others I'm sure). In fact I don't recall ever even having a conversation/chat with anyone outside of toy(home tunnel) deployments who was excited about IPv6.

I go out of my way where I can to disable IPv6 on systems because it can still cause issues (mainly when there is no IPv6 network). One example that came up again recently is that BIND by default will query IPv6 name servers unless IPv6 is explicitly disabled on the service itself (having it disabled at the operating system level is not sufficient), which results in many query timeouts.
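On Debian-ish systems that fix is a single flag; a sketch (the defaults file location varies by release):

```shell
# /etc/default/named (or /etc/default/bind9 on older releases).
# The -4 flag tells named to use IPv4 transport only, which stops the
# doomed queries to IPv6 name servers and the resulting timeouts.
OPTIONS="-u bind -4"
```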

I do remember being "excited" I suppose that the big core switches I purchased in 2004 supported IPv6 in hardware, though other than a bullet point on a spec sheet my interest stopped there.

IPv6, much like SDN still seems to me firmly only beneficial in the service provider/large enterprise space at this time. For most folks I think running out of IPs isn't a critical issue.

It was much more of an issue back before SNI -- I was at one company about 13 years ago where we had a couple hundred SSL certs (many different domains too) that had to be exposed externally, so of course each required its own IP. Getting those IPs wasn't difficult at the time, but these days such a setup could easily be consolidated even as far down as a single IP address with SNI.
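The consolidation boils down to a hostname-to-certificate lookup at handshake time. A toy Python sketch (hostnames and paths made up):

```python
# One IP, many certs: the client's SNI extension carries the hostname
# before the handshake completes, so the server can pick a certificate
# per site instead of burning a dedicated IP per cert.
CERTS = {
    "shop.example.com": "/etc/ssl/shop.pem",
    "blog.example.net": "/etc/ssl/blog.pem",
}
DEFAULT_CERT = "/etc/ssl/default.pem"

def pick_cert(sni_hostname: str) -> str:
    # Clients that send no SNI at all (the pre-SNI world) fall back to
    # a single default cert -- exactly the limitation that forced one
    # IP per cert 13 years ago.
    return CERTS.get(sni_hostname, DEFAULT_CERT)
```

In a real server this lookup lives in the `sni_callback` hook on an `ssl.SSLContext` (or in per-server-name config blocks in nginx/haproxy); the lookup itself is essentially all there is to it.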

For the stuff I do (managing production e-commerce infrastructure), if the time comes where we NEED inbound IPv6 then my strategy would be as the article suggests - though I would just have our CDN do the conversion for us. If the time comes where we NEED outbound IPv6 for something then I imagine my strategy would be to do IPv4->v6 NAT (never looked into it before). Though if either of those situations appear in the next 5 years I'll be quite shocked.

'Prodigy' chip moonshot gets hand from Arm CPU guru Prof Steve Furber

Nate Amsden

reminds me of itanium

"[..]Power efficiencies are gained by moving out-of-order execution capability to software, Danilak said. “All the register rename, checkpointing, seeking, retiring, which is consuming majority of the power, is basically gone, replaced with simple hardware. All the smartness of out-of-order execution was put to compiler."

and then from wikipedia

https://en.wikipedia.org/wiki/Itanium

"[..]With EPIC, the compiler determines in advance which instructions can be executed at the same time, so the microprocessor simply executes the instructions and does not need elaborate mechanisms to determine which instructions to execute in parallel. The goal of this approach is twofold: to enable deeper inspection of the code at compile time to identify additional opportunities for parallel execution, and to simplify processor design and reduce energy consumption by eliminating the need for runtime scheduling circuitry. "

How to (slowly) steal secrets over the network from chip security holes: NetSpectre summoned

Nate Amsden

Re: Yup

if a nation state is after you this is the least of your worries.

Windows 10 IoT Core Services unleashed to public preview

Nate Amsden

Re: February 30th?

When you want 10 years of support? When you don't want systemd?

(Debian user since 1998 -- wondering when support for Debian 7 will run out and I will have to do another round of upgrades, probably going to Devuan)

Xen 4.11 debuts new ‘PVH’ guest type, for the sake of security

Nate Amsden

High availability is nice, though not really required for a bunch of workloads. I mean, I ran vSphere from about 2006 to about 2010 with nothing other than standard edition (no HA, no vMotion, nothing). Ran everything from Oracle DB servers to web and app servers, etc. (all of the VMs, if I recall right, were Linux). For the first couple of years (first company) we didn't even have vCenter. For a while in 2009 at least I was able to buy vSphere Essentials packs and get the hosts managed by vCenter standard edition (VMware closed that license hole a couple years later).

I did have SAN storage though so if I needed to move a VM to another host I could do it(VM had to be powered off of course).

Back in 2008, before I left that one company, my (new) manager at the time wanted to switch to Xen. He didn't even like paying the lowball standard vSphere pricing we were on -- I think it was $3k for a 2-socket server for standard edition (excluding support, I think). We got into a big argument about it at one point. After I left the company he directed my remaining teammates to start working on Xen (on CentOS 5.x, I think, at the time; CentOS 4.x/5.x, RHEL and Fedora were what we used as guest OSes). They spent about a month trying to get it to work, then gave up and went back to VMware. The core issue they were having at the time was the need to run both 32- and 64-bit CentOS guests, and one of those (the 32-bit one, I assume -- it was a long time ago) simply wouldn't even boot (mailing lists etc. provided no solution). I didn't talk to that manager again for years, but we have since made up; he apologized, which was nice, and admitted that (at least at the time) Xen sucked and VMware was better.

For the past 7 years or so at the current org everything is enterprise+, though I think the only real features of e+ that I use are VDS, DRS and host profiles. I'm probably a mix of a customer vmware would love and hate -- been using their stuff for 19 years now, very loyal customer(because of consistently good experiences) but at the same time not excited about any of their stuff other than the basics. vSphere 4.0 was the last product I was super excited about.

AAAAAAAAAA! You'll scream when you see how easy it is to pwn unpatched HPE servers

Nate Amsden

Never having used a MicroServer I am not sure if its iLO capabilities are the same, but on ProLiant DL systems anyway you can configure iLO to use either the dedicated port or to share an onboard NIC. The default is dedicated.

Another data-leaking Spectre CPU flaw among Intel's dirty dozen of security bug alerts today

Nate Amsden

Re: So what? CPU Errata exist since the first products hit the market...

I think it's mostly an excuse to get page views. There are legit situations where these bugs can be considered dangerous(much more so if you are in an organization that is a tempting target) but those are pretty few and far between vs the more common security exploits as the article notes.

The page views thing isn't specific to Intel though; it applies to many of these recent security issues, where people are making up code names and dedicated websites for them, or in the case of AMD trying to manipulate the stock price. So far overblown.

I don't believe AMD is spinning this at all myself, but certainly vocal AMD fans are trying (to no avail from what I can see -- don't get me wrong, I do like AMD; I was a pretty hard-core fan of theirs for Opteron 6000, but then they burned many bridges with those server chips, and Epyc isn't yet enough to get me excited again, mainly on power usage).

I'll change my tune if these intel bugs provide a way to crash the processor(I keep thinking back to the f00f bug).

Doing some searching, it seems there may be such a bug coming soon:

https://en.wikipedia.org/wiki/Halt_and_Catch_Fire#Intel_x86

HPE primes storage networking pipes for NVMe-oF data deluge

Nate Amsden

more than HPC

The quote from the network expert seems specific to HPC, which I think traditionally hasn't used FC anyway; obviously enterprises use it more.

And small department SANs? Shit, I think most small department SANs could get by on 4Gbps FC. I know my mix of 4/8/16Gbps (16Gbps on the switch side only, I have no 16G HBAs) still has years of capacity built into it even at 8Gbps speeds. Of course I have no fancy NVMe stuff, just regular flash storage. Even for my newest servers I have no need to go beyond 8Gbps.

Time to dump dual-stack networks and get on the IPv6 train – with LW4o6

Nate Amsden

Re: So just like the network my phone uses?

My phone is using CGN on AT&T. According to Android (4.4) my IP is 10.146.31.141. I have had wifi disabled for the past couple of years so they can't upgrade my phone.

Perhaps AT&T has an IPv6 network for mobile too, not sure. Checking my wife's Android 8 phone, it is on the same IPv4 CGN that I am on, so clearly it's not device specific.

With CGN I have never had an issue connecting to anything. Then again, I have never needed to connect into my phone remotely either. So in a nutshell CGN works fine; no need for IPv6, for me anyway.
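That address is easy to classify programmatically; a quick Python check (address copied from above; RFC 6598 is the dedicated carrier-grade NAT range some carriers use instead of plain 10/8):

```python
import ipaddress

# The phone's reported address. RFC 1918 space is not globally
# routable, which is exactly why nothing can connect inbound.
addr = ipaddress.ip_address("10.146.31.141")
print(addr.is_private)

# The dedicated shared-address-space block for CGN is 100.64.0.0/10
# (RFC 6598); this carrier is evidently using 10/8 instead.
print(addr in ipaddress.ip_network("100.64.0.0/10"))
```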

Tintri terminates 200 staff, cash set to run dry in a couple of days

Nate Amsden

Re: Soo...

There was a report -- here on el reg, I think -- not long after the IPO that the company may have stiffed the top sales folks, which caused some sort of sales exodus, and it spiraled down from there.

Their tech always seemed pretty cool to me; I had several discussions with them over the years, though not enough to get me away from 3PAR. In my last talk with them, I think earlier this year, I was surprised that they still didn't have the ability to reclaim deleted space from VMs and such (3PAR calls it thin reclamation, introduced in 2010). I was also hoping Tintri would add more generic NFS support for file serving purposes (on top of the other stuff), but they never got round to doing that as far as I know.

Their per-VM approach, which VMware tried to clone with VVols, did seem pretty cool, though at the end of the day it's not a problem that has ever really affected my workloads. As someone with access to all aspects of the infrastructure, I could trace down and kill things pretty easily if they were causing problems. 3PAR has had VVols for years now, though I've yet to use them; at this point I'll probably play around with them next year.

Micron: Hot DRAM, we're still shifting piles of kit, but somebody's missing our XPoint

Nate Amsden

Re: MLC and TLC are crap already.. QLC will be the worst really...

I assume you have a pretty extreme use case. The oldest SSDs (HPE 3PAR) at the organization I am at are from Feb 2014. Their stats claim they have 95% of their life left, with zero failures. Roughly 1000 VMs run on top of that array.

In general, enterprise SSDs are so reliable these days that many vendors are offering 3-5 year (or perhaps longer) unconditional warranties (MLC with any access pattern) on them. Back in the ~2012 time frame HP came out and said that less than 5% of all deployed SSDs (on 3PAR anyway) had failed.

On the consumer side things are a bit murkier, with many brands and models seemingly being quite crappy. I steered clear of consumer SSD until the Samsung 850 Pro came out, have deployed Samsung 8xx and 9xx Pro/EVO across my own personal lineup of stuff(my main laptop has 1 850 pro and two 860 pros), with no issues.

I have one Intel SSD (the one with the skull on it); didn't have anywhere else to put it so I tossed it into my PS4. Runs fine, though I can't say I see any significant speed bump, at least with the games I was playing at the time, GTA 5 and Fallout 4 (about the only games I have played on PS4). (Revised comment to reflect the right year.)

So my somewhat limited experience over the past ~4-5 years says SSDs (all mine are MLC) are quite reliable, but there are certainly crappy ones out there like anything else.

SSDs aren't cheap though of course, last I saw(here on el reg) the raw cost/TB was still around 10X more than 7200 RPM(industry average numbers).

PayPal reminds users: TLS 1.2 and HTTP/1.1 are no longer optional

Nate Amsden

TLS 1.1 is fine for PCI ?

Having been going through PCI audits for a few years now, unless something changed very recently TLS 1.1 is still perfectly fine for PCI. I did a few web searches and could not find anything mentioning TLS 1.1, only a dislike for 1.0 (though again I have yet to see any serious issues with TLS 1.0 itself; I have seen people point to specific weaknesses here and there, but all that I have seen were easily mitigated while maintaining TLS 1.0 -- I did so myself on my org's load balancers 2-3 years ago, back when we could not upgrade past TLS 1.0, an issue that has since been resolved; 1.1 and 1.2 are enabled these days and 1.0 disabled where possible/required).

Using the SSL Labs test site is always real handy for validating configuration; it's so easy to misconfigure an SSL setup, and even the ordering of the ciphers is important. I've yet to personally know anyone who knows SSL well enough to configure that kind of thing on their own. For my Citrix NetScalers I think I used this guide (https://www.antonvanpelt.com/make-your-netscaler-ssl-vips-more-secure-updated/), or something that looked real similar.

Had an issue not long ago where we upgraded some of our Linux systems and one of them had to connect to a 3rd party service. The upgraded OpenSSL refused to connect to the 3rd party after the OS upgrade (with no obvious way to force it to connect). Ran an SSL Labs test on the site and it had a rating of "F" at the time. The vendor fixed their site after a few weeks; in the meantime we ran that job on an older OS. I believe that was a situation where I tested both wget and curl against the site: I think wget refused to connect but curl was still willing to talk to it (maybe because curl was using GnuTLS while wget, I think, was linked against OpenSSL only).

SSL-level logging is also terrible across the board in my experience. It is very difficult to tell which protocols and ciphers are actually being used (and by whom). Developers I have worked with over the past 18 years are just as lost when it comes to SSL.
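nginx at least can surface the negotiated parameters in its access log, which turns "who is still on TLS 1.0" into a grep. A sketch (the variables are real nginx ones; the format and path are illustrative):

```nginx
# $ssl_protocol and $ssl_cipher are filled in per connection, so the
# access log records exactly what each client negotiated.
log_format tls '$remote_addr "$request" $ssl_protocol $ssl_cipher';
access_log /var/log/nginx/ssl-access.log tls;
```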

Microsoft: Blobs can be WORMs in the new, regs-compliant Azure

Nate Amsden

not as secure as optical media?

https://en.wikipedia.org/wiki/Write_once_read_many

"Write once read many (WORM) describes a data storage device in which information, once written, cannot be modified. This write protection affords the assurance that the data cannot be tampered with once it is written to the device."

Last I looked at Amazon's offering (a month or three ago), theirs was not really WORM. It sounds like Azure is going the same route, where the WORM aspect is just a policy: adjust the policy and you can then write to the data again? (I didn't see any indication that this was not easily achievable by an admin.)

On the SAN side I know 3PAR has a feature called Virtual Lock which has a better approach:

"Virtual Lock Software gives users the ability to protect data volumes and volume copies from intentional or unintentional deletions. During the user-specified retention period, volumes and copies can be read but are protected against deletion, even by an administrator with the highest level user privileges."

(if you wanted to protect against any writes to the data you would create a read only snapshot and lock that)

The emphasis is on the fact that the admin cannot change the policy once set: if you lock the data for 2 years, it is set for 2 years. Perhaps that's more difficult to achieve from a service provider perspective, where you may be paying per month for the service, I don't know.
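The distinction is between a mutable policy and a set-once lock. A toy Python model of the latter (purely illustrative, not any vendor's API):

```python
import datetime

class LockedSnapshot:
    """Toy model of a hard retention lock: the expiry is set exactly
    once, and there is deliberately no method to shorten it, no matter
    what privileges the caller has."""

    def __init__(self, retain_until: datetime.date):
        self._retain_until = retain_until  # set once, never reassigned

    def can_delete(self, today: datetime.date) -> bool:
        # Reads are always allowed; deletion is refused until expiry.
        return today >= self._retain_until
```

A policy-based "WORM" is the same class with a setter on the retention date, which is exactly the weakness in question: whoever can edit the policy can unlock the data.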

I'm sure there are other systems that have similar capabilities I am just personally most familiar with 3par.

Microsoft says Windows 10 April update is fit for business rollout

Nate Amsden

good news

At this rate by 2021 they may have a decent replacement for win7.

So net neutrality has officially expired. Now what do we do?

Nate Amsden

next youtube or netflix

Whoever might be next there has to worry about YouTube, Netflix and Facebook before they have to worry about net neutrality. Just a couple of weeks ago there was news that Vevo was shutting down, caving in to YouTube. (I personally don't really stream anything (~1,700 disc collection though), and don't use Facebook either. I use YouTube for a few minutes a month for the occasional clip from some movie or TV show.)

Second to fighting the giants may be fighting the regulators, who seem bent on trying to force platforms to exclude certain kinds of content, which just makes it more difficult/expensive to come up with the software (or human resources) to control the content on the sites. Obviously even YouTube and Facebook struggle with this, with the resources they have available.