* Posts by Nate Amsden

2438 publicly visible posts • joined 19 Jun 2007

So net neutrality has officially expired. Now what do we do?

Nate Amsden

next youtube or netflix

Whoever might be next has to worry about youtube, netflix, and facebook before they have to worry about net neutrality. Just a couple of weeks ago there was news that Vevo was shutting down, caving in to Youtube. (I personally don't really stream anything (~1,700 disc collection though), and don't use facebook either. I use youtube for a few minutes a month for the occasional clip from some movie or tv show.)

Second to fighting the giants may be fighting the regulators, who seem bent on trying to force platforms to exclude certain kinds of content, which just makes it more difficult/expensive to come up with the software (or human resources) to police what gets posted. Obviously even youtube and facebook struggle with this despite the resources they have available.

Deck the halls with HALs: AI steals the show at Infosec Europe

Nate Amsden

Re: One box to server them all - To stop phishing attacks

Oh my that sounds absolutely terrible. The likes of facebook are already trying to get people walled into their gardens; we don't need yet another garden.

Just because you know the source doesn't mean that source wasn't compromised and sending out bad messages, or that DNS wasn't hijacked to send requests for a site to another location, or BGP routes taken over to redirect traffic, or that a legit message isn't sending a user to a legit website that just happened to be compromised.

Phishing has never been an issue in my life. I find it amusing that so many people still seem to fall for it. But as long as people can be convinced they are sending $10,000 via wire transfer to a Nigerian prince who will then send back $1 million, so convinced that they get angry when the wire transfer service refuses to process the transaction -- there will be 1000x more that will fall for other social engineering attacks.

(I have been running email servers since 1996 -- though I haven't had to support corporate email since 2001, only personal email and data center applications since)

Nate Amsden

ML and AI just seem like an extension of "big data" and analytics. Are ML and AI even feasible without a fairly significant data set? Probably not a coincidence that the main leaders in this space (publicly at least) are the ones with the largest amounts of data.

Have to use SMB 1.0? Windows 10 April 2018 Update says NO

Nate Amsden

pop up a warning?

I haven't heard of this so I assume it hasn't happened. But if not, it would have been nice for MS to pop up a warning message when connecting to SMB1 shares to alert the user. More props if they popped up a warning for SMB1-capable servers even when the clients are able to connect via a newer version of the protocol.

I'd wager ~98% of the users out there have no idea what SMB version they might be using (or even how to tell). I count myself among those. My usage of SMB is quite small, though I do have a samba system at home. Doing a quick check on Samba and SMB v1, I came across this article on how to turn SMB v1 off:

https://www.cyberciti.biz/faq/how-to-configure-samba-to-use-smbv2-and-disable-smbv1-on-linux-or-unix/

I checked the config (fairly default config) on my system and there is no mention of the "min protocol" setting (don't know what the default is for Samba 4.2), so maybe SMB v1 is enabled, or maybe not. The only clients that access it run windows 7, and there too I really have no idea what protocol version they use to connect.
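(For what it's worth, turning it off looks simple enough; a minimal sketch, assuming a current-ish Samba where the option is spelled "server min protocol" - older releases used plain "min protocol", and I don't know offhand which spelling 4.2 wants:)

# in /etc/samba/smb.conf, under [global]:
#   server min protocol = SMB2
testparm -sv | grep -i 'min protocol'   # show the effective setting, default or not
service smbd reload                     # pick up the change
smbstatus                               # newer Samba lists the negotiated protocol per client here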

(small disclaimer: linux has been my main OS of choice desktop/server for 20 years now, though I have used windows from 3.0 -> 7 on the client side, and I do manage a dozen or so windows server VMs (win2k8 and 2k12) as well, so not totally green)

Same goes for enterprise stuff: I have SMB on an EMC Isilon cluster (code is fairly current) but no idea what version of SMB it runs. (A quick search shows one person wanting to disable SMB v1 on Isilon 2 years ago, and another person suggesting a specific code version that introduced the option to disable SMBv1.)

Oddly enough, when a Tesla accelerates at a barrier, someone dies: Autopilot report lands

Nate Amsden

It wouldn't have needed to stop if it had stayed in the lane. No crossing solid lines. Should be pretty simple.

Monday: Intel touts 28-core desktop CPU. Tuesday: AMD turns Threadripper up to 32

Nate Amsden

Re: where's the innovation?

I wasn't expecting innovation myself, I was commenting on the article:

"[..]what matters is that someone is putting pressure on monopoly giant Intel, forcing it to innovate in the desktop "

Nate Amsden

where's the innovation?

Both of 'em are, at most, just tweaking server cpus to run on workstations. I have a dual socket HP opteron workstation, maybe from 2009. Bought it refurb from HP maybe 2012. Upgraded the cpus from 4 core to 6 core about two years ago (12 total cores), after finally finding cpus at a decent price. The cpus were specifically for HP blades; I just discarded the blade heatsinks and reused what the workstation already had. Nothing new here. I don't use it for much anymore but it's still a pretty solid system.

I've seen people claim the new Ryzen chips have forced intel to compete more. I don't really see that either myself. Ryzen fell far short of my own personal expectations on power usage anyway (not that intel is much better now). Sad to see seemingly everyone running into manufacturing walls relative to the past.

Where AMD forced intel to innovate was when intel came out with the core series architecture.

Cisco turns to AMD Epyc for the first time in new UCS model

Nate Amsden

Re: hot chips

The way you type makes me think you have absolutely no idea how much an 8-socket single system costs, not to mention there are very few such systems on the market anymore (as far as I can tell neither HP (Proliant anyway) nor Dell sell them anymore). Though HP may still have the 8-socket Superdome for HPUX, and of course SGI systems (owned by HP now). HP's last 8-socket Proliant as far as I can tell was the DL980, which was 7-8 years ago.

Nate Amsden

Re: Is it called the Anti-trust edition?

If Cisco really cared about that they would have been releasing Opteron 6000 series systems back in ~2010. Speaking of which, I have to find out at some point if HP is going to continue supporting my DL385G7s (Opteron 6200s) past October of this year or not.

Don't read this, Oracle... It's the rise of the open-source data strategies

Nate Amsden

Re: "remove the bureaucracy inherent in acquiring Oracle’s database"

The bureaucracy inherent in acquiring Oracle's database is almost nothing. You can download any version you want from their website (checked again just now to be sure - http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html) without any hurdles.

They purposely make it easy to lure people into using it so then they can come back with audits.

Last time I seriously dealt with Oracle DB licensing was about 10 years ago, and at the time it was pretty easy. As a new hire I tried to advise the company on how to deploy Oracle properly when they were undergoing an audit. My manager decided to ignore me; they paid their fines to get back to a normal stance and kept going, until the 2nd audit came around, when I once again told them what they needed to do, and that time they did it (I guess I did it, as I did most of the work). They still had to pay fines, but they were legit fines; they were terrible/lazy about managing their licenses. I found it ironic that I knew more about Oracle licensing than the Oracle reps did at the time (specifically around leveraging Oracle standard edition on multi core processors). I also did things like install single socket (quad core) vmware hosts (which vmware did not "support" at the time) to get more Oracle instances up (even though Oracle did not "support" vmware, I think they still may not). For production it was all bare metal, optimized with fast dual core or quad core cpus depending on EE or SE licensing.

That particular company when they started had Oracle SE "One", the tiny DB. Then they hired an Oracle DB consulting firm to help them manage the systems (this was before I was hired), and the first thing the consulting firm did was install Oracle EE everywhere. The company was hit hard for that after filing support cases against EE when they were not licensed for it. In the 2nd audit they got hit again because that DB consulting firm had monitoring that used partitions in oracle, another expensive add-on. No other apps or anything were using partitioning, but the company got hit with the bill for using an unlicensed feature. The monitoring software was then updated to not require partitioning.

Previous to that company was a place that had massive abuse of Oracle licensing: we had probably a dozen or more hosts, and were only paying for a couple (everything on EE). For some reason either Oracle didn't bother to audit, or when they did audit we got by somehow (I wasn't responsible for those systems). Eventually the company got their licensing correct but it took a few years.

I think Microsoft is similar, though at least with MS you can't(as far as I know) download their biggest products for free and be able to use them without a license key/file etc.

I have used Oracle DB for the past several years just as a back end database for VMware vCenter, very low utilization. Plan to move to vSphere 6 this year and to the vCenter appliance clustering along with it (Postgres I guess), so won't have Oracle anymore after that. They ping me every so often to try to get more sales but that doesn't go anywhere, and they haven't expressed any interest in an audit(for what I licensed I know I am way over licensed vs what is actually used). Oracle actually sent me an email recently reminding me my support is expiring on Feb 8 2020 (so why email now??) and the renewal fee is $3.15 (no idea where that number came from). They have been emailing me for a year saying my support is expiring in 2020 and I should renew. I mean I can understand emailing a few months before expiration but more than a year? Never seen that before.

It really would be nice if Oracle Enterprise Manager's features were available in Standard edition. I loved OEM, at least the performance management stuff, being able to see what queries are doing. I recall, again 10 years ago, it was possible (not "legally") to enable those features in Standard edition; then when the audit came I could just wipe out the data stores for OEM and replace them with regular ones, then reverse it again later (didn't care about data retention). Though with 11G that doesn't seem to be possible anymore (at least not in the same way it was then). My Oracle is really rusty these days though (I have never been a DBA).

Nate Amsden

did this article forget Oracle owns MySQL?

At least the official version? Sure MariaDB seems to be the more popular variant of that DB, but it's not as if you can't get MySQL directly from Oracle.

If customers don't want to pay for support it doesn't really matter what you make. The company I am with used to pay for Percona MySQL support (we use Percona across almost everything, though there is a push to go to Maria). Percona's pricing was very attractive at one time: I think it was basically $15k/year for unlimited support, unlimited instances. We filed maybe a few cases a year, not much at all. Then one year it jumped to something like $120k/year for the same stuff (actually I think it was less, but not certain; this was 2-3 years ago), so it was decided to drop their support and stick to internal staff only. Today I don't see pricing on Percona's site so not sure what it is now.

We tried RDS years ago when we first launched the app stack, and it was just terrible. I'm sure it is probably fine for pretty generic setups, but the lack of control was just maddening. Getting data out of the thing was quite a mess too; when we finally moved out of amazon cloud in 2012 we had to do a mysqldump to get the data out and import it into real mysql servers, a process that wasn't quick.
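(That export was just a big mysqldump over the wire; a sketch of the sort of invocation, with a made-up RDS endpoint:)

mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
  --single-transaction --routines --triggers --all-databases \
  | gzip > rds-export.sql.gz
# then on the new servers: gunzip -c rds-export.sql.gz | mysql -u root -p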

Last I recall Amazon themselves were huge users of Oracle internally having a site license(s) -- licenses that made it cheap/easy to deploy everywhere. I have a friend who has been an Oracle DBA manager at amazon for 12 years now, haven't talked to him in a few years though.

Internet engineers tear into United Nations' plan to move us all to IPv6

Nate Amsden

Re: Mapping plan

1.4 million routes doesn't really sound like much to me for 2022. Other than the big service providers, who really needs to carry the full BGP table anyway? Most folks that use BGP will probably only need a tiny fraction of it; for the rest of us, just uplink to a good service provider (in my case Internap) and let them do the routing.
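(To illustrate, taking just a default route from an upstream is a few lines of filtering; a hedged sketch in FRR/Quagga-style syntax, with made-up ASNs and addresses:)

vtysh <<'EOF'
configure terminal
! accept nothing but a default route from the upstream
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
router bgp 64512
 neighbor 192.0.2.1 remote-as 64500
 address-family ipv4 unicast
  neighbor 192.0.2.1 prefix-list DEFAULT-ONLY in
 exit-address-family
end
write memory
EOF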

I have a document here for a high end core switch from May 2004 where a vendor was using an IXIA traffic test tool against a couple of different products, one of which was capable of 1.2 million routes, though on a per port basis it was 230k. But still, that was 14 years ago, and it was a switch, not even a "router" (which typically have a lot more memory).

Most companies have had to upgrade their hardware anyway just for increases in throughput.

Today, just on a quick search, I see routers claiming over 2M IPv4 routes and 2M IPv6 routes in hardware (vs 230k on that switch from 14 years ago) on modern equipment; I'm sure there are others that can scale higher.

Hitch a ride on Storship Enterprise's weekly voyage of discovery

Nate Amsden

gdpr and location

AFAIK GDPR has nothing to do with where your data is stored. It has to do with whether or not you serve European customers. Now if you do serve Europeans and have no staff over there, then you can probably ignore GDPR as long as you don't mind them blocking you. Not the polite thing to do, but they really can't do anything else (like russia trying to stop telegram).

I personally pulled all of the apps and data for the org I work at out of Europe about 2 months ago (nothing to do with GDPR - pulled all the hardware too), but we are still covered by GDPR since we have a few offices and employees there, pay taxes etc. All European data and apps live in our U.S.-based colo along with everything else.

Activists hate them! One weird trick Facebook uses to fool people into accepting GDPR terms

Nate Amsden

Wonder what would happen

if Google and Facebook just decided to shut down stuff in Europe entirely for say 30 days.. that would be really interesting to see. Just leave a message on their websites that say something like "whoops give us some more time to make things GDPR compliant, in the meantime we can't let you use our services".

Are there european social networks that would explode overnight? European web search engines? European Youtube? And what would happen when/if google/facebook turned stuff back on: would the traffic come flooding back to them?

(I have never used facebook and my usage of google is quite minimal. I switched to bing as my search engine when I changed to the Palemoon browser (Nov 24 2017); it seems to do the job fine, though I still use google on firefox/android (minimal google usage there). I do use google maps though, as bing maps really doesn't show much useful info, or maybe it's a browser compatibility issue with bing. My usage of youtube is quite minimal as well (I don't use any streaming services).)

My switching to bing was really just an experiment (would I notice much by not using google?), and I just haven't bothered to change away from bing since; I know there are other alternatives as well. I haven't had any cases where I felt I needed to go to google search to find something (that I could not find on bing search).

Who had ICANN suing a German registrar over GDPR and Whois? Congrats, it's happening

Nate Amsden

misworded?

The organization that registers .de domains has to have personal info. They restrict who can register .de domains to people that live in germany. I had a .de vanity domain about 17 years ago; had it for probably 2 years before the .de folks took it back (I didn't know the "rules" at the time). Just checked again and the rule is that the admin contact must have an address in germany.

Remember that $5,000 you spent on Tesla's Autopilot and then sued when it didn't deliver? We have good news...

Nate Amsden

right thing to do

is to refund all of the extra money paid up front by those folks(if they so desire it anyway). Perhaps that means turning off some software functionality on those cars if they have some sort of super enhanced beta software that isn't distributed to the rest of the tesla cars. The claims I have noticed mentioned by Musk/Tesla seem to revolve around the software only. Though I admit I don't track them very closely.

As Tesla hits speed bump after speed bump, Elon Musk loses his mind in anti-media rant

Nate Amsden

not sure what news musk sees

I see news on car accidents every day. As does anyone who watches local news.

Course it doesn't make tech news sites since there isn't a tech angle to it.

I don't use twitter either so I guess if not for el reg his rants would be as unknown to me as local car accidents are to him.

New Facebook political ad rules: Now you must prove your ID before undermining democracy

Nate Amsden

what is a political ad?

I don't use facebook or instagram, but short of an ad going for or against a particular candidate(or perhaps a specific ballot measure) I wonder how they determine whether or not an ad is political. I suppose if they have politics related ad targeting that would be a sign as well, though that's independent of the ad content.

Microsoft and boffins cook up hardware-secured database

Nate Amsden

banks and fraud detection

If this level of security is so important it would be interesting to know specifically what approaches a bank might take with today's technology to accomplish the same thing(assuming they even protect at that level).

Besides, even if you make the database ultra super protected, those queries have to come from somewhere, most likely an application of sorts, and applications, I'd wager, are compromised at something like a 100:1 ratio vs databases.

Hold on. Here's an idea. Let's force AI bots to identify themselves as automatons, says Cali

Nate Amsden

Most of the junk calls I get are bots. They all seem to reply with the same response to my questioning whether they are a computer or not: they claim they are a person with a computer helping them. I hang up at that point.

Summoners of web tsunamis have moved to layer 7, says Cloudflare

Nate Amsden

old news, but good news?

https://en.wikipedia.org/wiki/Slowloris_%28computer_security%29

(just what I could remember off the top of my head)

Anyway, the good news is that there should be significantly less collateral damage caused by application layer attacks, since you don't have to flood all of the pipes to kill the service.

I was at one place that I would consider "high traffic" (several years ago anyway); they processed a few billion requests per day. They served ad tracking pixels so the performance demands were high; when I was there the dual socket servers could sustain 3,000 requests per second in tomcat. Anyway, before I started, AOL had added their pixel to AIM, and AIM wasn't good about closing connections for some reason, so they got millions of requests that exhausted the capacity of their systems just on open connections. They later tuned their load balancers to force terminate connections after something like 2 seconds (average request was maybe under 100ms), which fixed that issue.
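(These days that kind of protection is a couple of lines of load balancer config; a hedged sketch in haproxy syntax, with made-up names and illustrative timeouts, not whatever their load balancers actually used:)

defaults
    mode http
    timeout connect 5s
    timeout client 2s          # force-terminate clients idle longer than ~2s
    timeout server 30s
    timeout http-request 2s    # slowloris defense: the full request must arrive within 2s

frontend pixel
    bind :80
    default_backend tomcat_pool

backend tomcat_pool
    server t1 10.0.0.11:8080 check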

At another company, the app was so bad that sometimes even 1 request per second would tip it over (a certain kind of request, I don't remember what kind). The executives would freak out, claim DDoS, and want to manually block each inbound IP (and the IPs kept changing, at a low rate of speed). I just laughed; I mean come on, that is just pathetic. They expressed no real interest in fixing the app, just blocking the bad requests. That company died off several years ago. I don't think that situation was even an attack, because if your app can't handle more than a few requests per second you have bigger problems.

I've never personally been on the receiving end of what I would call a DDoS, though have been collateral damage(including the Dyn incident a couple of years ago).

Dell EMC's PowerMax migration: Let's just swaaap out this jet engine mid-flight

Nate Amsden

Re: Technology marches forward

Customers have been able to migrate from vmax to 3par non disruptively for years as well.

I am very curious to know the architectural change that requires data migration.

Nate Amsden

Re: seriously? and they charge for this?

Because many workloads don't run in a hypervisor

Whois privacy shambles becomes last-minute mad data scramble

Nate Amsden

why is this even an issue

https://en.wikipedia.org/wiki/Domain_privacy

The approach has been there for years already. Though it would be nice if the service was a standard (free) option with all domains, rather than a premium charge (as it seems to be with register.com, which I use, and godaddy, which my employer uses). The workaround would be to just bake the charge for the privacy service into the overall cost of the domain.

Samsung ready to fling Exynos at anyone who wants a phone chip

Nate Amsden

What differences did Samsung and Qualcomm have?

I don't recall ever reading anything about such differences; the article mentions differences but doesn't say what they were and provides no links to articles.

In a search I see this:

https://www.qualcomm.com/news/releases/2018/01/31/qualcomm-and-samsung-amend-long-term-cross-license-agreement

Which I assume is what was implied but even that doesn't say anything other than they expanded their cross licensing stuff.

Might this have to do with Samsung using Qualcomm chips in the U.S., which I had always assumed was for CDMA patents or whatever from Qualcomm (but obviously that is not Samsung specific)? Or was there something else? I have read people say the CDMA patents in question are going to expire soon as well (don't recall how soon).

All this time I was assuming Samsung had long made their chips available to anyone that wanted them.

I saw a link to an Anandtech review of the Samsung S9/S9+ phones earlier in the year and found it interesting their claim that the Exynos version of the phone had significantly worse battery life vs the Snapdragon.

Pentagon on military data-nomming JEDI cloud mind trick: There can be only one (vendor)

Nate Amsden

compromise

Go ahead, award it to one top level cloud vendor; just be sure that the vendor distributes the downstream hardware and software across at least 3 different top tier vendors (and not token deployments of said vendors):

- Data centers: Equinix, Switch, QTS ?

- Hypervisors: VMware, Citrix, (pick some KVM/Openstack supplier perhaps Red hat?)

- Servers: HP, Dell, Cisco ?

- Storage: HP, Dell, Hitachi or IBM ?

- Networking: Cisco, Juniper, HP or Extreme/Brocade ?

That cloud can then put their API stuff on top of that stack and go from there.

Not so easy, I am sure (I have been working in IT/Operations for 24 years), but at the same time it would simultaneously address outsiders' desire for more competition and the Pentagon's desire for a single vendor. It is also good for the industry, obviously diversifying where the money goes for such a big contract. We are talking about long term stuff here after all.

Kaspersky Lab's move from Russia to Switzerland fails to save it from Dutch oven

Nate Amsden

Re: Having come up against Kaspersky's DRM...

"Kaspersky could have taken the decision to finally put their customers first and stop ignoring state malware"

Maybe I misremember, but I thought the whole thing that kicked this all off was Kaspersky catching NSA malware that some contractor wasn't supposed to bring home, and automatically uploading it to their cloud for analysis, like they claim they do for pretty much all malware?

At the same time I do find it interesting that while Kaspersky is planning on opening up to outside audits and such, the exact opposite has been happening with U.S. security companies: I recall an el reg article or two mentioning that several companies at least say they will no longer allow other governments to inspect their code (which makes sense, as those countries certainly can use the opportunity to find security issues in the code).

To me, at the end of the day code inspection doesn't matter unless you're able to make sure the code you inspected is actually the code that is being installed (along with any updates). It also makes sense for any country that is highly concerned about security to use only locally sourced equipment/code which they can better maintain oversight of. Smaller countries are certainly at a disadvantage.

On my own systems anyway, anti-virus (currently kaspersky on my home windows systems, Sophos on my windows work VM, and nothing on my linux systems (linux is my main system)) hasn't picked up anything new since the 90s (that I recall anyway). Obviously I am careful about what I download.

I believe Kaspersky is honest in that they are not co-operating with the government, but I also find it quite likely that there are government agents employed at the company (that the company isn't aware are agents) who do stuff (I think the same is true of many/most/all big U.S. security companies too).

S/MIME artists: EFAIL email app flaws menace PGP-encrypted chats

Nate Amsden

Who relies on this stuff?

Does it only affect the encrypted data, or does it affect messages that are simply signed by PGP (or S/MIME)?

I've only been using email for about 24 years and can't remember ever coming across anybody that encrypted their emails. I recall playing with it for a few minutes back in the 90s but that's as far as I got. Though I have seen many emails (mainly on security or open source related mailing lists and stuff) that had PGP/GPG signatures (or so they claimed; I have never tried to validate any of them).

Orchestral manoeuvres in the Docker: A noob's guide to microservices

Nate Amsden

sounds great

So this can auto scale applications, so things like cascading failures across micro services due to massive interdependencies, horribly written SQL that can take out database servers/clusters easily (and hey, the developers don't even know what SQL is being used because they abstracted it away), and locking within applications due to external dependencies all go away, and it removes the need for performance testing as well because "auto scale", right?

right.

bad code is still bad code, micro services or "monolithic". Micro services can make the effects of bad code even worse than monolithic because developers end up cutting more and more corners for their services to get the data that they need.

Meanwhile you've massively increased your failure potential by having a half dozen more services, all of which are critical points of failure (with maybe a dozen or more others that are not critical). Before, if the one service went down your app went down; now you have roughly six times the probability of your app going down, because you have six times the failure points, any one of which takes the app down.
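(Back-of-the-envelope, with assumed availability numbers rather than anyone's real figures: six independent critical services at 99.9% availability each gives a composite of 0.999^6, so the failure probability is roughly six times that of a single service:)

awk 'BEGIN { printf "composite availability: %.4f, failure probability: %.4f\n", 0.999^6, 1-0.999^6 }'
# composite availability: 0.9940, failure probability: 0.0060 (vs 0.0010 for one service)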

Yay for Nvidia, GPU giant report decent first quarter results despite recent setbacks

Nate Amsden

Re: Nvidia still sucks...

As someone who has used nvidia on pretty much every desktop and laptop with linux for about 17 or 18 years, I can't really agree with that statement myself. Of course, per the other poster, I'll caveat my statement by saying I've always used the proprietary drivers. Also I'll say I don't run bleeding edge linux either; I'm sure that stuff is more likely to break if you wish to track the latest and greatest kernels.

Before Nvidia I was using Number Nine cards, if anyone remembers them, with AcceleratedX if I recall right. Between that and the commercial OSS drivers, those were interesting days for me anyway. Originally got #9 for the Windows stuff but then switched to linux for my desktops in 1998.

My first nvidia cards were used(?) Riva TNTs, not sure what brand (maybe Riva was a brand back then), maybe they were OEM; I got a pair of them from Fry's Electronics and they came in unmarked boxes in anti-static bags if I recall right. Ran them for many years.

Have had Nvidia in every laptop since 2006(which is just 3, I keep my laptops around for a while).

Chap charged with fraud after mail for UPS global HQ floods Chicago flat

Nate Amsden

more secure online I guess

I just did a USPS change of address again a few days ago, though I did it online (did two more 2 years ago and the process was the same). The service requires a credit card:

"Safe and Secure Safeguard your information with ID verification by a simple $1.00 charge to your credit or debit card"

I haven't tried to use a credit card that wasn't assigned to the original address, but assumed they verify the billing address (maybe they do not - but even validating the name alone should have failed the check if they are redirecting for a corporation). I guess they don't do this extra validation step when doing in-person forwarding at the post office itself?

You love Systemd – you just don't know it yet, wink Red Hat bods

Nate Amsden

Re: fscking BINARY LOGS.

WRT grep and logs I'm the same way, which is why I hate json so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped up some wacky sed stuff to generate a tiny bit of json to read into chef for provisioning systems though.
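(A flavor of that trick, as a minimal hypothetical sketch; the file path and attribute names are made up:)

ROLE=webserver
sed -e "s/@ROLE@/$ROLE/" -e "s/@FQDN@/$(hostname -f)/" <<'EOF' > /etc/chef/first-boot.json
{ "run_list": [ "role[@ROLE@]" ], "fqdn": "@FQDN@" }
EOF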

XML is similar, though I like XML a lot more; at least the closing tags are a lot easier to follow than trying to count the nested braces in json.

I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

Nate Amsden

as a linux user for 22 years

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it, just based on the comments I had read over the years; I postponed using it as long as I could. It took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with systemd to do stuff then I really wouldn't care, since I wouldn't interact with it (which is the case on my laptop, at least, though the laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, init etc. there.

If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers; I re-installed Debian on it last year, rebuilt the hardware etc. and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc (internal bind thing) for some reason, and because rndc was not working (I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script); left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option might fix it. That did fix the issue.
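(For anyone hitting the same thing, the unit file in question looked something like this; a sketch from memory, with paths assumed for a Debian-style bind install:)

cat <<'EOF' > /etc/systemd/system/bind9.service
[Unit]
Description=BIND 9 via legacy init script
After=network.target

[Service]
# Type=forking tells systemd the start command forks and exits,
# so it should not treat the script exiting as the daemon dying
Type=forking
ExecStart=/etc/init.d/bind9 start
ExecStop=/etc/init.d/bind9 stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start bind9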

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.
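(The workaround in script form; a sketch, with credentials/paths as placeholders:)

# bootstrap the first Percona XtraDB Cluster node; systemd runs this fine
service mysql bootstrap-pxc

# ...but "service mysql stop" is then a no-op because systemd thinks mysql
# isn't running, so shut the instance down behind systemd's back:
mysqladmin --defaults-file=/root/.my.cnf shutdown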

Next issue came up with running init scripts that have the same words in their names; most recently I upgraded systems that run OSSEC to systemd. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (the varnish instances were named varnish-XXX and varnish-YYY). In the varnish case, using custom unit files I got systemd to the point where it would start the service, but it still refused to "enable" the service because of the name conflict (I even changed the name, but then systemd looked at the name of the binary being called in the unit file and said there was a conflict there).

Fucking A. Systemd, shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system, but I'm not doing that; in the meantime I just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 minutes after bootup).
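(For reference, the "systemd way" he meant is presumably a templated unit, something like this sketch; instance names and paths made up:)

cat <<'EOF' > /etc/systemd/system/varnish@.service
[Unit]
Description=Varnish instance %i

[Service]
# %i is replaced by whatever follows the @ when the unit is started,
# so one template covers any number of instances with no name conflicts
ExecStart=/usr/sbin/varnishd -F -n %i -f /etc/varnish/%i.vcl

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start varnish@frontend varnish@backend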

Another thing bit us with systemd recently as well, again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved, because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures from bind trying to reach out to IPv6 name servers (ugh) when there is no IPv6 connectivity in the network (the solution is to start bind with the -4 option).
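(If you want the unit to honor that defaults file again, the usual pattern is an EnvironmentFile line in a drop-in; a sketch assuming the file exports an OPTIONS variable, and the file/unit names on your distro may differ:)

mkdir -p /etc/systemd/system/bind9.service.d
cat <<'EOF' > /etc/systemd/system/bind9.service.d/options.conf
[Service]
# the leading "-" means: don't fail if the file is missing
EnvironmentFile=-/etc/default/bind
# an empty ExecStart= clears the original, then redefine it with $OPTIONS
ExecStart=
ExecStart=/usr/sbin/named -f -4 $OPTIONS
EOF
systemctl daemon-reload
systemctl restart bind9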

I believe I have also caught systemd trying to mess with file systems (iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves (before vsphere 4.0 I attached them via fibre channel to the hypervisor, but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one, I believe, with the "noauto" option in fstab. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.
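(The fstab line in question looks something like this; the device path is made up, and _netdev is also worth having on iSCSI-backed volumes:)

# noauto: nothing (systemd included) should mount this at boot or on its own
# _netdev: if it is mounted, treat it as network-dependent storage
/dev/mapper/datavol01  /data01  ext4  noauto,_netdev  0  0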

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

Systemd-free Devuan Linux looses version 2.0 release candidate

Nate Amsden

Re: so far so good

Still getting updates on debian 7 myself on my main personal servers; devuan is looking like a good upgrade.

VMware to finally deliver full-function HTML5 vSphere client

Nate Amsden

Re: C# vSphere client was - how shall we put this? – bad. Just bad

As a linux user I remember my early days with esx 3.5 (before that I used GSX). I hated the .net client, but I grew to like it over time. Obviously much better than the flash version. I access it over xenapp from a windows VM (on top of linux). The linux xenapp client has trouble accessing the remote console, on my system at least (keyboard gets trapped in the console). Xenapp allows me to run just the app remotely from the datacenter (3-7,000mi away) with very low latency to vcenter itself (yes, still 5.5); I see a huge perf difference vs running the client locally. I have even used the .Net client via xenapp over vpn on android on my note 3 on several occasions. Works best with the stylus, to have greater precision on the cursor.

I have never used the flash client for more than a few minutes and have never seen the html client aside from a screen shot here and there.

Most of my co workers use mac and use xenapp on mac to get to it.

My only complaint, though minor, is from when I got my new laptop 2 years ago with a 4k screen. vSphere was one of two apps (the other was remote desktop) that were unusable at 4k, on win7 anyway. (No, not using win10.) So I just went back to 1080 resolution and that works fine. 4k was just smaller; I had to increase the size of everything to get it usable. I see little difference vs 1080 otherwise, at least on my ThinkPad P50.

It's not the only thing that windows vm does. I do a bunch of work related things in it, including vpn (and outlook and rdp). I ssh to my servers from linux by bouncing through the windows VM's ssh server (cygwin) first. The linux SSL vpn clients just don't work well enough for me.

Windows Notepad fixed after 33 years: Now it finally handles Unix, Mac OS line endings

Nate Amsden

relief arrived a long time ago

Open the file in wordpad instead of notepad. Though I suppose it's good for the newbies that probably never knew that.

Australian prisoner-tracking system brought down by 3PAR defects

Nate Amsden

Re: 3PAR losing data

Neither of those events was the fault of 3PAR technology. The 2nd was the fault of humans working with the hardware, who were managed by HP, so HP was at fault (and owned up to it). The first was also the fault of humans, but not those working with the hardware.

I have suffered 2 largish scale SAN outages since I have been working with them (though the first one I wasn't responsible for the storage). First was EMC: our websites were down for a good 35 hours or so while we recovered (lots of corrupted data). The cause was a double controller failure (software fault I believe). As to what caused the double controller failure I am not sure, but the storage admin blamed himself after the fact; apparently a configuration setting "allowed" the 2nd controller to fail (nobody was working on the system at the time of the failure, it was a Sunday afternoon; I recall the Oracle DBA told me he was driving to lunch and almost got into a car accident when I sent the alarm showing I/O errors on the Oracle servers). I don't know specifics. The hardware did not have to be replaced from what I recall (this was 2004).

Second failure was a 3PAR failure (2010); downtime was about 4-5 hrs. Root cause was a Seagate SATA hard disk (in an array of ~200 disks) that began silently corrupting data (it would acknowledge disk writes but then mess up the data on the reads). It took several hours for the situation to become critical; given the nature of the system, which distributes data over all disks by default, one disk doing bad things can wreak havoc. Had a few cases of data being corrupted, and then later that night the controller responsible for that disk panic'd, and then the 2nd controller took over, saw the same problem and panic'd (4 controller array, but the 3PAR architecture has disks being managed by pairs of controllers). That particular array wasn't responsible for front end operations (front end servers were all self contained, no external dependencies of any kind), but it did take out back end data processing. It was the best support experience I have ever had myself (this outage was before HP acquired 3PAR; support has not been as good since). From the incident report (2010):

"After PD94 was returned, 3PAR’s drive failure analysis team re-read the data in the special area where ‘pd diag’ wrote specific data, and again verified that what was written to the media is what 3PAR expected (was written by 3PAR tool) confirming the failure analysis that the data inconsistency developed during READ operations. In addition, 3PAR extracted the ‘internal HDD’ log from this drive and had Seagate review it for anomalies. Seagate could not find any issues with this drive based on log analysis. "

I learned a LOT during that outage both the outage itself and recovering after the fact.

That particular scenario I believe was addressed in the 3PAR Gen4 systems (~2011?) when they started having end-to-end checksums on everything internal to the array, and extended even further in Gen5 with checksums all the way from the host to the disk.

In both outages, neither company had any sort of backup system to take over the load; the array itself was a single point of failure (even though they are generally highly redundant internally). I'd bet 80% of the time companies deploying these do it like this, just for budget reasons alone.

I had a controller fail (technically just the hard disk in the controller that has the OS on it) mid software upgrade on a 3PAR F200 system (2 controllers only, end of life now for 2 years). The system never went completely down, but write performance really goes down on two controller arrays that use disk drives when a controller is out. The situation was annoying in that it took HP about 26 hours to resolve the issue, because the replacement controller didn't have the same OS version (and refused to join the cluster) and the on site tech had problems with his laptop crashing every 30 minutes from the USB serial connector.

But really all you need to do is look at the change logs for these systems(or any other complex system) and many times you'll find some really scary bugs being fixed.

Having been a customer for 12 years you may guess that I know MANY stories, good and bad, about 3PAR stuff over the years. All things considered, I am still a very satisfied customer; most of that (90%) is because of the core technology. Less satisfied with the level of support HP gives out these days, but the support aspect wasn't unexpected after being acquired by a big company.

I have a few 3PAR arrays today, all of the company's critical data are on them, though I don't have as much time to work with them as I used to(I am the only one in the company that does work with them though). They just sit back and run and run, like the rest of the infrastructure. The oldest 3PAR is also part of our first infrastructure and it has been online since 12/19/2011. Hoping to retire it soon and replace it with something current, but don't see it happening this year.

Though I have learned to be MUCH more conservative on what I do with storage, obviously LONG gone are the days where I thought "hey this disk array has two controllers and does RAID it's the same as this other one that has two controllers and does RAID".

Risky business: You'd better have a plan for tech to go wrong

Nate Amsden

scheduled vs unscheduled downtime

I don't see mention of this in the article. Back at my first SaaS gig more than a decade ago we served the biggest mobile carriers in the U.S. at the time. Our SLA was 99.5% for *unscheduled* outages. We had in our contract something like 16 hours/month of allowed downtime mainly for software deployments(minimum downtime for software deployment was probably 4-6 hrs). But even with 99.5% I don't think we ever came close to meeting that. Most of the outages were application layer issues.
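(For perspective: 99.5% over a 30-day month only allows about 3.6 hours of unscheduled downtime, on top of those ~16 scheduled hours:)

awk 'BEGIN { printf "allowed unscheduled downtime: %.1f hours/month\n", 30*24*0.005 }'
# allowed unscheduled downtime: 3.6 hours/month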

I just re-read an agreement (I wouldn't even call it a service level agreement) from a biggish name in one vertical of the SaaS space (used by e-commerce companies), where the commitment is along the lines of "taking all reasonable care" to maintain 24x7 operations. They reserve the right to take scheduled downtime. But no real SLA.

By contrast my personal favorite ISP is Internap and they do provide a 100% uptime SLA(and they even have latency and packet loss in their SLA), though it does exclude some things like DDoS.

Dyn, last I checked (years ago), had a 15 second SLA, which I was always impressed by; none of my monitoring has ever had that sort of resolution. Though the org I am with was hit by the Dyn DDoS attack, it was the one and only outage I've noticed from Dyn in 9 years of using them, so I can give them a pass pretty easily on that.

That one SaaS vendor that takes "reasonable precautions" has outages typically at least once a week by comparison (usually lasting a few minutes).

Industry whispers: Qualcomm mulls Arm server processor exit

Nate Amsden

Re: It's the software.

I assume you are referring to Windows software? Linux has run on ARM for well over a decade I believe, and it looks like there is Java for ARM as well, which covers a huge selection of software that runs on servers.

Though I'd wager in many cases the code isn't as optimized for performance as on x86.

Nate Amsden

Re: Why should ARM Holdings help?

El reg reports the new ARM server chip is going for $800 (16 core) and $1,800 (32 core) in 1,000-unit quantities, and Qualcomm's server chip is going for $2,000 (48 core):

https://www.theregister.co.uk/2018/05/08/cavium_thunderx2/

I have found it interesting that the server arm chips do draw a lot of power - the article above mentions power from 75W to 180W(32 cores - which seems to be in the same ballpark as a 32-core AMD Epyc?)

Open Internet lovin' Comcast: Buy our TV service – or no faster broadband for you!

Nate Amsden

Re: Just trying to ruffle feathers?

Poking at Xfinity's site I see there is an offer for 400Mbps for $75/mo (first 12 months - internet only); they also have a special page for gigabit offerings (up to 2Gbps), though it looks like they want an address to check first.

As another user noted the data cap is still 1TB, and as you noted there isn't much use of having such a fast connection anyway for most people. I have a 250mbps plan and the ONLY reason I'm on that plan is it was the one(at the time anyway) that gave me the fastest upload (20Mbit). I'd be perfectly fine with a 50Mbps or even less (less than two years ago I was on another cable provider in another city with a 30Mbps plan, though upload was only 2 or 3Mbps).

I saw flyers around recently advertising 400Mbit(that it was available, no mention of free upgrades) so I have no idea if I will get upgraded, I really don't care since I won't use that speed anyway.

VMware and Microsoft make up and get NSX-y together

Nate Amsden

224 million packets per second

Doing what exactly? 1U 10G switches have been able to push a billion packets a second for about 10 years already. Today's 1U switches can push 2 billion+.

Software defined isn't everything; in the case of MS:

https://www.theregister.co.uk/2018/01/08/azure_fpga_nics/

Wonder if vmware will be able to tie into that kind of stuff on Azure.

Press F to pay respects to the Windows 10 April Update casualties

Nate Amsden

pause for a moment

The moment being windows 10. Maybe it'll be usable in 2020; not holding my breath though.

It'd be nice if they unlocked that long term support version for anyone to use.

If I wanted an "agile" operating system I'd use gentoo.

(My main desktop/laptop OS has been linux since 1998 but I have kept a windows vm running for work stuff for at least a decade, semi obviously using win7 for that)

Democrats need just one more senator (and then a miracle) to reverse US net neutrality death

Nate Amsden

tumblr is on board?

Verizon owns them now right? Oh I see an article from another publication last year saying "Verizon is killing Tumblr's fight for net neutrality", wonder if that will repeat again.

Autonomy ex-CFO Hussain guilty of fraud: He cooked the books amid $11bn HP gobble

Nate Amsden

Wouldn't this be a SOX issue too?

I don't see SOX (Sarbanes-Oxley) mentioned in the article; perhaps there would/should be separate charges against CxOs of Autonomy for violations. Or maybe it wouldn't apply since Autonomy was a UK company, or maybe it would apply anyway since they were listed on a U.S. exchange (seems they went on Nasdaq in the 90s) even though they were a UK company; I am not sure how the rules work, obviously!

Amazon: For every dollar of op. profit going into Bezos' pockets, 73 cents came from AWS

Nate Amsden

Re: Azure vs AWS

Microservices is bullshit buzzword bingo with regard to "seeing benefits from cloud applications". What you need to see benefits is a greater ability to handle failures at every level (at an ops as well as a dev level), plus intense ongoing performance evaluation of the product to know how it scales and when to scale. All of these things are incredibly complex to get right for many applications. With intelligent infrastructure the first two are often quite easy to achieve with off the shelf hardware and software (that is typically agnostic to any application stack). The performance testing aspect is complex regardless (depending on application complexity of course).

That has absolutely nothing to do with micro services. If anything micro services dramatically increases the complexity because now you are having to scale many more individual components and watch what kind of impact of scaling those components has in the other components in the system.

That obviously doesn't stop people from trying; just look how many companies go down when amazon has an outage (or even service degradation) in their U.S. East facilities.

Nate Amsden

Re: Azure vs AWS

Infrastructure geek here. Infrastructure geeks want control over their own stuff (i.e. run it in a data center or colo or something); public cloud (IaaS at least) is quite far from that (and based on history never will be).

There seem to be fewer and fewer infrastructure geeks around anymore though, just like fewer and fewer good developers around(as a ratio of the whole anyway). The flood gates have opened and armies of mediocrity are displacing the experts at ever increasing rates.

Princeton research team hunting down IoT security blunders

Nate Amsden

Re: S.M.A.R.T. We know what that really stands for, don't we!

With the way some software is developed these days it's possible the developers don't even know what 3rd party services they are talking to ("Oh I just included this library/module because it did X for me, had no idea it was doing this other stuff too").

Red Hat sticks its storage software cap on Supermicro hardware

Nate Amsden

no flash?

Seems most any NAS system today should come with at least some degree of flash either for tiering or better yet for caching(even if for just reads, and/or metadata).

Their general purpose NAS data sheet makes no mention of the word flash, and no mention of how much memory is available for cache (or how that memory is used, e.g. mirrored write caching); the only mention of the word cache is for their Broadcom RAID controller with 2GB of cache. Given they are using hardware raid controllers, there are raid controllers that could handle the flash caching for them (I think broadcom has such controllers as well) if redhat's software cannot do it by itself.

Hard for me to believe anyone would deploy 288TB (minimum configuration - 120TB usable so a considerable overhead) of storage and not have some amount of caching from flash involved. The nature of the system(likely aiming for lower cost) doesn't strike me as one that would be all-flash (also no mention of deduplication or compression, the latter being pretty nice to have for any NAS system).

The design seems more suitable for content repositories; perhaps redhat is just trying to make their system appeal to a wider audience by throwing a "general" use case configuration in there.

Noise from blast of gas destroys Digiplex data depot disk drives

Nate Amsden

why replace servers

Seems that simply replacing the disks would be much quicker. Though perhaps their servers don't have easily swappable drives.