* Posts by Nate Amsden

2438 publicly visible posts • joined 19 Jun 2007

Cisco discloses self-sabotaging SSD bug that causes rolling outages for some Firepower appliances

Nate Amsden

when might this end?

Seems like we have been getting reports for the last ~5 years or more about SSD firmware bugs that brick drives after some fixed number of days of uptime, from a wide range of manufacturers.

For me these firmware bricking bugs are the biggest concern with SSDs on critical systems. Fortunately I have never been impacted (as in had a drive fail as a result of one of these) yet, but I have read many reports from others who have over the years.

Even the worst hard disks I ever used (IBM 75GXP back in ~2000 and yes I was part of the lawsuit for a short time anyway) did not fail like this. I mean you could literally have a dozen SSDs fail at exactly the same time because of this. It's quite horrifying.

I have a critical enterprise all-flash array that has been running since late 2014, with no plans to retire it, and all updates applied (with no expectation of any more firmware updates being made for these drives). The oldest drives are down to 89% endurance remaining, so in theory, endurance-wise, they could probably go another 20-30 years, though I don't plan to keep the system active beyond say 2026, assuming I'm still at the company etc.

Cisco: A price rise is coming to a town near you imminently. Blame chip shortages

Nate Amsden

pretty crazy

Came across this a few days ago, comments regarding network equipment lead times - https://www.reddit.com/r/networking/comments/n644ux/lead_times_through_the_roof/

My org hasn't needed to buy much in the past year so we haven't been affected by the situation. The most critical network has enough capacity for the next 2-3 years without requiring anything new. The oldest critical stuff (everything is redundant) has been in service 3,348 days (9.4 years today) and goes EOL by the vendor 6/30/22. Probably can get 3rd-party support after.

Server-wise there are no new needs at this point before 2024, if things keep going the way they have for the past ~3 years anyway.

Nothing exciting, but it all runs super stable with almost zero issues.

That Salesforce outage: Global DNS downfall started by one engineer trying a quick fix

Nate Amsden

bad app and DNS

Here's a great example of a bad app: Java. I first came across this probably in 2004. I just downloaded the "recommended" release of Oracle Java for Linux (from java.com), which is strangely 1.8 build 291 (thought there was Java 11 or 12 or 13 now?). Anyway...

peek inside the default java.security file

(adjusted formatting of the output to make it take fewer lines)

# The Java-level namelookup cache policy for successful lookups:
# any negative value: caching forever - any positive value: the number of seconds to cache an address for - zero: do not cache
# default value is forever (FOREVER). For security reasons, this caching is made forever when a security manager is set. When a security manager is not set, the default behavior in this implementation is to cache for 30 seconds.
# NOTE: setting this to anything other than the default value can have serious security implications. Do not set it unless you are sure you are not exposed to DNS spoofing attack.
#networkaddress.cache.ttl=-1

I don't think I need to explain how stupid that is. It caused major pain for us back in 2004 (till we found the setting), and again in 2011 (four companies later; we couldn't convince the payment processor to adjust this setting at the time, so we had to resort to rotating DNS names when we had IP changes), and it's still the same default in 2021. Not sure what the default may be in anything newer than Java 8. DNS spoofing attacks are a thing of course (though I believe handling them in this manner is poor), but it's also possible to be under a spoofing attack when the JVM starts up, resulting in a bad DNS result which then never expires under the default settings anyway.

At the end of the day it's a bad default setting. I'm fine if someone wants to, for some crazy reason, put this setting in themselves, but it should not be the default. In my experience not many people know this setting even exists, and they are surprised to learn about it.
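For anyone who hits this, the fix is just overriding that default. A minimal sketch, assuming a Java 8 layout (the property name is the real one from the file quoted above; the 60-second value is only an example, pick whatever fits your tolerance for spoofing risk vs stale records):

# in $JAVA_HOME/jre/lib/security/java.security - cache successful lookups for 60 seconds instead of forever
networkaddress.cache.ttl=60

The same property can also be set programmatically via java.security.Security.setProperty() early in startup, before the first lookup happens.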

But again, not a DNS problem, bad application problem.

Nate Amsden

wth is it with always dns?

I don't get it. Been running DNS for about 25 years now, and it's super rare that a problem is DNS related in my experience. I certainly have had DNS issues over the years, but most often the problems tend to be bad config, bad application (which includes actual apps as well as software running on devices such as OSes, storage systems, network devices etc), or bad user. In my experience bad application wins the vast majority of the time. I have worked in SaaS-style shops (as in, in-house developed core applications) since 2000.

But I have seen many people say "it's always DNS"; maybe DNS-related issues are much more common in Windows environments? I know DNS resolution can be a pain, such as dealing with various levels of browser and OS caching with split DNS (where DNS names resolve to different addresses depending on whether you are inside or outside the network/VPN). I don't classify those as DNS issues though: DNS is behaving exactly as it was configured/intended to. It was the unfortunate user who happened to do an action which performed a query whose results were then cached, possibly by multiple layers in the operating system, before switching networks; the cache didn't get invalidated, resulting in a problem.
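(For what it's worth, when I have hit that stale-cache situation, the fix is flushing whichever layer did the caching. A sketch, assuming systemd-resolved or nscd is what's doing the caching on the box:

$ resolvectl flush-caches
$ sudo nscd -i hosts

Browsers keep their own DNS cache on top of that, which a restart usually clears.)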

I know there have been some higher profile DNS related outages by some cloud providers(I think MS had one not long ago) but still seems to be a tiny minority of the causes of problems.

It makes me feel like "it's always DNS" comes from the same place as the folks who try to blame the network for every little problem, when it's almost never the network either (speaking as someone who manages servers, storage, networking, apps, hypervisors etc, so I have good visibility into most everything except in-house apps).

LG intranet leaks suggest internal firesale of unsold, unreleased smartphones as biz exits the mobile market

Nate Amsden

Re: I had one of the HP Touchpads

I have about 5 or 6. I kept one of my original 16GB Touchpads from the firesale in the brown box (the mailing box; the device box is white), never opened. Don't know why, just wanted to.

Two of my touchpads get daily use and have for many years as digital picture frames(the others are mainly spares). Paired with the touchstone charging dock they just sit there scrolling through hundreds to thousands of pictures. Have a 3rd with touchstone which I did use as a picture frame too but stopped using it for now as I don't have a good spot to put it where it would actually be noticed often.

Took some time to work around limitations in the software to get a more random selection of pictures, as well as distributing them into directories where the file quantity wasn't too big for the device to deal with. Also cropped the pictures to minimize the CPU required to auto-resize when displaying, and limited pictures to either portrait or landscape to maximize viewing area. I've been quite impressed with how well they have held up: 0 failures in a decade. I would have expected at least a screen to die or memory to go bad or something. Their clocks aren't accurate as they are never on the network, so there is serious drift, but I don't use them for clocks.

I remember spending at least a couple of hours working through errors on HP's site the day of the fire sale to buy some. I bought four 16GB models at the time (used 2 for gifts and sold one at cost to a friend), and had bought a single 32GB model on day 1 from Best Buy, who later refunded me the difference in cost once the firesale started, I think.

Still have a HP Pre3 as well though that has mostly sat in a box since I got my Galaxy note 3 (been on S8 Active for a couple of years, no plans to upgrade anytime soon, maximizing battery life as best as I can with chargie.org dongles that limit charging time).

WebOS was pretty neat, though it was clear it was pretty doomed when HP appeared unwilling to invest in it; instead they tossed what, $10 billion, at Autonomy? It was going to take several billion to even consider trying to compete seriously with Android/iOS.

21 nails in Exim mail server: Vulnerabilities enable 'full remote unauthenticated code execution', millions of boxes at risk

Nate Amsden

shocking

Well maybe I shouldn't be shocked, but I am still. Not at the security issue, but looking at that MX server survey I had no idea that Exim and Postfix combined had that high a market share, or that Sendmail was at 40% ~15 years ago and is now at under 4%. I really expected nobody to have more than say 20-25% market share. Personally I have been using Postfix since about 2001 I think. It was suggested to me for an anti-virus solution I was looking to deploy at the time, and I just haven't had a need to look at anything else.

I went off to look at sendmail.org, and wow, they are old school (except they seem to be operating under the Proofpoint brand now; not sure when that happened). Just read the stuff under the "Contact us" section. Also, it's the first reference to an FTP server I have seen on a website in a long time (I have nothing against FTP myself, other than it being funky to work through firewalls).

I still prefer text email myself and my personal email server does strip html off of incoming emails automatically which can sometimes make things difficult (and in very rare occasions impossible as in the entire message is empty) to read. But it certainly brings back memories of an earlier era(an era that was much more fun for me computing wise anyway).

For work my org uses Office 365 (and hosted Exchange at Rackspace prior). MS introduced breaking changes in the OWA client, which I use for most of my mail, that make composing text-based email impossible. I reported it almost 2 years ago and last I checked it was still broken (the behavior being that newline characters get broken, making the entire email one long line, in many cases totally unreadable; the message is fine in the "outbox" and only gets mangled once it gets beyond that level).

Working from home is the future, yet VMware just extended vSphere 6.5 support for a year because remote upgrades are too hard

Nate Amsden

Licensing and hardware support would be the biggest reasons (much more so for 7.0 than 6.7). 7.0 dropped official support for tons of hardware (and a bunch of hardware isn't officially supported by 6.7 either). And of course orgs that have been running VMware for a while may not have maintained their support contracts over the years, which is required to get upgraded license keys for 7.0. I know on more than one occasion I have gone to upgrade license keys only to get rejected saying the support was expired (support was through HP), and had to go to HP to get them to "sync" the system so the VMware portal would recognize that support is current.

Still running vSphere 6.5 with vCenter 6.7 across my org; nothing in the newer versions is compelling enough to upgrade (vCenter 6.7 only for the newer HTML client). Really miss the .NET client (saying that is weird given I've been Linux on the desktop since 1998 and hated using .NET back when I first started using ESX 3 because of it, but it's so much better than the HTML and flash clients). The exception is 4K screen support: I found it unusable on my laptop when I first tried 4K a few years ago, but worked around that by switching the laptop to 1080p.

Last vSphere release I was super excited about was 4.1. Everything since has been "nice, but not that exciting" which is good I suppose, nice stable product(I file less than 1 ticket/year on vmware issues with ~700-1000 VMs running).

Microsoft customers locked out of Teams, Office, Xbox, Dynamics – and Azure Active Directory breakdown blamed

Nate Amsden

I guess they are going to miss their SLA?

https://www.theregister.com/2021/01/06/four_nines_azure_active_directory_sla/

Google says once third-party cookies are toast, Chrome won't help ad networks track individuals around the web

Nate Amsden

Re: Once upon a time...

Firefox removed the ability to prompt to accept cookies a long time ago (I think it was just after Firefox 33.something). I held onto it as long as I could, then switched to Pale Moon, which eventually had to retire that feature too, probably a year or two ago (because of upstream changes). I tried Waterfox before I decided on Pale Moon, but the feature did not work at all in that browser either at the time.

I still have 37k entries in my moz_perms sqlite database, which I assume Pale Moon still uses (I can right-click on a page and see the permissions for that page, and they seem to hold up), though I don't have a way to add more entries (easily anyway).

$ sqlite3 permissions.sqlite
SQLite version 3.31.1 2020-01-27 19:55:54
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE moz_hosts ( id INTEGER PRIMARY KEY,host TEXT,type TEXT,permission INTEGER,expireType INTEGER,expireTime INTEGER,modificationTime INTEGER,appId INTEGER,isInBrowserElement INTEGER);
CREATE TABLE IF NOT EXISTS "moz_perms" ( id INTEGER PRIMARY KEY,origin TEXT,type TEXT,permission INTEGER,expireType INTEGER,expireTime INTEGER,modificationTime INTEGER);
sqlite> select count(*) from moz_perms;
37009
sqlite> select * from moz_perms where origin like "%thereg%" limit 5;
35|http://www.theregister.co.uk|cookie|2|0|0|1512317577932
36|https://www.theregister.co.uk|cookie|2|0|0|1512317577932
37|http://nir.theregister.co.uk|cookie|2|0|0|1512317577932
38|https://nir.theregister.co.uk|cookie|2|0|0|1512317577932
130|http://forums.theregister.co.uk|cookie|8|0|0|1512317577932

But zero entries for theregister.com; I guess they changed that after I lost access to that feature.
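(In theory you can still add entries by hand. A minimal sketch, assuming the moz_perms schema above and assuming permission=2 means "deny" for cookies, as the rows above suggest; the origin here is just an example:

sqlite> INSERT INTO moz_perms (origin, type, permission, expireType, expireTime, modificationTime)
   ...> VALUES ('https://www.theregister.com', 'cookie', 2, 0, 0, strftime('%s','now') * 1000);

Close the browser first so the database isn't locked.)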

The wrong guy: Backup outfit Spanning deleted my personal data, claims Cohesity field CTO

Nate Amsden

agree with the other posters

This CEO is an idiot. Don't care what the EULA says: don't pay such tiny amounts of money for such a massive amount of storage. The math doesn't work out, not even close. It's like people wanting to leverage Google Drive for a few dollars a month while storing hundreds of gigs to tens or hundreds of TBs, and somehow thinking that is a reasonable thing to do. It's so beyond absurd I don't even have words.

And what kind of small business has that much data? Really sounds fishy. Maybe this CEO is trying to disguise his hoarding habits by saying it's for his "Small business".

To add insult to injury, the guy is a CEO of a storage company. He has no right to be angry; he should be embarrassed for being so stupid about this, and then doubling down and going public about it. Hell, he could have fired off an email to people on his IT team saying "hey I'm thinking about using this for X, what do you think?" Or maybe he did and he didn't agree with their response.

Red Hat returns with another peace offering in the wake of the CentOS Stream affair: More free stuff

Nate Amsden

Really the same situation exists for any "free" Linux distribution as well: the maintainers could find themselves no longer interested in doing it, for any number of reasons, "forcing" the customer to make a move.

https://en.wikipedia.org/wiki/Scientific_Linux (similar to CentOS)

There was another one (probably more) many years ago that was similar to CentOS and quit too, saying they were just going to use CentOS. And of course, think of how many distributions have come and gone over the past 20 years. I thought it was named Scientific Linux, which is why I looked it up, but it seems I remembered wrong, or maybe there was another Scientific Linux in the 2005-2010 time frame.

I haven't used CentOS since ~6.something in a professional environment and haven't used RHEL since RHEL 3 (I remember going from RH 7.2 to RHEL 2.1). Not that I have anything against either distro; it's just that the past two companies I've been at (almost 11 years now) have been Ubuntu-based and I haven't felt enough of a need to make that big a change (I did like aspects of CentOS/RHEL after having used them for many years, despite using Debian on my personal stuff since 1998 - though all my Debian systems have since gone to Devuan). The whole systemd thing has long soured me on RHEL; not that Ubuntu doesn't have the same issue, but it's just one more reason not to bother moving to another distro.

If you're not willing to pay for support, just be prepared to have to jump distros every now and then.

VMware warns of critical remote code execution flaw in vSphere HTML5 client

Nate Amsden

kill openSLP

FYI you can run this command to see if the SLP service is even being used (at least on vSphere 6): esxcli system slp stats get

Assuming those stats are accurate, when I disabled SLP on my clusters back in October they indicated no hits to the service since the system started (apart from what I assume was some kind of health check that ran when the hypervisor booted; the timestamp of the event matches boot time exactly).

VMware has suggested disabling SLP if you are not using it as a "workaround", though they implied (as of Oct 2020; looking now it seems they have removed that language, and checking archive.org for the older page just returned a blank page) that it may break stuff, so it's not a long-term solution. For me, based on the stats from that command, it is a long-term solution; I don't think that feature has ever been used at the orgs I have worked at:

https://kb.vmware.com/s/article/76372

As an extra check I ran nmap against the hosts to verify the port was closed after making the change.
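For anyone wanting to do the same, the steps were roughly as follows; this is from memory, so double-check against the KB article above before relying on it:

esxcli network firewall ruleset set -r CIMSLP -e 0    # block SLP in the ESXi firewall
/etc/init.d/slpd stop                                 # stop the SLP daemon
chkconfig slpd off                                    # keep it stopped across reboots
nmap -p 427 <esxi-host>                               # SLP listens on 427; verify it shows closed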

Linux Mint users in hot water for being slow with security updates, running old versions

Nate Amsden

Re: An option for automatic updates would be good

On Mint 17 I went as far as locking my kernel version for something like 2 years because I was tired of sound randomly breaking between versions. Haven't had that issue on Mint 20 yet, but then my kernel hasn't been updated for a while; the build date says April 20 2020 (I didn't install Mint 20 till August 2020).
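For anyone wanting to do the same, locking a kernel on Mint/Ubuntu is just a package hold. A minimal sketch, with the version being an example (you may also need to hold the generic metapackages if they are installed):

$ sudo apt-mark hold linux-image-4.4.0-98-generic linux-headers-4.4.0-98-generic
$ apt-mark showhold    # verify the hold took

and apt-mark unhold when you want to start taking kernel updates again.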

Nate Amsden

Re: Solve the Nvida problem and I'll be on board

I assume you have a super new Nvidia card?

I've been using Nvidia on Linux since probably 98-99, a Riva TNT or something like that. I haven't had to use Nvidia's drivers from their website (I mean downloading them manually and using their installer; I have relied on the packaged drivers, which seem to do something similar but in a cleaner way) since something like Ubuntu 8 or 9. But I am by no means on the bleeding edge of Nvidia. My main machine runs a Quadro M2000M (Lenovo P50 laptop), though I really don't use its capabilities for much these days.

I remember having to purchase AcceleratedX back in the 90s for Linux; not sure if anyone else around here remembers that. I think it was mainly for my Number Nine (brand) cards, which I was a fan of back then.

Nate Amsden

Re: Could they pick a better example than Firefox?

Would be nice if Mint LTS releases, being LTS releases, used Firefox ESR instead of regular Firefox, for exactly this kind of reason.

I upgrade my ESR about once every month or two myself. I haven't run the browsers built into Mint probably since Firefox's version was below 30.x (maybe I was on Ubuntu 10.04 at the time; I don't recall how long ago that Firefox version was).

Nate Amsden

Re: So...perhaps Mint SHOULD have automatic updates turned on.

sure, change the default.

Then the more advanced users who care can easily turn it off if they need to.

(Linux user since 1996, no auto updates, ever)

I ran Mint 17 until not long after Mint 20 came out; the timestamp on my last downloaded ISO of Mint 20 was Aug 15, which sounds about right. I maintain my browsers separately from the OS (Pale Moon, Firefox ESR and SeaMonkey, none of which appear to be in the Mint repos). I also run them as a somewhat more limited user and launch them via sudo, e.g.

sudo -u firefox -H VDPAU_NVIDIA_NO_OVERLAY=1 /usr/local/palemoon/palemoon %u

Could go further of course but haven't bothered to do so. It does make sorting out file permissions funky sometimes.
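(The funkiness is mostly files ending up owned by the browser user. A sketch of one way to handle it, where "firefox" is the limited user from the command above and "nate" stands in for your main account; that name is just an example:

$ sudo setfacl -R -m u:nate:rwX /home/firefox/Downloads    # let your main user at existing files
$ sudo setfacl -d -m u:nate:rwX /home/firefox/Downloads    # default ACL so new files inherit it

)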

I just wasn't looking forward to doing the migration work until Mint 20 came out. I rely on a GNOME app called brightside, which I guess hasn't been maintained in years; it took several hours of work to compile it on Mint 20 from Ubuntu 16 sources (it was available in the Mint 17 repos). Plus more hours to get everything set up again (I started from scratch on a new partition rather than try to upgrade the existing OS; I still have Mint 17 installed and can boot into it if needed). Years ago I ran Ubuntu 10.04 on my laptop for a good 12-18 months after EOL before installing Mint 17.

For me personally the security risk is quite low. I suspect for most linux users the risk is quite low(though mine personally I think is much lower than even that), just by nature of the type of users most likely to be running linux, combine that with stats like this:

https://www.theregister.com/2021/02/18/cve_exploitation_2_6pc_kenna_security/

I guess at the end of the day the "install all updates now!" group of people generally come across as saying you will be secure if you have all of the latest updates (that may not be the intention, but that's the way it sounds in my opinion), which of course isn't true. In my opinion you are safer running older software while not going to tons of random sites, downloading random things, and opening random email attachments than you are doing those things with all security updates applied. So of course it depends on the user; hence, going back to my point, Linux users are more likely not to do that kind of thing.

I have run internet-exposed servers since 1996. I host my own websites, DNS, email etc, all on public IPs (behind an OpenBSD firewall) at a co-location facility (and I even still have an FTP server for a couple of people who use my systems). My "exposure" there I guess you could say is "high" because the systems are always open from the outside (at least the ports I want opened are). However I have had zero security incidents (that I am aware of) since ~1999 (and in that case the incident was caused by a malicious user who was granted legitimate ssh-level access but turned out not to be trustworthy).

Microsoft announces a new Office for offline fans, slashes support, hikes the price

Nate Amsden

now if they only let consumers buy LTSC windows

That would be a nice improvement. I know enterprise customers can get LTSC, but consumers should be able to as well. MS is supporting the LTSC windows regardless so it's not as if it would be much effort. Slap a premium price on it, that's fine. I'll pay double, even triple the price for that peace of mind without much hesitation.

I purchased a copy of Windows 10 LTSC for a work VM last year, and the cost was almost $500 (had to buy Win 10 Pro plus an LTSC upgrade license; note the company paid, not me of course). It's possible the vendor quoted suboptimal part numbers, I am not sure. Support until 2029, so that's good.

My main desktops/laptops have been linux since 1998, any windows systems at home still run 7(most are off as I don't need them), no plans to upgrade them at this point. AV software still supported, and I haven't had a known security issue with any of my personal systems since the early 90s.

Housekeeping and kernel upgrades do not always make for happy bedfellows

Nate Amsden

don't understand

As someone who manually compiled and installed many kernels from 2.0 until late in the 2.2 series (fond memories of 2.2.19 for some reason), I never once had to delete any files as a result of a kernel update (outside of MAYBE the /boot partition if it was low on space). I had to check the article again to make sure it was referring to Linux, and it seems to be. From 2.4.x onwards (basically when they stopped doing separate "stable" and "development" kernels) I have relied on the distro packages for kernel updates; again, no files need be deleted for such updates.
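For anyone who never lived through it, the routine was purely additive. A sketch from memory of the 2.x-era flow on x86 (paths varied a bit by version and boot loader):

$ make menuconfig            # pick options
$ make dep bzImage modules   # build (make dep was still a thing in the 2.x days)
# make modules_install       # as root; installs under /lib/modules/<version>
# cp arch/i386/boot/bzImage /boot/vmlinuz-<version>
# lilo                       # or update your boot loader of choice

Each new kernel lands alongside the old one; nothing gets deleted unless /boot fills up.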

Could this story somehow be referring to an OS upgrade rather than a KERNEL upgrade? In Linux land the two can be, and typically are, completely unrelated (even in 2021: as terrible as the Linux kernel is about binary compatibility with its own drivers, many people know that whether you run kernel 3.x, 4.x or 5.x, provided it supports your hardware, there isn't much difference for typical workloads). But even with an OS upgrade I don't see a need to delete (m)any files. I also spent hundreds of hours back in the 90s compiling things like GNOME, KDE, X11, even libc5 and glibc, among dozens of other packages I can't remember anymore.

Maybe this was a thing before the 2.0 kernels (my intro to Linux was Slackware 3.0 with 2.0.something, I believe, back in 1996), but I suspect it was not.

This whole story just doesn't make any sense.

Salesforce: Forget the ping-pong and snacks, the 9-to-5 working day is just so 2019, it's over and done with

Nate Amsden

Re: Up yours to HP and Yahoo etc

I remember when HP announced that employees had to come to the office again a few years ago. There were claims that there literally wasn't enough office space available for all employees to come in. Perhaps they corrected that situation, or reversed course on the concept; I'm not sure.

Nate Amsden

Re: WFHSS

Probably much less of a thing than the "stress syndromes" driven by commute times, traffic, open floor office plans(never thought I'd REALLY miss cube farms), etc etc.

So I'd expect it to be a net plus overall for worker health. Certainly not universally but the reverse situation is not universal either. Hopefully employers can figure out the right balance for their employees.

For me personally, going to an office isn't the end of the world but it is more about cost of housing and commute times which really make that unattractive in many situations. I did have a job for a couple of years in a small city(~100k) where I had been living for 9 years, and the new job was literally across the street from my apartment. I had co-workers who parked further away(to avoid parking fees) than I lived. That was an awesome setup. I originally moved to that apartment for another job, which was about 1/2 mile up the street back in 2000.

More patches for SolarWinds Orion after researchers find flaw allowing low-priv users to execute code, among others

Nate Amsden

ServU FTP

Wow, that brings back memories. I knew a guy (online only, never met) back in the late 90s who distributed a "hacked" Serv-U that was popular with a certain file-sharing crowd back then. People used it because it didn't need a license key, but he also inserted his own backdoor account(s).

Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB

Nate Amsden

Never used Synology (I do have a 4-bay TerraMaster whose software I declared unusable after about an hour; fortunately I was able to easily replace it with Devuan running on a USB-connected SSD, which has been running for a year now at a colo for my personal offsite file storage), but I would imagine they got tired of getting their support burned by customers using SMR drives or something (often not the customers' fault).

Nate Amsden

Re: Are they going proprietary though?

In what world is Synology an enterprise NAS? They are at best an SMB option; the same goes for the TrueNAS/FreeNAS stuff. Whole different sport. Maybe they want to look more enterprise-like by adopting enterprise things like custom firmware?

I can't think of any time a supported enterprise storage system would have any storage in it ever other than from the vendor of the system. Same goes for every other component in the system.

So I can certainly see why users would be upset since Synology is not an enterprise system, never has been, probably never will be. Same for Qnap and others in that group of products(can't name any others since well I don't use 'em). Maybe some think they are enterprise because they provide a rackmount version of their product or something (I'm guessing they do).

IBM cloud tries to subvert subscriptions with pricing plan that stretches some discounts

Nate Amsden

Re: This kind of makes the financial motivation of moving to the cloud moot

Brick-and-mortar infrastructure: you can (as many have since VMware became a thing) consolidate and oversubscribe to slash the costs of underused services/servers/storage.

Cloud: you only pay for what you provision (no way to take cpu/memory/disk shares from instance 1 to instance 35 where it can be used).

(This very important distinction is what allowed the org I work for to save more than $1 million/year since I moved them out of public cloud in early 2012. There have been recent cost analyses by people who wanted to move back as a resume bullet point, but they couldn't make the numbers work and aren't with the org anymore. To the other comment about ROI: for us the ROI in 2012 was about 8 months.)

I'm assuming most clouds don't offer committed rates on general resources

For example: a customer commits to 100 "big" instances, each having 10 CPU cores, 64GB of memory, and 1TB of disk.

That gives a total of 1,000 CPU cores, 6,400GB of memory, and 100TB of disk.

While that customer pays for those resources, they could then provision any instance sizes (and number) they want, as long as it fits within that total aggregate capacity. They would likely still be forced to use predefined fixed sizes for those instances (so no fine-tuning of the number of CPUs, memory, and disk space/type per instance, which leads to more waste).

But I think most public clouds still do fixed instance allocations and pay-per-instance rather than pay-per-resource, which leads to massive waste of resources. Containers help address some of this, but there is quite a bit of waste there too. This is an issue I've been pointing out for about 11 years now.

Severe bug in Libgcrypt – used by GPG and others – is a whole heap of trouble, prompts patch scramble

Nate Amsden

systemd and DNSSEC ?

wtf? I wouldn't be surprised if systemd is doing DNS these days but isn't DNSSEC a server-to-server thing not a client to server thing? If so wtf is systemd doing with it?

on the topic of DNSSEC I came across this blog a while back and found it informative, rips into DNSSEC https://sockpuppet.org/blog/2015/01/15/against-dnssec/

"In fact, it does nothing for any of the “last mile” of DNS lookups: the link between software and DNS servers. It’s a server-to-server protocol."

Been running DNS myself since about 1997(both hosting authoritative BIND9 servers as well as hosting domains with Dyn in the last decade or so), though no DNSSEC.

Linux maintainer says long-term support for 5.10 will stay at two years unless biz world steps up and actually uses it

Nate Amsden

Re: Not a company but as an end-user...

Me too. Hell, on my laptop I ran Ubuntu 10.04 LTS way past end of life (didn't want Unity; eventually installed Mint 17), and only in the past 3 months did I install Mint 20 (I was on Mint 17 before, so ran it a good 18 months or so past end of life). So far Mint 20 has more bugs that affect me than 17 did, but whatever, no deal breakers (and really nothing new experience-wise that makes me happy I upgraded). I do maintain my browsers separately (manually) from the OS, so I do get updates (running Pale Moon, Firefox ESR and SeaMonkey at the same time for different tasks).

When I was on Mint 17 I actually locked my kernel to an older release (4.4.0-98) and ran that for a solid 3 years because I was tired of shit breaking randomly after upgrades (mainly sound not working after more than one new kernel upgrade, on a Lenovo P50 laptop). I would probably have stuck with the 3.x kernels on Mint 17, but had to upgrade to 4.x to get wifi working (something I didn't realize for the first 6 months, until I traveled, since I never use wifi at home with my laptop, always ethernet).

Have been annoyed for so long with Linux's lack of a stable ABI for drivers. I know it'll never get fixed (I've been using Linux for almost 25 years now) but it still annoys me. Fortunately these days most of my servers are VMs. It was so frustrating back in earlier days having to slipstream ethernet and storage drivers into Red Hat/CentOS kernels to kickstart systems, and having to match up drivers with the kernels (even if it was off by a small revision it would puke). I think that was the last time I used cpio.

Nate Amsden

Re: support life?

Why do you care what kernel is in your TP-Link device, especially if it is a supported version? I just checked one of my $100k storage arrays, which is running supported software from the vendor: a 2.6.32 kernel (apparently built in 2017). It works fine; it's a pretty locked-down system (I managed to sneak my ssh key onto the system during a recent support call, otherwise customers don't generally have Linux-level access). I have no concerns. It's by far not the most recent OS release for that platform, but it is technically the most recent recommended release for that generation of hardware (it just had the latest patches applied a few weeks ago; the hardware was released in late 2012, and the system was purchased probably in 2015 I think).

I recently reported (again) some bugs with the software (not kernel related, but in the storage functionality). Support said I can upgrade to the next major version to get the fixes, though I deem that too risky myself, given the engineering group has told me they don't generally recommend that version for this hardware. It does work, and is supported, but I'm a super conservative person with storage, so I'd rather live with these bugs, which I can work around, than risk different and perhaps worse bugs in the newer version. I'm unlikely to hit such bugs given my use cases, but it's just not THAT important to upgrade, so I will run this release of software until well past end of life (late 2022), probably not retiring this piece of equipment before 2024.

(linux user since 1996)

AMD, Nvidia, HPE tapped to triple the speed of US weather super with $35m upgrade

Nate Amsden

Same network speed as current system?

This article says the network links are 200 Gigabits on the new system.

However apparently their current system has 25 GB/s:

https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne

"Partial 9D Enhanced Hypercube single-plane interconnect topology Bandwidth: 25 GBps bidirectional per link"

25 Gigabytes * 8 = 200 Gigabits.

Would have thought things would be faster on the newer system. That "bidirectional" statement may imply the current system is just 100Gbit in each direction and the new system is perhaps 200Gbit in each direction; that would be a good boost.

Give 'em SSPL, says Elastic. No thanks, say critics: 'Doubling down on open' not open at all

Nate Amsden

Re: It's your cash they're after

Curious: don't you think they are getting Microsoft Windows and Oracle DB licenses for running those products in their cloud for customers?

https://aws.amazon.com/rds/oracle/

https://aws.amazon.com/windows/products/ec2/

I certainly hope they are paying for them, and I think it's safe to assume those costs are passed onto the customers.

Signal boost: Secure chat app is wobbly at the moment. Not surprising after gaining 30m+ users in a week, though

Nate Amsden

why phone number required

I installed Signal again just now to verify the experience. The first thing it wants to do is send me a text message. It also wants access to contacts. So I deleted it. I don't use the other apps mentioned in the article either.

I think I tried it a couple of years ago with one of those virtual burner-phone services but it didn't work.

I don't get why they don't have an email sign-up option. I assume if you never grant access to contacts you can manually add your friends.

I do have Line installed. It too wanted a phone number to install; the account was created while I was overseas with another phone and a local SIM. I decided to install it on my newer phone in 2019 and it wanted a SIM card too, and it also didn't work with virtual phone texting, I think. So I bought a prepaid SIM, used it to register the app, then switched back to email authentication, removed phone rights from the app, and changed back to my normal SIM.

With Signal I think (memory is hazy, this was a while ago) I did the same process, but could not find any way to switch to email authentication, so I nuked it before removing the prepaid SIM. (It's possible I am confusing Signal with another chat app in this situation; I tested at least a couple.)

I used the Line chat app with nothing but wifi on a dedicated phone for 2 years and it worked fine (it had a SIM originally, when it was installed). I can search for friends by their username in the app, or if they are with me in person I think there is a QR code function: take a picture with the app and it adds the friend.

Are there chat apps (to keep it simple) that work from signup on a wifi-only device? I just think if you're really concerned about privacy then you'll want the option to use a dedicated device on wifi (cheaper than maintaining a prepaid SIM ongoing). Before I moved Line to my main phone, if I traveled I took both phones and used tethering.

I couldn't find any last I checked.

I wouldn't be surprised if Signal is more "secure" than Line; I just don't want to give them my phone number, so I don't use it. I'd like to though, I have heard good things.

Android devs: If you're using the Google Play Core Library, update it against this remote file inclusion CVE. Pronto

Nate Amsden

users may be more inclined to update apps

They would be, if the developers (of both OS and apps) shipped versions with just the security fixes and no new features. Also sorely lacking is the ability to easily roll back.

Having been burned too many times early on after switching to Android many years ago, I have auto app updates off and only update when I really have to.

Two cases in point. Ironically both Weather apps.

Weather.com, perhaps before their app was sold to IBM, had a pretty good Android app. Then they improved it, I guess, and wrecked it pretty royally, in my opinion anyway. Fortunately I had a backup of the older version (v4.2) and it continued to work for several years (some MINOR things broke, but 85% of the app was workable, which was better than the new official version). I was actually quite impressed by how long the older version lasted. I powered up my Note 3 just now with that app and it does not work anymore (no errors, just no weather data), but it did at least up until July 2019, when I moved to a newer phone as my daily driver.

On my newer devices I switched to AccuWeather, which also had a really nice (in my mind anyway) user interface and worked well; I paid for the no-ads version. Then recently they revamped it and wrecked it again (check Google reviews, MANY complaints). Fortunately I again had a backup and reverted to the older version. For whatever reason, since I downgraded, the notification bar doesn't update automatically anymore no matter what I do; I have to click a little icon on the bar to get it to update. But it works otherwise, and again is better than the alternative of using their new app. They started sending popups in the app to get me to upgrade, but I have ignored them. Not sure if I will be lucky enough to keep using this older version in the years to come, or if I'll need to find another weather app.

OpenZFS v2.0.0 targets Linux and FreeBSD – shame about the Oracle licensing worries

Nate Amsden

zfs in ubuntu since at least 16.04

I'm not sure if it was considered "supported" back in 16.04(2016) but am currently running several 16.04 systems with zfs with packages from ubuntu.

Perhaps the 19.10 innovation with zfs was installer support? I recall reading news about that but don't remember the version specifically.

https://packages.ubuntu.com/xenial/zfsutils-linux

No special setup; just using zfs in some cases where the compression is helpful, as the back-end SAN storage is old enough that it doesn't support inline compression (no zfs on root, just extra file systems added after the system was installed).
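The whole setup amounts to something like this; a sketch, with the pool name and device being examples rather than our actual layout:

$ sudo apt-get install zfsutils-linux
$ sudo zpool create data /dev/sdb
$ sudo zfs create -o compression=lz4 data/files
$ sudo zfs get compressratio data/files    # see what the compression is buying you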

Haven't personally used any Ubuntu that wasn't LTS since 10.04 so don't know how much further back than 16.04 built in zfs got.

AWS admits to 'severely impaired' services in US-EAST-1, can't even post updates to Service Health Dashboard

Nate Amsden

Re: what a great day

Actually, we just retired some of our earliest hardware about a year ago: a bunch of DL385 G7s, an old 3PAR F200, and some QLogic fibre channel switches. I have Extreme 1- and 10-gig switches that are still running from their first start date of Dec 2011 (they don't go EOL until 2022). HP was willing to continue supporting the G7s for another year as well; I just didn't have a need to keep them around anymore. The F200 went end of life maybe 2016 (it was on 3rd-party support since).

Retired a pair of Citrix Netscalers maybe 3 years ago now that were EOL, current Netscalers EOL in 2024(bought in 2015), don't see a need to do anything with them until that time. Also retired some VPN and firewall appliances over the past 2-3 years as they went EOL.

I expect to need major hardware refreshes starting in 2022, and finishing in 2024, most gear that will get refreshed will have been running for at least 8 years at that point. Have no pain points for performance or capacity anywhere. The slowdown of "moore's law" has dramatically extended the useful life of most equipment as the advances have been far less impressive these past 5-8 years than they were the previous decade.

I don't even need one full hand to count the number of unexpected server failures in the past 4 years. Just runs so well, it's great.

As a reference point we run around ~850 VMs of various sizes, and probably 300-400 containers now too, many of which are on bare metal hardware. Don't need hypervisor overhead for bulk container deployment.

The cost savings are nothing new; I've been talking about this myself for about 11 years, since I was first exposed to the possibility of public cloud. The last company I was at was spending upwards of $300k/mo on public cloud. I could have built something that could handle their workloads for under $1M, but they weren't interested, so I moved on and they eventually went out of business.

Nate Amsden

what a great day

I guess that's all I had to say. Moved the org I work for out of public cloud about 9 years ago now, saving roughly $1M/year in the process. Some internal folks over the years have tried to push to go back to a public cloud because it's so trendy, but they could never make the cost numbers come close to making it worthwhile, so nothing has happened.

VMware reveals critical hypervisor bugs found at Chinese white hat hacking comp. One lets guests run code on hosts

Nate Amsden

Most probably aren't affected

It seems, according to the advisory, that a workaround is to remove the USB 3.x controller. As far as I know this is not added by default; none of the ~850 Windows and Linux VMs I manage have it. I had to go and add a USB controller to even see the option appear. Have never needed USB 3 otherwise.

Even my vmware workstation at home which I use every day is using USB 1.1 controller.

score one for good defaults I suppose.

(vmware customer since 1999)

How Apple's M1 uses high-bandwidth memory to run like the clappers

Nate Amsden

Re: Apple leading the way once more

Several folks seem to think this performance will be possible on Windows anytime soon. MS partnered with Qualcomm for their ARM stuff and it seems weak by comparison; Qualcomm's ARM datacenter chips went nowhere as well. The trend of higher-performing processors on mobile, Apple vs Android, seems to have been going on for a long time. While there are others that make ARM chips for mobile, the general opinion seems to be that Qualcomm is by far the best/fastest when it comes to Android.

Things would be totally different if Apple had any history of licensing their chip designs or even agreeing to sell their chips to other companies but they have no interest in doing so(no signs of that changing). Also not as if MS (or google) can encourage Apple financially given Apple has so much money in the bank.

Apple has certainly accomplished some amazing stuff by vertically integrating all of this, really good work. I'm certainly not their target market so won't be using this myself but for many people it will be good.

Will be interesting to see how this affects market share in these segments I'm guessing Apple will pick up quite a bit vs Windows. Lots of folks touted OS X as being a great easy to use OS, but add into that this new processor and the speed/battery savings it gives it's pretty amazing.

If anything, this obviously won't inspire significant fear in Qualcomm or other ARM vendors, because of Apple's locked-in ecosystem: they can't sell into iOS/OS X, and vice versa. Just look at the progress of processors in the wearable space for comparison. I have read Apple has made quite a bit of progress there over the years, while many others either got out of the space or let their designs sit for years without improvements.

Since MS can't go to Apple to buy chips, they are sort of stuck. Same for Google. Sure MS or Google could design their own chips like Apple but it would take many years before they are viable like this (assuming they ever get to that point before being killed off).

HP: That print-free-for-life deal we promised you? Well, now it's pay-per-month to continue using your printer ink

Nate Amsden

no printer at home since 2004

I rarely print. The last time I printed regularly at home was 2003: I would print out my resume along with mini-CD labels for business-card CDs with a bunch of samples on them, attach the CD to my paper resume, and snail-mail it to job applications (in addition to applying online); I figured it was a good way to get noticed at the time. Anyway, I got a new job in 2003 and my printing needs sort of stopped. When I needed to print, I printed at the office.

Fast-forward a few jobs and many years: I shifted to fully remote work in 2016. I was working from home prior to that, but the office was close by (about a mile). In 2016 I moved 90 minutes away.

I started using FedEx Office for my printing needs. I have to drive 15 minutes each way to get the printouts and there's a $1 minimum for submitting jobs through their website, but it works well. I probably go on average 5 or 6 times per year and spend on average $8 to $12 per year on those jobs.

There is a UPS Store that is much closer and they claim to do online printing too, but last time I checked I could not find a way to submit a simple 1-page job (or a few 1-page jobs). It seemed geared toward project-level stuff, but maybe that's different now.

Capita still wants to offload education software unit, sale talks ongoing

Nate Amsden

quite confused

The article references a "peak" share price of about 8, but links to a site which shows the current share price in the low 30s

https://www.investegate.co.uk/CompData.aspx?CPI

Looking at the past few years of performance it seems as if they've probably done several reverse stock splits? It seems back in 2015 they peaked at around $800 according to finance.yahoo.com.

I think the article should be updated to reflect the split adjusted(?) pricing.

Banking software firm tiptoes off to the cloud with MariaDB after $2m Oracle licence shocker

Nate Amsden

not difficult to optimize cost for Oracle in VMware

I did it back in the 2007-2008 time frame. When I was hired, the company was undergoing an Oracle audit: they were licensed for Oracle SE One, but their DBA consultants had installed Oracle EE. I pushed to move to SE, but my manager didn't trust me yet and assured us that after we paid the fees everything was all cleared up. The following year we had another audit and failed again. This time I was able to lead the charge to move to Oracle SE, which at the time (I assume still now) had a per-socket charge (max 4 sockets in a system) rather than per-core.

So moving to Oracle SE was huge. As it turned out, really the only reason the consultants installed EE was that their monitoring system used partitioning, which of course hit us with another license breach. I recall buying new CPUs for our Oracle systems, going from high-speed 2-core (better licensing for EE) to lower-speed quad-core (better licensing for SE). I ran into a limitation on the DL380 G5 systems we had: our system boards were too old to run quad-core, something not even HP knew at the time (they later updated the spec sheets to reflect it); they replaced the boards and installed the new chips.

From a VMware perspective we did two things: we limited where we used Oracle (didn't allow it to run on just any VM host), and we ran our DL380 G5s with a single socket, cutting licensing further. This officially wasn't supported by VMware (ESX 3.5) at the time, though I believe their intention was that they didn't support running with a single CPU; I had no doubts a single quad-core CPU would work fine. VMware sold licenses at the time in pairs of sockets, so we used one socket license on one system and one on another, saving costs even more (in hindsight perhaps this was a VMware license violation; I never looked into that aspect). We never ended up needing VMware support, and Oracle had no issues with our new setup. I remember specifically having to educate our Oracle rep(s) on Oracle SE being licensed per socket rather than per core. Our production OLTP servers ran on physical single-socket DL360 G5s, partly because I wanted no performance impact from VMware, but also because Oracle's support policy at the time was "reproduce on physical hardware before we support you in a VM"; we didn't need that extra risk for the production OLTP. That, and we just had VMware Standard, so no VMotion, no HA etc. Just the basic hypervisor.

Obviously Oracle SE has far fewer capabilities than Oracle EE; the biggest one we missed at the time was Oracle Enterprise Manager with its query reports. Though at the time (10gR2 I think?) you were still able to install OEM on SE. It was easy to install, and to delete in case they came to audit again (they did not). From what I could tell, newer versions of Oracle made it impossible to install OEM on SE (or at least it wasn't dead easy like it was back then).

Though I'm sure Oracle SE is very cost effective and far easier to migrate to than MariaDB; not only that, Oracle SE has many more capabilities than MariaDB even.

Unfortunately there is nothing in the MySQL world in 2020 that comes close to those Oracle query reports I had access to in 2008. Our DBA does log many queries and can run query reports, but it is a very time-consuming process and can have quite a lot of overhead (if you're not careful your query logs can easily be multiple GB per hour, which means query reports can take hours to return results). I have used tools like ScaleArc and Heimdall, which are MySQL-aware proxies that come with real-time analytics, though they are limited in that they can only see the queries and the response times; they can't get the in-depth metrics of what is going on inside the DB for a given query.
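The usual routine, for anyone curious; a sketch, with the threshold and log path being examples, and the reporting step assuming Percona's pt-query-digest is installed:

MariaDB> SET GLOBAL slow_query_log = ON;
MariaDB> SET GLOBAL long_query_time = 0.5;   -- log anything slower than 500ms; 0 logs everything
$ pt-query-digest /var/lib/mysql/myhost-slow.log > query-report.txt

That last step against a multi-GB log is where the hours go.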

I do get tons of internal metrics on each MariaDB server we have, even query response times (can't see WHAT queries, just the number of queries at given response-time thresholds), probably 200-250 metrics per server per minute or something. But it still pales in comparison to the internal query-level metrics available in Oracle in real time.

In a more modern world, if I needed to run Oracle on a larger scale in VMware, I'd build a dedicated VMware cluster for nothing but Oracle DB: run the apps on other servers, and limit the licensing impact. I did run a very small Oracle system at my current org for many years as the backend database for our vCenter 5, given the choice was MSSQL, Oracle, or IBM DB2 (the internal vCenter DB cannot scale very high at all). For me the choice was obvious: Oracle on Linux. I used Named User Plus licensing (so not per CPU or per core) on Oracle SE and it was dirt cheap, just a couple hundred a year or something. Eventually retired it almost 2 years ago after finishing the migration to vCenter 6.5 on VCSA, which uses an internal Postgres database.

Had many conversations over the years with Oracle; they never cared about auditing us (I was ready regardless). We did, however, undergo our first VMware audit last year, and we failed. I wasn't aware we couldn't move 3 nodes of Essentials Plus licensing from our UK office, which had permanently shut down, to our California office. The licensing was only in use for a few months, but we still had to pay the fees (which meant buying a new Essentials license that we used for 1 month before transitioning to Enterprise Plus licensing, as I retired some older systems from our primary datacenter and gave them to our HQ office with the VM licensing intact).

Fortunately we didn't get hit with a VMware audit a couple of years earlier: the VAR that sold us our licensing for our datacenter stuff in the EU bought everything in the U.S. (through HP) and shipped it to the EU, and we would have had to pay probably more than 10x the fees for that. That location was permanently closed 2 years ago (we moved the systems to the U.S. to serve EU customers), and then the business decided to close all EU operations entirely last year.

Having a region lock on a license code is just so stupid. I can understand region locks for things like on-site support (we had to jump through a lot of hoops (too many, and it took too long) to ship our HP gear from Amsterdam to the U.S. and get it under support again), but otherwise it makes no sense to say you can't use a license code you bought in the EU on a U.S. system, even though the EU system is permanently shut down. It's not as if this was a site license, just a one-off license purchase.

OpenStack at 10 years old: A failure on its own terms, a success in its own niche

Nate Amsden

don't forget cheap

Customers wanted the ease of use and the scalability, and they didn't want to pay VMware etc for licensing. Perhaps OpenStack will get a big boost from the new EU cloud project or whatever.

The move from utility computing (the term I prefer, though it's not often used) to (infrastructure) cloud computing comes at an astronomical relative cost, and in my opinion in most cases isn't justified. Huge cost increases come from huge increases in complexity etc.

Atlassian pulls the plug on server licences, drags customers to the cloud

Nate Amsden

Re: GitBlit + XWiki

As someone who has been a fairly hard-core Confluence user over the past ~13 years or so, I have to agree that XWiki seems to be a good alternative; I installed it on my home network a few months ago, though I don't have too much in it yet. At the org I work for, we have been on the Atlassian cloud since before I was hired more than 9 years ago now, I think (we briefly brought some things in house, such as SVN and Fisheye, when they retired their cloud versions of those systems, until the developers eventually moved on to something else and we stopped using those too). The user experience has degraded significantly during that time, and it's really sad to see. I've complained many times but 99% of the time they just ignore it (par for the course for most SaaS systems, I believe).

But I have great hopes for Xwiki.

If you want to practice writing exploits and worms, there's a big hijacking hole in SonicWall firewall VPNs

Nate Amsden

Re: Interesting quote from SonicWall website

Got a reply from support, who say that if you don't have SSL VPN enabled, and you don't have management access enabled externally, the risk of exploitation is very low. I assume there may perhaps be a way to exploit one of these things on an internal management interface, but that's still unclear. The English of the person who replied to my ticket wasn't that great, but they said "So the conclusion here is that we do not need to do anything for SonicWall DoS & XSS Vulnerabilities as our setup is good."

Hopefully they update the advisory to clarify this more.

Nate Amsden

Re: Interesting quote from SonicWall website

SMA is a totally different product line, which is not affected.

What is sort of unclear to me(as a Sonicwall customer) and I have opened a support ticket this morning on it, is the Tenable blog says disabling SSL VPN on the firewall can work around the issue. I have not had SSL VPN enabled on my NSA firewalls in many years because, well last time I tried it, it was a basically unusable. The Sonicwall SMA appliance was an acquired technology I think(forgot from who) and was supposed to be a much better platform. In the end my org wasn't able to use SMA I think because of complications with tieing it into Duo at the time(this was 5 or 6 years ago, it may of been related to inline enrollment with Duo), so ended up going with Pulse Secure (still have Citrix Access gateway which was originally the SSL VPN product used, had to jump off of Citrix due to design problems with their OS X client at the time I had support tickets spanning more than 1 year on that before they finally came clean I guess, Windows clients were fine - I still use Citrix Access gateway to this day). Of course both Citrix and Pulse have had their share of vulnerabilities too.

Anyway, I asked them to clarify whether disabling SSL VPN entirely removes any possibility of exploitation. I think it does for several of the vulnerabilities, and I'm assuming it does for all of them, but their website (https://psirt.global.sonicwall.com/vuln-list) really lacks any detail on this issue as of yet. They have another page, https://www.sonicwall.com/support/product-notification/sonicwall-dos-xss-vulnerabilities/201015132843063/, which has more info, but again most of the vulnerabilities seem specific to SSL VPN. There is no solid clarification as to which ones are not an issue if SSL VPN is disabled.

Oracle starts to lose patience with Solaris holdouts

Nate Amsden

Re: Why?

SPARC support, for one (the article implies many customers are running SPARC on hardware older than 2010 that cannot run 11.4).

https://www.freebsd.org/platforms/sparc.html

"UltraSPARC is a Tier 2 architecture through FreeBSD 12.x. It is no longer supported in FreeBSD 13.0 and later"

https://www.freebsd.org/doc/en_US.ISO8859-1/articles/committers-guide/archs.html

"Tier 2 platforms are functional, but less mature FreeBSD platforms. They are not supported by the security officer, release engineering, and port management teams."

Official support for Oracle server software, for two.

http://www.orafaq.com/wiki/FreeBSD

"Oracle Corporation does not officially support the Oracle database or any other of their products on FreeBSD."

At this point I'd wager the bulk of the Solaris installs out there are for running Oracle server software. If there were to be a migration, it would likely be to Oracle Linux.

(Last time I touched Solaris was 2006)

It's 2020 and a rogue ICMPv6 network packet can pwn your Microsoft Windows machine

Nate Amsden

reminds me..

Reminds me of the basic packet exploits against Windows systems back in the '90s. I think teardrop was one, and then there was the "ping of death" and others, though at the time those just caused crashes; I'm not sure whether they were able to execute code as well.

Before you buy that managed Netgear switch, be aware you may need to create a cloud account to use its full UI

Nate Amsden

"No subscription equals your office hardware bricked"

I think I have read that Cisco has some product lines that do this (newer product lines that have been quite controversial with their subscription model). I don't know the specifics, as I have generally tried to avoid anything Cisco for at least 15 years now.

The org I work for uses Aerohive for corporate wireless, which is a cloud-managed product (the company was purchased by Extreme Networks last year, I think; I've been an Extreme Networks Ethernet customer for 20 years now). Last year our subscription lapsed. I asked support in advance what would happen, as we were just about to expire and our vendor was struggling to get a quote. They said there would be no issues; we just wouldn't be able to manage the devices. We ended up getting the order in on time, but it got canceled for some reason (we never got a notification; I had assumed that once the order was processed Aerohive would automatically update the subscription on their end and things would just keep working), and we ended up with no subscription for several months. It took me that long to need to log in again and realize the order never went through, even though our vendor had confirmed it was received by Aerohive and being processed several months before.

At the end of the day there was no impact; the network stayed up and was fine. The only reason I had to access the management UI at the time was to do a quarterly "rogue AP scan" for PCI; otherwise it was at least another 7 months before I actually needed to make a configuration change (adjusting radio strengths).

Even without a subscription, I would wager that I would be able to log in to the command line of the Aerohive access points and make changes there if needed. I mean, I can already log in and make changes; I don't think the AP would phone home and say "hey, I don't have a subscription so you can't make any changes", but it's technically possible, I suppose. I logged into the CLI of the APs many times while cleaning up the configuration last year. I monitor each individual AP via SNMP with our general monitoring tool, and even without a subscription there were no issues gathering metrics either.
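
To give an idea of how little the vendor cloud matters for that last part: polling an AP over SNMP is just a standard query from the monitoring host, which any SNMP library can do. A minimal sketch in Python using pysnmp, querying the standard MIB-II sysUpTime OID (the IP address and community string are hypothetical placeholders, and this is a generic example rather than anything Aerohive-specific):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query sysUpTime (standard MIB-II OID 1.3.6.1.2.1.1.3.0) from a hypothetical AP
errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),        # SNMP v2c; community string is a placeholder
           UdpTransportTarget(('192.0.2.10', 161)),   # AP address is a placeholder
           ContextData(),
           ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0'))))

if errorIndication:
    print(errorIndication)        # e.g. request timed out
else:
    for name, value in varBinds:
        print(name, '=', value)

Nothing in that path touches the vendor's cloud, which is consistent with metrics continuing to flow while the subscription had lapsed.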

While you lounged about all weekend Samsung fired up its biggest-ever chip factory and started cranking out 16Gb LPDDR5 DRAM

Nate Amsden

Re: Always registered.

I haven't paid close attention to memory specs since the days of Socket 7; it just got so confusing. Anyway, if I need memory I just go to Micron's site (or Kingston's if I have to) and use their memory selector to find what's compatible. Historically I've preferred Micron (since I learned, maybe 15 years ago, that Kingston used several different manufacturers, at least at the time), but either way the selector takes the complexity out of figuring out exactly what type of memory you need for a given system.

Ex-Autonomy CFO Sushovan Hussain loses US appeal bid against fraud convictions and 5-year prison sentence

Nate Amsden

Re: This is still HP's fault

I've never bought a house, so perhaps someone could chime in. I read a thread recently about someone buying a house only to find out it had hidden water damage worth something like 25% of the value of the house; the inspector didn't flag it. Someone replied that if the buyer can provide expert testimony that there's no way the seller could not have known about such extensive damage and failed to disclose it, then it's not difficult to win a suit against the seller to make them pay for it.

Relying on plain-text email is a 'barrier to entry' for kernel development, says Linux Foundation board member

Nate Amsden

Re: So not just about plain text email

I'm sure I'm in a tiny minority here when I say I host my own email (since 1997), and I even have it automatically strip most HTML tags. Sometimes that means an email is hard to read, and in rare cases it ends up completely blank, but most of the time it's fine. Also, of course, I compose in plain text.
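
Tag stripping like that doesn't need anything exotic; the parser in the Python standard library is enough to get the idea across. A minimal sketch of the technique (a generic illustration, not the actual filter in my mail setup):

from html.parser import HTMLParser

class TagStripper(HTMLParser):
    # Keep only text nodes; tags and their attributes are dropped entirely
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html):
    stripper = TagStripper()
    stripper.feed(html)
    return ''.join(stripper.parts)

print(strip_tags('<p>Hello <b>world</b></p>'))   # -> Hello world

The blank-email case is exactly what you'd expect from this approach: a message whose only real content is markup or images has nothing left once the tags are gone.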

I haven't spent much time messing with my email system over the years, other than OS updates. Hell, just recently I dug into some errors and it turns out one of the RBLs I was using shut down about 7 years ago. Wow. Then I saw another RBL had shut down 14 years ago. They never caused an issue other than errors in the logs, but I finally removed them.
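
For anyone who hasn't run their own mail server: an RBL check is just a DNS lookup of the reversed client IP under the list's zone, which is why a dead list tends to show up as DNS errors or timeouts in the logs rather than as mail being blocked. A minimal sketch in Python (the zone name and IP are placeholders for illustration):

import socket

def rbl_listed(ip, zone='rbl.example.org'):
    # Checking 192.0.2.1 against the zone means looking up 1.2.0.192.rbl.example.org
    query = '.'.join(reversed(ip.split('.'))) + '.' + zone
    try:
        socket.gethostbyname(query)   # any A record back means the IP is listed
        return True
    except socket.gaierror:
        return False                  # NXDOMAIN means not listed

print(rbl_listed('192.0.2.1'))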

I was forced to switch to HTML mail for work last year; I usually use OWA on Office 365. However, they introduced a critical bug breaking composition of plain-text emails by auto-removing line breaks. I complained to IT, who brought it up with MS. (UPDATED) More than a year after I reported this issue, the line-break problem still exists today. The message in the "Sent" folder looks fine, but when it came back to my inbox it was mangled.

Nate Amsden

Re: Where's the IT content?

Haven't talked with Sarah in probably 12 years, but I knew her when she was a consultant at a Seattle company named Blue Gecko (they were DB experts; the company was later sold). She was really good back then, and it has been interesting to see her rise so much in importance (?) since then. Meanwhile, I'm still doing the same sort of things I was 12 years ago. More refined, of course, but I was never interested in any radical departure from my core skillset.