Like Dell+VMware w/vSAN.
I'd be a little curious to be in a room with teams from both groups and see how the conversation goes.
4G isn't even sorted out yet, so I'm not holding my breath for 5G. I can count on one hand the number of times I was doing something on 4G and the speed was really good (above 10-15Mbit) over the past 6 years. I think that count is 2 or 3 (one of those times was in a Vegas conference hall where they obviously had repeaters of some kind inside).
Meanwhile I can go to many busy places on 4G and not have enough bandwidth to even get DNS resolution to work.
It's not my phone, since I have tried a couple of other phones(new and old) which behave about the same.
Carrier is one of the top two in the U.S.
The California city I'm in has a population of 200,000+. (That and I've traveled a bunch of places and LTE generally sucks everywhere, though usually I can get 2-6Mbit).
If there was an easy way to switch my phone to 3G on the fly I would, but it requires 2 reboots and removing the SIM card to get around the locks that the OS/carrier/whatever have on it. I do flip the mode whenever I travel outside the U.S., though.
5G sounds promising for fixed wireless communications at least on paper. It seems to make absolutely no sense for mobile phones. It's just a gimmick and will be for years to come.
The article says the 3rd party was the FBI, so it's not surprising they didn't know until the FBI told them. I saw a stat a few years ago that said something along the lines of intruders having network-level access for about 190 days on average before being detected (the stat was quoted by the then-CTO of Trend Micro). I think the number of days has been going up slightly in recent years as well.
The one thing the article doesn't specifically cover is how much, if any, of the source code was taken. They say corporate network; I have no idea if that includes development stuff or not.
Security is a hot topic these days, but for the foreseeable future it will continue to be a losing battle for just about everyone (especially against state actors, APTs etc) - not a game I'd like to play.
There's a huge distinction between being rejected and not getting a visa in time because of a backlog of requests. I don't know if there are restrictions on how far in advance you are able to request a visa. I poked around and it seems the main form is the DS-160 (which I helped someone with a couple of years ago), though I don't see, at first glance anyway, whether or not you have to file within X number of days of travel.
The article implies the person was not (yet at least) rejected; they just hadn't gotten the visa processed in time for the event.
So many top-level domains.. I got on a support call recently and they said go to <vendorname>.support. I asked them to confirm, as I expected something longer like <vendorname>.support.somethingelse.[some well known TLD]. But nope, the actual TLD was .support.
I guess I must admit I have yet to encounter many of these TLDs in real world usage.
Per the .dev stuff, I assume if people use .dev and host their own internal DNS they could override the behavior? Provided of course the browser isn't sending DNS requests directly to the interwebs. I don't use Chrome nor do I use .dev so I'm not sure, but am curious.
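For what it's worth, overriding a TLD internally is easy enough on the DNS side; a dnsmasq sketch (the zone name and IP here are made up for illustration) would be something like:

```
# /etc/dnsmasq.conf - answer .dev queries locally instead of
# forwarding them to upstream resolvers
local=/dev/
address=/myapp.dev/10.0.0.5
```

Whether the browser respects it is the real question - if it sends queries to its own DoH resolver, or (as I understand Chrome does with .dev) forces HTTPS via the HSTS preload list, a local DNS override only gets you part of the way.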
I don't use DD, I use HPE StoreOnce. But DD has a cloud tier option, looking at their specs it ranges from a usable capacity of 96TB on low end to 3PB on the high end.
HP on the low end starts from 94TB usable cloud capacity to 5.2PB on the high end. On the HP end there are some restrictions on how this can be used. e.g. all of my usage of StoreOnce is over NFS, which means no cloud tier available even if I wanted to use it.
I've got no idea what if any restrictions there may be on the DD stuff.
(I haven't used the HP or anyone else's cloud backup stuff)
Per your x86 cruft comment, Intel did try to push exactly that concept: get rid of x86, replace it with Itanium. Didn't work so well. I'm sure Itanium wasn't the best, but they probably still spent billions of dollars developing it hoping to kill x86. I think it also wasn't the first time Intel wanted to kill x86 - didn't they try something much earlier, the i860 or i960 processors or something? I remember reading something along the lines of those being the first processors MS built NT on, only porting it to x86 later (and Alpha and MIPS and PPC..).
As for peer review, I find it funny to see comments like this. This obviously isn't a new issue; this stuff has been in the chips for more than a decade. No real stink was made (outside of the OpenBSD folks, who I recall harping on hyperthreading and other stuff about 10 years ago). Lots of people knew the architecture; it wasn't top secret.
For me personally I am not patching my systems(at least at the firmware level). The risk outweighs the benefit. My laptop(Lenovo P50), and my personal servers(both run recent Intel Xeons) are not getting fixed for this stuff.
I haven't had a known security incident on any of my personal systems, hardware or software, since - I think - something like 1992, when my 486 computer at the time got the Stoned virus. Though I don't recall it doing any damage, and I don't remember if anti-virus took care of it or what.
Professionally I haven't had a known security incident, hardware or software, on any of my equipment since 1997. I was running a small ISP, and someone who had a legit shell account on one of my Linux servers decided to hack it. I was involved in software piracy back then, so not everyone I knew was super trustworthy. They were detected within seconds though (I was logged in at the time, and detected it by them being stupid and firewalling my IPs from contacting that server; the system was disconnected from the network within an hour or so and rebuilt).
I have assisted in a few security incidents of things that I had access to (but was not responsible for) though. Presently I manage more than 1,000 virtual servers and server hardware and networking and storage that run under them. So I have a decent amount of experience.
So yeah, my ~22 years of online experience, many of which spent running internet-connected services in both personal and professional capacities, make me believe the risk of this is far overblown for MOST people (exceptions being shared environments with untrusted workloads, e.g. public cloud providers, or high value targets).
The knee jerk reactions to most of these security things are just crazy. It would be different if there was an active exploit available, something that is networkable and can infect/spread/worm itself etc.
There's far more critical security related things to patch or secure from than this.
I believe the most vocal people talking about this stuff are mostly the hard-core AMD fans who want Intel to fail so AMD can rise up again. I can certainly understand that angle, though it's not going to happen.
One thing to keep in mind, if someone (say a state actor) really wants in, they will get in. Doesn't matter if you have all the patches, they will find a way in.
Curious, can you quantify "forever"? I have a Note 4, though I don't use it too much; it is more of a backup of a backup. My daily drivers are Note 3s (my first Android phone). One has Android 4.4 (my main) and the other has 5.0 (I prefer 4.4). Anyway, performance-wise it seems fine. Literally on day one of having the Note 3 I installed Nova launcher and it has been my launcher of choice on all Android devices since (I'm not sure how much, if at all, that may influence the performance of the device).
I don't use many apps, no social media, no banking, mainly built in email, SMS, firefox(with ad blockers - mobile is the only place where I use actual ad blockers), the built in gallery app, google maps(I use an older version the newer ones have too much crap in them). I do have about 60 apps installed though overall usage of them is much less frequent/consistent.
Apps I am less sure of privacy/security wise (that I otherwise want to try or use) go on the 2nd Note 3 or sometimes the Note 4 neither of which have any personal info on them. The wifi in my home is strictly DMZ I guess you could say, I have some ports open on the firewall for my IP cameras to be reached by the phones but otherwise the phones have no need to access internal network so they don't have access.
I have been interested in a new phone just because well this one probably won't last forever. Though it's hard to decide what compromises to make since all new phones would be some form of serious compromise for me (over Note 3), the only exceptions I think would be camera, CPU performance, and memory (the only areas I care about upgrade wise).
Everything else - having a removable battery (I change mine once a year to keep it fresh), IR blaster, MHL, flat screen, headphone jack, reasonable bezels, wireless charging, and having something reliable (at 6 years this Note 3 is by far the longest I have had a single phone as my daily driver, and I've never needed to repair it - though the gyroscope has been failing for the past year or two, which doesn't matter much).
At this point I don't care if it's $1200 if it can last 6 years.
This seems to have nothing to do with azure itself just bad or missing testing on the Ubuntu update.
All of the big cloud providers are "built to fail" - as in, you should expect failures to happen quite frequently that are not easy to recover from, short of rebuilding, restoring from backup, or having an app with better redundancy to handle that kind of thing. So most of that stuff is so common it doesn't make the news. For the org I am with, for example, we haven't lost a VM in the ~7 years since we moved out of public cloud.
Azure I think gets more headlines to some extent as they have more SaaS offerings that are critical like Office 365/email etc. SaaS should be more resilient to those types of faults but it seems in many cases it is not quite there yet.
Larger scale cloud issues certainly hit the news though, El reg has had quite a few on Google and Amazon too.
I have been a Wells Fargo customer since the 90s.
I don't use the online stuff often, maybe once a week or something. But I just logged in, no issue (8:40 PM Pacific time). Someone mentioned they couldn't move funds between accounts (two checking accounts - though the transfer doesn't use Wells Fargo itself, it uses Zelle or something?? but it's fully integrated into the UI). I just did, no issue. I don't really use the online banking for much more than viewing the balance and seeing the transactions.
From those aspects everything seems perfectly normal. I even clicked on a check I wrote and the image for the check came up immediately.
No errors anywhere.
I do not, and have never, used their mobile app; maybe that is more impacted, I don't know.
How is a DNS issue related to CenturyLink (a telecom provider, and I guess colo too)? We'll probably never find out.
(not a customer of either, just confused what kind of DNS setup MS would have that would have their internal services reliant upon an external DNS provider).
If my external DNS (Dynect) went down completely or got corrupted or whatever, the worst thing that happens is users can't resolve the names, or resolve to the wrong place and end up not being able to use the services. Internal DNS has dedicated zones (even duplicates of a dozen or more external zones, to override external IPs with internal ones in some cases), so nothing would be affected internally. It certainly wouldn't cascade into database failures or data loss or anything remotely like that.
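The internal-override zones described above are basically split-horizon DNS; a minimal BIND sketch (zone name and file path are hypothetical) on the internal resolvers would be:

```
// named.conf on the internal resolver: serve our own copy of a
// zone that also exists externally, so internal clients get
// internal IPs for the overridden names
zone "example.com" {
    type master;
    file "/etc/bind/internal/db.example.com";
};
```

The catch is that the internal copy has to duplicate whatever external records internal clients still need, since BIND answers authoritatively from the local zone file and never falls through to the external zone.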
I've used Cyrus for the past 19 years now.. for email it works great (though the migration from v1 to v2 was quite painful). I haven't run email for a corporate-type environment since 2002 though, and at the time it was Cyrus. Since then the only email hosting I have done is just my personal and family stuff.
I don't have opinions strong enough to try to talk someone else into what email solution to use, but I wanted to (sadly) mention something I noticed last year, which made me kinda feel sick inside (the main reason of course being they built Cyrus; I'm not sure how much involvement they have in it today).
I have been a user of Office 365 for the past 6 years or so (I don't work in corporate IT so have never managed Exchange). I don't have major complaints. I'm certainly not the Office power user who leverages their stuff more, so I can understand those who need that groupware functionality. I could get by with just IMAP without an issue, though I know many others need much more than that. Office 2010 on Windows 7, OWA on Linux, and email on Android all seem to work OK.
Nobody who is not experienced, or at least willing/eager to dive deeper into Linux, should be using something like Debian. For those folks, anyway, the specific thing mentioned in the article is a non-issue to begin with.
I started with Slackware 3.something back in 1996 (instead of Red Hat, which was the only other option I was aware of at the time) specifically because I wanted to get more into Linux. Went off building (eventually) my own kernels, libcs, X11s, GNOME, KDE, whatever. With Red Hat of course you could/can do the same, though the lack of a similar formal testing/package repo (to me, at the time at least) meant I didn't want to use it.
Tried Debian in 1998 (Debian 2.0) on the recommendation of someone I knew online at the time. Still remember spending 2-4+ hours in dselect (oh the pain) choosing packages those first few times I installed. apt-get came later (Debian 2.2?). Ironically enough, I still find that tooling vital these days for just one reason: dpkg --get-selections and dpkg --set-selections make things very easy when building new, similar systems that don't otherwise have/need massive automation, such as my personal servers, laptops and desktops (the latter run Mint, which is still Debian-based).
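For anyone who hasn't used it, the round trip is just the following (a sketch; the restore commands are commented out since they modify the system):

```shell
# On the machine whose package set you want to clone, dump the
# selection state (one "package<tab>state" line per package):
dpkg --get-selections > selections.txt

# On a freshly installed target of the same release you would then run:
#   sudo dpkg --set-selections < selections.txt
#   sudo apt-get dselect-upgrade
```

apt-get dselect-upgrade then installs or removes whatever is needed to make the system match the recorded selections.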
Because hosting it on HTTPS makes it totally secure, right? HTTPS protects against some things, but introduces extra complexity (good luck troubleshooting when you don't have the SSL key) and a performance hit (which can be huge depending on your settings, for a site shoveling as much data as Debian's mirrors likely are - and they are mirrors, after all). I'm all for making HTTPS an option though, for those that are super paranoid.
There seem to be approx 418 mirrors on Debian's site https://www.debian.org/mirror/list if my quick checks map out. Of those I see about 177 valid HTTP responses on HTTPS ports. I did not attempt to do anything other than view the Debian directory (with wget, and I told it not to validate the certificate since wget doesn't know all CAs; there were ~80 SSL cert errors reported, ranging from unknown CA to expired cert to "no certificate subject alternative name matches").
Personally I'd be more concerned about people hacking into debian's systems (or even the mirror you're connecting to) and uploading bad packages than I would ever be of someone doing a MITM on one of my systems. I think overall the chance of a real problem is VERY low for most people. No reason to freak out, but freaking out generates the headlines I guess.
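For those who do want it, HTTPS for apt is a small change anyway; on stretch-era Debian it's roughly this (deb.debian.org shown since it supports TLS; check whichever mirror you prefer):

```
# apt-get install apt-transport-https   (no longer needed from buster on)
# /etc/apt/sources.list
deb https://deb.debian.org/debian stretch main
```

Note this changes nothing about the compromised-mirror scenario above - that's what the signed Release files are for, HTTP or not.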
(Debian user since 1998 - though switching to Devuan)
From the previous article https://www.theregister.co.uk/2015/11/06/amd_sued_cores/
"it claims it is impossible for an eight-core Bulldozer-powered processor to truly execute eight instructions simultaneously – it cannot run eight complex math calculations at any one moment due to the shared FPU design"
This article seems to be referring to desktop processors though I assume the Opterons at the time were affected as well ? (I have several Opteron 6176 and 6276s in service still as vmware hosts - though checking now at least Wikipedia says only the 4200/6200 Opterons were bulldozer).
So if desktop processors were affected I am curious what sorts of apps would be impacted seriously by this? I mean I expect in most games and 3D rendering type apps that GPUs are far more important than FPU for math calculations. Perhaps media encoding ? I think that is often accelerated by the MMX/SSE type instructions.
I would assume that CPU(FPU) based math would be more common in the HPC space (even with GPUs), and I can certainly see a case for an issue there - however at the same time I would expect any HPC customer to do basic testing of the hardware to determine if the performance is up to their expectations regardless of what the claims might be. Testing math calculation performance should be pretty simple.
I want to say I was aware of this FPU issue years ago when I was buying the Opterons, and then, as now, I didn't care about the fewer FPUs; I wanted more integer cores (for running 50-70+ VMs on a server). I really have had no workloads that (as far as I am aware at least) are FPU intensive. Though it certainly would be nice if it were possible to measure FPU utilization specifically on a processor, much like I wish it were easy to measure PCI bus bandwidth utilization (not that I have anything that seriously taxes the PCI bus - again, that I am aware of - but having that data would be nice).
I think back to when Intel launched their first quad core processor, or one of the first; I think it was around 2006-2007. They basically took two dual core processors and "glued" them together to make a quad core. I remember because AMD talked shit about Intel's approach, as AMD had a "true" quad core processor.. fast forward a decade and it seems everyone is gluing modules together.
Quite surprised, though perhaps I shouldn't be, at the lack of comparison to the HP (Palm) Veer. Almost an identical form factor and target market. It was the last WebOS phone to fully launch (I don't count the Pre3 as it was canceled quite suddenly).
The Veer was quite cute, with no built-in headphone jack, though it did have a magnetic attachment that provided a headphone jack (and micro USB). It did have a slide-out keyboard, which this new Palm lacks. It also had wireless charging. No WebOS devices had expandable storage, unfortunately.
I don't know what the Veer's sales were like, but I remember the comments at the time: with the market going to bigger and bigger screens, it seemed crazy to go in the exact opposite direction, though I appreciate the risk they took trying something different. I bought a Veer myself; I think it was basically free when I renewed my AT&T contract at the time (was using a Pre3). Never really had a use for the Veer outside of a toy. Ended up giving it to a WebOS developer a couple of years later.
Still have my AT&T Pre3 sitting in a box, along with a French language version of the Pre3. My nearly 6 year old Galaxy Note 3 remains my daily driver and my first Android device.
Curious, how many reboots is a lot these days? My primary phone has been a Note 3 for 5+ years, and it was last rebooted 112 days ago. I think that reboot was me pulling the battery out to get to the SD card to copy about 45GB of files over to it directly before a 2-week trip (instead of through the phone, which is a lot slower).
Not sure what version of Exchange-like functionality Office 365 has, but I have Outlook 2010 on Windows 7 connected to Office 365 with no issues, and have since at least 2013 I think. Prior to that the company I am at was using hosted Exchange at Rackspace, and it worked fine there too.
official support looks to conclude in Oct 2020 (updates etc). Haven't heard/seen anything saying when or if outlook 2010 will stop working with office 365 exchange.
Most of my email is done via Outlook web access from Linux but I do keep outlook running 24/7 in my windows VM sometimes it is useful.
I'm curious what most people might find more "offensive" if they were given a choice
1) swear words or 2) USING ALL CAPS
I remember at my first job back in the 90s they used an old IBM system over serial ports that only accepted input in all caps. One day the CEO of the company (~120 people) sent me an email saying "NATE PLEASE COME SEE ME IN MY OFFICE". I was shook up at the time but quickly learned he wasn't upset; he'd just forgotten to turn off caps lock after switching out of that app.
I think if I had to choose, CAPS would be more strongly felt than swear words. Hell, in many cases people use swear words for comedy.
Routing something that far away will of course kill latency, and short of something like residential broadband or mobile connections, customers would likely see that drop in performance/throughput and report it.
I was involved with one such incident, which I kept the traceroute for, in 2004. The story I was told was that a fiber cut in the midwest of the U.S. caused some previously unused BGP announcements to route traffic for the company I was with (an AT&T customer at an AT&T facility near Seattle) through Russia. Packet loss got as high as 98%, and since we were an online application we had to take our site down, as it was wreaking havoc with transactions. Took AT&T and friends 6-7 hours to put the filters in place etc to resolve the issue.
Traceroute from the time (source and destination addresses both in Seattle area)
32 hops, 98% packet loss and 280ms later arriving at the destination.
.local I think is mainly used in Windows and Mac environments? I've never used it in Linux anyway. At the org I work for we have 47 domains hosted internally, some of which are both internal and external. All are .com, or maybe .co.uk, .de, .fr etc.
Used to have over 100 but cut it down by a bit last year.
So saying .local isn't affected does nothing for what may be my situation at some point.
I'll admit I haven't followed this closely. Just looking now at DNS over HTTPS, for example: other than some web browsers and Cloudflare (maybe some other public recursive systems), where are the other implementations of this? (Specifically looking for server implementations, not stuff locked away in a service provider black box like Cloudflare.) Speaking as someone who has run recursive and authoritative name servers for 22 years now (and still does today).
Wikipedia's page on it is pretty devoid of info: https://en.wikipedia.org/wiki/DNS_over_HTTPS
Recently I looked into DNS over TLS (was just curious), specifically for BIND anyway, and came across this page https://kb.isc.org/docs/aa-01386 , which talks about using stunnel in front of BIND for DNS over TLS. Which to me is just a hack. I'd expect to see native TLS support in something like BIND, at least so you have full visibility into the IPs that are sending requests (with a proxy like stunnel that information gets lost).
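For reference, the ISC approach boils down to about this much stunnel config (paths are hypothetical), which also shows where the client IP visibility goes: BIND only ever sees TCP connections from stunnel on loopback.

```
; /etc/stunnel/dns-tls.conf - terminate TLS on the standard
; DNS-over-TLS port and hand plain TCP DNS to BIND on localhost
[dns-over-tls]
accept = 853
connect = 127.0.0.1:53
cert = /etc/stunnel/dns.pem
key = /etc/stunnel/dns.key
```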
Myself I am fine with DNS as is I have no need for TLS or HTTPS. Though I can certainly understand there are people in situations where they have a much higher need for privacy and for whatever reason a vpn may not work for them.
SSL/TLS connections are difficult enough now to debug with encryption protocols and ciphers and versions and generally crappy logging on behalf of the applications.
DNS in browsers is already a pain, with the browser often caching DNS responses. It's probably been hundreds of times over the past decade that I've had to tell users to restart their browser, or use another browser, to get around the browser's DNS cache.
My bigger concern with the likes of Firefox (and probably other browsers) wanting to use DNS over HTTPS is how that might affect my services. E.g. users connect to a VPN, and that has DNS servers that resolve internal names. I would kind of expect/fear that if Firefox and others default to a public DNS over HTTPS provider, then it would break DNS queries for basically everything internal the user is trying to connect to. Time will tell I guess.
Also, of course, more broadly along the same lines: you end up with inconsistent DNS behavior depending on whether the app is using DNS over HTTPS to a public resolver or the operating system's resolver.
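On the Firefox side there is at least a pref for this; my reading of the documented network.trr.mode values (worth verifying against whatever Firefox ships by the time it matters):

```
// about:config - trusted recursive resolver (DoH) behavior
// 0 = off (default), 2 = DoH first with fallback to the OS resolver,
// 3 = DoH only (no fallback), 5 = explicitly disabled by the user
network.trr.mode = 2
network.trr.uri  = https://mozilla.cloudflare-dns.com/dns-query
```

Mode 2's fallback is what would, in theory, keep VPN/internal names resolving; mode 3 is the one that would break them.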
(and yeah not a fan of IPv6 either)
sad to see such big companies splash such massive amounts of cash while simultaneously practically strangling themselves for profits.
I mean IBM could invest $1 billion more in their cloud stuff every quarter for the next 8 years with that kind of cash, and I'd hope they'd be able to get 10X out of it relative to what they may get from Red Hat in that time frame. Maybe they just aren't capable of doing that kind of thing(if not then that is sad too).
If I was an IBM customer especially looking to use their cloud stuff I'd certainly be more impressed with an 8 year big investment push for their cloud stuff than a one off purchase of a services company whose software is all pretty much open source anyway. Everything I have seen claims IBM's cloud abilities are distant from the big guys. I can't imagine how Red hat can change that, with their software readily available to IBM this whole time.
I haven't been a Red Hat customer in 13 years (though I used CentOS more recently than that; the last time was about 2010). I could see Red Hat being bought for $5B or something, but $30B+? Just insane, such a wasted opportunity. I haven't been a customer of IBM stuff in about 16 years.
Not that HP (who I have used a lot) is a whole lot different. If they had taken that $10B they dumped on the Autonomy fraud and put it into mobile/WebOS, that could have been something great.
Count me in the camp of those who hate systemd - or rather, hate it being "forced" on just about every distro; otherwise I wouldn't care about it. And yes, I am moving my personal servers to Devuan. I thought I could go Debian 7 -> Devuan, but it turns out that may not work, so I upgraded to Debian 8 a few weeks ago and will go to Devuan from there in a few weeks; I've upgraded one Debian 8 system to Devuan already, 3 more to go (Debian user since 1998). When reading this article it reminded me of
Through some document buried on their website that you only see if you're specifically looking for a solution to that issue? Or are they proactive and giving users a message every time they open a zip file? Somehow I think it's the former, in which case, what 99.985% of people will never see it until it's perhaps too late or something?
Not that this affects me, mostly Linux and my Windows stuff is 7 only. I find it quite depressing how all of these complaints about windows 10 quality are falling on deaf ears. Not unique to Windows either of course. At least with Linux it was possible for(example) a group of folks to fork Gnome 2 to make MATE(which I use), and have been maintaining it ever since.
For me anyway very little(if anything) has come out of Windows or even Linux in the past decade that has gotten me excited.
So many times I've seen people say a stock tanks or crashes or some other extreme term only to see it down 0-8%. 24% is pretty crazy though. I haven't followed anything stock market related in 4 or 5 years. I think the nasdaq was just over 2000 at that time or something.
Epyc hasn't impressed me though with all of intel's manufacturing difficulties(and seemingly everyone running into performance or power walls) they certainly have some window of opportunity.
I'll certainly look to replacing my opteron 6100 and 6200 systems with newer epyc next year. Am quite shocked I was just able to renew support on those systems from HP for another year (entering 8th year of service soon for me anyway). Was expecting HP to EOL them a while ago.
For me it was NT4 that drove me to Linux. I liked Win95 for a bit, got an NT 3.51 server CD from a friend at MS back in the day, and liked that (more stable). NT4 was neat, though I guess moving more shit into the kernel made it less stable. Quite a few crashes, and seemingly having to reinstall every 6-12 months, made me jump to Linux (Slackware 3.x), then Debian 2.0.
I still have Win7 at home and even an XP box (really for games, though I don't play games much). My main laptop dual boots to Win7 but doesn't spend more than a dozen or so hours per year in Win7 on average. I have a Win7 VM for work stuff that works fine.
So glad I never jumped on win10. I never did see the popups from MS offering free upgrades to win10. Win7 (and win2k8/r2 for the few windows servers I have) do what I need. win2k12 was quite annoying(have a half dozen of those systems).
Main interface for me though has been linux since about 1998.
What are you protecting against? Snapshots certainly are backups. As is RAID. Neither protects against everything, certainly.
Just today on an Isilon cluster I was deleting a bunch of shit and I wasn't paying close attention. I realized I had deleted some incorrect things (the data had been static for months). I just restored the data from a snapshot in a few minutes.
I've been through 3 largish-scale storage outages (12+ hrs of downtime) in the past 15 years. It all depends on what you're trying to protect, and understanding that.
In my ideal world I'd have offline tape(LTFS+NFS) backups of everything critical stored off site(offline being key word not online where someone with a compromised access can wipe out your backups etc). This is in addition to any offsite online backups. Something I've been requesting for years. Managers didn't understand but did once I explained it. It's certainly an edge case for data security but something I'd like to see done. Maybe next year..
Understand what you're protecting against and set your backups accordingly.
I remember Nexenta's ZFS solution to that problem was to corrupt the ZFS file system and then get into a kernel-panic reboot loop until the offending block device (and filesystem) was removed. Support's suggestion was "restore from backup". I tried using ZFS debug tools (2012 time frame) to recover the fs. Was shocked how immature they were. Wanted a simple "zero out the bad shit and continue"; it didn't exist at the time. Disabled Nexenta HA after that. Left Nexenta behind a while later.
sounds like 1 million boxes with ARM something inside of them. The number would be more useful if we had indications as to the number of boxes shipped in previous years.
With Qualcomm, AMD and at least one or two others having bailed on ARM-powered servers (ones that otherwise have no x86 in them), it seems the only thing ARM is replacing in the data center is other embedded processors. Where it was once entirely custom silicon, in many cases it may now be simpler/cheaper to put the logic on a general purpose ARM processor (maybe a custom ARM processor).
Not many developers use linux in my experience -- as someone who has worked supporting linux-based applications (mainly web stuff) for the past 18 years. I have been primarily linux on my desktop since 1998. I do keep a win7 VM running 24/7 for some work things though.
OS X seems to have killed 98%+ of the developers I have worked with from using Linux over that time. It was sad to see, but understandable I guess for their use cases. I don't need more than one hand to count the number of folks at organizations I have worked at - either developing software that will run on Linux, or support staff running the Linux systems - who ran Linux as their primary desktop over the past decade (I think the actual number may be as high as 3, maybe 4). At the same time, probably fewer than 20 people were using Windows to do the same things (guesstimate). OS X dominates.
I tried OS X for a few weeks at one point but it was not my thing. Didn't even like the hardware; I ended up buying my own laptop so I wouldn't have to use the Mac trackpad (I don't like trackpads of any kind, I want the TrackPoint). Linux with Gnome2 + mouse-over activation + desktop edge flipping + virtual desktops is what I like the most, so I currently run Mint 17 MATE with 16 virtual desktops and the built-in display (not a fan of multi-display) on a Lenovo P50 laptop.
I was a believer in Linux on the desktop up until maybe 2004-2005, when I accepted that the Linux kernel devs will likely never have a stable driver ABI, which would have addressed a good chunk of the issues with desktops and wide ranges of hardware.
I do use slack on a daily basis(Linux and Android) just for chat at work, never touched any audio or video capabilities it might have. I preferred(past tense) Skype which we were using before, but MS killed that generation of skype years ago and now it's as bad or worse than Slack for chat (don't care about audio/video).
I was a die hard irc user back in the 90s; the dot com bust caused the communities I was involved with on irc to mostly evaporate and I stopped going. irc is good too, though the integrated ability to store messages while the user is not connected is something I never saw in irc (outside of maybe bots or something, I ran eggdrop bots for years).
Haven't you got the memo? People haven't read the docs for just about anything for a long time now. ESPECIALLY the kind of folks that put data like this on a service like S3.
As someone who has been told they write awesome documentation time and time again I can't even begin to count the times when someone asked me "what about X?" only to point them to documentation (that is easily searchable) written (usually) years earlier. I could understand "I browsed document X but it was last updated in 2014 is the info still valid?" kind of questions.
I prefer a lower (standard) resolution on the console myself, whatever the default has been forever (320x240? I don't know).
My P50 laptop has Nvidia in it of course, and in X11 I have it fixed to 1080p (it is a 4k display). On the grub boot menu as well as the linux console the characters are the size of a tip of a pen, if that. And grub takes about 3 seconds to refresh the screen for choosing another OS to boot from.
Fortunately I don't need the console often on my laptop only to recover from the very rare issue affecting X11, but still would be nice to have a normal resolution for the console.
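For what it's worth, on distros that use GRUB2 the menu and console mode can usually be pinned in /etc/default/grub. A sketch, assuming the standard GRUB2 variables (the exact modes supported depend on the video BIOS/firmware, and the Nvidia binary driver's framebuffer console may still ignore it, so no guarantees on a P50):

```shell
# /etc/default/grub (sketch -- variable names per the GRUB2 manual)
GRUB_GFXMODE=1024x768        # resolution used for the GRUB menu itself
GRUB_GFXPAYLOAD_LINUX=keep   # carry that same mode into the Linux console

# then regenerate the config and reboot:
#   sudo update-grub
```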
I haven't intentionally bought anything from amazon since 3/2011. I say intentionally because it wasn't until 2013 that I managed to discover that woot.com, whom I bought some stuff from, was owned by amazon (at the time that info was buried within one of their pages), so of course I immediately stopped going there. I was surprised, even shocked, to see a couple of recent purchases this year from Newegg arrive in amazon packaging. If there had been an indication that it went through amazon I wouldn't have bought it.
I moved away from Seattle region in 2011 as well having seen ex-amazon folk spread to just about everywhere there and try to make it like amazon. Conversely when I first moved to the Seattle region(2000) it was a lot of ex MS-people going around, and at least the companies I worked at did not do things like MS was (biggest one being they were using Linux not windows).
I never really was a Walmart customer but since I saw some documentary on them at least 15-17 years ago I made it a point never to shop there either.
The amazon effect is felt even stronger in the IT realm with their cloud crap.
very unsettling times.
Same for me, though a Note 3(my first Android device). Not every hour but usually 2 or 3 times a day it asks me to accept their new terms. I don't know if it's been 1 year or 2 years but I have no need to accept their terms I am not using any of their services, so I just clear the notification. Would be nice of course to be able to permanently decline so they stop asking.
AT&T is not quite as bad at pestering me to upgrade my OS, though their pestering comes in waves: for a few weeks they may ask me once or twice a day to upgrade, then they stop asking for a month or more. I have the latest Android from AT&T on another Note 3 (5.0, from 2016) and I see no reason to upgrade my main phone, especially because I'd lose a critical feature: being able to mute the phone from the lock screen using the power button (which in itself was a downgrade in abilities from previous WebOS phones), something I do on a regular basis as my phone is also a pager for my on call stuff. I've kept wifi on my main phone for the most part disabled for the past two years to prevent automatic upgrades (at least on this vintage of phone, upgrades require a wifi connection). In the early days I would turn wifi on when I needed it then off again; forgot a couple times and caught the phone mid automatic upgrade (sneaky bastards), till I killed wifi and the upgrade aborted. Now (well past ~18 months at least) with unlimited data I don't really need wifi.
About to enter its 6th year of service, the only issue with my main Note 3 is that the gyroscope and light sensors (probably clustered together, I assume) started failing maybe a year ago. I replace the battery once a year, and maybe put on a new screen protector every 2 years. I have a 2nd Note 3 that gets pretty heavy usage as well (Wifi only unless I'm traveling overseas); I bought that maybe four years ago and it has no issues, other than I prefer Android 4.4 to Android 5 (tried downgrading it one time, couldn't get past KNOX).
The highest end Note 9 has me interested, though not in a rush at this time. I'd love an updated Note in the same form factor/design as the Note3 or Note 4(I have one of these too though rarely use it), but I guess that'll never happen.
One use case at least is geographical performance. One company I was at that switched to Dyn (2008/2009 time frame) had a high performance requirement. They were using F5 Global Traffic Manager prior to Dyn, hosted active-active out of two data centers, one on each coast of the U.S. Apparently their customers were complaining that DNS lookups were too slow. Part of the (then, not sure if it still is true now) F5 DNS architecture when routing traffic to different geo locations was that it required an additional DNS lookup (I forget why), so going to www.mydomain.com resolved to one CNAME, which then resolved to a 2nd CNAME, and then you got the IP of the geo source from there.
Dyn's setup removed one of those CNAME lookups and, combined with more geo diverse locations, allowed DNS query times to drop by maybe 20-30ms (maybe more, I forget now). The lower response times made their customers happy. Though I thought it was stupid just because nobody can tell that difference in performance ("but it shows up in their monitoring" was the response). Whatever.
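To put numbers on it, the saving is just one fewer round trip on an uncached lookup. A back-of-the-envelope sketch in Python, where the ~15ms per-hop figure is purely hypothetical:

```python
# Hypothetical round-trip cost per uncached DNS lookup, in ms.
hop_ms = 15

# F5 GTM-style chain: name -> CNAME -> CNAME -> A record = 3 lookups.
f5_lookups = 3
# Dyn-style chain with one CNAME removed: name -> CNAME -> A = 2 lookups.
dyn_lookups = 2

saving_ms = (f5_lookups - dyn_lookups) * hop_ms
print(saving_ms)  # prints 15
```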
Just look to IPv6 to see how well that approach has worked?
To me the issue the people are advocating for WRT DNS is the centralization of DNS services, so many customers concentrated with such few providers.
I really don't see anything wrong with DNS as it is.
Certainly doesn't have to be that way; nothing in DNS prevents people from running their own DNS, though bigger companies are probably best off with a Dyn or Neustar to be able to absorb those DDoS attacks better. Obviously pretty much any internet provider has a DNS service available, and in many cases they may not even charge for it, since for the most part it doesn't cost much to run, unless you're needing very regular updates (assuming they don't have a UI to manage DNS).
I've run my own personal authoritative DNS since 1996 myself (still do today).
As a dyn customer for about 9 years now (across 3 different companies), after the attack struck some providers tried to get me to go multi-provider. To me it does make sense, but only really when it is also paired with a multi-CDN deployment as well, and for whatever reason I don't see nearly as many people talking about that as about multi-DNS deployments.
DNS also has made it easy to use multiple providers forever now, with slaves and longer TTLs. Though if you're all up in their APIs and stuff then that may make it more difficult.
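The slave arrangement is just old-school zone transfers: the second provider pulls the zone from the first. A minimal sketch of what that looks like in BIND's named.conf on the secondary side (names and addresses hypothetical):

```
// named.conf fragment on the secondary provider (hypothetical names/IPs).
// The zone is transferred via AXFR from the primary whenever it changes,
// so both providers serve identical data.
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };              // primary's transfer address
    file "secondaries/example.com.db";   // local copy of the zone
};
```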
For the org I am with and probably many others this one outage was not nearly enough to switch off of Dyn(or to go dual provider). It's still the most reliable service I have ever used, and as a bonus their UI hasn't changed in as long as I can remember (probably in the 9 years as a customer). Which is refreshing for once to see something stable.
If only other cloud providers could tout 1 outage in 9 years (certainly is possible dyn has had more than that though nothing that has registered on my monitors long enough to be detected). They seem to be very proactive about alerting customers with https://www.dynstatus.com/ -- one of the first public status pages I can recall coming across.
All of the DDoS attacks that have affected me have always been collateral damage: either attacking Dyn in that one case, or in several other cases attacking an upstream ISP (which in itself has probably 8 different load balanced providers). The CDN we currently use hasn't had an attack big enough for us to notice, but they aren't a big name player either. I have seen more than one article in the past about big outages due to attacks or something at Cloudflare for example (don't recall anything recently though).
I have a Toshiba Tecra A11 from 2010 ($1878 before accessories+support), which has probably spent greater than 99% of its operating hours plugged in. I bought 2 extra batteries for it in the beginning; I don't recall how often I rotated them, but the current battery has about 40-50% of capacity left. The other two batteries appear to be in good condition according to a tool I have called HWMonitor (https://cpuid.com/softwares/hwmonitor.html - I mainly use it for looking at temperatures). It's probably been 2+ years since either of them was used.
I only say this because I thought that was "normal"; I didn't expect to read about batteries dying (completely) from being plugged in too long, short of the occasional faulty battery I suppose.
Recently rebuilt the laptop to be a gaming system for older games. Decided to keep the bad battery in it since it's hooked to a UPS anyway.
Even on a good day when it was new (under Linux anyway, with the Nvidia graphics), a full charge wouldn't go much past 2 to 2.5 hours. I didn't buy it for good battery life though. One battery that is faulty on the Toshiba is the CMOS battery, I think: if the laptop stays unplugged for too long (a few days?) it loses its settings. So even when it's turned off I keep it plugged in.
Currently using a Lenovo P50; it spends 95%+ of its hours stationary on my desk plugged into a fancy double conversion UPS. Battery life on this on a good day in Linux with Nvidia is maybe 3-3.5 hours.
I replace the batteries in my main phones about once a year (Galaxy Note 3s), they spend a lot of time on wireless chargers which probably impacts the battery life. After a year they still work fine though it seems they lose 20-30% of the capacity.
Though I am obviously biased I suppose as I have had a small fear about this exact kind of thing since Lenovo bought IBM's Thinkpad line.
Fortunately I don't have anything of value that the Chinese would want. After being a die hard Thinkpad fan for many years when Lenovo bought them I swore off of them for 11 years - I used Toshiba in between. I am on Thinkpad again after I guess I accepted whatever could happen to Lenovo Thinkpad is just as likely to happen to Toshiba (that and Toshiba didn't have the hardware I was looking for at the time).
I've read conflicting comments on whether or not this kind of thing is possible, and to me, based on the history of other sorts of surveillance activities from other countries, I absolutely have to be on the side that it is probable this happened, given the resources of a country like China. I'm just as likely to believe something similar could happen in the U.S. as well with NSA/CIA whomever. I also totally believe that the intelligence community is pissed off at the report for revealing that they knew what China was doing. They'd rather keep that secret so they can continue monitoring and quietly contain it.
I'm just hoping some day to see another Snowden-style leak of internal documents that say yes this did in fact happen, and those paranoid folks were right all along. Sort of reminds me of the early days of the reveals about the taps that the NSA had at AT&T facilities. As an AT&T data center customer at the time I joked with their staff about it, but it really didn't surprise me; I continued as their customer until I moved to another job.
Some folks ask why more places didn't encounter this; well, the answer seems obvious: they targeted the attacks to lessen the likelihood of being detected, like any good APT.
Certainly sucks for Supermicro right now though I'd suspect the vast vast majority(99.99%) of their customers have nothing to worry about(as they are not juicy targets). I run (1) supermicro server myself in a colocation in the bay area. I was thinking about getting a new one as that one is 7 years old. This report does nothing to sway my opinion either way.
However I wouldn't be caught dead running supermicro in mission critical production (again, this report has absolutely nothing to do with that either; it's just based off of ~18 years off and on of using their hardware). I do realize of course some 3rd party appliances I have may very well have supermicro hardware on the inside, but at least those are managed by the vendor, as in I don't have to worry about diagnosing strange hardware faults or asking fortune tellers what changes are in the latest firmware, and don't have to worry about resetting all configurations to defaults when flashing said firmware (and the obvious negative implications of doing so from a remote location -- my critical servers are 2,400 miles away from my home).
To me at the end of the day this is hopefully a good thing in that it would raise awareness. I think it's totally possible for similar things to happen to other manufacturers as well even the big guys like HP and Dell. The trend of racing towards the bottom on pricing really puts pressure on the abilities for companies to be willing to be extra vigilant.
One of my co-workers likes to rant about Juniper (whenever the topic comes up). I don't use either Cisco or Juniper myself (Extreme Networks customer for ~19 years now). Anyway, his rant mainly revolved around this feature you mention, which I had heard touted by another friend a long time ago. It had to do with the JunOS software not giving errors when the configuration was incorrect/had syntax errors or whatever: it would go along like everything is fine, shit wouldn't be working, but it gave no indication as to why. He evaluated Juniper at his previous company (they were a Cisco shop) and it didn't get past the eval stage, I think because of that; drove them crazy.
Red hat seems to offer the same sort of thing: https://www.redhat.com/en/technologies/storage/ceph
Given Red hat bought the company that created Ceph (https://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph), the article is not clear on why someone would want to choose this platform over another based on Ceph.
But I guess the article is not alone; looking at the SuSE Ceph site, it makes no attempt to say how or why they may be better than Red hat (or any other Ceph-based solution), it only compares itself to non-Ceph options.
Obviously I'd expect much of the underlying core code to be the same across systems, but they can differentiate with their pricing and their monitoring/management tools (vs other Ceph solutions); if they do differentiate, they don't do a good job of communicating it.
Of course if you are already a SuSE Linux customer then it would make sense to use them if you were interested in a Ceph storage option.
(Not a Red hat nor SuSE customer, never used Ceph)
I was a brief SuSE customer I guess you could say maybe 13 to 15 years ago, bought several versions of their desktop linux distro at the time. It was pretty slick, though my sister used it more than me I kept to Debian, until at some point switching to Ubuntu on desktop/laptop(only) and now Mint(MATE). I remember one time looking at my sister's computer with SuSE and she had installed Yahoo messenger on it, the windows version. I was shocked that wine worked so well she could do it herself without asking me(she knew very little about computers).
I am confused - why would anyone care to run that architecture on a desktop or laptop? I think even PowerPC, which should be much easier to manage, hasn't appeared to have a laptop (the mac, I think, was the only PPC laptop) or desktop version for many years now.
if anything I'd expect an emulator to be made(looks like there is at least one already - https://en.wikipedia.org/wiki/Hercules_%28emulator%29 ) for those that need to develop/test software on x86 systems.
People used to shun the idea of dealing with signed code in hardware as it limited your ability to mess with it (one big example, which was more of a licensing thing than a signing thing, was Tivo).
I also remember in several communities years ago the whole concept of TPM was quite scary. I include myself in the list of people that feared TPM - and I still don't like it. Fortunately in most cases it can be kept disabled without any issue (AFAIK I've never had a system with it enabled), though I think perhaps in some products like MS Surface (guessing) it may be force-enabled.
Nowadays it seems sad that everything that allows you to install unsigned code which previously was a good thing is now an evil thing because it's not "secure".
Back in 10? For me it would be: book a flight to the other side of the country (~4-5 hrs each way air time alone, plus at least a 90min drive to the airport with no traffic). I typically make two such trips to our main data center per year, staying for 10 days on each trip. I do wish it was closer..
I too was quite confused when I saw mention of scanning a QR code to get info.
I suppose you could take that question and ask folks that run things like redis, or memcache, or similar types of key value store systems.
The most popular use case that I see anyway is caching and holding session data. Given the distributed nature of this feature, it seems like it could do a lot for the performance of caching more complex things closer to the client in a simpler fashion.
I'm not a developer and have no use for this myself, and am not, and never have been a cloudflare customer. I have worked with many apps that use memcache and redis over the years though (usually in addition to a database of some kind).
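For the caching/session use case mentioned above, the usual pattern with memcache or redis is cache-aside: check the cache first, fall back to the database on a miss, and store the result with a TTL. A minimal self-contained sketch in Python, with a dict standing in for the key-value store and a plain function standing in for the database query (all names here are hypothetical):

```python
import time

class CacheAside:
    """Cache-aside sketch: a dict stands in for memcached/redis,
    and fetch() stands in for a database query."""

    def __init__(self, fetch, ttl=60):
        self.fetch = fetch   # backing-store lookup (e.g. a SQL query)
        self.ttl = ttl       # seconds before a cached entry expires
        self.store = {}      # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                      # cache hit
        value = self.fetch(key)                  # cache miss: hit the database
        self.store[key] = (value, time.time() + self.ttl)
        return value

# Usage: the "database" here is just a function that records its calls.
calls = []
def slow_lookup(key):
    calls.append(key)
    return key.upper()

cache = CacheAside(slow_lookup, ttl=60)
cache.get("session:42")   # miss: calls slow_lookup once
cache.get("session:42")   # hit: served from the cache, no second call
```

The same shape works against a real memcached or redis client; the only thing that changes is where `self.store` lives.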
One developer I knew (but didn't like, and he didn't like me either) years ago thought he was smart and needed just a couple of weeks to code a database caching layer in memcache to replace the mysql query cache. When I heard that I just laughed. He wasn't at the company much longer, so I never got to see what he might have come up with. (Yes, I know the mysql query cache is frowned upon; at the time it was required for our core application.)
Heimdall is a MySQL database load balancer, analytics tool, and cache accelerator, something I have been using off and on for the past year or two. As an example, they generally suggest using redis for (their) cache, though they can use Hazelcast and maybe one other tech too. Just mentioned them as an example of caching with MySQL anyway.
But it seems strange to me for someone to compare a key value store to a traditional database.
Biting the hand that feeds IT © 1998–2019