Yes, yes but...
But can I use Outlook for my calendar?
The privacy-paranoid Linux distribution Tails has decided it's time to send 32-bit distributions the way of the 8086, starting with the planned June release of version 3.0. Tails' developers offer two reasons in their announcement: making the distro safer and saving precious developer resources. The group explains that at the start of …
While I fully understand these decisions, and would make the same decision myself, it is a shame to have to abandon fully functional hardware due to a lack of software.
I have an ultraportable laptop from 2007, which is still in good working order (with a bit more memory and an SSD). I've been using it for setting up and in-field configuring of a couple of quadcopters. I run Ubuntu on it, which still supports 32-bit, but many of the specific programmes I need it for are Chrome apps. Chrome, and hence Chromium, no longer support 32-bit, so it cannot run them. So despite being perfectly capable of the job, it's now basically obsolete.
The press release regarding Chrome specifically states that "we intend to continue supporting the 32-bit build configurations on Linux to support building Chromium". Reading between the lines, you probably won't get any more Pepper Flash or Widevine releases, but you will still get a browser. Not such a big loss for most Chrome apps.
Yeah, it's sad, but I tend to find that OEMs are the problem here, dragging out old hardware longer than necessary.
The netbook niche angers me the most. It's a brilliant form factor but the specs are woeful.
There's a whole lot of Celeron and Pentium landfill out there.
I was hoping the Core-M might fix this problem, but a Core-M based netbook is as rare as rocking horse shit, and when you do find one they cost far more than they should.
A good example is the Lenovo Yoga 710. Cracking piece of kit but flawed. It's 11.6", has a 1080p screen and a Core-M CPU... but it's a touchscreen and expensive, presumably because of this.
I've been desperate to bridge the gap between my workhorse laptop (Asus UX303LA running Arch) and being out and about for a long time now.
I love my 303, but it's a bit too large to shove in a bag and dash out with. 11.6" is perfect for whacking in a small bag and trotting about with, to nail those tickets you get at 8pm on a Friday or while you're away on business.
Despite what most people say, you can't just get by on a Celeron N3040 with 2GB RAM. Especially if you have to tunnel into a DC to manage $JAVA_BASED_KIT.
iLO and DRAC run like shit on low-end kit such as Celerons.
Aside from server management I do a fair amount of coding, which in itself isn't a heavy task, but if you need to spawn a web server for testing etc., netbook specs start to get a bit thin.
Core-M, 6GB RAM, 64GB SSD, 1080p TN (non-touch) screen and 6 hours of battery. That should be easy to achieve and should cost no more than £500-600.
Don't bother with Thunderbolt, AC wifi or gigabit ethernet. Keep connectivity "good enough". I'd rather forego decent networking in favour of carrying a dongle or two.
"...but its touchscreen and expensive presumably because of this."
A high price is more likely caused by being a niche product. Fewer sales means it needs a larger profit margin to justify the effort.
My requirement isn't niche at all. It's a general requirement that fits anyone. It's just that most people don't know that.
As much as I hate Apple kit, I have to admire their form factors.
In the world of PCs we have to contend with specs that read like someone is just trying to get rid of spares from the garage.
Who wants a 15" 1366x768 Celeron with 2GB RAM? Most of the crap they sell at supermarkets has inflated pricing simply because they can; it capitalises on the average Joe's crap knowledge.
I posit that you could whack a 1TB HDD and a large screen on any low-end piece of tat and charge £499 for it in Tesco... that's a huge margin.
Try and sell someone a machine orders of magnitude faster with a smaller screen and smaller hard drive for the same price, and they won't touch it.
I guess the problem I'm getting at is dumbass mainstream users.
Sadder still is an 8GB, decent-ish recent CPU... with a 15" 1366x768 screen. With a whole row of similar crippleware on the shelf next to it.
The machine as a whole is pretty decent, but the screen res hobbles it, especially for coding.
That's still what a lot of dumb chain stores sell. The trick they try to pull on the unwary is labeling it as an "HD screen", with no pixel count anywhere in sight - you have to figure out where Redmond put the screen resolution setting this time around.
It's not just old hardware: I've got quite a few VMs (various distros) which are 32-bit. Some date back to before 64-bit hardware; others were set up deliberately as 32-bit to minimise memory usage. They've all updated smoothly through numerous major releases (except Ubuntu, which broke twice).
Given most will be dropping 32-bit support soon, I'd like an upgrade path to 64-bit, but only openSUSE has so far provided one. I converted openSUSE 13.2 to 64-bit and then upgraded to Leap 42.1 with remarkably little difficulty.
I see a potential pitfall here.
While 32-bit application compatibility *might* create the need for additional files (aka the 32-bit libs) to be loaded, making EVERYTHING 64-bit is not necessarily an improvement.
64-bit code is typically LARGER in the binary than 'otherwise identical' 32-bit code.
64-bit code ALSO runs "just a tad slower" because of it. You fill up your L1 cache faster. You have to occupy more RAM. Operations involving memory structures that, for some reason, occupy MORE space now [let's say "pointers"], can also take longer. And so on.
The logic used in the past (by Micro-shaft no doubt) is that "the computers are FASTER now, so they can SUPPORT running the less efficient code without people noticing"
Anyway, abandoning 32-bit entirely might be a mistake. There are still SOME advantages to running 32-bit instead of 64-bit, especially for small applications that don't need >4GB of addressable memory space.
Nope. Yes, 64 bit code is larger and uses more memory, but (on Intel) 64 bit mode provides access to many extra registers so code runs appreciably faster, and handling large amounts of data is faster too.
The problem with 32-bit is not specifically the 4GB limit for a process:
1) The limit isn't 4GB. It's usually 2-3GB because of the way the process space is organised.
2) 4GB is a hard limit (unless PAE is used, and that causes driver issues). 2GB of web browser, plus a large spreadsheet, a database, and the OS and suddenly 4GB is breached. Out of memory, game over.
Memory is dirt cheap, it does not make sense to be miserly.
Also, this is universal. The accommodation of even lightweight Unixes (e.g. NetBSD/OpenBSD) for old systems is decreasing, due to a lack of user base, the need for more drivers/modern support, and compiler support holding back an entire ecosystem to the least capable item (this is one reason why OpenBSD dropped some of the older systems). They're still pretty lightweight, but the days of running a Unix system in 4MB are largely gone.
"2) 4GB is a hard limit (unless PAE is used, and that causes driver issues). "
Careful, there. There are *two* 4GB limits on 32-bit processors.
The first is an absolute limit - the virtual address space is limited to 4GB without regard to anything like PAE. There is nothing you can do to increase that limit short of switching to a 64-bit architecture. (You might be able to fake something up by having multiple processes, but that rapidly becomes non-scalable.)
The second is NOT a limit, in fact. Prior to the introduction of PAE, the *physical* address space was limited to 4GB. Of course, we have to remember that the first processors with PAE were released in 1995, and it is pretty much universal these days. PAE allows more physical address space, but doesn't solve the 4GB virtual address space limit. However, 32-bit Windows "workstation" type builds of any sort do not *use* physical memory beyond the 4GB limit, for those driver-related issues, but they *DO* use PAE for its other features. In case you missed that: the OS runs with PAE enabled, but does not exploit the ability to use more than 4GB of physical memory. Server builds do exploit that ability, but the workstation/desktop builds do not, because home users in particular will have all sorts of wacky hardware with drivers made of chewing gum and baling wire, and such drivers will rarely be reliable when faced with physical addresses beyond 4GB.
EDIT: the above applies only to 32-bit builds of Windows, of course. 64-bit builds can and do make use of physical and virtual memory beyond the 4GB line.
>but the days of running a Unix system in 4MB are largely gone..
Maybe with modern general-purpose FOSS OSes, but my guess is the embedded RTOS crowd might disagree with you (granted, often without full POSIX support), and that doesn't preclude using older FOSS code as long as you have/write drivers.
Re: 2) 4GB is a hard limit
This limit isn't restricted to just 32-bit architectures; I've come across systems with 64-bit capable CPUs hobbled by the chipset, which is why you can find systems that did run Win7 64-bit but were downgraded to Win10 32-bit...
>Anyway, abandoning 32-bit entirely might be a mistake.
32-bit? Hell, my company makes most of its revenue selling 8-bit microcontrollers, but then again you aren't running Linux on them. Still, an awful lot of the world runs on 8- and 16-bit code. You probably own 20 to 30 times more 8/16/32-bit microcontrollers than 64-bit CPUs without realising it. It's just that they are largely a black box to you.
Biting the hand that feeds IT © 1998–2019