* Posts by BinkyTheMagicPaperclip

1451 publicly visible posts • joined 11 May 2012

Ad agency boss owned two Ferraris but wouldn't buy a real server

BinkyTheMagicPaperclip Silver badge

For *the consumer* there is little point even bothering with enterprise solutions for perceived reliability or speed. For specific applications there will always be a reason for SLC, and DOMs are definitely useful in embedded systems.

BinkyTheMagicPaperclip Silver badge

Depends what capacity you need. As the capacities rise, particularly in spinning rust, the disparity becomes pretty small to non-existent.

Most people should be using SSDs though, and there's little point in enterprise solutions there.

SAP accused of age discrimination, retaliation by US whistleblower

BinkyTheMagicPaperclip Silver badge

Re: Demeaning?

If I unlock my front door and stick a sign on it that says 'front door broken, do not steal anything' and you walk in and steal something, I presume you'd also claim that it's unfair if there's police round the corner?

I do agree that whistle blowing often does not work out well, but suing for discrimination is hardly bad faith, when the obvious way to avoid this 'trap' is to act ethically and legally.

Intel's $699 Core i9-14900KS turbos to 6.2GHz – assuming you can keep it cool

BinkyTheMagicPaperclip Silver badge

Looks like a lot of hassle for mostly minor benefits

Intel's previous high end consumer CPU doesn't have a convincing lead over AMD's offerings, and the turbo is only on a limited number of cores as has been the case for years. As the article says, it really does depend on your workload. Why bother with all this hassle until you're truly desperate for performance in specific applications?

Still, both Ryzen 9 and the i9 support ECC with the appropriate chipset, so that's good. The Performance and Efficiency cores seem an interesting idea too, but it looks like the number of applications that explicitly specify they want to optimise efficiency is quite low.

If I was looking for a new system, I'd definitely consider Ryzen. There's a lot of second hand Xeon systems on ebay though..

Fedora 41's GNOME to go Wayland-only, says goodbye to X.org

BinkyTheMagicPaperclip Silver badge

Re: Wayland only

Yes. They can like it, or write the code themselves.

OK, I'm deliberately being a bit contentious here. I do agree that Wayland is not finished, offers less user choice than X, and users (and particularly non Linux platforms) are increasingly not considered over developer and commercial interests. It also doesn't help there is almost no commercial non Linux Unix, so the perfect storm of X being funded by large companies and having to compromise to work on multiple platforms simply did not happen with Wayland.

However, it has always been that way to some extent. Pick a less popular configuration and you'll have issues with application support. It's also what a lot of Linux users want - functionality NOW! Forget catering for BSD or whatever, which limits functionality and slows development, but is also very likely to incorporate compromise and flexibility into a design.

Certain oft quoted benefits such as support for old hardware are also flat out wrong. If you've old, popular hardware it's very likely it still works, but this is only because of ongoing work from developers. There are at least *three* (four?) different display driver models in X, and maintaining old hardware support relies on drivers being re-written each time; it is not automatic.

Can AI shorten PC replacement cycles? Dell seems to think so

BinkyTheMagicPaperclip Silver badge

Re: Dell omitting critical detail

Not really, that's one thing you can't blame Microsoft for - at least until Windows 11. Let's take a decent PC from 2008, it's still capable of web browsing and productivity even today.

It comes shipped with Vista. In 2009 Windows 7 is released, followed by 8, 8.1, and 10. With a suitable graphics adapter and a large enough hard drive/SSD the system could be usable from 2008 until right now.

Upgrading to Windows 11 wouldn't be possible without workarounds because the CPU would be too old, and so would the TPM (if it even had one, back in 2008).

'We had to educate Oracle about our contract,' CIO says after Big Red audit

BinkyTheMagicPaperclip Silver badge

Re: Move away

I'm genuinely glad to hear it - I just haven't used it at scale. It's just that some parts of it are substantially less turnkey and don't inspire as much confidence as MS SQL.

I mean, I accept that if it Was All That, then MS SQL would be far less prevalent, but still.

BinkyTheMagicPaperclip Silver badge

Re: Move away

No. You don't hear nearly as many horror stories around MS SQL as you do about anything Oracle.

The only vaguely dodgy thing I remember MS doing SQL license wise was around the time of MS SQL 7, which was released before the web really hit its stride, and implied you could run it as a back end to a web server using a standard per user license. They tightened this up by trying to apply an updated license to SQL 7 later on, but the original license was still there, printed out in black and white. They didn't make that mistake for future SQL releases.

Ultimately you're going to have to choose your flavour of poison, and to my mind Microsoft is better than Oracle in that regard.

PostgreSQL has some impressive features, but it still feels like an open source product (the backup facility is *appalling* for instance), and if some obvious parts of the product aren't polished, what's the confidence level in pushing serious amounts of important data through it?

EU users can't update 3rd party iOS apps if abroad too long

BinkyTheMagicPaperclip Silver badge

I hope they get sued out of existence

From what I can see they're on extremely dodgy ground. A European consumer of a company with a large EU presence remains their customer regardless of their location.

There's also a double edged sword: if Apple are tying a customer's treatment to their current location, then logically a customer should also gain the benefits of whatever location they happen to be in, if any.

HDMI Forum 'blocks AMD open sourcing its 2.1 drivers'

BinkyTheMagicPaperclip Silver badge

DisplayPort is better, but the consumer experience is mostly worse

Fine if you're connecting one display device to one video source.

Troublesome if you want to switch between devices - it's likely you'll be paying considerably more than an HDMI switcher, and have to use a KVM rather than just a switch. The amount of advice online will be somewhat lower than for HDMI.

A pain if you want to run multiple monitors through one cable using DisplayPort MST. The tools for understanding the limitations of each MST hub in the signal path and what is actually happening to the signal appear to be generally non-existent. Not to mention driver support can be distinctly variable.

I do like the fact I have a working setup using USB-C and DisplayPort hubs to drive three monitors off a small number of cables, but a multi port HDMI switch was mostly an awful lot easier to source and set up.

FOSS replacement for Partition Magic, Gparted 1.6 is here to save your data

BinkyTheMagicPaperclip Silver badge

It's not just Mint - it is a niche case, just as dual booting Mint/Ubuntu is, but the default EFI boot menu for FreeBSD doesn't handle multiple installations well (there's older boot code that does handle this, but it's easier just to edit the boot config setting every time you want to boot the other OS). It's a bit annoying when you want one installation to use as a vaguely production system, and another install to check out -current or hack on the same version elsewhere.

I think some Intel server and possibly workstation boards may also have used 32 bit UEFI. I'd have to experiment with my quite unusual S3210SHLC board, which uses the X38 offshoot S3210 chipset and features both an EFI BIOS and VT-d (PCI passthrough) support, but with Core 2 processors.

BinkyTheMagicPaperclip Silver badge

OK, it can never reach '100%', but 'feature complete' is when it matches or exceeds commercial products and handles the majority of situations you'd reasonably expect it to. I'll concede that a single disk stripe is an unusual oddity probably specific to a limited selection of motherboards (it was a Pentium 4 one), but ZFS is moderately mainstream these days.

BIOS translation utilities intercepted BIOS calls and let large disks boot DOS on BIOSes that weren't designed for it. Outside DOS this tended not to work: it was rare for OS/2, NT, or Linux to support it, and Windows would be forced to use BIOS calls to access the disk rather than protected mode drivers.

It largely went away when LBA arrived, but there are still various limits, the 128GB boot limit being a later one. The solution without translation software is to set all the OS bootable partitions entirely below the BIOS limit, and then the OS driver handles partitions above the BIOS limit.
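
For what it's worth, here's a quick Python sketch of the arithmetic behind that later 128GB (28-bit LBA) limit - the partition figures are hypothetical:

SECTOR_SIZE = 512
LBA28_LIMIT = 2 ** 28                     # sectors addressable with 28-bit LBA

def fits_below_bios_limit(start_lba, sector_count):
    """True if the whole partition is reachable by a 28-bit LBA BIOS."""
    return start_lba + sector_count <= LBA28_LIMIT

# hypothetical 20GB boot partition starting at sector 2048
print(fits_below_bios_limit(2048, (20 * 1024**3) // SECTOR_SIZE))   # True
print((LBA28_LIMIT * SECTOR_SIZE) / 1024**3, "GiB addressable")     # 128.0 GiB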

BinkyTheMagicPaperclip Silver badge

It depends what you mean by 'special' - some partitions are hidden from UEFI BIOSes, or are hidden when booting certain operating systems, but disk management software should allow you to see them.

There's also an MBR on every GPT disk: a protective MBR that signals to conventional MBR tools that the disk is being used for GPT.
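
As a rough illustration (a Python sketch against a hypothetical image file, not a full GPT parser), that protective entry is easy to spot - it's an ordinary MBR partition record with type 0xEE:

import struct

def describe_mbr(path):
    with open(path, "rb") as f:
        sector = f.read(512)
    if sector[510:512] != b"\x55\xaa":
        print("no valid MBR boot signature")
        return
    for i in range(4):
        entry = sector[446 + i * 16: 446 + (i + 1) * 16]
        ptype = entry[4]                       # partition type byte
        if ptype == 0:
            continue                           # empty slot
        start_lba, size = struct.unpack_from("<II", entry, 8)
        label = "protective GPT entry" if ptype == 0xEE else "type 0x%02X" % ptype
        print("entry %d: %s, start LBA %d, %d sectors" % (i, label, start_lba, size))

describe_mbr("disk.img")   # hypothetical image file; a real device needs privileges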

Whilst the functionality of various Linux tools is impressive for free, it is not necessarily feature complete.

BinkyTheMagicPaperclip Silver badge

Must give it a go some time

Being able to do what Partition Magic and Ghost did for free is certainly welcome.

I always tend to find some of the rough edges, though, including an inability to cope with disks in a system using an early SATA controller that sets up disks as a stripe with one disk in it. Windows is fine with it, but some partition tools throw a wobbler.

Also, ZFS. Some Linux partition tools don't know what to do with disks with a ZFS signature on them, and the functionality in wipefs is not built in.
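
For illustration, here's a rough Python sketch of the kind of signature check involved - it scans the front vdev labels of a hypothetical image for the ZFS uberblock magic (0x00bab10c), which is roughly what blkid keys off. The label offsets below are my understanding of the on-disk layout rather than gospel:

import struct

UBERBLOCK_MAGIC = 0x00bab10c
LABEL_SIZE = 256 * 1024            # two labels at the start of the device
UB_ARRAY_OFFSET = 128 * 1024       # uberblock array within each label

def looks_like_zfs(path):
    with open(path, "rb") as f:
        data = f.read(2 * LABEL_SIZE)
    for label in range(2):
        base = label * LABEL_SIZE + UB_ARRAY_OFFSET
        for off in range(base, base + LABEL_SIZE - UB_ARRAY_OFFSET, 1024):
            chunk = data[off:off + 8]
            if len(chunk) < 8:
                break
            for fmt in ("<Q", ">Q"):           # magic can be either endianness
                if struct.unpack(fmt, chunk)[0] == UBERBLOCK_MAGIC:
                    return True
    return False

print(looks_like_zfs("disk.img"))              # hypothetical image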

BEAST AI needs just a minute of GPU time to make an LLM fly off the rails

BinkyTheMagicPaperclip Silver badge

Re: All that's required is an Nvidia RTX A6000 GPU with 48GB memory

4 grand. Not obscene in the grand scheme of things, just generally outside the reach of most enthusiastic amateurs

I would imagine that it's also possible on GPUs with less memory, it'll just take a fair bit longer. 16GB GPUs are readily available. Tends to get rather expensive beyond there though.

Nvidia aren't the sole supplier of GPUs - there's also AMD and Intel, it's just that AMD are content being an also ran, and Intel are still pretty early in re-entering the GPU market.

Broadcom CEO pay award jumps 164% to $160.8 million

BinkyTheMagicPaperclip Silver badge

Re: Insert obligatory Lord Farquaad 'some of you may die' meme here

I mean, if you *really* wanted to change the system, include all holding companies, and all suppliers and outsourced functions, including those in third world countries. Yes, I know then the executives who 'will move country if you raise taxes, honest guv' might actually move country to somewhere with more permissive rules, but you could at least try to make it a bit fairer.

I'm not against decent pay, or a healthy differential, but that much money is extracting the urine.

BinkyTheMagicPaperclip Silver badge

Insert obligatory Lord Farquaad 'some of you may die' meme here

If ever there was a reason to make CEO pay a maximum multiple of the lowest paid worker..

The self-created risk in Broadcom's big VMware kiss-off

BinkyTheMagicPaperclip Silver badge

We did, although it was more for platform consolidation and ongoing costs than VMWare's behaviour as far as I'm aware. For historic reasons we had a large VMWare on-prem system that was costing us a lot of money. Requests to migrate started early 2023 if not before, and were repeatedly followed up throughout the year.

I completed the migration in my area prior to the end of the year. Some of the VMs were old and in need of enough spring cleaning that it was easier to re-provision, copy data, and set up again, rather than upgrade and live migrate.

We are largely a Windows shop. There's also a lot of legacy but that's being effectively migrated. As such it makes sense to go to Hyper-V, and from an end VM user point of view rather than VM management there's no discernible difference between VMWare and Hyper V VMs. If anything it's better now.

BinkyTheMagicPaperclip Silver badge

Emulation of hardware? Not any more

This is possibly nitpicking, but it's important in some cases : 'Virtualization's entire purpose is to vanish entirely by precisely mimicking hardware' was never true and increasingly bears only a vague relation to the truth, but it doesn't matter much any more.

The early virtualisation products were 'good enough' emulations of hardware that, if you squinted a bit, looked like they probably were a real PC, but even then there were gaps - OpenBSD failed on one virtualisation product because it used a particular unemulated network card feature no other OS did.

Qemu (which is both an emulator by itself and also used by some hypervisors to create a device tree) has a few separate PC types : Q35, 440FX, and ISA. These are all old, from 2010, 1996, and potentially 1981 respectively. There are some very unusual PCs out there, particularly in the industrial PC arena, but seeing a 2023 CPU hanging off a 1996 host chipset is more than a little odd.

It doesn't matter, however. Emulation and virtualisation by being a close hardware representation is also wasteful, and as virtualisation popularity grew the operating system vendors started modifying their operating systems to specifically support virtualisation. Today there are both optimisations when operating systems realise they are running inside a VM, and virtual hardware drivers that bear a passing resemblance to real hardware but offer efficiency and convenience far beyond precisely emulating the real item.
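
As a small Linux-specific illustration of the first half of that (just a sketch, not anything an OS vendor actually ships): a guest can tell it is virtualised because the hypervisor sets the CPUID 'hypervisor present' bit, which shows up in /proc/cpuinfo, and Xen and friends also populate /sys/hypervisor.

from pathlib import Path

def hypervisor_hints():
    hints = {}
    cpuinfo = Path("/proc/cpuinfo").read_text()
    # the kernel exposes the CPUID hypervisor-present bit as a 'hypervisor' flag
    hints["cpuid_hypervisor_bit"] = "hypervisor" in cpuinfo.split()
    hv_type = Path("/sys/hypervisor/type")
    hints["sys_hypervisor"] = hv_type.read_text().strip() if hv_type.exists() else None
    return hints

print(hypervisor_hints())   # e.g. {'cpuid_hypervisor_bit': True, 'sys_hypervisor': 'xen'}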

It's a mature market [1] so whilst migrating VMs can to some extent be a hassle, the architecture is generally understood, and there are tools to assist the process. Broadcom may have some ability to lock in VMWare customers due to their management tools, but they also face the danger, common to all system migrations, that an audit naturally tends to occur at the same time, not infrequently reducing the number of required licences and leading to re-evaluation of the solution.

[1] There are established vendors such as VMWare, Xen, Microsoft, and cloud providers. Most other operating systems also feature hypervisors with varying degrees of functionality and operating system support - a serial console, network adapter, and storage are all that's required to get some useful work started. Then there are emulators such as Qemu, sometimes hardware assisted, that run almost everywhere, and moving into specific domains there is DOSBox ('looks enough like DOS to run games, but not much else') and DOSBox-X ('accurate enough that it should run DOS/Win9x business apps').

I don't see guaranteed PCI passthrough support without a hardware compatibility list working any time soon though. Too many dependencies, hardware bugs, and configuration issues.

Preview edition of Microsoft OS/2 2.0 surfaces on eBay

BinkyTheMagicPaperclip Silver badge

Re: Worth noting the discovery that made OS/2 1 redundant

I can understand the 286 being a fab process stop gap, but that's a different matter from it enforcing the mass market staying on 16 bit.

As I've mentioned on reflection I believe the market was driven by 1) memory - all the 'proper' protected mode OS tended to eat memory for breakfast, and it was expensive and 2) applications. People thought I was a bit mad getting a 486 with 8MB memory to run OS/2 2.1 in 1993, and really that was the reasonable minimum, not the comfortable amount.

Even if OS/2 and NT are discounted (not entirely unfair, neither achieved mass market) the mass market was *heavily* using 32 bit well before 95. At the low end there were DOS programs escaping some of DOS' limitations with DOS extenders, predominantly DOS/4GW. Outside DOS, as soon as Windows 3.1 was released in 1992 it was very clear that its 286-supporting protected mode was a second class citizen, and real mode had been dropped entirely. In the run up to Windows 95 various pieces of 32 bit code had been added for disk and network drivers, and Win32s operated as a stop gap enabling a subset of the Win32 API to run on Windows 3.11.

BinkyTheMagicPaperclip Silver badge

Re: Worth noting the discovery that made OS/2 1 redundant

There's a few 80186 PCs, but they're rare. One reason is that the 80186 has some incompatibilities with the 8086/88 that the 286 doesn't suffer from.

The 80286 is, in general, faster than the 80386 at the same clock speed - but it does depend what you're running and which processor a program was written for.

The 286 was really, really fast at text based applications with some graphics at the time, but if you had the software to take advantage of a 386 it was clearly better.

BinkyTheMagicPaperclip Silver badge

Guess what 'thus far lost to history' means?

Yes, if you want 'OS/2' and aren't picky about the variety you can go and buy a modern release right now in the form of ArcaOS, hit ebay and get an historic copy (mostly 3.x or 4.x; 1.x and 2.x are less common), or visit a few Internet sites where you can 'obtain' a number of disk or ISO images for quite a few releases, including barely released products such as OS/2 PowerPC.

However this is an extremely early release of OS/2 2.0 that isn't archived anywhere. There's no guarantee that even Microsoft has a copy or would be prepared to release it.

Sometimes software is lost to history without backups. This is especially true if it's a pre release no longer considered useful, or a number of games where source control is frequently lacking (once it's been shipped for a while, a number of companies historically weren't concerned with keeping it for the future)

BinkyTheMagicPaperclip Silver badge

Re: Worth noting the discovery that made OS/2 1 redundant

The reason for OS/2 was multitasking and memory protection. Even 16MB became a noticeable limit for certain OS/2 applications reasonably early on. OS/2 should have been released for the 386 to start, but early 386 steppings were extremely buggy, it wasn't a cheap processor, and IBM had made commitments to bring OS/2 to the 286.

There's a lot of reasons for OS/2's failure and the breakup, but the large ones were because of the success of Windows 3.0 due to its applications and crucially the reduced memory usage. OS/2 required more memory than Windows up until the mid nineties when it no longer mattered. NT had similarly high requirements but the majority of users chose 3.x and 9x instead due to lower hardware requirements, greater driver support, and prior to 2000 certain features such as USB and DirectX.

BinkyTheMagicPaperclip Silver badge

The reason for NT's OS/2 subsystem

Obviously Lan Manager wasn't needed once NT was released. Don't know about SQL Server, but the mail routing component of MS Mail definitely required OS/2, and until Exchange was available that was one reason the subsystem had to remain.

BinkyTheMagicPaperclip Silver badge

Re: Nice museum piece

I know you're putting up the 'joke' icon, and you're probably right - in terms of reading them as-is in a random 5.25" drive some disks probably don't work.

However there have been recent articles about what can be achieved with a Greaseweazle, an oscilloscope, and a waveform editor. With careful selection of a good floppy drive, and manual correction of sectors where the data are unreadable (by studying the peaks and troughs in the waveform and correcting it), it can be possible to completely recover data.

VMware takes a swing at Nutanix, Red Hat with KVM conversion tool

BinkyTheMagicPaperclip Silver badge

Re: There is a need for stand-alone hosts

Xen hasn't been buried, there's still XCP-NG. You can also build it from the bare metal: I was mad enough to run a Salix based Xen system a number of years ago using the LILO bootloader, and it worked fine until I had a hardware failure. Linux dom0 naturally; NetBSD is as ever too shonky, FreeBSD lacks passthrough, and Illumos Xen0 appears long dead.

Had a play with XCP-NG last week, was moderately impressed. If you want 'completely free' you'll have to install the community maintained Xen Orchestra rather than the bundled XOA VM, which pushes you towards the maintained but still 'free as in beer', 'please buy a license' XCP options, and requires setting up an account with a third party to do anything useful.

The xl and xe utilities at the command line appear solid, as does the ability to modify boot parameters without faffing around with linux-isms of grub defaults and initramfs or similar.

XOA, at least the one installable over the Internet prior to updates, seems a bit unfinished. Error messages are opaque. It is not designed for a single standalone system hosting both the VMs and an ISO storage pool. Still, xe commands can work around that.

Got VMs working including passthrough with minimal hassle.

KVM doesn't seem that bad either. A try of virt-manager and it was all working fairly easily. Passthrough is more of a pain though.

It's great that in the last seven years or so we've gone from virtualisation being an occasionally provided option which required a lot of fiddling to being bundled in most available OSes : Windows (Hyper-V), Linux (KVM), Xen[1], FreeBSD (bhyve), OpenBSD (vmm). A basic VM is now easy. Migration is available sometimes. Passthrough is still a bit of an arse - too many hardware, driver, and chipset foibles out there.

[1] I'm a particular fan of the fact that Xen is a type 1 hypervisor, and that PV or PVH dom0 guests load underneath it and can therefore have devices completely hidden from them for PCI passthrough, whereas for KVM you're going to have to stick things in the initramfs or mark modules for early load or blacklist them. For hypervisors such as bhyve which are still distressingly bleeding edge, whilst devices can be captured by the passthrough driver it's impossible to exclude them from being probed by the OS really early in the boot process.
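
For what it's worth, a minimal Python sketch of the KVM-side bookkeeping mentioned above - checking via sysfs which kernel driver currently owns a (hypothetical) PCI address before handing it to a guest:

import os

def current_driver(bdf):
    # the 'driver' symlink under the device's sysfs node points at its bound driver
    link = "/sys/bus/pci/devices/%s/driver" % bdf
    return os.path.basename(os.readlink(link)) if os.path.islink(link) else None

bdf = "0000:01:00.0"    # hypothetical GPU address
driver = current_driver(bdf)
print(bdf, "is bound to", driver or "no driver")
print("ready for passthrough" if driver == "vfio-pci" else "bind it to vfio-pci first")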

BinkyTheMagicPaperclip Silver badge

Brings back flashbacks of migrating VMWare images between ESXi versions

It was a bit more than 'a disk image with metadata'. It was typically several files that needed to be converted into one monolithic image, which could then be copied to your new server and converted to the new ESXi format. It worked but was a real pain.

Then again, this was doing it the free way. I'm sure VCenter makes it a snap. VMWare have progressively restricted what is permitted with the free product for years. Can't blame them really, but it does make you wonder why they're targeting KVM, just when it's becoming half decent.

I always used vCenter Converter in the past for going to ESXi, at least until they gutted the functionality and prevented it working properly with later releases.

Europe's data protection laws cut data storage by making information-wrangling pricier

BinkyTheMagicPaperclip Silver badge

Re: Less data stored is the entire point of GDPR!

That shouldn't be a problem though? There was plenty of notice, and there very probably still is time to do so before any potential disaster hits.

To give work credit, although they're not perfect we do take things quite seriously. Every year has been an improvement in data storage and security. Far, far better than a decade ago (but we were then owned by another company who had less stringent standards, and insufficient interest in improving them).

The slightly depressing part is that although we offer GDPR provision for every customer, only a small subset have requested it to be set up.

There is also a degree of self interest. For some customers we're actively pruning historic data, not because of GDPR, but because regular penetration tests and security audits are such a huge pain for legacy customers it's to our advantage to get rid of legacy systems. Which admittedly does make all this testing pretty useful.

BinkyTheMagicPaperclip Silver badge

Less data stored is the entire point of GDPR!

I've very mixed feelings about GDPR.

Data can only be stored for defined purposes for as long as required to achieve those aims. Given that some types of data are only legally required by the authorities for the past <n> years, data shouldn't be kept for longer than that, and that results in lower storage costs.

Having implemented clear down for various customers, it's not straightforward either, particularly if the functionality isn't built in to a pre GDPR product. You have to ensure data are cleared down, that it doesn't impact on performance, that data aren't prematurely cleared down due to customer mistakes, and ideally that there is a (temporary, itself shortly cleared down) log that a clear down occurred, for instances when data are incorrectly marked as old, cleared down, and the customer asks where the data have gone..
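
To make that concrete, here's a very rough sketch of the shape such a clear down job ends up taking - Python with sqlite3 purely for illustration, and the table and column names are hypothetical: delete what's past retention, keep a short-lived log of what went, and clear the log itself down too.

import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 6 * 365        # e.g. a six year statutory retention period
LOG_RETENTION_DAYS = 90         # the clear down log is itself cleared down

def clear_down(conn):
    now = datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=RETENTION_DAYS)).strftime("%Y-%m-%d %H:%M:%S")
    log_cutoff = (now - timedelta(days=LOG_RETENTION_DAYS)).strftime("%Y-%m-%d %H:%M:%S")
    with conn:   # one transaction: log what is about to go, then remove it
        conn.execute(
            "INSERT INTO cleardown_log (customer_id, record_id, cleared_at) "
            "SELECT customer_id, id, datetime('now') FROM customer_records "
            "WHERE last_activity < ?", (cutoff,))
        conn.execute("DELETE FROM customer_records WHERE last_activity < ?", (cutoff,))
        conn.execute("DELETE FROM cleardown_log WHERE cleared_at < ?", (log_cutoff,))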

However, whilst GDPR is a worthwhile principle, legally mandating it is the point at which my enthusiasm fades. Companies who do the right thing will do the right thing regardless. The cowboys will continue to flout it and the punishment is, precisely what, exactly?

It's added another layer of useless cruft to the web, and again, sites that don't apply GDPR properly are very rarely corrected.

Not to mention that the government, perhaps the prime target to apply the GDPR properly, have exceptions, routinely flout the rules, and don't apply it. Remember the DVLA contacting every holder of a HGV license to ask if they wanted to drive trucks again? Certainly illegal under the GDPR, but they did it anyway. Witness the obscured and redacted ongoing dodgy Palantir contract with the NHS. Etc.

Wyze admits 13,000 users could have viewed strangers' camera feeds

BinkyTheMagicPaperclip Silver badge

'This represented around 0.25 percent of all users'

It may be 0.25% of all users, but 1504 users out of 13,000 is a not inconsiderable 11% or so, and you can safely bet the number would have been much higher if the access had been left open longer.

There's no sign of anyone following the sensible advice of :

Never buy an IoT device you can't host elsewhere or at home (To be fair, it appears possible to do this for Wyze)

Don't trust the vendor's cloud service. Especially if it's cheap.

If having the data inadvertently exposed is that important or distressing, don't expose it.

Damn Small Linux returns after a 12-year gap

BinkyTheMagicPaperclip Silver badge

There are a few, but they're a bit specialist.

USB sticks generally take an ISO image anyway, but ISO images specifically are useful for workstation or server motherboards with network media redirection built in

There may be instances where you need to boot a system but don't want to have USB enabled.

The media redirection detailed above may require legacy USB support to be enabled, but legacy USB support can sometimes cause issues with PCI passthrough in virtualisation. I'd have to check if USB sticks themselves work without legacy support set to on.

I'll grant that actual physical media use has dropped to the point that 'DVD' and 'CD' images do not always fit on an actual DVD-R, and most of the time a USB stick, or a Zalman VE USB CD/DVD emulator is a better idea.

Moving to Windows 11 is so easy! You just need to buy a PC that supports it!

BinkyTheMagicPaperclip Silver badge

Still trying to move away from Windows, wonder if I'll make it by 2025 though..

I'm still aiming to not move to Windows 11 at home (at work, already on 11. It mostly works but I'm getting very tired of the bug of taskbar icons going blank, and occasional multi desktop bugs).

It's probably much easier if you're on Linux. On FreeBSD things are more tricky, particularly as I want to use WINE for games. Old games : largely fine. More modern ones, running into problems. Bhyve virtualisation is basically functional but needs work, especially for PCI passthrough. I'm not optimistic it will all be sorted in a year when Windows 10 support ends.

I'm perfectly prepared to pay for up to three years of extended Windows 10 support, by that time hopefully WINE will be in better shape on FreeBSD. Expecting VR support is unlikely though!

For everyday use FreeBSD works fine for browsing and basic productivity, but so long as I keep booting Windows for games or specific apps, I haven't migrated yet.

That also, depressingly, means in the mid 90s when I was in Peak OS/2 Enthusiasm I hadn't managed to ditch Windows properly either. True I was only booting OS/2, but my dissertation was written in Ami Pro for Windows under WinOS/2, because all the OS/2 word processor options sucked. I had a licensed copy of Describe, but it wasn't pleasant to use for long documents and they never added a word counter despite many, many prompts. Ami Pro for OS/2 was horrendously buggy, Word Pro was OK but released after I needed a proper word processor, WordPerfect wasn't great either. I'm not sure Star Office was around, and things like Open/Libreoffice weren't even a speck on the horizon.

BinkyTheMagicPaperclip Silver badge

Re: Moving the Linux is easier

Whilst I am also migrating away from Windows (to FreeBSD in my case), it's not 'easy' if Windows is still your primary machine.

Turn off your Windows system, leave it off for a few months, and handle everything you need to run on Linux, without a VM running Windows

If you can do that, then you've migrated to Linux and it's easy. If you can't, you're still tied to Windows.

Broadcom terminates VMware's free ESXi hypervisor

BinkyTheMagicPaperclip Silver badge

Re: XCP-NG to the rescue

Having just had a play around with XCP-NG I would question that. Usable, yes, but not to the level of polish of even the free version of ESXi.

If you want to use it 'completely free' it's also necessary to patch in the community orchestrator VM rather than the default, which needs an account and pushes you towards commercial offerings.

It has more than a bit of a whiff of management tools thrown together on top of a solid base (Xen). The Xen command line tools are great. XOA less so - missing functionality, doesn't refresh automatically, opaque error messages.

BinkyTheMagicPaperclip Silver badge

Re: Linux is not always allowed in corporates

That's irrelevant. What I'm talking about here is what software is acceptable in some corporates to get you out of a hole. In some cases ESXi was acceptable whilst 'Linux' was not.

They're different skill sets. For the most part you can administer ESXi only knowing about ESXi, whilst for Linux you need to know about Linux. As mentioned ESXi is only nominally Linux, there are noticeable changes in both the distribution and driver model.

Also as I mentioned, probably a moot point now Hyper-V is more mature, but it was a factor a number of years back.

BinkyTheMagicPaperclip Silver badge

Re: KVM certainly would not

My point is that Linux is not always allowed in corporates, and VMware is its own recognised thing[1]. It also depends what you're doing to it. I had another go with KVM last night and whilst the base technology through virt-manager is really easy and impressive to get going, if you need passthrough (which I grant is a bit specialist) it can require an awful lot of fiddling. That's one thing ESXi did extremely well, if you can cope with its limitations.

[1] Yes, technically VMware is 'Linux', but its customised distribution and driver model are different enough that it's best seen as its own separate thing.

BinkyTheMagicPaperclip Silver badge

'Broadcom has pledged to increase VMware's profits substantially and quickly'

Difficult to read this as anything other than 'we're going to screw our locked in customers'

I lost some enthusiasm for VMWare a couple of years in, when they moved from trying to improve their core virtualisation technology to concentrating on management tools.

I used to use ESXi to host a limited number of Windows Server VMs, when work had an owner that spent far too long provisioning systems and arguing about cost. Now everything is for the most part properly hosted, I suspect on Hyper-V; one different large ESX based cloud was recently removed due to cost.

It was abundantly obvious even years ago, that VMWare didn't really want you to use ESXi. The management tools without vCenter are limited, and the hardware compatibility list quickly moved to remove older less capable servers.

Nevertheless ESXi had the advantage that it was acceptable in a corporate environment, in a way that Xen might not get away with, and KVM certainly would not. If I was still in the same situation no doubt Hyper-V would be the preferred option instead.

I see the commercial Xenserver appears to have dropped their free tier (trial is available, but not for production work loads), so for free offerings things like XCP-NG or Proxmox are the way to go.

I'm currently trying to use bhyve on FreeBSD, and boy, is it a technology in its early stages - considerably below the functionality of using Xen from scratch on top of Linux five years ago.

You're not imagining things – USB memory sticks are getting worse

BinkyTheMagicPaperclip Silver badge

Re: Currys

eSpares *used* to be good. Now at least for some products they will only push their third party alternative that is 'just as good'. After a faff trying to get strimmer wire that worked without hassle the only way of getting a genuine Flymo part was from Flymo themselves.

Their shipping is also extortionate.

BinkyTheMagicPaperclip Silver badge

Re: ValiDrive

I don't know - I don't actually use USB sticks heavily enough to worry about their lifespan. SD cards, on the other hand - those I now refuse to buy anything but the endurance products for, after too many failures or cards suddenly going 'read only', and generally I try to use SSDs instead.

I don't see lifespan being an issue, a program such as f3probe should only need to write to the device once. After it's been established that the 16GB stick really has at least 16GB of concurrently usable flash, it's safe to use.

Surely the spare blocks are a buffer against an effective controller initiated trim. When the device is using the spare blocks to achieve fast write speeds, once idle it should be clearing some of the other blocks to replenish the spares
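
To illustrate the 'write once, then trust it' point, here's a greatly simplified and destructive Python sketch of the idea behind f3-style capacity checks - not how f3probe is actually implemented, and the device path is hypothetical, so don't point it at anything holding data you care about:

import hashlib
import os

BLOCK = 1024 * 1024                 # 1MiB probe blocks
STRIDE = 256 * 1024 * 1024          # probe every 256MiB of claimed capacity

def pattern(offset):
    # deterministic, offset-dependent data so a misplaced block is detectable
    seed = hashlib.sha256(str(offset).encode()).digest()
    return (seed * (BLOCK // len(seed) + 1))[:BLOCK]

def probe(dev, claimed_bytes):
    offsets = range(0, claimed_bytes - BLOCK, STRIDE)
    with open(dev, "r+b") as f:
        for off in offsets:
            f.seek(off)
            f.write(pattern(off))
        f.flush()
        os.fsync(f.fileno())          # push the data towards the device
        # in real life you'd also defeat the OS cache before re-reading; f3 handles that
        for off in offsets:
            f.seek(off)
            if f.read(BLOCK) != pattern(off):
                return False          # an earlier block was clobbered: capacity is a lie
    return True

print(probe("/dev/sdX", 16 * 1000**3))   # hypothetical 16GB stick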

BinkyTheMagicPaperclip Silver badge

Re: ValiDrive

According to a Google search, if the stick supports UASP then it *should* also support TRIM.

I would have thought a tool like F3 without a subsequent TRIM would impact on performance, but the performance should still be 'good enough'. You can't claim a '60Mb/s stick' does 60Mb/s only when unused, it needs to handle it once the stick is full, space freed, and data written again.

It's still far better to have reliable but slow performance than partial or complete data loss due to a controller that lies about the stick capacity.

BinkyTheMagicPaperclip Silver badge

Re: Simple solution?

Hoover bags - use Argos. I know Sainsburys have gutted them a bit, but for a lot of things they're pretty decent, and their reviews are honest.

Currys are actually pretty decent. Yes, sometimes the mark up over online is considerable, but at other times it is not and you do need to make *some* allowance for operating a physical store.

It does depend what you're buying though, the range of slightly unusual computer accessories that you could previously expect to find in a store or online has reduced to increase profit margin, and in those cases the only convenient options are admittedly Ebay and Amazon

IT suppliers hacked off with Uncle Sam's demands in aftermath of cyberattacks

BinkyTheMagicPaperclip Silver badge

Re: [Was: Warrantless searching in 3, 2, 1] It'll Work Like This:

There are ways around this :

separate division specifically to handle government contracts

separate hardware

separate staff, or at least a very large amount of slack in staff numbers

eyebleedingly expensive cost and contract terms for this that would make even Herod say 'wait, that's a bit severe'

In its tantrum with Europe, Apple broke web apps in iOS 17 beta, still hasn't fixed them

BinkyTheMagicPaperclip Silver badge

Re: a dev

The solution to being able to deploy apps without administrative permissions is not a PWA, it's writing a native application that doesn't require administrator rights. There are plenty of portable apps out there which install in the user's home directory.

If the actual reason is 'I don't want to comply with the IT department's ability to restrict software installation', then the PWAs should also be blocked; system administrators and businesses don't restrict things out of spite.

Wait, hold on, everyone – Mozilla thinks Apple, Google, Microsoft should play fair

BinkyTheMagicPaperclip Silver badge

Yeah, going to echo the commentators above who say you reap what you sow

I am using Firefox here, because I dislike the direction the market is going in : Chrome is clearly becoming the dominant browser, and Google are abusing their position.

I'm definitely not using Firefox because it's the best browser. Some of the extensions are decent, but a lot of its most appealing functionality is despite Mozilla, not because of it.

Mozilla spent literally years faffing around with non browser technologies that failed to take off, and were unlikely ever to take off. Chrome improved in leaps and bounds and started to become dominant, to the point where we're approaching the bad old days of the 90s to early 00s where sites required per browser customisation or simply only worked in one browser (Chrome). Firefox dropped functionality that made it appealing, and then only very slowly added some of it back in.

You can't put out a load of mostly spurious complaints generally concentrated on mobile, whilst at the same time spending literally years bringing extensions to Firefox mobile.

Mozilla took their eye off the ball, this is the result. It was entirely predictable, many people have warned about it, and they were ignored.

Amazon Ring sounds death knell for surveillance as a service

BinkyTheMagicPaperclip Silver badge

Re: Correction for UK readers

Dropping a 'social tariff' doesn't surprise me at all. Energy companies (via Ofgem) are already supposed to help the vulnerable, but Ofgem's only other remit is to protect the stability of the energy market, not provide the lowest bills for non vulnerable customers.

A social tariff is a Tory vote loser, even if it would help many people. So the social tariff helps people, but immediately it's necessary to add a taper, because if a hard cut off is used it breeds resentment by those just outside the cut off who end up worse off than those inside it.

The taper is logically much larger than immediately expected, because paying for the social tariff or taper is probably going to be from increasing bills for those outside the social tariff or taper. This increase has to either be considerable, or lower but for an extended number of years. If it's for an extended number of years it has to be administered, which has a cost, and people will try to game the system.

By the point you reach consumers who are most definitely outside the social tariff and the taper, it's already moving towards a large minority of the population. No idea on the numbers, but it wouldn't surprise me if it starts working out at those on the higher rates of tax.

I'd also bet a small amount of money that those with increased bills are more likely to vote Tory. They can afford the bills but they might notice the difference in their budget, and simply don't like the fact that some people are paying much, much less than them.

There's already (rightly) a huge amount of resentment at the large increase in the daily standing charge, to force consumers to pay for the failures of Ofgem to regulate, and manage companies that have gone bankrupt. Ramping up bills for a social tariff would I suspect have an even larger impact.

Apple has botched 3D for decades. So good luck with the Vision Pro, Tim

BinkyTheMagicPaperclip Silver badge

Again, visualisation. At the consumer end : Google Earth. Also look at mathematical modelling, sculpting, etc.

It's not necessary to be using VR all the time for it to be a worthwhile technology.

I have to say personally weight is not an issue I have with VR, although I've not tried any of the Meta Quest offerings yet.

Tolerance to VR/inner ear and eye disagreements does improve over time, but most apps work hard not to trigger it.

If you fancy falling over or provoking your inner ear, try Aircar (both on the Meta/Oculus store and on Steam). It's free and akin to Bladerunner. It is also very intense, the most trying VR example I've found. I only found it tolerable by occasionally closing my eyes to resolve the disparity between a rapidly changing view and my inner ear telling me I wasn't moving.

BinkyTheMagicPaperclip Silver badge

Re: Not convinced

The market for games and porn should be large enough to support VR, but that's not as large a market as some want, and VR is usable for much more than those two activities.

Google Earth is an excellent use of VR, it's a good platform for visualisation. A stereoscopic 3D monitor doesn't really compare on that level and almost always requires glasses: the number that work with an unaugmented pair of eyes is extremely small and comes with caveats.

However, VR is unlikely ever to be more than a large niche - it will always require something to be strapped to your head, and architecting software to be properly VR capable requires significant effort. Consumers tend to object to the cost vs experience length, in comparison to non VR display based experiences.

AI PC hype seems to be making PCs better – in hardware terms, at least

BinkyTheMagicPaperclip Silver badge

Stick it in 'the cloud'

Leaving aside whether AI is actually going to be worth it for the majority of users (a very large if) that doesn't equate to needing gobs of local compute power.

Stick in a hosted or departmental server with a few GPUs in it, offload tasks to that. There is no need for every user to have a powerful GPU and huge amounts of memory when it will remain unused 98% of the time.

No argument that 8GB isn't enough, though. My rarely used main system has 64GB, because DDR3 was cheap and I like virtualisation. This fanless Dell system used for browsing has 20GB in it, because again, one 16GB stick was cheap. It's currently using 9GB just to run FreeBSD with Wayland and Firefox with 25 tabs open! That includes 5GB of ZFS cache, and zero use of swap.

The rise and fall of the standard user interface

BinkyTheMagicPaperclip Silver badge

Re: GUI Standards?

What you're missing is that 1) To get work accomplished is followed by 2) 'with minimum effort'

There is unlikely to be a lower effort than using an application you already know well, unless the other application really is that much of an improvement.

Having a standard means the amount of retraining needed to learn newer applications is lower, and it's more likely they can be picked up and used with minimal effort.

I think in fact you would be rather upset if applications failed to meet the standard : 'standard' use of mouse buttons[1], 'standard' use of clipboard, standard use of command line parameters of '-v', '-V' or '--version' to get an application version (if it used '-vers' I bet you would roll your eyes)

[1] Mostly standard these days. In the 90s? MacOS : one button with keyboard modifiers. Windows : left mouse button drag with keyboard modifiers. OS/2 : right mouse button drag with keyboard modifiers. X Window System : frankly it could be anything, quite possibly middle mouse button for drag (plus modifiers)

Not to forget truly horrid design, such as one obscure Unix system that hosted a powerful CAD program where it was necessary to hold the mouse button down, scroll all the way down the menus, and not at any time slip off the cascading menus or release the mouse button, because releasing was what activated the functionality.

BinkyTheMagicPaperclip Silver badge

Re: This far down in the comments

Oh I don't know, there was WordPerfect 6 for Windows and DOS too. Did 5.2 use Windows printer drivers, rather than that or the earlier Windows version which still required WordPerfect specific drivers? They should never have released it with that requirement