* Posts by Boothy

1232 publicly visible posts • joined 17 Jun 2011

Surprise! That £339 world's first 'anti-5G' protection device is just a £5 USB drive with a nice sticker on it

Boothy

Re: 5G Facts Summary So Far = Not Much

Troll or for real?

1. The type of radiation being emitted

There is no 'real' type, it's all electromagnetic radiation (EMR). Unless you simply mean things like radio, infrared, visible light, which could be seen as sub-categories of EMR.

specifically its wavelength or frequency

All this changes is the sub-categories the EM radiation falls into, e.g. radio, visible light, x-rays etc. Or if looking specifically at radio, the bands it falls into, but in those cases, that's just an agreed convention (i.e. long wave etc).

The only other major differentiation related to frequency is whether the EMR is ionising or not, i.e. whether it contains enough energy to actually harm biological cells in plants and animals. You need to be at ultraviolet or above for that to be the case; anything at visible light frequencies or below, which includes all radio frequencies, just doesn't contain enough energy to do any direct cell damage.
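Just to put rough numbers on the 'not enough energy' bit, here's a quick back-of-envelope in Python (the 3.5 GHz and 28 GHz figures are just example 5G bands, the exact bands vary by country and operator):

    # Rough back-of-envelope: photon energy at 5G frequencies vs ultraviolet.
    h = 6.626e-34          # Planck constant, J*s
    eV = 1.602e-19         # one electron-volt in joules

    def photon_energy_ev(freq_hz):
        """Energy of a single photon at the given frequency, in eV."""
        return h * freq_hz / eV

    for label, f in [("5G sub-6GHz (3.5 GHz)", 3.5e9),
                     ("5G mmWave (28 GHz)", 28e9),
                     ("Ultraviolet (~300 nm)", 1.0e15)]:
        print(f"{label:22s} {photon_energy_ev(f):.2e} eV per photon")

    # Breaking typical chemical bonds takes a few eV, and ionisation proper
    # needs roughly 10 eV or more, so radio photons fall short by four to
    # six orders of magnitude, however many of them there are.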

...This is variable with 5G as there is no single standard wavelength used.

Nope, fixed and well defined frequencies that are part of the standard. There are many frequencies in use, but they are all defined and agreed. This has to be the case, as the phones and masts have to be using the same frequencies otherwise they wouldn't work.

2. The amplitude of the radiation, akin to the volume or amount of radiation that reaches the subject of concern.

Which is absolutely tiny for 5G, or any modern phone related radio signals (and radio in general). Major breakthroughs here over the years have been improving the sensitivity of the receivers, thus allowing signal strengths to be reduced, plus dynamically adjusting the strength of the transmissions to be just enough to work for that connection.

3. The length of time of exposure.

True, but irrelevant for 5G or any radio signals, as there isn't enough energy to cause cell damage in the first place.

4. The sensitifity [sic] of the subject tissue to a specific type of radiation.

All well researched, documented and understood. i.e. Radio/5G is not harmful.

It may turn out that 5G radiation is as innocuous as the radio waves we've had traveling around and through us for over a century

There is no such thing as 5G radiation!

5G is just a new standard covering how we modulate a signal onto a radio carrier frequency, and defining which frequencies to use for that standard. The same thing we've been doing since we discovered radio in the first place, just more advanced.

Just to be clear, all radio frequencies already exist, we can't create new ones, all we can do is use the ones nature provided, and use them more efficiently.

Whether that frequency is carrying 5G, 4G or some other radio standard is irrelevant; it's still the same radio frequency, irrespective of the modulation standard being followed. (For frequency modulation, this would actually be a small band, with an upper and lower frequency limit around a defined central frequency.)
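As a toy illustration of 'same carrier, different modulation' (a deliberately crude Python/numpy sketch, nothing to do with the actual 5G waveform, and the numbers are arbitrary):

    # Same 100 kHz carrier, two different modulation schemes applied to it.
    import numpy as np

    fs = 1_000_000                                # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
    fc = 100_000                                  # carrier frequency, Hz
    data = np.sign(np.sin(2 * np.pi * 500 * t))   # crude 500 Hz 'message'

    am = (1 + 0.5 * data) * np.sin(2 * np.pi * fc * t)       # amplitude modulation
    fm = np.sin(2 * np.pi * fc * t
                + 2 * np.pi * 5_000 * np.cumsum(data) / fs)  # frequency modulation

    # Both signals sit on the same 100 kHz carrier; a different standard
    # (4G, 5G, whatever) is just a different, more elaborate way of doing
    # this step, not a different kind of radio wave.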

Much of the spectrum (i.e. the frequencies) being used by 5G has been in use for many years; some of it is currently in use by 4G, for example, and some of it was used for old analogue TV transmissions, which were far stronger signals than 5G.

5G also has many updates to improve things like power usage. For example, 5G can focus the signal on a specific device (think of it like shining a torch from the tower onto the spot where the phone is), and this can be done for hundreds (or thousands, depending on tower size) of devices connected to each cell tower. This means less radio just being blasted out in all directions, reducing overall power needs etc.

If EM sensitivity was a real thing (which it isn't imho), then 5G, and the overall migration to modern, more efficient digital transmissions, would most likely help those people, as 5G is much less wasteful than older tech that used some of the same frequencies, like 4G and analogue TV transmissions, etc.

Microsoft blocks Trend Micro code at center of driver 'cheatware' storm from Windows 10, rootkit detector product pulled from site

Boothy
WTF?

Perhaps update the certification requirements

Is there a valid reason for a driver to ever look at VerifierCodeCheckFlagOn()?

If not, then I'd suggest MS update their certification requirements to include a statement along the lines of "Your driver must not access VerifierCodeCheckFlagOn() at any time", and then update the testing to include a scan of the code for any references to VerifierCodeCheckFlagOn(), automatically failing the driver if any are found.
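Something along these lines (a minimal sketch using the third-party pefile module; a real certification check would be far more thorough, and a determined driver could hide the call, so treat it as illustrative only):

    # Fail a driver binary if its import table references a banned routine.
    import sys
    import pefile

    BANNED = {b"VerifierCodeCheckFlagOn"}

    def banned_imports(path):
        pe = pefile.PE(path)
        found = set()
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            for imp in entry.imports:
                if imp.name and imp.name in BANNED:
                    found.add(imp.name.decode())
        return found

    if __name__ == "__main__":
        hits = banned_imports(sys.argv[1])
        if hits:
            print("FAIL: driver imports", ", ".join(sorted(hits)))
            sys.exit(1)
        print("PASS: no banned imports found")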

'I wrote Task Manager': Ex-Microsoft programmer Dave Plummer spills the beans

Boothy

Re: Kudos to a skilled programmer..

Especially seen as they own it these days.

Perhaps they could add it in as an optional component in the new Power Toys?

Boothy

Indeed, I'd forgotten about doing that. It's a long time since I've used a 9x machine, but I can remember having to restart explorer.exe many times. It surprised some of my colleagues at the time; they hadn't realised that you could do that.

Boothy

I always suspected, after first seeing the updated TM in Win 10, and knowing that MS now owned Sysinternals (which I'd used for years), that they'd basically borrowed some of the functionality from Process Explorer to put into the new TM. Like the tree view on the Processes tab.

I still install Sysinternals suite on every machine I use though, and often have Process Explorer run on startup, with tray icons for CPU and IO, as it's so much better than TM, imho anyway.

Also glad that so far at least, MS don't seem to have messed with Sysinternals.

In colossal surprise, Intel says new vPro processors are quite a bit better than the old ones

Boothy

Re: Scratch, Scratch-scratch, Scraaaaatch

Replying to my own post here, but just in case anyone reads this now, things have changed (and despite the downvote, what I wrote was true and confirmed by AMD at time of the original posting).

The new news is that AMD have now backed down on the lack of Zen 3 support for 400-series boards, such as the B450 Max etc. (Although 300-series boards are still unsupported.)

But they have said this will only be available in a beta BIOS from the board manufacturer, at the board manufacturer's discretion. The user has to confirm they have a Zen 3 CPU before they can get or apply the beta BIOS (as this BIOS removes older CPU support on some boards, due to ROM size limitations, and so could effectively brick your board if applied without a Zen 3 chip to put in).

Boothy

Re: Scratch, Scratch-scratch, Scraaaaatch

Got a Ryzen, very happy with it.

Just be careful if getting an AM4 socket desktop, and if you want it to be upgradable to the new Zen 3 chips later on, as AMD just announced that only the 500-series chipsets (X570 + B550) will support Zen 3. Basically breaking their earlier commitment to support all AM4 CPUs through 2020 on existing AM4 platforms.

So anyone with for example a brand new X470 or B450 board (even the new 'Max' versions) can't upgrade past Zen 2. (But that still means up to a 16 core, 32 thread beast of a machine, if going desktop).

Linus Torvalds drops Intel and adopts 32-core AMD Ryzen Threadripper on personal PC

Boothy
WTF?

Re: AMD vs. Intel: War Games v3.0

Why do I notice auto-correct typo's after the edit time window is up!

architecture coming at

=

architecture coming out

many thing next year, but something they might try for a Nov/Dec launch for Christmas sales

=

many think next year, but some think they might try for a Nov/Dec launch for Christmas sales.

Boothy

Re: $$$

The price was based on competing with Intel, who were at least two times (and sometimes many times) more expensive for a similar (and often lower spec) machine.

Intel did drop their prices in order to compete with AMD, but they are still far more expensive than an AMD equivalent. Plus of course Intel can't compete directly with the high end Zen 2 parts, as they have nothing to put up against the 64 core, 128 thread part.

Boothy

Re: AMD vs. Intel: War Games v3.0

AMD are competitive in the mid range GFX front, with their 5700XT being on average somewhere between a 2060 Super and a 2070 Super (which itself isn't far behind the 2080 non super). Although their drivers have been a bit meh for the last year or so (black screen issues etc), but they seem to be working to fix those problems.

Hopefully RDNA2 (the current 5700s use RDNA 1), which is due out towards the end of the year, will help AMD on the GFX front. So far RDNA2 in the new Xbox and PS consoles seems to be comparable, at least on paper, to the RTX 2080. Silicon has already been demonstrated for the PS5 (look up 'Unreal Engine 5 tech demo'). The expectation is that in a PC GFX card this should be even better than the console versions, as you don't have the same power or heat limitations. So who knows, maybe AMD will beat the 2080 Ti at that point!

Although also worth noting, nVidia isn't sitting idle, they have their new 7nm Ampere architecture coming at, with some speculating that the new 3000 chips being quite a bit faster than the 2000 range, so will be interesting to see what the new 3080 Ti looks like. All just rumours currently though on release dates, many thing next year, but something they might try for a Nov/Dec launch for Christmas sales. Who knows!

Linux desktop org GNOME Foundation settles lawsuit with patent troll

Boothy

Re: impressive, but how ?

I imagine turning up for a chat with RPI, and explaining that they'd engaged Shearman & Sterling, a near $1B revenue law firm, that's been around since 1873, and who has clients like Sony & Bank of America on their books, likely focused their mind a bit on the prospect of being able to win.

Oh yes, and Shearman & Sterling are doing the work for free, how much is your legal team costing you?

Also seems the patent itself is bogus, looks like a generic method of sorting images based on criteria such as a topic. I suspect the patent is too broad, and should never have been granted in the first place. (Fails the Alice test).

I think RPI here have basically decided they can't risk court, as the patent would likely be revoked, thus automatically losing the case, and of course risking their portfolio.

The Electronic Frontier Foundation did a write up on the patent side here.

Tata Consultancy Services tells staff to go to their rooms and stay there, even after the pandemic passes

Boothy

Re: WFH

If this does change the status quo with regards to many more people WFH, and especially if it becomes the new norm, at least in some sectors, then I'd expect landlords will change their offerings to accommodate. e.g. A small office space (i.e. a desk and chair) per person, could become a requirement for many apartments.

Landlords would do this as they'd assume a 'professional' person, and therefore that they can bump the price up for the same floor space (the space for the desk gained from a smaller bed/wardrobe/less other furniture), etc.

Boothy

Many years ago I worked for a company that had two distinct versions of WFH.

Working From Home: This was basically someone who did it occasionally, i.e. office based most of the time, had a desk, and just the occasional day, perhaps one every week or two, working at home.

Then there were Home Workers: These were officially based at home, so no allocated desk in an office, and only expected to go into an office on rare occasions (like annual appraisals, which always had to be done in person, client workshops, group training sessions etc).

The key difference was, if you were just WFH, there was no special treatment.

But if you were a 'Home Worker', someone from HR came to inspect your house, check that you had a proper desk and chair, a working heating system, a phone line (late 90s, so before tin'ternet took off).

If you were missing anything, such as a proper H&S compliant chair, one would be provided. You were also paid a monthly allowance to cover things like the increased heating bill, and phone line usage etc.

HR would visit each year or two to make sure you were still H&S compliant, i.e. correctly set desk and chair etc.

Apple, Google begin to spread pro-privacy, batt-friendly coronavirus contact-tracing API for phone apps

Boothy

Quote: "I guess I wont be updating my phone then"

You don't mention what phone OS you have.

If it's Android, then the update is via Google Play Services, so is automatic on Android 5.0+ (with Google's stuff installed of course). Google Play Services also does silent auto-updates, so it ignores the Play Store's setting for automatic app updates.

You'd have to disable data for Google Play Services (which may well break other things), or never use data or WiFi on that phone again, or do something else to block the silent updates.

Plus of course you'd have to never buy a new Android or Apple phone again, as any new ones will have this new API baked in anyway eventually.

Boothy
Linux

Shame in a way it's a closed API

Shame in a way it's a closed API (although I very much understand why that's the case).

As someone, or more likely a team, could knock up a UK Open Sourced version, using the API, and with none of the centralised crap. Fully peer reviewed etc.

This could then get promoted by all the various privacy groups etc that are currently lambasting the NHS version.

Won't happen though, as they'd need access to the API, and cooperation from the NHS for the testing positive side of things.

Boothy

Re: Accidentally

Apparently part of the issue with location services and Bluetooth BLE, is the increasing use of Bluetooth beacons. If BLE could be enabled without location services being on, it would still mean some apps could find your location via the beacons. Therefore Google pushed BLE into location services, so turn location off, you also turn BLE off.

Not saying this was the right choice, but I can sort of understand the rationale.

Personally I'd much rather have more granular control of a device, and be able to explicitly switch on/off regular Bluetooth, BLE, GPS etc all separately, and also deny, by default, any apps from accessing those services, and only enable it on a per app, per function basis. But I suspect this goes directly against what Google are trying to do with Android!

SD cards hop on the PCIe 4.0 bus to hit 4GB/s with version 8.0 of storage spec

Boothy

Re: Presumably

Assuming that's the 'SanDisk Extreme PRO 1TB SDXC Memory Card', which is listed as 170 MB/s, they are £323.99 on Amazon currently.

Still expensive though, when you could buy 3 x 1TB M.2 NVMe drives for £330!

Swedish data centre offers rack-scale dielectric immersion cooling

Boothy

Re: In days of yore

@TechnicalBen

Quote: In tests water cooling is not "better" as such.

Granted 'better' is subjective, but a good water cooler will always * beat a good air cooler. Water is just more efficient at moving heat around, and normally you'd mount the water cooler rad at the top of the case, venting heat directly outwards, unlike an air cooler that vents within the case and so then needs that heat removing via case fans. (You still have case fans in a water cooled system, they just have to do less work as they are only moving GPU and motherboard heat, so tend to run slower, and so quieter.)

* 'beat' is only by something like 2-4 deg c. Which may not matter to you, and so may not be worthwhile depending on your use case, but could matter to someone else.

* Also a good aircooler will beat a bad, or mediocre water cooler.

Gamers Nexus did a good video on this only a few months ago. Liquid Cooling vs. Air Cooling Benchmark In-Depth

That includes running the coolers at 100% fan speed (so best cooling performance irrespective of noise), and also noise normalised testing (so same db for all coolers).

Quote: Is it more flexible in positioning in cases?

Yes, although that's kind of obvious ;-) An aircooler can only go in one location, whereas a water cooler rad can be (depending on case) mounted on the top, front or bottom of the case. Most modern mid tower cases are designed with water cooling in mind.

Personally, I'd also say overall, an All-In-One is easier to fit, at least compared to a good, i.e. large tower, air-cooler. (Based on experience with both types). Although a simple small aircooler (like the sort AMD provide with their CPU's) is easier still.

Quote: Does it allow for larger radiators?

Yes, the standard sizes (for an All-In-One) are normally 120, 240 or 360, which have 1, 2 or 3 x 120 mm fans respectively, or 140, 280, 420, which have 1, 2 or 3 x 140 mm fans respectively. With the radiators matching the area that the fans cover. So a 280 would have a rad double the size of a 140 etc.

Radiators are usually slightly wider than the width of the fans, and are longer than the posted size.

For example a '280' (one of the more common ones in use) uses 2 x 140mm fans, so the rad would be around 143mm wide, 315mm long (as it needs to accommodate the fans + a small reservoir at one end, plus the pipe connections at the other end), and around 30mm thick. Although these sizes vary by model and make.
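If it helps anyone eyeballing whether a rad will fit a case, here's a rough rule of thumb based on the numbers above (real products vary by make and model, so only use this as a sanity check):

    # Approximate AIO radiator dimensions from fan size and fan count.
    def approx_rad_size_mm(fan_size, fan_count,
                           end_tank=35, width_margin=3, thickness=30):
        length = fan_size * fan_count + end_tank   # fans + reservoir/fittings
        width = fan_size + width_margin            # slightly wider than the fans
        return length, width, thickness

    print(approx_rad_size_mm(140, 2))   # a '280' rad: roughly (315, 143, 30)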

Quote: Yes. Is it "better"? Results may vary. ;)c

Indeed they do vary :-)

Edit. And just to mention, all the above is related to AIO (All-In-One) rather than a custom loop.

Boothy

Re: In days of yore

I haven't used an air cooler on a CPU for over a decade (in systems I've built).

All-in-one water coolers these days are so simple to fit, easier than most air coolers. They also generally provide better cooling, and are normally quieter doing it than an equivalent air cooler (due to the lower fan speeds needed), and don't get in the way of your memory slots.

Only real issue tends to be price, plus eventually they'll need to be replaced as they can bung up internally over time.

Never done a custom loop yet, but I think I'd only look at that, if I was sticking something like a 3950X and a 2080Ti in the same case, and I'd water cool both CPU and GPU then. But they are stupidly expensive, and I just can't justify that sort of money on a hobby machine.

Worried about the magnetic North Pole sprinting towards Russia? Don't be, boffins say, it'll be back sooner or later

Boothy

Re: And the South Pole?

Yes, they are also not antipodes (they are not exactly opposite each other).

Have a look at this site: https://maps.ngdc.noaa.gov/viewers/historical_declination/

Untick 'Isogonic Lines' for a clearer view, and tick 'Modeled Historical Track of Poles' to see the path the poles have taken.

Boothy

Re: A local effect?

All that's changing is the point on the Earth's surface at which the northern magnetic pole is focused. The overall strength of the Earth's field, as a whole, hasn't changed.

You can see that in the 2nd diagram, the right hand image shows the decrease in the Canadian 'hot spot', and an increase in the Siberian one, pulling the pole towards Siberia, but overall the strength is still the same as it was before.

Ampere, Nvidia's latest GPU architecture is finally here – spanking-new acceleration for AI across the board

Boothy

Re: OK ok so it's fast

Just as a comparison, some current GeForce RTX 2080 Ti cards pull over 300W at full load (reference cards are around 280W).

That's with a Turing card, so on the TSMC 12nm process. Ampere uses 7nm instead, and the A100 (rated at 400W) is still pulling ~33% more power!

Boothy

Do you mean directly? As in a workstation/PC?

If so, then no, at least not as far as I know. The A100s are specifically designed for data-centre usage. They don't even have video output on them.

But Ampere, the microarchitecture the new A100 is built on, is coming to workstation and mainstream cards at some point, we just don't know when yet.

For ref, nVidia have stated that Ampere based chips will replace all current mainstream consumer (i.e. regular GTX/RTX), prosumer (Titan) and professional (Quadro) cards. The expectation is that the Titan and Quadro cards will use similar, if not the same, chips as the A100, with the other cards being cut down versions (i.e. fewer CUDA cores etc).

DBA locked in police-guarded COVID-19-quarantine hotel for the last week shares his story with The Register

Boothy

Re: How far away is home?

As someone who drove (early January) from Sydney to the north end of the Gold Coast, and then a week later flew from there to Melbourne, I concur.

The drive took about 12 hours, including a couple of short breaks. Thank <deity> for automatics and cruise control!

The flight down to Melbourne, which of course passes Sydney about half way down, took a little over 2 hours, and being domestic, you only needed to be at the airport about an hour before the flight.

'We're changing shift, and no one can log on!' It was at this moment our hero knew server-lugging chap had screwed up

Boothy

Re: 1990s banking / No-one was allowed near the IT or comms rooms, even managers

Had a similar thing happen to me in the early 2000s. One of the teams in our building was moved out to head office, about 100 miles away (our office was basically the IT hub).

But they still had gear in the single server room we had, which had a daily backup tape that needed rotating out. I got the job for a short while.

Boothy
Linux

It was outside, by the back door!

Some time ago, 1999 I think, where I worked we had what was called the Customer Data Interchange server, or CDI as it was called back then. Basically a small integration platform, running on AIX, managing incoming data from customers, mostly dial-up at the time, using UUCP and Kermit, we also had a few leased line connections with larger clients, although no Internet connections back then (they arrived in 2001).

Peak times were late afternoons during the week, but we did have a little bit of data over the weekends from some of our larger clients. One of these larger clients had tried using the service one Sunday, around lunchtime, to no avail.

I was the lucky one providing call out that weekend; we had a shared laptop and a pager that were handed over every Tuesday to the next person on call. The pager went off that Sunday lunchtime. I called the Unix Ops team, who'd paged me and who were in the office 24/7, and asked what was up. "Client X can't get any data through, can you have a look?". "Okay" I say.

I dialled in from home (we had a modem rack for remote terminal access), got onto our jump box, and then tried to access the CDI server; it timed out. Tried various network tools, no response to ping etc. The CDI server was an AIX box that basically just kept going, 24/7; I'd never known it once to actually crash or freeze up. So I'm thinking maybe a hardware issue, or a network problem.

So I called Unix ops again..

Me: "Hi, anything going on in the DC today?"

Ops: "Yes, there was someone scheduled in this morning to decommission some old unused gear. Why?"

Me: "Are they still there, and could they go check what was actually decommissioned, specifically anything related to CDI?"

Ops: "ok, I'll call you back in a few".

There was me thinking maybe someone had pulled out one too many network cables or something.

30 minutes later, the phone rings, it's Ops: "Hi, did you say CDI?".

Me: "Yes why?".

Ops: "Well the guy doing the work left the building about an hour ago after finishing the work, and isn't answering his pager (turned out later it was turned off). But we got one of the security guys (the only other people on site) to go have a look, and they found a box by the back door, near the skips, with a label on it, saying 'CDI'"

Me: !!!! "Ah, that could be an issue!"

Turns out, of course, the guy doing the decommission work had decommissioned one too many servers. Our CDI server was ancient, and was due for replacement the following year. Turned out everything else in that section of the DC (built in the early 80s I believe) was being decommissioned, and he'd basically just removed the lot, including our active server.

Some panicked calls from Ops trying to get hold of someone else who could help. They did manage to get someone, who then had to travel to the DC, carry the box back in from outside (thankfully it hadn't been raining!), connect everything back up, and just hope for the best when the power button was pressed.

I got another page late afternoon, spoke to Ops, who said the server was up and running, and could I check please. I dialled-in again, had a look around, did some housekeeping, and sure enough, everything seemed to be working fine.

To the credit of whoever it was who went in that afternoon, despite not being involved in the removal, they went in, and managed to get everything hooked back up and working, and stayed on site while I checked things out. He apparently also put a big label on the front of the box to state that it was a Live service, and not to remove it without getting clearance from my team!

Needless to say, we did a lot of manual monitoring for the next few days to make sure everything was running fine, and I was also involved in a few lessons-learned meetings, which changed a few of the processes we had (or just created new ones, as they didn't exist yet!). Including, for example, requiring anyone doing any out of hours work at the DC to be available on call for the next 24 hours minimum. If they couldn't do the on call cover, they weren't allowed to do the work.

US small biz loan system bans software robots. The lesson? Make sure IT knows about any automation projects

Boothy

Out of curiosity, did the target organisation not provide an API? Seems odd (to me) that if they were expecting this level of user input, they wouldn't provide an API.

Or if they did provide an API, why ever use RPA in the first place?

My general thought would be: always use an API; no API, we take our business elsewhere. But of course context is everything!

Boothy
Mushroom

Always involve IT, even if it's just as an FYI at the start of a project

Quote: "The lesson for anyone thinking of deploying RPA is that they must involve IT in projects early on. Business teams thinking they can use RPA to get automation without troubling IT will find it is a false economy, said Neil Ward-Dutton, VP of AI and automation practices at IDC."

This goes for many other things, not just for RPA.

Oops, this ended up longer than expected! Sorry.

Not RPA, but in a similar vein. A good few years ago (early 2000s), the company I worked for did a project without our (IT) knowledge. I was one of the techies looking after what was basically an integration platform. We took in customer data in various formats (CSV, EDIFACT, XML, TRADACOMS) that came in via Internet FTPS and dial-up (UUCP and Kermit!), and converted all this to formats the internal systems could handle (XML for the newer stuff (at the time), good old fixed-width for the IBM Mainframe we had).

The peak data started mid afternoon and ended late evening (end of day stuff), and the majority of the clients (with a very few low volume users as an exception) did everything as batches at the end of the day, hence the peak being when it was. So we'd typically get just one or two files from each customer, but each file would contain 100s to 100,000s of data items.

As such the platform was tuned for batch processing, and all the internal transfers and back end systems were set up the same way, the expectation being a few files per customer, but with many records in each file.

Anyway, my employer had outsourced some of its IT to a certain company whose name begins with the letter 'F', and they'd designed a web app a year or so earlier that customers could use instead of building their own systems (aimed at small and mid-sized customers), and this system simply appeared as just another customer to us. The fact there were 100s or 1000s of customers behind it didn't really matter to us. They batched up all the customer data together every 30 minutes or so, and sent that through to us. All was happy with the world.

The 'Business' decided they didn't like the added latency due to the batching being done on F's web platform (as it delayed when the data turned up on back-end systems), so they asked them to change it to near-real-time processing instead. This they implemented one weekend, without informing anyone in IT or any service managers or owners.

Everything was fine till about 10:30 on Monday morning, when one of my colleagues noticed there was some lag between data arriving on our system and when it hit any back-end system, and this lag was getting gradually worse.

I jumped onto the UNIX platform where it lived, and had a look around, and found a working directory that was used during a batching up process had something like 1,000,000 tiny files in it, when we'd only expect to see a few hundred larger files, at most.

We eventually figured out the web interface created by F had been updated, to generate a 'batch' file for every single item of data being generated (millions a day).

Worth noting at this point that each batch file had two headers and a trailer. So as an example, whereas a single batch of 1000 items of data would have had 1003 records in total in one file, this now meant we had 1000 individual files, each with 4 records, as each needed its own headers and trailer! Resulting in 4000 lines to process, rather than the original 1003. (This also broke the Mainframe, as it created jobs based on headers, so 1 old job became 1000 new jobs; the MF team were also not happy!)
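Just to spell the overhead out (a quick sketch, using the header/trailer counts from the example above):

    # 2 headers + 1 trailer per file, plus the data records themselves.
    def total_records(items, items_per_file):
        files = -(-items // items_per_file)        # ceiling division
        return files, files * (2 + 1) + items

    print(total_records(1000, 1000))   # old batching:   (1, 1003)
    print(total_records(1000, 1))      # per-item files: (1000, 4000)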

Basically we ended up (in total, with other customer data) with something like 100 times more files than expected, plus around a three fold increase in the overall volume of records. The integration platform was already running at around 95% utilisation during peak hours (about 4 hours a day); the poor system didn't stand a chance. (It was still working, just not fast enough to keep up, and the more the backlog increased, the slower it got!)

The only initial work around was to close down the feed from F's system, move all the backlogged data out of the way, and allow new stuff to come through at its regular speed. We also manually pushed through all the data from other customers (as this was still in batches), so at least it only impacted this one source (although this one source accounted for a large portion of daily volumes). It was late afternoon by the time we got this done.

The 'Business' had to go back to F and get them to backout the change, as none of the systems could cope with it. Massive egg on their faces, at least internally, as it turned out they'd been selling this near-real-time service for a while and this was their grand launch! No doubt we (IT) probably got blamed behind our backs for it not working!

All they needed to do was ask one person in my team, "What would happen if we changed the batches to this?", and anyone on the team could have predicted the outcome with ease, and saved everyone all the wasted time, effort, lost revenue etc., and perhaps even come up with a solution for their business need!

About a month later the change was re-implemented, this time after engaging with our team, where we developed a solution and even tested it before go-live! Worked perfectly the second time round. Go figure!

Microsoft puts dual-screen devices and Windows 10X in the too-hard basket

Boothy

Re: you could locate the "Ribbon"

On 1600 x 900 here, which is at least marginally better than 768 I used to have, but not by much.

Even when in the office (where was that again, I forget?) the company had bought what must have been the cheapest external 15 inch monitors they could get hold of, not sure what they were, but they were not 1080p I know that for sure.

So glad I'm at home and can use my gaming monitor as an external, 3440 x 1440 makes working on extra wide Excel spreadsheets a breeze, or three A4 Word docs (or 3 pages) open at the same time, at full size. Although sharing a screen in a meeting, not so good! (I just share the laptop screen instead). I just stick Skype (aka Lync) and Outlook on the laptop screen.

Business laptops really ought to be 1080p minimum, 1440p preferably (in a large enough form to matter).

Boothy
Pint

...a more streamlined way to pair Bluetooth devices in Windows

  • Right click Bluetooth icon in tray and select "Add a Bluetooth Device". Sounds easy enough so far.
  • Wait for new Bluetooth window to open.
  • Make sure Bluetooth is 'On' hmm mkay.
  • Okay, now click "Add Bluetooth or other device". erm, didn't we do this already?
  • Okay, another window opened called 'Add a Device".
  • Now click "Bluetooth", hmm, are we going in circles?
  • Wait for your device to show up, click it and then click 'Connect'!

I can't see any way this could possibly be streamlined! What possible steps could be considered superfluous and be removed? I just don't see it myself, mkay.

Icon, because I want one, in a PUB with friends! :-/

Scratch that, I NEED one!

Square peg of modem won't fit into round hole of PC? I saw to it, bloke tells horrified mate

Boothy

Re: VGA Plug Screws

I wonder if the same person moved to Asrock?

Bought a new X570 motherboard back end of last summer, for a new Ryzen build. Had a new case as well (old one was difficult to keep cool, still had 4 x 5.25 slots!). New case has a USB-C connector on the front panel, and a fancy lead and plug inside to go onto the motherboard.

The motherboard has a matching socket, happy!

Plugs USB-C cable into motherboard, then plugs in GFX card, hmm, won't fit! Can't actually push the card home, something seems to be in the way?!

Yup, card sits on top of the plug!

Basically someone at Asrock decided a good place for the USB-C socket was directly under where the GFX card's back plate would be, and even with a 90deg plug, you still can't actually fit a standard sized GFX card into the first slot!

Did consider removing the plastic on the plug, to see if that would be enough, but in the end I just took the cable out and don't use the socket.

Intel is offering more 14nm Skylake desktop processors, we repeat: More 14nm Skylake desktop processors

Boothy

Re: HELP!!!

Define performance? e.g. best frames per second, best performance per watt, best frames per $ spent etc?

e.g. Are you primarily a gamer, do you game and stream, are you focused on productivity etc?

To be honest, atm, irrespective of how you define it, AMD win hands-down. Even with Intel's new chips and price cuts.

The only real exception is if you want the absolute best frames per second, and are using an RTX 2080 Ti (and you have no budget limitations), in which case the fastest i9 is arguably 'best'.

But for all other use cases, go for an AMD.

If you want to see comparison benchmarks, I'd recommend looking at Gamers Nexus on YouTube. Although don't expect anything other than an overview of these new chips currently, as no benchmarks exist yet, because no one has the hardware yet.

For a more enthusiast level, rather than hardcore techie, have a look at JayzTwoCents instead; he's done a vid on the new Intel chips here.

Note these Intel chips are new, so there are no benchmarks yet, as reviewers are still waiting for the hardware, and then they will need some time to actually run all the benchmarks, so it'll be a few more days for these to turn up.

Boothy

Re: Intel has lost it

Roll on Zen 3 due later this year on an improved 7nm process.

Then next year we get Zen 4, due to be on 5nm and using DDR5.

It's going to be an interesting year or so!

Boothy

Re: Last paragraph of the article"

I jumped from Intel to AMD last year (last used a personal AMD system around 2011 I think). Very glad I did, have an 8 core, 16 thread system currently (3800X) and it just eats anything I throw at it.

One of the big selling points for me was AMDs ongoing support for the same socket. Something Intel just doesn't seem to get!

If I need an upgrade at some point, no need for a new system, I could drop a 3950X (16core, 32 thread) directly in the box, the TDP is even rated the same as my current 3800X (as the 3950X is much better binned), so I wouldn't even need to upgrade the cooling (which is overkill anyway atm).

Better yet, later this year the new Zen 3 based chips are due out, which will use an improved 7nm process, which should allow for lower power and/or higher clocks, plus Zen 3 improves the IPC again on top, so the 4950X (or whatever it becomes) should be even more of a beast of a chip than the current 3950X!

Unfortunately the Zen 3 chips are very likely to be the last for the AM4 socket, as Zen 4, due out some time next year, is using DDR5 rather than DDR4, so needs a new socket. But I can't see me needing above 16 cores for quite a few years to come!

Also with AMD, I'd expect the new AM5 (or whatever it gets called) socket, for Zen 4, to get supported for years to come, at least until DDR6 comes along.

You can get a mechanical keyboard for £45. But should you? We pulled an Aukey KM-G6 out of the bargain bin

Boothy

Some modern kit still comes with PS/2.

Asrock seems to like PS/2 on some of their motherboards; the X570 Taichi launched in July 2019 and has a single PS/2 port. MSI have a few such motherboards as well.

For a while PS/2 keyboards were considered faster (as in less lag) than USB keyboards, especially for competitive gaming. This is still the case for some USB keyboards (as in PS/2 has less lag), but many newer USB keyboards (and anything mid to high end like gaming keyboards) now usually have less lag than PS/2 (sometimes a lot less), as updates to USB standards and interfaces have allowed for faster polling intervals on modern keyboards.
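For a sense of scale, the worst-case delay added by USB host polling is just one over the polling rate (these are common rates rather than anything I can vouch for on a specific keyboard; PS/2 is interrupt driven, so it avoids this particular overhead):

    # Worst-case extra delay from USB polling at some common rates.
    for hz in (125, 250, 500, 1000):
        print(f"{hz:4d} Hz polling -> up to {1000 / hz:.0f} ms added delay")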

PS/2 for mice hasn't really been a thing for a long time, as the polling speed for PS/2 mice is capped quite low (it was really designed for ball mice, rather than modern optical ones), so USB mice outperformed PS/2 mice (as in lag) basically as soon as USB mice came out.

Academics demand answers from NHS over potential data timebomb ticking inside new UK contact-tracing app

Boothy

Re: How to stop people from having "fun"

They already said* that if you get diagnosed, you get a verification code on the document telling you you've got COVID-19, and you have to enter that into the app, in order to declare "I've got it".

* For example one on BBC News : quote: "To report testing positive, the user would have to enter a verification code, which they would have received alongside their Covid-19 status." : Article here

I've not seen any details yet, so I've no idea if this will be a unique code, one only usable by a single person, time limited, how these are generated etc. Guess we have to wait for the full details yet.

My guess would be they'll release a private companion app, or web site, that has to be used by whoever creates the Covid-19 reports (or add it to an existing Covid-19 reporting solution, assuming there is one). They enter a few of the person's details, and it generates a code, perhaps using the person's details as a seed, to make it unique, so no one else could use it?
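Purely as a guess at what such a code might look like (nothing to do with whatever the NHS actually build; the secret, fields and code length here are all made up):

    # HMAC over a patient reference plus an expiry time, truncated to
    # something short enough to type into the app; the server keeps the
    # secret and can re-derive and verify the code when it's submitted.
    import base64, hashlib, hmac, time

    SERVER_SECRET = b"not-the-real-key"   # hypothetical server-side secret

    def make_code(patient_ref, valid_hours=24):
        expires = int(time.time()) + valid_hours * 3600
        msg = f"{patient_ref}|{expires}".encode()
        digest = hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()
        return base64.b32encode(digest)[:8].decode(), expires

    print(make_code("patient-12345"))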

UK snubs Apple-Google coronavirus app API, insists on British control of data, promises to protect privacy

Boothy

Re: Difficult choice

Can't comment on the 'apps' yet for obvious reasons, but the API changes being done are meant to go back to phones at least as old as Android 6, according to another article I've read. I've no idea about Apple.

The Android API changes are being pushed via Google Play Services (like a lot of the other Android services these days), rather than as an actual OS OTA update, so it's not dependent on the manufacturers doing anything.

Boothy
Big Brother

Re: The clocks were striking thirteen

No, no, no, we are at war with Eastasia. We've always been at war with Eastasia.

WAR IS PEACE, FREEDOM IS SLAVERY, IGNORANCE IS STRENGTH.

Airbus and Rolls-Royce hit eject on hybrid-electric airliner testbed after E-Fan X project fails to get off the ground

Boothy

Re: Standard engines + fuel vs fuel + gas turbine + electric?

Quote: "Unfortunately you need the other way around. Big noisy jet engines on maximum power (and afterburner ;-) on take off and minimum engines at cruise altitude."

Not sure I understand? Or perhaps you've misunderstood me?

We already know electric aircraft can take off, even with heavy batteries, as these aircraft already exist (many are still early prototypes, but there are flying examples), so the 'Big noisy jet engines on maximum power...' isn't really relevant.

I'm comparing standard existing aircraft, i.e. The 'Standard engines + fuel' bit. i.e. a normal current commercial aircraft.

against an electric only aircraft, but using a gas turbine to generate the power rather than a battery, i.e. 'fuel + gas turbine + electric'.

Also worth mentioning, this would almost certainly be competing against short to medium haul turbo-prop aircraft, rather than jet aircraft, at least till the tech matured a lot.

The main issue with existing electric aircraft is their lack of range, (heavy batteries + not much actual stored energy as compared with liquid fuel). Plus having to then wait to recharge the batteries after a flight.
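For a sense of scale on the weight question, a very rough energy-density comparison (my own ballpark figures, and the efficiency numbers are guesses):

    # Usable electrical energy per kg of 'fuel' carried, very roughly.
    jet_fuel_mj_per_kg = 43        # typical kerosene-type fuel
    li_ion_mj_per_kg = 0.9         # ~250 Wh/kg pack, optimistic for today
    turbine_gen_efficiency = 0.35  # fuel -> electricity, a guessed figure
    battery_round_trip = 0.90      # charge/discharge losses, also a guess

    fuel_electric = jet_fuel_mj_per_kg * turbine_gen_efficiency
    battery_electric = li_ion_mj_per_kg * battery_round_trip

    print(f"fuel+turbine ~{fuel_electric:.1f} MJ/kg vs "
          f"battery ~{battery_electric:.1f} MJ/kg "
          f"(~{fuel_electric / battery_electric:.0f}x difference)")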

Burning fuel* to produce the electricity removes, or at least reduces, the need for batteries (which might still be useful for take off and landing, to keep noise down and/or give a boost to available power), and you've still got the high potential energy stored in the fuel that drives the turbine; more range needed, just add more fuel.

* Obviously this also needs the fuel to be 'clean', i.e. made from crops or something. But that should be easier to do for turbine fuel than for jet fuel (turbines can be run on almost any fuel, so you'd pick something suitable for the use case in hand).

My question, is basically is this feasible, from a physics/engineering standpoint? i.e. Would having fuel + turbine + generator + electric engine, still be too heavy to be efficient?

Even if this wasn't as efficient as a turbo-prop (or jet), it might still be viable if it wasn't too far off, simply to help keep local noise and emissions in check, as rules around these things are almost certainly going to get stricter over time.

PS: I didn't down vote you.

Boothy

Standard engines + fuel vs fuel + gas turbine + electric?

I'd be curious to know what differences there would be between things like weight and fuel consumption, between using standard engines burning 'jet' fuel directly, and a gas turbine (or two, one under each wing?) burning (perhaps cheaper) fuel, to generate electricity + electric motors?

Add a few batteries and/or super caps to the mix, to boost take off power/store excess energy, so you could have a smaller turbine (although the added battery weight might off-set any savings in using a smaller lighter turbines anyway?).

If using batteries (or some other suitable storage method?) you could use stored power for take off and landings (quieter and cleaner for the locals), and then only use the gas turbine/s once at altitude.

Might not be as efficient as a traditional jet due to extra weight etc, which also means it likely wouldn't have the range, but I could see this being useful for short haul flights, especially into city airports, or environmental hot-spots.

Or perhaps just not viable at all?

Dumpster diving to revive a crashing NetWare server? It was acceptable in the '90s

Boothy

Did something similar about 30 years ago with a bench drill; for some reason the power-on button was really easy to push, and could be set off by someone just leaning against the thing (waist high controls). Which could be quite dangerous!

We took some heavy cardboard tubing (think toilet roll but about 4mm thick), cut about a 2 inch length, and then gaffer taped it over the on button. Worked like a charm.

FTP is crusty and mostly dead, right? AWS just started supporting it anyway

Boothy

Re: It's used because it works

We used to use UltraEdit years ago, when I was in a support and development team (before DevOps became a thing!), and it also had built in FTP and SSH etc.

At peak we were a team of 6, all sat in one bay, all on desktops (early 2000s, no laptops or option to WFH). Change control was basically "Anyone doing anything with file x on box y at the moment?", and if no one said yes, "okay, I'm deploying change 'z', should be live in two mins".

Although I did eventually set up a cron job that automatically took hourly snapshots of all configurable files, only backing up those that changed, and created a diff log, so we could see in one place what changed, where and when.
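Roughly the idea behind that cron job, as a Python sketch from memory (the paths and file list are made up for illustration, and the real thing lived on an old Unix box, so the details are long gone):

    # Hourly snapshot of watched config files, keeping only changed copies
    # and appending a unified diff to a central change log.
    import difflib, shutil, time
    from pathlib import Path

    WATCHED = [Path("/etc/app/config.cfg"), Path("/etc/app/routes.cfg")]  # hypothetical
    SNAP_DIR = Path("/var/backups/config_snaps")
    DIFF_LOG = SNAP_DIR / "changes.log"

    def snapshot():
        SNAP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M")
        for f in WATCHED:
            if not f.exists():
                continue
            new_text = f.read_text()
            previous = sorted(SNAP_DIR.glob(f.name + ".*"))
            if previous and previous[-1].read_text() == new_text:
                continue                            # unchanged this hour
            dest = SNAP_DIR / f"{f.name}.{stamp}"
            shutil.copy2(f, dest)                   # keep the changed copy
            if previous:                            # log what changed, where and when
                diff = difflib.unified_diff(
                    previous[-1].read_text().splitlines(),
                    new_text.splitlines(),
                    str(previous[-1]), str(dest), lineterm="")
                with DIFF_LOG.open("a") as log:
                    log.write("\n".join(diff) + "\n")

    if __name__ == "__main__":
        snapshot()    # run hourly from cron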

Was also quite handy being able to have things like log files open in a tab on your local machine, without having to open a terminal.

Kerching! Intel PC chip shortage over just in time for everyone to buy computers for pandemic home working

Boothy

Loving my home built Ryzen 3800X system running on an NVMe M2 drive. So fast at everything, and running VMs is a breeze! So much better than the old i7 3770k it replaced!

Web pages a little too style over substance? Behold the Windows 98 CSS file

Boothy
Pint

Re: The Modern UI/UX

Quote : "I'll see your 23 years and raise you 12..."

Does learning machine code on a 48K rubber keyboard Sinclair Spectrum in 1982 when I was 14 count? ;-)

I also did lots of hardware hacks on the Spectrum from age ~13 to 17 (girls and beer became more interesting at that point!), custom joystick, an external keyboard, motor control systems controlling meccano and lego devices etc. which I think is partially responsible for me heading into electronics as a career initially!

Boothy

Re: The Modern UI/UX

As someone who is used to computers (in the 'biz' for about 23 years now, mostly development (not web!), and an electronics engineer for about 10 before that, so I consider myself very technical), even I find some modern UIs to be just plain awkward sometimes.

I remember a few years ago being given access to a web based system that pulled system and application logs together into a single central location. Doing complex filters was easy: this expanded a section that had obvious things like text entry boxes for keywords, drop down lists, and date and time entry fields that were editable (and it was obvious in the UI that they were).

But it took me ages to figure out how to do a simple date/time filter, such as just show me everything up to a specific time, or only for yesterday.

Turned out at the top of the page was a date and time display, that just looked like part of the page background, no border, same background and text colour as everything else, nothing at all that indicated this could be interacted with.

Turned out, of course, that these were actually buttons, and once clicked they brought up a date and time filter section! So inconsistent with the rest of the UI!

Boothy

The Modern UI/UX

Quote: not knowing to tap things because they just look like text

This.

It puzzles me that basic things like readability (e.g. obvious buttons, search boxes etc) and usability (i.e. it should be obvious how it works) seem to have been dropped from modern UI designs!

There really should be a basic check list of should and should nots, that all UI designers must follow. At the moment, many seem to either not bother, or seem to be using a check list that makes no sense, and really shouldn't exist!

Video game cloud streaming shaken up as Nvidia loses more big names, Microsoft readies its market killer

Boothy

Compared to a PC

Quote: Gamers pay Nvidia $5 a month (for now) to run games they have already bought at a far higher performance level and speed than their devices can manage.

What devices is this being compared to?

From what I can see, the nVidia Now service is capped at 1080p and 60fps (with occasional 120fps), all dependent on the game. Which is about the same as a mid range PC. So saying 'far higher' is only really valid if people are running a potato at home. Not saying some people aren't running a potato, and that those people wouldn't get an uplift at least in GFX with this service, but the wording is rather inaccurate for anyone with a mid range or higher gaming PC/laptop (or even the 'pro/enhanced' versions of the current Xbox or PlayStation).

Quote: , it means ordinary gamers can compete with those on high-end gaming machines, which cost thousands of dollars...

What are you defining as a high end PC? If you're talking about something that can match, or beat GeForce Now, then that's not high end, and certainly not thousands!

A low to mid range PC that would match, and even outperform, GeForce Now at 1080p would cost around ~$650 *.

Bump that to ~$1000 * and you'll get a reasonably high end 1440p 60Hz system that would be far better than GeForce Now.

* These are real prices based on pcpartpicker just now, so should be doable.

1st system Ryzen 3600 + AM4 mobo + 8GB RAM + small SSD + RX590 + case + PSU

2nd system Ryzen 3600X + AM4 mobo + 16GB RAM + 500GB fast Samsung SSD + RT 5700 XT + case + PSU

Throw in a 3700X and an RTX 2070 Super (only ~5% slower than an RTX 2080), and it's getting proper high end: fast 1440p, reasonable 4K gaming, and that's still only a little over $1300.

Even sticking a RTX 2080Ti in doesn't break $2000 unless you go for one of the extreme editions, $1850 for basically as high end as it gets (unless you go silly).

If you do want to get to thousands, get an Intel CPU, such as a 9700K * or 9900K (the newer Comet Lake ones are just a rehash), then you can get above $2000 with a 2080Ti, but whilst you get a little more FPS on average with the 9700K (and a fair bit more with the 9900K), it's not really worth the extra money. (Also with the AMD platform, you can easily upgrade the CPU later anyway, to a high end 3900X etc, or even the upcoming 4000 series).

* The 9700K, whilst expensive, is also only 8 core 8 thread, so for anything multi-threaded a 3700X (8 core 16 thread) tends to be much faster. So unless the system is only ever going to be for gaming, and also won't be used to do streaming, the 3700X is a better option.

All of this also assumes you are buying everything from new. If you've got an existing rig, you could likely save money by reusing things like the existing case, drives, PSU etc.

Note, none of the above includes displays or controllers, as you'd need to have those anyway to use GeForceNow.

Quote: With even a microsecond meaning the difference between winning and losing, it is something many are prepared to pay for.

Just using GeForce Now, with its added built-in latency, will be enough to disadvantage anyone using the service against pretty much any PC (or console for that matter) player, even someone on a low end/last gen PC, as all the PC gamer has to do is drop the GFX settings down to medium or low to bump up their FPS. Something many competitive gamers already do.

One last word, not saying any sort of PC building is for everyone, and of course some people simply either can't afford even a basic gaming PC, or have other priorities for the money (family etc). So I can see services like this being potentially useful for them, just don't try to make them out to be something they are not, or spout inaccuracies like over inflated costs for building a PC.

Google pre-pandemic: User-Agent strings are so 1990s. Time for a total makeover. Google mid-pandemic: Ah, we'll reschedule to 2021

Boothy

Re: "One thing I did notice that seems to be specifically missing is anything related"

@LDS

I agree. The current proposition seems to mean that someone, somewhere (the web developer, the server devs, or more likely the browser devs) needs to maintain a list of browser versions and the capabilities at each version, in a format that can be imported into and parsed by a web server, and that list will need to be kept up to date.

What are the rules if the web server doesn't recognise the browser? Such as one of the smaller players, with say a high security browser? Do you drop back to a basic web page, with no extras? If that's the case, then as this 'brand' field (i.e. the browser) is free text, you'll end up with smaller browsers spoofing as 'Chrome' or something else, in order to get the 'real' page.

There may well be valid reasons for asking for the browser name and version at times, but to me those should be in the optional section, with only the basic browser capabilities (i.e. HTML version supported) being in the mandatory section, and other capability checks being optional.
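As a sketch of the sort of bookkeeping I mean (the brand names, versions and feature flags here are all invented; a real server would need to keep a table like this up to date forever):

    # Map (brand, significant version) -> features the server can rely on.
    KNOWN_FEATURES = {
        ("Chrome", 80): {"es2018", "webp", "grid"},
        ("Firefox", 74): {"es2018", "webp", "grid"},
        ("SmallPrivacyBrowser", 3): {"es2018", "grid"},   # hypothetical minor browser
    }
    BASELINE = {"es5"}   # what you serve when you don't recognise the brand

    def features_for(brand, major_version):
        return KNOWN_FEATURES.get((brand, major_version), BASELINE)

    print(features_for("Chrome", 80))          # gets the full-fat page
    print(features_for("ObscureBrowser", 1))   # gets the basic page, which is
                                               # exactly the incentive to spoof 'Chrome'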

Boothy

Re: User-Agent strings are kinda useless these days...

Looking at https://github.com/WICG/ua-client-hints

Snip: "For that use case to work, the server needs to be aware of the browser and its meaningful version, and map that to a list of available features. That enables it to know which polyfill or code variant to serve."

So seems they do expect all servers to have a list of browser versions and capabilities.

From the 'Browser bug workaround' section below that, seems they expect web servers to already be doing this anyway, in order to work around existing browser bugs.

Boothy

Re: User-Agent strings are kinda useless these days...

Quote: "Not that this is a bad idea, IMHO instead of user agent names they should announce web standard compliance (and web standards should ease it with clear identifiers) - just it should not be Google to decide how it works - otherwise it's just the new IE."

Seems the minimum with UA-CH is "brand (i.e. browser)"; v="significant version"

e.g.

Sec-CH-UA: "Browser"; v="73"

Everything else, full browser version number, platform, platform version, architecture etc. is optional, and has to be specifically asked for by the server, and the client chooses whether to provide those details or not.
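For the optional bits, the exchange in the draft looks something like this (header names and values taken from my reading of the draft, so they may well change before it's final). The server opts in with something like:

Accept-CH: Sec-CH-UA-Full-Version, Sec-CH-UA-Platform, Sec-CH-UA-Arch

and the client can then, if it chooses to, send the extra hints on later requests:

Sec-CH-UA: "Browser"; v="73"
Sec-CH-UA-Full-Version: "73.0.1.0"
Sec-CH-UA-Platform: "Windows"
Sec-CH-UA-Arch: "ARM64"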

I also noticed this in the spec: Quote:

"User agents SHOULD keep these strings short and to the point, but servers MUST accept arbitrary values for each, as they are all values constructed at the user agent's whim."

One thing I did notice that seems to be specifically missing is anything related to what the browser capabilities are, e.g. HTML version etc.

Which seems an odd choice to me, as won't that mean servers will need to keep track of browsers and version numbers, in order to know what standards they can utilise?

The draft spec isn't all that long, and can be found here: https://wicg.github.io/ua-client-hints/