Re: Flash in the pan
Could be worse. Imagine how bad it would be if Microsoft had built Flash support into Excel.
If you kill or cure all the addicts today, that will create a financial incentive to produce more addicts tomorrow. The DEA were set up to deal with suspiciously large drug orders from small towns. Congress fixed that last year. 3 senators and 44 congressmen did not take hefty campaign contributions from drug companies, so I have confidence that no effective solution will be tolerated for years.
I thought the fake news sites were funded by Google ads, and that they selected content based on what people wanted to believe without checking and show to their friends on Facebook. If the news favoured Trump/Russia, I assumed it was for the same reason that 419s target Christians.
If you have evidence to the contrary then I am very interested.
more people adopting Linux for the desktop, running X over the wire (or air) is becoming an edge case
Back when people started to have more than one desktop, X over the wire became an everyday occurrence. Now that desktops are slowly fading into the sunset, it makes sense to put cheap X servers where you want displays and have a big X client where the noise does not matter.
I blame the customers for buying hardware without sufficient publicly available documentation for creating an independent open source driver, but I cannot see any way to fix that without a hypno-toad.
I am not root. All the remote stuff works fine with Xorg. I did not need to change any permissions. What are you talking about?
IIRC the reasons for Wayland were to get the network out of the way so it would be fast and light and to drop a pile of legacy code.
Disks are cheap and no-one is going to notice a few megabytes of old libraries that never get paged into RAM. They will notice their absence when old applications stop working. Wayland needs (has?) a compatibility library so cutting out the legacy code is a non-starter.
The old requirements of X were 4MB RAM and 12MB of swap. Yes, megabytes. X is small and light. It's the applications that can be huge and inefficient. Swapping in Wayland for X is not going to fix bloated applications.
If the client and server are the same machine, X uses shared memory for 'networking', so no overhead. When the client and server are on different machines, X can run well over networking kit from the 80s by sending a command stream. Wayland draws a picture, compresses it, sends that over the network, and decompresses it on the far side (they got this working in August of last year!). Calling that a huge step backwards is over-generous.
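For the sceptical, a back-of-envelope comparison in Python. The sizes are illustrative, not taken from either protocol's actual wire format: I assume a drawing command fits in a couple of dozen bytes, and I give the pixel-pushing approach the best possible case (a flat single-colour window that compresses beautifully).

```python
import zlib

# A hypothetical X-style drawing command: an opcode plus a few 16-bit
# coordinates. Call it two dozen bytes for a filled rectangle.
command_bytes = 24

# A Wayland-style approach ships the finished pixels: a 1920x1080 window
# at 4 bytes per pixel, filled with one grey value so deflate gets the
# most favourable input it will ever see.
framebuffer = bytes([128]) * (1920 * 1080 * 4)
compressed = zlib.compress(framebuffer)

print(command_bytes)      # tens of bytes for the command stream
print(len(compressed))    # still thousands of bytes, even in the best case
```

Real screens are not flat grey, so the gap on actual content is far wider than this toy makes it look.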
When Wayland has been stable for years, working over a network (around 50% of my use case) on kit that runs on batteries, then it will be a competitor for Xorg and might actually have a future. I expect the sun will become a red giant first.
These devices have a low power chip that listens continuously for anything resembling "Alexa" or "OK Google". When it hears something that matches it sends a recording to the cloud for speech recognition. Putting proper speech recognition into a low power chip would be difficult.
The strange thing is that early attempts at speech recognition (what you say) turned out to be voice recognition (who is speaking) devices. The down side is that antique tech requires training. Say "Siri recognise my voice" a hundred times and a low power chip probably could (but it would also respond to you saying "OK Google" or "OW! Who spread drawing pins on the floor?"). The problem is to find customers with enough brains to understand the problem, enough patience to actually train the device and sufficient courage/gullibility to let such a device in their home.
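Here is roughly what that antique train-on-the-owner's-voice approach amounts to, as a toy sketch. The "features" are hypothetical fixed-length vectors (a real system would extract something like MFCCs from audio) and the threshold is plucked from the air; the point is that the template matches the *voice*, so anything acoustically close scores high whatever words were said.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def train(samples):
    # Average a hundred utterances of the trigger phrase into one template.
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def matches(template, features, threshold=0.95):
    # The weakness described above: this fires on the trained voice saying
    # almost anything, including "OK Google" and "OW!".
    return cosine(template, features) >= threshold

template = train([[1.0, 2.0, 3.0], [1.2, 1.8, 3.2]])
print(matches(template, [1.1, 1.9, 3.1]))   # close to the trained voice
print(matches(template, [3.0, -1.0, 0.2]))  # someone else entirely
```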
When the first computers hauled themselves out of the ocean they talked to each other through a long coax cable that went from one computer to the next in line. Each computer would have a T-junction connector plugged into the back, with the base of the T in the computer and the coax lines on each side. To prevent the signal bouncing off the ends of the cable each end was fitted with a terminator (pictures).
When a user decided the network was the cause of all their problems, instead of unplugging the computer from the T-junction they would unplug both sides of the coax. As well as breaking the network in half, neither half could communicate because each was missing a terminator.
When computers came down from the trees they talked to each other over SCSI. SCSI worked like 10Base2, either with a ribbon cable with multiple connectors for up to 8 devices or each device had two connectors so they could be daisy chained together. Again, a terminator was required at each end (sometimes a separate dongle and sometimes enabled by setting jumpers in the device). Unplugging any device again broke the bus into two pieces that wouldn't work because of lack of proper termination.
Someone with a greyer beard than mine is required to explain IBM 360 peripherals, but I can easily believe unplugging either end of the cable would crash the mainframe and that the PFYs of the time were expected to know this.
Clearly the time has come for me to wire a motion sensor to a Raspberry Pi so it can shout "Get off my lawn!" when any of the neighbours' kids get close.
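A sketch of the interesting half of that project, namely not shouting thirty times at one passing kid. The GPIO wiring and audio are left out: on a real Pi this would be called from a PIR sensor callback (e.g. RPi.GPIO event detection) and the shout would be a call to a speech synthesiser such as espeak. The cooldown value is my guess at neighbourly restraint.

```python
import time

COOLDOWN = 30.0  # seconds between shouts, so one kid triggers one tirade

_last_shout = float("-inf")

def on_motion(now=None):
    """Return True when motion should trigger the speaker.

    Pass `now` explicitly for testing; in real use it defaults to the
    monotonic clock so wall-clock changes cannot confuse the cooldown.
    """
    global _last_shout
    if now is None:
        now = time.monotonic()
    if now - _last_shout >= COOLDOWN:
        _last_shout = now
        return True   # play "Get off my lawn!"
    return False
```

Using `time.monotonic()` rather than `time.time()` matters: NTP stepping the clock backwards should not earn the kids a bonus shout.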
Your choices are: small enough to fit in one lane of traffic but too loud to go anywhere residential, or quiet enough to land near home but eating both lanes and half the pavement on each side while struggling to lift one average man. Once you are in the air, you have to land at once to comply with minimum reserve fuel requirements.
Ask again next year because there might be lighter batteries available.
People tend to ascribe to others crimes they would commit themselves. For techies, this shows as an attempt to find sane, intelligent motives consistent with other people's actions. This cannot work with Theresa May. Although ability at government is not a required attribute of a successful politician, they do need to be better at politics ... than other politicians. She called for an election in June 2017. Now you know her level of competence at a core skill, you have to assume the motives behind her other activities are consistent with determined ignorance and the stupidity of a fly bashing against the closed half of a window.
The only defence against such people is education - somehow we have to educate enough voters to prevent people like her getting elected again.
Hello phone, some new judges have been appointed. Here are their public keys. Did I accidentally put my key in the list?
I wish I could find the video I saw of an old judge explaining some aspect of technology. I cannot tell you what sort of technology he was explaining because he kept getting stuck halfway through sentences and forgetting what he was talking about. After about a quarter of an hour, I could not stand to watch more. Not all judges are senile (although that does seem to be a popular career move in the US). There is even a judge who understands every single line of code Google copied from Java. Such judges are rare. I have met "techies" without the brains to understand what a secret key is, and PHBs with the computer literacy to keep a secret key secret are few and far between.
Giving each judge a secret key is as sane as giving each employee a four digit access code (someone will pick 1066).
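Even before anyone deliberately picks 1066, plain chance makes shared codes likely. This is the birthday problem with 10,000 "days" instead of 365; the head counts below are made up to show the scale.

```python
def p_all_distinct(people, codes=10_000):
    # Probability that `people` independently chosen 4-digit codes
    # are all different.
    p = 1.0
    for i in range(people):
        p *= (codes - i) / codes
    return p

# Chance of at least one shared code in offices of various sizes.
print(1 - p_all_distinct(50))    # roughly 12%
print(1 - p_all_distinct(120))   # better than even odds
```

And that assumes people choose uniformly at random, which they never do: birthdays, 1234 and famous dates soak up a huge share of real PINs, so the true collision rate is worse.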
... for flash ... for a little longer. Although the size of transistors has gone up (and bits/cell), the number of layers has gone up faster.
CPUs cannot use the same trick (yet). 99.9% of a flash chip is idle with only a few sectors active so it does not use much power. Large chunks of a CPU transition every cycle. Getting the heat out of one layer of CPU transistors is bad enough. Trying that with 100 layers will cause a loud bang and instant vaporised CPU.
IBM have been trying to drill thousands of holes in a CPU so they can pump a cooling liquid through them. Might be cool for a data centre, but it will burn your phone battery in minutes.
Just try a different web site.
There was a fun video about drone enthusiasts and gun enthusiasts settling the argument of gun vs drone. At first the shooters were surprised at how fast and nimble drones are. They could not hit them. The drone pilots got cocky and flew slower and slower until the drones got shot. Later, the shooters learned to wait for the drone to corner so it presented a slower-moving target. With practice a competent shooter can kill a drone flying back and forth in front of the hill used to stop stray bullets. Sometimes the drone kept flying after it was hit once or twice and sometimes it died on the first hit.
This thing is a bigger target - easier to hit, but a single round will not smash the whole thing to bits. I am sure the army will not be flying it back and forth at low altitude until it gets killed. Shooters are going to have a dull time waiting for hours for the few seconds when they stand a chance of killing a drone. I would like to think losing a drone is cheaper than losing a convoy of trucks but this is a military budget so I would not bet on it.
Someone sensible came up with the idea of over 600 MPs because such a large number will spend far more time arguing with each other than doing any governing. The work-around was to select a few - the cabinet - to make all the decisions then coerce, intimidate or con most of the party to vote as required. The cabinet used to be about a dozen with reasonably separate responsibilities so each could set policy in their bailiwick without consulting the others for every detail.
At some point it became so obvious that even a prime minister noticed there were not 12 candidates in the party capable of running a branch of government. The solution to decades of negative selection was simple: increase the size of the cabinet to about 100 and give each of them overlapping job titles.
Sure there are a lot of planets, but if you filter out those that are too large/small, in the wrong location, do not have molten cores or whose parent suns are too violent, that number comes down a lot.
Put that number all the way back up because it included sensible size, right location (near star and in galaxy) and stable stars. Molten core is related to size and large moon: Nests and eggs are not that common on Earth but that does not make finding the two together extremely unlikely.
I kept the red dwarfs separate because of the reasons you gave.
Moons of gas giants are a possibility. We have some in this solar system that are possibilities for life (and others that aren't). We have gas giant moons with a thicker atmosphere than Earth that will burn up meteors. Tidally locked to the gas giant means not tidally locked to the star. Moons of gas giants would be worth counting if we had the technology to do it for exoplanets.
About 11 billion planets in this galaxy meet the first four of your conditions. Add in red dwarfs and we are up to 40 billion. I do not even have a figure for moons of gas giants with a reasonable chance of having had surface water for billions of years.
Multiply that by at least 100 billion galaxies in the observable universe and life becomes something we should expect (although possibly too far apart to stand a reasonable chance of contact).
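The multiplication above, using only the numbers already quoted in this thread, as back-of-envelope arithmetic:

```python
# Figures from the posts above, all rough lower bounds.
per_galaxy = 11e9        # planets in this galaxy meeting the first four conditions
with_red_dwarfs = 40e9   # after adding red dwarf systems
galaxies = 100e9         # observable universe, at least

candidates = with_red_dwarfs * galaxies
print(f"{candidates:.0e}")  # 4e+21 candidate planets
```

Four sextillion candidates is why "we should expect life" and "we should expect contact" are very different claims: most of those planets are in galaxies we can never reach.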
We have a limited supply of planets for counting large moons, but if you look at trans-Neptunian objects, large moons are quite popular.
So I go to all the trouble of registering a bunch of other peoples' pictures with Kodak and what do I get:
The exchange of money will get the added step of converting dollars, which can be spent anywhere, into "KODAKCoin", which can be spent nowhere outside of the KodakOne service.
The opportunity to give photographers and scammers KODAKCoin for images on my website so that they can give photographers and scammers KODAKCoin for images on their websites.
As far as I can tell, the destination is an elliptical orbit around the sun with perihelion near Earth's orbital radius and aphelion near Mars's orbital radius. As the launch is at the wrong time, when the roadster reaches aphelion Mars will be somewhere else.
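The period of that idealised ellipse falls out of Kepler's third law in solar units (T² = a³ with a in AU, T in years). The 1.52 AU aphelion is my assumed figure for Mars's orbital radius; the real Roadster orbit ended up somewhat different.

```python
# Kepler's third law in solar units: T^2 = a^3 (a in AU, T in years).
perihelion = 1.0   # roughly Earth's orbital radius, per the post
aphelion = 1.52    # roughly Mars's orbital radius (assumed value)

a = (perihelion + aphelion) / 2   # semi-major axis of the ellipse
period_years = a ** 1.5
print(round(period_years, 2))     # about 1.41 years
```

Since that period differs from Mars's 1.88 years, the Roadster and Mars drift in and out of phase, which is exactly why a launch at the wrong time means Mars is somewhere else at aphelion.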
The brochure for Falcon Heavy offers 16800kg to Mars. Presumably this is for an Earth/Mars transfer orbit. A 2009 Tesla Roadster is 1300kg. Even with a few hundred kg for the payload adapter a Falcon Heavy is massively overpowered. A Falcon 9 can get 4000kg to MTO. There are things a Falcon Heavy can and cannot do with such a light payload:
Pluto transfer orbit: The brochure offers 3500kg to Pluto.
Fast flyby of Mars: There will be an aphelion outside Mars's orbit that puts the Roadster near Mars either on the way to aphelion or on the way back.
(Probably) cannot do orbital insertion to orbit Mars. The stage 2 engine could shut down with propellant to spare after setting up for a fast flyby of Mars. I have not seen an endurance figure for stage 2. The liquid oxygen will slowly boil away and the liquid helium will boil away more quickly. Helium is needed to pressurise the propellant to the minimum required for the pumps to operate, so the choice is to use it near Earth or lose it before you get to Mars.
SpaceX does have long endurance propulsion: Draco. Early versions of Falcon had 4 Draco thrusters on stage 2 but these have been replaced with nitrogen cold gas thrusters. A Super Draco could do something to slow down a Mars flyby, but they have 1300kg of propellant and I think we would have seen one in the pictures if they had duct-taped one onto the car.
Who needs to fire off events at precise _times_? The usual events are "required data is in memory" or "disk has confirmed that the data will be read back as required even if the power fails right now". Delete the high resolution timer, and the vast majority of software would not even notice.
Back when I was a PFY, the scheduler interrupt was 50Hz - if you hogged the (only!) CPU for 40ms the OS would give something else a turn. Even back then, if the current process stalled, the scheduler would pick a different unstalled process immediately. Later, Intel CPUs got caches huge enough to hold multiple copies of the enormous state required by the X86 architecture, so the tick could be moved to 1000Hz without continuously thrashing the cache. (Linux got tickless for battery life.)
Databases need to put requests into an order, and I always assumed they used a sequence number for that rather than the time. Make has difficulty with FAT's 2 second (!) resolution last-modified time stamps. I am sure UUID and NTP actually need nanosecond accuracy, but apart from a few oddities the only contexts I have actually seen using nanosecond accuracy are performance monitoring for optimisation and malware cache timing attacks.
Most software does not touch the high resolution timers at all, so I too am interested in why restricting access to them is not a solution.
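The sequence-number approach mentioned above, sketched minimally (the class and names are mine, not any particular database's internals): a counter gives a total order even when two requests arrive within the same timer tick, which is why sequencing work needs no nanosecond clock at all.

```python
import itertools
import threading

class RequestLog:
    """Order requests with a counter instead of a wall clock."""

    def __init__(self):
        self._seq = itertools.count()      # 0, 1, 2, ...
        self._lock = threading.Lock()      # one ticket at a time

    def submit(self, request):
        # Tag each request with the next sequence number.
        with self._lock:
            return (next(self._seq), request)

log = RequestLog()
a = log.submit("write page 7")
b = log.submit("write page 7")   # identical payload, still unambiguously later
print(a[0] < b[0])               # ordering never ties, unlike timestamps
```

Two timestamps from even a nanosecond clock can collide or go backwards after an NTP step; two tickets from a counter cannot.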
Itanium's first success was before it was even a product: R&D on existing 64-bit designs stopped on the assumption that they would not be able to compete with Intel. Anyone know if any of the old 64-bit designs could later have become susceptible to Meltdown? Itanium took ages to get to market either because it was a difficult design or because with the competition gone there was no reason to rush.
Itanium was not built for speed. The primary design goal was to use so many transistors that no-one would be able to manufacture a compatible product. This goal was achieved by such a large margin that the first version used too much power to become a product. Even when Itanium became a real product its performance per watt stank. Software was either non-existent or priced higher than the SLS so sales were crap leading to poor performance/$. Itanium was never a competitor to X86 and was a zombie incapable of eating brains before AMD64 was available.
68020 had separate tables for user and supervisor address translations. It was meltdown proof, and the same went for 88110. I do not know if Itanium had a sane MMU design, but it was never an option for anyone without an unlimited budget and it did kill a bunch of architectures some of which were meltdown proof.
I used to work with a pretty young immigrant whose English was excellent but had very predictable gaps. The cost of using a word she did not know was you had to explain it to her. I promptly repeated the words "Read the FRIENDLY manual" over and over to myself. Thanks to her, even when the deadline is minutes away, when a fan starts to rapidly distribute mushrooms I can now say bother.
Presumably malware uses details of how Windows organises virtual memory and changes in this area may cause malware to crash the OS. Have malware authors provided updates so normal users can enjoy the benefits of keyloggers and RATs without risk of BSODs?
When I anticipate possible danger I move my foot over to the brake. Most of the time I do not need to press brake, but I am ready if the car that pulled out in front of me stalls, or the car about to turn right decides to wait for a bigger gap. With this new technology, I would have to keep my foot on the accelerator and hope complex software will detect my intention to stop if a possibility I anticipated actually happens.
Inflicting this tech on drivers who only react instead of anticipate will just cause them to pay even less attention. (And the next penguin to drop their bubble gum by the big barrel gets a sink plunger in the face.)
Back when Munich was about to switch to free software there was a report showing how much more expensive that would be compared to staying put. That report was secret, available under NDA for €40,000, leaked and blatant bullshit.
1) Lend it to someone who leaves it at the print shop.
2) Lend it to someone who drops it somewhere.
3) Lend it to someone who forgets they borrowed it.
4) Lend it to someone who puts it in the washing machine.
5) Someone complains the freebie flash from a sales rep stopped working after a week.
6) Someone complains flash bought at a market stall stops working after a week.
7) Someone complains flash found in the car park did something strange to their computer.
8) Someone complains flash bought at the supermarket (or any place that does not specialise in computing kit) stopped working after a week.
9) You bought it from a distributor within a month of them going bankrupt or being bought for a pittance and it did not work for the previous customer either.
More levels per cell requires increased over-specification, but still works out cheaper for the customer.
Wakipedia knows about 449 fabs. 26 are marked as manufacturing flash or NAND (224 do not say what they build). Two are marked as making 3D NAND. Even assuming only 32 layers, either of those two is likely producing more bits per month than the other 24 combined. It would be nice to sum (wafers/month)*(diameter^2)/(scale^2), but Wakipedia does not populate enough fields in the table to make that easy. Six-ish more (or embiggened) modern 3D NAND fabs will increase capacity by nearer 500% than 10%.
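That figure of merit is easy to compute when the fields are populated. The two fabs below are entirely made up, just to show how the layer count dominates the comparison: a smaller 3D fab on a coarser process still out-produces a big planar one.

```python
def relative_capacity(wafers_per_month, wafer_diameter_mm, process_nm, layers=1):
    # Wafer starts, scaled by wafer area and feature density, with a
    # layer count bolted on for 3D NAND. Units are arbitrary; only
    # ratios between fabs mean anything.
    return wafers_per_month * wafer_diameter_mm**2 / process_nm**2 * layers

# Hypothetical fabs: a big planar NAND line vs a smaller 32-layer 3D line.
planar = relative_capacity(100_000, 300, 19)
stacked = relative_capacity(30_000, 300, 40, layers=32)
print(stacked / planar)   # the smaller 3D fab still wins
```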
Take a look at what happened with the Memory Translator Hub. Intel will watch calmly while the smaller vendors get sucked into the wood chipper. Even if the big distributors get free replacement chips from Intel, the cost of distribution and installation will land on the distributors. Anyone - big or small - who soldered Intel CPUs to the PCB is in for a world of hurt.
Employing the best engineers is no use when PHBs insist on RDRAM. I cannot see rumours doing Intel any damage whatsoever when headlines across the tech press never did any serious damage before. Take a look for Intel's previous epic cockups in the mainstream news. If they are mentioned at all it is only a few words because non-techies will tune out the moment a news reader tries to explain what speculative execution and virtual memory translation buffers are. Outside the tech news this will be forgotten by Monday. Customers will keep buying Intel despite FUCKWIT because most of them do not realise they have a choice.
Almost everyone who bought or sold Intel kit will pay for this mistake and only a small portion of the damage will land on Intel. A few of the big players like Google and Amazon might get a financial apology from Intel - if they can switch their orders to AMD/ARM. If you do not believe me, join the class action lawsuit and three years from now watch Intel settle ... with the lawyers.
Robert Bigelow has said he was absolutely convinced there were alien visitors to Earth. He founded the National Institute for Discovery Science to investigate anomalies. The Deputy Administrator of the institute appears to understand the difference between unidentified and extraterrestrial (the first requires the absence of evidence while the second requires strong evidence - some UFOlogists get this backwards). The institute was disbanded in 2004.
According to the article the idea for the Advanced Aerospace Threat Identification Program came from within the Defense Intelligence Agency. Pentagon officials went to visit Bigelow presumably because he has money and a desire to believe stories about extraterrestrial visitors. Bigelow took this to Harry Reid (D NV) presumably because he had power and a desire to believe stories about extraterrestrial visitors. Reid went back to the Pentagon officials then roped in two more senators: Ted Stevens (R AK) and Daniel Inouye (D HI). The three senators got the funding approved, much of which went to Bigelow. None of the three senators wanted public discussion of the project or its funding. Pick a reason: embarrassment, avoiding a deluge of crank calls, or corruption.
In Bigelow's place, I would have spent a pittance on evidence of a serious investigation, sent some campaign contributions to the senators and put the rest in my pocket. There is evidence of a serious investigation. Bigelow is the sort of person who would actually spend this money on the reason it was paid to him. I looked for campaign contributions back to the senators, but found nothing related to Bigelow (would not be hard to hide this from me as it is not something I do for a living).
Funding began in 2007 (I would like to blame Bush but I am sure he knew bugger all about any of this). The project officially ended in 2012 (I would like to blame Obama but I suspect the real reasons are that Stevens died in 2010 and Inouye died in 2012). The project has trundled on without funding for another five years and has hit the news because Luis Elizondo (project leader in the Pentagon) resigned to protest excessive secrecy and internal opposition.
It is possible the investigation caught evidence of secret aviation research or military aircraft where they were not supposed to be. In Elizondo's place I would welcome the secrecy to hide the fact that my department had nothing to show for $4.4M/year and to fuel support from conspiracy theorists.
So far, the story is consistent with people spending tax payers' money on a genuine attempt to investigate unidentified flying objects. If you have actual evidence of corruption (or extraterrestrials visiting Earth), please post it.
1) Because testing is impractical. In the commercial world, drivers get updated for a limited amount of time to require people to replace the hardware every 2-5 years. In the open source world, drivers can easily live a decade. Linux support for outdated hardware is outstanding, but utterly impractical to test in house because that would require a warehouse full of kit and someone to walk around to see if it is behaving as required during tests.
2) Covering a good range of commonly sold machines is more difficult than you expect. OEMs frequently change the bill of materials without changing the product name. Do you really want to buy a new machine of each type from each manufacturer every month and run lspci and lsusb to see if there has been a change? Rest assured the OEM will not tell you about changes and may not be able to get you a specific configuration on request. How do you expect to fund this massive regular hardware purchase from free software?
3&4) Gross professional incompetence springs to mind but management diverting resources from incomplete software to put out some other fire is very common.
Long ago, the boot sequence was an area where the competent thought before they bought, and made sure they had access to unbricking tools before doing anything interesting. These days, the competent are so vastly outnumbered by the clueless that UEFI exists without catastrophic loss of sales to the OEMs inflicting it on us.
Biting the hand that feeds IT © 1998–2019