Re: Because of "PC Sales"
Surely not so much the PC downturn, as the SSD upturn? Who wants a 250GB HD when they can have an affordable 250GB SSD?
Power from the mains and comms via Homeplug protocol might be even more useful.
(Not sure if I want my new smart TV online at all, but it's wired-only, and there's a fireplace between the router and the TV so installing that wire is non-trivial).
Write a shim that goes between the browser and the Flash player, displaying appropriate things about the dangers associated with clicking the "view" button and the stupidity of the site that wants you to use Flash. Give it a password option so you can lock non-authorised users of the computer out of Flash.
Flashblock contains most of this functionality, apart from the insults (which might have to be lawyer-vetted).
I'm starting to think that they are faulty by design?
Only starting to think so?
I've thought that since about a year after Flash first arrived on the scene. The only thing I'm not sure of is whether the faulty design is down to incompetence or malice.
Windows is also faulty by design, ever since MS broke the NT 3.5 kernel's designed-in security on purpose. Again, incompetence or malice? You decide.
Adobe Flash... pretty sure it serves a useful purpose, somewhere, for someone.
the NSA and other countries' intelligence agencies?
Decision time: Uninstall Microsoft Windows or install yet another critical patch
At least Windows can apply its own bandages ... until the bad guys get there first and cripple its auto-updating.
and it might come to that, soon
But surely there's hundreds of times more porn out there than any person could watch in a lifetime? So start recycling it, like rubbish.
about 600 EUR for a shoot
That's about sixty times the recently uprated minimum wage (1 hour shoot?), for which a tidal wave of illegal immigrants are trying to break into the UK. Methinks it's got a long way to fall yet.
Aren't there some strange folks who would pay the producers to be in a porn movie? Exhibitionists, I think they're called ....
The assumption seems to be that IBM is interested in this for making CPUs.
I would guess that the first application of very small (and therefore very fast) 7nm transistors will be in specialist datacomms devices (things like 100Gbit Ethernet). It's also probably true that in that field, billions of transistors aren't needed. Mere thousands might be useful, mere millions certainly would be.
There's a hint, in that they are talking about SiGe, not plain Si. SiGe is more difficult (but intrinsically faster).
I thought that discovery of the bullet galaxy had rather damaged the chances for MOND. It's two clusters of galaxies that have collided head-on. The normal matter has been slowed down by that collision. The weakly-interacting dark matter has continued at pretty much unchanged velocity, and it is now possible to deduce from observations (of gravitational lensing) that it is displaced with respect to the original galaxies.
It's not entirely clear-cut, though.
Proper engineering but with silly political constraints.
We've known how to assemble a nuclear reactor (U235 fuel, unshielded) in orbit and how to build ion thrusters for quite a while. Unfortunately the anti-nuke lobby are so strong, they won't let us do that, even though the vehicle wouldn't become significantly radioactive until after it had left Earth orbit, never to return.
Apparently no one forecast a one-ton Mars-invading laser-toting nuclear-powered space truck named Curiosity
I recall reading SF about lunar mining operations using tele-operated hardware controlled by people here on Earth, and how the speed of light made that bloody tricky.
Given the capabilities, weight and power demand of computing hardware back then, it's unsurprising that nobody forecast machine "intelligence" sufficient to allow a Mars rover or Pluto probe to look after itself in real time while communications between it and Earth crawled along at the speed of light. CMOS (and Moore's law) didn't arrive until the late 70s. (I'm ignoring the sort of SF that postulated hardware that could support AI, with no plausible extrapolation from then-existing hardware.)
There is definitely something worthwhile in Bitcoin given the investment that is still being put towards blockchain R&D.
Always bear in mind that currency has two purposes: a store of value and a medium of exchange. Those who seek to make a fortune using bitcoin as a store of value, risk losing a fortune. It's more fragile than electronic pounds sterling or dollars, and *much* more fragile than gold.
But as a medium of exchange, it may have a lot going for it. Coin, Banknotes, Cheques, Visa, Paypal ... Bitcoin?
There was a woman who put something like 150k miles on her SUV (or whatever) without a single oil change, without even knowing that it was necessary,
The people who really need to be laughed at are the ones who junked that SUV when it finally broke down, or tipped the oil in the sludge can and serviced it in the usual way. Had they stripped that engine down and analyzed the oil in forensic detail, they might have learned something valuable! (I'm assuming that they didn't).
Reading between the lines, I think that's probably "in theory, ignoring noise".
Allowing for noise, either this piece of magic will fail, rendering the whole thing useless, or, assuming that the boffins have more of a clue than the journalists, you'll be running a sort of physically accelerated annealing process that will give you an answer that's pretty close to THE answer to the NP-complete problem. However, it won't necessarily be that answer, and you may never be able to know whether it is or not. For a travelling salesman problem (route optimisation), that doesn't matter: any close approximation is equally useful. For other (crypto?) problems, it most certainly does.
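For the curious, that annealing idea has a purely-software analogue. A toy sketch (made-up random cities, no claim to match whatever the boffins actually built):

```python
import math, random

def tour_length(tour, pts):
    # Total closed-loop distance through the points in tour order.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    # Classic simulated annealing: accept worse tours with a probability
    # that shrinks as the "temperature" cools, so we settle near (but not
    # provably at) the optimum -- exactly the caveat discussed above.
    rnd = random.Random(seed)
    tour = list(range(len(pts)))
    best, best_len = tour[:], tour_length(tour, pts)
    cur_len, t = best_len, t0
    for _ in range(steps):
        i, j = sorted(rnd.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, pts)
        if cand_len < cur_len or rnd.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len

if __name__ == "__main__":
    rnd = random.Random(42)
    pts = [(rnd.random(), rnd.random()) for _ in range(30)]
    tour, length = anneal_tsp(pts)
    print(round(length, 3))
```

You get a good tour, quickly, with no certificate that it's the best one, which is fine for routing and useless for crypto.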
It's almost certainly a good idea to build one and study its operation. As with quantum computation, detailed information on what can and can't be achieved may lead to the next breakthrough in our understanding of our universe.
I can't see a bank of capacitors for an emergency power reserve. If I'm correct, what happens to the contents of that 256MB of RAM when the power fails (especially if the drive is in the middle of remapping some blocks on which your filesystem metadata resides)?
Kinetic energy is half the mass times velocity squared. Mass is proportional to diameter cubed. So there's a factor of about eight from the relative sizes. Double the velocity could easily account for another factor of four. Finally, there's the extent to which it dissipates its energy in the atmosphere before impacting the ground. It'll be far worse if it's coming straight down compared to a very oblique impact with the atmosphere. A large one will shed relatively little energy in the atmosphere. Small enough, and it's just a shooting star. That Chelyabinsk meteorite shed all but a small amount of energy in the lower atmosphere, which broke a lot of windows over a wide area rather than subjecting a smaller area to the equivalent of a small ground-burst nuke.
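The arithmetic above in a couple of lines (ratios only, so no absolute masses or densities need assuming):

```python
def relative_impact_energy(diameter_ratio, velocity_ratio):
    # KE = 0.5 * m * v**2, and mass scales with diameter cubed,
    # so the energy ratio is d_ratio**3 * v_ratio**2.
    return diameter_ratio ** 3 * velocity_ratio ** 2

# Twice the diameter (~8x the mass) at twice the velocity: ~32x the energy.
print(relative_impact_energy(2, 2))  # 32
```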
If you can't divert it enough to miss the planet completely, divert it into an area with low population density and evacuate that area, or onto an island and evacuate that island. Make sure it misses cities and oceans (the latter because it'll cause a Tsunami all around that ocean's coasts).
Millions of years isn't really the right measure of distance. Some organisms haven't changed a lot since well before the dinosaurs (sharks, for example). There are "living fossils" that are still extant but whose closest relatives belong to larger groupings that are mostly long-extinct (the pearly nautilus is an example).
And then there are the Archaea, single-celled organisms with biochemistries far more different from today's main groups than the difference between a man and a cabbage.
Check out the OpenWRT Hardware list http://wiki.openwrt.org/toh/start
The one I have is the Trendnet tew-732br
installing openWRT Barrier Breaker was a piece of cake.
The one problem is that AFAIK there are NO ADSL2+ all-in-one routers that can run OpenWRT, so I'm using my ISP's router to NAT everything onto a piece of wire connected to the Linux router. Double-NAT isn't perfect, but I can cope with it. Or you could buy a proprietary ADSL2+ box that can run in bridge mode (but apart from the Draytek PPPoA-to-PPPoE bridges, most are doing horrible things that a bridge really shouldn't be doing).
In most jurisdictions, wilfully aiding and abetting a crime is a crime, and inducing a well-intentioned individual to commit a crime in ignorance of doing so is usually regarded as a worse crime.
Watch out, Microsoft execs!
A good solution would be a multi-SSID-capable AP with VLAN capabilities (though the latter also needs support all down the chain...) to segregate the less secure devices. But not every user is a sysadmin with the required knowledge; for many, wifi and the internet are just a plug-and-play "experience", and without that knowledge they can't understand the full picture...
Well, you can get a router with that hardware for about £15 and OpenWRT to enable the capabilities for free. So with a bit of luck, someone will package it all in a form that the only slightly clued-up can use and either open-source it or sell it (if it's a user-mode wrapper running on OpenWRT or similar, the GPL doesn't force you to give away your source, only OpenWRT).
You don't necessarily need VLAN support in the rest of your hardware. You just need routing rules to segregate your subnet from the kids' one.
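By way of illustration only (the subnet numbers are made up, and a real router enforces this with iptables rather than Python), the segregation policy amounts to something like:

```python
import ipaddress

# Hypothetical addressing: grown-ups' machines on one /24, the kids'
# (or the untrusted gadgets') on another.
ADULT_NET = ipaddress.ip_network("192.168.1.0/24")
KIDS_NET = ipaddress.ip_network("192.168.2.0/24")

def forward_allowed(src, dst):
    # The firewall policy in miniature: drop anything crossing between
    # the two subnets; everything else (e.g. out to the internet) is
    # forwarded as normal.
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    crossing = ((s in ADULT_NET and d in KIDS_NET) or
                (s in KIDS_NET and d in ADULT_NET))
    return not crossing
```

No VLAN-aware switches needed downstream; the router simply refuses to forward between the subnets.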
How do you know that these won't last twenty years?
Over the years I have encountered models that were complete lemons, and batches of formerly reliable drives that suffered presumed common-mode component failures. Excluding these, I've found that the majority of IDE and SATA drives were working well up to the day the system they were in was scrapped. No manufacturer stood out as better or worse, but really unless you are the like of Google (who aren't telling), you haven't got a big enough sample set to judge past history let alone extrapolate the future of a newer model.
By the time you (or the manufacturer) knows that a particular design is long-term reliable, it is also obsolete and no longer in manufacture. So cross your fingers, touch wood, mirror your disks, pair different manufacturers to minimise common-mode risks, and make sure of your backups!
Yes. But a lot of that is because the format (pdf) is not designed for e-paper displays, or perhaps that the pdf software in the Kindle is not well developed (because there's no money in it?) Not that much money in technical publishing either, compared to entertainment-type novels. That's why the mashed-tree technical books cost so much.
This is the sort of thing that would rapidly get fixed, if Kindles were open devices.
E-readers. I love them. But they'll most likely, in the long run, die just because they bloody _work_.
Not the ones that are tied into a proprietary locked-down framework for selling content. Amazon will carry on selling Kindles to supply a replacement market, because the profit is in selling the content to read on them. They'll buy the company that makes the displays if they have to. Having used a Kindle I'd buy a replacement even if my daily newspaper subscription was the *only* available content.
I really wish that there were an open equivalent to a Kindle (even if only as open as an Android phone, rather than a Linux'ed PC), but I think you've nailed why there isn't and probably won't be.
there has never been a rocket system that hasn't had a catastrophic failure at one time or another
For unmanned rockets, occasional catastrophic failure should probably be designed in at some level.
The penalty of any added weight for something going all the way to orbit is very high (in terms of reduced payload). Henry Ford once asked which parts on his cars never went wrong, and then ordered "make them cheaper". For getting an unmanned vehicle into orbit, there's far more justification for "make them lighter".
This is also the weak point of any proposed spaceplane. Because it'll cost very much more than a rocket, it has to be reusable many times, but that level of reliability will impose a weight penalty. It would be a non-starter, if it didn't have the advantage over a rocket of being able to do away with a large weight of oxidizer (it can use ambient air until it's a few miles up).
A better simile might be "as if I had been born with brown eyes and was still 20 years old". (I have blue eyes, which are more prone to dazzle than brown ones, and my natural lenses will be less clear than they were in my youth -- give me another forty years and a cataract operation will probably be the least of my worries).
Apart from oncoming drivers who don't dip their xenon-arc lights, my other hatred is the highway designers who think it's sensible to light junctions and roundabouts to near-daylight intensity, leaving the rest of the route unlit. So you lose your night vision passing the junction, and wildlife pays the price. Why not light the whole route to a much lower intensity, say that of a full moon, for which our eyes are well-evolved? Especially now we have LEDs, which are a very good match to that requirement. Heck, you could probably run LED lighting off batteries charged from small solar panels, so no expensive copper wiring needed.
But most people I know rely entirely on wireless phones, which won't work during a power cut.
And I've always wondered: why? The phones have rechargeable batteries in them that last for several days on idle and many hours of talk. Why don't the base-stations also have rechargeable batteries as back-up? Maybe the battery uptime would be hours rather than days, but an awful lot better than zero.
I still have a non-wireless phone in a cupboard, so I can avoid being charged by BT for diagnosing that my phone line is OK but my base-station has died. I thought everyone did.
the phone lines are CRAP
That ought to be an acronym.
Copper wRapped Aluminium Padding?
I just can't understand how anyone considers it acceptable to have to pay a supplier to fix defects in the product they sold you because the defects were not discovered within some time limit the supplier set.
Right now I'd (somewhat) happily pay Microsoft for another XP license, complete with all the bugs that it had at its termination date. BUT I CAN'T.
I am looking at an XP PC embedded in a microscope that cost a hundred grand when new, which is still working and useful, and which would cost another hundred grand to replace (out of the question). But the PC is flaking out. I can't simply stick a copy of its disk into some other PC and make it work, because it's an OEM XP license locked to that (ancient) motherboard. And some experimentation is likely to be required, so even if I could get Microsoft to transfer the license once, that may not be enough to solve the problem. (It would be nice if we had an installable copy of the software we need to transfer, but needless to say we don't, and the microscope manufacturer isn't around any more).
I've wasted a day on this Wombat already. I'll need to track down a second-hand Windows XP Retail license, so I can do unlimited reinstalls. They're selling on Ebay at a **premium** to the price that Microsoft charged while XP was available for sale. What does that tell you?
Surely MS could at least sell XP Transfer licenses, so people could keep their XP running until eventually there's no compatible hardware left for love nor money (sometime around 2060 I'd guess). But no, they just want to piss on us.
Lots of fossils in our rocks. Some rocks (for example chalk) are all-fossil. But whether they'd have found any fossils after exploring only to the extent that our Mars landers have explored Mars, I don't know.
The ruins of dams will probably present evidence of intelligent life visible from Earth orbit for some tens of millions of years after the demise of homo sapiens. Inactive geostationary satellites will last for rather longer.
BTW does anyone know if life on Earth can be deduced by the isotopic ratio of C12 and C13 in the CO2 in our atmosphere? Life selectively excretes C13 to a small extent, and our bodies are C12-enriched. When ocean life dies, it takes C12 to the bottom, from where some of it gets subducted, so the atmosphere must be slightly C13-enriched over the natural abundance. Detectable remotely?
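For anyone wanting to put numbers on that: isotope people quote this as a per-mil "delta" relative to a reference standard. A sketch (the standard ratio here is approximate, quoted from memory):

```python
VPDB_R = 0.011180  # approximate 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample, r_standard=VPDB_R):
    # Standard per-mil "delta" notation for isotope ratios: a negative
    # value means the sample is 12C-enriched (lighter), which is the
    # biological fingerprint discussed above.
    return (r_sample / r_standard - 1) * 1000.0

# Photosynthetic material typically sits around -25 per mil, i.e. its
# 13C/12C ratio is roughly 2.5% below the standard.
print(round(delta13C(VPDB_R * 0.975), 1))
```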
Add "no Moon" (no huge one like Earth). That creates a lot of heat inside the Earth by tidal drag, and also stabilizes our axis of rotation. Our moon may well also be a key element in the not-well-understood generation of earth's magnetic field.
It's also possible that Mars's core is less radioactive than Earth's, because Mars formed further out from the sun and the natural radioactives are less volatile elements. That's a lot less certain.
Yes. I think you'd now be on mine if I'd bought one of your infected laptops described above. You've escaped by a whisker. Take note.
I still haven't forgiven or forgotten the two hassle- and stress-filled days of my life which Sony inflicted on me, by putting malware on audio CDs a good many years ago. I have a personal "buy Sony last" policy and that will last until I'm six feet under, or until Sony is in the corporate graveyard, or until enough competing vendors do enough even worse things that Sony fall off the bottom of my list.
Consumers are no longer valued customers,
So opt out. Become one of the folks that we are forever being told "don't exist". Run Linux on your desktop, laptop, ... and become a participant in a community rather than a resource to be exploited.
If that's too radical for you, dump the bloatware distributors. Build or buy a bare PC and buy your Windows from Microsoft. At least that way you'll be free of any 3rd party bloatware and will be able to nuke and reinstall whenever you need to.
That will work until the on-board Wi-Fi gets smart enough to find an unlocked network somewhere in the vicinity of the TV, then all bets are off.
I hadn't thought of that horrible possibility. (that the malign "they" will start putting unsecured base stations out there for thingies to connect to, in case we are unkind enough to refuse to connect them to our own broadbands. Stealthed base stations with source filters, so most of the world that isn't a thingie will remain unaware of their existence ... )
I guess we'll have to open up our TVs, locate the wifi aerial, and remove or mangle it, and hope that the TV still works as a TV. But by then they'll have stopped broadcasting in favour of internet. Oh dear. The Vingean nightmare of civilisational death by omnipresent surveillance looks to be happening a lot faster than I'd hoped.
Actually, I'm pretty sure that educated Chinese are wilfully unaware. In the same way that most of the population of Nazi Germany was unaware of the fate of Jews "resettled in the East". It was a good idea for self-preservation to keep one's doubts to oneself.
So, if I had one of these devices in front of me now and I set it to solve a 1000-node travelling salesman problem, would it be able to spit the answer straight back
As I understand it, a perfect 1024-qubit quantum computer ought to be able to do that. (My own belief is that our attempts to build such a device will always fail, and will tell us something interesting about the nature of the universe once the reasons for failure are well-understood).
Clearly these devices are imperfect. I think there's some debate over whether they are actually quantum computers at all in any meaningful sense. But if they can outperform a conventional compute cluster eating a few megawatts and as much financial capital as it takes, I should think that's good enough for the time being.
Take a look at a photo of a computer old enough that its CPU consisted of a large number of logic modules connected with a wire-wrapped backplane (for example, Google "PDP-8 backplane" images). You'll soon deduce that the interference problem is not insurmountable. It was not negligible, though!
The routing of wires within the backplane was a black art. Some were artificially lengthened so as to introduce deliberate signal delays. Others took non-parallel routes from A to B to reduce crosstalk - interference is by far the greatest between closely parallel wires. The general term was "random-wired". It was most definitely not a good idea for structure in the circuit schematics to be explicit in the physical arrangement of wires in the backplane.
There's also bit-serial parallel computing ... SIMD, with one instruction at a time broadcast to an array of one-bit processors. The ICL DAP, if there's anyone else out there who can remember that ill-fated project. I had great fun one summer learning to program it in assembly language.
I've met computer science graduates who don't understand the connection between writing a value to an output port:
and (say) light number 3 going to brightness level 12/16.
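The snippet that originally went with that has not survived, but the idea can be sketched with a hypothetical controller (the encoding and numbers here are invented purely for illustration):

```python
def lamp_command(light, brightness):
    # Hypothetical encoding for a bank of dimmable lights: light number
    # in the high nibble, 0-15 brightness level in the low nibble.
    # Writing the resulting byte to the controller's output port is
    # what actually changes the lamp -- the bit of hardware/software
    # connection those graduates were missing.
    assert 0 <= light <= 15 and 0 <= brightness <= 15
    return (light << 4) | brightness

# Light number 3 to brightness 12/16 -> one byte on the port:
value = lamp_command(3, 12)
print(hex(value))  # 0x3c
```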
A technology that existed in Babbage's time, but of which Babbage was unaware, is hydraulic logic. It's possible to create a bistable out of fluid (air) being pumped through an appropriately shaped cavity, and to switch it between its two stable states using pipework connected to the output of others. Logic gates are also feasible.
Anyone fancy building the world's first (?) hydraulic programmable computer?
Or even a simulation thereof, just to hear what it might sound like while it is computing.
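Short of actually plumbing one, you can at least simulate the logic. A minimal sketch of a bistable as two cross-coupled NOR "gates" (the medium, whether transistors, relays or air through shaped cavities, doesn't matter to the logic):

```python
def nor(a, b):
    return not (a or b)

def settle_latch(set_in, reset_in, q=False, qbar=True):
    # Cross-coupled NOR gates: the logical skeleton of a bistable.
    # Iterate until the two outputs stop changing (the "fluid settling").
    for _ in range(10):
        new_q = nor(reset_in, qbar)
        new_qbar = nor(set_in, new_q)
        if (new_q, new_qbar) == (q, qbar):
            break
        q, qbar = new_q, new_qbar
    return q

# Pulse SET, then release both inputs: the latch remembers.
q = settle_latch(True, False)             # latched True
q = settle_latch(False, False, q, not q)  # still True: it's a memory
print(q)
```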
Those wires act as huge capacitors which need to charge and discharge on each cycle to allow the signal to stabilise.
The general rule of thumb is that a signal on a plain old wire takes something like six times the wire's speed-of-light transit time to settle. (Or two to-and-fro bounces at 0.7c.)
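Putting numbers on that rule of thumb (the 0.7c velocity factor and the two-bounce figure are taken straight from the comment above, so treat the result as order-of-magnitude):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def settle_time(wire_length_m, velocity_factor=0.7, round_trips=2):
    # Two full to-and-fro bounces at ~0.7c, i.e. roughly six one-way
    # light-speed transit times of the wire.
    one_way = wire_length_m / (C * velocity_factor)
    return 2 * round_trips * one_way

# A 30 cm backplane wire settles in a handful of nanoseconds:
print(f"{settle_time(0.30) * 1e9:.2f} ns")
```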
I've often wondered what is the optimum design for a discrete-transistor computer. Minimise the transistor count, build as small as possible, and clock as fast as possible, or go for wider buses and more transistors clocking more slowly? (Of course in the early days they went for small component counts, because transistors - germanium alloy junction ones - were significantly expensive, and suffered thermal runaway at fairly low temperatures so cooling really mattered. )
You can get a J1800 / J1900 board. My home system is built around a Gigabyte GA-J1800N board (fanless, 2.4GHz 2-core Celeron). The J1900 is quad-core, but the cores have a lower top speed, so for single-threaded apps the dual-core may be faster. The board includes the CPU and heatsink.
I immediately thought of a companion phrase, "slightly pregnant".
If you want quiet, the only way to go is passively cooled, SSD, no moving parts at all. You can do that with a NUC in an expensive, heavy, cast aluminium case. If you don't need such a powerful CPU, I'd recommend an ordinary ITX board in an ordinary ITX case (still small enough to attach to the back of a monitor). Boards in question have dual- and quad-core Intel Celerons with ~15W TDPs.
The city-buster meteorite now has your name on it.
3D printing costs more and yields results less polished than the hype suggests. For one-off prototyping it's great, and it may have a good future for rarely-needed replacement parts. For mass production of anything except very small components, injection moulding wins hands-down.
(Very small: well under a centimetre cubed, which a 3D printer can print in fairly large multiples per job)
Built in SATA and space for at least one hard disk. A second SATA and space for an optical drive would be nice.
Room for a couple of expansion cards
And yes, I'd be prepared to pay thirty or forty quid for it. Frankly, I'd settle for an easily available board to make it ITX compatible.
You don't get SATA and PCI interfaces from a case! You're describing a different platform.
Are you aware that you can get an ITX format fanless Intel-x86 PC board with all you ask for, plus probably rather more CPU grunt, expandable RAM, etc. for £50ish? Gigabyte GA-J1800N-D2H. Built my home PC around this - a 100% solid-state PC, completely noiseless. Although to be fair this is around £200 by the time it's built into a full-blown PC, or around £100 for a bootable bare board.
More Pi-like, there's the CubieBoard series with SATA and more RAM than a Pi. I've not used one. https://en.wikipedia.org/wiki/Cubieboard
Cheapest linux system-in-a-box is a broadband router running OpenWRT or similar. I have a £17 Trendnet router (check the OpenWRT hardware compatibility list). Inside is a five-port fully VLAN-capable switch and double the RAM that most of them boast. Slightly dearer ones also have USB ports. Others cost under £10 for CPU, Lan and Wireless. Main drawback with routers is (usually) very small RAM capacity.
Dum-dum rounds have been banned for military use since ~1900...
though probably only because all the world's militaries could work out that this both looked good and was how they'd act anyway.
Because you'd rather that your bullets seriously injured enemy troops than killed them. An injured soldier consumes far more resources on and off the battlefield than a dead one.
Isn't war horrible.
I imagine it could run on a Pi. Might be a bit sluggish but hard to beat the price. For a faster desktop system, you can choose between an old PC that Windows won't run on (probably scroungeable for free, but will eat £30 of electricity quite soon if you leave it powered up) or a fully solid-state system based on a fanless mini-ITX board and case such as the Gigabyte J1800N-D2H or its quad-core J1900 variant.
Anyway, whatever you run it on: Cinnamon - completely recommended.
In passing Cinnamon works (yum install) on Fedora 20, maybe older Fedora. And from memory, on Centos 7.
My experience is that Intel graphics hardware works well with Linux, and that's because Intel have been very supportive of Linux for quite a few years. Of course, Intel graphics hardware isn't the fastest, should you actually need high performance by today's definitions thereof.
NVidia are still sticking to their closed-source binary blob. When it works it works well, but when it doesn't work with your current kernel / distro / whatever, you are stuffed. Good route to upgrade hell as well. I buy these only if there's a good reason to (most often, a package demanding a CUDA-capable GPGPU to run at all or to run much faster). I sometimes wonder if they won't go open-source because when the card isn't doing your graphics, it's pillaging the internals of your PC on behalf of some three-letter agency! (yes I know ... more prosaically, they don't want to tip off whoever owns the IP that their hardware is arguably infringing).
ATI were late to the open-source party. Don't know how they are getting on, nothing I look after uses ATI.
Quite often, what's described as graphics driver problems is actually problems in Gnome / KDE / whatever (user mode code). Nothing to do with the Linux kernel or driver, but rather with the desktop project or your distro's packaging thereof.
Actually I'm fairly certain that if they could get 20x capacity by stacking a dozen or more platters, each with twice the area, in a 5.25 inch full-height container, then they would. But there are good physics reasons why they can't. Such large stacks will have all sorts of extra vibration modes, and failure to tame any of these would make the whole project non-viable. There's also the extra inertia of a bigger stack of heads, increasing problems with inter-drive vibrational coupling for those who design whole storage arrays.

And of course, multiplying the number of heads and platters will considerably reduce the MTBF of the assembly, to the extent that heads and head/platter contact issues dominate HD failures. I'm not certain that is the case, but in my experience over half of disks fail "soft", with deteriorating SMART metrics and increasing bad block counts. "Instant brick" is relatively less common, especially once drives have survived in service for a month or so.
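The MTBF point is just failure rates adding. A back-of-envelope sketch (made-up component MTBF, and independence of failures assumed):

```python
def assembly_mtbf(component_mtbf_hours, n_components):
    # With independent, roughly constant failure rates, the rates add:
    # lambda_total = n * lambda, so the assembly MTBF divides by n.
    return component_mtbf_hours / n_components

# Quadrupling the head/platter count quarters the MTBF of the assembly:
print(assembly_mtbf(1_000_000, 4))  # 250000.0
```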