>But what about the whole, everybody dies thing
Maybe, as long as they are certain that an 'uncontrolled fire' doesn't result in debris being propelled into an orbit where it might interfere with other spacecraft.
>Or is it that they came to realize some materials classed as non-flammable still may be ignited, given the right amount of heating?
There is also the scenario of an oxygen cylinder leak, and in such an oxygen-enriched atmosphere many materials that we think of as not flammable can catch fire.
Sadly, NASA do have experience of this on Earth: Apollo 1's three crew members died in a launch rehearsal test because their cabin was pressurized with pure oxygen.
Give Boston Dynamics another year or so and you won't be so cocky! :)
>So can humans think 80 moves in advance?
No, we can't. And even before Deep Blue beat Garry Kasparov, computers were calculating far more moves ahead than the humans that beat them at chess. Humans tackle the problem differently. Go players talk of 'intuition', i.e. they aren't calculating the decision tree in a formal manner, but relying on familiarity and a 'feeling' in some situations.
Mr Gumby - just play some Go, and things will become clearer. There are free versions (you vs CPU) you can play on your PC or tablet. For a quick game, you can play on a 9X9 grid. It's very easy to learn. Enjoy!
Here's Murray Campbell, one of the leads on IBM's Deep Blue, the computer that beat Garry Kasparov, on the difference between Go and Chess:
I don’t play Go, I’ve only played a few games in my life, but I certainly know a fair amount about it. Both games are immensely huge and once you get past 10 to the hundredth power, 10 to 120, 10 to 170 [in number of possible positions], they’re all just immensely huge, very complex games. But Go has the characteristic that wasn’t true in chess, that it’s very difficult to evaluate a Go position just by looking at it. A medium-good chess player like myself can sit down and in a few hours probably write an evaluation function that is pretty good at evaluating chess positions — nowhere near grandmaster level, but it’s good enough that when you combine it with the search it produces very high quality play.
>(Do this for 10 levels of recursion deep.)
That is the issue, Mr Gumby: the advantage or otherwise of a certain move might not be apparent until the later stages of a game, often 80 moves or more later. (In this respect it is very unlike chess, where generally material and position can be analysed.) Certainly well beyond the ten moves you give it. So even if you whittle your choice of roughly 19*19 choices down to 100 (?), you could still be looking at 100^80, and still not know if the individual move helps you.
>I'm not sure of how fast this would be..
to assess a possible 100^80+ moves? How many universes have you got?
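To put that number in context, here's a quick back-of-the-envelope in Python (using the usual rough estimate of ~10^80 atoms in the observable universe):

```python
# A naive 80-ply search over ~100 candidate moves per turn, compared
# with the estimated number of atoms in the observable universe.
positions_to_assess = 100 ** 80   # = 10^160 positions
atoms_in_universe = 10 ** 80      # standard rough estimate

# Even at one position per atom, per universe, you'd need 10^80 universes.
universes_needed = positions_to_assess // atoms_in_universe
print(universes_needed == 10 ** 80)  # True
```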
>but if its too slow...
It will be. By dozens of orders of magnitude.
> but that should be more than enough to beat a human.
No, it never has been. Not even against amateur club players, let alone professionals. Which is why this AlphaGo team have not used the approach you have outlined.
>While I am not a Go player.
That is clear. But hey, you're not an idiot. You just overlooked an aspect of a game you haven't played, that's all. It's like the proverb of the man who takes as payment from a king a single grain of rice, doubled on each square of a chess board. 1, 2, 4, 8... (and 63 doublings later...)
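For anyone who hasn't met the proverb, the doubling works out like this (a quick sketch in Python):

```python
# Wheat-and-chessboard proverb: one grain on the first square,
# doubling on each of the 64 squares of the board.
grains = sum(2 ** square for square in range(64))
print(grains)  # 18446744073709551615, i.e. 2^64 - 1
```

Exponential growth sneaks up on you, which is exactly the point about Go's decision tree.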
>GO which is afterall basically a game of logic.
Go is not really logic in the traditional sense. Certainly not basic logic. You can't know whether a certain move is good or bad until many, many moves later. Have you played?
>Poker Seems to me to be a better game to evaluate human type 'thinking'
That is not their goal. Baby steps and all. Also, poker just wouldn't make a great example of any single thing, such as face recognition, or narrow-bandwidth IR sensors. Detractors would say the poker-bot had an unfair advantage (no face, no tell). It's just unclear.
Anyway: https://xkcd.com/1002/ "Difficulty of various games for computers"
>That sounds to me much more like human programming than genuine machine learning
No, it isn't human programming.
Why are you repeating to us something you've roughly grasped from the BBC, who didn't fully grok a tweet by a man who was just exhibiting good sportsmanship? Surely you've heard the expression 'Chinese whispers'?
Go to the source:
It can't learn from anyone on the internet.
>So it's still relying on the learning-by-studying-past-papers technique of passing exams.
No! No, it really isn't. That approach wouldn't beat even an amateur human Go player.
The thing about Go is that you can't calculate (intuit, maybe, but not calculate) how well you are doing during the game - the possession of territory is just too changeable. This means that you can't calculate whether a certain move will be to your advantage.
Please read up* on how the game is played and come back here. Even better, play some games yourself - against a computer or human (over the internet, if needs be). And that goes for everyone who up-voted FatGerman.
Don't take it from me, take it from Albert Einstein, Paul Erdos, John Nash, Alan Turing, Jacob Bronowski and the philosopher and drug dealer Howard Marks, amongst others.
*If you want to know how the Google team did it, the five minute video is worth watching. And it gives an idea of the challenge of Go.
The AI has to work out its environment from its stimulus:
"Okay Dougal, one more time: This is very small. They are far away."
>Redmond's platform, due to be open-sourced this summer,
Wait all year for one open source Ai platform, and two come along at once.
Using a car analogy is like driving an old Volvo... it's a bit clumsy, contains more than you need it to, and people groan when they see it coming.
>Highlghts of that fiasco included the firewall blocking Google as a gambling site [Must have been the "I feel lucky" button!]
Haha! In the nineties there were lots of stories about the trouble caused by 'smut filters' returning false positives. A round-up of the best might be worthy of a Reg article!
I remember New Scientist reporting that Beaver College in Pennsylvania eventually changed its name because it fell foul of these smut filters. (Though this report suggests that 'smut filters' were just another straw on the camel's back: http://abcnews.go.com/US/story?id=94962&page=1 )
The trouble with filtering based on key words is our human tendency to make almost *any* word a slang term for something naughty!
>When DAB does drop out (very rare now) then FM will almost-seamlessly kick in -
For me, the only real reason to have DAB is to listen to a station that is not available on FM - i.e. Radio 6 Music. It is also a good station for reminding me that I do not already own all the music I would wish to, and that there is still so much good stuff out there (sometimes I can forget that).
Don't get me wrong, I'm not a DAB champion: my DAB receiver / FM transmitter is in a box somewhere, because DAB reception near me is too patchy, because I accidentally set fire to my van's 12v ('fag lighter') socket, because I often just stream spoken-word podcasts when driving, and because the SD cards in my car stereo hold plenty of music.
FM - low power, reliable. Long may it live. Let's have DAB by all means, but government noises about turning off FM are worrying.
>This sounds like a complete waste of a good Indian Pale Ale.
Well, there's plenty of rubbish Indian Pale Ale on the market at the moment, so just use that!
"Clear your diaries - we've just won a massive contract!" or "Make a note in your diary for the 27th June" are examples of how we often might use the term 'diary'. It also refers to a small pocket book that is divided into the days of the year.
We don't refer to a 'pocket calendar', and a calendar is usually thought of as a desk or wall-mounted collection of paper leaves. Single-sheet posters, often around A2 size, with a roughly 1" square for each day of the year, are often referred to as 'year planners'.
We will also keep a journal - keep a diary - in a 'diary', too - usually a blank or lined book.
Hope that helps.
>three miles beneath the ocean... How is that comparison useful to anyone?
Haha, maybe to the engineers of the Alvin submersible, perhaps!
>Edit: 2 grams of antimatter is a LOT and illustrates how much energy you need...
Indeed. E=mc^2, where E is energy, m is mass (in this example, 0.004 kg, since the 2 grams of antimatter would react with 2 grams of normal matter) and c, the speed of light in a vacuum, is a really, really big number. And it's squared, so you multiply by that really, really big number twice.
So, E = shitloads.
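Or, plugging the numbers in (a rough sketch; 4.184 GJ per ton of TNT is the standard equivalence):

```python
# E = m * c^2 for 2 g of antimatter annihilating with 2 g of matter.
m = 0.004                  # kg of matter + antimatter converted to energy
c = 299_792_458            # speed of light in m/s (exact, by definition)
energy = m * c ** 2        # joules

J_PER_TON_TNT = 4.184e9    # standard TNT-equivalence conversion
print(energy)                  # ~3.6e14 J
print(energy / J_PER_TON_TNT)  # ~86,000 tons of TNT
```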
It would be easier to parse your comment for how AMD is suitable - or superior - for certain use-cases or workloads if it wasn't written in such pejorative language. Good points should stand by themselves.
As for "Intel inside, idiot outside", well, non-idiots will know what their workload is, and where to find appropriate independent benchmarks - and then make up their own minds before buying.
>without price info, this story is a bit "so what?"
What's the point of the Reg posting the prices when they are liable to change? The article tells you how to find the Ubuntu laptops on the Dell website, and anyone who is about to drop hundreds of groats on a new machine will spend more than the minute it takes to check the price before making their purchasing decision.
>Sorry, DARPA. Those of us with a clue don't work on consumer goods anymore.
I'm sure they will be inconsolable.
Seriously though, it makes no odds to me if I'm blown up by explosives derived from fertiliser or by those from a military supply chain. The results are the same if the timer used is purpose-made, or constructed from a cheap digital wristwatch.
>I could just imagine the fun that the Monty Pyhon team would have had with 'Flying Toast as weapons of mass destruction'.
Spike Milligan had already beaten them to it with "The Jet-Propelled Guided NAAFI" episode of the Goon Show. (A NAAFI in this context was a canteen run by the Navy, Army and Air Force Institutes for the benefit of British military personnel.)
Good Heavens, Sir! It's a plan of a new Guided NAAFI! A self-contained missile capable of carrying eighty-two staff, ten NAAFI pianos, sixty thousand gallons of tea and twelve tons of buttered crumpets, being shot six thousand miles up and set fully operative at the point of impact in sixteen seconds. It sounds quite impossible.
The good thing about radio comedy is that the special effects budget is unlimited!
EDIT: Audio here: https://www.youtube.com/watch?v=rwSQ0CBQuA0 Enjoy!
>Apple make a battery case now don't they? They could try incorporating a sliding cover into that.
Apple could, but that would be tantamount to them admitting they don't trust their own software stack, in addition to any aesthetic and manufacturing considerations.
Users who want to block their camera can do so with nearly any case by just sliding a chewing gum wrapper in front of the camera :)
>It would actually be simpler for the very paranoid to have a case with a dummy connector to plug into the speaker/microphone jack socket.
The switch between internal mic and headset is managed by software - plugging in a jack doesn't disable the internal microphone hardware per se. If we're working on the assumption that the phone's software is compromised, then I wouldn't trust a jack to disable the internal mic. So Kevin, you're not being paranoid enough mate! :)
(This can be demonstrated, at least on Android phones, with an App called 'SoundAbout' which is handy for using headphones that confuse the phone - typically headsets designed for iPhones. You can force the audio routing to your will, as a workaround. )
Simple plan that one, but more likely to be implemented in a 3rd party iPhone case than by Apple themselves. Or in any number of currently existing iPhone cases and a slip of foil-backed paper.
I don't know, but I would imagine that a bugged microphone would be more useful to spooks than a bugged camera, anyhow - and that would be harder to muffle. I suppose you could have a phone case with a small speaker, playing pseudo-random gibberish (or Radio 2's Jeremy Vine show - same difference) into the phone's microphones.
>OxfordDictionaries = Oxford dictionaries
No, it is Oxford Dictionaries. It is the name of an organisation, and thus a proper noun. Similarly, we have British Broadcasting Corporation, BBC, and not Bbc.
Without the space, OxfordDictionaries suggests to most people here that it is probably a website, and the capitalised D aids legibility.
Windows Mobile supported Bluetooth Low Energy before Android did, though some Android handsets had the capable hardware at the time.
>Every new Android version I hope for a saner update mechanism,
Manage your expectations, dotdavid!
There are some technical reasons that date back to Android 1.0 why Android updates can't be made saner without a bit of an upheaval (think of something akin to OSX moving from PowerPC to Intel). Google have nailed updates with ChromeOS, and it's possible that the two OSs might converge in future.
From the Perlan website:
- Airbus is providing consulting on carbon fiber manufacturing quality.
- Airbus is providing the Perlan Project with critical consultations on aircraft and systems reliability.
So it appears that Airbus aren't making any parts themselves, but providing advice - and money.
>I'm sure they predate the invention of the idea of discrimination.
Haha, I doubt that! In some medieval countries, no left-handed (sinister) man could become a knight. This influenced the chirality of spiral staircases in castles - defending knights, fending off ascending attackers, had an advantage because they had more room to swing a sword with their right hand.
Or do you mean the idea of trying not to discriminate? : )
It can be. There are a few form factors, to suit a few use cases.
The one that appeals to me is Lenovo's Yoga ('Tent') laptop, where the keyboard can be folded back - it would suit me when I want to watch video. If I needed a new laptop, the MS Surface Book also appeals, because of its screen's aspect ratio, stylus digitiser and discrete GPU more than its convertible nature. Hopefully other manufacturers will make competing machines before my currently fit-for-purpose laptop dies.
'Tablet' used to mean a seldom-seen Intel/WinXP paving slab. Later it was usually taken to mean a lighter weight ARM/*NIXish machine. Now it can be a few things.
>I like towers for the ease of being able to tinker around inside for upgrades but even good laptops are easier to get into now if they are not fruity.
Agreed, though I haven't had the need to upgrade the internals of my laptop (or even think about buying a new one). I'm not really a PC gamer, nor a heavy video editor, so my laptop that was upper-middle spec five years ago is still fit for the 3D CAD and Photoshop I throw at it occasionally today.
If I want to treat myself to some more RAM and an SSD one day, then yeah, it's nice that I can open the laptop up and fit some. In reality though, the need for upgrades is less pressing than once it was, back in the days when it seemed hardware struggled to run the software.
I've spent ten minutes on the internet trying to find a concise clarification of what Bezos is on about. No real joy. Some snippets, though:
- The BE-4 is about 3 times more powerful than the Merlin that SpaceX currently use.
- The BE-4 could be used in conjunction with existing NASA rocket stages
- SpaceX have been working on the Raptor, a motor of equivalent power to the BE-4. It is tied up with Musk's plans for Mars, such as possibly running on methane which could be sourced on the Red Planet.
- The BE-4 is further along the development path than the Raptor.
Found a list of controls for Sam Cruise here:
I get stuck on the above browser version when I answer the in-game telephone. Oh well!
...I thought the game being played was How to be a Complete Bastard, and thought it fitting for the Reg.
Alas, on closer inspection it turned out not to be.
If your graphics department (hahaha) wants to do a quick Photoshop on it, you can find screen shots here:
...bubbling up through my brain, but it isn't there yet. In the meantime, I'm just thinking:
'Cause I got a brand new combine harvester and I'll give you the key
Come on now, let's get together
In perfect harmony
I got 20 acres and you got 43
Now I got a brand new combine harvester and I'll give you the key.
That, and also Homer Simpson's garage, full of tools and lawnmowers marked 'Property of Ned Flanders'.
When Cortana was introduced, MS made play of the fact that the privacy settings were under granular user control - in a bid to differentiate themselves from Android.
I don't know what the policy is now.
Hehe, nice one! Have an upvote, Known Hero!
With Hololens, MS apps will run on all devices, even when they are turned off!
As a meatbag, I have various motivations. These motivations are generally geared to preserving my life (food: yum. High fat and sugar food: yummier! High cliff: scary. Snakes: Avoid!) and passing on genes and caring for people who share those genes - in environments similar to those my ancestors lived in. Some of these motivations of mine are now not optimal for the selection pressure that led to them (easy example: donating my sperm to a bank would be a low cost, low risk way of passing on my genes, but I haven't the instincts to do that in the same way I feel sexual attraction and the urge to find a mate).
So, what would 'Skynet's' motivation be? And what is the difference between a motivation and programming? An AI might be programmed to be self-preserving - that would make some sort of sense for a military command-and-control system which might be under attack. An AI footsoldier might be programmed so that its own self-preservation is secondary to taking orders (or even its own tactical reading of a situation, where its own sacrifice buys an advantage for its allies).
The barge coms are fit for purpose: telemetry data is prioritised over the video.
Lots of lovely high-res video is then retrieved from the barge a day later, along with bits of returned rocket, to help engineers improve upon their efforts. Your enjoyment of these videos is merely a happy side effect. This is how they have always done it.
SpaceX deliver payloads to space for money: they are not an entertainment company. If they were, and you JeffyPoooh had paid for a pay-per-view event, then yes you would have grounds to carp. But you are not, so please leave it alone.
When courting paying customers, SpaceX have numbers on their side. Whilst I'm sure that the employees of SpaceX like having a generally positive public image, it is not their core business.
>I believe they do have footage, but it runs on a delay loop, because they only want to show success.
Believe what you want.
Meanwhile, SpaceX have provided footage of their past failed landings.
On their last attempt, which resulted in an explosion, Musk tweeted that it wouldn't be their last RUD (Rapid Unplanned Disassembly). On this attempt to land, Musk said they were not expecting a successful landing (because of the amount of fuel required to get the satellite to its orbit).
>Pictures of detonating rockets is bad for business,
The customers need to get satellites into orbit. They only have a few suppliers to choose from. They do their due diligence, weighing up a lot of factors, and bash out contracts with insurance clauses. i.e. it is not an emotional decision that would be influenced by a picture.
SpaceX have had one rocket explode on the way up (destroying its customer's payload), but all their landing attempts have been made after doing the job they were paid to do.
>When researchers say intelligence what exactly do they mean?
Presumably, the ability to make actions that are in its tactical and strategic advantage. To a human, 'advantage' would mean a continued, happy existence, but what 'advantage' would mean to an AI is harder to define.
>Human intelligence doesn't reside in individual brains, it resides in external memory
That's knowledge, not intelligence. For sure, intelligence was used to assemble said knowledge, but actual intelligence it isn't. In familiar situations, though, we sometimes use one term instead of the other.
>have blown the doors off the old evolutionary limitations of a single brain with no external storage
We can't compose a single 'intelligence' from multiple human brains that can react in real time. The 'bus speed' (language, verbal and written) between 'processing nodes' (human minds) is incredibly slow.
>taking part in a public debate about something potentially dangerous that doesn't exist yet
Prevention is better than cure
I have an old Power Mac G3 (the grey plastic one, before they went 'Cheesegrater' aluminium) that could be similarly re-worked as an aquarium. Or vivarium, if lizards are your thing.
Or Google, or Apple. In very different ways. It depends on what you want.
Google sell you low-priced hardware to sell you video content. It could be used with a phone and TV (beta) for office-like tasks with keyboard and mouse.
Apple, the above but pricier.
Google: use services (email, document creation / sharing) across platforms: PC, Mac, *nix... Android, iOS.
Apple: Doesn't matter, above applies (for GMail users). Or: Continuity, their iOS/OSX integration.
The Ubuntu/MS concepts just seem to be based around re-purposing a phone's CPU. But why? Just buy another CPU in a stick, they are not that pricey.
Hi bombastic bob!
Thank you for making a reasoned argument (though capitals are hard to parse!). You mentioned Apple as having not followed the same path as MS and Ubuntu... they would rather sell you additional hardware. As such, they have provided a software solution that (in concept at least, I haven't used it myself) is sensible: open documents on your iPhone are open when you turn on your Mac. Straightforward enough, I reckon.
A small point: Canonical have been advocating Desktop/Mobile OS on a phone for longer than MS have (although in reality, both organisations would have been exploring the concept long before any public announcement).
Apple aside, my experience of Android+Chromecast (i.e. the same as iOS/Android + Chromecast/Playstation/whatever) informs my opinion here... attempts to reuse a phone's CPU are more effort than they are worth.
It seems that the main advantage of Ubuntu's idea is seamless access to work-in-progress documents, but that could be done through software. You don't want your data on just one device anyway (loss, damage, failure), and the same mechanisms that make backing-up easier can also make data accessible to multiple devices.
I remain unconvinced (but I actively welcome reasoned persuasion by you guys!) by the idea of having a Desktop OS put in a phone and the phone connected to a screen, especially when the cost of an SoC to run a Desktop OS on a TV is low (compared to a generic Snapdragon 8xx 2GB RAM 5" Android phone). A 'PC on a Stick' isn't going to take up much space in a kit bag, especially when compared to a keyboard or wireless mouse.
I have usability concerns, too (i.e. plugging a phone into a TV, then unplugging it when someone rings or you want to leave the room for ten minutes).
Also, redundancy concerns: if you lose your phone, you can still use the 'PC on a Stick' to contact friends and colleagues. Vice versa if your 'PC on a Stick' goes 'poooft!', or a crumb gets in your keyboard and stops the spacebar from working.
So: Convince me, guys! :)
>a desktop and a couple of laptops, all of which have their own storage, and setting them up to talk seamlessly to each other isn't trivial.
Hi AndyS. I thought I'd ask you, since you actually live this scenario. Is there any way you could envisage the above inconvenience being fixed by software? Your wording suggests that it is *possible*, but a bit of a pain in the neck to configure/maintain.