* Posts by LionelB

227 posts • joined 9 Jul 2009

Software update turned my display and mouse upside-down, says user

LionelB
Bronze badge

Re: Every day's a school day

One thing I don't get about this: do these people have five thumbs and one opposable finger?

7
0

Calm down, Elon. Deep learning won't make AI generally intelligent

LionelB
Bronze badge

Re: Bishop Bollocks

@Rebel Science

I could go on but then I would barf out my lunch.

I think you just did...

3
0
LionelB
Bronze badge

Re: The more I study AI the more it looks like consciousness is essential to it.

It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.

Having worked a little in robotics, it turns out that that's a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) do it. The idea that, to interact successfully with the world, an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.

In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.

0
0
LionelB
Bronze badge

... and since chaotic systems have so far defeated mathematical modelling ...

Errm, no they haven't. Here's one I made earlier:

x → 4x(1–x)

That's the "logistic map". Here's another:

x' = s(y-x)

y' = x(r-z)-y

z' = xy-bz

That's the famous Lorenz system, which has chaotic solutions for some parameter values. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
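If you want to play along at home, here's a minimal Python sketch (mine, nothing fancy) of the logistic map's sensitive dependence on initial conditions - start two trajectories a hair's breadth apart and watch them part company:

# Iterate the logistic map x -> 4x(1-x) from two nearby starting points
# to show sensitive dependence on initial conditions - the hallmark of chaos.

def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two initial conditions, 1e-10 apart
for n in range(60):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # after ~60 iterations the trajectories have fully diverged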

2
0
LionelB
Bronze badge

Re: "AI is more artificial idiot than artificial intelligence"

Because in any reasonable definition of AI ...

Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?

Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.

2
0

Shock! Hackers for medieval caliphate are terrible coders

LionelB
Bronze badge

Re: People who want to kill other people for stupid sky fairy reasons are not clever

Jihads, Crusades, Intifadas - they're all the same.

Not quite: Intifada was, in its original meaning, a political term with connotations of "rebellion against oppression" (the first Intifada was a socialist protest against the monarchy in Iraq). Of course it is now more strongly associated with the Palestinian struggle against Israeli occupation - which may or may not (depending on who you are talking to, and when) have been hijacked by religious extremists.

Agreed on the others, though.

8
0
LionelB
Bronze badge

Re: C'mon, ElReg.

The particular sort of barbarism practised by Daesh doesn't respect even those rules, even if they were often evident more in the breach than the observance.

This is an entirely deliberate strategy. You need to appreciate their motives: they are a doomsday sect. They believe that the global Caliphate will arise only after an apocalyptic showdown between Islam and the non-believers. Their avowed intention is to evoke the highest levels of disgust and abhorrence in order to hasten that showdown.

8
0

AI in Medicine? It's back to the future, Dr Watson

LionelB
Bronze badge

Re: Experience and subtle clues

Why? Because Drs and nurses say, hmm I have seen something like this before, and short circuit the differential diagnosis. Try encoding that in an expert system !

That would be easy to encode in an expert system - if the system designers were able to pin down what "something like this" actually meant. And that's the Achilles' heel of expert systems: identifying (and encoding) the explosion of edge cases and hard-to-articulate intuitions that constitutes the deep knowledge of an experienced human expert. This is why knowledge-based systems hit a brick wall. We've known about this for a long time.

3
0

'Don't Google Google, Googling Google is wrong', says Google

LionelB
Bronze badge

Re: re: Contacting someone implies you were successful;...

My personal favourite IndEng word is "prepone", meaning "to bring forward in time", by analogy with "postpone".

Mine is "doubt" to express a misunderstanding. As an academic I am sometimes contacted by Indian students/researchers expressing a "doubt" about some aspect of my published work. The first few times this happened I thought they were being a bit cheeky, until I twigged that they were just seeking clarification.

1
0

Boffins fear we might be running out of ideas

LionelB
Bronze badge

Re: "They're all people who, in past times, would have been doing something more useful."

... Thomas Edison is known for having electrocuted elephants ...

How is that not useful?

0
0
LionelB
Bronze badge

Re: Semiconductors are getting hard to fill

Babel fish are worth a punt. If everyone understood each other it would improve not only research but a lot of other aspects of commerce.

Actually, in research and academia language is hardly an issue; English is already de facto the lingua franca of science.

2
0

Fruit flies' brains at work: Decision-making? They use their eyes

LionelB
Bronze badge

Re: eyes as brains ...

Not sure about fruit flies, but I have a colleague who studies genetically-modified zebra fish (same technology - calcium imaging). Young zebra fish are almost completely transparent, so you can image their entire brain/neural system in one shot. It's pretty impressive watching screeds of individual neurons (~ 10,000) flickering away in real time.

Turns out that there are more neurons in the zebra fish visual system than in the rest of the brain/nervous system in its entirety. Seeing well is pretty damn important to those critters - wouldn't surprise me if fruit flies were similar in that respect (although their visual system is very different).

0
0

Climate-change skeptic lined up to run NASA in this Trump timeline

LionelB
Bronze badge

Re: Belief has nothing to do with it: The fundamental difference between religion and science

The problem is that the models are continually getting it wrong.

"All models are wrong, but some are useful" - George Box

"The best material model for a cat is another cat, or preferably the same cat" - Arturo Rosenblueth

Non-scientists routinely misunderstand the purpose and utility of models in science. Here's a famous example of an exceptionally useful - but completely "wrong" - model: the Ising model for ferromagnetism. When a ferromagnet is heated to a specific temperature (the Curie point), it abruptly de-magnetises. This is a classical phase transition (like the boiling of water, etc.). The Ising model was proposed by Wilhelm Lenz in 1920 and analysed by his student Ernst Ising in 1924, in an attempt to understand the ferromagnetic phase transition (phase transitions were poorly understood at the time).

It is elegant, abstract, and - as a model for ferromagnetism - completely wrong. It's absolute rubbish. It's childishly simplistic. Real ferromagnets are nothing like the Ising model - they're way more complex in structure and (quantum) electrodynamics. But here's the strange thing: the Ising model completely nails the ferromagnetic phase transition. It describes the behaviour of the relevant physical quantities near the Curie point astoundingly well.

The Ising model (finally solved analytically, in two dimensions, by Lars Onsager in 1944) subsequently became the "fruit fly" of the physics of phase transitions. It's probably not far off the mark to say that almost everything we know about phase transitions (and we now know a lot) is rooted in studying the Ising model. It is one of the most elegant, successful and influential models in the history of science.

It's instructive to consider just why the Ising model is in fact so successful. It turns out that, in general, phase transitions fall into distinct "universality classes": that is, many apparently completely different physical phenomena which demonstrate phase transitions turn out to behave in identical, stereotyped ways near their critical point - they may be described, not just qualitatively but quantitatively, by the same mathematics. (This is a rather deep discovery, which stems from studying - you guessed it - the Ising model.)

So the Ising model didn't have to be "correct", or even "accurate" (it's not). It just had to nail the one phenomenon it was intended to model. It abstracts the problem. That is what useful models do - that's what they're for.
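For the morbidly curious, here's a toy Python sketch of the 2D Ising model with the standard Metropolis update (mine, and deliberately bare-bones): spins are +/-1 on a small lattice, and near Onsager's critical temperature (about 2.269 in units where J = k_B = 1) the magnetisation collapses - the phase transition.

# Toy Metropolis simulation of the 2D Ising model: single-spin flips
# accepted with the standard rule, periodic boundary conditions.

import random, math

N, T = 20, 2.0                       # lattice size and temperature (T < T_c here)
spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def flip_energy(i, j):
    # Energy change from flipping spin (i, j), with J = 1.
    s = spins[i][j]
    nb = spins[(i+1) % N][j] + spins[(i-1) % N][j] \
       + spins[i][(j+1) % N] + spins[i][(j-1) % N]
    return 2.0 * s * nb

for step in range(1000 * N * N):     # ~1000 full lattice sweeps
    i, j = random.randrange(N), random.randrange(N)
    dE = flip_energy(i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1

m = abs(sum(map(sum, spins))) / (N * N)
print(f"T={T}: |magnetisation| ~ {m:.2f}")   # ~1 well below T_c, ~0 well above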

In climate science, as in any other science, that is how we should view models: not as "right" or "wrong" ("another cat"), but as useful in abstracting and pinpointing the crucial aspects of the phenomenon we wish to understand.

1
0
LionelB
Bronze badge

Re: I don't mind a skeptic. I prefer a skeptic in this position

BB, your misunderstanding of science is quite extraordinary.

0
0

Not another Linux desktop! Robots cross the Uncanny Valley

LionelB
Bronze badge

One thing the article might have mentioned (although I'm really not sure what to make of it) is that Hiroshi Ishiguro, due to the effects of ageing, had cosmetic surgery to make him look more like his android. He apparently claims it was more cost-effective than updating the android.

7
0

In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it

LionelB
Bronze badge

Re: So it's Core War played with "real" virtual processors between machines

The joker of course is have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces.

That's an excellent point, and one which you can be sure is not lost on the designers of this system (or adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
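Off the top of my head, the "don't forget" part could look something like this Python sketch (my illustration, not anything the researchers have published) - an ever-growing archive of generated malware samples that gets mixed back into every retraining batch:

import random

class MalwareReplayArchive:
    """Hypothetical replay archive: keeps every adversarial sample ever
    generated, so the detector is continually re-exposed to old attacks."""

    def __init__(self):
        self.samples = []            # all adversarial samples generated so far

    def add(self, sample, label=1):  # label 1 = malware
        self.samples.append((sample, label))

    def training_batch(self, fresh_batch, replay_fraction=0.5):
        # Mix fresh data with a random draw from the archive, so retraining
        # on new evasions never crowds out the earlier ones.
        k = int(len(fresh_batch) * replay_fraction)
        replayed = random.sample(self.samples, min(k, len(self.samples)))
        return fresh_batch + replayed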

1
0
LionelB
Bronze badge

Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.

If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.

So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.

0
0
LionelB
Bronze badge

Re: Scary

Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.

More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).

0
0

Machine 1, Man 0: AlphaGo slams world's best Go player in the first round

LionelB
Bronze badge

Re: Newsflash

Just don't call it "AI" until it can design a game like Go by itself. From scratch.

You can do that? Hats off, sir/madam.

0
0
LionelB
Bronze badge

Re: And how much hardware it takes....

To be fair, human brains throw massively more hardware at doing ... just about anything than any computer system in existence. Plus they have had the benefit of aeons of evolutionary time to hone their algorithms and heuristics.

Looked at that way, it hardly seems like a fair contest.

0
0
LionelB
Bronze badge

Re: Cracked and good PR

However it's not AI

According to ... what/whose definition of AI? (not a rhetorical question).

Can the system play any other game not programmed in?

I play a fair game of chess, but am absolutely rubbish at Go. Never had the time or motivation to program it in.

Did the computer "learn" the previous matches? No, they were loaded into a database.

Correction: it learned from previous matches. Perhaps those matches were "loaded from a database" during the training phase. I used to load matches from databases for training during my chess days - we called them "chess books" back then.

Hint: why not find out how AlphaGo really works.

0
0

74 countries hit by NSA-powered WannaCrypt ransomware backdoor: Emergency fixes emitted by Microsoft for WinXP+

LionelB
Bronze badge
Facepalm

Re: Risk Management

Of the >140,000 million NHS yearly budget, only about 40,000 million is available for things like buying drugs, new hospitals, MRI scanners and desktop refreshes. The rest goes on wages. That's a political failure.

Yeah right, why should we pay people to do this stuff?

12
0

Take a sneak peek at Google's Android replacement, Fuchsia

LionelB
Bronze badge

Re: Old joke

It was named after a German botanist, so no, it would be closer to "Fook-sia", or "Fooch-sia" with the ch similar to that in the Scottish "loch".

Sorry, no "spoilsport" icon.

0
0

Linux homes for Ubuntu Unity orphans: Minty Cinnamon, GNOME or Ubuntu, mate?

LionelB
Bronze badge

Re: windows manager choice

FWIW, Fluxbox (which has been my WM of choice for a decade) is still under development - albeit at a somewhat leisurely pace. It knows what it is, and is comfortable to stay that way - which suits me fine.

3
0

Apple fanbois are officially sheeple. Yes, you heard. Deal with it

LionelB
Bronze badge

Re: "the grammar is relatively simple"

What would you consider a language with complicated grammar then?

Basque, Finnish, Navajo, Adyghe, Abkhaz, Korean, Icelandic, Thai, ...

English grammar is in fact pretty simple compared even with its Latinate and Germanic progenitors. I recall, when learning Spanish, that the trickiest things to get to grips with were the imperfect past tense (which doesn't exist in English) and the subjunctive mood (which English has almost lost).

Now Afrikaans - there's a really, really simple grammar.

0
0

Facebook decides fake news isn't crazy after all. It's now a real problem

LionelB
Bronze badge
Pint

Re: "I would trust Mark on this," de Alfaro said in an email to The Register.

More specifically, he called them dumb f**ks for trusting him with their data - jokingly, perhaps, but in the context you have to say he had a point.

(Beer icon because it's 5.00 ... somewhere. Here, in fact. Now.)

1
0
LionelB
Bronze badge

Re: FB wants to go the same way as the MSM

Now, I only look at the MSM to learn what is the latest lie that they are propagating or which piece of information they are trying not to pass on to the general public.

Out of interest, having rejected mainstream media as a source of reliable information, what are your alternative sources of information, and how sure can you be that they are any more reliable than the mainstream media?

1
0

Shock horror: US military sticks jump leads on human brains to teach them a lesson

LionelB
Bronze badge

"Deep brain stimulation" is already a thing, and an active area of research, particularly for the treatment of severe epilepsy, Parkinson's disease, depression and Tourette Syndrome.

The article should surely have mentioned this.

3
0

A bot lingua franca does not exist: Your machine-learning options for walking the talk

LionelB
Bronze badge

Re: WTF is this crap

A group of cells in any brain learns to manage all its systems in the body and learns to balance itself so as not to destroy itself as things change from conception. We can do this too guys!

How? The Nobel committee is waiting to hear from you.

0
0

Hasta la Windows Vista, baby! It's now officially dead – good riddance

LionelB
Bronze badge

Re: Oh alright...

Hasta la Vista

You do realise that that translates roughly as "until we meet again"?

Eek.

0
0
LionelB
Bronze badge

Re: Embiggen

Big: adjective. Verb forms: embiggen, bignify. Adverbial form: bigly. Abstract noun: bignation.

4
0
LionelB
Bronze badge

Re: @LionelB Vista Capable

Yes, agreed - and apologies (and an upvote): it wasn't clear to me what you were getting at.

1
0

Riddle of cannibal black hole pairs solved ... nearly: Astroboffins explain all to El Reg

LionelB
Bronze badge
Coat

Re: They solved nothing. Just fairy tales.

I expect you're also one of those people who refuses to believe that any historical event took place unless you were there to see it with your own eyes.

... while trees fall silently in deserted forests ...

(Couldn't be arsed to go full haiku.)

0
0
LionelB
Bronze badge

Re: They solved nothing. Just fairy tales.

Are you sure your arse is different from your elbow? Better send a probe up there.

You don't really get science, do you?

13
0

Google's video recognition AI is trivially trollable

LionelB
Bronze badge

Re: Is it a bug?

@DropBear

LionelB wrote earlier:

That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet.

IOW, I don't entirely* disagree with you. I just thought your analogy was crap.

*OTOH, I don't think "real" AI (whatever that means) is unattainable - always a duff idea to second-guess the future (cf. previous unattainables, like heavier-than-air flight, or putting humans on the moon). Basically, I don't believe that there are magic pixies involved in natural (human) intelligence.

0
0
LionelB
Bronze badge

Re: Is it a bug?

No, you have to go back, create test cases for every imaginable scenario, ...

Sorry, no. You have a total misconception as to how machine learning in general, and "deep-learning" (a.k.a. multi-layer, usually feed-forward) networks in particular, function. You seem to have latched onto the bogus idea that a machine-learning algorithm needs to have "seen" every conceivable input it might encounter in order to be able to classify it.

In reality, the entire point of machine-learning algorithms is to be able to generalise to inputs they haven't encountered in training. The art (and it's not quite a science, although some aspects of the process are well-understood) of good machine-learning design is to tread the line between poor generalisation (a.k.a. "overfitting" the training data) and poor classification ability (a.k.a. "underfitting" the training data).

It's a hard problem - and while the more (and more varied) the training data and time/computing resources available, the better performance you can expect, I'd be the last person to claim that deep-learning is going to crack general AI. Far from it. But it can be rather good at domain-specific problems, and as such I suspect will become a useful building-block of more sophisticated and multi-faceted systems of the future.
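To make the over/underfitting trade-off concrete, here's a throwaway Python/numpy illustration (mine, not from the article) - fit polynomials of increasing degree to noisy samples of a sine wave and watch the held-out error:

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 4, 15):
    # numpy may grumble about conditioning at high degree - that's rather the point
    coeffs = np.polyfit(x_train, y_train, degree)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: held-out error {test_err:.3f}")

# Typically degree 1 does badly (underfit), degree 15 does badly (overfit),
# and something in between wins on the held-out data.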

After all, a rather striking (if comparatively minor) and highly domain-specific aspect of human intelligence is our astounding facial recognition abilities. But then we have the benefit of millions of years of evolutionary "algorithm design" behind those abilities.

4
1
LionelB
Bronze badge

Re: "why the algorithm..such a heavy weighting on..only 2% of the footage."

@John Smith 19

Yes, deep-learning networks (usually) are just multi-layer networks - but that doesn't imply that "people could actually work out how they work". It's notoriously hard to figure out the logic (in a form comprehensible to human intuition) of how multi-layer networks arrive at an output. I believe the so-called "deep-dreaming" networks were originally devised as an aid to understanding how multi-layer convolutional networks classify images, roughly by "running them in reverse" (yes, I know it's not quite as straightforward as that).
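For a flavour of the "reverse" idea, here's a toy Python sketch (mine, and nothing like DeepDream's actual pipeline): freeze the weights of a tiny network and do gradient ascent on the *input* to find the pattern that most excites a chosen unit - a crude window into what the unit "looks for":

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))        # fixed "trained" weights: 64 inputs -> 10 units
x = rng.normal(size=64) * 0.01       # start from a near-blank input
unit = 3                             # which output unit to visualise

for step in range(200):
    activation = np.tanh(W @ x)      # forward pass
    # d(activation[unit])/dx = (1 - tanh^2) * W[unit]
    grad = (1 - activation[unit] ** 2) * W[unit]
    x += 0.1 * grad                  # ascend the unit's activation
    x /= max(1.0, np.linalg.norm(x)) # keep the input bounded

print("unit activation after ascent:", np.tanh(W @ x)[unit])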

2
1
LionelB
Bronze badge
FAIL

Re: Is it a bug?

So your reply to "homoeopathy is not medicine" is "write a new treatise on it and make it better!" Yup, got it.

Sorry, but that's a fantastically lame "analogy".

12
1
LionelB
Bronze badge

Re: Is it a bug?

If you have to insert an explicit rule, it's not AI. It's a human-written heuristic.

You might well argue, though, that natural (human) intelligence is a massive mish-mash of heuristics, learning algorithms and expedient hacks assembled and honed over evolutionary time scales.

That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet. And of course it's hyped - what isn't? Get over it, and maybe even appreciate the incremental advances. Or better still, get involved and make it better. Sneering never got anything done.

8
1
LionelB
Bronze badge

Re: Is it a bug?

If it was a tiny bug it could be fixed.

What makes you so sure it can't be fixed? FWIW, I suspect it is probably not a "tiny" bug, but may not actually be that hard to fix (off the top of my head I can imagine, for example, a training regime which omits random frames, or perhaps a sub-system which recognises highly uncharacteristic frames, which might mitigate the problem).
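The frame-omission idea, as a Python doodle (purely my speculation, not anything Google does):

import random

def drop_random_frames(frames, drop_prob=0.1):
    """Training-time augmentation: omit each frame independently with
    probability drop_prob (keeping at least one frame), so a single
    injected rogue frame can't dominate the classification."""
    kept = [f for f in frames if random.random() > drop_prob]
    return kept or [random.choice(frames)]

# Usage sketch: clip = drop_random_frames(clip) before each training pass.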

This research may well turn out to be rather useful to Google (although I'd also be slightly surprised if they weren't aware of something similar already).

3
0
LionelB
Bronze badge

It's just taking a statistic generated from the data and finding the nearest data point to that statistic in its database and then returning it.

No, it's not doing anything like that. Please find out how deep-learning systems actually work before posting fatuous comments.

At best, it's an "algorithm".

Yes, of course it is. Computers run algorithms - that's what they do. Whether you call it an "AI algorithm" (I wouldn't) or a "pattern recognition algorithm" (I might) is a matter of what you think those terms ought to mean.

15
5

Boffins crowdsource hunt for 'Planet 9'

LionelB
Bronze badge

Re: why the search isn't using artificial intelligence.

There is no "general" AI ...

True, and likely to remain so for the foreseeable.

... and successful specific AI is large data sets with human curated rules

Not so true: successful domain-specific AI these days tends to be human-designed learning algorithms trained on large data sets - really not the same thing. With human-curated rule-based systems you can generally trace precisely how the system arrives at an output. With modern machine learning systems (particularly neural network-based ones) you generally cannot - the internal logic of the trained system (as distinct from the learning algorithm underlying training) is inscrutable. Call it pattern-recognition, if you like, but these are not hard-coded rule-based systems. They tend to be better at generalisation (within their domain).

Human-curated rule-based systems are still around to some extent (very 1980s), but machine learning (I won't call it AI, if it pleases) has moved on.
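To make the traceability contrast concrete, a deliberately silly Python sketch (mine): the rule-based classifier's decision can be traced to the rule that fired; the learner's "decision" is just the geometry of whatever data it was fed:

def rule_based(x):
    # Traceable: you can point at exactly which rule fired.
    if x["size"] > 100:
        return "planet"
    return "asteroid"

def nearest_neighbour(x, training_data):
    # Inscrutable in the rule sense: the answer depends on the whole data set.
    nearest = min(training_data, key=lambda t: abs(t["size"] - x["size"]))
    return nearest["label"]

data = [{"size": 5, "label": "asteroid"}, {"size": 500, "label": "planet"}]
print(rule_based({"size": 120}), nearest_neighbour({"size": 120}, data))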

0
0
LionelB
Bronze badge

Re: "...you'll have to find someone else to name it after"

I kind of like "Kevin". No reason.

0
0

Speaking in Tech: Elon Musk and the AI apocalypse

LionelB
Bronze badge

Re: Vegemite - FTW!

Vegemite > Marmite > Bovril

where ">" = "better than".

Note: I am of South African origin, and therefore strictly neutral on this issue. I am less neutral (but no less correct) on:

biltong >>>>>>> beef jerky

1
0

Boffins give 'D.TRUMP' an AI injection

LionelB
Bronze badge

Re: “Inbreeding leads to harmful mutations” is a lay understanding of genetics.

OTOH, those mutations may be beneficial, in which case inbreeding can be good.

Yes... but it may be hard to avoid (or even detect) other potentially harmful mutations hitching a ride.

In the long run, repeated inbreeding has the effect of reducing the set of available alleles in the inbred population for any particular gene, which is generally not a good thing. Hence, e.g., the robustness of mongrels in comparison with "pure-bred" dogs.
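If you fancy seeing the allele-loss effect for yourself, here's a toy Python simulation (my sketch - pure drift in a small closed population, no selection at all):

import random

# 30 diploid individuals, each carrying two alleles drawn from 10 variants.
pop = [[random.randrange(10), random.randrange(10)] for _ in range(30)]

for generation in range(200):
    # Each child takes one random allele from each of two random parents.
    pop = [[random.choice(random.choice(pop)),
            random.choice(random.choice(pop))]
           for _ in range(30)]

distinct = {a for individual in pop for a in individual}
print(f"alleles remaining after 200 generations: {len(distinct)}")  # usually far fewer than 10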

1
0

Astroboffins stunned by biggest brown dwarf ever seen – just a hop and a skip away (750 ly)

LionelB
Bronze badge

Re: Am I a Bad Person?

the organisation is using this discovery to try to obtain more funding?

Well, duh, of course they are. You need funding to do science.

(And they are almost certainly right - not many unique objects in the universe, because the universe is rather big and full of stuff.)

0
0
LionelB
Bronze badge

Re: It's quite a small object

Well, the ether and phlogiston were, in their day, about the most plausible theories going for explaining the physical evidence as it stood. Sure, they turned out to be wrong, but establishing how a theory is wrong can be an excellent way of homing in on a better theory. To take an example, the Michelson-Morley experiment - which finally did for the ether - forced physicists (like Lorentz and Einstein) to develop new theories to account for the perplexing new evidence.

Getting stuff wrong in science is both commonplace and highly underrated. Better bad theory than no theory at all.

7
0

Carnegie-Mellon Uni emits 'don't be stupid' list for C++ developers

LionelB
Bronze badge

This is no mere listicle: it weighs in at 435 pages

TL;DR

2
0

This AI stuff is all talk! Bots invent their own language to natter away behind humans' backs

LionelB
Bronze badge

Re: Today's "AI" is brute force stupidity

@DougS

Your conception of how systems like AlphaGo learn is way, way off the mark.

Famously, Google DeepMind's system learned how to play a range of Atari games from an input of raw pixels; in other words, it figured out how to play the games just by "watching" them. Link to original paper here.

0
0
