I think I see a new gameshow format here
"XXXXX is very bright, very fast, and as you saw, he has some weird little moments,"
Just fill in the blank.
Man has drawn with machine in round one of the much-hyped showdown between two wetware Jeopardy! champions and IBM's Watson supercomputer. In the early going, it wasn't looking good for the humans of the world, as IBM's machine ripped through the easy questions and took a hefty lead over Ken Jennings and Brad Rutter in the …
It got real scary when Watson hit the Daily Double and confidently wagered $1,000 in the first few minutes. From then on, the half-hour show buzzed by at light speed as Watson zinged in answer after answer, jumping from category to category.
What amused me was that between plays, the IBM infomercials about the Watson project showed one of its handlers/programmers using an Apple MacBook. I guess a Lenovo ThinkPad would have been out of the question.
HAL is alive and well.
Be very afraid.
As is usual, the errors Watson makes are more informative and interesting than the correct answers; they seem so weird and off the wall that no human could make them. Better, we get the bonus of seeing Watson's top three contending answers in a little box so we can compare, and all the second and third answers are completely off the reservation; if the correct answer is the name of a person, Watson's second place answer might be the name of a city, and the third the name of the sport the person excels in. Whatever is going on here resembles human understanding very little. Without severe brain damage, nobody would answer "Who's that athlete with all the bimbos?" with "Golf".
I frequently forget names and have to make other associations to kickstart my memory. If you asked "Who's that athlete with all the bimbos?" and I couldn't remember the name, I'd probably say to myself "the golfer... black American... Norwegian wife... earns shitloads... silly name... tiger... Tiger Woods."
Watson is bringing up information in a very similar way, but discounting information that doesn't match the question. Taking the same example, it knows that 'golf' isn't a valid answer because it is looking for a name.
As a last resort, if neither the human nor the computer knew the correct answer, they could at least both come up with 'the golfer'.
I think Watson's programming is closer to 'artificial intelligence' than most computers/software around, and that's why it is interesting.
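The type-filtering idea described above can be sketched in a few lines. This is an invented illustration, not Watson's actual pipeline; the candidate answers, types, and scores are all made up for the 'golfer' example.

```python
# Hypothetical sketch: rank candidate answers by association strength,
# then discount any whose type doesn't match what the clue asks for.

def best_answer(candidates, expected_type):
    """Return the highest-scoring candidate matching the expected type."""
    typed = [c for c in candidates if c["type"] == expected_type]
    pool = typed if typed else candidates  # fall back if nothing matches
    return max(pool, key=lambda c: c["score"])

# Invented candidate list for "Who's that athlete with all the bimbos?"
candidates = [
    {"answer": "Tiger Woods", "type": "person", "score": 0.72},
    {"answer": "golf",        "type": "sport",  "score": 0.55},
    {"answer": "Augusta",     "type": "city",   "score": 0.31},
]

print(best_answer(candidates, "person")["answer"])  # Tiger Woods
```

The fallback branch mirrors the 'last resort' point above: if nothing matches the expected type, both human and machine would at least be left holding 'the golfer'.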
"If you asked "Who's that athlete with all the bimbos?" and I couldn't remember the name, I'd probably say to myself "the golfer... black American... Norwegian wife... earns shitloads... silly name... tiger... Tiger Woods." "
That's pretty much the way my brain often works, as well. Interesting observation.
Personally I found it interesting. Given that the machine in question was fed all of its information from books, with no internet access or similar, it is an impressive feat of real-time natural language processing. I read somewhere that Watson draws on the order of 10 kW, as opposed to the human brain's 60 W, though, so there's some way to go yet before AI catches up with us...
I particularly like the following paragraph:
"As one might expect, though I added an intelligently written (albeit brief) handful of paragraphs last night that pointed out the technological weaknesses in Watson from the perspective of someone with a good deal of both academic and practical background of striking relevance, the teenage administrators of Wikipedia once again saw fit to delete it. And they have the nerve to ask for funding! They should all be mercifully put to death in our lifetimes."
I grabbed a copy from usenet this morning to watch at lunch, and found it utterly compelling.
Without re-watching I can't give specifics but it seemed to me that the questions Watson struggled with weren't those involving obscure facts or "tricks" of the sort a human player would find more difficult, but simply those that were more complex to parse in the time allotted.
Of course one of the reasons Jeopardy was chosen for this experiment is that many of the questions are structured in this way, so it was always going to be the case that Watson's language processing ability would be the real test. And clearly there is still some tweaking to be done in that department.
When there was extra information in the question simply as part of the sentence structure, or for background or context, but which wasn't really important in working out the answer, Watson seemed to get "confused" because it didn't know which bits to prioritise. Humans are generally very good at this sort of thing, throwing away the irrelevant in favour of the important. Watson hasn't quite got the hang of it. Yet.
On the other hand, for those questions from which it was easier to extract the core information required to figure out the answer quickly, Watson was damn near unbeatable. It was very impressive.
Given that the questions in Jeopardy tend to follow similar structural patterns it should be possible, given enough time, to enable Watson to learn from its earlier mistakes and apply new rules to later answers. It probably wouldn't help with general language processing, but it would make Watson a better Jeopardy player. It may well do this already.
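The within-game learning suggested above could take many forms; one minimal, entirely hypothetical sketch is a buzz policy that demands higher confidence on clue patterns that have already produced wrong answers. The pattern names and thresholds here are invented.

```python
# Invented illustration: after a miss on a clue pattern, raise the
# confidence bar required before buzzing in on that pattern again.

from collections import defaultdict

class BuzzPolicy:
    def __init__(self, base_threshold=0.5, penalty=0.15):
        self.base = base_threshold
        self.penalty = penalty
        self.misses = defaultdict(int)  # clue pattern -> misses so far

    def threshold(self, pattern):
        return self.base + self.penalty * self.misses[pattern]

    def should_buzz(self, pattern, confidence):
        return confidence >= self.threshold(pattern)

    def record_miss(self, pattern):
        self.misses[pattern] += 1

policy = BuzzPolicy()
print(policy.should_buzz("wordplay", 0.55))  # True: threshold is 0.50
policy.record_miss("wordplay")
print(policy.should_buzz("wordplay", 0.55))  # False: threshold now 0.65
```

As the paragraph notes, this sort of tweak would make a better Jeopardy player without improving general language processing at all.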
One thing I don't understand about the Jeopardy set-up is exactly when Watson gets its text version of the questions, compared with the human players. Do the humans get to see the whole question as Trebek is reading it, or do they have to listen to his delivery? Is nobody allowed to 'buzz' until Trebek has finished reading the question, or can they 'buzz' off-screen and get first go at the answer once he's finished? Is Watson fed the text at the same time Trebek starts reading, or as he finishes?
I'm hoping someone more familiar with this game can enlighten me, because it seems to me to be the one place where Watson may be at an advantage or disadvantage depending on when it's fed the text.
Yes, I know questions and answers effectively swap places in this show, but it's confusing as hell when you write them the wrong way round.
The question appears on the giant screen on the stage in front of the players, so yes, a good player will certainly read the question on the screen before Trebek has finished reading it aloud, figure it out, and be ready to buzz in as soon as he's done. Watson does the same, except that instead of reading the screen it gets the clue delivered as a small text file.
I had originally thought they would use OCR with a video camera to get the question, but of course that isn't really the problem they are trying to solve and would just be a waste of resources.
I am amused that the developers decided there was no need for Watson to know what wrong answers the other players had given, and yet in this first part of the first game they already had a case where Watson repeated a wrong answer already given by another player. The developers were apparently amused by that too.
It's not intelligence -- but it could lead to better search engines. You know, like Wolfram Alpha and Ask Jeeves :-P
Of course, if a search takes ~200ms on a 10-rack cluster, IBM is gonna have to charge good money for it. Is it worth a premium over regular, efficient, *fairly transparent* full-text search algorithms?
For those interested in a more in depth look at Watson, Nova did a show on the system's development which aired last week.
They showed a lot more of the practice games, many of which showed the ups and downs of Watson's answer/question selections and what IBM had to train it to work on.
They were supposed to be feeding Watson any responses from its two human competitors, to prevent it giving a duplicate wrong response and also to help it learn what kind of responses it should be giving for a particular category.
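The duplicate-response fix described above is, at its core, a simple filter. This is a made-up sketch of the idea, not IBM's actual mechanism; the ranked answers and the rejected guess are invented.

```python
# Hypothetical sketch: skip any candidate answer that another player
# has already given and had judged wrong on the current clue.

def next_guess(ranked_answers, wrong_so_far):
    """Return the top-ranked answer not already ruled out, else None."""
    ruled_out = {w.lower() for w in wrong_so_far}
    for answer in ranked_answers:
        if answer.lower() not in ruled_out:
            return answer
    return None

ranked = ["the 1920s", "the 1930s", "the 1910s"]
print(next_guess(ranked, wrong_so_far={"The 1920s"}))  # the 1930s
```

The case-insensitive comparison is one obvious wrinkle; matching a spoken response against Watson's candidate strings would be harder in practice.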
There was also a shorter Nova Science Now bit on Watson available here:
I think it was that show where they talked about the use of learning algorithms, which are also used in handwriting and speech recognition (though not by Watson, as it relies on text-only input), instead of a bunch of fixed statements.
For those who didn't see the first round of the game: where Watson did really well was on the simpler factual clues, but it had trouble when the clues were less direct or involved wordplay. I would like to see how it does on the New York Times crossword, where a large number of the clues involve alternate or weird meanings of words, etc.