Of Carbon and Silicon
Monday, 28 February 2011
Trek Tech II: "Watson" goes where only Sci-Fi has gone before.

As you may be aware, last week on Jeopardy! there was a contest which will likely prove to be quite an historic event. It was, of course, the match between the two all-time Jeopardy! scoring champions, Ken Jennings and Brad Rutter, and the IBM computer system named "Watson". The contest has been put on YouTube in various forms -- most of the viewer comments concern Skynet and a coming silicon, or computer, uprising. But, caught up in the novelty of an artificial intelligence defeating the two men with the greatest store of TMRB-level knowledge in the entire world, people seem to have forgotten something.

"Watson", whilst cutting an imposing square silhouette on the Jeopardy! stage, was created to understand natural language. That is, idioms, colloquialisms, puns, and any sufficiently abstract thought which is created by the human brain and put into vocalised words. What does that remind you of?

Star Trek, of course. When Commander Riker creates a holodeck programme, he does so by explaining his desires to the computer in generalised concepts... "But, computer -- blondes and jazz rarely go together." The computer interprets his statement as a command to change the obligatory nightclub bombshell's hair colour to something other than blonde.
From what I understand of how "Watson" works, it would parse the command and strip it down to its base parts. In the case of Cmdr. Riker, "Watson" would find "blondes...jazz...rarely" to determine that something is wrong with the scene. Of course, the bartender or the jazz bassist could have blonde hair, so it would postulate that a man referring to "blondes", plural, would be talking about women with blonde hair. Since it already knows that the scene is wrong, the determination that it is the hair-colour on the woman in the scene which is wrong would give it enough information to change the scene accordingly.
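To make the idea concrete, here is a toy sketch of that keyword-stripping step. To be clear, this is my own illustration of the general technique, not IBM's actual pipeline; the stopword list is invented for the example.

```python
# Invented stopword list for illustration -- a real system would use a far
# larger list plus grammatical parsing, not a bare lookup like this.
STOPWORDS = {"but", "computer", "and", "go", "together"}

def extract_keywords(utterance):
    """Strip an utterance down to its content-bearing words."""
    words = [w.strip(".,!?-").lower() for w in utterance.split()]
    return [w for w in words if w and w not in STOPWORDS]

# Cmdr. Riker's complaint boils down to "blondes... jazz... rarely":
extract_keywords("But, computer -- blondes and jazz rarely go together.")
```

From those surviving keywords, the system still has to reason about which element of the scene the complaint targets -- that is the genuinely hard part, and it is well beyond this little sketch.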

Or, in the context of its Jeopardy! skills...
The Starfleet library computer is foremost a reference tool. Say, perhaps, that Lt. Ayala asks the computer who the most prolific new-age composer of the mid-21st century was. After thoroughly researching the database of composers it has on file, it would return the response, "J Sebastian Perry".
"Watson" would find the keywords "new-age composer... most prolific... mid-21st century". Finding that, from the list of 21st-century new-age composers it has on file, J Sebastian Perry composed 47 more pieces of music than the runner-up, Jerry Martin, it would conclude that he must be the most prolific.
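Once the keywords are in hand, the lookup itself is a simple maximum over a catalogue. A minimal sketch, with composer names taken from the example above and piece counts entirely made up (chosen only so the gap is 47, as in the scenario):

```python
# Hypothetical catalogue: composer -> number of pieces on file.
# The figures are invented for illustration; only the 47-piece gap
# between Perry and Martin comes from the scenario in the post.
catalogue = {
    "J Sebastian Perry": 112,
    "Jerry Martin": 65,
    "Aria Hypothetical": 40,
}

def most_prolific(catalogue):
    """Return the composer with the most pieces on file."""
    return max(catalogue, key=catalogue.get)
```

The real difficulty, again, is not this final `max` -- it is building and trusting the catalogue, and mapping a natural-language clue onto the right query in the first place.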

At any rate, the basic idea behind "Watson", as I understood it from the developer interviews, was to make a superencyclopaedia that could be interacted with using natural language. Simply put, it does not have the capability (at this stage, anyway) of asking its own questions... outside the format of Jeopardy! responses, that is.
Why is this relevant? Because the basic requirement for machine "consciousness" (also known as "sentience") is that the intelligence be able to ask philosophical questions and expect meaningful answers to them.
Self-awareness is largely accepted as another requirement for machine consciousness, but I don't believe it should be. A computer can be programmed to use personal pronouns such as "I" and "me" without having any sentience at all. I can make my speech synthesiser say, "I like toast", but that's because I, Spiny McSpleen, typed it into the spoken text field. Self-awareness is too easily falsified.
I suppose that a computer could also be programmed to ask a philosophical question and only to accept responses that the human programmer wants to hear. However, if the computer can ask the question without being prompted to do so and the programmer can attest that he did not tell the computer to do it, that qualifies, in my book, as an artificial sentience.

One could say that "Watson" is sentient by its ability to learn from its mistakes, but that trait is native to all artificial intelligences. Even Sims in The Sims 2 and 3 have enough smarts to learn what their fellow Sims like and don't like and how to behave around them. You can't call it an "artificial intelligence" with any credibility if it doesn't learn.

The point is that "Watson" is more like the Starfleet library computer than HAL 9000. It is an artificial intelligence. Granted, a very complex intelligence... but intelligence alone only goes so far. It is not an artificial sentience or artificial consciousness -- it has no personal agenda, no carbon/silicon biases, no clandestine plans for world domination. It's a rack of servers full of data from Wikipedia, the Internet Movie Database, dictionary.com, perhaps even TMRB. Terabytes of text. Simple, plain, old text.

Letters... and words... AI gets absurd. I just gotta jump back?

Yes, "Watson" probably even knows how each of Strong Bad's computers met their respective demises.

So, before you write another "Kill 'Watson' Before It Kills Us" rant on another tech blog, take a second or two and consider what I've said.


Posted by theniftyperson at 10:40 PM CST
