Arguments in favor of the basic premise must show that such a system is possible.
The first step to answering the question is to clearly define "intelligence".
If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper. In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.” Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

I tried to refer Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice. Some excerpts from my own conversation with Eugene:

Scott: Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Just two, but Chernobyl mutants may have them up to five.

Scott: No, I need to know that you’re not a chatbot.

Eugene: :-))) Oh, what a fruitful conversation;-) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: Do you understand why I’m asking such basic questions? Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: Ask the Creator about it (call Him by phone, for example:-).

Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?
Important questions in the philosophy of AI include: Is it possible to create a machine that can solve all the problems humans solve using their intelligence?
This question defines the scope of what machines will be able to do in the future and guides the direction of AI research.
Such questions reflect the divergent interests of AI researchers, linguists, cognitive scientists, and philosophers.
Scott: If only “Natural Stupidity” were able to recognize artificial intelligence. What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!
This entry was posted on Monday, June 9th, 2014 and is filed under Nerd Interest, Procrastination, Rage Against Doofosity.
Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes: "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". AI research, by contrast, defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment.
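The agent abstraction above can be made concrete with a short sketch. This is a minimal illustration under my own naming (the `Agent` and `ReflexVacuumAgent` classes and the two-square vacuum world are illustrative assumptions, not part of the text): an agent is just a mapping from percepts to actions.

```python
# Minimal sketch of "an agent is something which perceives and acts
# in an environment". All class and action names here are illustrative.

class Agent:
    """Maps a percept from the environment to an action on it."""

    def program(self, percept):
        raise NotImplementedError


class ReflexVacuumAgent(Agent):
    """A trivial reflex agent in a two-square world (squares "A" and "B").

    The percept is a (location, status) pair; the action is chosen
    directly from the current percept, with no memory or planning.
    """

    def program(self, percept):
        location, status = percept
        if status == "dirty":
            return "suck"          # clean the current square
        return "right" if location == "A" else "left"  # else move on


agent = ReflexVacuumAgent()
print(agent.program(("A", "dirty")))  # suck
print(agent.program(("A", "clean")))  # right
```

Even this toy example shows why "intelligent agent" is a more tractable definition than "thinking machine": the agent's competence is judged entirely by its observable percept-to-action behavior, not by any claim about what is going on inside it.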