Artificial Idiot Robots, Jon Stewart, and the private sector
Tonight I was watching The Daily Show. The guest was Lee Gutkind, author of "Almost Human: Making Robots Think".
Jon Stewart seemed really disappointed to find out that the robots only perform basic tasks like shaking cans, bringing food, and mapping caves (the actual research is more advanced).
Nevertheless, he made an inadvertent but interesting point: he asked the author why this research seems to start anew with every robot.
The guest's answer was irrelevant, but it made me think of the following pseudo-argument:
P1: this type of research is done in universities (Gutkind spent some time at Carnegie Mellon writing his book)
P2: a professor generally has a handful of students for 2 to 5 years; the professor supervises their work, guides them, and co-signs their papers
P3: these students have good ideas and are theoretically strong, but in practice they have little experience building something concrete - like a robot, or a very complex algorithm
P4: this type of research eats enormous amounts of money and time because of inexperienced management, and many projects get killed
P5: artificial intelligence is not intelligent. The theory has splintered into various subfields and seems to have stagnated on its initial premise of developing intelligent machines.
Can any robot take the challenge of passing a standard Binet IQ test? :)
------
So with theory not advancing at an acceptable rate, constant student turnover, and badly managed projects, the results will always be of more or less the same complexity.
In conclusion, Jon Stewart was perfectly right to infer that these robotics projects evolve extremely slowly.
They actually do. This is an extremely difficult problem, maybe still theoretically impossible, tackled under bad management mainly by groups with extremely high turnover.
Lemma
For this field to evolve properly, the private sector needs to get more involved.
This would mean acceptable funding, better management and lower employee turnover.
DARPA got involved to create some autonomous vehicles.
Maybe the best research out there is done by Rodney Brooks at MIT with his research groups and the public company iRobot.
All it took for Brooks was a break from the philosophical perspective of symbolic representation (a safe move for now, and, hey, artificial idiot robots are cheaper and more reliable) and founding his own company.
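Brooks's break from symbolic representation is his behavior-based approach, best known as the subsumption architecture: layered reactive behaviors where a higher-priority layer can subsume the ones below it, with no symbolic world model in between. Here is a minimal sketch of the idea (the specific behaviors, sensor fields, and thresholds are all invented for illustration):

```python
# Minimal sketch of a subsumption-style controller: behaviors are ordered
# by priority, each maps raw sensor readings straight to a command, and a
# higher layer that fires suppresses ("subsumes") everything below it.

def avoid_obstacle(sensors):
    """Highest priority: turn away if something is close."""
    if sensors["distance"] < 0.5:
        return "turn_left"
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Lowest priority: the default behavior."""
    return "go_forward"

# Layers ordered from highest to lowest priority.
LAYERS = [avoid_obstacle, wander]

def act(sensors):
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:  # this layer subsumes the ones below
            return command
    return "stop"

print(act({"distance": 2.0}))  # -> go_forward
print(act({"distance": 0.3}))  # -> turn_left
```

The point is that nothing in the loop reasons about a model of the world; "intelligent-looking" behavior emerges from the layering alone, which is exactly why these artificial idiot robots are cheap and reliable.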
On another note:
A reviewer on Amazon (see link above) quotes the book (I don't know if the quote is real, though): "Linux is the language in which some of the robotics programs are written. The reason Apple computers are not used extensively here is because Apple's can't interface with Linux.". This is just ignorance: Linux is an operating system, not a programming language, and Apple machines interface with Linux just fine.
Also, during the show, the guest said something along the lines that the robots are so complex that nobody would know how to put them back together if they broke, and that the whole building process is like art. The emergent results may look like art, but the way the machine is built does not. Unless you are a fancy-pantsy interactive designer.
Another example: a machine would hit a perfect golf strike and (just like a human) would not be able to reproduce it. By definition, if an AI machine hits the ball once, I can predict that it will hit it more and more often until it gets it right every time (unless you move the ball). Let's not get too emo about this AI crap.
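The reproducibility point can be made concrete with a toy sketch (my own, not from the show or the book; the target value, the `strike` test, and the search loop are all invented): a machine that stumbles onto a successful swing can store the exact parameters and replay them, so its hit rate climbs instead of the strike being lost forever.

```python
import random

random.seed(1)
TARGET = 7.3  # the ideal swing strength, unknown to the learner

def strike(strength):
    """A strike is perfect if the strength is close enough to the target."""
    return abs(strength - TARGET) < 0.1

def learn(trials=2000):
    """Search randomly until a strike works, then replay it exactly."""
    remembered = None
    hits = 0
    for _ in range(trials):
        strength = remembered if remembered is not None else random.uniform(0, 10)
        if strike(strength):
            remembered = strength  # a machine can store the exact parameters
            hits += 1
    return hits

print(learn())  # once a good strength is found, nearly every later trial hits
```

A human golfer cannot serialize their muscle state; a machine can, which is exactly why "it can never reproduce it" is backwards.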
Anyway, he may be a good author, but he didn't appear to understand the subject at all.
-- Octavian Mihai