One of the more notable programs that Hofstadter critiques is "ACME", developed by Keith Holyoak and Paul Thagard. The program was supposed to be able to handle Socrates' remark about himself being a "midwife of ideas": given a knowledge base describing what midwifery is, it could swap strings of information for one another and easily produce an "analogy" between Socrates and what a midwife really does.
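To make that point concrete, here is a tiny sketch of what "switching out strings" can look like. This is entirely my own toy illustration, not ACME's actual constraint-satisfaction machinery, and the role table and function names are invented for the example: a lookup table of corresponding roles plus a substitution function produces something that reads like an analogy, even though nothing is understood.

```python
# A toy illustration (my own invention, not ACME's real mapping code) of how
# swapping strings against a small knowledge base can yield something that
# looks like an analogy without any understanding behind it.

# Hypothetical knowledge base: roles in the "midwife" domain mapped onto
# roles in the "Socrates" domain.
ROLE_MAP = {
    "midwife": "Socrates",
    "mother": "student",
    "baby": "idea",
    "delivers": "draws out",
}

def substitute(sentence: str, mapping: dict) -> str:
    """Replace each known term with its counterpart in the target domain."""
    for source, target in mapping.items():
        sentence = sentence.replace(source, target)
    return sentence

if __name__ == "__main__":
    fact = "A midwife delivers a baby from a mother."
    # Prints: "A Socrates draws out a idea from a student."
    # The output reads like an analogy, but the program has no grasp of
    # midwifery, ideas, or Socrates; it only swapped strings.
    print(substitute(fact, ROLE_MAP))
```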
Hofstadter immediately shoots this idea down and shows that ACME does not "know" the analogy, but merely hides that fact behind the switching of strings and patterns within the remark. I found this quite interesting, and it again brought my attention to Searle's Chinese room argument. Is it so that a computer cannot really "know" what specific things are in the world without a seemingly infinite knowledge base acquired through humans? Is it possible for a computer to actually "know" what certain items or objects are, and to analogously compare different items in the world to gain its own "knowledge"? This seems like an interesting topic to me, and I know it has come up in many previous discussions about AI.