AI*IA 2003 - Eighth National Congress
of the Associazione Italiana per l'Intelligenza Artificiale (Italian Association for Artificial Intelligence)
23-26 September 2003, Polo didattico "L. Fibonacci", University of Pisa
In the last few years robotics has increasingly been recognized and accepted not only as a field for application in industry and services, but also as a potentially ideal application domain for Artificial Intelligence. AI has contributed significantly to the progress of various areas of robotics, particularly those of perception, sensory-motor coordination and intelligent behavior.
AI research, in fact, has focused on developing entities with intelligence comparable to that of humans - that is, with the capability of reasoning and managing knowledge. Not without controversy, AI research eventually recognized (as Rodney Brooks at MIT first proposed) that interaction with the physical world is critical for developing intelligence. Since then, many research groups have tried to develop humanoid "bodies." According to this approach, human-like intelligence involves not only reasoning but also learning, perceiving, and interpreting the physical world, and interacting with the world and with humans. These goals are much more difficult to implement in machines than pure reasoning. As Tommaso Poggio pointed out, humans have developed reasoning rules only in the last few millennia; perception seems so natural to us because nature has refined it over millions of years. In fact, AI has recently achieved what was considered for decades to be its biggest challenge: defeating a human champion in the game of chess. AI's new, much more challenging goal is now to develop humanoid robots that can play, and possibly win, against a human soccer team. If this concept of "embodiment" proposes robotics as a tool for developing artificial intelligence, recent technological advances in the development of biomimetic robotic artefacts and of their sensory-motor and behavioral schemes have led to a further conceptual advancement: the application of robotics to understanding intelligence itself. This area is sometimes referred to as "neuro-robotics", a term that underlines the contributions of both neuroscience and robotics. Neuro-robots are built by implementing models formulated by neuroscientists, in order to serve as experimental platforms for the validation of such models.
Even though still in its infancy, this emerging field is very promising. In this talk, a number of experimental projects will be presented and discussed. The ultimate goal of these projects is to assess the real contribution that the field of neuro-robotics can bring to the comprehension of the human brain and its functions, and to identify the conditions for the exploitation of the potential of this approach.
Robert C. Moore
In the history of artificial intelligence, many controversies have divided the field. The 1970s saw the clash of "logical" AI vs. "procedural" AI, which then became generalized into "neat" AI vs. "scruffy" AI. In the 1980s and 1990s, "connectionist" and "reactive" approaches to AI arose to challenge the predominant paradigm based on reasoning with explicit representations. Perhaps due to the increasing maturity of the field (or its practitioners), such basic disagreements rarely seem to generate as much heated argument as in prior years. Nevertheless, divisions exist today in AI that are at least as fundamental as any of those mentioned above.
In my view, the most significant such split in the field today concerns the question of how knowledge is to be acquired by intelligent systems. The two principal paradigms can be described as knowledge engineering and data-driven learning. The term "knowledge engineering" originated in connection with a particular approach to building expert systems, but I will use it in the broader sense of any AI system dependent on human experts entering large amounts of knowledge. In contrast, data-driven learning generally involves developing some sort of abstract (often statistical) model of a problem and training the model on large amounts of data, either as it naturally occurs or with human annotation.
No subfield of AI has been more profoundly affected by the contrast between these two approaches to knowledge acquisition than natural-language processing. Over the last fifteen years or so, NLP has gone from being almost completely in the knowledge-engineering camp to predominantly focused on data-driven-learning-based approaches. In this talk, we will review the advantages and disadvantages of both of these paradigms for NLP, looking at some of the major ideas and achievements of each. We will conclude by examining the prospects for a synthesis that combines the benefits of both.
Dr. Robert C. Moore is a Senior Researcher at Microsoft Research. Prior to joining Microsoft, Dr. Moore was Director of the Research Institute for Advanced Computer Science (RIACS) at NASA Ames Research Center. Previously, he held a series of positions with SRI International, including founding and serving as the first Director of SRI's Computer Science Research Centre in Cambridge, England; being Director of SRI's Natural-Language Research Program in Menlo Park, California; and concluding his career at SRI as Principal Scientist in the Natural-Language Research Program.
Dr. Moore's research has ranged widely within artificial intelligence, natural-language processing, and computational linguistics. His early work focused on knowledge representation and automated reasoning, and included the invention of autoepistemic logic. His more recent interests in natural-language processing and computational linguistics include natural-language semantics, parsing and generation, and speech understanding. His current work focuses on applications of machine learning and statistical modeling to natural-language processing, particularly in the context of machine translation.
Dr. Moore received all his post-secondary education at MIT, culminating in a PhD in Artificial Intelligence in 1979. He is a Fellow of the American Association for Artificial Intelligence, a member of the editorial board of the journal "Artificial Intelligence," and a former member of the editorial board of "Computational Linguistics."