The Ubiquity of Constraints
Constraints are everywhere. The popular puzzle Sudoku is an example of a Constraint Satisfaction Problem, where a sample constraint would be "all the numbers in the first row have to be different". Real-world constraint problems can involve reasoning about costs, preferences, uncertainty, and change. Constraints arise in design and configuration, planning and scheduling, diagnosis and testing, and in many other contexts. They define problems in telecommunications, internet commerce, electronics, bioinformatics, transportation, network management, supply chain management, and many other domains. Once problems are modeled as Constraint Satisfaction Problems, constraint satisfaction and optimization methods may help individuals and businesses make satisfactory or even optimal choices when presented with many options and restrictions. The abundance of potential applications multiplies the opportunities to validate and motivate basic research, and to transfer technology for economic and social benefit.
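As an illustration of how such a constraint can be stated (a sketch added here, not part of the original abstract; the function names are hypothetical), the Sudoku row constraint "all the numbers in the first row have to be different" is simply a predicate over a row's values:

```python
def all_different(values):
    """True if the non-empty cells (0 marks an empty cell) are pairwise distinct."""
    filled = [v for v in values if v != 0]
    return len(filled) == len(set(filled))

def rows_consistent(grid):
    """A grid satisfies the row constraints when every row is all-different."""
    return all(all_different(row) for row in grid)
```

A full Sudoku model would add the same all-different constraint over every column and 3x3 block; a constraint solver then searches for an assignment satisfying all of them at once.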
Acknowledgement: This material is based in part upon work supported by Science Foundation Ireland under Grant No. 05/IN/I886.
Professor Freuder is the Director of the Cork Constraint Computation Centre in the Department of Computer Science at University College Cork in Ireland. He received his B.A., magna cum laude, in mathematics from Harvard and a Ph.D. in computer science from M.I.T. He has been elected a Fellow of the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence, and the European Coordinating Committee for Artificial Intelligence, and is a Member of the Royal Irish Academy. He received the first Research Excellence Award of the Association for Constraint Programming, and served as Executive Chair of the Organizing Committee of the series of International Conferences on Principles and Practice of Constraint Programming, and as the founding Editor-in-Chief of the Constraints journal. In the Citeseer database of most cited authors in computer science Professor Freuder is ranked in the top one-tenth of one per cent. He has played a key role in obtaining over 60 million dollars in funding from government and industry to support scientific research.
AutoTutor and the World of Pedagogical Agents: Intelligent Tutoring Systems with Natural Language Dialogue
AutoTutor is a computer tutor that helps students learn concepts in science and technology by holding a conversation in natural language. Students input their contributions through a keyboard or speech, whereas AutoTutor communicates through an animated conversational agent with speech, facial expressions, and some rudimentary gestures. A recent version tracks and responds to learner emotions. Another version is integrated with an interactive simulation environment. Assessments of AutoTutor on learning gains have been quite promising (nearly a letter grade) compared with reading a textbook. This presentation describes AutoTutor and some of its offspring with animated pedagogical agents.
Art Graesser is a professor in the Department of Psychology, an adjunct professor in Computer Science, and co-director of the Institute of Intelligent Systems at the University of Memphis. Dr. Graesser received his Ph.D. in psychology from the University of California at San Diego and was a visiting researcher at Yale University, Stanford University, and Carnegie Mellon University. His primary research interests are in cognitive science, discourse processing, and the learning sciences. More specific interests include knowledge representation, question asking and answering, tutoring, text comprehension, inference generation, conversation, reading, education, memory, artificial intelligence, and human-computer interaction. He served as editor of the journal Discourse Processes (1996-2005) and is the current editor of Journal of Educational Psychology. He is president of the Society for Text and Discourse and Artificial Intelligence in Education. In addition to publishing over 400 articles in journals, books, and conference proceedings, he has written two books and edited nine books (one being the Handbook of Discourse Processes). He has designed, developed, and tested intelligent software in learning, language, and discourse technologies, including AutoTutor, Coh-Metrix, HURA Advisor, SEEK Web Tutor, MetaTutor, ARIES, Question Understanding Aid (QUAID), QUEST, and Point&Query.
There is growing interest in the automatic extraction of opinions, emotions, and sentiments in text (subjectivity analysis) to support natural language processing applications, ranging from mining product reviews and summarization, to automatic question answering and information extraction. In this talk, I will describe work on two problems in subjectivity analysis at opposite ends of a continuum: subjectivity sense labeling and discourse-level opinion interpretation.
Jan Wiebe is Professor of Computer Science and Director of the Intelligent Systems Program at the University of Pittsburgh. Her research with students and colleagues has been in discourse processing, pragmatics, word-sense disambiguation, and probabilistic classification in NLP. Her most recent work investigates automatically recognizing and interpreting expressions of opinions and sentiments in text, to support NLP applications such as question answering, information extraction, text categorization, and summarization. Her current and past professional roles include ACL Program Co-Chair, NAACL Program Chair, NAACL Executive Board member, Computational Linguistics and Language Resources and Evaluation Editorial Board member, AAAI Workshop Co-Chair, ACM Special Interest Group on Artificial Intelligence (SIGART) Vice-Chair, and ACM-SIGART/AAAI Doctoral Consortium Chair.
Linguistic Ontologies for Time and Space
Natural languages encode concepts of time in different ways and to varying degrees, using distinct grammatical, aspectual, and adverbial constructions; and yet the underlying possible relations between events and temporal expressions are universal. Similarly, languages impose very different linguistic constructions for related spatial configurations, while the underlying set of relations would appear to be logically fixed as well. For this reason, it is common to think that ontologies for both spatial and temporal domains can be designed independently of linguistic data, working from first principles within a logic of specific individuals and relations between them. In this talk, I will argue that ontology design must pay close attention to the manner in which spatial and temporal concepts are realized as linguistic descriptions. There are, however, many phenomena in language that suggest a richer and more complex interaction of semantic factors in our conceptual architecture. I discuss two such phenomena that pose significant challenges to the design of semantic ontologies for language: (a) co-compositionality, the emergence of new senses in context from bilateral composition and coercion; and (b) the structure of complex categories, where linguistic expressions can refer to "complex types" which denote, for example, both spatial and physical entities, or both temporal and spatial entities. I illustrate how such phenomena can be adequately explained within theories of linguistically motivated ontologies.
James Pustejovsky is a Professor of Computer Science at Brandeis University, where he is Director of the Laboratory for Linguistics and Computation and Chair of the Program in Language and Linguistics. Dr. Pustejovsky conducts research in the areas of computational linguistics, lexical semantics, knowledge representation, and information retrieval and extraction, and is a major developer of Generative Lexicon Theory. Recent areas of interest include temporal and spatial reasoning through language, and semantic annotation and standardization of representations for interoperability.
CTAT: Efficiently Building Real-world Intelligent Tutoring Systems through Programming by Demonstration
Intelligent tutoring systems (ITS) are highly effective in supporting student learning, but are difficult to build (Murray, 2003). The Cognitive Tutor Authoring Tools (CTAT) project started over 6 years ago with the goals of making it easier for experienced programmers, and possible for non-programmers, to create an ITS. CTAT supports tutor building through programming by demonstration, an approach that has been successful in a range of application areas, but that has been applied to only a very limited degree to ITS authoring. Using CTAT, an author creates a tutor by demonstrating correct and incorrect problem-solving behaviors, rather than by writing code. The resulting tutors, called example-tracing tutors, evaluate student behavior by flexibly comparing it against the demonstrated problem-solving examples.
A key question is whether programming-by-demonstration can support sophisticated tutor behavior. We illustrate that example-tracing tutors are capable of sophisticated tutoring behaviors, going well beyond VanLehn's (2006) minimum criterion for ITS status. They provide step-by-step guidance on complex problems while recognizing multiple student strategies and maintaining multiple interpretations of student behavior when there is ambiguity.
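To make the idea of maintaining multiple interpretations concrete, here is a minimal sketch (the names and flat-list representation are illustrative assumptions, not CTAT's actual data structures): each demonstrated solution path whose prefix matches the student's observed steps remains a live interpretation, so ambiguity is preserved until further steps resolve it.

```python
# Demonstrated solution paths for a hypothetical equation-solving problem;
# each path is one strategy an author demonstrated step by step.
demonstrated_paths = [
    ["distribute", "combine-terms", "divide"],  # one demonstrated strategy
    ["divide", "distribute", "combine-terms"],  # an alternative strategy
]

def live_interpretations(paths, student_steps):
    """Return every demonstrated path whose prefix matches the student's steps."""
    return [p for p in paths if p[:len(student_steps)] == student_steps]
```

Before the student acts, both paths are live; after the step "distribute", only the first strategy remains consistent, and a step matching no path can be flagged as an error.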
Example-tracing tutors have been built and used in real educational settings for a wide range of application areas. Development time estimates from these projects indicate that CTAT improves the cost-effectiveness of ITS development by a factor of 4 to 8, compared to historical estimates. Although there is considerable variability in these kinds of estimates, they nevertheless support our hope that lowering the skill requirements for tutor creation will support widespread adoption of ITS technology.
Dr. Vincent Aleven is an Assistant Professor in Carnegie Mellon's Human-Computer Interaction Institute. His research focuses on understanding and improving learning with intelligent tutoring systems, with an emphasis on authoring tools, tutoring metacognition, and supporting learners in ill-defined domains. His research has been published in journals such as Cognitive Science, Educational Psychology Review, the International Journal on Artificial Intelligence and Education, and Artificial Intelligence. He and colleagues recently won the Cognition and Student Learning prize sponsored by IES, given to "the best full paper submission to the 2008 Annual Conference of the Cognitive Science Society on a topic directly related to cognitive science, educational practice and subject-matter learning." He twice won the best paper award at the International Conference on Intelligent Tutoring Systems. Dr. Aleven is a member of the Executive Committee of the Pittsburgh Science of Learning Center, an NSF-sponsored research center spanning multiple departments both at Carnegie Mellon and the University of Pittsburgh. He has served on the program committee of major conferences in intelligent tutoring systems, and has organized numerous workshops during these conferences. He will be the Program Committee Co-Chair of the 2010 International Conference on Intelligent Tutoring Systems.
Multimodal Case-Based Reasoning
Much research on case-based reasoning has focused on conceptual and causal knowledge of past experiences, though there also has been some research on visual case-based reasoning. There are many tasks, however, that require multimodal case-based reasoning. Understanding sketches, drawings, and diagrams is an example of such a task; causality in a drawing, for example, is at most implicit. Addressing such tasks requires a new scheme for case representation and organization, one that enables case-based inferences about causality from visuospatial representations. In this talk, I will describe a computational technique for understanding engineering drawings by constructing a teleological model of the target drawing by analogy to the model of a known drawing. Knowledge of the source case is organized in a multimodal schema that contains the source drawing and its teleological model represented at multiple levels of abstraction: the lines and intersections in the drawing, the shapes, the structural components and connections, the causal interactions and processes, and the function of the system depicted in the drawing. Given a target drawing and a relevant source case, our technique of compositional analogy first constructs a representation of the lines and the intersections in the target drawing, then uses the mappings at the level of line intersections to transfer the shape representations from the source case to the target, and finally uses the mappings at the level of shapes to transfer the full teleological model of the depicted system from the source to the target. The Archytas computer system implements this multimodal case representation and the technique of understanding drawings by constructing teleological models through compositional analogy.
Ashok K. Goel is an Associate Professor of Computer Science in the School of Interactive Computing at Georgia Institute of Technology. He is Director of the School's Design Intelligence Laboratory, and a Co-Director of the Institute's Center for Biologically Inspired Design. He has pioneered research on interactive case-based design, integrating case-based and model-based reasoning, visual case-based reasoning, and meta-case-based reasoning. Ashok serves on the editorial boards of the Journal of Experimental and Theoretical Artificial Intelligence, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, and Advanced Engineering Informatics. He is a member of the Program Committees of the Eighth International Conference on Case-Based Reasoning (ICCBR-09) and the IJCAI-09 Workshop on Grand Challenges in Reasoning from Experience.