Associate Professor, Computer Science, University of Arizona. Dr. Surdeanu works on systems that process and extract meaning from natural language texts, such as question answering (answering natural language questions) and information extraction (converting free text into structured relations and events). He focuses mostly on interpretable models, i.e., approaches where the computer can explain in human-understandable terms why it made a decision. He has published more than 90 peer-reviewed articles, including four articles that were among the top three most cited articles at their respective venues that year. His work has been funded by several government organizations (DARPA, NIH, NSF), as well as private foundations (the Allen Institute for Artificial Intelligence, the Bill & Melinda Gates Foundation). He has co-organized multiple shared tasks and workshops, such as CoNLL, Knowledge Base Population (KBP), and TextGraphs.
Chief Architect for Watson Language Foundation in the IBM Data & AI product organization. Laura builds NLU systems that are accurate, scalable, and transparent. She believes in the notion of "Transparent Machine Learning" in NLU: leveraging machine learning techniques while ensuring that the NLU system remains transparent, that is, easy to comprehend, debug, and enhance. As such, the NLP capabilities in offerings under Laura's technical direction, including IBM Watson Natural Language Understanding, IBM Watson Discovery, and IBM Watson Knowledge Studio, expose and combine both machine learning and rule-based techniques for solving NLU problems. Previously, Laura was a Researcher in the Scalable Natural Language Processing (NLP) group at IBM Research - Almaden, where she was a core developer of SystemT, a state-of-the-art declarative rule-based information extraction system, and led the transfer of SystemT to multiple IBM products. Laura's research interests include rule-based NLP programming languages, developer tooling for rule development, and the application of machine learning to building rule-based models. Laura has taught NLP at universities within and outside the U.S., and developed two online courses in the process.
Program Director in the Artificial Intelligence Center (AIC) at SRI International. Dayne leads the Advanced Analytics group within the AIC, the center of gravity of NLP research at SRI. His interests center on the extraction, retrieval, and integration of information from human language, with a recent emphasis on scientific discourse. He has served as an organizer of the Workshop on Scientific Document Processing and a recent Dagstuhl seminar series on scholarly argumentation. Dayne has led, and continues to lead, a number of NLP research efforts sponsored by DARPA (chemistry information extraction), IARPA (authorship attribution), NSF (proposal analysis), ONR (procedural language understanding), and AFRL (predictive social media analytics).
Assistant Professor, Linguistics, University of Arizona. Gus is a core contributor to numerous information extraction and knowledge assembly systems that hybridize linguistic rules and statistical methods. His research interests center on machine reading for scientific discovery and on building systems that scour the literature, analyze findings, and synthesize discoveries to generate novel hypotheses.
Associate Professor, School of Information, University of Arizona. Clay’s research interests revolve around developing machine learning algorithms for learning structured representations from data, with applications in natural language processing and time series analysis. His current research includes developing methods for machine reading from scientific documents, assembling probabilistic dynamics models from textual descriptions of causal mechanisms, and assembling executable scientific models from text, equations, and source code. Clay’s areas of expertise are in AI and machine learning, and include probabilistic graphical modeling, nonparametric Bayesian inference, and causal inference. Clay has served as Finance Chair (2015) and Program Chair (2012) of the IEEE International Conference on Development and Learning and Epigenetic Robotics, and as chair and program committee member for numerous NSF and AAAI workshops (conference workshops and fall/spring symposia).
Research Scientist, Computer Science, University of Arizona. Enrique's research focuses primarily on information extraction and information retrieval in the biomedical domain. He is a contributor to REACH, an open-source, high-throughput information extraction system for the biomedical domain based on the Odin language. Enrique is also interested in applications of reinforcement learning to NLP. He has served on the program committees of TextGraphs, SDU, and SUKI.
Ellen Riloff is a Professor in the Department of Computer Science at the University of Arizona. Her primary research area is natural language processing, with an emphasis on information extraction, affective text analysis, semantic class induction, and bootstrapping methods that learn from unannotated texts. Prof. Riloff has served as General Chair for the EMNLP 2018 conference, as Program Co-Chair for the NAACL HLT 2012 and CoNLL 2004 conferences, on the NAACL Executive Board (2004-2005 and 2017-2018), on the Computational Linguistics Editorial Board, and on the Transactions of the Association for Computational Linguistics (TACL) Editorial Board. In 2018, Prof. Riloff was named a Fellow of the Association for Computational Linguistics (ACL).
Principal Data Scientist, Lex Machina. Rebecca works on legal-domain natural language processing and machine learning. Prior to joining Lex Machina, Rebecca was an Assistant Research Professor at the University of Arizona, where she specialized in information extraction, multimodal emotion detection, and other applications of machine learning.
Principal Research Engineer, Lum AI. Marco's research is in natural language processing and information extraction. He is the primary author of the Odin rule language and runtime system, as well as its highly optimized successor, Odinson. These tools have been used to build large-scale information extraction systems across several domains, including biomedicine, public health, agriculture, astronomy, and social science, among others. He is currently interested in methods for helping domain experts use these tools to build their own information extraction systems without requiring knowledge of linguistics or NLP.
Hoang N.H. Van
If you would like to get involved, please reach out.