Pan-DL

Organizers

Laura Chiticariu

Chief Architect for Watson Language Foundation in the IBM Data & AI product organization. Laura builds NLU systems that are accurate, scalable and transparent. She believes in the notion of "Transparent Machine Learning" in NLU: leveraging machine learning techniques while ensuring that the NLU system remains transparent, i.e., easy to comprehend, debug and enhance. As such, the NLP capabilities in offerings under Laura's technical direction, including IBM Watson Natural Language Understanding, IBM Watson Discovery and IBM Watson Knowledge Studio, expose and combine both machine learning and rule-based techniques for solving NLU problems. Previously, Laura was a Researcher in the Scalable Natural Language Processing (NLP) group at IBM Research - Almaden, where she was a core developer of SystemT, a state-of-the-art declarative rule-based information extraction system, and led the transfer of SystemT to multiple IBM products. Laura's research interests included rule-based NLP programming languages, developer tooling for rule development, and the application of machine learning in building rule-based models. Laura has taught NLP at universities within and outside the U.S., and has developed two online courses in the process.

Yoav Goldberg

Associate Professor, Computer Science, Bar Ilan University; and Research Director of AI2 Israel. Yoav’s research interests include language understanding technologies with real-world applications, combining symbolic and neural representations, uncovering latent information in text, syntactic and semantic processing, and interpretability and foundational understanding of deep learning models for text and sequences. He is particularly interested in exposing NLP technology to non-NLP experts, and believes pattern-based techniques are a way to achieve that. He authored a textbook on deep learning techniques for natural language processing, was among the IEEE's AI Top 10 to Watch in 2018, and received the Krill Prize in Science in 2017. Yoav was a co-organizer of the 4th and 5th ACL workshops on parsing morphologically rich languages (SPMRL 2013, 2014), the 1st and 2nd ACL workshops on evaluating vector space models for NLP (RepEval 2016, 2017), and co-organizer / co-program-chair of CoNLL 2016.

Gus Hahn-Powell

Assistant Professor, Linguistics, University of Arizona. Gus is a core contributor to numerous information extraction and knowledge assembly systems that hybridize linguistic rules and statistical methods. His research interests center on machine reading for scientific discovery and on building systems that scour the literature, analyze findings, and synthesize discoveries to generate novel hypotheses.

Clayton T. Morrison

Associate Professor, School of Information, University of Arizona. Clay’s research interests revolve around developing machine learning algorithms for learning structured representations from data, with applications in natural language processing and time series analysis. His current research includes developing methods for machine reading from scientific documents, assembling probabilistic dynamics models from textual descriptions of causal mechanisms, and assembling executable scientific models from text, equations, and source code. Clay’s areas of expertise are in AI and machine learning and include probabilistic graphical modeling, nonparametric Bayesian inference and causal inference. Clay has served as Finance Chair (2015) and Program Chair (2012) of the IEEE International Conference on Development and Learning and Epigenetic Robotics, and has served as chair and program committee member for numerous NSF and AAAI workshops (conference workshops and fall/spring symposia).

Aakanksha Naik

Graduate Student, Language Technologies Institute, School of Computer Science, Carnegie Mellon University. Aakanksha’s research interests lie in developing models and evaluation frameworks for the long tail in language understanding, namely domains and phenomena that are underrepresented in standard benchmarks. Her current work focuses on extraction of semantic representations such as events and timelines for expert domains like clinical text, prompting an interest in pattern-based methods for data augmentation and building lightweight interpretable systems. She has served on the program committee of LOUHI and co-organized mentoring programs at NAACL 2019, ACL 2019 and ACL 2020.

Enrique Noriega-Atala

Postdoctoral Research Associate, Computer Science, University of Arizona. Enrique's research is primarily focused on Information Extraction and Information Retrieval in the biomedical domain. He is a contributor to REACH, an open-source, high-throughput information extraction system for the biomedical domain based on the Odin language. Enrique is also interested in applications of reinforcement learning to NLP. He has served on the program committees of TextGraphs, SDU and SUKI.

Rebecca Sharp

Assistant Research Professor, Linguistics, Lex Machina. Rebecca’s research is in Linguistics and Natural Language Processing, where she uses a variety of rule-based and machine learning approaches to extract and assemble knowledge in order to perform and explain approximate inference. Her work is very interdisciplinary and she collaborates actively with researchers in Linguistics, Computer Science, Information, and Nursing. This allows her to work with people from diverse backgrounds and with a wide variety of information needs, skills, approaches, and analysis techniques.

Mihai Surdeanu

Associate Professor, Computer Science, University of Arizona. Dr. Surdeanu works on systems that process and extract meaning from natural language texts, with a focus on question answering (answering natural language questions) and information extraction (converting free text into structured relations and events). He focuses mostly on interpretable models, i.e., approaches where the computer can explain in human-understandable terms why it made a decision. He has published more than 90 peer-reviewed articles, including four that were among the three most cited articles at their respective venues that year. His work has been funded by several government organizations (DARPA, NIH, NSF), as well as private foundations (the Allen Institute for Artificial Intelligence, the Bill & Melinda Gates Foundation). He has co-organized multiple shared tasks and workshops, such as CoNLL, Knowledge Base Population (KBP), and TextGraphs.

Marco Valenzuela-Escárcega

Research Scientist, Computer Science, University of Arizona. Marco's research is in Natural Language Processing and Information Extraction. He is the primary author of the Odin rule language and runtime system, as well as its highly optimized successor, Odinson. These tools have been used to build large-scale information extraction systems over several domains, including biomedicine, public health, agriculture, astronomy, and social science. He is currently interested in methods for helping domain experts use these tools to build their own information extraction systems without requiring knowledge of linguistics or NLP.

Program Committee

Dane Bell, Maria Alexeeva, Xuan Wang
Daniela Claro, Anthony Rios, Aryeh Tiktinsky
Juan Diego Rodriguez, Hillel Taub-Tabib, Robert Vacareanu
Andrew Zupon, Hoang N. H. Van

If you would like to get involved, please reach out.

© 2023 by Pan-DL. All rights reserved.