We are delighted to invite you to an afternoon of public lectures on language technology and its interaction with society. The lectures will take place in Auditorium 4 at ITU on 13 June 2023. Attendance is free for anyone interested.
The lectures will also be streamed on Zoom. For remote participation, please register here.
Luca Maria Aiello, IT University of Copenhagen
In the coming decades, a defining task for humankind will be to tackle global challenges through mass coordination that needs to unfold i) rapidly, ii) at a continental scale, and iii) organically, with minimal top-down orchestration. Social science research has identified psycho-linguistic aspects of social interactions (e.g., knowledge exchange, group identity, and trust) that are key triggers of spontaneous coordination when used strategically in public debate. We embarked on a quest to create NLP tools that can capture these "social dimensions" from conversational language. We show that, when applied to a wide variety of social interaction data, they can explain and predict outcomes of consensus and coordination. In this talk, we will also discuss how this type of NLP tool can benefit from the latest Large Language Models. Our ultimate goal is to provide a blueprint for creating online participatory platforms that help their members find agreement on how to solve problems that can be addressed only through mass action, such as rapid climate change and global pandemics.
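To make the idea of "capturing social dimensions" concrete, here is a minimal, purely illustrative sketch that tags messages with dimensions such as knowledge exchange, trust, and group identity using a tiny hand-written lexicon. The lexicon and function names are hypothetical; the tools discussed in the talk rely on trained NLP models, not keyword matching.

```python
# Hypothetical toy lexicon mapping each social dimension to cue words.
# The real systems described in the talk learn these signals from data.
SOCIAL_DIMENSION_LEXICON = {
    "knowledge": {"explain", "learn", "information", "because"},
    "trust": {"rely", "honest", "promise", "believe"},
    "identity": {"we", "us", "our", "together"},
}

def score_dimensions(message: str) -> dict:
    """Count lexicon hits per social dimension in one message."""
    tokens = [tok.strip(".,!?") for tok in message.lower().split()]
    return {
        dim: sum(tok in cues for tok in tokens)
        for dim, cues in SOCIAL_DIMENSION_LEXICON.items()
    }

scores = score_dimensions("Together we can do this, because I believe in us.")
```

Applied to a conversation, such per-message scores can be aggregated over time to study how expressions of trust or shared identity precede moments of coordination.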
Shashi Narayan, Google DeepMind
The ability to convey relevant and faithful information is critical for many tasks in conditional generation, yet it remains elusive for neural seq-to-seq models, whose outputs often contain hallucinations and fail to cover important details. In this work, we advocate planning as a useful intermediate representation for rendering conditional generation less opaque and more grounded. We propose a new conceptualization of text plans as a sequence of question-answer (QA) pairs and enhance existing datasets (e.g., for summarization) with a QA blueprint operating as a proxy for content selection (i.e., what to say) and planning (i.e., in what order). We obtain blueprints automatically by exploiting state-of-the-art question generation technology, converting input-output pairs into input-blueprint-output tuples. We develop Transformer-based models that vary in how they incorporate the blueprint into the generated output (e.g., as a global plan or iteratively). Evaluation across metrics and datasets demonstrates that blueprint models are more factual than alternatives that do not resort to planning, and allow tighter control of the generated output.
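The input-blueprint-output idea can be sketched with a small data structure. The blueprint below is hand-written for illustration only; as the abstract notes, the actual blueprints are derived automatically with question-generation models, and the class and function names here are assumptions.

```python
# Illustrative sketch of an input-blueprint-output tuple for summarization.
from dataclasses import dataclass

@dataclass
class BlueprintExample:
    source: str                # input document
    blueprint: list            # ordered (question, answer) pairs
    target: str                # output summary

example = BlueprintExample(
    source="The city council approved the new bike-lane budget on Monday.",
    blueprint=[
        ("Who approved the budget?", "the city council"),
        ("When was it approved?", "on Monday"),
    ],
    target="The city council approved the bike-lane budget on Monday.",
)

def linearize(blueprint: list) -> str:
    """Serialize the QA plan so a seq-to-seq model can predict it
    before (global plan) or while (iteratively) generating the target."""
    return " ".join(f"Q: {q} A: {a}" for q, a in blueprint)
```

Because every answer in the plan must be grounded in the source, checking the generated text against the blueprint gives a handle on factuality and on controlling what the output covers.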
Anne Lauscher, University of Hamburg
2023 is the year of chatbots: state-of-the-art open-domain conversational AI systems suddenly produce fluent, engaging, and often helpful responses to a wide variety of user inputs, and they have quickly reached a broad audience. In the coming years, they will continue to make inroads into our daily lives and become an integral part of our communication. Nevertheless, conversational AI currently has a variety of critical flaws that threaten responsible and societally useful "communication of the future." For example, such systems regularly produce factually incorrect statements, encode and replicate unfair stereotypes, and exclude individuals from culturally and subculturally underrepresented groups. We will discuss some of these (ethical) problems related to conversational AI, as well as their causes. The central concept that emerges is the question of "the truth." Based on that, we will exemplify how our research contributes to more responsible AI-based communication.