About me

News

Bio

I’m an Associate Professor of Computer Science at the IT University of Copenhagen, where I’m part of the NLPnorth natural language processing group. Until 2022, I was also part of the Computational Linguistics Group at Uppsala University as a Researcher and Associate Professor (Docent) in Computational Linguistics. From 2019 to 2021, I spent two years as a visitor and Senior Researcher at the School of Informatics of the University of Edinburgh. From 2009 to 2011, I was a member of the machine translation group at Fondazione Bruno Kessler in Trento.

I hold a PhD in Computational Linguistics from Uppsala University. My PhD thesis, on discourse in statistical machine translation, received the Best Thesis Award of the European Association for Machine Translation in 2015. My supervisors were Joakim Nivre, Jörg Tiedemann and Marcello Federico. I also have an MA in Nordic Philology from the University of Basel.

Research

I work in computational linguistics, and my research touches on statistical natural language processing, machine learning, machine translation, translation studies and text linguistics. My goal is to create NLP systems with a higher awareness of linguistic context and non-linguistic aspects of communicative situations. I am also interested in studying high-level problems in translation using methods from statistical NLP and machine translation.

I’m a member of the organising committee of the Workshop on Computational Approaches to Discourse (CODI), held at EMNLP 2020, EMNLP 2021, COLING 2022, ACL 2023 and EACL 2024. I also co-organised the Workshops on Gender Bias in Natural Language Processing (GeBNLP) at ACL 2019, COLING 2020, ACL-IJCNLP 2021 and NAACL 2022, and the Workshop on Discourse in MT (DiscoMT), last held at EMNLP-IJCNLP 2019.

Topics I am currently interested in include the following:

Uncertainty quantification and communication

How can we measure how certain or confident large language models are of the output they produce (1, 2)? How can we ensure that their confidence estimates are well calibrated rather than overblown, and that the text they produce correctly reflects this confidence? And how can large language models effectively convey their level of confidence to diverse user groups?

Toxicity and bias

I am keen on modelling, in an explainable way, what makes specific forms of toxic language toxic, for instance in the context of dehumanising language (3) and threats. I’m also part of the SafeNet project, in which we study how social media platforms respond to reports of unsafe language across 19 European countries. In addition, I have studied gender bias in NLP, particularly in the interpretation and generation of referring expressions (4, 5).

Reference

I’m particularly curious about how referring expressions such as pronouns (like she, it, they or this) and lexical noun phrases (like a cat, the house, scrambled eggs or my research) are used across languages, how human translators treat them, what machine translation systems should do with them, and how we can use multilingual data to interpret them automatically.

These are problems I’ve studied for many years and have approached from many angles:

  • Discourse-level MT (6) and its evaluation (7, 8, 9)
  • Cross-lingual coreference resolution (10), cross-lingual pronoun prediction (11, 12) and neural language modelling (13)
  • Automatic discovery of discourse-related language contrasts in human-translated parallel corpora (14, 15)
  • Cross-lingual studies of the generation and interpretation of referring expressions with human subjects (16, 17, 18)
  • Coreference annotation of multilingual corpora (19, 20)