Katerina Margatina


I’m currently an Applied Scientist at Amazon in NYC, working with the AWS Bedrock Agents team. My main focus is on making LLM agents more useful, reliable, and efficient.

I earned my PhD in Computer Science at the University of Sheffield under the supervision of Prof. Nikos Aletras, researching active learning algorithms for data-efficient language models. Along the way, I spent time as a Research Scientist intern at Meta AI (FAIR) in London, where I explored the intersection of in-context learning and active learning for LLMs, and at AWS in NYC, where I studied the temporal robustness of LLMs. I also visited the CoAStaL group at the University of Copenhagen, where I worked on learning from disagreement and cross-cultural NLP.

Before my doctoral studies (what now feels like a lifetime ago), I was a Machine Learning Engineer at DeepSea Technologies. As an undergraduate, I studied Electrical & Computer Engineering at the National Technical University of Athens (NTUA).

news

Jul 22, 2024 Just defended my PhD thesis, “Exploring Active Learning Algorithms for Data Efficient Language Models”, and passed with no corrections!!! Beyond happy to finally reach this milestone!
Apr 24, 2024 Super excited to share that our preprint The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models is on arXiv!
Jan 16, 2024 Life update! Just joined AWS as an Applied Scientist!
Oct 8, 2023 Two papers accepted at EMNLP 2023!
Jul 17, 2023 Invited talk at the Archimedes Summer School in Athens (slides).
Jun 14, 2023 Invited talk at the Active Learning Speaker Series at Meta in London (slides).
May 9, 2023 Excited to have our position paper On the Limitations of Simulating Active Learning accepted to the Findings of ACL 2023. Joint work with my advisor Nikos Aletras!

selected publications

  1. arXiv
    The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
    Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, Bertie Vidgen, He He, and Scott A. Hale
    arXiv preprint, 2024
  2. EMNLP-Findings
    Active Learning Principles for In-Context Learning with Large Language Models
    Katerina Margatina, Timo Schick, Nikolaos Aletras, and Jane Dwivedi-Yu
    In Findings of the Association for Computational Linguistics: EMNLP 2023
  3. ACL-Findings
    On the Limitations of Simulating Active Learning
    Katerina Margatina, and Nikolaos Aletras
    In Findings of the Association for Computational Linguistics: ACL 2023
  4. EMNLP
    ✨ Oral ✨
    Active Learning by Acquiring Contrastive Examples
    Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras
    In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2021