Katerina Margatina

Email
Google Scholar
Semantic Scholar
GitHub
LinkedIn
Twitter

Hello world.🦋 I am a final-year Ph.D. student in the Computer Science department at the University of Sheffield, working on natural language processing & machine learning. My advisor is Nikos Aletras and my work is funded by an Amazon Alexa Fellowship. My research focuses on active learning, evaluation & benchmarking, and in-context learning, but I’m fascinated by other topics as well!

During my Ph.D., I have interned at Meta AI (FAIR) in London with Jane Dwivedi-Yu and Timo Schick (2023), and at Amazon Web Services (AWS) in NYC with Miguel Ballesteros and the Amazon Comprehend team (2022).

I have also visited the CoAStaL group at the University of Copenhagen (2021), where I had the pleasure of working with Anders Søgaard and the rest of the team on learning from disagreement and cross-cultural NLP.

Prior to starting my Ph.D., I worked as a Machine Learning Engineer at the awesome Greek startup DeepSea Technologies. As an undergraduate, I studied Electrical & Computer Engineering at the National Technical University of Athens (NTUA).

news

May 23, 2023 Thrilled to share a pre-print of our paper Active Learning Principles for In-Context Learning with Large Language Models, joint work with Timo, Jane and Nikos! This is probably the last paper of my Ph.D., so it is extra special.🖤
May 9, 2023 Excited to have our position paper On the Limitations of Simulating Active Learning accepted in the Findings of ACL 2023. Joint work with my advisor Nikos Aletras!
Jan 27, 2023 Two papers accepted at EACL 2023 (main conference)!
Sep 1, 2022 Very excited to start my internship at Meta AI in London!

selected publications

  1. Pre-print
    Active Learning Principles for In-Context Learning with Large Language Models
    Margatina, K., Schick, T., Aletras, N., and Dwivedi-Yu, J.
    2023
  2. ACL-Findings
    On the Limitations of Simulating Active Learning
    Margatina, K., and Aletras, N.
    In Findings of the Association for Computational Linguistics (ACL) 2023
  3. EACL
    Dynamic Benchmarking of Masked Language Models on Temporal Concept Drift with Multiple Views
    Margatina, K., Wang, S., Vyas, Y., John, N.A., Benajiba, Y., and Ballesteros, M.
    In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL) 2023
  4. EACL
    Investigating Multi-source Active Learning for Natural Language Inference
    Snijders, A., Kiela, D., and Margatina, K.
    In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL) 2023
  5. ACL
    On the Importance of Effectively Adapting Pretrained Language Models for Active Learning
    Margatina, K., Barrault, L., and Aletras, N.
    In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL) 2022
  6. EMNLP ✨ Oral ✨
    Active Learning by Acquiring Contrastive Examples
    Margatina, K., Vernikos, G., Barrault, L., and Aletras, N.
    In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2021