Please note that refreshments will be provided. 

On Mon, Jan 1, 2024 at 8:00 AM Sagie Benaim <sagie.benaim@mail.huji.ac.il> wrote:
Reminder, this is happening today. 

On Mon, Dec 25, 2023 at 6:00 PM Sagie Benaim <sagie.benaim@mail.huji.ac.il> wrote:
Dear all, 

Next week, we have the pleasure of hosting Dr. Yftah Ziser, who will give a talk in the colloquium.

The seminar will be held on Monday, January 1st at 14:00.
Location: C220.

The title, abstract and bio appear below.

Looking forward to seeing you,
Sagie and Liat

Title:
Democratising Natural Language Processing: Overcoming Language and Domain Barriers in Low-Resource Environments

Abstract:
Natural language processing (NLP) has been revolutionised in recent years to the point where it is an inseparable part of our daily lives. The transition to transformer-based models allows us to train models on vast amounts of text efficiently, proving that scale plays a crucial role in improving performance. Unfortunately, many people worldwide are marginalised and lack access to high-quality NLP models, as the languages they speak and the domains they are interested in account for only a tiny fraction of current state-of-the-art models' training sets.

This talk will address the challenges, approaches, and opportunities for democratising NLP across different languages and domains by developing methods to improve NLP in low-resource scenarios. I will start by discussing how we can ease distribution mismatches to improve performance using representation learning.
However, as NLP models become increasingly present in our lives, improving other crucial aspects beyond their performance, such as their fairness, factuality, and our ability to understand their underlying mechanisms, is essential.
Therefore, I will also discuss the use of spectral methods to remove information from neural networks, reducing undesired attributes such as bias and thereby increasing fairness where sensitive data is scarce.
Finally, I will explore future directions for making these models accessible to a broader audience by improving the aspects mentioned above in low-resource scenarios.

Bio:
Yftah Ziser is a Postdoctoral Researcher at the School of Informatics, University of Edinburgh, hosted by Shay Cohen.
He focuses on deep learning methods for addressing the resource bottleneck, which poses a serious challenge to the worldwide accessibility of NLP technology.
His research develops methods to improve the performance, fairness, and factuality of low-resource models, alongside analysis methods that deepen our understanding of them.
He co-organised the Domain Adaptation for NLP Workshop at EACL 2021.

Before joining the University of Edinburgh, Yftah worked as a research scientist at Amazon Alexa. He obtained his PhD from the Technion, where he was advised by Roi Reichart.