A. (Arianna) Bisazza, PhD

Publications

A Primer on the Inner Workings of Transformer-based Language Models

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off

Inseq: An Interpretability Toolkit for Sequence Generation Models

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

Wave to Syntax: Probing spoken language models for syntax

DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages

Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages