Apertus

Research News

Papers and technical reports from the Apertus project.

Apertus: Democratizing Open and Compliant LLMs for Global Language Environments

Apertus Project

Main technical report: architecture, training methodology, data pipeline, and evaluation

Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs

Fan, Sabolčec, Ansaripour, Tarun, Jaggi, Bosselut, Schlag

Shows that respecting robots.txt opt-outs causes minimal performance degradation
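The opt-out mechanism studied in this paper is the standard robots.txt protocol. As a minimal illustration of what "respecting opt-outs" means in practice, the sketch below checks crawl permissions with Python's standard library; the robots.txt content and user-agent names are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt illustrating a crawler opt-out: the site
# blocks a specific AI-training crawler but allows other agents
# everywhere except /private/.
ROBOTS_TXT = """\
User-agent: AI-Trainer
Disallow: /

User-agent: *
Disallow: /private/
"""

def may_crawl(user_agent: str, url: str) -> bool:
    """Return True if the robots.txt rules permit `user_agent` to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(may_crawl("AI-Trainer", "https://example.org/page"))   # False: opted out
print(may_crawl("GenericBot", "https://example.org/page"))   # True
```

A data pipeline that honors such rules simply drops every URL for which this check returns False before fetching.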

Positional Fragility in LLMs: How Offset Effects Reshape Our Understanding of Memorization Risks

Xu, Bosselut, Schlag

Research on memorization patterns and copyright risks in LLMs

Quantifying Training Data Retention in Large Language Models: An Analysis of Pretraining Factors and Mitigation Strategies

Yixuan Xu (Master's thesis)

Analysis of memorization and mitigation strategies applied in Apertus

INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge

Romanou et al.

Multilingual evaluation benchmark across 44 languages

Deriving Activation Functions Using Integration

Huang, Schlag

xIELU activation function used in Apertus architecture
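The paper's core idea is to derive a new activation function by choosing a desired gradient and integrating it in closed form. The sketch below illustrates that recipe with an ELU-style gradient; the specific gradient, scale `ALPHA`, and resulting formula are illustrative choices, not the published xIELU parameterization.

```python
import math

ALPHA = 1.0  # hypothetical scale for the negative branch

def gradient(x: float) -> float:
    """Desired derivative: linearly increasing for x > 0,
    decaying exponentially toward 0 for x <= 0 (ELU-like)."""
    return 2.0 * x if x > 0 else ALPHA * math.exp(x)

def activation(x: float) -> float:
    """Closed-form antiderivative of `gradient`, with the integration
    constant chosen so that activation(0) == 0."""
    if x > 0:
        return x * x                        # integral of 2x
    return ALPHA * (math.exp(x) - 1.0)      # integral of ALPHA * e^x, shifted

# Finite-difference check that `activation` really integrates `gradient`.
h = 1e-6
for x in (-2.0, -0.5, 0.5, 2.0):
    numeric = (activation(x + h) - activation(x - h)) / (2 * h)
    assert abs(numeric - gradient(x)) < 1e-4
```

In the paper, the analogous integration constants and slopes are trainable parameters, which is what makes the derived activation usable as a drop-in layer.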

Visit our 📖 Zotero group for further literature.

© 2026 ETH AI Center & EPFL AI Center