CuRe develops new evaluation methods to test and improve how language models interpret culture, using Danish literature as a rigorous testing ground for cultural reasoning.
Independent Research Fund Denmark (DFF), 2026–2030

CuRe investigates how language models interpret culture — not through facts or stereotypes, but through the rich, ambiguous, and historically layered world of literature. By combining NLP, literary studies, and expert-driven evaluation design, the project develops new methods for assessing and improving cultural reasoning in AI systems.
Culture shapes meaning, and meaning is where AI struggles most.
Literature embeds cultural knowledge in rich, ambiguous, and historically layered forms. This makes it an ideal testing ground for evaluating how AI understands culture, and for building models that respect cultural nuance rather than reducing it to stereotypes.
CuRe addresses its core research questions through three complementary strands of work:
Construction of high-quality corpora and interpretive benchmarks based on Danish literature, including MeMo, Mini-WorldLit, and canonical texts. Data includes passages, interpretive annotations, student essays, and expert commentary, designed following the ECBD framework.
Evaluation of retrieval-augmented generation (RAG), long-context models, and soft-label annotation strategies. Analysis of interpretive ambiguity, multiple valid readings, and the relationship between close and distant reading.
Adaptation of Danish Foundation Models (DFM), experiments with pretraining mixtures, fine-tuning with expert feedback, and human-in-the-loop reinforcement learning.
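The soft-label annotation strategy mentioned above can be sketched in a few lines. This is a minimal illustration, not project code; the function names and the example label set are hypothetical assumptions:

```python
import math
from collections import Counter


def soft_labels(annotations):
    """Turn several annotators' readings into a probability distribution.

    Instead of collapsing disagreement with a majority vote, every valid
    reading keeps probability mass proportional to how often it was chosen.
    """
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


def cross_entropy(soft, model_probs, eps=1e-12):
    """Score a model's assigned probabilities against the soft-label
    distribution (lower is better; eps guards against log(0))."""
    return -sum(p * math.log(model_probs.get(label, eps))
                for label, p in soft.items())


# Hypothetical example: four annotators interpret the same passage.
readings = ["irony", "irony", "sincerity", "ambivalence"]
labels = soft_labels(readings)
# {'irony': 0.5, 'sincerity': 0.25, 'ambivalence': 0.25}
```

Evaluating against the full distribution, for example with cross-entropy, rewards models that spread probability across multiple valid readings instead of committing to a single "correct" interpretation.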
Daniel Hershcovich (PI) Tenure-Track Assistant Professor, Department of Computer Science, University of Copenhagen
Jens Bjerring-Hansen (Co-PI) Associate Professor, Department of Nordic Studies and Linguistics, University of Copenhagen
Desmond Elliott (Advisor) Associate Professor, Department of Computer Science, University of Copenhagen
A rolling list of project publications (2026–2030) will be maintained here.
For inquiries or collaboration:
dh@di.ku.dk