Self-Improving World Modelling with Latent Actions
Feb 9, 2026
1 min read
Yifu Qiu
Zheng Zhao
Weixian Waylon Li
Yftah Ziser
Anna Korhonen
Shay B. Cohen
Edoardo M. Ponti

Abstract
Internal modelling of the world, i.e. predicting how a previous state transitions to the next state under an action, is essential for reasoning and planning in LLMs and VLMs. Learning such models typically requires costly action-labelled trajectories. We propose SWIRL, a self-improvement framework that learns from state-only sequences by treating actions as a latent variable and alternating between a Forward World Model (FWM) and an Inverse Dynamics Model (IDM). SWIRL iterates two phases: (1) Variational Information Maximisation, which updates the FWM to generate next states that maximise the conditional mutual information with the latent actions given the prior states, encouraging identifiable consistency; and (2) ELBO Maximisation, which updates the IDM to explain the observed transitions, so that the two phases together perform coordinate ascent. Both models are trained with reinforcement learning (specifically, GRPO), using the frozen opposite model's log-probability as the reward signal. We provide theoretical learnability guarantees for both updates, and evaluate SWIRL on LLMs and VLMs across multiple environments: single-turn and multi-turn open-world visual dynamics and synthetic textual environments for physics, web, and tool calling. SWIRL achieves gains of 16% on AURORABench, 28% on ByteMorph, 16% on WorldPredictionBench, and 14% on StableToolBench.
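
As a reading aid (notation mine, not taken verbatim from the paper), the two phases can be viewed through the standard latent-variable bounds the abstract appears to invoke, with prior state $s$, next state $s'$, latent action $a$, FWM $p_\theta$, and IDM $q_\phi$:

$$
\log p_\theta(s' \mid s) \;\ge\; \mathbb{E}_{q_\phi(a \mid s, s')}\!\big[\log p_\theta(s' \mid s, a)\big] \;-\; \mathrm{KL}\!\big(q_\phi(a \mid s, s') \,\|\, p(a \mid s)\big)
\quad \text{(ELBO; phase 2, IDM update)},
$$

$$
I(A; S' \mid S) \;\ge\; H(A \mid S) \;+\; \mathbb{E}\big[\log q_\phi(a \mid s, s')\big]
\quad \text{(Barber–Agakov bound; phase 1, FWM update)}.
$$

Under this sketch, maximising either bound with the opposite model frozen reduces to maximising that frozen model's log-probability on the sampled transition, which is consistent with its use as the GRPO reward signal described above; the paper specifies the exact objectives and reward shaping.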
Type
Publication
Preprint 2026
Citation
@misc{qiu2026selfimprovingworldmodellinglatent,
  title={Self-Improving World Modelling with Latent Actions},
  author={Yifu Qiu and Zheng Zhao and Waylon Li and Yftah Ziser and Anna Korhonen and Shay B. Cohen and Edoardo M. Ponti},
  year={2026},
  eprint={2602.06130},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.06130},
}