Lag-Llama's pretraining on a diverse corpus of time series data from various domains underpins its strong zero-shot generalisation capabilities. The pretraining corpus comprises 27 datasets spanning six domains (energy, transportation, economics, nature, air quality, and cloud operations), totalling close to 8K univariate time series and 352M tokens.