• The 22 blog posts for 2024 are now published! Check our press release for an overview, or dive directly into them on the Blog page
  • More information regarding the poster session will be available soon.


ICLR 2024 Blogposts Track

The Machine Learning community is currently experiencing a reproducibility crisis and a reviewing crisis [Littman, 2021]. Because of the highly competitive and noisy reviewing process of ML conferences [Tran et al., 2020], researchers have an incentive to oversell their results, slowing down progress and diminishing the integrity of the scientific community. Moreover, with the growing number of papers submitted to and published at the main ML conferences [Lin et al., 2020], it has become more challenging to keep track of the latest advances in the field.

Blog posts are becoming an increasingly popular and useful way to talk about science [Brown and Woolston, 2018]. They offer substantial value to the scientific community by providing a flexible platform to foster open, human, and transparent discussions about new insights or limitations of a scientific publication. However, because they are not as recognized as standard scientific publications, only a minority of researchers manage to maintain an active blog and get visibility for their efforts. Many of those who do are well-established researchers (Francis Bach, Ben Recht, Ferenc Huszár, Lilian Weng) or big corporations that leverage entire teams of graphic designers and writers to polish their blogs (Facebook AI, Google AI, DeepMind, OpenAI). As a result, the incentives for writing scientific blog posts are largely personal; it is unreasonable to expect a significant portion of the machine learning community to contribute to such an initiative when everyone is trying to establish themselves through publications.

Submit your blog post on OpenReview

A Blog Post Conference Track

Last year, we ran the second iteration of the Blogposts Track at ICLR 2023! It was very successful, with accepted posts presented in person at the main conference.

Our goal is to create a formal call for blog posts at ICLR to incentivize and reward researchers to review past work, summarize its outcomes, develop new intuitions, or highlight its shortcomings. A very influential initiative of this kind happened after the Second World War in France. Because of the lack of up-to-date textbooks, a collective of mathematicians writing under the pseudonym Nicolas Bourbaki [Halmos, 1957] decided to start a series of textbooks about the foundations of mathematics [Bourbaki, 1939]. In the same vein, we aim to provide a new way to summarize scientific knowledge in the ML community.

Because of the large diversity of topics that can be discussed in a blog post, we decided to restrict the scope of this call. We identified that the blog posts that would bring the most value to the community and the conference are posts that distill and discuss previously published papers.


The N Implementation Details of RLHF with PPO
     Shengyi Costa Huang, Tianlin Liu, Leandro von Werra
How to compute Hessian-vector products?
     Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau
Bridging the Data Processing Inequality and Function-Space Variational Inference
     Andreas Kirsch

Accepted Posts

Understanding in-context learning in transformers
     Simone Rossi, Rui Yuan, Thomas Hannagan
Behavioral Differences in Mode-Switching Exploration for Reinforcement Learning
     Loren J Anderson
Fairness in AI: two philosophies or just one?
     MaryBeth Defrance
Towards Robust Foundation Models: Adversarial Contrastive Learning
     Jingfeng Zhang, Xilie Xu
A New Alchemy: Language Model Development as a Subfield?
     Colin Raffel
Understanding gradient inversion attacks from the prior knowledge perspective
     Yanbo Wang, Jian Liang, Ran He
Building Diffusion Model’s theory from ground up
     Ayan Das
Masked Language Model with ALiBi and CLAP head
     Jason Chuan-Chih Chou
What exactly has TabPFN learned to do?
     Calvin McCarter
Elaborating on the Value of Flow Matching for Density Estimation
     Maternus Herold, Faried Abu Zaid
The Hidden Convex Optimization Landscape of Two-Layer ReLU Networks
     Victor Mercklé, Franck Iutzeler, Ievgen Redko
Deep Equilibrium Models For Algorithmic Reasoning
     Sophie Xhonneux, Yu He, Andreea Deac, Jian Tang, Gauthier Gidel
Fair Model-Based Reinforcement Learning Comparisons with Explicit and Consistent Update Frequency
     Albert Thomas, Abdelhakim Benechehab, Giuseppe Paolo, Balázs Kégl
Exploring Meta-learned Curiosity Algorithms
     Batsirayi Mupamhi Ziki
Unraveling The Impact of Training Samples
     Daiwei Chen, Jane Zhang, Ramya Korlakai Vinayak
RLHF without RL - Direct Preference Optimization
     Michael Panchenko
It’s Time to Move On: Primacy Bias and Why It Helps to Forget
     Matthew Kielo, Vladimir Lukin
Double Descent Demystified
     Rylan Schaeffer, Zachary Robertson, Akhilan Boopathy, Mikail Khona, Kateryna Pistunova, Jason W. Rocks, Ila R. Fiete, Andrey Gromov, Sanmi Koyejo
On Bayesian Model Selection: The Marginal Likelihood, Cross-Validation, and Conditional Log Marginal Likelihood
     Andreas Kirsch

Key Dates

Abstract deadline: December 11th, 2023, 00:00 GMT (submit to OpenReview; link to be announced soon).

Submission deadline: December 17th, 2023, 00:00 GMT (any modifications to your blog post, made via a pull request on GitHub).

Decision notification: February 15th, 2024 (UPDATED; previously January 30th, 2024)

Camera-ready merge: March 15th, 2024

A call for blog posts discussing work previously published at ICLR


Write a post on a subject that has been published at a top-tier venue (ICLR, ICML, NeurIPS, AAAI, UAI, CVPR, SIGGRAPH, ECCV, ICCV, etc.) relatively recently.

Conflict of interest

The authors of the blog posts will have to declare their conflicts of interest (positive or negative) with the paper (and the paper's authors) they write about. Conflicts of interest include:

  • Recent collaborators (within the last 3 years)
  • Current institution

Blog posts must not be used to highlight or advertise past publications of the authors or their lab.

We will only ask the authors to report whether they have a conflict of interest. If so, reviewers will be asked to judge whether the submission is sufficiently critical and objective of the papers addressed in the blog post.


Blog post

The posts will be created and published under a unified template; see the submission instructions and the sample post hosted on the blog of this website.


Additionally, accepted posts will have the option to present their work as a poster during the main poster session. For more information about the main poster session (time, poster format, etc.) please refer to the ICLR homepage.


Our goal is to avoid heavily engineered, professionally made blog posts, such as the "100+ hours" of effort mentioned as a standard in the Distill guidelines, and to encourage ideas and clear writing rather than dynamic visualizations or embedded JavaScript engines. Please check our submission instructions for more details. We accept submissions in both Markdown and HTML; we believe this is a good trade-off between complexity and flexibility.

Submit your blog post on OpenReview


For any technical issues with the blog post repository (for example, blog posts not displaying correctly, or trouble following the submission instructions), please open an issue in our GitHub repository.

For other inquiries, reach us via email at:



Michael L Littman. Collusion rings threaten the integrity of computer science research. Communications of the ACM, 2021.

David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, and Tom Goldstein. An open review of OpenReview: A critical analysis of the machine learning conference review process. arXiv, 2020.

Hsuan-Tien Lin, Maria-Florina Balcan, Raia Hadsell, and Marc’Aurelio Ranzato. What we learned from NeurIPS 2020 reviewing process. Medium, 2020.

Eryn Brown and Chris Woolston. Why science blogging still matters. Nature, 2018.

Paul R Halmos. Nicolas Bourbaki. Scientific American, 1957.

Nicolas Bourbaki. Elements of mathematics. Éditions Hermann, 1939.