- The track has concluded and accepted blogposts are viewable here!
- The poster session for the blog track will take place at 11:30 on Tuesday May 2nd in room MH1-2-3-4.
- How does the inductive bias influence the generalization capability of neural networks?
- Charlotte Barth, Thomas Goerttler, Klaus Obermayer
- Universality of Neural Networks on Sets vs. Graphs
- Fabian B. Fuchs, Petar Veličković
- Data Poisoning is Hitting a Wall
- Rajat Sahay
- Decay No More
- Fabian Schaipp
- Rethinking the Implementation Tricks and Monotonicity Constraint in Cooperative Multi-agent Reinforcement Learning
- Jian Hu, Siying Wang, Siyang Jiang, Weixun Wang
- Autoregressive Renaissance in Neural PDE Solvers
- Yolanne Lee
- A Hitchhiker’s Guide to Momentum
- Fabian Pedregosa
- Thinking Like Transformers
- Alexander Rush, Gail Weiss
- Strategies for Classification Layer Initialization in Model-Agnostic Meta-Learning
- Nys Tjade Siegel, Thomas Goerttler, Klaus Obermayer
- Practical Applications of Bsuite For Reinforcement Learning
- Loren Anderson, Nathan Bittner
- How much meta-learning is in image-to-image translation?
- Maximilian Eißler, Thomas Goerttler, Klaus Obermayer
ICLR 2023 Blogposts Track
The Machine Learning community is currently experiencing a reproducibility crisis and a reviewing crisis [Littman, 2021]. Because of the highly competitive and noisy reviewing process of ML conferences [Tran et al., 2020], researchers have an incentive to oversell their results, slowing down progress and diminishing the integrity of the scientific community. Moreover, with the growing number of papers published and submitted at the main ML conferences [Lin et al., 2020], it has become more challenging to keep track of the latest advances in the field.
Blog posts are becoming an increasingly popular and useful way to talk about science [Brown and Woolston, 2018]. They offer substantial value to the scientific community by providing a flexible platform to foster open, human, and transparent discussions about new insights or limitations of a scientific publication. However, because they are not as recognized as standard scientific publications, only a minority of researchers manage to maintain an active blog and get visibility for their efforts. Many are well-established researchers (Francis Bach, Ben Recht, Ferenc Huszár, Lilian Weng) or big corporations that leverage entire teams of graphic designers and writers to polish their blogs (Facebook AI, Google AI, DeepMind, OpenAI). As a result, the incentives for writing scientific blog posts are largely personal; it is unreasonable to expect a significant portion of the machine learning community to contribute to such an initiative when everyone is trying to establish themselves through publications.
A Blog Post Conference Track
Last year, we ran the first iteration of the Blogpost track at ICLR 2022! It was very successful, attracting over 60 submissions, of which 20 posts were accepted.
Our goal is to create a formal call for blog posts at ICLR to incentivize and reward researchers who review past work and summarize its outcomes, develop new intuitions, or highlight shortcomings. A very influential initiative of this kind took place after the Second World War in France. Because of the lack of up-to-date textbooks, a collective of mathematicians writing under the pseudonym Nicolas Bourbaki [Halmos, 1957] decided to start a series of textbooks on the foundations of mathematics [Bourbaki, 1939]. In the same vein, we aim to provide a new way to summarize scientific knowledge in the ML community.
Due to the large diversity of topics that could be discussed in a blog post, we decided to restrict the scope of this call. We identified that the blog posts that would bring the most value to the community and the conference are those that distill and discuss previously published papers.
Abstract deadline: February 2nd AOE, 2023 (submit to OpenReview).
- Submission deadline: February 10th AOE, 2023 (any modifications to your blog post, via a pull request on GitHub).
Notification of acceptance: March 31st, 2023
Camera-ready merge: April 28th, 2023 (please follow the instructions here)
A call for blog posts discussing work previously published at ICLR
The format and process for this blog post track is as follows:
- Write a post on a subject that has been published at ICLR relatively recently. The authors of the blog posts will have to declare their conflicts of interest (positive or negative) with the paper they write about (and its authors). Conflicts of interest include:
- Recent collaborators (less than 3 years)
- Current institution.
Blog posts must not be used to highlight or advertise past publications of the authors or of their lab. Previously, we did not accept submissions with a conflict of interest; however, this year we will only ask authors to report whether they have such a conflict. If so, reviewers will be asked to judge whether the submission is sufficiently critical of, and objective about, the papers addressed in the blog post.
- Blogs will be peer-reviewed (double-blind) for quality and novelty of the content: clarity and pedagogy of the exposition, new theoretical or practical insights, reproduction/extension of experiments, etc. We are slightly relaxing the double-blind constraints by assuming good faith from both submitters and reviewers (see the submission instructions for more details).
As a result, we restrict submissions to the Markdown format. We believe this is a good trade-off between complexity and flexibility. Markdown enables users to easily embed media such as images, gifs, audio, and video as well as write mathematical equations using MathJax, without requiring users to know how to create HTML web pages. This (mostly) static format is also fairly portable; users can download the blog post without much effort for offline reading or archival purposes. More importantly, this format can be easily hosted and maintained through GitHub.
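As an illustration, a submission in this format might embed an image and a MathJax equation along the following lines (the heading, file path, and URL below are hypothetical placeholders, not part of the official template):

```markdown
## Revisiting [Paper Title]

![Placeholder: training curves from our reproduction](assets/img/training-curves.png)

The paper minimizes the loss with plain gradient descent:

$$ \theta_{t+1} = \theta_t - \eta \, \nabla_\theta \mathcal{L}(\theta_t) $$

See the [original paper](https://openreview.net/forum?id=XXXX) for full details.
```

Because the snippet is plain Markdown with MathJax delimiters, it renders on GitHub Pages without any custom HTML.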
David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, and Tom Goldstein. An open review of openreview: A critical analysis of the machine learning conference review process. arXiv, 2020.
Hsuan-Tien Lin, Maria-Florina Balcan, Raia Hadsell, and Marc'Aurelio Ranzato. What we learned from the NeurIPS 2020 reviewing process. Medium, https://medium.com/@NeurIPSConf/what-we-learned-from-neurips-2020-reviewing-process-e24549eea38f, 2020.