Highlights of NeurIPS 2024

Feb 13, 2025

The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) convened from December 10 to 15 at the Vancouver Convention Center, maintaining its status as a premier venue for machine learning and AI research. Founded in 1987, the conference is now a multi-track interdisciplinary annual meeting that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Alongside the conference run a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for exchanging ideas. This year saw a substantial increase in participation, with over 4,000 papers accepted, complemented by 56 workshops and 14 tutorials, reflecting the dynamic growth and diversification of the AI research community.

Conference Agenda and Structure

NeurIPS 2024 offered a comprehensive agenda that included:

  • Invited Talks: Featuring insights from leading experts in AI and machine learning.

  • Oral Presentations and Spotlights: Showcasing cutting-edge research findings.

  • Poster Sessions: Providing interactive discussions on a wide array of topics.

  • Workshops and Tutorials: Facilitating in-depth exploration of specialized subjects.

  • Affinity Events and Socials: Promoting networking and community building among attendees.

Ilya Sutskever’s Test of Time Award Talk

OpenAI’s cofounder and former chief scientist, Ilya Sutskever, made headlines in 2024 when he left to start his own AI lab, Safe Superintelligence Inc. He has avoided the limelight since his departure but made a rare public appearance in Vancouver to accept the Test of Time Award for his 2014 paper "Sequence to Sequence Learning with Neural Networks." The talk reflected on that seminal work and discussed its impact and evolution over the decade since. Sutskever began by revisiting the "Deep Learning Hypothesis" introduced at the time, which posited that a 10-layer neural network could perform any task that a human can do in a fraction of a second.

Ilya’s Take on Pre-training

A slide from Ilya’s talk emphasized that the "Early Scaling Hypothesis," the idea that scaling up data to "pre-train" AI systems would send them to new heights, was starting to reach its limits.

"But pre-training as we know it will unquestionably end," Sutskever declared before thousands of attendees at the NeurIPS conference in Vancouver. "While compute is growing," he said, "the data is not growing, because we have but one internet.”

Sutskever offered some ways to push the frontier despite this conundrum. AI systems themselves could generate new synthetic training data, he said, or models could evaluate multiple answers before settling on the best response for a user, improving accuracy. Other scientists have set their sights on real-world data. His talk culminated in a prediction of a future of superintelligent machines, where AI will reason through problems much as humans do.
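
The second suggestion, letting a model weigh several candidate answers at inference time, can be pictured with a toy best-of-N loop. The sketch below is purely illustrative; `generate` and `score` are hypothetical stand-ins for an LLM sampler and a verifier or reward model, not any particular system's API.

```python
# Toy best-of-N selection: sample several candidate answers and keep the one a
# scorer rates highest. `generate` and `score` are hypothetical stand-ins.
import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one candidate answer from a model.
    return f"candidate-{random.randint(0, 9)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier or reward model rating the candidate.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Why is the sky blue?"))
```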

His address sparked extensive discussion across the AI world, with some interpreting it as signalling the end of traditional pre-training methods. However, his emphasis was on the need for innovative approaches to data acquisition, aimed at enhancing the efficacy of pre-training rather than abandoning it.

NeurIPS 2024 Best Paper Awards

The best and runner-up paper awards this year went to five groundbreaking papers, four from the main track and one from the datasets and benchmarks track. They highlight, respectively, a new autoregressive model for vision, new avenues for supervised learning using higher-order derivatives, improved training of LLMs, better inference-time guidance for image diffusion models, and a novel, diverse benchmark dataset for LLM alignment.

Best Papers for the Main Track:

  1. Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction

    Proposes a novel autoregressive model that generates images scale by scale, predicting the next higher-resolution token map at each step, leveraging a multi-scale VQ-VAE. Outperforms traditional autoregressive models in efficiency and rivals diffusion-based methods, offering compelling insights on scaling laws (a minimal sketch of the next-scale idea follows this list). Read more

  2. Stochastic Taylor Derivative Estimator: Efficient Amortization for Arbitrary Differential Operators

    Introduces the Stochastic Taylor Derivative Estimator (STDE) for training neural networks with higher-order derivatives, addressing inefficiencies in automatic differentiation. Enables scalable solutions for physics-informed neural networks and PDEs with high-dimensional or high-order complexity (an illustrative randomized-estimator sketch also follows this list). Read more
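
To make next-scale prediction concrete, here is a minimal illustrative sketch, not the authors' implementation: the model autoregresses over resolutions rather than over individual tokens, and each step predicts an entire higher-resolution token map conditioned on all coarser ones. The `predict_next_scale` stub stands in for the VAR transformer, the scale and codebook sizes are assumed values, and a multi-scale VQ-VAE decoder is assumed downstream.

```python
# Minimal illustrative sketch of next-scale prediction (not the authors' code).
import numpy as np

SCALES = [1, 2, 4, 8, 16]   # token-map side lengths, coarse to fine (assumed values)
CODEBOOK_SIZE = 4096        # assumed VQ codebook size for the stub

def predict_next_scale(coarser_maps: list[np.ndarray], side: int) -> np.ndarray:
    # Stub: return a (side x side) token map conditioned on all coarser maps.
    # In VAR, a single transformer step predicts this whole map at once.
    rng = np.random.default_rng(len(coarser_maps))
    return rng.integers(0, CODEBOOK_SIZE, size=(side, side))

token_maps: list[np.ndarray] = []
for side in SCALES:
    token_maps.append(predict_next_scale(token_maps, side))

print([m.shape for m in token_maps])  # [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16)]
```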
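
STDE itself relies on Taylor-mode automatic differentiation with randomized derivative directions to handle arbitrary operators; as a loose illustration of the randomized-estimation idea it generalizes, the sketch below computes a Hutchinson-style Monte Carlo estimate of a Laplacian from Hessian-vector products in JAX. This is not the STDE algorithm, only the flavor of trading exact high-order derivatives for cheap stochastic estimates.

```python
# Hutchinson-style Laplacian estimate: tr(H) ≈ E_v[v^T H v] with random v,
# using Hessian-vector products instead of materializing the Hessian.
import jax
import jax.numpy as jnp

def f(x):
    # Example scalar field; its exact Laplacian is sum_i 2*cos(2*x_i).
    return jnp.sum(jnp.sin(x) ** 2)

def hvp(fun, x, v):
    # Hessian-vector product via forward-over-reverse autodiff.
    return jax.jvp(jax.grad(fun), (x,), (v,))[1]

def laplacian_estimate(fun, x, key, n_samples=256):
    vs = jax.random.rademacher(key, (n_samples, x.shape[0]), dtype=x.dtype)
    return jnp.mean(jax.vmap(lambda v: v @ hvp(fun, x, v))(vs))

x = jnp.ones(3)
print(laplacian_estimate(f, x, jax.random.PRNGKey(0)))  # ≈ 6*cos(2) ≈ -2.50
```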

Runners-Up for the Main Track:

  1. Not All Tokens Are What You Need for Pre-training

    Presents a token-filtering method for LLM pre-training: a reference model trained on high-quality data scores each token, so training concentrates on aligned, high-quality tokens. Improves data efficiency and alignment (a toy sketch of the idea follows this list). Read more

  2. Guiding a Diffusion Model with a Bad Version of Itself

    Proposes "Auto guidance," replacing classifier-free guidance in text-to-image diffusion models with a noisier diffusion model. Enhances diversity and image quality while addressing CFG's limitations. Read more

Best Paper for Datasets & Benchmarks Track:

  1. The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models

    Introduces a dataset with diverse, multicultural human feedback from 75 countries to analyze LLM alignment with subjective and pluralistic values. Enables research on RLHF, pluralism, and disagreements. Read more

Experiment on the Usefulness of LLMs as an Author Checklist Assistant for Scientific Papers

NeurIPS 2024 experimented with an AI-powered "Checklist Assistant" to help researchers improve their paper submissions. The tool, which was tested on 234 papers, checked if submissions met the conference's standards for reproducibility, transparency, and ethical research. The response was largely positive, with 70% of authors finding the assistant helpful in refining their work.

However, the experiment revealed both strengths and limitations of using AI in academic publishing. While the assistant provided specific, actionable feedback that led many authors to enhance their papers, some challenges emerged. About 20 out of 52 surveyed authors reported receiving inaccurate feedback, and 14 felt the system was overly strict. There were also concerns about people potentially gaming the system with fabricated responses.

This test of AI assistance in academic publishing showed promise but highlighted important lessons. While AI tools can effectively help authors meet submission standards and improve their work, they shouldn't replace human judgment in the review process. The experiment suggests a future where AI could complement traditional academic workflows, though improvements in accuracy and security would be needed. Read more

Conclusion

NeurIPS 2024 showcased groundbreaking AI advancements while fostering crucial discussions about inclusivity and sustainability in research. These ongoing conversations are shaping a more thoughtful and responsible path for scientific progress as the field evolves.

About Genloop

Genloop delivers customized LLMs that provide unmatched cost, control, simplicity, and performance for production enterprise applications. Please visit genloop.ai or email founder@genloop.ai for more details.
