François Chollet: AGI progress is accelerating towards 2030, symbolic models will reshape machine learning, and coding agents are revolutionizing automation | Y Combinator Startup Podcast

New AGI lab aims to revolutionize machine learning with symbolic models, moving beyond traditional deep learning.

Key Takeaways

  • AGI progress is expected to accelerate, with significant developments anticipated around 2030.
  • Chollet’s new AGI research lab, Ndea, aims to build a branch of machine learning fundamentally different from deep learning.
  • Symbolic models could be more efficient and generalize better than traditional parametric models.
  • AI and machine learning are expected to evolve towards optimality, moving away from today’s technology stack.
  • Coding agents succeed because code provides a verifiable reward signal, enabling automation in formal domains.
  • Progress of reasoning models in non-verifiable domains such as essay writing will be slow, since it depends on costly human-annotated data.
  • Code-based training environments, with rewards from unit tests, have significantly advanced AI programming capability.
  • AGI requires a system that can learn and adapt to new tasks from minimal data, with human-like efficiency.
  • We are on a trajectory to automate economically useful work before achieving true AGI.
  • Building AGI on top of current LLMs would be inefficient; future AI research will trend towards optimality.

Guest intro

François Chollet is the co-founder of Ndea, a startup focused on developing AGI through program synthesis, which he founded with Zapier co-founder Mike Knoop after leaving Google in November 2024. He created the Keras deep-learning library in 2015 and published the ARC-AGI benchmark in 2019 to measure AI systems’ ability to solve novel reasoning problems. In 2024, he launched the ARC Prize, a $1 million competition to advance progress toward artificial general intelligence.

Why AGI progress is inevitable

  • AGI progress is expected to continue accelerating, with significant developments anticipated around 2030.
  • I think we’re probably looking at AGI 2030, around the time that we’re gonna be releasing maybe ARC 6 or ARC 7.

    — François Chollet

  • Chollet argues it is too late to stop AI progress:
  • You’re not gonna stop AI progress. I think it’s too late for that.

    — François Chollet

  • Taking this timeline seriously shapes how AI development should be planned.
  • For Chollet, the open question is when AGI arrives, not whether progress continues.
  • He defines AGI by human-like learning efficiency:
  • AGI is basically gonna be a system that can approach any new problem, any new task, any new domain, and make sense of it, model it, become competent at it, with the same degree of efficiency as a human could.

    — François Chollet

The new frontier in machine learning at Ndea

  • The goal of the new AGI research lab, Ndea, is to create a new branch of machine learning that is fundamentally different from deep learning.
  • What we’re doing at Ndea is program synthesis research… We are trying to build a new branch of machine learning that will be much closer to optimal, unlike deep learning.

    — François Chollet

  • Appreciating this approach requires knowing where deep learning’s parametric curve-fitting falls short.
  • Instead of fitting a parametric curve, the lab searches for the smallest symbolic model that fits the data:
  • We are replacing the parametric curve with a symbolic model that is meant to be as small as possible… We are building something that we call symbolic descent.

    — François Chollet

  • Symbolic models aim to be more efficient and more generalizable than traditional parametric models.
  • If the bet pays off, Ndea’s new paradigm could reshape the direction of AI research.
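
The idea can be made concrete with a toy sketch of enumerative program synthesis: search a space of programs for one that matches input-output examples. Everything here (the DSL, the primitives, the task) is an invented illustration, not the lab’s actual system:

```python
import itertools

# A toy enumerative program synthesizer: search a tiny DSL for the shortest
# program consistent with input-output examples. The DSL and the task are
# invented for illustration.

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names, left to right, to the input."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_len=3):
    """Return the shortest program matching every (input, output) pair."""
    for length in range(1, max_len + 1):
        for program in itertools.product(PRIMITIVES, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

# 2 -> 9 and 3 -> 16 is "increment, then square".
prog = synthesize([(2, 9), (3, 16)])
```

The output is a short, human-readable program rather than a vector of weights, which is precisely the property the quote emphasizes.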

The shift towards symbolic models

  • Chollet contrasts symbolic models with the parametric curves at the heart of deep learning:
  • We are replacing the parametric curve with a symbolic model that is meant to be as small as possible.

    — François Chollet

  • He expects symbolic models to eventually cover everything parametric models do today:
  • Everything you’re doing with machine learning today with parametric curves, we should be able to do with symbolic models in the future, in a way that will be much, much closer to optimality.

    — François Chollet

  • The expected gains are efficiency and generalization: a move towards machine learning solutions that are much closer to optimal than today’s parametric models.
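
Chollet has not published details of “symbolic descent”, but the quoted idea, editing a small symbolic model directly instead of nudging numeric parameters with gradients, can be caricatured as greedy local search over expressions. Everything below (the edit set, the loss, the target function) is a made-up illustration under that assumption:

```python
import random

# A caricature of "symbolic descent": instead of adjusting numeric parameters
# with gradients, apply discrete edits to a symbolic expression and keep any
# edit that lowers the loss. Purely illustrative.

random.seed(0)  # deterministic run for the example

def loss(expr, data):
    """Sum of squared errors of a Python expression in x over (x, y) pairs."""
    f = eval("lambda x: " + expr)  # fine for a toy sketch; never eval untrusted input
    return sum((f(x) - y) ** 2 for x, y in data)

def mutate(expr):
    """One random edit: wrap the expression and append a small term."""
    return "(" + expr + ") " + random.choice(["+ 1", "- 1", "* x", "+ x"])

def symbolic_descent(data, steps=200):
    """Greedy local search over expressions, starting from the identity."""
    best, best_loss = "x", loss("x", data)
    for _ in range(steps):
        cand = mutate(best)
        cand_loss = loss(cand, data)
        if cand_loss < best_loss:  # keep only improving edits
            best, best_loss = cand, cand_loss
    return best, best_loss

# Try to recover y = x*x + 1 from data alone.
data = [(x, x * x + 1) for x in range(-3, 4)]
expr, err = symbolic_descent(data)
```

The analogy to gradient descent is deliberate: a loop of propose-evaluate-accept, but over discrete program edits instead of continuous parameter updates.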

The future of AI and machine learning

  • Machine learning and AI will evolve towards optimality, moving away from current stacks.
  • I personally don’t think that machine learning or AI in fifty years is still gonna be built on this stack.

    — François Chollet

  • The inevitability of AI progress suggests a need for more efficient foundational structures.
  • It’s inevitable that the world of AI will trend over time towards optimality.

    — François Chollet

  • On this view, the current stack’s limitations are transitional: as the field trends towards optimality, more efficient foundations will displace it.

The success of coding agents

  • Coding agents succeed because code offers a verifiable reward signal, enabling automation in formally verifiable domains.
  • If you look at why everything is starting to work so well with coding agents… it’s really because code provides you with a verifiable reward signal.

    — François Chollet

  • Because a program either passes its tests or it does not, the reward is cheap, automatic, and objective; no human grader is needed.
  • The same logic extends to other formal domains, such as mathematics, where results can be checked mechanically.
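
The mechanism is easy to make concrete: a reward function that runs a candidate program against unit tests and returns the pass rate. The task, the tests, and the `solve` entry point below are hypothetical:

```python
# A minimal verifiable reward for code: execute a candidate solution against
# unit tests and score it by the fraction that pass. Task and tests invented.

def reward(candidate_src, tests):
    """Return the fraction of (args, expected) tests the candidate passes."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # never exec untrusted code outside a sandbox
    except Exception:
        return 0.0  # code that doesn't even run scores zero
    passed = 0
    for args, expected in tests:
        try:
            if namespace["solve"](*args) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors count as failures
    return passed / len(tests)

# Hypothetical task: absolute value. One correct and one buggy candidate.
tests = [((3,), 3), ((-4,), 4), ((0,), 0)]
good = "def solve(x):\n    return x if x >= 0 else -x"
buggy = "def solve(x):\n    return x"
```

No human appears anywhere in this loop, which is exactly why the signal scales.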

Challenges in non-verifiable domains

  • The progress of reasoning models in non-verifiable domains like essay writing will be slow due to reliance on costly human-annotated data.
  • What you’re gonna see is that progress of reasoning models on this type of domain is… gonna be very slow, because the stack we’re using, the LLM stack, is very, very reliant on its training data.

    — François Chollet

  • Without an automatic verifier, outputs like essays must be judged by costly human annotation, which caps how fast reasoning models can improve there.
  • This asymmetry between verifiable and non-verifiable domains is a core limitation of the current LLM stack.

Advancements in code-based training environments

  • The creation of code-based training environments has significantly advanced AI capabilities in programming.
  • The big unlock is when people started creating this code-based training environment for post-training, where the reward signal… is provided by things like unit tests.

    — François Chollet

  • Structured environments with automatic verification let models improve through large-scale trial and error, which is why programming capability advanced so quickly.
  • The same recipe should transfer to any domain where a comparable automatic reward signal can be constructed.
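
One way to picture such a post-training environment is a gym-style wrapper where each episode is a single programming task and the reward is the unit-test pass rate. The interface shape and the task below are invented for illustration:

```python
# A sketch of a code-based post-training environment: single-step episodes
# where the action is a candidate program and the reward is the unit-test
# pass rate. The API shape and the task are invented.

class CodeEnv:
    def __init__(self, prompt, tests):
        self.prompt = prompt  # task description shown to the model
        self.tests = tests    # hidden (input args, expected output) pairs

    def reset(self):
        """Start an episode: the observation is just the task prompt."""
        return self.prompt

    def step(self, candidate_src):
        """Score one candidate program; every episode ends after one step."""
        namespace = {}
        try:
            exec(candidate_src, namespace)  # sandbox this in any real system
        except Exception:
            return 0.0, True
        passed = sum(
            1 for args, want in self.tests
            if self._safe_call(namespace.get("solve"), args) == want
        )
        return passed / len(self.tests), True

    @staticmethod
    def _safe_call(fn, args):
        try:
            return fn(*args)
        except Exception:
            return object()  # sentinel that never equals an expected output

env = CodeEnv("Return the maximum of two ints.",
              [((1, 2), 2), ((5, 3), 5), ((0, 0), 0)])
r, done = env.step("def solve(a, b):\n    return a if a > b else b")
```

A post-training loop then just samples candidate solutions and reinforces whatever scores well, with the unit tests standing in for a human judge.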

The trajectory towards automation

  • We are on a trajectory to automate economically useful work before achieving true AGI.
  • I think that’s a trajectory that we’re on right now, and I think it’s already true that, in principle, current technology can fully automate, at human level or beyond, any domain where you have verifiable rewards.

    — François Chollet

  • Chollet separates automation from AGI: current technology can, in principle, fully automate verifiable work without possessing the human-like learning efficiency AGI requires.
  • Large-scale economic automation is therefore likely to precede true AGI, not follow it.

The inefficiency of building AGI on current LLMs

  • Building AGI on top of current LLMs would be inefficient and not optimal for future AI research.
  • I do believe, however, this would be the wrong thing to do, because it would be very inefficient. I think AI research will have to trend towards not just efficiency but in fact optimality over time.

    — François Chollet

  • The claim is not that LLM-based AGI is impossible, but that it would be a wasteful detour; research should instead head for more optimal foundations.
  • This is the motivation behind Ndea’s bet on program synthesis as a more efficient base for AGI.

Disclosure: This article was edited by Editorial Team. For more information on how we create and review content, see our Editorial Policy.
