No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit, 2022, Schaeffer et al

Discussion in 'Research methodology news and research' started by CRG, Nov 6, 2022.

  1. CRG

    CRG Senior Member (Voting Rights)

    Messages:
    1,857
    Location:
    UK
    No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

    Rylan Schaeffer, Mikail Khona, Ila Rani Fiete


    Abstract


    Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain.

    The central claims of recent deep learning-based models of brain circuits are that they make novel predictions about neural phenomena or shed light on the fundamental functions being optimized. We show, through the case-study of grid cells in the entorhinal-hippocampal circuit, that one often gets neither.

    We begin by reviewing the principles of grid cell mechanism and function obtained from analytical and first-principles modeling efforts, then rigorously examine the claims of deep learning models of grid cells. Using large-scale hyperparameter sweeps and theory-driven experimentation, we demonstrate that the results of such models may be more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize.

    Finally, we discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience. In conclusion, caution and consideration, together with biological knowledge, are warranted in building and interpreting deep learning models in Neuroscience.

    Full text (pdf): https://t.co/XBjH1m8X1E

    --------------------------------

    Placed in Research methodology because the primary relevance is the caution that needs to be attached to deep learning claims. The full article is extremely technical.
     
    Peter Trewhitt likes this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,919
    Location:
    Canada
    Good grief, do they think ML is about magically solving unsolved problems? You can't train a neural network without a solution to the problem; the solution is literally what it uses to learn. Machine learning is similar to human learning: it requires rapid and accurate feedback about how close an answer is to the real solution. Just like in school.

    The benefit of ML is in problems that would otherwise require such massive labor that they are impractical. When a problem is unsolved, it's like trying to divide by zero: it doesn't matter how much brute force you throw at it, it will go on forever, because the method needs a large set of validated solutions to the problem to work at all.
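    The point about feedback can be made concrete with a minimal sketch (my own illustration, not from the paper): even the simplest gradient-descent training loop can only compute its update because the true answers `y` are already known. Remove the validated labels and there is no loss signal, and nothing to learn from.

    ```python
    import numpy as np

    def train(x, y, lr=0.1, steps=200):
        """Fit y ~ w * x by gradient descent on mean squared error.

        The gradient is computed from the gap between the model's prediction
        and the known target y -- without validated solutions, no gradient.
        """
        w = 0.0
        for _ in range(steps):
            pred = w * x
            grad = 2 * np.mean((pred - y) * x)  # needs the true y
            w -= lr * grad
        return w

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 3.0 * x          # the "validated solutions" (labels)
    w = train(x, y)      # converges toward w = 3.0
    ```

    The toy model recovers the underlying rule only because every training input came paired with its correct output; the same dependence holds, at scale, for any supervised deep network.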

    The protein folding AIs built by DeepMind and Meta did not magically solve the problem of protein folding. They were trained on already solved structures, validated by hand, from decades of painstaking labor.

    You can't automate an unsolved problem. Otherwise the only answer you can get out of it is 42.
     
    Art Vandelay, alktipping, CRG and 3 others like this.
