Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

  • beefbot@lemmy.blahaj.zone
    2 hours ago

    Elon Musk was Steve Jobs, Thomas Edison was Nikola Tesla, more examples I’m sure, and Sam Altman IS ELON MUSK.

    To paraphrase Göring: smarty-man hype & their promises work the same in every decade

    • beefbot@lemmy.blahaj.zone
      1 hour ago

      And I voice this disbelief as a loony sort who believes that insects may have language as complex as humans’, and that AGI will probably happen someday, potentially while people living now are still alive.

      But I just look at the hypesters who think they could ever control such a mind, who obviously plan to, & they just seem like goofy carnival types playing at summoning a god, & when the real thing shows up it is NOT happy these ants were so presumptuous. Most of us aren’t, to any AGI who might eventually be evaluating us, Star-Trek-Q-style!

  • ipkpjersi@lemmy.ml
    17 hours ago

    Obviously those claims are overblown lol, AIs literally cannot think. They are currently LLMs. They are impressive, sure, but anyone who knows the technology knows that this is NOT AGI, and it is entirely possible we will never get AGI. It’s also possible we will get AGI, but this ain’t it. lol

    • Quail4789@lemmy.ml
      1 hour ago

      If someone uses LLM and AI interchangeably, their opinion on the subject doesn’t matter anyway.

  • BitSound@lemmy.world
    1 day ago

    This is a silly argument:

    […] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

    That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    EDIT: From the paper:

    The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

    That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.

    • This is a gross misrepresentation of the study.

      That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.

      That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.

      Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

      They’re not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.

      That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.

      That’s not what they did. They provided an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. they have a computer with limitless memory, they have infinite and perfect training data, they can sample without any bias, current techniques can eventually create AGI, an AGI would only have to be slightly better than random chance but not perfect, etc…), and then present a computational proof that shows that this is in contradiction with other logical proofs.

      Basically, if you could train an AGI through currently known methods, then you would have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction: we have proof, hard mathematical proof, that no such polynomial-time algorithm can exist, because the problem is NP-hard. Therefore, learning an AGI must also be NP-hard. And because every known AI learning method runs in tractable (polynomial) time, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinite decimals or something.

      Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t merely offer a thought experiment, they provide a computational proof for this.
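
      Schematically, the contradiction above looks something like this (my own notation, not the paper’s exact formulation, and with the standard P ≠ NP caveat made explicit):

          \[
          \textsf{LearnAGI} \in \mathsf{P}
          \;\Rightarrow\;
          \textsf{Perfect-vs-Chance} \in \mathsf{P}
          \qquad \text{(via the paper's reduction)}
          \]
          \[
          \textsf{Perfect-vs-Chance} \text{ is NP-hard}
          \;\Rightarrow\;
          \textsf{Perfect-vs-Chance} \notin \mathsf{P}
          \ \text{(unless } \mathsf{P} = \mathsf{NP}\text{)}
          \;\Rightarrow\;
          \textsf{LearnAGI} \notin \mathsf{P}.
          \]

      In other words, any learning procedure that could reliably produce an AGI would double as a polynomial-time solver for an NP-hard problem, which is exactly what complexity theory rules out.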

    • petrol_sniff_king@lemmy.blahaj.zone
      7 hours ago

      but there’s no reason to think we can’t achieve it

      They provide a reason.

      Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.

      What are we science deniers now?

  • mindbleach@sh.itjust.works
    1 day ago

    You do all this on three pounds of wet meat powered by cornflakes.

    The idea we’ll never recreate it through deliberate effort is absurd.

    What you mean is, LLMs probably aren’t how we get there. Which is fair. “Spicy autocorrect” is a limited approach with occasionally spooky results. It does a bunch of stuff people insisted would never happen without AGI - but that’s how this always goes. The products of human intelligence have always shown some hard-to-define qualities which humans can eventually distinguish from our efforts to make a machine produce anything similar.

    Just remember the distinction got narrower.

    • Greg Clarke@lemmy.ca
      1 day ago

      I agree. Very few people in industry are claiming that LLMs will become AGI. The release of o1 demonstrates that even OpenAI are pivoting from pure LLM approaches. It was always going to be a framework approach that utilizes LLMs.

      • mindbleach@sh.itjust.works
        24 hours ago

        I had hopes for recurrent systems becoming kinda… Dixie Flatline. Maybe not general enough to learn, but spooky enough to evaluate claims.

    • technocrit@lemmy.dbzer0.com
      1 day ago

      You do all this on three pounds of wet meat powered by cornflakes. The idea we’ll never recreate it through deliberate effort is absurd.

      It’s even more absurd to think AGI will run on wet meat and cornflakes.

    • ReversalHatchery@beehaw.org
      19 hours ago

      It’s literally insane that they are doing this even though they don’t even have the replacement yet. It really shows their true colours.

  • utopiah@lemmy.ml
    2 days ago

    It’s a classic BigTech marketing trick. They are the only one able to build “it” and it doesn’t matter if we like “it” or not because “it” is coming.

    I believed in this BS for longer than I care to admit. I thought “Oh yes, that’s progress”, so of course it will come, it must come. It’s also very complex, so nobody but such large entities with so many resources can do it.

    Then… you start to encounter more and more vaporware. Grandiose announcements, and when you try the result you can’t help but be disappointed. You compare what was promised with the result, think it’s cool, kind of, shrug, and move on with your day. It happens again, and again. Sometimes you see something really impressive; you dig and realize it’s a partnership with a startup or a university doing the actual research. The more time passes, the more you realize that all BigTech do it, across technologies. You also realize that your artist friend did something just as cool, and open-source. Their version does not look polished but it works. You find a Kickstarter about a product that is genuinely novel (say the Oculus DK1) and has no link (initially) with BigTech…

    You finally realize, year after year, that you have been brainwashed into believing only BigTech can do it. It’s false. It’s self-serving BS meant both to stop you from building and to make you depend on them.

    You can build, we can build and we can build better.

    Can we build AGI? Maybe. Can they build AGI? They sure want us to believe it but they have lied through their teeth before so until they do deliver, they can NOT.

    TL;DR: BigTech is not as powerful as they claim to be and they benefit from the hype, in this AI hype cycle and otherwise. They can’t be trusted.

    • yonder@sh.itjust.works
      1 day ago

      And the big tech companies also stand to benefit from overhyping their product to the point of saying it will take over the world. They look better for investors and can justify laws saying they should be the only arbiters of this technology to “keep it out of criminal hands” while happily serving the criminals for a fee.

    • just another dev@lemmy.my-box.dev
      1 day ago

      It’s one thing to claim that the current machine learning approach won’t lead to AGI, which I can get behind. But this article claims AGI is impossible simply because there are not enough physical resources in the world? That’s a stretch.

      • utopiah@lemmy.ml
        1 day ago

        I haven’t had a chance to seriously read the article yet unfortunately (deadline tomorrow), but if there is one thing I believe is reliable, it’s computational complexity. It’s one thing to be creative, ingenious, find new algorithms and build very efficient processors and datacenters to make things extremely efficient, letting us compute increasingly complex things. It’s another thing, though, to “break free” of complexity. That, as far as we currently know, is impossible. What is counterintuitive is that seemingly “simple” behaviors scale terribly, in the sense that one can compute a few iterations alone, or with a computer, or with a very powerful set of computers… or with every single existing computer… only to realize that the next iteration of that well-understood problem would still NOT be solvable with every computer (even quantum ones) ever made, or that could ever be made from the resources available in, say, our solar system (rough illustration below).

        So… yes, it is a “stretch”, maybe even counterintuitive, to go as far as saying it is not and NEVER will be possible to realize AGI, but that’s what their paper claims. It’s at least interesting, precisely because it goes against the trend we hear CONSTANTLY pretty much everywhere else.
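
        To make the scaling point concrete, here’s a toy back-of-the-envelope sketch (my own illustrative numbers, not from the paper): grant every computer on Earth combined a wildly generous 10^21 operations per second, and a cost that grows like 2^n is still hopeless long before n reaches a few hundred.

            # Toy illustration only; the 1e21 ops/sec budget is an assumption, not a measurement.
            AGE_OF_UNIVERSE_S = 4.4e17   # ~13.8 billion years, in seconds
            OPS_PER_SECOND = 1e21        # assumed combined compute of every computer on Earth

            for n in (50, 100, 200, 300):
                seconds = 2**n / OPS_PER_SECOND          # time to enumerate 2^n possibilities
                print(f"n={n:3d}: {seconds:.2e} s "
                      f"({seconds / AGE_OF_UNIVERSE_S:.2e} x age of the universe)")

        At n = 50 it finishes in microseconds; at n = 300 it needs around 10^51 times the age of the universe. That’s the kind of wall the paper’s intractability result is pointing at.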

      • MindTraveller@lemmy.ca
        1 day ago

        Maybe if they keep using digital computers. What they need is an analogue system. It’s much more efficient for this kind of work.

    • utopiah@lemmy.ml
      1 day ago

      Read it a few months ago, warmly recommended. Basically about self-selection bias and sharing “impressive” results while ignoring whatever does not work… then claiming it’s just the “beginning”.

  • Matriks404@lemmy.world
    1 day ago

    To be honest I really think that an AI surpassing the human brain in many ways is just a matter of time, but what people don’t tend to talk about is whether or not we are slowly approaching the limit of what we can do with technology, because I already see tech progress slowing down in some areas.

      • trainsaresexy@lemmy.world
        1 day ago

        SCI

        I looked this up because it’s new to me. AGI is what you think it is, and superintelligent collective intelligence is a collective of agents that can perform tasks. Instead of 1 LLM or 1 AGI doing all the work, you have a team of agents and humans who can talk to each other. AGI seems like far-off space tech, and SCI is more like a next-gen pursuit.

        • SturgiesYrFase@lemmy.ml
          22 hours ago

          Cool… unfortunately my search is not turning up much. SCI.ai, a science-geared LLM, was about 30% of my search results across Google and DDG. The other 70% was about Sierra’s Creative Interpreter, and a moisturizer additive got 2 hits.

          Glad you gave me the synopsis; apparently I’m incapable of finding that info myself, regardless of what combination of AI/AGI/SCI/differences etc. ad nauseam I try.