Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • Turkey_Titty_city@kbin.social · 1 year ago

    I mean, AI is already generating lots of bullshit ‘reports’: stuff that ‘reports’ the news with zero skill. It’s glorified copy-pasting, really.

    If you think about how much language is rote, in law and similar fields, it makes a lot of sense to use AI to auto-generate it. But it’s not intelligence. It’s just a linguistic assembly line, and just like in a factory, it will require human review for quality control.
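    Just to make the point concrete, here is a minimal sketch of what that ‘linguistic assembly line’ might look like. The clause template, the ReviewItem class, and the field names are all made up for illustration, and a real pipeline would swap the plain template substitution for an LLM call; the shape of it (generate drafts, then hold everything for human sign-off) is the part the comment is describing.

    # Sketch of a "linguistic assembly line": auto-generate rote clauses, then
    # queue every draft for human quality control. Template and names are
    # hypothetical; a real pipeline would call an LLM inside generate_drafts().
    from dataclasses import dataclass, field

    CLAUSE_TEMPLATE = (
        "The {party} shall indemnify and hold harmless the {counterparty} "
        "from any claims arising out of {scope}."
    )

    @dataclass
    class ReviewItem:
        draft: str
        approved: bool = False          # flips to True only after a human signs off
        notes: list = field(default_factory=list)

    def generate_drafts(cases):
        # No intelligence here: just substitution into a fixed template.
        return [ReviewItem(draft=CLAUSE_TEMPLATE.format(**case)) for case in cases]

    queue = generate_drafts([
        {"party": "Vendor", "counterparty": "Client", "scope": "the Services"},
        {"party": "Licensee", "counterparty": "Licensor", "scope": "use of the Software"},
    ])
    for item in queue:
        status = "OK" if item.approved else "NEEDS HUMAN REVIEW"
        print(f"[{status}] {item.draft}")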

    • 🐝bownage [they/he]@beehaw.org · 1 year ago

      The thing is - and what’s also annoying me about the article - AI experts and computational linguists know this. It’s the laypeople who end up using (or promoting) these tools, now that they’re public, who don’t know what they’re talking about and project intelligence onto AI that isn’t there. The real hallucination problem isn’t with deep learning, it’s with the users.

    • fiasco@possumpat.io · 1 year ago

      This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone’s already gotten in deep shit for trying to use deep learning for legal filings.

    • shoelace@kbin.social · 1 year ago

      It drives me nuts how often the comments section of an article has one smartass pasting the GPT summary of that article. The quality of that content is comparable to the “reply girl” shit from 10 years ago.