I believe that knowledge should be free: you should have access to knowledge even if you don’t have the money to buy it. This uses IPFS.

  • hendrik@palaver.p3x.de · 19 hours ago

    Right, the public and journalists often lump everything together under the term “AI”, when there’s really a big difference between a domain-specific pattern-recognition task that machine learning can handle with >99% accuracy and an ill-suited use case where an LLM gets slapped on.

    For example, I frequently disagree with people using LLMs for summarization. That seems to be something a lot of people like, and I think LLMs are particularly bad at it. All my results were riddled with inaccuracies; sometimes they’d miss the whole point of the input text. And they’d rarely summarize at all: they just pick a topic or paragraph here and there and write a shorter version of it, missing what a summary is about, namely providing the main points and the conclusion, reducing the details, and roughly outlining how the author got there. I think LLMs just can’t do it.

    I like them for other purposes, though.

    • Mirodir@discuss.tchncs.de · 19 hours ago

      Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom, we were allowed to bring along summaries. I tried feeding the model an input text and telling it to break the text down into a sentence or two. Often it would just give a generic short summary of the topic, but not actually use the concepts described in the original text.

      Also, minor nitpick, but be wary of the term “accuracy”. It’s a terrible metric for most use cases, and when a company advertises its AI as having high accuracy, it’s likely hiding something. For example, say we wanted to develop a model that detects cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model that just predicts “no cancer” every time. And a lot of the more interesting problems have class imbalances far worse than this one.
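      The trivial-baseline arithmetic above can be sketched in a few lines of Python (the dataset and the degenerate “model” are invented purely for illustration):

```python
# Hypothetical 1%-positive test set: 10 cancer images, 990 normal tissue.
labels = [1] * 10 + [0] * 990

# Degenerate model: always predicts "no cancer" (0), regardless of input.
preds = [0] * 1000

# Accuracy: fraction of predictions that match the label.
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Recall: fraction of actual cancer cases the model catches.
true_positives = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
recall = true_positives / sum(labels)

print(f"accuracy: {accuracy:.2%}")  # 99.00% -- looks impressive
print(f"recall:   {recall:.2%}")    # 0.00%  -- finds no cancer at all
```

      Metrics like recall, precision, or balanced accuracy expose this failure immediately, which is why “high accuracy” alone says little on an imbalanced problem.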

      • hendrik@palaver.p3x.de · 18 hours ago

        What’s the correct term in casual language? “Correctness”? But that has the same issue… I’m not a native speaker…

        By the way, I forgot my main point. I think that paper generator was kind of a joke. At least the older one, which predates AI and uses a “hand-written context-free grammar”:

        And there are projects like Papergen and several others. But I think what I was referring to was the AI Scientist, which does everything from brainstorming research ideas to simulating experiments, writing reports, etc. It’s not meant to be taken seriously in the sense that you’d publish the generated results. But it seems pretty creative to me, writing a paper about an artificial scientist…