• sudneo@lemm.ee · 2 days ago

    Models are not improving, companies are still largely (massively) unprofitable, the tech has a very high environmental impact (and demand), and no solid business case has been found so far, despite very large investments, after two years.

    It’s possible that AI isn’t going anywhere, but LLM-based tools might also simply follow crypto, VR, metaverses and the other tech “revolutions” that were just hyped and went nowhere. I can’t say it will go one way or the other, but I disagree with you about an “adjustment period”. I think generative AI is cool and fun, but it’s a toy. If companies don’t make money with it, they will eventually stop investing in it.

    Also

    Today’s hype will have lasting effects that constrain tomorrow’s possibilities

    is absolutely true. Wasting capital (human and economic) on something means it can’t be used for something else instead. This is especially true now that it’s so hard to get investment for any other business. If all the money goes into AI right now, and IF this turns out to be just hype, we will have collectively lost 2, 4, 10 years of research and investment in other areas (for example, environmental protection). I am really curious what makes you think that sentence is false and stupid.

    • VoterFrog@lemmy.world · 2 days ago

      Models are not improving? Since when? Last week? Newer models have been scoring higher and higher in both objective and subjective blind tests consistently. This sounds like the kind of delusional anti-AI shit that the OP was talking about. I mean, holy shit, to try to pass off “models aren’t improving” with a straight face.

      • sudneo@lemm.ee · 2 days ago

        There is a bunch of research showing that model improvement is marginal compared to the energy demand and/or the amount of training data. OpenAI itself, about a month ago, mentioned that they are seeing a smaller improvement from GPT-4 to Orion (I believe) than there was from GPT-3 to GPT-4. We are also running out of quality data to use for training.

        Essentially what I mean is that the big improvements we saw in the past seem to be over; now improving a little costs a lot. Considering that the costs are exorbitant and the gains are small, it’s not impossible to imagine that companies will eventually give up if they can’t monetize this stuff.

        • icecreamtaco@lemmy.world · 18 hours ago

          Compare Llama 1 to the current state-of-the-art local AIs. They’re on a completely different level.

          • sudneo@lemm.ee · 11 hours ago

            Yes, because at the beginning there was tons of room for improvement.

            I mean, take OpenAI’s word for it: GPT-5 isn’t showing as much improvement over GPT-4 as GPT-4 did over GPT-3, and it’s costing a fortune and taking forever. A logarithmic curve, it seems. Also, if we run out of data to train on, that’s it.

        • theherk@lemmy.world · 17 hours ago

          Surely you can see there is a difference between marginal improvement with respect to energy and not improving.

          • sudneo@lemm.ee · 11 hours ago

            Yes, and I see the difference as hitting the tail of a logarithmic curve, which shows we are close to the limit. I also realize that exponential cost is a de facto limit on improvement. If improving again for GPT-7 will cost 10 trillion dollars, I don’t think it will ever happen, right?