TikTok’s parent company, ByteDance, has been secretly using OpenAI’s technology to develop its own competing large language model (LLM). “This practice is generally considered a faux pas in the AI world,” writes The Verge’s Alex Heath. “It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used ‘to develop any artificial intelligence models that compete with our products and services.’”

  • TootSweet@lemmy.world · +149/-12 · 11 months ago

    OpenAI will steal a whole internet's worth of everybody's data to train their large language model, but gets pissed when others do the same to them.

        • FaceDeer@kbin.social · +12/-3 · 11 months ago

          No, even then it isn’t. It’s not stealing. There is literally a whole different body of law defining stealing versus the body of law that defines copyright and intellectual property. The data is still exactly where it was to begin with, therefore it hasn’t been stolen.

          I wish people would stop using wildly inaccurate loaded terminology in these discussions simply to score emotional points.

    • crazyCat@sh.itjust.works · +6 · 11 months ago

      Their take on it, via Sam Altman, is that the AI is reading and learning from the internet and we can’t fault them for that, right? You don’t fault a human for using what they’ve learned, do you? That’s the rationale, anyway… I don’t know what I think about it though.

      • TootSweet@lemmy.world · +3/-5 · 11 months ago

        Ok. I was going to let my comment just sit and not respond to responses, partly because my take on this is something I haven’t fully thought through yet, am not sure I can put into words well, and may be internally inconsistent in some ways.

        But I’m getting a lot of responses saying largely the same things and I think they’re good points that probably deserve a response, so here goes.

        First off, I’m pretty skeptical of the recent hype around AI. It can do some “neat tricks,” but I don’t think it’s really ready to start replacing people, for instance.

        IBM was one of the first companies recently to announce they were “replacing a whole bunch of people with AI.” I suspect what really happened was that they decided to lay off a whole bunch of people and then their PR department came up with a clever way to spin that. Since IBM is big in the “AI solutions” market, saying they were “replacing people” with AI made their product and stock seem more attractive than if they’d just said they were laying people off. I doubt they’re actually doing much “replacing with AI.”

        I think other companies (well, the CEOs of other companies, I mean) have gotten swept up in the hype and actually think they can replace people with AI. I don’t think that’s going to go well for them.

        Still other companies may be fucking over their workers by laying them off, setting up “AI solutions”, and rehiring the same (or different) people to review/edit what the “AI” outputs at a lower rate of pay. But I doubt fixing the mistakes AI makes is really any less work than doing the job they’ve given to the AI. (In some cases, the AI might go secretly unused because it’s more work to make the AI do the job than for the human to just do it themselves, even though not using it is against policy. But that plays right into the business’s evil hands.)

        Now, as an aside, let me say that there are algorithms that are often considered “AI” that in the right hands and applied correctly to various narrow use cases can be very useful. But again, these techniques are tools. Not replacements for people.

        At best, I think the current craze over AI is unfounded hype. A bubble. At worst, a scam. If we ever do get AI that can replace humans, I don’t think DALL-E-8 and LLMs are going to be how we do it.

        Next, it’s fucked up that ChatGPT uses all the data it can find all over the internet to train and then locks the results of its training up on a server where you can’t use it without registering for an account.

        “Oh, but TootSweet, what about LLaMa?” I might hear you ask. Meta bills LLaMa as “open source.” But it doesn’t fit the Open Source Initiative’s definition of “open source.” Seems like Meta is trying to dilute the term by calling things that aren’t open source “open source.” (I get that some folks see no problem with using the term “open source” to refer to things that don’t meet the OSI’s definition, but I do. So there.) So I also see LLaMa as at best insidious.

        I fully believe that information wants to be free and that copying is not theft. But I also believe in copyleft and I think software-as-a-service (like ChatGPT) is dastardly.

        I wouldn’t have a problem with OpenAI if they:

        • Scraped the whole fuckin’ internet
        • Built an AI
        • Open sourced the engine (properly, per the OSI definition, not just shared the source code)
        • Made the model downloadable
        • Published their methodology so it could be inspected and reproduced
        • Didn’t scam people by making it out to be more useful than it is

        What I dislike about OpenAI is not the copying per se. It’s using everybody else’s stuff and locking the results up behind a data-harvesting subscription wall and then selling it as snake oil. More personally, what I dislike is that they’re using my Reddit posts (yes, I used to use Reddit) for nefarious purposes.

        I’m pissed the same way I’d be pissed if neo-Nazis took my words out of context and used them as marketing materials for their fucked up ideology. (Whereas I’d be honored if some good cause like the EFF or whatever wanted to use my words as marketing materials.)

        Now, beyond that, let me also say that there may be places that various AI hucksters are gathering data that no reasonable person would have reason to believe was public. I drive a Subaru and its privacy policy allows them to record any sound in the cabin of any of their vehicles at any time via the in-cabin telematics mic, send that data back to their HQ, and use it for any purposes they wish, including training AI models. At least when AI training data is scraped off the web, probably most of that was intended to be made public and at least isn’t a blatant invasion of privacy. But me having a private conversation with someone or talking to myself while in my vehicle? Holy late stage capitalism, Batman. (And I doubt that’s even one of the most egregious examples of a breach of privacy that might ultimately end up feeding an AI model.)

      • Hacksaw@lemmy.ca · +3/-6 · 11 months ago

          It’s not a PERSON. The only person involved is literally copying the internet and duct-taping it together to form ChatGPT. Then they say “the AI is reading and learning like any human would”. No brother, the AI IS MADE FROM a copy of all the stolen words. Before the theft, there is no AI that you can put the words into and have it learn. It’s just a matrix filled with trillions of zeroes. It’s only an AI AFTER you build it from the stolen data.

    • AdamEatsAss@lemmy.world · +17/-41 · 11 months ago

      They didn’t really “steal” the internet data. I don’t think most websites and data logs they used explicitly said “don’t use this to train a large language model.”

          • AdamEatsAss@lemmy.world · +9/-20 · 11 months ago

            Someone would have sued OpenAI if they stole something. Private companies try to steal ideas and data all the time. The only thing to stop them is regulation or IP lawsuits.

      • Maëlys@slrpnk.net · +10/-1 · 11 months ago

        Unpopular opinion, but true: they took advantage of a legal loophole and cashed in on it. Legal counsel really pays dividends.

    • FaceDeer@kbin.social · +35 · edited · 11 months ago

      Ever since that paper about “model decay” this has been a common talking point and it’s greatly misunderstood. Yes, if you just repeatedly cycle content through AI training over and over through successive generations, you get AIs that lose “fidelity.” But that’s not what any actual real-world training regimen using synthetic data does. The helper AI is usually used to process input data. For example, if you’re training an AI to respond in a chat-like format, you could take raw non-conversational text (like a book) and have the helper AI create a conversation about that content for the new AI to learn from. Or to take a real-world example, DALL-E 3 was trained by having a helper AI look at pictures and create detailed text descriptions of them to use as the caption to associate with the image when training.
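
      A rough sketch of that “helper model” pattern in Python, just to make it concrete (the helper_generate call is a stand-in for whatever open LLM you would run locally, not a real API):

          # Hypothetical sketch: an existing "helper" LLM reformats raw prose into
          # chat-style training examples for a new model. helper_generate is a
          # placeholder for a completion call to any open model, not a specific library.

          def helper_generate(prompt: str) -> str:
              """Placeholder for a completion call to the helper LLM."""
              raise NotImplementedError

          def make_chat_example(passage: str) -> dict:
              # Ask the helper to turn plain text (e.g. a book excerpt) into a Q&A pair,
              # which becomes one supervised example for the model being trained.
              question = helper_generate(
                  f"Write one question a curious reader might ask about:\n{passage}")
              answer = helper_generate(
                  f"Answer the question using only the passage.\n"
                  f"Passage: {passage}\nQuestion: {question}")
              return {"prompt": question, "response": answer}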

      OpenAI has put these restrictions in its TOS as a way of trying to “pull up the ladder behind them”, preventing rivals from trying to build AIs as good as the ones they have already. Fortunately it’s not going to work. There are already open LLMs that can be used as “helpers” without needing OpenAI at all. ByteDance was likely just being lazy here.

    • Ech@lemm.ee · +9 · 11 months ago

      Depends how it’s done. GAN (Generative Adversarial Network) training works with exactly that, having networks train against each other, each improving the other over time.
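
      A toy sketch of that adversarial loop in PyTorch, just to show the shape of it (1-D synthetic data, tiny networks; every choice here is illustrative, not how any production GAN is configured):

          import torch
          import torch.nn as nn

          torch.manual_seed(0)

          # Generator maps noise to a fake sample; discriminator scores "realness".
          G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
          D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
          opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
          opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
          bce = nn.BCEWithLogitsLoss()

          for step in range(2000):
              real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data ~ N(4, 1.25)
              noise = torch.randn(64, 8)

              # Discriminator learns to separate real samples from generated ones.
              fake = G(noise).detach()
              loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
              opt_d.zero_grad(); loss_d.backward(); opt_d.step()

              # Generator learns to fool the (now slightly better) discriminator.
              loss_g = bce(D(G(noise)), torch.ones(64, 1))
              opt_g.zero_grad(); loss_g.backward(); opt_g.step()

          # If training behaves, generated samples drift toward the real mean (~4).
          print(G(torch.randn(1000, 8)).mean().item())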

    • SeaJ@lemm.ee · +9/-1 · edited · 11 months ago

      I’ve watched Multiplicity enough times to know you get a slightly less functional copy.

    • ZickZack@fedia.io · +2 · 11 months ago

      Not necessarily: there have been recent works indicating that the filtering effect of fine-tuned LLMs greatly improves data efficiency (e.g. phi-1). Further, if you have e.g. human selection on top of LLM-generated content, you can get great results, as the LLM generation can be used as a soft curriculum, with the human selection biasing towards higher quality.
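
      Roughly this idea, as a minimal sketch (the generate and score callables are placeholders for an LLM sampler and a quality judge, whether that judge is a model or a human rating; the ratio is arbitrary):

          # Over-generate with an LLM, score every candidate, and keep only the
          # top slice as training data. This is the "filtering" idea in miniature.

          def build_filtered_dataset(prompts, generate, score, keep_ratio=0.2):
              candidates = [(p, generate(p)) for p in prompts]            # synthetic data
              ranked = sorted(candidates, key=lambda pc: score(*pc), reverse=True)
              keep = max(1, int(len(ranked) * keep_ratio))
              return ranked[:keep]                                        # highest-quality slice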

    • betterdeadthanreddit@lemmy.world · +3/-1 · 11 months ago

      Sounds like what you’d get if you ordered a ChatGPT off of Wish dot com. Cheap knock-offs that blatantly steal ideas/designs and somewhat work are kinda their thing.

  • Buttons@programming.dev · +22 · 11 months ago

    I hope this harms OpenAI in their lawsuits somehow. Their argument of “we can train on the output of others, but nobody can train on our output” has no moral foundation. Pick a lane.

  • redcalcium@lemmy.institute · +17 · 11 months ago

    A lot of open source models are actually trained using data from GPT outputs. It’s a cheap way to generate huge amounts of training data. The difference is that those models are made by independent researchers, not backed by a huge company for commercial purposes, so OpenAI probably left them alone.

  • betterdeadthanreddit@lemmy.world · +16/-6 · 11 months ago

    Probably an honest mistake. Who hasn’t bent down to tie their shoe, lost their balance and accidentally coded up an LLM to steal from an existing product? I’d still trust them to plant listening devices, cameras and keyloggers in my pocket since they’ve displayed such a commitment to honesty, integrity and transparency.

    • filister@lemmy.world · +5/-8 · 11 months ago

      Yes, honestly you have also been subject to a lot of propaganda. The US and the US media are vilifying a lot of Chinese companies, while American companies are not much better, if not worse.

  • Mahlzeit@feddit.de · +8/-2 · 11 months ago

    I wonder if that clause is legal. It could be argued that it legitimately protects the capital investment needed to make the model. I’m not sure if that’s true, though.

    • Nick@mander.xyz · +1/-1 · 11 months ago

      I can’t speak for every jurisdiction, but I’d be hard-pressed to see why it wouldn’t be legal in the US, especially in these circumstances. ByteDance is a massive, legally sophisticated corporation, so they should be expected to fully read and understand the terms and conditions before accepting them. They probably won’t bring a legal challenge, because they know they don’t have a particularly strong legal argument or a sympathetic angle to use.

        • Nick@mander.xyz · +1 · 11 months ago

          Sorry for the late reply, but this doesn’t really seem like it’d come close to invoking any of the US’s neutered antitrust enforcement. OpenAI doesn’t have a monopoly position to abuse, since there are other large firms offering LLMs that see reasonable amounts of usage. This clause amounts more to an effort to stop reverse engineering than to stifle anyone trying to build an LLM.

          • Mahlzeit@feddit.de · +1 · 11 months ago

            I doubt it is clear-cut enough to trigger enforcement in any case. However, that does not mean that the clause is enforceable.

            It is easy to circumvent such a ban. Eventually, the only option that MS has is suing. Then what?

            • Nick@mander.xyz · +1 · 11 months ago

              Why would the clause be unenforceable? It doesn’t violate any of the general principles of contract law. If you intentionally try to contract around terms that don’t violate any existing body of law and don’t run counter to the public interest, a court would have no problem enforcing the terms of the contract. They probably wouldn’t sue you or me in our individual capacity if we circumvented it. There’s a much greater chance of recovery if they go after a company which is pretty clearly using their service in bad faith. If ByteDance wanted to use OpenAI’s LLM to train their own, they could’ve negotiated such a license.