Skyrim VAs are speaking out about the spread of pornographic AI mods.

  • Correct me if I’m wrong, but I don’t believe voices can be copyrighted. After all, if a human can replicate someone else’s voice, they get booked as professional impersonators rather than sued into oblivion.

    The difference here is that the voice replication happens through AI now. Would we see the same outrage if the voices in these mods were just people who sounded like the original voice actors?

    Copyright law needs to be fortified, or a lot of voice actors are about to get screwed over big time. AI voice replication by modders is only the beginning; once big companies find the output acceptable, these people may very well lose their jobs.

    • Rossel@sh.itjust.works · 1 year ago

      The legal grounds are that the AI is trained on voice lines, which are indeed copyrightable material. Not the voice itself, but the delivered lines.

      • FaceDeer@kbin.social · 1 year ago

        The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

        If future output from the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation, and things become a huge mess.

        • ChemicalRascal@kbin.social · 1 year ago

          > The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

          That’s a HUGE assumption you’ve made, and certainly not something that has been tested in court, let alone found to be true.

          In the context of existing legal precedent, there’s an argument to be made that the resulting model is itself a derivative work of the copyright-protected works: even if it does not literally contain an identifiable copy, it is derived from those works in the ordinary meaning of the term.

          > If future output from the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation, and things become a huge mess.

          A key distinction here is that a human brain is not a work, and in that sense, a human brain learning things is not a derivative work.

          • FaceDeer@kbin.social · 1 year ago

            > That’s a HUGE assumption you’ve made

            No, I know how these neural nets are trained and how they’re structured. They really don’t contain any identifiable copies of the material used to train them; the sketch below shows roughly what a trained model actually consists of.

            > and certainly not something that has been tested in court

            Sure, this is brand new tech. It takes time for the court cases to churn their way through the system. If that’s going to be the ultimate arbiter, though, then what’s to discuss in the meantime?
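
            As a rough sketch of that point (a deliberately tiny toy model, nothing like a real voice synthesizer; the dimensions and data are invented purely for illustration): after training, the only artifact is a small parameter tensor, and no training sample is stored in it verbatim.

            ```python
            # Toy illustration, not any real TTS pipeline: fit a tiny linear
            # model with plain gradient descent, then look at what the saved
            # "model" actually is.
            import numpy as np

            rng = np.random.default_rng(0)

            # Stand-in for training audio: 1,000 feature vectors of dimension 64.
            X = rng.normal(size=(1000, 64))
            y = X @ rng.normal(size=64) + 0.1 * rng.normal(size=1000)

            w = np.zeros(64)             # the entire "model" is this one vector
            for _ in range(500):         # gradient descent on mean squared error
                grad = 2 * X.T @ (X @ w - y) / len(X)
                w -= 0.01 * grad

            # The checkpoint holds 64 numbers; the training set holds 64,000.
            # Nothing in `w` is a verbatim copy of any training sample.
            print(w.shape, X.size)       # (64,) 64000
            ```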

      • Skull giver@popplesburger.hilciferous.nl · 1 year ago

        That’s a decent theoretical legal basis, but the voice lines are the property of the game company, not the voice actors.

        If this interpretation of copyright law for AI models turns out to be the outcome of the two (three?) big AI lawsuits related to Stable Diffusion, most AI companies will be completely fucked. Everything from Stable Diffusion to ChatGPT 4 will instantly be in trouble.

  • TheChurn@kbin.social · 1 year ago

    The porn bit gets headlines, but it isn’t the core of the issue.

    All of these models retain a representation of the original training data in their parameters, which makes training a violation of copyright unless it was explicitly authorized. The law just hasn’t caught up yet, since it is easy to obfuscate this fact with model mumbo-jumbo in between feeding in voices and generating arbitrary output.
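
    As a toy sketch of what “retain a representation” can mean (a deliberately tiny linear autoencoder, nothing like a production voice model; the setup is invented for illustration): overfit it on a handful of samples and its parameters alone can regenerate those samples almost exactly.

    ```python
    # Toy illustration of the memorization claim: a tiny linear autoencoder
    # overfit on three samples until its weights alone can reproduce them.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 32))        # stand-in for three "voice clips"

    E = 0.1 * rng.normal(size=(32, 3))  # encoder: 32 features -> 3-dim code
    D = 0.1 * rng.normal(size=(3, 32))  # decoder: 3-dim code -> 32 features

    lr = 0.01
    for _ in range(20000):              # gradient descent on 0.5 * ||XED - X||^2
        H = X @ E                       # codes for the three samples
        R = H @ D - X                   # reconstruction error
        gD = H.T @ R
        gE = X.T @ (R @ D.T)
        D -= lr * gD
        E -= lr * gE

    # After enough overfitting, the weights regenerate the training data
    # almost exactly -- the "representation" lives in E and D themselves.
    print(np.abs(X @ E @ D - X).max())  # should be close to zero
    ```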

    The big AI players are betting that they will be able to entrench themselves with a massive data advantage before regulation locks down training and effectively kills any future competition. They will already have their models, and the worst case at that point is paying some royalties to people whose data was used in training.

    • LoafyLemon@kbin.social · 1 year ago

      I’d like to know how you expect governments or even private institutions to enforce this, since most countries won’t care about foreign laws.

      • Ragnell@kbin.social · 1 year ago

        They can forbid companies from using AI to do business in their areas, like the EU is doing with privacy laws. Google not being able to use its chatbot search in the US would be a big deal.

        • LoafyLemon@kbin.social · 1 year ago

          Sounds to me like you’d first have to prove someone used AI in their work, which makes this difficult to enforce realistically.

          • Ragnell@kbin.social · 1 year ago

            Not hard when they’re advertising it right now. And if they do try to keep it secret, all the government will have to do is subpoena a look at the backend.

            But honestly, since when do we just not have laws because something is hard to prove? It’s hard to prove someone INTENDS to murder someone, but that’s a really important legal distinction. It’s hard to prove someone’s faking a mental illness, but that’s another thing that’s got laws around it. It’s really hard to prove sexual assault, but that still needs to be outlawed too.

            Compared to that stuff? Proving someone used an AI is going to be a piece of cake with all the data that gets collected and the amount of work it would take to REMOVE the AI from a business process before the cops get there.

            • LoafyLemon@kbin.social · 1 year ago

              Enforcing a potential AI ban in work environments is unrealistic right now, because it’s challenging both to prove that AI was actually used for work purposes and then to enforce such a ban. Let’s break it down in simple terms.

              Firstly, proving that AI was used for work is not straightforward. Unlike physical objects or traditional software, AI systems often operate behind the scenes, making it difficult to detect their presence or quantify their impact. It’s like trying to catch an invisible culprit without any clear evidence.

              Secondly, even if someone suspects AI involvement, gathering concrete proof can be tricky. AI technologies leave less visible traces compared to conventional tools or processes. It’s akin to solving a mystery where the clues are scattered and cryptic.

              Assuming one manages to establish AI usage, the next hurdle is enforcing the ban effectively. AI systems are often complex and interconnected, making it challenging to untangle their influence from the overall work environment. It’s like trying to remove a specific ingredient from a dish without affecting its overall taste or texture.

              Moreover, AI can sometimes operate subtly or indirectly, making it difficult to draw clear boundaries for enforcement. It’s like dealing with a sneaky rule-breaker who knows how to skirt around the regulations; all you have to do is ask.

              Considering these challenges, implementing a ban on AI in work environments becomes an uphill battle. It’s not as simple as flipping a switch or putting up a sign. Instead, it requires navigating through a maze of complexity and uncertainty, which is no easy task.