• Politically Incorrect

    If you quote the sources and write it in your own words, I believe it isn’t. AFAIK, “AI” already does that.

    • @ominouslemon@lemm.ee

      Copilot lists its sources. The problem is that half of them are completely made up, and if you click the links, they take you to the wrong pages.
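
      A quick way to sanity-check that claim is to request each cited link and flag the ones that don’t resolve. This is a minimal sketch, not real Copilot output: the URLs below are hypothetical placeholders, and the check only catches dead links, not pages that exist but say something else.

      ```python
      # Spot-check a chatbot's cited URLs: flag links that fail to resolve.
      # The URLs here are hypothetical stand-ins, not real Copilot citations.
      import requests

      cited_urls = [
          "https://example.com/real-page",     # placeholder citation
          "https://example.com/made-up-page",  # placeholder citation
      ]

      for url in cited_urls:
          try:
              resp = requests.head(url, allow_redirects=True, timeout=10)
              status = resp.status_code
          except requests.RequestException as exc:
              status = f"error: {exc}"
          print(f"{url} -> {status}")
      ```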

    • Uninvited Guest

      It definitely does not cite sources and use its own words in all cases, especially in visual media generation.

      And in the proposed scenario, I did write that the student plagiarizes the copyrighted material.

      • Politically Incorrect

        If you read a book or watch a movie and get inspired by it to create something new and different, is that plagiarism and copyright infringement?

        If that were the case, the majority of work nowadays would be plagiarism and copyright infringement; generally, people get inspired by someone or something.

        • @potustheplant@feddit.nl

          You do realize that “AI” is just a marketing term, right? None of these models learn, have intelligence, or create truly original work. As a matter of fact, if people stopped creating original content, these models would stagnate or enter a feedback loop that poisons them with their own erroneous responses (see the toy sketch below).

          AIs don’t think. They copy with extra steps.
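
          To illustrate the feedback-loop point, here is a toy sketch. It assumes a trivial Gaussian stand-in for a generative model, nothing like a real LLM training pipeline: each “generation” is trained only on the previous generation’s synthetic output, and the fitted distribution visibly collapses.

          ```python
          # Toy "model collapse" loop: repeatedly fit a Gaussian to samples
          # drawn from the previous fit, with no fresh human-made data
          # entering the loop. A stand-in illustration, not a real LLM.
          import numpy as np

          rng = np.random.default_rng(0)

          mu, sigma = 0.0, 1.0   # the "original content": a standard normal
          n_samples = 100        # synthetic corpus size per generation (arbitrary)

          for generation in range(500):
              synthetic = rng.normal(mu, sigma, n_samples)   # generate from current model
              mu, sigma = synthetic.mean(), synthetic.std()  # retrain on own output

          print(f"after 500 generations: mu={mu:.3f}, sigma={sigma:.3f}")
          # sigma ends far below 1.0: the model drifts toward its own mean
          # and loses the diversity of the original data.
          ```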

            • @potustheplant@feddit.nl

              Except that the information it gives you is often objectively incorrect, and it makes up sources (this has happened to me many times). And no, it can’t do what a human can: it doesn’t interpret the information it receives, and it can’t reach new conclusions based on what it “knows”.

              I honestly don’t know how you can even begin to compare an LLM to the human brain.