If you read a book or watch a movie and get inspired by it to create something new and different, is that plagiarism and copyright infringement?
If that were the case, the majority of stuff nowadays would be plagiarism and copyright infringement; generally, people get inspired by someone or something.
You do realize that “AI” is just a marketing term, right? None of these models learn, have intelligence, or create truly original work. In fact, if people stopped creating original content, these models would stagnate or enter a feedback loop, poisoning themselves with their own erroneous responses.
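For what it’s worth, that feedback loop is easy to show with a toy experiment. Here’s a minimal sketch (Python standard library only; the vocabulary, Zipf-ish weights, and sample sizes are arbitrary, just for illustration): a “model” that memorizes token frequencies and is then retrained on its own samples. A rare token that misses one sampling round gets probability zero and can never come back, so each generation only narrows what the last one produced.

```python
import random
from collections import Counter

random.seed(42)

# Zipf-ish "human" corpus: a few common tokens, a long tail of rare ones.
vocab = [f"tok{i}" for i in range(100)]
weights = [1 / (i + 1) for i in range(100)]
corpus = random.choices(vocab, weights=weights, k=500)

for gen in range(15):
    # "Train": memorize the empirical token frequencies of the corpus.
    freqs = Counter(corpus)
    print(f"gen {gen:2d}: {len(freqs)} distinct tokens")
    # Next generation trains only on the previous model's own output,
    # so any token that drew zero samples is gone for good.
    tokens, counts = zip(*freqs.items())
    corpus = random.choices(tokens, weights=counts, k=500)
```

Run it and the count of distinct tokens only ever shrinks: the tails of the distribution disappear first, which is the stagnation/self-poisoning effect in miniature.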
Except that the information it gives you is often objectively incorrect, and it makes up sources (this has happened to me many times). And no, it can’t do what a human can: it doesn’t interpret the information it gets, and it can’t reach new conclusions based on what it “knows”.
I honestly don’t know how you can even begin to compare an LLM to the human brain.
If you quote the sources and write it in your own words, I believe it isn’t; AFAIK “AI” tools already do that.
Copilot lists its sources. The problem is that half of them are completely made up, and if you click the links they take you to the wrong pages.
It definitely does not cite sources and use its own words in all cases, especially in visual media generation.
And in the proposed scenario, I did write that the student plagiarizes the copyrighted material.
So your question is “is plagiarism plagiarism”?
No, that is not the question nor a reasonable interpretation of it.
AIs don’t think. They copy with extra steps.