• @ImplyingImplications@lemmy.ca

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people to change their minds than a real person. AI has become an overpowered tool in the hands of propagandists.

    • Joe

      It would be naive to think this isn’t already in widespread use.

    • ArchRecord

      To be fair, I believe their research measured how convincing the AI was compared to other Reddit commenters, rather than, say, an actual person you’d see doing the work for a government propaganda arm, with the training and skill set to distribute propaganda effectively.

      Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media, and especially Reddit, are often given after only skimming a comment, with people scrolling past without reading the whole thing. The bots may not have been optimized for actually convincing people so much as for making the first part of the comment feel upvote-able, while the latter part was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often far cheaper to run than the salaries of human beings.