• 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • Yes, sorry, where I live it’s pretty normal for cars to be diesel powered. What I meant by my comparison was that a train, measured uncritically, uses more energy to run than a car due to its size and behavior, but that when compared fairly, the train has obvious gains and tradeoffs.

    Deepseek, as a ~600b model, is more efficient than the 400b llama model (a fairer size comparison), because it’s a mixture-of-experts model with far fewer active parameters, and even when run in the R1 reasoning configuration it is probably still more efficient than a dense model of comparable intelligence.
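
    (A back-of-the-envelope sketch in Python of why active parameters, not total size, drive per-token cost. The ~37b-active figure for deepseek and the 405b figure for llama are approximate published numbers, assumed here purely for illustration.)

        # Rough per-token compute: dense vs mixture-of-experts.
        # Assumed, approximate figures: DeepSeek-V3/R1 reports ~671B total
        # parameters with only ~37B active per token; Llama 3.1 405B is
        # dense, so all ~405B parameters fire on every token.

        def flops_per_token(active_params: float) -> float:
            """Forward-pass compute scales with ACTIVE params, ~2 FLOPs each."""
            return 2 * active_params

        deepseek_active = 37e9   # MoE: only a few experts run per token
        llama_dense = 405e9      # dense: every weight is used every token

        ratio = flops_per_token(llama_dense) / flops_per_token(deepseek_active)
        print(f"Dense 405B needs ~{ratio:.0f}x the compute per token of the MoE")
        # ~11x: total parameter count says little about per-token energy use.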



  • This article is comparing apples to oranges. The deepseek R1 model is a mixture-of-experts reasoning model with roughly 600 billion parameters, while the meta model is a dense 70 billion parameter model without reasoning, which performs much worse.

    They should be comparing deepseek to reasoning models such as openai’s o1. The two produce comparable results, but o1 costs significantly more to run. It’s impossible to know how much energy o1 uses, because it’s a closed-source model and openai doesn’t publish that information, but they charge a lot for it on their API.

    Tldr: It’s a bad-faith comparison, like comparing a train to a car and complaining about how much more diesel the train used on a 3-mile trip between stations.
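
    (To put a rough number on the price gap, here’s an illustrative Python sketch using launch-era API list prices per million tokens. The exact prices are assumptions and change often, so treat the output as order-of-magnitude only.)

        # Illustrative API cost comparison. Prices per 1M tokens are assumed
        # launch-era list prices, not authoritative figures.
        PRICES = {
            "openai o1":   {"input": 15.00, "output": 60.00},
            "deepseek R1": {"input": 0.55,  "output": 2.19},
        }

        def job_cost(model: str, in_tokens: int, out_tokens: int) -> float:
            p = PRICES[model]
            return (in_tokens * p["input"] + out_tokens * p["output"]) / 1e6

        # A reasoning-heavy job: 50k prompt tokens, 200k generated tokens.
        for model in PRICES:
            print(f"{model}: ${job_cost(model, 50_000, 200_000):.2f}")
        # openai o1: ~$12.75 vs deepseek R1: ~$0.47 -- roughly 27x at list price.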



  • https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/

    Here is a direct quote from openai:

    “In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.”

    It’s not a conspiracy. It was explicitly their policy not to have the ai discuss these subjects in meaningful detail leading up to the election, even when the facts were not up for debate. Anyone using gpt during that period was unlikely to receive meaningful information on anything Trump-related, such as the legitimacy of Biden’s election. I know because I tried.

    This is ostensibly there to protect voters from fake news. I’m sure it does in some cases, but I’m sure China would say the same thing.

    I’m not pro China, I’m suggesting that every country engages in these shenanigans.

    Edit: it is obvious that a 100-billion-dollar company like openai, with its multitude of partnerships with news companies, could have made gpt communicate accurate and genuinely critical news regarding Trump, but that would have been bad for business.



  • It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut-and-dried “guy used stable diffusion wrong” case. He was distributing it and grooming a kid.

    The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

    For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned toward abstract, “artistic” styles, but they can generate semi-realistic images.

    Now, let’s say a criminal group uses novelai to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of novelai’s userbase.

    Is every person who goes on there and types “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW”? Or “young man, no facial or body hair, naked, NSFW”?

    This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.