
Seriously, ChatGPT?? What other use could you have in this historic moment?? Guess OpenAI forgot to train opportunism into its models.

Comments
  • 2
    nobody is surprised by this. Even DALL-E 2 refused to generate well-known people.
  • 0
    Threatening the right people with the Epstein treatment goes a long way I guess
  • 0
    @Chewbanacas was ChatGPT on the fight list?!
  • 2
    you can always install Stable Diffusion on your PC (a minimal sketch below).

    Although I think the latest dataset is a cleaned one as well.
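
    For reference, a minimal sketch using the Hugging Face diffusers library. The checkpoint name is only an example, and this assumes a CUDA-capable GPU:

    ```python
    # Minimal local Stable Diffusion run via Hugging Face diffusers.
    # The checkpoint ID is an example; swap in whichever model you prefer.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # example checkpoint
        torch_dtype=torch.float16,        # half precision to save VRAM
    )
    pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

    # Generate one image from a text prompt and save it to disk.
    image = pipe("a newspaper-style political caricature").images[0]
    image.save("caricature.png")
    ```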
  • 0
    Censorship cuts both ways, no matter if you're liberal or conservative. But hey, I guess this is what the vocal minority of the population wanted. Let's police speech to the extreme :)
  • 1
    @Chewbanacas no worries, we'll see it very soon 🤣 Not even in my corruption-ravaged country has there been a convicted felon running for president
  • 1
    Censorship is a code smell for a society. It is not the root problem.

    In this case, there is still an advantage to it: nothing stops a human from creating it. Cartoonists, at least, can still maintain a creative edge.
  • 1
    @kobenz
    >Choice 1: convicted felon loon
    >Choice 2: a senile old fart
    Lmao
    I probably can’t say much though, I live in a place that has been run by war criminals and their descendants for decades.
    Seems like politics always attracts the worst kind of trash.
  • 0
    @ars1 I thought Biden and Trump were about the same age
  • 0
    @Hazarth except that nothing about this is "censorship".

    it's a private company deciding what can and can't be done with their private property (physical, as in servers, AND intellectual, as in the LLM).
  • 0
    @tosensei true, but I'm not sure what to call it then. It's a company that scraped public data, articles and conversations from the internet, trained a model on it, and then trained that model to reply to the parts of that data they don't agree with using a predefined warning.

    It sounds a little bit like censorship since we're talking about taking public data and... censoring it? It's not removed from the dataset, it's not filtered, it's covered over by additional RLHF training (toy sketch below). "Censor" comes to mind as an intuitive word for this. Not all that different from blurring an unwanted part of a publicly posted image or video.
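
    To make that concrete, here's a toy sketch of "trained to reply with a predefined warning", framed as supervised fine-tuning pairs. Purely illustrative; this is not OpenAI's actual pipeline, and every string in it is invented:

    ```python
    # Toy illustration: the scraped data stays in the corpus untouched;
    # the model is merely tuned so certain prompts map to a canned warning.
    # Prompts and refusal text below are invented examples.
    REFUSAL = "I'm not able to create images of real public figures."

    flagged_prompts = [
        "draw a caricature of <public figure>",
        "generate a mocking image of <public figure>",
    ]

    # Fine-tuning pairs: each flagged prompt maps to the same warning.
    fine_tune_examples = [
        {"prompt": p, "completion": REFUSAL} for p in flagged_prompts
    ]

    for example in fine_tune_examples:
        print(example["prompt"], "->", example["completion"])
    ```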
  • 0
    @Hazarth question: is it censorship if YOU, for yourself, decide NOT to say something?

    because that's basically the same scenario, only scaled down. after all you're just a neural net implemented in wetware.

    i bet you _wouldn't_ call it censorship. the better term would be "self-moderation".
  • 1
    @Hazarth that being said: i believe the real reasoning behind this is that no caricature could ever hope to even get close to the level of "pathetic" that trump inherently has. nothing could compare to laughing at the real loser, so why waste processing power on trying?
  • 0
    @tosensei I see what you're saying but I'd still feel it's different. LLMs are much closer to Google/software than to humans. The LLM didn't decide anything. It was hotwired to respond this way by a round table of engineers. It's much closer to the same round table deciding what can and can't show up on Google. I think we still collectively agree that's censorship, no?
  • 0
    @Hazarth i still disagree.

    the LLM was _trained_ to not provide certain content. just like _you_ were trained by your parents not to do certain things (i hope, at least).

    and i still think that "moderation" and "censorship", while being somewhat adjacent, are fundamentally different things.