Comments
-
Chewbanacas | 736 | 205d
Threatening the right people with the Epstein treatment goes a long way, I guess.
-
thebiochemic | 3012 | 205d
You can always install Stable Diffusion on your PC.
Although I think the latest data set is a cleaned one as well.
-
Hazarth | 9527 | 205d
Censorship cuts both ways, no matter if you're liberal or conservative. But hey, I guess this is what the vocal minority of the population wanted. Let's police speech to the extreme :)
-
kobenz | 868 | 205d
@Chewbanacas No worries, we'll see it very soon 🤣 Not even in my corruption-ravaged country has there been a convicted felon running for the presidency.
-
anux | 736 | 204d
Censorship is a code smell for a society; it is not the root problem.
In this case, there is still an advantage to it: nothing stops a human from creating the content anyway. Cartoonists, at least, can still maintain a creative edge.
-
tosensei | 8453 | 203d
@Hazarth Except that nothing about this is "censorship".
It's a private company deciding what can and can't be done with their private property (physical, as in servers, AND intellectual, as in the LLM).
-
Hazarth | 9527 | 203d
@tosensei True, but I'm not sure what to call it then. It's a company that scraped public data, articles, and conversations from the internet, trained a model on it, and then trained that model to reply to the parts of that data it doesn't agree with using a predefined warning.
It sounds a little bit like censorship, since we're talking about taking public data and... censoring it? It's not removed from the dataset, it's not filtered, it's covered over by additional RLHF training. "Censor" comes to mind as an intuitive word for this. Not all that different from blurring an unwanted part of a publicly posted image or video.
-
tosensei | 8453 | 203d
@Hazarth Question: is it censorship if YOU, for yourself, decide NOT to say something?
Because that's basically the same scenario, only scaled down. After all, you're just a neural net implemented in wetware.
I bet you _wouldn't_ call it censorship. The better term would be "self-moderation".
-
tosensei | 8453 | 203d
@Hazarth That being said: I believe the real reasoning behind this is that no caricature could ever hope to get even close to the level of "pathetic" that Trump inherently has. Nothing could compare to laughing at the real loser, so why waste processing power on trying?
-
Hazarth | 9527 | 203d
@tosensei I see what you're saying, but I still feel it's different. LLMs are much closer to Google/software than to humans. The LLM didn't decide anything; it was hotwired to respond this way by a round table of engineers. It's much closer to the same round table deciding what can and can't show up on Google. I think we still collectively agree that's censorship, no?
-
tosensei | 8453 | 202d
@Hazarth I still disagree.
The LLM was _trained_ not to provide certain content, just like _you_ were trained by your parents not to do certain things (I hope, at least).
And I still think that "moderation" and "censorship", while being somewhat adjacent, are fundamentally different things.
Related Rants
Seriously, ChatGPT?? What other use could you have in this historic moment?? Guess OpenAI forgot to train opportunism into its models.
tags: rant, trump