Comments
That sounds like a failed psych project I did, associating affective values with choice of words. Too many variables to be useful.

What you're describing is very unlikely to ever work with our present knowledge of people, as many, many factors affect language use (age, for one). You'd do better to delve into econ and lawsuit data.
JsonBoa: @AvatarOfKaine agreed, when that data is accurate. However, there are some parts of the world where you unfortunately cannot put too much trust in official data sources. For those regions, we had to get creative.
Our AI is, indeed, a sentiment analysis tool. It's basically a teenager, looking for shocking terms in opposite-sentiment articles. We thought that a larger resulting score on terms associated with social unrest would correlate strongly with logistical disruptions.
@Hazarth it's not. But we trained our AI on many kinds of articles. We believe it associated the "wellness" terms in our feel-good brochure with the same terms in opposite-sentiment news articles about suicide, and then yielded the most valuable term set from the latter dataset.
Obs: when I say "articles", that includes Twitter, Weibo and Facebook posts, automated image descriptions from Instagram, TikTok, et al., the whole Zoomer shebang.
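A lexicon-based toy version of that "teenager looking for shocking terms" idea could be sketched like this. The lexicon, the term polarities and the scoring rule are all hypothetical illustrations, not the actual model being described:

```python
from collections import Counter

# Hypothetical toy lexicon: +1 for positive-polarity terms, -1 for negative.
LEXICON = {"wellness": 1, "growth": 1, "strike": -1, "blockade": -1}

def dissonance_scores(articles):
    """Score terms that appear in articles whose overall sentiment
    opposes the term's own polarity ("shocking terms")."""
    scores = Counter()
    for text in articles:
        tokens = [t.lower().strip(".,!?\"") for t in text.split()]
        polarities = [LEXICON[t] for t in tokens if t in LEXICON]
        if not polarities:
            continue  # no lexicon terms, no sentiment signal
        article_sentiment = 1 if sum(polarities) > 0 else -1
        for t in tokens:
            # A positive term in a negative article (or vice versa) is dissonant.
            if t in LEXICON and LEXICON[t] != article_sentiment:
                scores[t] += 1
    return scores
```

Feeding such a scorer a feel-good brochure alongside grim news coverage would make the brochure's "wellness" vocabulary light up, which matches the uncanny behavior described above.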
JsonBoa: @magicMirror most definitely. Our point is (or, rather, "was", since we're pulling the plug on the faulty project):
Normal day: organic Garbage In >> plastic Garbage Out.
Day before the revolution: organic Garbage In >> empty vodka bottles out.
We were waaaaaay too ambitious. The signal is too subtle, and even if distilled perfectly it could still be inaccurate. Well, back to the brainstorming room.
Hazarth: @JsonBoa fair enough. And to be fair, suicide rates actually are climbing, so the AI is not even wrong!
@JsonBoa oh, most certainly, data is fucked at the moment. As is the world, and this is a very familiar conversation.
I got to watch aging robot people in Chicago all over again and wonder whether life is ever going to just settle down, or if I'll be going in a very long, large circle over and over.
I certainly don't feel that this is the way things should be going, but whatever, I suppose. I am an ant on a giant hill, as are all those who believe they are in control. Who knows when this started, but in the end it's a matter of this fucked-up country being in a never-ending cycle of idiocy.
Anyway, back to this problem, which I used to dream of as well. The idea is that there is going to be large error in either case. It's more about the availability of data, how it affects the reader, and how much they will be exposed to the data, if at all. If everyone thinks it's bullshit, that's another problem, one which exists presently.
So, for the last year or so, we've been playing with a natural language AI.
The goal was to predict port, truck and rail service disruptions due to social unrest.
The trick here is that our AI would "read between the lines" of today's news articles and spit out keywords that were likely to appear in near-future articles, thus giving us an early warning before some union or army starts blockading roads.
It... did not work as intended. But some very weird results came out.
Apparently, we made a robotic "kid who screams that the emperor has no clothes", yielding unlikely (but somewhat expected) keywords when fed collections of articles.
We gave it marketing content about our company. It replied "high suicide rate".
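The early-warning idea described above can be sketched, very roughly, as a keyword-spike detector: compare today's term frequencies against a baseline corpus and flag watchlist terms that surge. The watchlist, the add-one smoothing and the threshold below are all hypothetical illustrations, not the actual system:

```python
from collections import Counter

# Hypothetical watchlist of unrest-related terms.
UNREST_TERMS = {"strike", "blockade", "protest", "curfew"}

def tokenize(text):
    return [t.lower().strip(".,!?\"") for t in text.split()]

def emerging_terms(baseline_articles, todays_articles, min_ratio=2.0):
    """Flag watchlist terms whose frequency today is at least min_ratio
    times their baseline frequency (add-one smoothed to avoid div-by-zero)."""
    base = Counter(t for a in baseline_articles for t in tokenize(a))
    today = Counter(t for a in todays_articles for t in tokenize(a))
    flagged = {}
    for term in UNREST_TERMS:
        ratio = (today[term] + 1) / (base[term] + 1)
        if ratio >= min_ratio:
            flagged[term] = ratio
    return flagged
```

A real system would need far more than frequency ratios (as the thread concludes, the signal is subtle and the input data is noisy), but this captures the "keywords likely to appear in near-future articles" intuition in its simplest form.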