6
JsonBoa
2y

So, for the last year or so, we've been playing with a natural language AI.
The goal was to predict port, truck, and rail service disruptions due to social unrest.
The trick was that our AI would "read between the lines" of today's news articles and spit out keywords likely to appear in near-future articles, thus giving us an early warning before some union or army starts blockading roads.
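
To make that concrete: think "score the terms whose relative frequency is rising between two time windows, and treat the fastest risers as candidate near-future keywords". The sketch below is a toy illustration of that idea only; the corpus and every name in it are invented, not our actual pipeline.

```python
# Toy sketch of the "early warning" idea: flag terms whose relative
# frequency is rising between two time windows, on the theory that
# tomorrow's headline keywords are already faintly present today.
# Illustrative only -- not the pipeline described in this post.
from collections import Counter
import re

def term_counts(articles):
    """Bag-of-words counts over a list of article strings."""
    counts = Counter()
    for text in articles:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def rising_terms(last_week, this_week, min_count=2):
    """Terms whose relative frequency grew the most between windows."""
    old, new = term_counts(last_week), term_counts(this_week)
    old_total = sum(old.values()) or 1
    new_total = sum(new.values()) or 1
    score = {
        term: (count / new_total) / ((old[term] + 1) / old_total)  # add-one smoothing
        for term, count in new.items()
        if count >= min_count
    }
    return sorted(score, key=score.get, reverse=True)

last_week = ["port traffic normal, wage negotiations ongoing"] * 3
this_week = ["union vote pending at the port",
             "union leaders meet, strike ballot looms",
             "union says blockade possible"]
print(rising_terms(last_week, this_week))  # -> ['union']
```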

It... did not work as intended. But some very weird results came out.
Apparently, we made a robotic "kid who screams that the emperor has no clothes": when fed collections of articles, it yields unlikely (but somewhat expected) keywords.

We gave it marketing content about our company. It replied "high suicide rate".

Comments
  • 3
That sounds like a failed psych project I did, associating affective values with word choice

    Too many variables to be useful
  • 2
What you’re describing is very unlikely to ever work with our present knowledge of people, as many, many factors affect language usage

You’d do better to delve into econ and lawsuit data
  • 0
And police records. These affect people in general more
  • 0
    I'm confused, why is the word "suicide" included in your marketing data at all?
  • 1
@AvatarOfKaine agreed, when that data is accurate. However, there are some parts of the world where you unfortunately cannot put too much trust in official data sources. For those regions, we had to get creative.
    Our AI is, indeed, a sentiment analysis tool. It's basically a teenager, looking for shocking terms in opposite-sentiment articles. We expected that a high resulting score on terms associated with social unrest would correlate strongly with logistical disruptions.

    @Hazarth it's not. But we trained our AI on many kinds of articles. We believe it associated the "wellness" terms in our feel-good brochure with the same terms in opposite-sentiment news articles about suicide, and then yielded the most valuable term set from the latter dataset.
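
    Roughly, the mechanism looked like the sketch below (a deliberately minimal toy: the lexicon, corpus, and names are stand-ins I'm making up here, not our production system):

    ```python
    # Minimal sketch of the cross-sentiment association described above:
    # take a positive-sentiment document, find negative-sentiment articles
    # that share vocabulary with it, and surface those articles' remaining
    # terms. Toy lexicon and corpus only -- not the real system.
    import re

    POSITIVE = {"wellness", "happy", "thriving"}  # toy sentiment lexicon
    NEGATIVE = {"suicide", "despair", "crisis"}

    def tokens(text):
        return set(re.findall(r"[a-z]+", text.lower()))

    def sentiment(text):
        t = tokens(text)
        return len(t & POSITIVE) - len(t & NEGATIVE)

    def opposite_sentiment_terms(document, corpus):
        """Terms contributed by articles whose sentiment opposes the
        document's and whose vocabulary overlaps the document's."""
        doc_terms, doc_sent = tokens(document), sentiment(document)
        hits = set()
        for article in corpus:
            if sentiment(article) * doc_sent < 0 and tokens(article) & doc_terms:
                hits |= tokens(article) - doc_terms  # the "shocking" new terms
        return hits

    brochure = "Our team promotes wellness and a happy, thriving workplace."
    news = ["Study ties poor workplace wellness to despair and a rising suicide rate."]
    print(opposite_sentiment_terms(brochure, news))  # includes 'suicide'
    ```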

    Note: when I say "articles", that includes Twitter, Weibo, and Facebook posts, automated image descriptions from Instagram, TikTok, et al., the whole Zoomer shebang.
  • 0
    @magicMirror most definitely. Our point is (or, rather, "was", since we're pulling the plug on the faulty project):
    Normal day: organic Garbage In >> plastic Garbage Out.
    Day before the revolution: organic Garbage In >> empty vodka bottles out.

    We were waaaaaay too ambitious. The signal is too subtle, and even if distilled perfectly, it still might not be accurate. Well, back to the brainstorming room.
  • 1
    @JsonBoa fair enough. And to be fair, suicide rates are actually climbing, so the AI is not even wrong!
  • 1
    @JsonBoa oh, most certainly, data is fucked at the moment.
    As is the world, and this is a very familiar conversation.
    I got to watch aging robot people in Chicago all over again and wonder whether life is ever going to just settle down, or if I'll be going in a very long, large circle over and over.
    I certainly don't feel that this is the way things should be going, but whatever, I suppose. I am an ant on a giant hill, as are all those who believe they are in control. Who knows when this started, but in the end it's a matter of this fucked-up country being in a never-ending cycle of idiocy now.
    Anyway, back to this problem, which I used to dream of as well. The idea is that there is going to be large error in either case. It's more about the availability of data, how it affects the reader, and how much they will be exposed to the data, if at all. If everyone thinks it's bullshit, that's another problem, one which exists presently.
  • 1
    I think express.co.uk might be using your AI to create their main news articles.