11
NoMad
4y

I sometimes feel like some people's comments on devRant are enough for a mental health crisis diagnosis. I wonder, how could we diagnose people through text? And could, let's say, ML do any better?

I mean, take abusive behaviors, for example. This may be an online community, but that doesn't stop some from abusing others, right? But the only form of communication here is text, right? What if you could diagnose... Not even that. What if you could inform a mental health expert about toxic behavior online? We do have a lot of "internet policing", but we have no "internet mental health help" for toxic behaviors and no attempts to mitigate them. I don't mean banning people. I mean, in its simplest form, literally tagging a psychotherapist in the convo.

Just thinking. :)

Comments
  • 1
    It is an interesting take, and I agree. There should be some sort of group or agency that labels the red flags indicative of abuse (in any form).

    Diagnosis, idk, but maybe get the authorities involved and, if the abuse continues, require the person to attend therapy or an appointment; if that's violated, then some other stuff.

    Point is, toxicity is trash in any environment, especially having to deal with it on top of all the shit we already go through daily.
  • 1
    That sounds roughly equivalent to a content reviewer job at Facebook >.<
  • 2
    @Lyym I don't even mean that. I mean the same work you do with talk therapy, you could do via text. If not that, then a preliminary diagnosis and sending them off to see a local therapist.
  • 1
    @SortOfTested I did not mean reviewing tho. I meant when others feel uneasy, or detect underlying issues, they tag a therapist. I do not want to talk to every single person going through a bad stage of depression, but I wish I could help them all out. Same with some people who are good people but just enjoy torturing others emotionally.
  • 0
    @NoMad oh, gotcha. Although for some people it could be complex. In my case I express more with my body movements than my words, so for people like that there could be a margin of error.
  • 2
    @Lyym I think you're underestimating ML 😛
  • 3
    Please don't do that. Enough humans call me a massive arsehole, I don't need an ML bot to do the same thing 😂

    Not sure how you'd train it though - can't think of how you'd do it unsupervised unless there's another similar bot to classify responses, and supervised... well, I ain't volunteering to categorise it all 😉 (There's a rough sketch of the supervised route at the bottom of the thread.)
  • 4
    @AlmondSauce
    Maybe. But let's say you're flagged. That would just mean that someone contacts you and the two of you talk about "your stressors", "your childhood", "your habits" and all the other crap you talk to a therapist about. Like, not to force anything, but that would make it much easier to help people than expecting them to "call a therapist and go to appointments".
  • 1
    This is true - and I guess it could use those responses as part of a feedback loop too...

    Hmm. I refer to my first point though 😉
  • 2
    @AlmondSauce sorry to disappoint you tho. You're not that big of an asshole. I generally stop talking to the asshole ones, because nothing positive comes from talking to them. And I'm still talking to you, as you can see 😜
  • 2
    @NoMad I guess I save my true arsehole nature for my dev colleague who gets paid more than me to do sod all apart from talk about the weather and provide me excuses over why literally no new work has been completed in an entire sprint...

    ...still, I digress. And only a month until I'm outta here!
  • 3
    There is indeed already a ton of work on attempting to classify mental illnesses using ML tools.

    Just a very generalized statement, but afaik the general understanding is that, as of now, the problem is too broad. However, you can hunt for very specific things, like signs of schizophrenia, with varying degrees of reliability.

    Data could be obtained via crowdsourcing, as is done with e.g. tinnitus research.

    Also, one should veeeery much be aware of the limitations of one's model. Otherwise you start misdiagnosing people, possibly causing real-life damage and creating a devRant variant of the Reddit Boston bombing debacle, where goodwill without expertise in the specific field led to real-life disaster. (There's a tiny sketch of that kind of sanity check at the bottom of the thread.)

    One more thing to consider is motivation. Will users prone to abusive behavior be at all willing to participate?

    Lastly, you need to be clear about what behavior is actually pathological. Some behaviors that make you or me uncomfortable may not be, and some others might.
  • 0
    @Maer
    Firstly, maybe a ton of work, because ML currently has issues expressing and understanding semantics in text. And this is levels above semantics. It's even above intent. Like, at this point you're picking out the theme of someone's mental state from a great bunch of texts, looking for specific tendencies.

    Secondly, data will be an issue. A mental health issue doesn't just pop up. So I'm not even talking about someone who's a little depressed this week, I'm talking about someone who has been spreading negativity for weeks or months as if it's a cry for help. Even then, convincing someone they're not okay is a great challenge. (There's a toy sketch of that kind of over-time aggregation at the bottom of the thread.)

    Lastly, it shouldn't rely on the opinion of a non-expert, which is why I said you tag the therapist, so they investigate the subject's posts and contact them for a closer look. It is not a "hey, your behavior was not okay" but a "hey, are you okay? Do you wanna talk about stuff?". Of course, they can refuse help, but it will at least be available to them.
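
A minimal sketch of the supervised route mentioned above: an ordinary text classifier trained on human-labelled posts. Everything here (the tiny inline dataset, the labels, the wording) is invented purely for illustration; none of it comes from devRant, and a real attempt would need expert-defined labels and far more data.

```python
# Toy supervised text classifier, assuming scikit-learn is available.
# The posts and labels below are made-up examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = "maybe worth a second look", 0 = ordinary ranting.
posts = [
    "ugh, another broken build, whatever",
    "nothing I do matters anymore, haven't slept in days",
    "my PM rejected the PR again, classic",
    "everyone would be better off without me",
]
labels = [0, 1, 0, 1]

# TF-IDF features + logistic regression: about the simplest supervised setup there is.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The output is only a score for a human to look at, never a diagnosis.
score = model.predict_proba(["I can't keep doing this, what's the point"])[0][1]
print(f"concern score: {score:.2f}")
```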
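
A tiny sketch of the "know your model's limits" point: before a classifier is allowed to flag anyone, check its precision on held-out, human-labelled data and refuse to use it if false positives would be common. The labels, predictions and threshold below are all assumptions made for illustration.

```python
# Sanity check on held-out data, assuming scikit-learn is available.
from sklearn.metrics import precision_score

# Hypothetical held-out ground truth vs. model predictions (1 = flagged).
y_true = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 1, 0, 0, 0, 0, 0]

precision = precision_score(y_true, y_pred)  # fraction of flags that were actually right
print(f"precision: {precision:.2f}")

MIN_PRECISION = 0.9  # assumed bar; misdiagnosing people is the expensive failure mode
if precision < MIN_PRECISION:
    print("not reliable enough to flag real people - keep it research-only")
```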
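
Lastly, a toy sketch of the "weeks or months, not one bad day" idea: aggregate per-post scores (e.g. from a classifier like the one above) over a rolling window and only tag a human therapist when the pattern is sustained. The window size, thresholds and scores are assumptions chosen just to make the example run.

```python
from collections import deque
from dataclasses import dataclass, field

WINDOW = 30          # number of recent posts considered (assumption: ~a month of activity)
THRESHOLD = 0.7      # per-post score above which a post counts as "concerning"
MIN_FRACTION = 0.5   # fraction of recent posts that must be concerning before anyone is tagged

@dataclass
class UserHistory:
    scores: deque = field(default_factory=lambda: deque(maxlen=WINDOW))

    def add_post_score(self, score: float) -> bool:
        """Record one post's score; return True when a human should be tagged."""
        self.scores.append(score)
        if len(self.scores) < WINDOW:
            return False  # not enough history yet; one bad week isn't a pattern
        concerning = sum(s > THRESHOLD for s in self.scores)
        return concerning / len(self.scores) >= MIN_FRACTION

# Fake scores just to exercise the logic.
history = UserHistory()
for score in [0.2, 0.9, 0.8, 0.95] * 10:
    if history.add_post_score(score):
        print("sustained pattern -> tag a human therapist; the bot never diagnoses")
        break
```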