16 · stacked · 5y

Tired of hearing "our ML model has 51% accuracy! That's a big win!"

No, asshole, what you just built is a fucking random number generator, and a crappy one at that.

You cannot meaningfully do worse than 50% on a binary classification task. If your model were 10% accurate, that would actually be a win: just invert its output and you instantly get 90% accuracy.
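
To make the inversion trick concrete, here is a quick numpy sketch (the labels and the "10% accurate" model are simulated, just to show that the two accuracies always sum to 1):

    import numpy as np

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)                         # ground-truth binary labels
    y_bad = np.where(rng.random(1000) < 0.10, y_true, 1 - y_true)  # simulated ~10%-accurate predictions

    acc_bad = (y_bad == y_true).mean()             # ~0.10
    acc_inverted = ((1 - y_bad) == y_true).mean()  # ~0.90: just flip every prediction
    print(acc_bad, acc_inverted)                   # accuracy + inverted accuracy == 1.0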

50% accuracy is what you get by flipping coins. And you can achieve that with 1 line of code.
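
Here is that one line (coin_flip_predict is a made-up name; the rest is only simulated data to score it against):

    import numpy as np

    coin_flip_predict = lambda X: np.random.default_rng().integers(0, 2, size=len(X))  # the whole "model"

    X_test = np.zeros((10_000, 3))                                 # any features will do; they are ignored
    y_test = np.random.default_rng(1).integers(0, 2, size=10_000)  # simulated ground truth
    print((coin_flip_predict(X_test) == y_test).mean())            # ~0.50, statistically the same as "51%"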

Comments
  • 10
    I should publish a paper titled "Generating Random Numbers Using Failed Machine Learning Attempts".

    Abstract: "The random number generators presented here are slow and predictable. Their implementation details are long and complicated. They should not be used by anyone."
  • 2
    OP, let's work together on an article or blog post that we can call "what happens when web devs refuse to learn math for ML"

    I seriously believe that the biggest issue is people refusing to learn the essential mathematical methods that would let them see how badly they are failing. Sadly... people just eat that shit up
  • 1
    @NoMad ooooh who said anything about teaching? :P it will be more of a diss
  • 2
    @AleCx04 the "web devs" in my case are senior software engineers, and some of them are currently taking ML courses (those probably cause more damage than anything else)
  • 2
    @NoMad lol, I wouldn't be surprised if outputting only 1s gave 100% accuracy.

    I've seen people test only with positive cases, completely forgetting about the negative ones. It may sound hilarious, but it happens very frequently where I work, because we store only the positives (which makes sense for what our service does, but not if you want to do data science: for that you need the raw values from the source).
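
A minimal sketch of that last pitfall, assuming a test set built only from the stored positives:

    import numpy as np

    y_test = np.ones(500, dtype=int)      # "test set" containing only positive cases
    always_one = np.ones_like(y_test)     # a model that ignores its input and always predicts 1

    print((always_one == y_test).mean())  # 1.0 -> a meaningless 100% accuracy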