7

Machine learning is fine IFF MACHINE LEARNING IS USED APPROPRIATELY.

Comments
  • 3
    Most things are fine when used appropriately. So what is your point?
  • 2
    @Oktokolo: Using machine learning on low-dimensional datasets is relatively frequent and prevents gaining proper insights into these datasets.
  • 1
    @varikvalefor
    Well, regular greyscale images are pretty low-dimensional and machine learning really kicks ass when used for OCR on them.
    I wonder what proper insights its use prevents here. It isn't like OCR hadn't been attempted algorithmically, with pretty limited success, for decades before the use of deep convolutional neural networks became feasible...
  • 1
    @Oktokolo: "Low-dimensional" is used here such that every low-dimensional dataset can be reasonably visualised as a graph.
    This information should have been made clear relatively early.
  • 2
    @varikvalefor could you give an example? I fail to see what this has to do with applying ML. Low dimensional data can still have difficult structure which any other method is going to absolutely struggle with.
  • 1
    @varikvalefor
    Uhh, so there are actually people who use input layers containing only 10 neurons?
  • 1
    @Oktokolo: Such men exist. Laziness is one hell of a drug.
  • 1
    @varikvalefor
    Designing and training a model isn't that easy. I would expect that to be more of an "if you only have a hammer..." problem rather than laziness.
  • 1
    @RememberMe: Let $K$ be a set of sets of sensor outputs.

    Additionally, let there exist a desired output set $J$.

    If $K$ can be processed to accurately predict $J$ and $K$ is sufficiently simple, then pure logic and a bit of subject-related knowledge should yield a relatively terse formula which calculates $J$ when given only $K$.
  • 1
    @Oktokolo: A fair point is made; "laziness" may not be the most fitting term.
    Unwillingness to think about the underlying relationships between data attributes and to create formulas which describe these relationships is a powerful thing.
  • 0
    @varikvalefor
    Yes. Imagine a statistician venturing into ML and artificial neural networks in Python - which he only learned to escape the salty soils of R...
    Maybe he knows shit about logic and algorithms. But most statisticians can follow a tutorial to get a simple artificial neural network running - and they actually have enough of a maths background to grasp what is going on.
  • 2
    @varikvalefor @Oktokolo
    1. We don't *need* a relatively terse formula in many cases anymore. If you can afford to throw ML at a problem (e.g. it doesn't need to run in a constrained environment, or you have fast enough acceleration: edge accelerators, a SoC with an NPU, or datacenters with FPGAs, TPUs, or GPUs), there's a good chance it'll do better (assuming complex relationships between input and output). My phone is fast enough to run sizeable deep neural nets very efficiently because it has ML-specific accelerators in its SoC; I'm going to take advantage of that to get better performance in my application.

    2. Of course, with everything you should know what you're doing, and ideally everybody would, but if a deep neural net lets me solve a problem that I would otherwise need a ton of domain knowledge for (e.g. finding a line in an image, which you can do through a Hough transform but also via a DNN - but then I need to know what a Hough transform is), then that is still very valuable. It may not be the most efficient solution, but if it's the difference between not being able to do it at all and having something running that does something useful, it's still valuable. And if the efficiency of that particular thing is not the make-or-break factor for the application, then eh, fuckit, I'm going to use ML even though there are better ways. It can always be replaced later if it becomes a problem.

    (Edit: I had missed you saying that "if K be processed efficiently" so I removed the first point)
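    The line-finding example mentioned above can be sketched as a minimal Hough transform in pure Python (a toy illustration, not from the thread: the image size, angle resolution, and diagonal test line are all assumptions). Every "on" pixel votes for all (theta, rho) parameterisations of lines passing through it; the accumulator bin with the most votes is the detected line:

```python
import math

SIZE = 32
# Toy binary image: the "on" pixels form the diagonal line y = x.
pixels = [(x, x) for x in range(SIZE)]

# Accumulator over the normal form x*cos(theta) + y*sin(theta) = rho,
# with theta sampled at one-degree resolution.
thetas = [math.radians(d) for d in range(180)]
votes = {}  # (theta_degrees, rho) -> vote count
for x, y in pixels:
    for d, theta in enumerate(thetas):
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        votes[(d, rho)] = votes.get((d, rho), 0) + 1

# The most-voted bin corresponds to the dominant line in the image.
(theta_deg, rho), count = max(votes.items(), key=lambda kv: kv[1])
print(f"best line: theta={theta_deg} deg, rho={rho}, votes={count}")
```

    The point of the comment stands either way: this takes some domain knowledge to write, whereas a DNN trades that knowledge for compute and training data.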
  • 2
    @varikvalefor I just get a feeling of almost...hostility (?) towards ML from what you've written. It's a (very powerful) tool, nothing more, nothing less. I imagine you ran into somebody throwing ML at a problem when there were much simpler solutions, hence the (understandable) ire?
  • 2
    @RememberMe: I strongly like machine learning but prefer fully understanding the solutions of problems and fully optimising stuff over saying "eh" and tossing databases into neural networks.

    I apologise for any apparent hostility. I could have worded things a bit better.