8
netikras
352d

AI here, AI there, AI everywhere.
AI-based ads
AI-based anomaly detection
AI-based chatbots
AI-based database optimization (AlloyDB)
AI-based monitoring
AI-based blowjobs
AI-based malware
AI-based antimalware
AI-based <anything>
...

But why?
It's a genuine question. Do we really need AI in all those areas? And is AI better than a static ruleset?

I'm not much into AI/ML (I'm a paranoid sceptic), but the way I understand it, the correctness of an AI's output relies solely on the data its
model has been trained on. And if it's a rolling model, i.e. if it keeps training (getting feedback) while it's live, its correctness depends on how good the feedback is.

The way I see it, AI/ML are very good and useful in processing enormous amounts of data to establish its own "understanding" of the matter. But if the data is incorrect or the feedback is incorrect, the AI will learn it wrong and make false assumptions/claims.

So here I am, asking you, the wiser, AI-savvy people, to enlighten me with your wisdom and explain to me: is AI/ML really that necessary in all those areas, or is it simpler, cheaper and perhaps more reliable to do it the old-fashioned way, i.e. by preprogramming a set of static rules (perhaps with dynamic thresholds) to process the data?
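For the record, here is roughly what I mean by "static rules with dynamic thresholds" - a minimal sketch, with made-up numbers and a hypothetical `is_anomalous` helper, not any particular product's logic:

```python
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Static rule with a dynamic threshold: flag a value that deviates
    more than k standard deviations from a recent window of readings."""
    if len(history) < 2:
        return False  # not enough data to estimate a baseline
    mu = mean(history)
    sigma = stdev(history)
    return abs(value - mu) > k * max(sigma, 1e-9)

# Usage: a steady signal, then one spike
window = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]
print(is_anomalous(window, 10.1))  # False - normal reading
print(is_anomalous(window, 42.0))  # True - clear outlier
```

The rule itself is fixed (deviation beyond k sigmas), but the threshold moves with the data - no training, and you can explain every decision it makes.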

Comments
  • 2
    Imho there is a tremendous amount of FUD involved.

    E.g. nowadays hardcoded heuristics based on probability (think e.g. antivirus detection) are called AI because it's a buzzword.

    Note: not really a heuristic approach, just good old probability based on a fixed rule set. "If extension == bla and magic byte header == 0xDEADBEEF" kinds of rulesets (fictional, but you get the gist, I guess).

    FUD, because it's always marketed as "you need this or you're doomed | inefficient | out of date | ...".

    What makes it even harder is that marketing really has become an information firewall.

    When you want reliable information, you need to get an engineer | dev.

    A sales person - even one with an IT engineering title, or someone who claims to work closely with the dev team - will do anything to avoid giving you any kind of valuable information.

    I remember a certain call 2 months ago with a Cloudflare representative...

    That guy... made me wanna scratch my skin off. I aborted the call after 10 mins, because talking to someone suffering from delusions due to alcohol poisoning would have been easier.

    I really don't need a sermon about how insecure everything is, nor about our saviour the AI, nor about how Cloudflare saves a company like mine with the holy grail of AI.

    Imho AI is a disaster. We already see how fragile ChatGPT is.

    We see how fragile the ecosystem is, especially regarding politics (OpenAI, for example) and laws.

    I'm not against AI.

    But mixing AI with marketing is like the good old quack doctor pouring mercury down your throat, claiming it's healthy.

    It's not. There are many possibilities for AI - but AI must be open source imho, otherwise we end up with the patent / license / law debacle we had with video codecs, for example.

    And judging from the recent uncensored training-data leaks and other things... this might blow up spectacularly.

    Plus the massive trouble regarding hardware scalping / resource usage etc.
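(For illustration, the fictional "extension plus magic bytes" rule style mentioned in the comment above could be sketched like this - the rule values are made up, as in the comment:)

```python
# Fictional fixed-rule detector in the spirit of the comment above:
# flag a file when both its extension and its magic-byte header match a rule.
RULES = [
    {"extension": ".bla", "magic": bytes.fromhex("DEADBEEF")},
]

def matches_rule(filename, header):
    """Return True if any rule matches the file name and header bytes."""
    return any(
        filename.endswith(rule["extension"]) and header.startswith(rule["magic"])
        for rule in RULES
    )

print(matches_rule("sample.bla", bytes.fromhex("DEADBEEF00")))  # True - rule hit
print(matches_rule("sample.txt", b"hello"))                     # False - no rule applies
```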
  • 6
    If you can make a static rule set, it's obviously better since you know how it works. AI can be useful (i.e. cheap and fast) if it doesn't matter how it works.
  • 1
    @supernova777 The sheep example: if I had the parameters, I would be able to write a ruleset for them to collect statistics, and out of those statistics draw conclusions with some particular certainty based on some thresholds.

    This example of yours explains to me that ML is good when you have no clue

    - whether there is a correlation between the parameters you're monitoring and the problem you're solving (movement vs being sick)

    - what are the parameters

    - what are the thresholds

    - what is the correlation

    So... ML is basically brute-forcing the situation with feedback over time. It does not let us understand the mechanism, it just gives results.

    Am I correct?
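(A toy illustration of that "feedback over time" idea - a learner that only nudges a threshold whenever it misclassifies a labelled example; the movement scores and names are entirely hypothetical:)

```python
def learn_threshold(samples, labels, lr=0.1, epochs=200):
    """Toy 'learning by feedback': nudge a movement threshold whenever it
    misclassifies a labelled example. It converges on a working number,
    but offers no explanation of WHY that number separates the classes."""
    threshold = 0.0
    for _ in range(epochs):
        for movement, sick in zip(samples, labels):
            predicted_sick = movement < threshold
            if predicted_sick != sick:
                # feedback: move the threshold toward the misclassified point
                threshold += lr if sick else -lr
    return threshold

# Hypothetical daily-movement scores; low movement labelled "sick" (True)
movements = [0.2, 0.3, 0.9, 1.1, 0.25, 1.0]
labels    = [True, True, False, False, True, False]
t = learn_threshold(movements, labels)
print(0.25 < t < 0.9)  # True - the learned cut-off lands between the two groups
```

You get a usable cut-off, but no insight into the mechanism - which matches the "it just gives results" point above.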
  • 1
    @supernova777 isn't it all statistics at the end of the day?
  • 1
    @supernova777

    > you can understand the underlying algorithm and methods you’re using

    I didn't mean the learning process per se.

    Instead, I meant the particular problem analysis.

    I.e. in your sheep example, how does the farmer do the same thing w/o the ML? Can he learn from ML what to look for in the sheep's movements to determine if it's sick or not?
  • 2
    @netikras Partially. Yes.

    Look-ahead bias is a nice search term, I'd guess.

    Most of that mathematical gumbo makes my brain gooey.

    Trouble is the brute force aspect.

    You kinda have to provide the necessary indicators and push the machine model / network in the right direction...

    Otherwise it will not work.

    So it's a guided brute-force approach, where a human still intervenes and makes sure the AI is on track.

    At least that's what my experience with machine learning teams was.

    The painful part is the analysis of predictions and the fine-tuning, as someone has to find a way to sort out the rotten apples.
  • 1
    @supernova777 rinse and repeat rather than brute force, I'd guess.

    Train the monkey, watch the monkey, analyse, retrain the monkey.
  • 1
    There is also the issue of overhype.
    Nowadays any dandy makes a `print("hello world")` app and calls it "AI Greeter".
    Thus a lot of "AI" things are just plain ol' misleading advertising.
  • 0
    Me and you got something in common buddy. It's good, let the hate flow through you.
  • 2
    1. A computer is cheaper than a human.

    2. A computer can work 24/7; a software developer can work at most 4 hours a day, 5 days a week.

    3. Programming is expensive; maintaining software is not cost-effective.

    4. AI models have a high entry cost but practically zero maintenance cost once deployed, because their effectiveness is defined by probability, not by writing bug-prone code.

    5. AI models use real data and probability, not arbitrary human decisions.

    6. AI models right now are not vulnerable to security problems.

    7. AI models can be fine-tuned without humans, and the fine-tuning can be measured using probability; software can only be changed by humans, and every change can cause errors.

    8. AI models can be replaced by other models without problems; replacing a human with another human is problematic.

    9. Humans are obsolete, computers are timeless killers.
  • 2
    tl;dr Algorithms have been renamed to AI.

    ML is an approach for deriving a new algorithm from a labeled dataset. Not AI. Any program that can play chess is AI now.

    A chatbot able to parse input and create a BS answer - a.k.a. ChatGPT - is not AI. You'd better fact-check whatever it responds with. Very similar to humans, who are often full of shit and answer questions with BS. Same with generated graphics. Like a kid holding a cat with 5 legs 😉.

    It will pass. Same as with the blockchain hype.
  • 1
    If you want a utopia where nobody has to work, you need AI. You just can't do full automation without robots that can see and respond like a human.

    And that is why I like to see progress in AI (it totally isn't because of Roko's basilisk). Good AI doesn't guarantee that utopia. But it at least makes it possible.
  • 1
    @Oktokolo If you want utopia, you need to forget everything, go back to caves, and live close to nature, with people and for the people. You don't need technology for that, you just need education and a justice system.

    What we are creating right now are small toys for idiots, to make more idiots dependent on more toys.

    We are in the process of migrating everything to big storage locked behind high bars, and we keep saying: just use the toys and pay to get the things you want - you don't need anything else. Just work and pay to get things so we can build bigger pyramids.

    At some point we won't be able to take a shit without the toys, and the toys will stop working, so we will shit our pants and die.
  • 1
    @vane Your utopia is pretty grim.

    As a hedonist, I prefer having slaves that do the work that is necessary for me to live but that I don't want to do myself.

    And as a liberal, I want society to advance to a higher level of freedom: the freedom to not face negative consequences for not doing things.

    Obviously, my utopia requires slavery. But I am not racist. So classic slavery and wage slavery aren't options. But AI isn't a species. AI can enable mechanical slaves to power my version of utopia.

    And in that utopia, you can still live in a cave and die of random infections if you want to. You just don't have to.
  • 1
    I don't understand where all this AI stuff comes from. Sure, language stuff makes sense with LLMs, but yesterday I saw an article saying a railway was using AI. I was like, WTF? LLMs can do that now?
  • 0
    @Oktokolo

    I agree that each person is different and has different needs and different desires, but for me, technology has stopped solving the problem of what people want. It just tries to push us all to the place where everyone wants the same one thing.

    Maybe it's just that people have changed and don't want to have a choice: they want to work 8 to 5, raise children or play mediocre games, watch mediocre movies and not give a fuck - give me the things other people use and fuck off.

    I have no idea, but at some point we went from "you can do everything, and we give you components to empower you" to "we give you a product, use it like we want you to".

    And AI (call it whatever - a machine calculating gradient descent on matrices) is also something like that - it solves the problems of the masses, not the problems of people.

    Sorry for the long response.
  • 0
    @vane Technology is used for bad and for good. But in the end, you can't have the utopia I described without AI, because not all jobs are bullshit jobs.

    Also, don't play the mediocre games. Plenty of very good games exist. Play those.
  • 0
    @supernova777 the TikTok algorithm and the YT algo are not AI-based, IIRC
  • 0
    @Oktokolo everything is bullshit if you stare long enough
  • 0
    @vane Then don't stare long enough. Just play as long as it's fun and switch games when it stops being fun.
  • 1
    Valid questions for sure. There is so much confusion!

    At the moment, for big-corp stuff, we see many potential workloads where productivity is potentially enhanced by allowing generative AI in different contexts.
    It's so early.
    We don't know what the hell we are doing!
    Often we find ourselves in discussions, and the consensus is that in the majority of workloads the error in correctness often outweighs the productivity increase.
    But, dev things aside!
    One must consider and appreciate the vast number of hours we spend finding the right wording for a presentation, and maybe ten times that trying to understand a twenty-page whatever document.

    It’s an assistant. We should treat it as such.

    It is truly a remarkable era!

    We are ”forcing” all teams to tag and comment the source code/commits when code has been generated, changing the flow of daily work in these cases.

    Some projects do not even have a single test of any kind.
  • 0
    @netikras
    AI everything
    Because mgmt hit a wall of reality and couldn't add more mgmt overhead to their bonuses.
    Flat. No joke.
Add Comment