17
NoMad
127d

As someone who works in AI and actually bothers with cognitive models, general intelligence, theory of mind and such shit, I find the current state of the field laughable. I don't get why people panic about AI. Like, yeah it's gonna take us a while to adopt and regulate, but... it's just not there, and nowhere even near there, yet.

... Unless we're comparing AI to moronic idiotic mofos such as my neighbors. But let's not do... that. 😒 Let's just not.

Comments
  • 0
    @jestdotty of what? Of failure to read a clock? 😂
  • 1
    I'm learning a lot about it at the moment, and I'm finding it fascinating the ways people figured out how to get useful responses out of these llms.
  • 0
    @jestdotty AI Agents are literally just LLMs with some context prepended to them so they act more "domain specific". The tech is the same shitty tech, though.
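A minimal sketch of that claim, assuming a stubbed `call_llm` in place of whatever chat-completion API you'd actually use (every name here is made up for illustration):

```python
# Toy illustration: an "agent" is the same LLM with domain context
# prepended to every prompt. call_llm is a stub, not a real API.

def call_llm(prompt: str) -> str:
    # A real implementation would call an LLM API here.
    return f"[model output for a {len(prompt)}-char prompt]"

def make_agent(domain_context: str):
    """Build an 'agent': identical model, context glued on the front."""
    def agent(user_message: str) -> str:
        return call_llm(domain_context + "\n\n" + user_message)
    return agent

billing_agent = make_agent(
    "You are a billing assistant. Only answer billing questions."
)
print(billing_agent("Why was I charged twice?"))
```

Swap the stub for a real API call and you have, more or less, the whole "agent" trick.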
  • 2
    We'd need to develop __much__ higher computing power, never mind that energy production -- and consumption -- is part of the equation; both lines of research may just hit a dead end, and whatever is theoretically possible wouldn't factor for shit at that point.

    And I say "possible" as in maybe, hypothetically, absolute hyperbole, perhaps in the future we figure out just enough about the mind to build more efficient abstractions; but it's out there with interstellar travel, where at that speed it takes years to decelerate and one fucking stray pebble fucks up your entire ship. Space junk!

    Someday, "not there yet" itself has to be questioned. Rate of technological development isn't a constant, and it certainly cannot grow indefinitely because fucking nothing works that way. Shit, we might even want to prepare ourselves for the idea that it might just stop altogether for a few years.

    My apologies, I began ranting and forgot my point.
  • 0
    @jestdotty some simple things, for instance, just including "think through your answer step by step" will often magically make it give you a better answer. Or adding "if you don't know, just say I don't know" keeps it from making up answers.

    Another thing I saw: when having it generate something like a story, if you break it up, keep feeding the previous bits back in, and have it check for inconsistencies and revise, it actually comes up with a pretty good end product.

    I've sort of come to think of it like you have to coax it to do the things your own mind does to refine a thought before you give a response.
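The tricks described above can be sketched roughly like this; `call_llm` is a stub for a real model call, and the exact prompt wording is just illustrative:

```python
# Stub standing in for any real LLM API call.
def call_llm(prompt: str) -> str:
    return f"[response to {len(prompt)}-char prompt]"

def ask(question: str) -> str:
    # Trick 1: ask for step-by-step reasoning.
    # Trick 2: give the model permission to say "I don't know".
    prompt = (
        f"{question}\n"
        "Think through your answer step by step.\n"
        'If you don\'t know, just say "I don\'t know".'
    )
    return call_llm(prompt)

def write_story(outline: list[str]) -> str:
    # Trick 3: generate in chunks, feeding the story so far back in
    # and asking for an inconsistency check + revision each round.
    story = ""
    for beat in outline:
        draft = call_llm(f"Story so far:\n{story}\nContinue with: {beat}")
        draft = call_llm(
            f"Story so far:\n{story}\n"
            f"Check this continuation for inconsistencies and revise:\n{draft}"
        )
        story += "\n" + draft
    return story.strip()
```

The point of the second function is exactly the "coaxing" described: the refinement loop lives outside the model, in plain code.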
  • 1
    @jestdotty an AI agent is to AI what a robot is to robotics. An agent is an individual unit, a system, a full package if you will, rather than just some spaghetti code. It's a concept rather than something concrete.
  • 0
    I don't worry about AI. I think my career will be over well before these AI concerns materialize.

    What I'm worried about is concentration of knowledge and the ability to adapt it to an arbitrary problem automatically, using natural language. To rephrase: the more realistic problem I see is a set of software automating our BAUs. E.g. automating a business software solution's design and implementation based on plain layman's English explaining what the software is required to do. And ofc applying new features/bugfixes in the same manner.

    It's not there yet. But I'm certain it's coming. A static piece of software embedding LLMs, not an actual AI...

    Yes, the manual workforce will still be required. But a fraction of the current numbers.
  • 0
    @jestdotty 🤔 who are they???

    Don't tell me you believe in conspiracy theories! 😒
  • 3
    @netikras honestly there are plenty of morons who barely do their job which they also hate. I'd want to replace each and every single one of them. Not the good ones tho. They're slow but they're worth the effort.

    But if WHO is right, most of us are gonna die painful deaths before we get there; starvation, cancer, poison, weather, etc. Those are the problems worth focusing on. Which plenty of researchers are tbf. Just not enough.
  • 1
    @jestdotty yeah, multi-agent solutions with RAG augmentation for LLMs can automate most administrative things. It can increase productivity, but it won't be as revolutionary as everyone's hype suggests. I'm now trying to set up a conversational bot with multiple models and multiple agents for learning foreign languages with an AI native speaker, and the tech is like 10 years from production: you basically have 100 POC solutions you need to wire together to make anything viable work.
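For what it's worth, the RAG part of that stack boils down to something like this sketch (toy keyword retriever, stubbed model call, all names invented):

```python
# Toy RAG pipeline: retrieve the most relevant docs, prepend them
# as context, then ask the model. Nothing here is production code.

DOCS = [
    "The invoice API returns amounts in cents.",
    "Refunds take 5 to 10 business days.",
    "Passwords must be at least 12 characters long.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub: report how many context blocks made it into the prompt.
    return f"[answer grounded in {prompt.count('Context:')} context block(s)]"

def rag_answer(question: str) -> str:
    context = "\n".join(f"Context: {doc}" for doc in retrieve(question))
    return call_llm(f"{context}\nQuestion: {question}")
```

In practice each of these stubs (retriever, model, agent orchestration) is its own half-finished POC, which is exactly the wiring-together pain described above.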
  • 3
    LLMs are literally a "language model": they model language. Just because the emergent property of modeling language looks like intelligence and thought doesn't mean we should be trying to make it "think thoughts" by bending it backwards. As far as I'm concerned, making LLM-based "AI"s and "agents" is a waste of time. The LLM can be a part of encoding and decoding, but that's all it is. Assigning any other properties to it at this point in time is pure fantasy-world-level madness.

    Whatever "AI" you can build with just language will pale in comparison to a system designed to actually, actively think. The only issue is we have no idea how to do that, or even where to start, so everyone pretends like this is useful right now. It's quickly moving out of cryptocurrency-level territory into NFT-level territory... and as before, everyone just laps it up.
  • 1
    People panic about AI because they are idiots who think it's some central entity (that is better than humans in very specific tasks) and that it's smarter than a toddler in everything.

    Thankfully, it's not every non-technical person or everyone who hasn't studied AI that acts like this, but there are loads of them.