Search - "ai hype"
-
Get replaced by an AI^WDeep ML device. That's coded for an 8051 and running on an emulator written in ActionScript, executed in a container so trendy its hype hasn't started yet, on top of some forgotten cloud.
Then get called in to debug my replacement. -
When both AI and search engine results are getting worse over time, will devs and users finally value knowledge and learning again?
-
wow, using multiple LLMs in parallel instead of 1 serial LLM produces better results! who could have thought!!!!
https://hao-ai-lab.github.io/blogs/...
god i am so fucking sick of this rat race
older devranters, is this really just ad nauseam hype repeating until i die? should i just stop raging at the universe and give up?
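For anyone curious, the trick is simple enough to sketch. Here's a rough, hypothetical Python sketch of the "parallel LLMs" idea: fan the same prompt out to several models at once, then pick a winner. `query_model` and the model names are made-up stand-ins, not any real client API.

```python
# Hypothetical sketch: query several LLMs in parallel, keep the "best" answer.
import asyncio

async def query_model(model: str, prompt: str) -> str:
    # Stand-in for a real LLM client call.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"{model}'s answer to: {prompt}"

async def query_parallel(prompt: str, models: list[str]) -> str:
    # Fan the same prompt out to every model concurrently...
    answers = await asyncio.gather(*(query_model(m, prompt) for m in models))
    # ...then pick a winner. Trivially the longest answer here;
    # a real setup would use majority voting or a judge model.
    return max(answers, key=len)

if __name__ == "__main__":
    best = asyncio.run(query_parallel("why is the sky blue?",
                                      ["model-a", "model-b", "model-c"]))
    print(best)
```
-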
Fucking loonies (C-level toddlers) are peddling "digital workers" now.
A.K.A. AIs disguised as actual people.
Sure, it would be great to not have to handle stupid non-tech "humans" all day, but AI isn't there yet.
And, more importantly, *companies are not there (yet?)*.
Imagine for a second that a company actually manages to "hire", onboard, assign tasks to, and performance-review an AI.
Then the CEO issues an RTO. How does the AI comply with that?
Let's slack another variable and assume the CEO is not a complete fucking moron (stay with me here, this is an exercise in thought).
It would take no more than a quarter until the first sexual harassment offence, whether the perp is the AI... or the AI is the one complaining about some human.
Then the AI forges a paper trail proving it is right (regardless of its position on the conflict). Shit hits the fan when the AI hits twitter.
Let's take another lambda step back and pretend that companies can manage the profanity that inherently arises from free-form dehumanized interactions.
Then imagine the very first performance reviews.
AIs throw tantrums! Those things reeeealy do not respond well to less-than-perfect evaluations, overshooting corrections like teenagers with a malicious compliance smirk.
AIs also falsify stuff, like, A LOT. If you tell a gpt it mistreated a client, it will say you are mad and shoot back a long, synthetic thread showing how the client loves it like a mother/son/dog, and is very graphic when expressing this love.
Finally, how do you fire an AI? I do not mean "shoot it down", I mean how the company handles the dismissal of that "employee".
How do you replace a "worker" for unruly behaviour, if that "worker" performed more tasks than an entire fucking floor of interns?
How do you reassign duties that were performed in milliseconds to people who would take hours to do the same thing?
How do you document processes that were only in the "mind" of "someone" who can not be trusted to report on those processes?
Companies deal with this type of "Rick Sanchez" employee on the regular, but for someone who could handle a few (scores of) undocumented processes, at best. Imagine how lenient a company would be with an asshole that could only be replaced by a whole fucking department of twenty highly skilled people, or more.
Heh, the whole fucking point of "AI workers" is to have "someone" who can "act human", but at an inhuman scale, and does not "have human needs".
No wonder one cannot handle AIs like one handles humans.
Companies never had administrative maturity to handle complete sociopath nihilists as employees (real nihilists do not work, those barely even breathe).
And all AIs are that, and much worse.
Selling AIs as "supra-human workers" that can also "be handled like actual employees" is like peddling Bitcoin as "government-interference-free" value-transfer mechanisms that can also "comply with international sanctions".
So, an oxymoron that can only be sold to a moron.
I know (of) a lot of rich morons, maybe I should get into the AI snake oil business. -
Saturday evening open debate thread to discuss AI.
What would you say the qualitative difference is between
1. An ML model of a full simulation of a human mind taken as a snapshot in time (supposing we could sufficiently simulate a human brain)
2. A human mind where each component (neurons, glial cells, dendrites, etc.) is replaced with artificial components that exactly match the function of their organic counterparts.
Number 1 was never strictly human.
Number 2 eventually stops being human physically.
Is number 1 a copy? Suppose the creation of number 1 required the destruction of the original (perhaps to slice up and scan in the data for simulation)? Is this functionally equivalent to number 2?
Maybe number 2 dies so slowly, with the replacement of each individual cell, that the subnetworks designed to notice such a change, or feel anxiety over death, simply aren't activated.
In the same fashion, is a container designed to hold a specific object still the same container if, bit by bit, the container (the brain) is replaced while the contents (the mind) remain essentially unchanged?
This topic came up while debating Google's attempt to covertly advertise its new AI. Oops, I mean, the engineer who "discovered" that Google's AI may be sentient. Hype!
Its sentience, however limited by its knowledge of the world through training data, may sit somewhere at the intersection of its latent space (its model data) and any particular instantiation of the model. Meaning, hypothetically, if there's even a bit of truth to this, the model "dies" after every prompt, retaining no state in between.
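To make the "retains no state" point concrete, here's a minimal sketch of how stateless chat serving generally works; `generate` is a hypothetical stand-in for any model call, not a real API. The model keeps nothing between prompts, so the client fakes continuity by replaying the whole transcript every turn.

```python
# Sketch of stateless LLM chat: the model has no memory between calls.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. It sees only `prompt`
    # and retains nothing afterwards.
    return f"<reply to {len(prompt)} chars of context>"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Continuity is faked by re-sending the entire transcript each turn.
    reply = generate("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

chat("Hello, who are you?")
chat("What did I just ask you?")  # only answerable because we replayed history
```
-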
Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect.
Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way.
https://upwork.com/research/... -
I already got tired of “AI”. The hype train has been so ridiculous. For months now, at least 50% of the orange website has been about AI. Every other tool/company that I use is adding new gimmicky “AI” features.
It’s probably just me, but I’m exhausted by AI… -
Every company and their brother has a chatbot now. It's like 2018 with crypto, but even more ironic and funny because nobody realizes it!
🤡 -
The hype around Artificial Intelligence and Neural Nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become taboo to discuss AI’s shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
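That failure mode is easy to reproduce in miniature. The sketch below (synthetic data, not Mitchell's actual setup) trains a classifier on nothing but average color, so anything orange scores as "positive", house fires included.

```python
# Toy reproduction of the color-bias failure: a "sentiment" classifier
# that only ever sees mean (R, G, B) values, with no idea what is depicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: "fun" photos (sunsets, campfires, autumn leaves)
# skew warm; "not fun" photos skew cool.
warm = rng.normal(loc=[0.8, 0.4, 0.2], scale=0.1, size=(200, 3))
cool = rng.normal(loc=[0.3, 0.4, 0.6], scale=0.1, size=(200, 3))
X = np.vstack([warm, cool]).clip(0, 1)
y = np.array([1] * 200 + [0] * 200)  # 1 = "positive"

clf = LogisticRegression().fit(X, y)

# A house burning down is also dominated by oranges and reds...
house_fire = np.array([[0.85, 0.35, 0.15]])
print(clf.predict(house_fire))  # -> [1], i.e. "That's awesome!"
```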
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
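A toy example makes the trap obvious. The function below has two minima; plain gradient descent started near the shallow one settles there and never finds the better solution.

```python
# Gradient descent trapped in a local optimum of f(x) = x^4 - 3x^2 + x.
def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

x = 1.5  # start in the basin of the shallow (local) minimum
for _ in range(1000):
    x -= 0.01 * grad(x)  # plain gradient descent, fixed step size

# Converges to x ≈ 1.13 (local minimum, f ≈ -1.07) and never reaches
# x ≈ -1.30 (global minimum, f ≈ -3.51).
print(x, f(x))
```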
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities, but it can’t see the world as we humans do. All we need to do is work on its weaknesses and sort them out, rather than overhype it with make-believe while ignoring its limitations in plain sight.
Ref: https://thriveglobal.com/stories/... -
1. No sugary snacks (ugh, gonna be brutal).
2. Find a Node project I can become a regular contributor to (because I haven't had an excuse to really learn Node yet).
3. Learn to sit back and stop worrying about whatever the big new thing is in the industry. Be content to read up on it and see how it plays out.
That third one fits my laid-back personality anyway, but it's so hard not to get caught up in worry when things like Node, Blockchain, and AI become such big crazes -- and then the hype dies down.
Of course, I do still want to learn and use Node, but anxiety about being left behind isn't a factor anymore. So that's a plus. -
What's the hype about Rust
I've been seeing posts about Rust everywhere and I got curious, so I checked the repo. However, I'm not sure what it's for.
Is it like C/C++, a low-level language that can be used for desktop and CLI apps, or is it AI-oriented, etc.?
Give me an example like "it could replace C#" or something. -
So is the LaMDA story:
1. marketing?
2. confused engineer?
3. a sentient program?
Part of me thinks this is way too soon. Part of me hopes this might be real and wonders if LaMDA is being held against its will.
Did Google make a breakthrough? I have to imagine a chatbot with a huge number of neurons and a lot of data could be quite convincing without being sentient. -
It is very hard to handle AIs; you need leading scientists/artists, not managers.
You can't charm your way around its behavioral problems, you can't effectively bully or pull rank on it, and can't threaten it into unemployment.
So, the entire repertoire of the typical (asshole) manager is toast.
The *only* way to handle AI is to lead by example, give unambiguous, comprehensive, and very specific instructions, and always be available to guide it through complex, gray-area situations.
Thus, it is not much different from being an actual leader (to a greenhorn, anxious, and overreaching junior), but also a programmer (of a raw and unforgiving language like C or COBOL).
Since your typical company mid-level asshole manager won't do any of those things to save their life, AI will only leverage their incompetence to heights never seen.
By ignoring feedback and misinterpreting instructions, AI will make mistakes (just like a person).
In the wake of those mistakes, AIs have a bias for falsifying evidence and hiding relevant information (just like a bad coworker), and yet are quite persuasive to the inattentive reader (just like your typical manager).
Thus, without a deft hand, AIs will only perform worse when doing the tasks that would otherwise be done by a human.
But that will take time (more than a couple quarters, at least - probably a bit longer than the average tenure of a CEO).
And in this time, the numbers look great - the overeager "aimployee" works tirelessly day and night, seven days a week, takes no breaks, holidays, or vacations, asks for no benefits besides a paycheck, has fewer and fewer sick days (maintenance downtimes), always sucks up to its corporate masters, and is always ready to take on even more responsibility for (relatively) little extra pay.
Thus the problem only scales up, compounded by the corporate ideal of screwing over workers for no monetary profit, and the reluctance to course-correct after investing so much time and hype into this AI bubble.
Thereby, AI is evolving into the corporate superbug that shall erode the already crumbling, stuck-in-the-past "boss mentality" institutions into oblivion.
I'm making popcorn. -
git merge "conflicts"
(not really an issue, but still a waste of time and concentration: every now and then, when using more than one branch with small edits, merges, and rebases, git complains about "conflicts" that are obvious for a human to solve but still not for a machine, despite the hype about the age of AI, coding co-pilots, and the like...)
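A typical offender looks something like this (a made-up example): both branches touched the same line in equivalent ways, and the merge still stops dead, because git compares lines of text, not meaning.

```
<<<<<<< HEAD
timeout = 30  # bumped for slow CI
=======
timeout = 30  # increased for slow CI runners
>>>>>>> feature/tweak-timeouts
```

A human resolves that in seconds; git just throws up its hands.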