Search - "ai/ml"
-
Happened with anyone? (joke/meme image post — tags: deep learning, ML, AI, machine learning, Python)
-
I am a machine learning engineer and my boss expects me to train an AI model that surpasses the best models out there (without training data, of course) because the client wanted 'a fully automated AI solution'.
-
> Open private browsing on Firefox on my Debian laptop
> Find Google's ML course and decide to start learning in advance (AI and ML are topics for next semester)
**Phone notifications: YouTube suggests Machine Learning recipes #1 from Google**
> Not even logged in on laptop
> Not even chrome
> Not even history enabled
> Not even fucking Windows
😒😒😒
The lack of privacy is fucking infuriating!
....
> Added video to Watch Later
I now hate myself for biting.
-
I can't figure out shit..
To be honest I created this profile just so I can write down somewhere what I am going through.
So, once upon a time I graduated from college and went straight into a corporate job (it has only been 2 years since). I was fortunate enough to get assigned a project that was just starting, and even though I had no clue what was going on, I started doing whatever was assigned.
I initially worked in Java and then finished all my tasks earlier than expected, so they switched me to another C++ project that builds on top of it.
Fast forward 2.5 years, I'm now the team lead of the C++ project and all my friends who were in the core team have left the company.
As usual, the reason behind it is shitty management. These mfs won't hire competent people and WILL ABSOLUTELY NOT retain the ones that are. I can feel it in my bones that it is time for me to leave, but fuck me if I understand what I am good at.
I have been able to handle all the tasks that they threw at me, be it Java or C++ - just because I love logic and algorithms. I have been dabbling in ML and AI for 4-5 years now, but could never go into it full time.
Now I'm looking at the job postings and Jesus Christ these bitches do not understand what they want. I have to be an expert in 34567389 technologies, mastering each of which (by mastering I mean becoming proficient in) would need at least 6-8 months if not more, all with 82146867+ years of experience in them.
I don't know if I am supposed to lean into Java (so Spring Boot and stuff), or stick with C++, or go with Python, or learn web dev or database management or what.
I like all of these things, and would likely enjoy working in each of them, but for fuck's sake my CV doesn't show this and most of the bitch-ass recruiter portals keep putting my CV in the bin.
Yeah...
If you have read this far, here's a picture of a cat and a dog.
-
There are a couple:
A system, written in Clojure, that updates user accounts to connect them into our wifi system by parsing thousands of processing files. The project was short-lived and mainly experimental; it has complete test cases and the jar generated from it is still purring silently on the main application. It was used to replace an $85k vendor application that made no fucking sense. The code has not been touched in 2 years and the jar is still there. The DBA mentioned the solution to the vendor and the vendor tried buying it from me, but since it belongs to the institution nothing was touched. Still, it got the VP's attention that I can write programs that would be bought at that level, and it caught his attention even more when I showed him the codebase and he recognized a Lisp variant (he is old, and was back in the day a Fortran and COBOL developer).
A small Python categorical ML program that determines certain attributes of user-generated data and effectively places it into the proper categories on the main DB. The program generates estimates about the users, and the predictions have a 95% correctness rate. The DBA still needs to double check the generated results before doing the DB updates. I don't remember how I coded it because I was mostly drunk when I experimented with the scenario. It also got the attention of the VP and director, since the web tech manager was apparently doing crazy ML shit that they were not expecting me to do; it made them paranoid that I would eventually leave for an ML role somewhere. Still here, but I want more moneys!!
A program that generates PDF documentation from user data, written in Go, Python and Perl (yes, Perl). I even got shit from the lead developer since I used languages outside of their current scope of work. Dude had no option but to follow along with it :P since I am his boss.
Many more. I am normally proud of my work code. But my biggest moment is my current natural language processing unit that I am trying to code for my home, but I don't have enough power to build it with my computers. Currently, my AI is too stupid, but sometimes it does reply back to my commands and does the things I ask it to do (simple things: opening a browser, searching for a song, etc.), but seven times out of ten it won't work :P
-
The first fruits of almost five years of labor:
7.8% of semiprimes give the magnitude of their lowest prime factor via the following equation:
((p/(((((p/(10**(Mag(p)-1))).sqrt())-x) + x)*w))/10)
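For anyone who wants to poke at it, the expression drops straight into Python (a minimal sketch only: I'm assuming Mag(p) means the digit count of p, and x and w stand for whatever derived values the equation expects, so they're left as plain parameters here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough precision for large semiprimes

def mag(n):
    # Assumption: Mag(n) means the number of decimal digits of n.
    return len(str(int(n)))

def magnitude_estimate(p, x, w):
    # Direct transcription of the expression above:
    # ((p/(((((p/(10**(Mag(p)-1))).sqrt())-x) + x)*w))/10)
    # x and w are the derived variables referenced in the rant;
    # they are taken as given because they aren't defined here.
    m = mag(p)
    P, X, W = Decimal(p), Decimal(x), Decimal(w)
    root = (P / Decimal(10) ** (m - 1)).sqrt()
    return (P / (((root - X) + X) * W)) / 10
```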
I've also learned, given exponents of some variables, to relate other variables to them on a curve to make better sense of the larger algebraic structure. This has mostly been stumbling in the dark, but after a while it has become easier to translate these into methods that allow plugging in one known variable to derive an unknown in a series of products.
For example I have a series of variables d4a, d4u, d4z, d4omega, etc, and these are now translatable, through insights that become various methods, into other types of (non-d4) series. What these variables actually represent is less relevant, only that it is possible to translate between them.
I've been doing some initial learning about neural nets (implementation, rather than the theory I normally read about). I'm thinking what I might do is build a GPT-style sequence generator and train it on the 'unknowns' from semiprime products with known factors.
The whole point of the project is that a bunch of internal variables can easily be derived (d4a, c/d4, u*v) from a product, its root, and its mantissa, and these relate to *unknown* variables--unknown variables such as u, v, c, and d4 that, if known, directly give a constant-time answer to the factors of the original product.
I think there's sufficient data at this point to train such a machine, I just don't think I'm up to it yet because I'm lacking in the calculus department.
2000+ variables that are derivable from a product, without knowing its factors, which are themselves products of unknown variables derived from the internal algebraic relations of a product--this ought to be enough of an attack surface to do something with.
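To make the shape of the thing concrete, a toy version of that mapping could look like this (a minimal PyTorch sketch, not the GPT-style sequence generator itself: the feature count, the choice of u, v, c, d4 as targets, and the random stand-in data are all placeholders):

```python
import torch
import torch.nn as nn

# Placeholder sizes: ~2000 derivable 'known' variables in, 4 unknowns (u, v, c, d4) out.
N_KNOWN, N_UNKNOWN = 2000, 4

model = nn.Sequential(
    nn.Linear(N_KNOWN, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_UNKNOWN),
)

# Random stand-in data; the real training set would be the derived variables
# from semiprimes whose factorizations (and therefore unknowns) are known.
X = torch.randn(8192, N_KNOWN)
y = torch.randn(8192, N_UNKNOWN)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)  # learn known -> unknown regression
    loss.backward()
    opt.step()
```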
I'm willing to collaborate with someone familiar with recurrent neural nets, and get them up to speed through Telegram/Element/Discord, if they're willing to do the setup and training for a neural net of this sort - one that can tease out hidden relationships and map known variables to the unknown set for a given product.
-
This is gonna be a long post, and inevitably DR will mutilate my line breaks, so bear with me.
Also I cut out a bunch because the length was overlimit, so I'll post the second half later.
I'm annoyed because it appears the current stablediffusion trend has thrown the baby out with the bath water. I'll explain that in a moment.
As you all know I like to make extraordinary claims with little proof, sometimes for shits and giggles, and sometimes because I'm just delusional apparently.
One of my legit 'claims to fame' is that, on the theoretical level, I predicted most of the developments in AI over the last 10+ years, down to key insights. I've never had the math background for it, but I understood the ideas I was working with at a conceptual level. Part of this flowed from powering through literal (god I hate that word) hundreds of research papers a year, because I'm an obsessive like that. And I had to power through them, because a lot of the technical low-level details were beyond my reach, but architecturally I started to see a lot of patterns, and began to grasp the general thrust of where research and development *needed* to go.
In any case, I'm looking at stablediffusion, and what occurs to me is that we've almost entirely thrown out GANs. As some or most of you may know, a GAN is where networks compete: one to generate outputs that look real, another to discern which is real, and by the process of competition both improve - the ability to generate a convincing fake, and the ability to discern one. Imagine a self-sharpening knife and you get the idea.
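For anyone who hasn't seen one laid out, the whole competition fits in a few lines (a minimal PyTorch sketch on toy 2-D data, not any particular paper's architecture):

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to 2-D points, the discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real data: points on the unit circle.
    theta = torch.rand(n, 1) * 6.2832
    return torch.cat([theta.cos(), theta.sin()], dim=1)

for step in range(2000):
    # 1. Sharpen the discriminator: real -> 1, generated -> 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Sharpen the generator: try to fool the discriminator into outputting 1.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```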
Well, when we went to the diffusion method, upscaling noise (essentially a form of controlled pareidolia using autoencoders over seq2seq models), we threw out GANs.
We also threw out online learning. The models only grow on the backend.
This doesn't help anyone but those corporations that have massive funding to create and train models. They get to decide how the models 'think', what their biases are, and what topics or subjects they cover. This is no good in the long run, but that's more of an ideological argument. That's not the real problem.
The problem is they've once again gimped the research, chosen a suboptimal trap for the direction of development.
What interested me early on in the lottery ticket theory was the implications.
The lottery ticket theory says that part of the reason *some* RANDOM initializations of a network train/predict better than others is essentially down to a small pool of subgraphs that happened, by pure luck, to chance on an initialization that just so happened to be the right 'lottery numbers', as it were, for training quickly.
The first implication of this is that the bigger a network, the greater the chance of these lucky subgraphs occurring. Whether their density grows faster than the density of the 'unlucky' or average subgraphs is another matter.
From this though, they realized what they could do was search out these subgraphs and prune many of the worst- or average-performing neighbor subgraphs, without meaningful loss in model performance. Essentially they could *shrink down* things like ChatGPT and BERT.
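The pruning half of that is easy to show (a minimal sketch using PyTorch's built-in magnitude pruning; the layer sizes and the 80% ratio are arbitrary, and the lottery-ticket papers add a full retraining loop that I'm only gesturing at here):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())  # remember the 'lottery numbers'

# ... normal training would happen here ...

# Magnitude pruning: drop the 80% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Lottery-ticket style rewind: reset the surviving weights to their original
# initialization while keeping the pruning masks, then retrain the sparse subnet.
with torch.no_grad():
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            module.weight_orig.copy_(init_state[name + ".weight"])
```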
The second implication was more subtle and overlooked, and still is.
The existence of lucky subnetworks might suggest nothing additional--in which case the implication is that *any* subnet could *technically*, by transfer learning, be 'lucky' and train fast or be particularly good at some unknown task.
INSTEAD, however, what has happened is that we haven't really seen that. What this means is actually pretty startling. It has two possible implications, either of which will have significant outcomes for the research sooner or later:
1. There is an 'island' of network size, beyond what we've currently achieved, where networks that are currently state of the art at some things rapidly converge to state-of-the-art *generalists* in nearly *all* tasks, regardless of input. What this would look like at first is a gradual drop-off in gains of the current approach, characterized as a potential new "ai winter", or a "limit to the current approach", which wouldn't actually be the limit, but a saddle point in its utility across domains and its intelligence (for some measure and definition of 'intelligence').
-
Saturday evening open debate thread to discuss AI.
What would you say the qualitative difference is between
1. An ML model of a full simulation of a human mind taken as a snapshot in time (supposing we could sufficiently simulate a human brain)
2. A human mind where each component (neurons, glial cells, dendrites, etc.) is replaced with artificial components that exactly match their organic counterparts functionally.
Number 1 was never strictly human.
Number 2 eventually stops being human physically.
Is number 1 a copy? Suppose the creation of number 1 required the destruction of the original (perhaps to slice up and scan in the data for simulation)? Is this functionally equivalent to number 2?
Maybe number 2 dies so slowly, with the replacement of each individual cell, that the subnetworks designed to notice such a change, or feel anxiety over death, simply aren't activated.
In the same fashion, is a container designed to hold a specific object still the same container if, bit by bit, the container (the brain) is replaced while the contents (the mind) remain essentially unchanged?
This topic came up while debating Google's attempt to covertly advertise its new AI. Oops, I mean the engineer who 'discovered' that Google's AI may be sentient. Hype!
Its sentience, however limited by its knowledge of the world through training data, may sit somewhere at the intersection of its latent space (its model data) and any particular instantiation of the model. Meaning, hypothetically, if there's even a bit of truth to this, the model "dies" after every prompt, retaining no state in between.
-
- Remake all my hacky products and finally make those adjustments and improvements I always forget about. (A shitton of maintenance that I always YOLO my way through)
- Potentially finally give digital drawing and design a go as a second career (if money permits, also)
- Move to middle of Asia, dead center of Kazakhstan or wherever there are gypsy tribes, learn their language and teach their kids about computers and robots and make a lot of products that'd make a gypsy's life easier. Or rather, create a modern gypsy life that does not override their traditional ways, rather integrates with it. (This is one of my dreams, which I know will never come true. Gypsies and nomads do settle more and more each year and their culture is basically going extinct. Plus, govts around the world dislike them greatly)
- Do a lot more research projects in robotics. Literally make everyday robotic items and then sell them. (with a sprinkle of AI/ML, that is)
All the above would also need lots of money and effort tho.
-
Deep learning is probably (????) the only research branch where every successful paper title needs to be a stupid acronym or meme
I work at a conversational AI startup, and the new intern who joined yesterday didn't understand half the memes or acronyms (especially all the Simpsons-related ones) because apparently he's "Gen Z" and all the paper titles are "Millennial" humour.
He's only 2 years younger than me. Am I literally at the Millennial/Gen Z border? Or is the intern out of touch?
-
I’ve been looking for a job recently since I am a student and starting my career.
I have a bunch of experience and I like to think I have pretty broad knowledge of programming concepts (web dev, ML, AI, software development).
I see these job postings for jobs that I know I am qualified for.
- I got my research published (which is related to the jobs I’ve been applying for)
- I have great grades
- I have a clear track record of doing well in teams (lifelong athlete)
- I am a complete geek for new tech and libraries so I always learn them super fast
- I have side projects that aren’t just shit I’ve done in school
- My past jobs show that I am an efficient worker who has real experience
However, I always fucking fail the coding challenges.
I’m never asked questions like “how to reverse a linked list”, just obscure questions that I don’t know how to study for.
What the fuck am I supposed to do? It's not even like I get close to the answers. I usually get a couple of test cases passing and then fail the rest of them, or I can't figure out a solution at all.
This is all really disheartening and I fucking hate it, I absolutely fucking hate it, and when I am hiring people in the future, I'm never going to make them do coding challenges bc they're fucking stupid.
People have started to use ChatGPT to discover new vulnerabilities (0days). I saw someone use it to help them break a smart contract. I mean, if you already found a 0day, you might as well ask it to write the exploit rather than write it yourself 😬
-
!Dev
If I were rich I think I'd donate a lot to schools and children's educational funds.
There's so much more that I've been able to learn about and do now that I have my own income stream and it's not just my dad supporting me and my 2 brothers by himself. So I have the means to buy a server off eBay, or get books every few months on topics I find interesting, or upgrade my RAM to an obscene 48GB to toy with ML and AI from my desktop when the whim arises, as well as all the stuff I'm learning to do with Raspberry Pi boards and my 3D printers, and the laptops I collect from people about to toss good, fixable electronics.
So I think I'd want to open the same doors for other children if I ever could. Who knows how much farther along I could be if I'd had this same access when I was younger, instead of getting my first 'personal' laptop when I was already 14 or 15 years old.
I still consider my childhood 'lucky' and I had many opportunities other children couldn't get, but if I ever could, I think I'd like to give future children more opportunities in general.
-
I come from a very closely knit family and I kinda like it. I live in close proximity to my parents; they are growing old so I do a lot of home chores. Meanwhile a lot of relatives and dad's business friends live nearby, and the whole area around my home feels like a place of known people. My free time goes to 5-6 friends, who again live nearby, or to gym buddies. This is a nice life, which could further expand with a wife and my kids in the future.
At the same time, I have seen the "work" life. My office is in a different state; 90% of people there are people like me who would be renting a home nearby and living alone or with strangers. Their main "family" (well, pseudo-family) will be their coworkers, and that's also not a bad thing.
In the workplace the reasons to be happy will be many (parties or celebrations for birthdays, company growth and other achievements), and so will the reasons to feel sad (company failures, teammates leaving, missing family).
At the end of the day, when you are living an office life, you are a corporate rat running for the cheese you are never gonna get (or, if you are a glass-half-full person, let's say that you are a "dedicated work professional giving your 100% to the company").
But here comes the dilemma: with AIs like ChatGPT coming around and redefining the expectations from a software engineer, you will no longer be valued for being resourceful but rather for how much of a corporate rat you can be. (https://twitter.com/bajicdusko/...)
So 1) is this the only way forward for an upcoming engineer's lifestyle, to be like a soldier for their company while their family and friends await their long return? 2) If yes, what is the positive aspect we can take away from this?
PS: what a stupid profession those AI/ML guys work in. They put their minds together to make a sword which is gonna cut off the heads of s/w engineers, their own breed. Not lawyers, not doctors, not even the fucking peons, but their own freaking brothers.
-
I am having a feeling that getting into the software branch of the IT industry might have been a wrong decision. In my college years, I got to explore different domains in tech:
1. Software development: frontend tech, backend tech, mobile tech - something I and a million other people know.
2. OS and internal software: OSes, compilers, processor coding, chip manufacturing, etc. Don't know what this industry is known as, but we devs rarely go that deep down the hole.
3. The network industry: computer networks, topologies, packets, data transfers, etc. Again, not sure what this industry is, but 4G/5G brands and Cisco seem to be making a lot of money with it.
4. Cloud computing, devops, data, etc.: I guess some backend devs explore this domain too.
5. AI/ML, data science, web3: the new fad.
6. Biotech: ?? Don't know anything about this at all.
7. Graphics/management/QA: the other associated sisters of software dev. They are seeing a similar recession.
8. ...and so on.
I chose the 1st one as my career in my undergrad, and now, regretting this, I am thinking of doing a master's to fix my mistake and take a job in some other industry that is still blooming and can sustain itself through a recession for at least 30 years.
So, any suggestions/experiences?
-
devRanters!
Do any of you find that you can type the solution faster than GitHub Copilot recommends?
That's how you know you're senior 😏
Also, on a serious note, does it only support JavaScript / TypeScript? Didn't really take time to investigate.
I thought there was also a feature where you could tell it what to code and it would try to write a solution. Haven't really seen how to do that yet.
-
https://milkyeggs.com/?p=303
"I claim that the trend which AI/ML continues for lawyers is one that it starts for programmers. Just like how a partner at Cravath likely sketches an outline of how they want to approach a particular case and swarms of largely replaceable lawyers fill in the details, we are perhaps converging to a future where a FAANG L7 can just sketch out architectural details and the programmer equivalent of paralegals will simply query the latest LLM and clean up the output. Note that querying LLMs and making the outputted code conform to specifications is probably a lot easier than writing the code yourself ー and other LLMs can also help you fix up the code and integrate the different modules together!"1 -
Has anyone seen an AI/ML/whatever in the wild? I mean, an _actual_ implementation in production that is actually used.
And a PowerPoint slide does not count (unless, of course, it was created by AI/ML/whatever).
I hear a lot of big words from management but I can't see anything anywhere.
-
Why do clients expect that they would get a high-quality machine learning model without a properly cleaned dataset? I usually get the response, 'just scrape the data and train it; it shouldn't take long'.
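Just so it's clear what 'just scrape the data and train it' skips over, this is the kind of minimal cleaning pass that has to sit between a scraped dump and anything trainable (a sketch with pandas/scikit-learn; the file name, columns, and labels are made up):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical scraped dump: duplicated rows, missing values, free-text labels.
df = pd.read_csv("scraped_dump.csv")

df = df.drop_duplicates()
df = df.dropna(subset=["text", "label"])           # can't train on missing targets
df["label"] = df["label"].str.strip().str.lower()  # 'Spam ', 'spam', 'SPAM' -> 'spam'
df = df[df["label"].isin(["spam", "ham"])]         # drop junk labels entirely

# Only after this is it worth talking about training anything.
train, test = train_test_split(df, test_size=0.2, stratify=df["label"], random_state=42)
```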
-
ROS Melodic in a strictly Python 2.7 environment mixes horribly with a PyTorch-based RL module... Time to work around it with terminal calls from the latter.
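The workaround looks roughly like this (a sketch only: the Python 2.7 ROS node shells out to a separate Python 3 environment that owns PyTorch; the interpreter path, script path, and JSON contract are all made up):

```python
# Runs inside the Python 2.7 ROS Melodic node.
import json
import subprocess

PY3 = "/opt/py3_venv/bin/python3"          # hypothetical Python 3 env with PyTorch installed
AGENT = "/home/robot/rl_agent/infer.py"    # hypothetical PyTorch RL inference script

def query_rl_agent(observation):
    """Send an observation to the PyTorch agent and read an action back as JSON."""
    proc = subprocess.Popen(
        [PY3, AGENT],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    out, _ = proc.communicate(json.dumps({"obs": observation}))
    return json.loads(out)["action"]
```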
*sigh*
-
Hey, what is the best way to add speech recognition on the web? Do web developers have AI/ML technology available the way Python developers do?