Search - "machine learning"
-
Difference between machine learning and AI:
If it is written in Python, it's probably machine learning
If it is written in PowerPoint, it's probably AI
-
Has this happened to anyone else?
-
I am a machine learning engineer and my boss expects me to train an AI model that surpasses the best models out there (without training data of course) because the client wanted ‘a fully automated AI solution’.
-
> Open private browsing on Firefox on my Debian laptop
> Find ML Google course and decided to start learning in advance (AI and ML are topics for next semester)
**Phone notifications: YouTube suggests Machine Learning recipes #1 from Google**
> Not even logged in on laptop
> Not even chrome
> Not even history enabled
> Not fucking even windows
😒😒😒
The lack of privacy is fucking infuriating!
....
> Added video to Watch Later
I now hate myself for biting
-
* Go to sleep at reasonable times
* Watch some of those anime I never quite finished
* Read more books
* Become more proficient with rust
* Replace go with rust at work
* Set up a weeb media center I can access remotely
* Finally make a personal webpage/blog without overthinking it, so I actually get it done
* Find or make a storage solution for all the memes I sto- I mean collected, where I can add tags to find them more quickly. Would love to have the tagging done automatically with machine learning (something like the sketch below), but I don't think we're quite there yet.
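Something like this, repurposing a pretrained ImageNet classifier; the filename and threshold are placeholders, and ImageNet labels are nowhere near meme-grade, which is exactly the "not quite there yet" part:

```python
# Hypothetical sketch: auto-tag images with a pretrained ImageNet classifier.
# "meme.png" and the 0.1 confidence threshold are placeholders, not a real setup.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")

img = image.load_img("meme.png", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)

# Keep the labels the model is reasonably confident about as tags.
tags = [label for _, label, score in decode_predictions(preds, top=5)[0] if score > 0.1]
print(tags)
```
-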
"So Alecx, how did you solve the issues with the data provided to you by hr for <X> application?"
Said the VP of my institution in charge of my department.
"It was complex sir, I could not figure out much of the general ideas of the data schema since it came from a bunch of people not trained in I.T (HR) and as such I had to do some experiments in the data to find the relationships with the data, this brought about 4 different relations in the data, the program determined them for me based on the most common type of data, the model deemed it a "user", from that I just extracted the information that I needed, and generated the tables through Golang's gorm"
VP nodding and listening intently...."how did you make those relationships?" me "I started a simple pattern recognition module through supervised mach..." VP: Machine learning, that sounds like A.I
Me: "Yes sir, it was, but the problem was fairly easy for the schema to determ.." VP: A.I, at our institution, back in my day it was a dream to have such technology, you are the director of web tech, what is it to you to know of this?"
Me: "I just like to experiment with new stuff, it was the easiest rout to determine these things, I just felt that i should use it if I can"
VP: "This is amazing, I'll go by your office later"
Dude speaks wonders of me. The idea was simple: read through the CSV that was provided to me, do the parsing in a notebook, have it determine the relationships in the data and spit out a bunch of JSON that I could use. Hook that up to a simple gorm Golang script and generate the tables from it. Much simpler than the bullshit that we have in php. I used this to create a new database since the previous application had issues. The app will still have a php frontend and backend, but now I don't leave the parsing of the data to php, which quite frankly php sucks at, imho. The Python codebase creates the JSON files through the predictive modeling (98% accuracy) and then the Go program populates the db for me.
There are also some node scripts that help test the data since the data is json.
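If anyone's curious, the notebook boils down to roughly this shape, minus the actual supervised model; the file names and column heuristics are made-up stand-ins:

```python
# Simplified stand-in for the notebook: guess a relation per column and dump JSON.
# File names and heuristics are illustrative; the real thing used a supervised model.
import json
import pandas as pd

df = pd.read_csv("hr_export.csv")

schema = {}
for col in df.columns:
    series = df[col].dropna()
    if series.is_unique:
        kind = "identifier"      # candidate key
    elif series.dtype == "object":
        kind = "label"           # categorical / text
    else:
        kind = "measure"         # numeric fact
    schema[col] = {"dtype": str(series.dtype), "relation": kind}

with open("schema.json", "w") as fh:
    json.dump({"entity": "user", "columns": schema}, fh, indent=2)
```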
All in all a good day of work. The VP seems scared since he knows no one on this side of town knows about this kind of tech. Me? I am just happy I get to experiment. Y'all should have seen his face when I showed him a rather large app written in Clojure, the man just went 0.0 when he saw Lisp code.
I think I scare him.
-
Do some cool shit that I’ve always wanted to do.
- learn more about machine learning and computer vision
- learn C / C++ / rust
- learn embedded systems / programming
- learn more EE-centered stuff
-
I just told my director that the solution for a particular problem we have involves Machine Learning, for which I had already built a VERY small app that makes sense of an old database in order to build a NEW one, since the old one broke every notion of how a db is supposed to be set up (meaning that I recreated the project from scratch).
And in the same message I told him that I was not willing to do it using M.L since I was not paid enough to bring this level of heat to the institution.
Normalize telling mfkers that your skills are worth more.
I am paid well, but not enough to tell mfkers out of the blue that my ml-based algo can save them.
Fuck em, fuck em hard, fuck em good, fuck em without even using spit.
I don't do this shit because I am paSSiOnate, since there lies the trap: "I mean, I love it so I guess I can do it, I do this in my free time either way" <---- no bitch, shit is expensive in the real world, don't do that, wtf is the matter with you? *slaps* Companies don't see it as "oh shit, employee X can do this! value!", they see it as "greaaaaat, I can save money on this", so fuck em.
Normalize it, y'all are wizards, advisors of kings, no company today survives without I.T. About motherfucking time y'all bitches take this shit by the horns and do with it what you want.
People from third world countries that need work: shit don't apply to you, currently, but we will make it apply to you on the rise, my kings, stay strong.
-
I am so sick and tired of hearing “AI” everywhere all the time. Yeah, how about we integrate some AI into your super smart toaster so that it knows when to start warming up for the toast you put in it in the morning.
Not even mentioning all these idiots being like “oh yeah AI is becoming sentient. Oh yeah AI is gonna take over the world”.
Brother, the current state of AI is just machine learning; it's a stupid pattern detector and generator, it doesn't have thoughts or emotions. Please just stop it.
-
How to psych-out a machine learning algorithm:
> Use a platform for 10 years
> Never like, comment, or give it any inkling of your preferences
> Like one random video
> Never log in again
-
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is easier to get started with, I'd appreciate your input.
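For context, this is about as far as I've gotten with Ray; a minimal sketch where the work function is just a placeholder for whatever slice of the job each box will chew on:

```python
# Minimal Ray sketch (assumes `ray start --head` on one box and the others joined to it);
# the work function is a placeholder for a chunk of the real workload.
import ray

ray.init(address="auto")  # connect to the existing home "cloud"

@ray.remote
def work_chunk(start, stop):
    # stand-in computation for one slice of the search/training job
    return sum(i * i for i in range(start, stop))

futures = [work_chunk.remote(i * 100_000, (i + 1) * 100_000) for i in range(16)]
print(ray.get(futures))  # blocks until all the towers report back
```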
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, and so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if say a certain constant or quotient is known (it isn't), knowing any *one* of them and the correct input is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern) and AE allows us to put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.
-
Riddle me this
Client wants solution based on open source software.
Any additional software that I write (let's say, an offline store plugin for Feast feature store) to add missing functionality has to be closed source.
Fuck you. Intellectual property my ass. You and me wouldn't even have projects if it weren't for OSS.
Good luck maintaining the plugin after I am gone.
I'm doing a lot of work and will have close to nothing to show to future employers.
(BTW, under the old Microsoft closed-source model, I would never have become a programmer of any sort. God bless OSS)
-
one of my guys decided to start learning c++ for the fun and fuck of it. We do not use c++ for shit (we're web developers in this bitch) and he asked me if, in the event of him getting completely fucking stuck, he could come to me for guidance. I said sure. I do use c++ for personal game projects....it is mostly very bad C until I need c++, it is horrible seriously, I ain't no expert.
He decides to go with the LLVM. Creates a simple hello world app. Runs clang++ main.cpp -o main.
**QUICK PAUSE**
Done, the CLI returns the prompt back to him. He comes and asks me wtf is going on. I check on my machine (Linux based) and do the exact same thing. Executable comes out.
I check back on his windows machine, try typing the same shit. Nada. It does not throw errors or warnings, and the syntax is fucking fine, can't really fuck up c-outing hello fucking world. FUCKING NADA
I couldn't sit down to troubleshoot since it was still working hours, but this shit is haunting me and I am going ballsack crazy knowing that I won't be able to jump at it until tomorrow.
This just makes me dislike c++. I usually never have issues like that, but then again, I use the Microsoft compiler (bitch at me all you want, most game developer tutorials etc use that shit, so does the Cherno, it's all I know OK????)
I am going to go crazy sdjkfhasdkjlfghlajkhrfvluidefjbhfksjadhjksdsdsjksdjkl
-
Next level reinforcement learning:
Grab a baseball bat and show that damn machine who's the boss, i.e. reinforce that message by high-fiving said machine in the face with the aforementioned bat.
-
I'm deploying a machine learning model to production. We don't have an automated deployment pipeline for the models, so we do it manually, exposing the models through a REST API.
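The manual exposure is roughly this shape; a sketch assuming a pickled scikit-learn-style artifact behind Flask, since the real artifact lives in Databricks:

```python
# Rough shape of the manual serving step; "model.pkl" stands in for the artifact
# I was trying to get my hands on.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as fh:
    model = pickle.load(fh)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict([features]).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```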
I asked the DS for the model artifact; they didn't have permissions to download files from Databricks.
I asked their manager for the artifact. He told me that he has the permissions, then bullshitted me with something about the formal process, some shit about proper permissions handling, and that they do not have a standard process for sharing files right now, so I should wait.
I was like "bro, share the artifact with me to unblock my work, then establish your process, I don't care". He said no, and just after that he started a thread involving half of the middle management and data engineers asking for feedback on how to establish a process for sharing Databricks files. Just wtf.
I got pissed and reached out to his superior (a good friend of mine), who was on vacation btw, and told him the situation. He opened Slack and humiliated him so bad that I almost felt bad for the manager jajajajaja.
I grabbed my model artifact and got out of there instantly.
-
As you can see from the screenshot, it's working.
The system is actually learning the associations between the digit sequence of semiprime hidden variables and known variables.
Training loss and value loss are super high at the moment and I'm using an absurdly small training set (10k sequence pairs). I'm running on the assumption that there is a very strong correlation between the structures (and that it isn't just all ephemeral).
This initial run is just to see if training a machine learning model is a viable approach.
Won't know for a while. Training loss could get very low (that's a good thing, indicating actual learning), only for it to spike later on, and if it does, I won't know if the sample size is too small, or if I need to do more training, or if the problem is actually intractable.
If or when that happens I'll experiment with different configurations like batch sizes and more epochs, as well as upping the training set incrementally.
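The knob-turning will look roughly like this; a sketch assuming the compiled model and training arrays already exist, with placeholder numbers:

```python
# Sketch of the configuration sweep; assumes `model`, `x_train`, `y_train` already exist.
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

history = model.fit(
    x_train, y_train,
    validation_split=0.1,   # held-out slice to catch the loss spiking later
    batch_size=64,          # knob one
    epochs=200,             # knob two; early stopping cuts it short
    callbacks=callbacks,
)
```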
Either way, once the initial model is trained, I need to test it on samples never seen before (products I want to factor) and see if it generates some or all of the digits needed for rapid factorization.
Even partial digits would be a success here.
And I expect to create multiple training sets for each semiprime product and its unknown internal variables versus derivable known variables. The intersections of the sets, and what digits they have in common, might be the best shot available for factorizing very large numbers in this approach.
Regardless, once I see that the model works at the small scale, the next step will be to increase the scope of the training data, and begin building out the distributed training platform so I can cut down the training time on a larger model.
I also want to train on random products of very large primes, just for variety and see what happens with that. But everything appears to be working. Working way better than I expected.
The model is running and learning to factorize primes from the set of identities I've been exploring for the last three fucking years.
Feels like things are paying off finally.
Will post updates specifically to this rant as they come. Probably once a day.
-
I've got a report that one of our machine-learning purpose computers broke down suddenly. I took a look and saw that the thing was stuck at the BIOS screen. The thing that was off was that it did not prompt for any keystrokes. Like, if there were a BIOS problem, there would usually be a prompt to press <F1> to ignore or something, right? But, nope! Even BIOS did not do jack s#!+.
I tinkered around the peripherals for an hour before finally finding something odd - why the f*<k does this computer have a screen hooked up via f*<king D-Sub????????
Yup, somebody hooked up a screen to the base motherboard via D-Sub when they rearranged other computers, even though that machine needed to have a screen hooked up to a GPU via HDMI.
🤦
-
In 2015 I sent an email to Google labs describing how pareidolia could be implemented algorithmically.
The basis is that a noise function put through a discriminator could be used to train a generative function.
And now we have transformers.
I also told them that if they looked back at the research they would very likely discover that dendrites were analog hubs, not just individual switches. That's turned out to be true too.
I wrote to them in an email as far back as 2009 that attention was an under-researched topic. In 2017 someone finally got around to writing "attention is all you need."
I wrote that there were very likely basic correlates in the human brain for things like numbers, and simple concepts like color, shape, and basic relationships, that the brain used to bootstrap learning. We found out years later, based on research, that this is the case.
I wrote almost a decade ago that personality systems were a means that genes could use to value-seek for efficient behaviors in unknowable environments, a form of adaptation. We later found out that is probably true as well.
I came up with the "winning lottery ticket" hypothesis back in 2011, for why certain subgraphs of networks seemed to naturally learn faster than others. I didn't call it that though, it was just a question that arose because of all the "architecture thrashing" I saw in the research, why there were apparent large or marginal gains in slightly different architectures, when we had an explosion of different approaches. It seemed to me the most important difference between countless architectures, was initialization.
This thinking flowed naturally from some ideas about network sparsity (namely that it made no sense that networks should be fully connected, and we could probably train networks by intentionally dropping connections).
All the way back in 2007 I thought this was comparable to masking inputs in training, or a bottleneck architecture, though I didn't think to put an encoder and decoder back to back.
Nevertheless it goes to show, if you follow research real closely, how much low hanging fruit is actually out there to be discovered and worked on.
And to this day, google never fucking once got back to me.
I wonder if anyone ever actually read those emails...
Wait till they figure out "attention is all you need" isn't actually all you need.
p.s. something I read recently got me thinking. Decoders can also be viewed as resolving a manifold closer to an ideal form for some joint distribution. Think of it like your data as points on a balloon (the output of the bottleneck), and decoding as the process of expanding the balloon. In absolute terms, as the balloon expands, your points grow apart, but as long as the datapoints are not uniformly distributed, then *some* points will grow closer together *relatively* even as the surface expands and pushes points apart in the absolute.
In other words, for some symmetry, the encoder and bottleneck introduce an isotropy, and this step also happens to tease out anisotropy: information that was missed or produced by the encoder, which is distortions introduced by the architecture/approach, features of the data that got passed on through the bottleneck, or essentially hidden features.
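If the balloon analogy is too abstract, a toy bottleneck in Keras shows where the "surface" sits; the dimensions are arbitrary:

```python
# Toy bottleneck autoencoder to make the balloon analogy concrete; sizes are arbitrary.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
bottleneck = layers.Dense(16, activation="relu")(encoded)    # the "balloon" surface
decoded = layers.Dense(128, activation="relu")(bottleneck)
outputs = layers.Dense(784, activation="sigmoid")(decoded)   # the re-inflated manifold

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```
-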
I want to do something data-science-y.
Gimme project ideas, and where can I get the data for it?
Also, not looking for machine learning, just basic data analysis stuff.
I'm bored.
-
(TL;DR FOR THE TL;DR: **THIS IS NOT AN AD, ITS A SHITPOST**)
(TL;DR: this is a shitpost about an Intuit ad campaign Israelis get a lot on YouTube, those ads are starting to drive me nuts lmao.)
WE'RE INTUIT
WE'RE INTO MACHINE LEARNING
OPEN SOURCE
WE'RE ADVANCING THE FIELD OF TECHNOLOGY TO OPEN FINANCIAL OPPORTUNITIES FOR MILLIONS OF PEOPLE AROUND THE GLOBE
-
I spent 4 months in a programming mentorship offered by my workplace to get back to programming, 4 years after I graduated with a CS degree.
Back in 2014, what I studied in my first programming class was not easy to digest. I would just try enough to pass the courses because I was more interested in the theory. That continued until I graduated, because I never actually wrote code for myself; for example, I wrote a lot of code for my vision class but never took personal initiative. I did, however, have a very strong grip on advanced computer science concepts in areas such as computer architecture, systems programming and computer vision. I have an excellent understanding of machine learning and deep learning. I also spent time working with embedded systems and volunteering at a makerspace, teaching Arduino and RPi stuff. I used to teach people older than me.
My first job as a programmer sucked big time. It was a bootstrapped startup whose founder was making big claims to secure funding. I had no direction, mentorship or leadership to validate my programming practices. I burnt out in just 2 months. It was horrible. I experienced the worst physical and emotional pain to date. Additionally, I was gaslighted and told that it is me who is bad at my job, not the people working with me. I thought I was a big failure and that I wasn't cut out for software engineering.
I spent the next 6 months recovering from the burnout. I had a condition where the stress and anxiety would cause my neck to deform and some vertebrae were damaged. Nobody could figure out why this was happening. I did find a neurophysician who helped me out of the mental hell hole I was in and I started making a recovery. I had to take a mild anti-anxiety medication for the next 3 years until I went to my current doctor.
I worked as an implementation engineer at a local startup run by a very old engineer. He taught me how to work and carry myself professionally while I learnt very little technically. A year into my job, seeing no growth technically, I decided to make a switch to my favourite local software consultancy. I got the job 4 months prior to my father's death. I joined the company as an implementation analyst and needed some technical experience. It was right up my alley. My parents who saw me at my lowest, struggling with genetic depression and anxiety for the last 6 years, were finally relieved. It was hard for them as I am the only son.
After my father passed away, I was told by his colleagues that he was very happy with me and my sisters. He died a day before I became permanent and landed a huge client. The only regret I have is not driving fast enough to the hospital the night he passed away. Last year, I started seeing a new doctor in hopes of getting rid of the one medicine that I was taking. To my surprise, he saw major problems and prescribed me new medication.
I finally got a diagnosis for my condition after 8 years of struggle. The new doctor told me a few months back that I have Recurrent Depressive Disorder. The most likely cause is my genetics from my father's side as my father recovered from Schizophrenia when I was little. And, now it's been 5 months on the new medication. I can finally relax knowing my condition and work on it with professional help.
After working at my current role for 1 and a half years, my team lead and HR offered me a 2 month mentorship opportunity to learn programming from scratch in Python and Scrapy with a personal mentor specially assigned to me. I am still in my management-focused role but will be spending 4 hours daily on the mentorship. I feel extremely lucky and grateful for the opportunity. It felt unworldly when I pushed my code to a PR for the very first time and got feedback on it. It is incomparable to anything.
So we had Eid holidays a few months back and because I am not that social, I began going through cs61a from Berkeley and logged into HackerRank after 5 years. The medicines help but I constantly feel this feeling that I am not enough or that I am an imposter even though I was and am always considered a brilliant and intellectual mind by my professors and people around me. I just can't shake the feeling.
Anyway, so now, I have successfully completed 2 months worth of backend training in Django with another awesome mentor at work. I am in absolute love with Django and Python. And, I constantly feel like discussing and sharing about my progress with people. So, if you are still reading, thank you for staying with me.
TLDR: Smart enough for high-level computer science concepts in college, did well in theory but never really wrote code without help. Struggled with clinical depression for the past 8 years. Father passed away one day before I became permanent at my dream software consultancy and landed one of its biggest clients. Getting back to programming after 4 years with the help of a change in medicine, a formal diagnosis and a technical mentorship.
-
The first fruits of almost five years of labor:
7.8% of semiprimes give the magnitude of their lowest prime factor via the following equation:
((p/(((((p/(10**(Mag(p)-1))).sqrt())-x) + x)*w))/10)
I've also learned, given exponents of some variables, to relate other variables to them on a curve to better make sense of the larger algebraic structure. This has mostly been stumbling in the dark, but after a while it has become easier to translate these into methods that allow plugging in one known variable to derive an unknown in a series of products.
For example I have a series of variables d4a, d4u, d4z, d4omega, etc, and these are now translatable, through insights that become various methods, into other types of (non-d4) series. What these variables actually represent is less relevant, only that it is possible to translate between them.
I've been doing some initial learning about neural nets (implementation, rather than the theory I normally read about). I'm thinking what I might do is build a GPT-style sequence generator and train it on the 'unknowns' from semiprime products with known factors.
The whole point of the project is that a bunch of internal variables (d4a, c/d4, u*v) can easily be derived from a product, its root, and its mantissa, and these relate to *unknown* variables such as u, v, c, and d4 that, if known, directly give a constant-time answer to the factors of the original product.
I think there's sufficient data at this point to train such a machine, I just don't think I'm up to it yet because I'm lacking in the calculus department.
2000+ variables that are derivable from a product without knowing its factors, which are themselves products of unknown variables derived from the internal algebraic relations of a product; this ought to be enough of an attack surface to do something with.
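To make the framing concrete, a sketch of how I'd build the training pairs; the "known" features below are illustrative stand-ins, not the actual d4/u/v identities:

```python
# Illustrative framing of the training set: pair features derivable from the
# product alone with the factor digits we want predicted. Feature choices are
# placeholders, not the real identities.
import math
import sympy

def make_example():
    p = sympy.randprime(10**5, 10**6)
    q = sympy.randprime(10**5, 10**6)
    n = p * q
    root = math.isqrt(n)
    mantissa = n / 10 ** (len(str(n)) - 1)        # leading-digit form of n
    known = [len(str(n)), root % 10, mantissa]    # derivable without the factors
    target = [int(d) for d in str(min(p, q))]     # digits we want the net to predict
    return known, target

dataset = [make_example() for _ in range(10_000)]
```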
I'm willing to collaborate with someone familiar with recurrent neural nets and get them up to speed through telegram/element/discord if they're willing to do the setup and training for a neural net of this sort, one that can tease out hidden relationships and map known variables to the unknown set for a given product.
-
So I got the LSTM working in keras.
Working from a glorified tutorial.
Why the fuck do people let their github pages go down with no other backup?
Especially if it's a link in your blog?
Why would you do that and not post the full script (instead of bits and pieces interspersed with *partial* explanations)?
In any case, it's working and training on a test set and examples just to debug my own understanding of the process.
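For anyone curious, the skeleton I pieced together from the tutorial fragments looks roughly like this; the vocab/sequence sizes and the random arrays are just debugging filler, not the real data:

```python
# Digit-level LSTM skeleton; sizes and the dummy arrays are placeholders.
import numpy as np
from tensorflow.keras.layers import LSTM, Dense, Embedding
from tensorflow.keras.models import Sequential

vocab_size, seq_len = 12, 64   # 10 digits plus padding/separator tokens (assumed)

model = Sequential([
    Embedding(vocab_size, 32),
    LSTM(128),
    Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randint(0, vocab_size, size=(1000, seq_len))  # dummy sequences
y = np.random.randint(0, vocab_size, size=(1000,))          # dummy next-token targets
model.fit(x, y, epochs=3, batch_size=32)
```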
Once that's done I can generate some training data and try training on a small set. If that goes smoothly and the loss looks like it is heading in the right direction, then I'll set up the hardware for the private cloud and start writing the parallel computing component.
-
When I commented that there may be non-Euclidean equivalents to certain stat functions (average, mean, mode, etc), apparently there were others out there with the same general idea.
Some guys over at Stanford are exploring hyperbolic spaces for machine learning, which is exactly the sort of application I had in mind.
Very fascinating work, go check it out if it's something that interests you.
https://dawn.cs.stanford.edu/2019/...
And the related paper that it is based on:
http://proceedings.mlr.press/v80/...
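To make "hyperbolic" a bit more concrete, the Poincaré-ball distance those papers build on looks like this in plain numpy (my own toy sketch, not their code):

```python
# Toy Poincare-ball distance; points must stay inside the unit ball.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    uu = np.clip(np.dot(u, u), 0, 1 - eps)
    vv = np.clip(np.dot(v, v), 0, 1 - eps)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv)))

a = np.array([0.1, 0.2])
b = np.array([0.7, -0.5])
print(poincare_distance(a, b))  # blows up as points approach the boundary
```
-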
Need some advice. I'm a uni student and I really want to go into machine learning, data science, or computer vision. I have most of the skills and I feel I am fairly competent. However, the only professional experience I have is web-dev based. How can I make myself more appealing for data-based roles? I really don't want to do web dev anymore hahahahah
-
Any examples of machine learning / artificial intelligence for video auditing that the community knows of?
-
I am tired of switching my role at the startup where I work. I was primarily hired as a backend developer, but sometimes I have to handle sprints, I have to become a machine learning engineer, a devops engineer and now even a frontend dev. Yuck. I am soooooo tired...
-
Does anybody have an opinion on CMU for a machine learning or robotics PhD? Do you think they'll let me in? (I've heard horror stories about their selection process tbh)
Also, any good Canadian unis and degrees for AI/robotics combo PhDs?
-
To all my Machine Learning engineers: I've been doing frontend development for 6 years and I'm done. Wanting to get into machine learning because I've always loved data.
1. What is your day to day like?
2. Any advice for my learning journey?
Thank you 🙏
-
Ok soooo......today all those years of learning cmd commands and how to navigate the system in cmd kinda paid off
Had to search and copy files from a pc that isn't booting up and the pc has to return to the pharmacy today
In case the machine fails.... we just do a fresh install and restore the critical data
-
First computer I saw was an Apple II running Oregon Trail in grade school. Then I played computer games on my uncle's Apple IIe. The first home video game I ever saw was Pong. It was a device you hooked to the RF input on the TV. It had 2 paddles to control the input (single axis controllers). The first game console I played on was the Atari. The first computer I programmed was a black and white Macintosh. Then the other programmers in my high school told me the PC was better. Well, it was better for learning IMO. That was with Windows 3.0. But the programming was Turbo Pascal in DOS. DOS gave you complete control of the machine. Better at the time for me learning to do graphics and sound programming. The first computer I bought was a 386 and I played with VR programming. Made my own joysticks using the limited joystick port. Fun times learning electronics and software together.
-
Hey y'all!
this is my first post, so bear with me if my question sounds obvious, but googling around gave me contradicting articles.
I wanted to ask if there's the possibility of making a living as an AI developer outside my country (Italy), because, like I wrote in my bio, despite a CS degree and a specialization in machine learning, the only jobs I landed were about maintaining useless outdated webapps. I can tell you that the first job's project was a JSP/servlet app that could only run in Internet Explorer (yes, Internet Explorer, in 2019). Maybe you won't believe me, but if you do, maybe you can partially understand why I want to flee this place.
Add that I had to commute by train + subway to get to work, losing some 3 extra hours a day because of that.
I mean, if I really have to put up with the hassle of public transit in order to work, at least I want to enjoy it a bit. Please get me outta here.
-
I am very confused nowadays; there are a great number of technologies out there, but I can't decide which technology or programming language I want to specialize in.
I love hacking, but I have very little experience in programming and only basic knowledge of networks and databases.
I love assembly language, but I can only write a handful of instructions in it and know very little about components, architecture and the rest.
I love data mining, big data, AI and machine learning, but I don't have a solid grasp of statistics.
I have only basic knowledge of each of these topics.
Right now I am trying to figure out where I fit. I am learning Perl and regular expressions.
-
Need advice:
So I'm 20 years old. Got a decent job as a software engineer with really good pay and really want to break into machine learning.
I've mastered NodeJS (my stack has always had Node for the past 5-6 years) and I'm finding it difficult to switch to Python for machine learning since things are so ingrained in my head in JavaScript.
Aside from the syntax, when I'm watching tutorials or reading books I see data scientists and mathematicians make design mistakes in their code, and it hurts my eyes and triggers my OCD.
I need tips on how to put my mindset in a moldable state so I can judge less, learn more and absorb data. Like, you know that philosophy that when you get old your brain can't learn things as fast anymore? I feel like that's already happening to me rn at the age of 20.
-
Every time someone tells me they are an expert in ML, I chuckle. I don't know if the ML stands for Machine Learning or "MY LEGGGGG" from SpongeBob.
-
Has anyone used machine learning in real-world use cases? (It would be nice if you could describe the case in a few words.)
I'm reading about the topic and doing some testing, but at the moment my feeling is that ML is like blockchain: it solves a specific type of problem, and for some reason everyone wants to have this problem.
-
Why do clients expect to get a high-quality machine learning model without a properly cleaned dataset? I usually get the response, 'Just scrape data and train it. It shouldn't take long.'
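"Just scrape data and train it" skips the part that actually takes the time; even a minimal cleaning pass looks something like this (column names made up, every dataset needs its own version):

```python
# The unglamorous part that "just scrape and train" skips; columns are hypothetical.
import pandas as pd

df = pd.read_csv("scraped.csv")

df = df.drop_duplicates()
df["price"] = pd.to_numeric(df["price"], errors="coerce")    # junk strings -> NaN
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df = df.dropna(subset=["price", "date"])                     # rows the model can't use
df = df[df["price"].between(0, df["price"].quantile(0.99))]  # clip scraping outliers

df.to_csv("cleaned.csv", index=False)
```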
-
Could there be a "greater" GPL which explicitly declares that the constraint extends to use of the code as statistical data, such as in machine learning models?
-
If you were required to build a custom object detector for mobile,
would you rather use TensorFlow, Theano, or PyTorch to train the model?