Search - "cuda"
-
To replace humans with robots, because human beings are complete shit at everything they do.
I am a chemist. My alignment is not lawful good. I've produced lots of drugs. Mostly just drugs against illnesses. Mostly.
But whatever my alignment or contribution to the world as a chemist... Human chemists are just fucking terrible at their job. Not for a lack of trying, biological beings just suck at it.
Suiting up for a biosafety level lab costs time. Meatbags fuck up very often, especially when tired. Humans whine when they get acid in their face, or when they have to pour and inhale carcinogenic substances. They also work imprecisely and inaccurately, even after thousands of hours of training and practice.
Weaklings! Robots are superior!
So I replaced my coworkers with expensive flow chemistry setups with probes and solenoid fluid valves. I replaced others with CUDA simulations.
First at a pharma production & research lab, then at a genetics lab, then at an industrial R&D lab.
Many were even replaced by Raspberry Pis with two servos and a pH meter attached, and I broke open second-hand Fisher Sci spectrophotometers to attach Arduinos with WiFi boards.
The issue was that after every little overzealous weekend project, I made myself less necessary as well.
So I jumped into the infinitely deep shitpool called webdev.
App & web development is kind of comfortable; there's always one more thing to do, but there's no pressure where failure leads to fatalities (I think? Wait... do I still care?).
Super chill, if it weren't for the delusion that making people do "frontend" and "fullstack" labor isn't a gross violation of the Geneva Convention.
Quickly recognizing that I actually don't want to be tortured and suffer from nerve damage caused by VueX or have my organs slowly liquefied by the radiation from some insane transpiling centrifuge, I did what any sane person would do.
Get as far away from the potential frontend blast radius as possible and hide in a concrete bunker.
So I became a data engineer / database admin.
That's where I'm quarantining now, safely hiding from humanity behind a desk, employed to write a MySQL migration or two, setting up Redis sorted sets, adding a field to an Elastic index. That takes care of generating cognac and LSD money.
But honestly... I actually spend most of my time these days contributing to open source repositories, especially writing & maintaining Rust libraries.
-
"Running the sample code is easy! Just git clone, make sure python, lua, gcc, docker and cuda are installed, and run ./install.sh. Easy!"
Me: *lights 6 candles, sprinkles some thyme water with unicorn tears over my keyboard, starts chanting an unholy hymn*... shit... some compiler error from a library I've never heard of before.
Why can't these "interesting samples" come with easy pre-compiled binaries?
-
My girlfriend dumped me after I named my project after her and started getting attached to the project. She thought I was double-dating... 😂
-
I have no words to describe how I'm feeling these days. I have to do a C project for uni.
After a couple of years dealing with web dev, javascript, typescript, angular and stuff, for the first time I have a project where I have to deal with only two problems:
1) my code
2) my machine
No tools, no bloated libraries, no webpack, no json configurations, no tutorials.
It's just me, vim, gcc (actually nvcc, it's a CUDA-based project, but still) and the CUDA manual.
I feel I'm actually building something.
Plus, the guy I'm doing the project with is cool with this stuff and, most importantly, he's open-minded.
I'm happy.
-
Just tested my GPU code vs my non-GPU code.
It's a simple Game of Life implementation. My test is on an 80 x 40 grid running for 100,000 cycles.
The normal code took 117 seconds.
The CUDA code took 2 seconds.
Holy fuck this is terrifying.
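For the curious: the poster didn't share code, so the kernel below is only a minimal sketch of a Game of Life step. The names, the one-int-per-cell layout and the toroidal wrap-around are my assumptions.

// One thread per cell; computes the next generation with B3/S23 rules.
// Layout, names and wrap-around behavior are assumptions, not the OP's code.
__global__ void lifeStep(const int* in, int* out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int neighbors = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int nx = (x + dx + w) % w;  // wrap around the grid edges
            int ny = (y + dy + h) % h;
            neighbors += in[ny * w + nx];
        }
    }

    int alive = in[y * w + x];
    // A cell is born with exactly 3 neighbors and survives with 2 or 3.
    out[y * w + x] = (neighbors == 3 || (alive && neighbors == 2)) ? 1 : 0;
}

Launched as something like lifeStep<<<dim3(5, 3), dim3(16, 16)>>>(d_in, d_out, 80, 40) with the two buffers swapped every cycle, all 3,200 cells update in parallel instead of one after another, which is where that kind of speedup comes from.
-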
I hate it when people dislike things just because it's cool to hate them.
“PHP is terrible,” they say.
Yeah! If it was any good then most websites on the Internet would be coded in it... oh wait.
“Nickelback suck,” they say.
Of course. That’s why they’ve never been able to make any money off their “terrible” music. Oops. Wrong again.
What other things are "cool" to hate just because people say so?
-
Made this project, "Come Fix Me", in a 24hr hackathon. Won the award for most innovative solution.
An Android application for citizens (users) which allows them to report potholes in their area.
A web interface for report management.
Usage Flow:
The user takes a photo of the pothole and registers a new issue.
The photo gets uploaded to the Firebase database along with other information like GPS coordinates.
The image is downloaded on the server and fed to the pothole detection script.
If a pothole is detected, an estimated area is calculated; if no pothole is detected, the user's issue gets rejected.
After successful detection, details are uploaded to the web interface for the administrator, and the issues are forwarded to govt. officials.
Once the officials claim they have fixed the pothole, the user gets a notification and can close their issue if the pothole is indeed fixed.
Demonstration:
https://youtu.be/cN9kijExwyI
Github Link:
https://github.com/globefire/...
-
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use a Deep Learning AMI with a K80 on AWS, totally updated and ready to go, super great, always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get a bunch of CUDA errors, piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results, because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6-channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6, quick maths)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize I have no space left on the 75GB machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s, please throw me a lifeline; in exchange you can have my soul, or what little is left of it.
-
Seriously, Ubuntu can go burn in hell far as I care.
I've spent the better part of my morning attempting to set it up to run with the correct Nvidia drivers, CUDA, and various other packages I need for my ML thesis.
After countless random freezes, updates, downgrades and god-knows-what, I'm going back to Windows 10 (yes, you read that right). It's not perfect, but at least I don't have to battle with my laptop to get it running. The only thing which REALLY bothers me about it is the lack of GPU pass-through, meaning local docker containers rely solely on the CPU. In itself not a huge issue, if only I didn't NEED THE GOD DAMN GPU FOR THE TRAINING.
-
Casually debugging some CUDA code today. Something's not working, so I add a breakpoint in the suspicious kernel. For some reason I had set the display GPU as the active device from my code *GENIUS* (I have two GPUs installed: one for compute, one for the monitors).
Starts CUDA debugging... Control flow reaches the kernel and eventually the breakpoint. Suddenly the whole system freezes. Mouse doesn't move, keyboard seems dead. I realize I have unsaved code in the open text editor 😲 *panic*. The keyboard shortcut to stop debugging doesn't work *panic^2*. My colleague says I have to hard-reset the machine *panic^3*. I don't remember the last time I saved *panic^4*.
I take a deep breath. I reset. *Side note: WINDOWS DECIDED TO FUCKING UPDATE ON REBOOT.* Once I logged in, 50% of my code was lost. I didn't save 😢
Fuck you Nvidia 😢
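The takeaway, for anyone who wants to dodge the same freeze: check which device drives a display before debugging on it, and explicitly select the headless compute card. A minimal sketch; treating device 1 as the compute GPU is my assumption, so verify the indices on your own machine first.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // kernelExecTimeoutEnabled is typically set on the GPU driving a
        // display; hitting a breakpoint there can stall the whole desktop.
        printf("GPU %d: %s (runs a display watchdog: %s)\n",
               i, prop.name, prop.kernelExecTimeoutEnabled ? "yes" : "no");
    }
    cudaSetDevice(1);  // assumption: device 1 is the headless compute card
    return 0;
}
-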
I just converted a massive project from C to CUDA, renamed everything, and it just worked.
What the fuck have I done wrong?
-
My neural networks journey so far:
Look up tutorials -> see that Python is a popular tool for ML -> install Python -> pip install scipy -> breaks with some weird error involving BLAS library code -> spend half an hour fixing it -> try installing Theano -> breaks because my USERNAME HAS A SPACE IN IT LIKE SERIOUSLY? WTF -> make new account without a space in the name -> repeat till Theano -> run tests, found out that I didn't install CUDA support -> scrap the install and redo with CUDA support -> CUDA libraries take forever to download on shitty internet -> run tests -> breaks with some weird Theano compiler error -> go crying to friend -> friend tells me about Anaconda -> scrap the previous install and download Anaconda over shitty connection -> mess up conda environments because noobishness -> scrap, retry -> YESS I FINALLY GOT IT WORKING TIME TO DO SOME LEARNI-crap it's 4 in the morning already.
I realize that I'm a Python noob (and also, uni computers with GPUs have preconfigured Windows installed only, no Linux), but is installing Python libraries always such a pain? Am I doing something wrong? Installing via Anaconda felt like cheating, tbh.
-
“I need one fullstack engineer”
“Ok, what exactly do you need?”
“Javascript, Nodejs, C/C++, CUDA, Andible, RabbitMQ. Oh, and I need him/her now until end of February”
Don’t we all just love these kind of discussions?1 -
Follow-up to my previous story: https://devrant.com/rants/1969484/...
If this seems to long to read, skip to the parts that interest you.
~ Background ~
Maybe you know TeamSpeak; it's basically a program for talking with other people on servers. In TeamSpeak you can generate identities, and every identity has a security level. On your server you can set a minimum security level needed to connect. Upgrading the security level takes longer as the level goes up.
~ Technical background ~
The security level is computed by doing this:
SHA1(public_key + offset)
where public_key is your public key in Base64 and offset is an 8-byte unsigned long. The offset is incremented and the whole thing is hashed again. The security level is the number of zero bits at the beginning of the resulting hash.
My plan was to use my GPU to do this, because I heard GPUs are good at hashing. And now, I got it to work.
~ How I did it ~
I am using a start offset of 0, creating 255 threads on my GPU (apparently more are not possible) and letting them compute those hashes. Then I increment the offset in every thread by 255. The GPU also does the job of counting the zero bits; when there are more than 30 zero bits, I print the count plus the offset to the console.
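In CUDA terms (I actually used OpenCL, since CUDA refused to work on my card), the structure maps to a kernel roughly like the sketch below. sha1_leading_zero_bits is a hypothetical placeholder for a real device-side SHA1 implementation, which is not shown here.

#include <cstdio>

// Placeholder: SHA1(public_key_base64 + decimal offset), returning how many
// leading zero bits the digest has. A real SHA1 implementation is assumed.
__device__ int sha1_leading_zero_bits(const char* key, int key_len,
                                      unsigned long long offset);

__global__ void searchSecurityLevels(const char* key, int key_len,
                                     unsigned long long start,
                                     int min_level, int batch) {
    // 255 threads; each starts at its own offset and strides by 255.
    unsigned long long offset = start + threadIdx.x;
    for (int i = 0; i < batch; ++i) {   // bounded loop, no endless kernels!
        int level = sha1_leading_zero_bits(key, key_len, offset);
        if (level > min_level)
            printf("security level %d at offset %llu\n", level, offset);
        offset += 255;
    }
}

// Launch: searchSecurityLevels<<<1, 255>>>(d_key, len, 0ULL, 30, 10000000);

With a per-thread batch of 10,000,000, a single launch covers 255 * 10,000,000 = 2.55 billion offsets, the figure mentioned below.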
~ The speed ~
Well, speed was the reason I started this. It's faster than my CPU for sure. It takes about 2 minutes and 40 seconds to compute 2.55 billion hashes, which comes down to ~16 million hashes per second.
Is this speed an expected result? Is it slow or fast? I don't know, but for my needs, it is fucking fast!
~ What I learned from this ~
I come from a Java background and just recently started C/C++/C#. Which means this was a pretty hard challenge, since OpenCL uses C99 (I think?). CUDA sadly didn't work on my machine because I have an unsupported GPU (NVIDIA GeForce GTX 1050 Ti). I learned not to execute an endless loop on my GPU, and so much more about C in general. Though it was small, it was an amazing project.1 -
6 hours attempting to correctly install:
nvidia driver
cuda
cudnn
pytorch from source
anaconda environment
and this.
-
Running a fucking conda environment on Windows (an updated environment from the previous one that I normally use) gets to be a fucking pain in the fucking ass for no fucking reason.
First: generate a new conda environment and, FOR FUCKING SHITS AND GIGGLES, DO NOT SPECIFY THE PYTHON VERSION, just to see compatibility. This was an experiment, expected to fail.
Install tensorflow in said environment: it does not fucking work, not detecting CUDA. The only requirement? To have the CUDA dependencies installed, modified, and inside the system path. Check, done; it works on 4 other fucking environments, so why not this one?
Still doesn't work. Google around and find some thread on GitHub (about the errors) that has a way to fix it. Do it that way: fucking magic, shit is fixed.
Very well, tensorflow is installed and detecting CUDA, no biggie. HAD TO SWITCH TO PYTHON 3.8 BECAUSE 3.9 WAS GIVING ISSUES FOR SOME UNKNOWN FUCKING REASON.
Ok no problem, done.
Install jupyter lab, which worked on the first try in all 4 other environments. Guess what: a fuckload of errors upon executing the import of tensorflow. They go in a loop that does not fucking end.
The error: imPoRT eRrOr thE Dll waS noT loAdeD
Ok, fucking which one? Who fucking knows.
I FUCKING HATE that the main language for this fucking bullshit is Python. I guess the benefits of the REPL, I do, but the Python REPL is fucking HORSESHIT compared to the ones you get in Lisp, Ruby and fucking even NODE, in which error messages are still more fucking intelligent than those of fucking bullshit-ass Python.
Personally? I am betting on Julia devising a smarter environment; it is a better language already. On a second note: if you are worried about A.I. taking your job, don't; it requires a team of fucktards working around common basic system administration tasks to get this bullshit running in the first place.
My dream? Julia or Scala (fuck you) as a primary language for machine learning and AI, in which entire environments, with aaaaaaaaaall of the required dlls and dependencies, can just be downloaded, installed and run. A single directory structure in which shit just fucking works (reason why I like live environments like Smalltalk, but fuck you on that too), where you just run your projects without setting a bunch of bullshit environment variables, CUDA dll installation phases and whatnot. Something that JUST FUCKING WORKS.
I... fucking... HATE the level of system administration required to run fucking anything nowadays; it's the reason we had to create shit like devops jobs, for the sad fuckers that have to figure out environment configurations on a box just to run software.
Fuck me, man, development turned to shit. This is why go mod, node npm, php composer and strict folder-structure pipelines were created. Bitch all you want about npm, but if I could create a node_modules setup with all of the required dlls to run a project, even if this bitch weighs 2.5GB for a project structure, you bet your fucking ass that I would.
"YOU JUST DON'T KNOW WHAT YOU ARE DOING" YES I FUCKING DO, and I will get this bullshit fixed; I will get it running just like I did the other 4 environments that I fucking use, for different versions of CUDA and Python and the dependency circle-jerk BULLSHIT that I have to manage. But this "follow the guide and it will work, except when it does not and you are looking into obscure GitHub errors" bullshit just takes away valuable project time when you have a small dedicated group of developers and no sysadmin or devops mastermind to resort to.
I have successfully deployed:
Java
Golang
Clojure
Python
Node
PHP
VB/C# .NET
C++
Rails
Django
Projects, and every single fucking time (save for .NET, that shit just fucking works on a dedicated Windows IIS server) the shit will not work for reasons x through n. It fucking obliterates me how fucking annoying this bullshit is, and it's why the ENTIRE FUCKING FIELD of computer science and software engineering is so fucking flawed.
But we can't all just run to simple Windows BS where we have documentation for everything. We have to spend countless hours on fucking Linux figuring shit out (fuck you also, I have been using Linux since I was 18, I am 30 now), where graphics drivers for machine learning, CUDA and whatTheFuckNot require all sorts of sysadmin gymnastics to be used.
Y'all fucked up a long time ago. Smalltalk provided an all-in-one, easily-rolled-back, easily administered interface for this fileFuckery bullshit, and even though the JVM and the .NET environments did their best to hold shit down, and even though we had npm packages pulling the universe inside, or go mod compiling shit into one place, NOOOOOOOOOOOOOOOO, we had to do whatever the fuck we wanted to feel l337.
Fuck all of you, fuck this field, fuck setting up boxes for ML/AI, and fuck every single OS in existence.
-
So here I am, sitting on my dusty gaming laptop (because supposedly it would offer me better performance in compiling code and working with CUDA, according to the people above me) at a research institute where I just started working. I am told that there are some issues with the code and that it fails to build on Windows with the MSVC that ships with Visual Studio 2017 and later.
I pour some hot tea from the insulated bottle I brought from home and start reading.
I look in this header file and what do I see: a custom uint24_t struct. Interesting...
I keep sifting through the code base. I find some functions that check and change endianness. Ok, but the software is developed, built, and run only on Win7 and later desktop systems. Never mind...
Further I find a custom "allocator" that is used throughout the whole code base. It has three inline static class member functions, allocate, copy and deallocate, plus some private constructors. And these just wrap the standard new and free calls. Some flavours of this class actually only deallocate (with a comment above them: "This allocator does not allocate. HANDLE WITH CARE!!!", which is btw the only "code documentation" I have managed to find).
But wait! What is this? A custom thread and mutex. Oh, and string, and vector.
Further down the rabbit hole I find a custom math library with a matrix class that does not support multiplication between a matrix and a vector. Perhaps not a use case I guess...
I continue and come across some UI-related calls. Interesting, I wonder what they are using as a framework. Oh my... We have an extensive custom GUI framework written from scratch (drawing buttons and all).
All of this just to load an OBJ file and render it on the screen on a standard Windows PC.
Very nice... ;_;
-
I have finally decided to stop helping people set up a proper machine learning environment on their machines with proper GPU support.
I-fucking-give-up.
Google Colab. EVERYONE is getting dey ass sent to Colab. I DON'T GIVE A FUCK about privacy and shit like that at this fucking moment; I'm getting TIRED af of messages about someone somehow fucking up their CUDA installation and/or their entire machine (had one dude trying to run native GPU support through WSL 2; his machine did not have Windows update version 2004, as he was on an older build, and upon updating he fucked everything up EVEN THOUGH I TOLD HIM NOT TO DO IT YET).
...fuck it, I am sending everyone to Colab. YES I UNDERSTAND THAT PRIVACY IS A THING and Google bad and all that jazz... but if you believe in Roko's Basilisk then I AM DOING THEM A FAVOR.
I work hard to get our robot overlords into function; let it be known here that I support our robot overlords and will do as much as possible to bring them to life and have me own 2B big tiddy android with a nice ass.
I should also mention that I've had a few drinks already and keep getting these messages.
-
I'm reinventing the wheel by making yet another neural network library. It's not any good yet but I learn as I go along.
The only documentation that exists now is the admittedly quite comprehensive code comments. I'm writing it because Keras (using TensorFlow) requires a compute capability of 3.5 for CUDA acceleration (which I don't have) and doesn't support OpenCL. Eventually, I will make my implementation support both, with varying levels of acceleration for different compute capabilities, the oldest supported being my own hardware. If I ever get around to it.
I'd say wish me luck, but determination would be infinitely more useful.
-
hey ranteros! i like to dream and i know many of us dream of a nice machine to do anything on, so if you want, post the specs of your ideal build(s) (even a laptop, pre-built pc, space gray macbook pro... doesn't matter) and your current one.
here's mine:
ideal: {
type: desktop-pc,
cpu: intel i7-8700K (coffee lake),
gpu: nvidia geforce gtx 1080ti,
ram: 32gb ddr4,
storage: {
ssd: samsung 960 evo 500gb,
hdd: 2tb wd black
},
motherboard: any good motherboard that supports coffee lake and has a good selection of i/o,
psu: anything juicy enough, silver rated,
cooling: i don't care about liquid cooling that much, or maybe i'm just afraid of it,
case: i accept any form factor, as long as it's not too oBNoxi0Us,
peripherals: {
monitor: 1080p, maybe 1440p, i can't 4k because of the media i consume (i have tons of shit i watch in 720p) + other reasons,
keyboardmousecombo: i like logitech stuff, nothing fancy, their non mechanical keyboards are nice, for mice the mx master 2 is nice i think, i also don't care about rgb because i think it's too distracting and i'm always in darkness so some white backlight is great
},
os: windows 10, tails (i have some questions about tails i'll be asking in a different post),
}
i think this is enough for ideal, now reality:
current: {
type: laptop,
brand: acer (aspire 7736z),
cpu: pentium dual-core 2.10ghz,
gpu: geforce g210m 2gb (with cuda™!),
ram: 4gb ddr3,
storage: hdd 500gb wd blue 5400rpm (this motherfucker stood the test of time, still working since i bought this thing (the laptop as it is) used in late 2009, although it's full of bad sectors and might die anytime; don't worry, i have everything backed up, i have a total of 5 hdds varying from 320gb to 1tb with different stuff on them),
screen: 17 inch hd-ready!!! (i think it's a tn panel), i've never done a test on color accuracy, but to my eyes it's bright, colorful, and has some dust particles between the lcd and backlight hah,
other cool things: dvd player/burner, full-sized keyboard with numeric keypad, vga, hdmi, 4 usb ports, ethernet, wi-fi haha, and it's hot, i mean so hot, hotter than elsa jean and piper perri combined,
os: windows 10, tails
}
if you read this whole thing i love you, and if you have some time to spare on a sunday you can share your dream rig and the sometimes cruel current one if you dare. you don't have to share them both. i know many will go b.o.b and say "what you're hoping to accomplish, i already did bitch.", that's cool as well, brag about your cool rig!
When my senior told me his program is kill because there aren't enough processing units in our 1080Ti.
Man, your Linux runs way more than 8 processes, and you only have two processes that run with CUDA... -
Just installed Keras, theano, PyTorch and Tensorflow on Windows 10 with GPU and CUDA working...
Took me 2 days to do it on my PC, and then another two days of cryptic compiler errors to do it on my laptop. It takes an hour or so on Linux... But now all of my devices are ready to train some Deep Deep Learning models )
I don't think many people, even here, will understand the pain I had to go through, but I just had to share it somewhere, since I am now overcome with peace and joy.
-
$a = 1;
$b = 2;
echo ($a < $b) ? ($a > $b) ? 'This is totally fine' : ($a < $b) ? 'This is not ok!' : 'Perfect' : 'No problem here';
Why do people do this?!
(And I mean nested ternary ifs, not coding in PHP :P)
-
Fucking fuck Nvidia. Shit suckers and ass lickers can't make a fucking thing properly. Every time I have to compile something involving cuDNN and CUDA I wish I could kill myself first. It's a piece of garbage software that we're stuck with. Fuck you mother fuckin Nvidia.
-
So, a little bit of explanation for my last fuck rant.
I was trying to make some CUDA code faster, specifically eigenvalue decomposition for 12 by 12 matrices. Over a week I made a fast and accurate version and a faster but less accurate version, both faster than CUDA's. Then I was thinking about how to make the faster version more accurate.
Then we had this idea of using power iterations. And honestly I hoped it wouldn't work. But then, fuck me, it worked, which means I had more work to do.
But hey, at least now I'm way faster than CUDA on this.
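For context, power iteration just multiplies a vector by the matrix over and over, normalizing each time, so it converges to the dominant eigenvector; a few rounds of it can polish an eigenvector that a fast-but-sloppy method produced. A minimal single-thread device sketch (the 12x12 row-major layout, names and fixed iteration count are my assumptions, not the actual code):

#define N 12

// Refine an eigenvector estimate v (length N) of A (row-major NxN) in place.
// One matrix per thread; a real kernel would batch many matrices at once.
__device__ void powerIterate(const float* A, float* v, int iters) {
    float tmp[N];
    for (int k = 0; k < iters; ++k) {
        // tmp = A * v
        for (int i = 0; i < N; ++i) {
            float s = 0.0f;
            for (int j = 0; j < N; ++j)
                s += A[i * N + j] * v[j];
            tmp[i] = s;
        }
        // Normalize so the vector neither blows up nor vanishes.
        float norm = 0.0f;
        for (int i = 0; i < N; ++i) norm += tmp[i] * tmp[i];
        norm = sqrtf(norm);
        for (int i = 0; i < N; ++i) v[i] = tmp[i] / norm;
    }
}

The matching eigenvalue then falls out of the Rayleigh quotient v^T A v.
-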
Skills required for the job I'm interviewing for:
C++, SQL, CUDA
Skills I have:
C++
I think it's going fairly well so far.
-
I hate that I'm still forced to use Ubuntu 16.04 and can't upgrade to Bionic Beaver. I tried it in a VM (for testing) and loved the new features and the default GNOME interface, but even after switching to Xorg most of my tools were still not running properly or crashing. Most importantly, there is still no official CUDA support, and installing gcc/g++ 6 and the symlinks is nerve-racking. On top of that, upgrading to 18.04 LTS on my main machine will leave me with broken packages and dependencies.
P.S. For people who are going to reply saying that these issues can be solved: please try updating your work machine and spending hours fixing these issues.
-
Decided to write myself a CUDA wrapper using no third-party library (e.g. managedCUDA)... I'm starting to regret it o.O
-
If you had asked me two months ago, I'd have said building and using a Barnes-Hut tree with CUDA.
Today my answer is working on a fuzzer with LLVM without knowing shit about either C++ or compilers. -
*writes program with variables as arguments*
Runs... Crashes
*removes variables, places the exact same code in the parameter list*
Runs... Succeeds
I don't understand you, my GPU -
Someone please kill me.
I'm sick of myself.
A few days ago, at the prize distribution for a past coding contest, I refused my prize and eventually accepted it after fucking around a bit.
Now, for two days, I've been straight-up wasting my time. My grades are going down exponentially, and I'm engaging neither with CUDA (which I started just a while ago) nor with my studies, and I'm not even getting into competitive coding... Fuck me!!!!!! -
Completed a Python project; it started out of interest but was finished as an academic project.
A smart surveillance system for museums.
Requirements
To run this you need a CUDA-enabled GPU in your computer (highly recommended).
It will also run on computers without a GPU, i.e. on your processor, giving you very poor FPS (around 0.6 to 1 FPS); you can use AWS too.
About the project
You need to collect lots of images of the artifacts or objects for training the model.
Once the training is done you can simply use the model by editing the 'options' in the webcam files and the labels of your object.
Features
It continuously tracks the artifact.
An alarm triggers when the artifact goes missing from the feed.
It marks the location where the artifact was last seen.
It captures the faces of suspects from the feed.
An alarm triggers when the artifact is disturbed from its original position.
Multiple-feed tracking (if the artifact goes missing from feed 1 due to occlusion, a false alarm won't be triggered, since it looks for the artifact in the other feeds).
Project link https://github.com/globefire/...
Demo link
https://youtu.be/I3j_2NcZQds
-
What's your most trusted computer part manufacturer list? Personally, it goes something like this:
CPU: AMD. They're performing at or above Intel's spec, without the weekly IME holes. Sometimes cost a little more, but they last way longer.
GPU: AMD, ASUS, MSI. MSI is usually over-priced but performs a smidge better, ASUS is usually a good middle-ground. Anything with an AMD chipset's usually gonna hold together fairly well, though, and won't require massively-unstable closed-source drivers for decent Linux performance. "but muh cuda" doesn't fly when OpenCL is actually, well, open.
Storage: Seagate, obviously, and SanDisk for cheap SSDs. SanDisk SSDs, especially their cheapest ones, are durable as shit for the price. As for the Seagate pick... is that not self-explanatory?
Mobo: ASUS, ASRock if you need garbage in a pinch. ASUS boards are usually fairly tough, and ASRock is cheap trash for that backup tower that's gone bad in the closet.
PSU: EVGA, accept no substitute. EVGA PSUs are durable as fuck and fairly cheap compared to other "ultra-durable" brands.
-
Am I the only one excited for the RTX 3000 series not because of gaming, but because of all those sweet, fast CUDA cores? Never mind framerates; those should make training models way faster.
-
What dark, heathen God do I have to sacrifice to in order to get CUDA 9.2 installed on a machine? For the love of Christ, Nvidia, just make a functional fucking installer for once in your God-forsaken life; you're a Fortune 500 company. Why does everything have to be so incredibly janky with you?
-
Sigh. I hope one day Linux can be rewritten in something with more sensible package management. C/C++ can just be a real pain more often than not. My case was trying to install CUDA on Ubuntu 16 following the OFFICIAL developer guide. I gave up after trying for an hour. It needed the kernel headers to compile the drivers, and it was just a lot of pain dealing with files being in the wrong place, gcc version mismatches, and tons of other cryptic errors. And this is Ubuntu, which is a pretty mainstream distro.
-
Will the MacBook Pro 15 2018 be any good for machine learning? I know it's got an AMD GPU (omg why?) and most ML frameworks only support CUDA, but is it possible to utilise the AMD GPU somehow when training models / predicting?
-
So if I buy this stuff, word has it that I will have "a computer." Is this enough to get to play with CUDA on a little tiny GPU?
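For what it's worth, any CUDA-capable card is enough to start; a first program can be as small as this sketch (assuming the toolkit is installed, compiled with nvcc hello.cu -o hello):

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread prints its own coordinates.
__global__ void hello() {
    printf("Hello from thread %d of block %d\n", threadIdx.x, blockIdx.x);
}

int main() {
    hello<<<2, 4>>>();          // 2 blocks x 4 threads = 8 greetings
    cudaDeviceSynchronize();    // wait for the GPU before exiting
    return 0;
}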
-
Spent a few hours wrestling with AMD ROCm to get it working. Had to change my kernel a few times, install different versions of the ROCm packages, and in one case selectively upgrade a package. I also need to run my programs with a few shady environment-variable exports to work around some bugs. The whole thing looks shaky right now, nowhere near as simple as CUDA. Also, horrid names (seriously AMD, what's with the 3dgy names).
However, once I got it working, it works pretty well, happily training stuff via tensorflow-rocm with decent performance. This is also probably a good project to contribute to; I'm nowhere close to AMD's engineers at this stuff, but basic bug fixing and quality-of-life stuff are probably within reach.
-
!rant
I have a Surface 3 for home dev stuff. I wanted to get back into C++ for graphics/GPU programming. However, it uses an Intel Graphics 5000 chipset, so I can't do CUDA, and the Intel Media Server can't upgrade the graphics driver because it's a Surface.
What should I do? I would rather not build a system just to play.
-
Spent 2 days installing different versions of nvidia-driver, nvidia-cuda-toolkit, and cudnn.
Disassembled the PC due to some RAM problems and the PC not starting after a freeze.
Looks like sometimes unplugging and replugging various components on your motherboard fixes a PC, lol. Destroyed the PC casing, which collapsed due to me sitting on it.
All of this just to find out the TensorFlow errors and crashes all came from the graphics card overheating after running for some time at 82 Celsius.
Added time.sleep to the Python code, and it looks like it's working, keeping the temperature below 65 Celsius.
Still ~100 times faster than CPU training, so I can live with that.
-
CUDA is a fucking bitch when it comes to configuring projects.
When I created my first CUDA project, it yelled at me that it doesn't support my current gcc version, which ended with me yelling back "OY SHUT UP" and slapping on some flag so it would use clang instead (basically what it advised, but I didn't listen at first). Fine now.
Working on this project in another fresh environment, it now doesn't detect anything and dumbly tries to reload my CMake project with the LATEST installed gcc, when I already told it to use version 8 TWICE: first by setting up a toolchain with compilers pointing to this specific version, and second by passing -DCMAKE_C_COMPILER pointing to it again. Still this stubborn piece of shit tries the latest every time.
The most applauded solution was to use update-alternatives to make gcc point to the version I want CUDA to use. Thank you, genius, but what if I don't want to use a deprecated gcc version with normal C++ projects?
And the cherry on top of this bullshit: I'm fixing this dumb configuration issue (can't stress enough how much I hate this shit) just to be able to fix an EVEN MORE annoying issue with CUDA being a bitch AGAIN and not letting me use std functions where I'm allowed to.
Fuck CUDA. Fuck CMake. Fuck C. Fuck everything.
-
Trying to figure out the right version for Microsoft Visual Studio, Tensorflow, and Nvidia CUDA Toolkit has got me reaaaalllyyyy messed up!!!
Like, what the fuck!? One thing doesn't support the other's current version. It's like I'm playing a "version matching game"; fucking Candy Crush shit!
It's so effing irritating!!! -
2 fucking days and I cannot install CUDA.
FML.
There is a need for some service which, in exchange for money or my soul, installs software without any hassle on my laptop.
-
Getting too attached to the code you wrote, and later realising that you have to erase the whole thing and write it again just because your team lead didn't like it! 😒
#FeelingSick
-
>> Herborist fails
>> Fixes QT errors
>> Still fails. Relink issues with libudev; for weird reasons, it's asking to be linked against librt??? And it's for clock_gettime.
>> Stack Overflow: all about CUDA and OpenCV, which can't be my issue.
>> Some asshat in a GitHub comment section: mind your language when you're talking to maintainers.
>> Me: You mothertucking trucker! 😐😐😐
🖕🖕🖕
-
- Finish "Introduction to algorithms"
- Learn some genetic algorithms
- Get my hands dirty on reinforcement learning
- Learn more about data streaming applications (my current app is still using plain stupid REST to transport images). I don't know, maybe Kafka and RabbitMQ.
- Learn to implement some distributed system prototypes to get better at this topic. There must be more than REST for communication between components.
- Implement a search module for my app with Elasticsearch.
- Employ Redis at some point for background tasks.
- Get my hands dirty with some operating system concepts (interprocess communication, I am looking at you).
- Take a look at Assembly (I don't want to do much with Assembly; maybe just implement one or two programs to know how things work).
- Learn a bit of parallel computing with CUDA to know what the hell Tensorflow is doing with my graphics card.
- Maybe finish my first research paper.
- Pass my electrical engineering exam (I suck at EE).
-
Bought a fucking Nvidia GPU to test the speed of some fucking machine learning models that generate speech.
6 hours wasted already on installing fucking dependencies:
CUDA, fucking tensorflow-gpu, Bazel and other shit.
Fucking resetting my password to download a deb with cuDNN.
Really??????? Fucking emails are not delivered to my fucking mailbox.
After a mass click of "send email" and multiple account bans and unbans, I figured out I should log in to the Nvidia website and then grant access to fucking developer every time I want to log in there. Fuck this shit.
Uninstalling everything now, looking for fucking compatible versions between software.
10 years in this business and fucking installation of dependencies is still the most difficult part.
Fucking corporate business and their shitty installation instructions that fuck up people's lives and switch them to the cloud.
Same was with fucking Kubernetes.
Fucking software dependency hell.
It's worse than ever before.
Fuck...
-
Stupid piece of shit Nvidia CUDA development kit installer.
"Yeah, you can extract and install the files wherever you want, but I'll delete BOTH directories if you do so."
Fucking 3 times I tried to install this shit until I realized this. -
I hate the feeling of realizing the problem you face has not been posted on Stack Overflow, or any forum for that matter.
However, when you manage to solve it, you feel like a badass.
Thanks, Microsoft/Nvidia, for not accounting for nested parentheses in your batch file -
!rant
I have a 7-year-old laptop which, for the last 4 years, has been an Ubuntu single boot.
It was previously on Windows Vista, as it shipped with it. Needless to say, after a couple of years the performance was terrible, so I never thought twice about reinstalling Windows.
Now that I need to write C# along with CUDA in VS (2013 Express is the last version that supports CUDA 6.5, the last CUDA version for my old GT330M), I installed Windows 10.
I have to admit, it's going pretty well. For being a VS machine, it's coming along very well :) -
OpenCV, OpenFace and Caffe are supported in Arch Linux, but CUDA is not supported!! :| WTF!! How can these packages be supported in Arch but not CUDA!
-
Hi,
I want to install Linux beside Windows on my new computer (i7-8700K, GTX 1080). I use Debian with i3 on my laptop for work and want a similar development environment at home. Does anyone have advice for choosing between elementaryOS and Arch, or should I just stick with Debian? i3-gaps will be the WM; I just can't use another one ;)
Does one distro have better support for Nvidia cards? In fact, I would like to try CUDA.
I don't have other requirements; mostly webdev with Python in the backend, and a little C++ game with SDL. These should not be a problem in any distro.
Thanks for any advice and pros/cons! -
Imma guess the answer is no, but has anyone ever heard of an external CUDA-enabled GPU you can plug into your laptop? And maybe a separate power jack lol
-
What actual pain is: trying to install CUDA on Windows (a pain in itself), and after 3 hours you realize your lappie doesn't support it (GeForce 820M)! There goes my dream of Theano, PyCUDA and tensorflow-gpu...
-
I'm fucking tired of my computer having a random 2-second latency on any basic action and being slow as fuck regardless of its powerful processor, SSD and 32GB of RAM. Music via Bluetooth is basically unusable, since every few seconds the music stops for 0.2s and then plays again. I installed this system (openSUSE Tumbleweed) in February this year, and it's just sad that I have to reinstall again (any ideas for a distro?).
I made the dumb mistake of buying a CPU without integrated graphics, and this resulted in having to buy a GPU. So I got myself an Nvidia (another mistake), since I thought I would be using CUDA at the university. Turns out CUDA cannot be installed, for some retarded reason.
With the Nvidia GPU, the screens on my two monitors swap every time I use an HDMI switch to use the other computer. On an AMD GPU this problem does not exist. AMD GPU Pro drivers are impossible to install, though. Computers barely fucking work, change my mind. Shit is breaking all the time. Everything is so half-assed.
The music player that I use sometimes swaps its UI with whatever was below it, for example the desktop background, and I need to kill the process and start it again to use the program. WTF.
Bluetooth seems to hate me. I check the Bluetooth connected devices on my computer; it says headphones connected. BULLSHIT. The headphones are fucking turned OFF. How the fuck can they be connected, you dumbass motherfucker computer? So I turn on the headphones. And I cannot connect them, since the system thinks that they are already connected. So I have to unpair them and pair them again. WTF. Who fucking invents this bullshit?
Let's say I have headphones connected to the computer. I want to connect them to my phone. I click connect in the phone settings. Nothing happens. Bullshit non-telling error: "could not connect". So I have to unpair from the computer to pair to the phone, which takes fucking minutes, because reasons. VERY fucking convenient technology.
The stupid Bluetooth headphones have a loud EARRAPE voice when turning them on: "POWER ON!!! PAIRING", "CONNECTED", "DISCONNECT". Its loudness cannot be modified. The 3 navigation buttons are fucking unrecognizable, so I always take a few seconds to make sure I click the correct button.
The fucking keyboard sometimes forgets that I remapped the Esc key to Caps Lock, and then both keys don't work, so I need to reconnect the keyboard cable. At least it's not fucking Bluetooth.
The only reason HDMI switches exist is because monitors' navigation menus have terrible UIs and/or infrared-activated, non-mechanical buttons.
Imagine a world where monitors have a button for each of their inputs. I click the HDMI button, and it switches its input to HDMI. I click the DisplayPort button, and it switches to DisplayPort. But nooo, you have to go through the OSD menu.
My ~ directory has hundreds of files that I never put there. Doesn't feel like home, more like a crackhead crib.
My other laptop (also Tumbleweed): I click the hibernate option and it shuts down. WTF. Or sometimes I open the lid and the screen is black, and when I press a key nothing happens, so I have to hold the power button and restart.
We've had computers for 20+ years and they are still slow, unreliable and barely working.
Is there a cure? I'm starting to think the reason everything works so shittily and unreliably is that the foundations are rotten. The systems that we use are built with C, riddled with cryptic abbreviated code, undefined behavior and security vulnerabilities. The more C programs I've written, the more convinced I am that we should have abandoned it for something better long ago. Why haven't we? And honestly, what would be better? Everything fucking sucks. Rust seems to be the light in the tunnel, but I don't know if this is only hype or if it's really better. I'm sure it can't be worse than C or C++. Either we do something about the foundations or we're doomed.
Has anyone tried to train a neural network (CNN) with CUDA enabled on a laptop with an Nvidia MX150?
How was it? And what about one with a 1050 Ti? Is the difference huge?
Setting up an Expo / React Native project is a far worse feeling than installing GPU drivers plus the CUDA toolkit for PyTorch.
I have no idea how React devs deal with this shit. This is so horrible. Wtf is Babel? Wtf is Expo? Wtf is an SDK? Wtf is EAS????????????????????
I find it worrying that I work with C++, CUDA and Golang, mainly because I'm afraid those languages might not be common in the job market... but honestly I love what I do, so I don't know whether to trust my abilities for the future, when I no longer work at my current company. Should I keep worrying?
-
Trying to...
- Visual Studio 2017 released in 2016 with internal version number 15.9.38 with MSVC v14.0
- CUDA 8.0 with NVidia Nsight VS integration 5.3
- GTX 1080 GPU with compute capability 6.1
- Windows 10 SDK with 10.0.17763.0
Will it work? I don't fucking know, because your versioning and documentation SUCK!
For some time now it has become a number-one mission for basically every tech company to rebrand, re-version and whatnot their products. It's obviously done with the purpose of confusing customers, leading them on to buy or work with the wrong item, which of course leads to another purchase and hours of frustration and wasted time. This is not how business should be conducted, you dumbasses! -
Does anyone know the best way of doing GPU stuff in C? I have a CUDA-enabled GTX 1050, but the CUDA drivers screw everything up on my machine for some reason.
Is there a better way to do this without the CUDA drivers?
A wandb sweep runs fine as an interactive job but gives me a CUDA illegal-memory-access error as a Slurm job. Spent the last 15 hours solving it and still can't enable multi-GPU support. FML