Search - "gpus"
-
Well that was a fun call I just had.
Owner of the company I freelance for: Hey I forgot to tell you something.
Me: What?
Owner: I bought you a plane ticket to fly to Puerto Rico. You're heading out in a month.
Me: What?! Why????
Owner: To set up cryptocurrency mining rigs.
Me: Just because I know a bit about mining doesn't make me an expert.
Owner: We have $80k in our pocket in investments from outside parties, with another $20-30k on the way. You get 20% of the coins mined for as long as you manage it.
Me: So we're gonna set up several rigs, utilizing a B250 motherboard, G4400 CPU, 8GB of RAM and 10 GPUs each. We'll have AMD rigs for Monero and Nvidia rigs for Ethereum and others. We'll use Awesome Miner for profitability switching on the fly. Each machine is probably going to be around $5k, possibly $4k with bulk discounts. We'll need at least 1500W per rig for power, 2000W to be safe, so we need to make sure we have ample power delivery to the mining warehouse.
Owner: I thought you weren't an expert?
Me: I'm not, but when there's money involved my motivation to Google goes into overdrive.
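(Side note: the back-of-the-envelope math from that call, as a quick Python sketch. The per-rig cost and wattage are the rant's own estimates; the rest is plain arithmetic.)

    budget = 80_000            # confirmed outside investment, USD
    cost_per_rig = 5_000       # B250 board, G4400, 8GB RAM, 10 GPUs
    watts_per_rig = 2_000      # 1500W typical, 2000W to be safe

    rigs = budget // cost_per_rig          # 16 rigs
    total_watts = rigs * watts_per_rig     # 32000W of power delivery needed
    print(f"{rigs} rigs, ~{total_watts / 1000:.0f} kW at the wall")
-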
Users: Are you going to make your products cheaper and put better GPUs and CPUs in your desktops and laptops?
Apple:
-
Most of the code I write nowadays is for GPUs using a dialect of C. Anyways, due to the hardware of GPUs there is no convenient debugger, and you can't just print to the console either.
Most bugs are solved staring at the code and using pen and paper.
I guess one could call that a quirk.
-
Prof: So yeah this is going to be difficult. We're going to make the scalable math library. Then we have to make a functional finite elements library using that. Then make a multiphysics engine using that library. This could easily take your entire PhD. Are you prepared for that?
Me: May I show you something?
Prof: Sure, sure.
Me, showing him: We can use moose to code in the multiphysics. It's built atop libmesh for the finite elements. Which can be built with a petsc backend. Which we can run on GPUs and CPUs, up to 200k cores. All of this has been done for us. This project will, at worst, take a couple months.
Prof: ...
Guys, libraries. Fucking. Libraries. Holy fucking shit.
-
We are devs right?
We have CPUs and GPUs lying around right?
We are still alive... right? 🤔
How about we do our part and utilise our PCs for helping with COVID-19 research.
I've stumbled across this little tool that not only keeps me warm at night but helps researchers with several diseases.
https://foldingathome.org/iamoneina...
It's like a bitcoin miner but for research purposes; no, it's not a dodgy bitcoin miner.
Oh and feel free to keep yourself anonymous as there are stats that will identify your username - when they work.
There are installers for Windows, Mac, and Linux distros so everyone can get involved.
-
Coolest project I've worked on. Artistic.af machine learning + Instagram makes your images artistic AF. Did it as a side project to get up to speed on NN implementations on GPUs
-
Casually debugging some CUDA code today. Something's not working so I add a breakpoint in the suspicious kernel. For some reason I set the display GPU as the active device from my code *GENIUS* (I have two GPUs installed, one for compute, one for the monitors).
Starts CUDA debugging... Control flow reached the kernel and eventually the breakpoint. Suddenly the whole system freezes. Mouse doesn't move, keyboard seems dead. I realize I have unsaved code on the open text editor😲 *panic*. Keyboard shortcut to stop debugging doesn't work *panic^2*. My colleague says I have to hard reset the machine *panic^3*. I don't remember the last time I saved *panic^4*.
I take a deep breath. I reset. *sidenote: WINDOWS DECIDED TO FUCKING UPDATE ON REBOOT* Once I log in, 50% of my code was lost. I didn't save 😢
Fuck you Nvidia 😢
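(The takeaway from that mess, as a minimal sketch. This isn't the CUDA C++ setup from the rant, just the same idea in PyTorch: pin your work to the dedicated compute card so a stuck kernel or breakpoint can't freeze the GPU driving your monitors. The device indices below are assumptions; in CUDA C++ the equivalent call is cudaSetDevice().)

    import torch

    # Hypothetical two-GPU box like the one above: assume device 0 drives the
    # monitors and device 1 is the dedicated compute card.
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))

    compute = torch.device("cuda:1")     # keep kernels off the display GPU
    x = torch.randn(1024, 1024, device=compute)
    y = x @ x                            # runs on the compute card; the desktop keeps rendering
-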
My neural networks journey so far:
Look up tutorials -> see that Python is a popular tool for ML -> install Python -> pip install scipy -> breaks with some weird error involving BLAS library code -> spend half an hour fixing it -> try installing Theano -> breaks because my USERNAME HAS A SPACE IN IT LIKE SERIOUSLY? WTF -> make new account without a space in the name -> repeat till Theano -> run tests, found out that I didn't install CUDA support -> scrap the install and redo with CUDA support -> CUDA libraries take forever to download on shitty internet -> run tests -> breaks with some weird Theano compiler error -> go crying to friend -> friend tells me about Anaconda -> scrap the previous install and download Anaconda over shitty connection -> mess up conda environments because noobishness -> scrap, retry -> YESS I FINALLY GOT IT WORKING TIME TO DO SOME LEARNI-crap it's 4 in the morning already.
I realize that I'm a Python noob (and also, uni computers with GPUs have preconfigured Windows installed only, no Linux), but is installing Python libraries always such a pain? Am I doing something wrong? Installing via Anaconda felt like cheating, tbh.
-
Follow-up to my previous story: https://devrant.com/rants/1969484/...
If this seems too long to read, skip to the parts that interest you.
~ Background ~
Maybe you know TeamSpeak, it's basically a program to talk with other people on servers. In TeamSpeak you can generate identities, every identity has a security level. On your server you can set a minimum security level you need to connect. Upgrading the security level takes longer as the level goes up.
~ Technical background ~
The security level is computed by doing this:
SHA1(public_key + offset)
Where public_key is your public key in Base64 and offset is an 8 Byte unsigned long. Offset is incremented and the whole thing is hashed again. The security level comes from the amount of Zero-Bits at the beginning of the resulting hash.
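(For reference, a minimal CPU-side sketch of that check in Python. One assumption: the rant doesn't spell out how the offset is serialized, so here it's simply appended as an ASCII decimal string.)

    import hashlib

    def security_level(public_key_b64: str, offset: int) -> int:
        """Count the leading zero bits of SHA1(public_key + offset)."""
        digest = hashlib.sha1((public_key_b64 + str(offset)).encode("ascii")).digest()
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            bits += 8 - byte.bit_length()   # leading zeros inside the first non-zero byte
            break
        return bits

    def find_offset(public_key_b64: str, target_level: int, start: int = 0) -> int:
        """Brute-force offsets until the hash has enough leading zero bits."""
        offset = start
        while security_level(public_key_b64, offset) < target_level:
            offset += 1
        return offset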
My plan was to use my GPU to do this, because I heard GPUs are good at hashing. And now, I got it to work.
~ How I did it ~
I am using a start offset of 0, create 255 Threads on my GPU (apparently more are not possible) and let them compute those hashes. Then I increment the offset in every thread by 255. The GPU also does the job of counting the Zero-Bits, when there are more than 30 Zero-Bits I print the amount plus the offset to the console.
~ The speed ~
Well, speed was the reason I started this. It's faster than my CPU for sure. It takes about 2 minutes and 40 seconds to compute 2.55 Billion hashes which comes down to ~16 Million hashes per second.
Is this speed an expected result, is it slow or fast? I don't know, but for my needs, it is fucking fast!
~ What I learned from this ~
I come from a Java background and just recently started C/C++/C#. Which means this was a pretty hard challenge, since OpenCL uses C99 (I think?). CUDA sadly didn't work on my machine because I have an unsupported GPU (NVIDIA GeForce GTX 1050 Ti). I learned not to execute an endless loop on my GPU, and so much more about C in general. Though it was small, it was an amazing project.
-
Recently purchased a few edge AI devices at work, and management sent some people to box up and get rid of our dedicated GPUs, since we "just got new AI computers." Now futilely trying to explain to management that not all computers are exactly the same...
-
Sometimes I just don't know what to say anymore
I'm working on my engine and I really wanna push high triangle counts. I'm doing a pretty cool technique called visibility rendering and it's great because it kind of balances out some known causes of bad performance on GPUs (namely that pixels are always rasterized in quads, which is especially bad for small triangles)
So then I come across this post https://tellusim.com/compute-raster... which shows some fantastic results and just for the fun of it I implement it. Like not optimized or anything just a quick and dirty toy demo to see what sort of performance I can get
... I just don't know what to say. Using actual hardware accelerated rasterization, which GPUs are literally designed to be good at, I render about 37 million triangles in 3.6 ms. Eh, fine but not great. Then I implement this guy's unoptimized(!) software rasterizer and I render the same scene in 0.5 ms?!
IT'S LITERALLY A COMPUTE SHADER. I rasterize the triangles manually IN SOFTWARE and write them out with 64-bit atomic image stores. HOW IS THIS FASTER THAN ACTUAL HARDWARE!???
AND BY LIKE AN ORDER OF MAGNITUDE AT THAT???
Like I even tried doing some optimizations like backface cone culling on the meshlets, but doing that makes it slower. HOW. I'm rendering 37 million triangles without ANY fancy tricks. No hi-z depth culling which a GPU would normally do. No backface culling which a GPU would normally do. Not even damn clipping of triangles. I render ALL of them ALL the time. At 0.5 ms
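(A toy sketch of the trick above, because it's neat: pack the depth into the high 32 bits of a 64-bit value and the triangle id into the low 32 bits, so a single integer min per pixel both depth-tests and stores the winner. On the GPU that min is the 64-bit atomic image store; here numpy's minimum.at stands in for it. The resolution and bit split are illustrative assumptions.)

    import numpy as np

    W, H = 4, 4
    framebuffer = np.full(W * H, np.iinfo(np.uint64).max, dtype=np.uint64)

    def pack(depth: float, tri_id: int) -> np.uint64:
        # depth in [0, 1] -> high 32 bits, triangle id -> low 32 bits
        d = np.uint64(int(depth * 0xFFFFFFFF) & 0xFFFFFFFF)
        return (d << np.uint64(32)) | np.uint64(tri_id)

    # three rasterized fragments landing on pixel 5 at different depths
    pixels = np.array([5, 5, 5])
    values = np.array([pack(0.7, 1), pack(0.2, 2), pack(0.9, 3)], dtype=np.uint64)

    np.minimum.at(framebuffer, pixels, values)   # the "atomic min"
    winner = int(framebuffer[5]) & 0xFFFFFFFF    # low 32 bits = closest triangle id
    print(winner)                                # prints 2
-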
Writing an efficient, modern renderer is truly an exercise of patience. You have a good idea? Hah, fuck you, GPUs don't support that. Okay but what if I try to use this advanced feature? Eh, probably not going to support exactly what you would like to do. Okay fuck it I'm gonna use the most obscure features possible. Congratulations, it doesn't work even on the niche hardware that supports that extension
If I sound jaded, ya better believe I f*cking am! I cannot wait for more graphics cards to support features like mesh shaders so we can finally compute shader all the things and do things the way we want to god dammit
-
Tl;dr: Linux on Ryzen is a pain at the moment.
Now for the long part: Our student council got new computers because the old ones were slow as hell. As one of the admins, the others and I together decided that Ryzen would be a good option, because they are not that expensive and we wouldn't have to buy GPUs. (Wrong decision, it turns out.) We settled on the Ryzen 3 2200G and bought three systems to replace the old ones.
We meet Saturday morning and build the systems. All was fine and we were happy. Then we tried to install Ubuntu via preseeded netboot, which seemed to work fine at first. Then we started having weird screen issues and couldn't proceed with the installation. (See image.) We then grumpily decided to just install them all one by one, flashed two USBs and started installing. On two systems the installation worked and we installed our packages; we weren't so lucky with the third one. It would crash on us all the time, even in BIOS. While that was going on we tried to set the other two up. Turns out those two were also crashing, just not as frequently as the other one. So we start to google and find people saying that kernel 4.19 kinda fixes it. We install it on the two working machines and the crashes get less frequent but are still there. At that point it was midnight and we went home.
Sunday morning: we reseated the CPU on the third system and it seems to be better now (it installed on the second try) and we were able to change the kernel. Yay. Now all three are in a state where they will sometimes randomly reset. :/ And we don't know what to try anymore... Any suggestions?
-
First of all, a great channel to follow and where all this is from: https://youtube.com/watch/...
It listed a lot of open source news I missed myself and I'm sure others did too. For those that are too lazy to watch the video or open the description, I've stripped away the links and the "X version got released" bits, just to give an idea of what he covers.
------------------
GNOME and KDE announced they would work together on building better Linux desktops at Linux App Summit.
XRDesktop, a VR enabled Linux desktop, will allow you to use your Linux programs while wearing your VR headset.
Responding to the european commission's fines, Google announced that it would allow other search engines to be present at Android's setup.
Manjaro will allow users to pick between FreeOffice, Libre Office, or no office suite at all.
The Igalia team announced that they are working to make Pitivi compatible with Final Cut Pro X
Microsoft might be bringing its Teams software to Linux.
Martin Wimpress from the Canonical SnapCraft team gave an interview to TechRepublic, on Snaps
A discussion took place on how to improve Linux desktop performance in low ram scenarios.
A KDE vulnerability was disclosed publicly before the developers were notified.
Nvidia has open sourced a bunch of documentation for its GPUs
Linux Journal announced they would cease their publication.
Kdenlive 19.08 has been released, bringing 3 point editing and a bunch of keyboard shortcuts
The Linux on DeX project now allows running Ubuntu 16.04 LTS on a Samsung smartphone.
According to protondb, we passed the 6000 playable games mark, out of 9 thousand for which users have created a report
GNOME Feeds has been released on flathub, a simple app to read RSS feeds on GNOME
The enlightenment desktop released its first version in 2 years, enlightenment 0.23.0.
Linux celebrated its 28th birthday
Microsoft announced that they would bring exFAT support to the Linux kernel.
Thunderbird 68 was released with an interface redesign.
Collabora has published an update on their work on virglrenderer, a solution to emulate a GPU while using a virtual machine through QEMU.
-
Asus announced their AM5 board X670E Extreme. The E already stands for Extreme, which makes it an Extreme Extreme.
My feeling is that AM5 will be extremely - expensive!
That's because the AM5 LGA socket moves cost from the CPU (formerly PGA) to the mobo, but AMD certainly won't drop CPU prices, rather the opposite, and then there's also DDR5 as a cost driver, not to mention tons of PCIe 5.0 for which we don't even have AIBs.
On the upside, that would finally end the days of GPUs causing a disproportionately large share of the system cost - if only because the rest gets more expensive.
-
There's a local server with lots of processing power and plenty of GPUs and tons of RAM going critically underused. Why? There's someone who's running a process using a relational database which literally performs 40 to 60 percent disk I/O for months. It's so bad that if you run "ls" it may take a minute to run (and only if you are lucky enough to SSH in).
-
So I am back home for a week without my laptop and my phone was low on power, so I finally gave up and decided to use an old PC we had.
I was gonna download some anime which I did but as I was waiting I started just looking around...
1. The drives are huge, 3 HDs with 400GB each.... vs my current 128 GB SSD
2. I found an old stash of anime (2013-4), several series... that I had actually not watched
3. The machine is known to be slow but after using it for a while to install VLC and JDownloader... It's actually OK...
4. Video can playback at 3x speed... No lag... Apparently I forgot the onboard GPU failed and my dad replaced it with a cheap (I think) GFX card that has like 1GB RAM/processing power...
-
My cycle: Windows - Ubuntu - Mint - Fedora - elementary - Kubuntu - Apricity OS - Debian - Windows.
Why? Because that damn Linux has fucking problems with hybrid Intel/AMD GPUs.
-
My new setup.
The computer on the left is the new and first computer I built.
I'm looking to invest in an extra screen that tilts vertically and 2 GPUs.
-
Stories like the one I'm about to tell you are just another reason why people hate Windows. I know I usually preach 'Don't hate everything' and shit, but this is a real big fucking deal when it hits your desktop for no reason.
Now, onto the actual story...
Background: Playing with my Oculus, fixing issues like forgetting to use USB3 and stuff. I learned about an issue with Nvidia GPUs, where in Windows, they can only support 4 simultaneous displays per GPU. I only have the one GPU in my system, Nova, so I have to unplug a monitor to get Oculus and its virtual window thingy working. Alright, a friend gave me the idea of using my old GPU to drive one of my lesser-used monitors, my right one. Great idea I thought, I'll install it a bit later.
A bit later...
I plug the GPU in (after 3 tries of missing the PCI-E slot, fuckers) and for some reason I'm getting boot issues. It's booting to the wrong drive, sometimes it'll not even bother TRYING to boot, suddenly one of my hard drives isn't even being recognized in BIOS, fuck. Alright, is the GPU at least being recognized? Shit, it isn't. FUCKFUCKFUCK.
Oh wait. I just forgot the power cable. Duh. Plug that in, same issues. Alright, now I have no idea. Try desperately to boot, but it just won't. I start getting boot error 0xc000000f. Critical device not found. Alrighty then. Fuck my life, eh?
Remove the GPU, look around a bit while frantically trying to boot the system, and I notice an oddly bent SATA cable. I look at it and the bastard is FRAYED AT THE END! Fuck, that's my main SSD! I finally replace the SATA cable and boot, still the same error... Boot into a recovery environment, and guess what?
Windows has decided to change my boot partition, ya know, the FUCKING C: DRIVE, from NTFS to RAW format, stripping it of formatting! What the actual fuck Microsoft? You just took a shit on yourself while having a seizure on the fucking MOON! Fine, fuck you, I have a recovery USB! Oh, shit, that won't boot... I have an old installation! Boot ITS recovery, try desperately to find a fix online... CHKDSK C: /F... alright, repairing, awesome! Repaired, I can see data, but not boot. So now I'm at the point where I'm waiting for a USB installer to be created over USB 2.0. Wheeeeeeeeee. FML.
THESE are the times I usually hate Windows a lot. And I do. But it gets MOST of my work done. Except when it does this.
I'm already pissed, so don't go into the comments and just hate on Windows completely. Just a little. The main post is for the main hate. Deal with it. And I know that someone is going to come at me with "Ohhhhh, you need FUCKIN LIIIIIIINUUUUUUUXXXXXXXX!" Want to know my response to that?
No.
-
Argh! (I feel like I start a fair amount of my rants with a shout of frustration)
Tl;Dr How long do we need to wait for a new version of xorg!?
I've recently discovered that Nvidia driver 435.17 (for Linux of course) supports PRIME GPU offloading, which - for the unfamiliar - is where you're able to render only specific things on a laptop's discrete GPU (vs. all or nothing). This makes it significantly easier (and more power efficient) to use the GPU in practice.
There used to be something called bumblebee (which was actually more power efficient), but it became so slow that one could actually get better performance out of Intel's integrated GPU than that of the Nvidia GPU.
This feature is also already included in the nouveau graphics driver, but (at least to my understanding) it doesn't have very good (or any) support for Turing GPUs, so here I am.
Now, being very excited for this feature, I wanted to use it. I have Arch, so I installed the nvidia-beta drivers, and compiled xorg-server from master, because there are certain commits that are necessary to make use of this feature.
But after following the Nvidia instructions, it doesn't work. Oops I realize, xorg probably didn't pick up the Nvidia card, let's restart xorg. And boom! Xorg doesn't boot, because obviously the modesetting driver isn't meant for the Nvidia card, it's meant for the Intel one, but xorg is too stupid for that...
So here I am back to using optimus-manager and the ordinary versions of Nvidia and xorg because of some crap...
If you have some (good) idea of what to do to make it work, I'm open to hearing it.
-
Fuck you, Nvidia. Uhm no, this time not from Torvalds, but EVGA: they're fed up with Nvidia's antics towards their AIB partners. No 4000 series EVGA GPUs anymore.
Source: https://forums.evga.com/Official-Me...
-
Why is it that the tech Youtubers of this world (and tech reviewers in general) tend to completely skip development as a use case, and instead (if they do ever move off gaming) focus on things like Rendering & Modelling / CAD work? I'm sure there's *way* more devs in the world than CAD guys, surely?!
And if they *do* give it the light of day, it's always a quick benchmark based on "Firefox compile time", "Linux kernel compile time" or similar. Dude, it's 2020. Much as some would like to believe otherwise, most guys stopped compiling swathes of heavy C & C++ as part of their normal workflow over a decade ago.
Real-world tests I want to know about are things like Docker performance, common IDE startup performance, compile performance of different-sized applications on a bunch of langs like Kotlin, C#, Java, Clojure - or Node.js performance, TensorFlow performance on Nvidia's vs AMD's latest GPUs, etc. I care about how many IntelliJ instances & VMs I can have open way more than how many Chrome tabs I can forget to close.
But noooo - forget that, here's how fast Blender can render a BMW! 😬
-
1billiontrillionshigilimillion$/day
Free food & drinks.
Nice office
Supermega computer with a 10009186372891293 GPUs and shit, 6 screens
Working on cutting edge technologies with world class experts.
-
First day in school after the holidays today. But I've got a bad feeling.
So basically I've been coding until 3am last night. I became tired and shut down my PC. I then pulled out my USB when suddenly a spark came out of the USB port. I had a really bad feeling about that. All of a sudden my PC started booting. The fans started spinning and the GPU's LED lit up. Then it was all off again. Aaand it turned on, and off, and on..... I just pulled the plug and went to bed.
Now I'm sitting in school and can't think about anything else but what could have happened to my PC :(
-
## building my own router
I hoped things would go more smoothly :)
Anyway, my new miniPC easily accepted CentOS 8 - no fuss here. And I've got to say - I love CentOS 8 so far! The shell has amazing nifty tricks, the UI (GNOME 3) is also snappy, video/audio/ethernet... everything works.
What I did NOT expect was the hardware being off. Well okay, the price was low - it was obvious smth is not right. But still.. I decided to build my own router so that I could swap the wifi card whenever I want. So that I could run my own network services in there. Turns out - the card swapping is not as easy as one might think.
I got the AX200 WiFi6 card for that very purpose. But once plugged in, the OS can only see its Bluetooth module. Weird... What's even weirder is that even though the card is PCIe, the OS uses the btusb module to talk to that device. What? USB?? emm.. What??
And there it is. After opening it up again I noticed that the mPCIe area is marked with a label: "USB WIFI / WWAN". USB? Does that mean this PCIe slot is wired into the USB bus? Not impossible I guess.
Googling for a "pcie wifi over usb" or smth like that brought me to one reddit (I think?) where someone wanted to build a DIY wifi mPCIe -> USB adapter, and someone else advised him that (for some reason) at best he could only get bluetooth working (hey! just like me!). It's got to do smth with pcie channels and USB being too weak to handle all that load, or smth.. IDK, I'm not a HW guy.
Well that sucks then! I have an mPCIe slot that does not work as PCIe. Shit! So I guess the best I could do is to plug back in the same wifi card that came with the device. It smells like 2003 - supports only the g protocol. Fine, let's try that. Maybe I'll find a way to work around this mPCIe limitation later on (USB adapter or smth... except there are no USB WiFi6 dongles yet :( ). So I plug it back in and start turning it into a router. Disable NetworkManager, configure static NICs' settings, install dhcpd, hostapd, bind and others. Looks like all is done! Now it's time to start it all. systemctl start hostapd --> FAILED. wtf? journalctl says it could not initialize a driver. umm okay? Why? Forums say I should check with airodump-ng and kill whatever's using that device. Fine. airodump-ng reveals avahi and wpa_suppl are still using it. kill, kill, GOTTA KILL 'EM ALL!! Starting hostapd again -- same shit... wtf?
iw list
My gawd... That shitty network card does not even support AP mode :( I mean.. My USB wifi dongle for 2€ supports 2x more modes, is faster, has better range and is easier to work with than this old tart!
Yeah. That was an interesting day. When enfironment engineers break my testing environments at work I'm glad I have where to spend my time now.
BTW any ideas how to bypass this mPCIe nonsense? Come on, there are USB GPUs out there.. Why can't they make a USB (or dual-USB if they really need to) mPCIe adapter?
-
I’m living the dream. Lightweight, powerful, beautiful gaming laptops are a thing (have been for a while) and I have the pleasure of owning one.
I remember one of my college peers having a BRICK Alienware laptop in 2010. Don’t get me wrong, It was awesome at the time and I was super jealous, but it was insanely loud, heavy af, and as thick as a calculus textbook!
But now, with amazing RTX GPUs and TB SSDs, I can game on max settings, benchmark fairly well, and take it with me when I travel for work alongside my work laptop, all in the same bag, without breaking my back.
🤘🏼 I love my Asus Zephyrus 🤘🏼
The fan is still hella loud though 😆
Maybe by the mid or late 2020s we will have a revolutionary cooling system that would rid us of our dependence on fans for cooling. Just dreaming out loud here. It sure would be great to not have to clean the dust out.
-
FUCK ME IN MY INDICES.
FUCK THE GPUS IN THEIR INDICES.
I mean... I understand (roughly) why the meshes are sent to the GPU in this form, but at the same time...
...there's a reason why first thing I did when I was coding my procedural geometry generation library, was abstracting away all of that stuff...
...sadly, as with many useful things, when I was looking for that lib at the start of this contract, I couldn't find it. and I was like "doesn't matter, this is a simple thing, using the library would be just a lazy overkill anyway".
well, fuck.
two hours of playing around with two fucking triangles, trying to figure out which indexes are pointing to the correct vertices in a list containing FOUR outline paths.
(lower inner, upper inner, lower outer, upper outer, exactly in this order).
i mean, yeah, it's actually pretty straightforward stuff... for someone not as dumb as me =D
you just have two offsets, one that jumps you to start of the upper path, another that jumps you to the start of the outer path, then it's just
0 + upOffset to get the vertex extruded upwards from the zeroth of the inner path, or
0 + outOffset to get the zeroth from the outer outline, or
0 + outOffset + upOffset, to get the one extruded from zeroth outer vertex...
and so on.
simple stuff, then you just replace the zero with the loop control var, put them in the right order, and voilà! walls!
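(for what it's worth, a tiny Python sketch of that indexing scheme. the [lower inner | upper inner | lower outer | upper outer] layout and the two offsets come straight from the description above; the winding order is just an illustrative assumption)

    def wall_indices(n: int, base: int, up: int) -> list[int]:
        # Triangles for the wall between a lower ring starting at `base` and its
        # extruded copy at `base + up`, wrapping the last point back to the first.
        tris = []
        for i in range(n):
            j = (i + 1) % n
            a, b = base + i, base + j
            tris += [a, b, a + up]           # first triangle of the quad
            tris += [b, b + up, a + up]      # second triangle of the quad
        return tris

    # layout: [lower inner | upper inner | lower outer | upper outer]
    n = 8                                    # points per outline path, just an example
    up_offset, out_offset = n, 2 * n         # upOffset, outOffset
    inner_walls = wall_indices(n, 0, up_offset)
    outer_walls = wall_indices(n, out_offset, up_offset)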
except... whatever, why am I describing in such detail, not necessary, you're not my rubber duck =D
in short, figuring out which fuckin vertex is which, when the list contains ...well, any number of points, and you need to plug the gap between last and first points of the paths, where you need to wrap around the list...
...has proven to be surprisingly hard for me.
funny how much I love doing these things with meshes, despite how bad I am at doing them, which makes me hate doing them despite loving it =D
-
If Apple didn't jack up their prices and offered decent dedicated GPUs they'd actually be fuckin great... the greed is just too damn high with them
P.S. I might have to get a MacBook because of an iOS project, but I really want something that can game at least a bit... at the end of the day it's not my decision, I'll get what they give me.
-
Imagine buying a 16 tflops GPU from some manufacturer, yet the computing power is somehow shared, and when they need to produce some more GPUs, they just take some part of your 16 tflops, as well as from all the other people who bought these GPUs, so suddenly you have 15.6 tflops, yet money is never sent back to you, and you have no control over such decisions.
Oh, you tell me it's "robbery"? Yet the Federal Reserve does the exact same thing by creating money out of thin air, basically taking their cut off of every person's monetary value.
For your very real work, companies pay you with something that can be created out of thin air, and then that something loses its value when new tokens are created.
This is slavery. From now on, I'm going to call dollars "slave tickets".
-
so I asked a question on the Linus Tech Tips Discord channel that wasn't GPU related and the whole chat went quiet.
I mean, it's a tech channel; are GPUs the only thing boys talk about now?
the new dicksize contest? lol
-
Gamers Nexus has a really unique benchmark for the new AMD GPUs where AMD actually manages to pull ahead of Nvidia:
-
I find GPT-3/ChatGPT an interesting development, but at the same time I'm afraid that the spread of deep learning is going to take away further power from individuals and small companies and put it in the hands of big tech companies: the only ones who can afford to hoard countless GPUs/TPUs and exabytes of data to train top performing AIs.
-
0. A good comfortable chair, one that does not hurt my fat ass and back
1. GPUs, lots of them so that I can train my models faster
2. Patience to endure the stupidity of people
-
Can anyone recommend a book (or other quality resource) on tensor programming that isn't focused on all this ML crap?
I'd like to use GPUs for some simulation modelling, so I'm interested in vector and matrix manipulation.
-
So the tests for the AMD RX 7000 GPUs are out. Business as usual: superb for non-RT gaming given the price, crap at everything else - including energy efficiency in FPS/W.
Pro tip to the AMD marketing: you don't highlight features like energy efficiency where you suck relative to the competition. You point out your strong points. Admittedly, you don't have much to work with here.
-
Theregister.com is wrestling with GPUs that need 700 Watts of juice and how to cool them. 50 years ago I was reading an excellent magazine called "Electronics", and I remember that IBM came up with a scheme to absorb enormous amounts of heat from chips. You simply score the underside of the chip in a grid pattern and pump water through it. Hundreds of watts per degree Celsius can be removed. Problem solved.
-
Is AMD gonna beat Intel to 5nm tho?
https://notebookcheck.net/AMD-essen...
(god I can't wait for picometer shit, that'd be a good milestone to see)
-
I've always been a strong critic of the Mac operating system and Apple in general for their overpriced products. A few months back my old laptop kicked the bucket and repairing it was not an option, as I was sick of charging the laptop after every 3-4 hours and had to purchase a new laptop immediately. Looking at my options around 50k Rs or $700, all Windows laptops available in Indian markets sucked (except for the Lenovo 320S), so I made the shift to a MacBook Air 2017. My daily work involves Photoshop, Illustrator and a dash of Premiere Pro. I also work on NodeJS and Python using the PyCharm and Atom IDEs. After using it for a month I fell in love with the Mac platform and macOS. It's a wonderful experience. Gone are the days of crashes and the Windows updates (ugh). The boot of the laptop is like magic, and software like VMware, iMovie, Notes and Keynote is f**king awesome. Long hours of work have become fun rather than hell dealing with constant Windows gimmicks and bad battery optimisation on Linux.
An explanation of why all developers (except for the ones who require high-powered GPUs) and graphic designers should shift to macOS rn.
Advantages of using mac
No forced updates update whenever now or a f'ing month later no probs.
better battery optimisation than linux
no more installing os again and again (ubuntu)
better vm than virtualbox (vmware)
terminal for running bash commands
no crashes
Xcode platform
trackpad is worlds better than the best windows trackpad
Disadvantages
some softwares not available for macos
storage is generally less on macbooks
UI is simple (less elaborated than windows)
Workarounds
get a vm and install Linux (VMware Fusion 8)
ps. u may not need it though
wine and wine bottler for using windows apps
get a microSD to SD adapter for the MacBook and expand storage
-
How capable are mobile GPUs? I made a small game with Godot, and all of the animations are done with shaders, and I get really good performance, but I want to try a 3D game next and I'm not sure if throwing 3D transformations at a mobile GPU is a good idea.
-
You people might like to make fun of the engineer who put an entire game in VRAM and then played it from there, but I had the idea to preinstall game assets to VRAM years ago, and once GPUs gain persistent RAM, it is a no-brainer.
Not everything that sounds silly is useless, damn it. Sometimes it is progress in the making.
-
"The more GPUs you buy, the more money you saved" - Jensen Huang. ASUS finally promoting a GPU mining app on their RTX official webpage! Good luck with saving money!!!
-
TL;DR; I need your advice regd. a new workhorse of a laptop and ARM/MS Surface10/Laptop6 for this purpose
So my hi-end dell XPS (9350) keeps annoying me with its screen flickering. And it's an 8 year old ultrabook with 16G of RAM that I'm using extensively for development, devops, researching and whatnot. 16GB RAM is also becoming...not enough for all of it.
So I'm passively looking for an upgrade. I like the 13" profile (ultrabook style) and battery life, so I'd like to stay away from gaming laptops.
There have been talks about ARM being the new thing. I always saw ARM as a consumer-grade CPU arch (browsing, movies, music, docs, etc.), but the internet says that the new MS Surface devices will have ARM/Qualcomm built in and can compete with the MacBook Pro in terms of performance (ref.: https://windowscentral.com/hardware...), and they are allegedly releasing this spring.
I'm not much of a hardware person, I prefer staying on the logical level of things, so I want to ask you, people smarter than me, what do you think? Is it a feasible upgrade for an XPS13 (i7 Skylake/16G RAM/4k touch)? I'll be running code and image builds A LOT, using JetBrains IDEs and doing similar resource-intensive tasks. I don't care at all about GPUs - I don't use them (integrated graphics has always been sufficient).
What else should I consider?
Any alternatives?
P.S. While I can't stand Windows, I actually like MS's hardware. They are good at making it.
-
The term "CPU" is stupid nowadays; what even is "central" when there are entire server farms primarily employing GPUs?
I propose "GPP" - General Purpose Processor - as a much more descriptive name instead.
-
Current dream PC:
Dual EPYC at 128 core/256 thread or higher
1TB of RAM or more
Dual AMD GPUs
Massive RAID 10 array of 7200RPM HDDs, something like 24TB or better
A few standalone SSDs, to taste (at least one of a sizeable capacity, like 1TB?)
Total cost, like $30k or so?
-
While talking about how Windows dropped support for a line of AMD GPUs (not really sure which), I suggested they use Linux.
Friend: "You know, whenever I hear Linux, it's always from oraro"
:(
-
I'm fucking Paralyzed and I need some advice.
I want to be an entrepreneur.
Not just an entrepreneur but a DAMN good one.
I self-studied business, economics and physics, taught myself multivariable calculus, and I'm teaching myself chemistry too.
But I haven't even started my career and I just graduated from University.
Right now I'm starting simple and just doing a few web development things.
But, I want to go deeper into a subject that hasn't really had its problem solved yet.
A.I. can sell you neat things, but it can't kill misinformation (yet).
Graphics are an integral part to gaming, but GPUs are the second greatest threat to our environment behind commercial jets.
Do I HAVE to choose between A.I. and graphics?!
-
Why are all the 2060, 3060, and 4060 GPUs about $300? Shouldn't the price come down on the 2060 and 3060?
-
There's been talk that UE5's Nanite isn't actually all that efficient (sometimes slower than the alternative) and that kind of got me thinking.
You give developers very high-end machines so that they can move quickly. But that doesn't always translate to lower-end machines. When benchmarking, how would you even target lower-end machines in a simple way? Like for me, I have two GPUs in my system, but one is passed through to a Windows VM. I'd love to test on that GPU but it's just not feasible
All the great test results I (and others) have been seeing might just be a result of the newest cards being insanely fast in relation to cache. Is visibility rendering really faster on a few-generations-old card? I don't know! Nvidia MASSIVELY beefed up the L2 cache on the 4000 series. Does that play a role? Maybe even a big one...
-
What laptop would you guys recommend for development? I was thinking about a Surface or Surface Book, but those don't have dedicated GPUs IIRC, and thus a friend recommended the Dell ultrabooks - what would you guys choose? MacBook is out of the game.
-
Has anyone (who does Data Analysis, ML/DL, or NLP) had issues using AMD GPUs?
I'm wondering if it's even worth considering or if it's too early to think about investing in computers with such GPUs.
-
https://appleinsider.com/articles/...
Tl;Dr This guy thinks Apple is poised to switch the Macs to a custom ARM-based chip over x86! He's now on my idiot list.
I paraphrase:
"They've made a custom GPU", great! That's as helpful as "The iPad is a computer now", and guess what, Arm Mali GPUs exist! Just because they made their own GPU doesn't make it suitable for desktop graphics (or ML)!
"They released compilation tools right when they released their new platform, so developers could compile for it right away", who would be an idiot not to...
"Because Android apps run on so many platforms, they're not optimized for any. But Apple can optimize their apps for a specific user's device", what!? What did I miss? What do you optimize? Sure, you can optimize this, you can optimize that... But the reason why iOS software is "optimized", and runs better/smoother (only on the newest devices of course), is because it's a closed-loop, proprietary system (quality control), and because they happen to have done a better job writing some of their code (yes, Android desperately needs optimization in numerous places...).
I could go on... "WinTel's market share has slowly plateaued", "tHeY iNtRoDuCeD a FiElD pRoGrAmMaBlE aRrAy"
For Apple to switch Macs to ARM would be a horrible idea. Face it: ARM is slower than x86, and was never meant to be faster; it was meant for mobile usage, a good power-to-Wh ratio favoring the Wh side.
Stupid idiot.
-
Man I really need to get off my Windows host.
My productivity takes a nosedive whenever I'm on Windows, idk why.
I'd love to use Linux fully but my fav game Overwatch has shit performance running on Linux.
So the best solution would be to pass through my GPU to a Windows VM for gaming.
But that would require a new GPU for the host system, as the Ryzen 7 1700 does not have a GPU.
I don't have any experience with passing through GPUs. But could I make 2 VMs that access the same GPU, ofc not at the same time? So that I could have a gaming VM and maybe use another Linux VM if I wanted to do something which profits from GPU acceleration.
-
1. O(n)
2. Container queries
3. Supercomputer with a bunch of GPUs/TPUs running for free (solar, wind power)
Genuinely thanks!
-
I'm a complete noob with hardware so can someone please help me.
My GPU can support up to 4 monitors. I have 5. I figured that since you don't need a GPU for a computer to work, my PC would be able to support 4 with my GPU and one with the on-board system, but apparently I am wrong.
Is it possible to configure it to work this way? Will this seriously impact performance? (It shouldn't right? As PCs are designed to run with one monitor)
I know it is possible to connect multiple GPUs, so if that's not possible, could anyone give me any advice on that? Thanks!
-
"Well GPU main memory is L2. CPU main memory is L3. Which is why GPUs are so much faster." - a CS major at my college.
Someone please confirm if this is a common opinion.
-
Deep learning
I thought it would be a great course: learn some of the stuff that I always read about but couldn't understand jackshit, and maybe profit from it somehow.
I'm in my last assignment; they want us to pick some SNLI paper and implement it. Ok, so I find this one with the least amount of params because I thought hey, this seems promising.
And boy what a ride it was, I implemented it using PyTorch, the results are way off, I read the paper again and rewrite some parts, still nothing, I get 79%, it's supposed to be 85%, and no matter how I try, nothing.
10 GitHub repos later, 40 hours of complete meltdown,
20 throwaway Google accounts using colab because we don't have GPUs in our uni and using AWS is not feasible.
Same shit, I'm at a loss, the world is a lie, and I fell for it...
Fuck.
-
The 1080 Ti is rated at a little over 11 teraflops. GPUs with over 1 teraflop of compute performance were released in the early 2000s.
It's 2017 and we are stuck with fancy gen xxx CPUs.
I smell a huge compute performance wastage.
-
Okay, so Debian is just fucked by default then.
Created a Debian 10 persistence stick, and I'm having the fucking xorg issues ("No screens detected", xrandr says the same) I've had every fucking time I've installed Debian, except a simple round of dpkg-reconfigure isn't fixing it this time.
Suggestions?
Things tried:
- dpkg-reconfigure <every package even remotely related>
- X -configure
- installing all firmware from linux-firmware repo
- reinstalling everything remotely related (with both reinstall and purge/install)
- Wayland ("failed to create compositor backend")
- creating my own xorg configs and driver-radeon configs and all that shit with my screen explicitly defined
- remaking the stick with a redownloaded ISO
- actually installing it to a HDD first
- crying in frustration
- different monitors
- someone else's machine (both AMD GPUs, mine's an R9 380, his an RX 3-digit something-or-other)
- an NVIDIA card (the other tester threw his old 1080 Ti in his PC, set up all the drivers and shit, and nothing fucking changed)
what is this, Fedora?