@b2plane Save the state (container and volumes etc. if using docker) locally when you're done for the week and restore when you get back?
These all appear on searches and have nothing to do with AWS or Azure.
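Something like this, roughly (a sketch assuming the standard docker CLI; the container/volume names are made up):

```python
# Sketch: snapshot a container and one of its volumes at the end of the
# week, restore later. Assumes the docker CLI; all names are hypothetical.
import os
import subprocess

CONTAINER = "devbox"    # hypothetical container name
VOLUME = "devbox-data"  # hypothetical volume name
CWD = os.getcwd()

def run(*args):
    subprocess.run(args, check=True)

def save():
    # Freeze the container's filesystem into an image, then export it.
    run("docker", "commit", CONTAINER, "devbox-snapshot")
    run("docker", "save", "-o", "devbox-snapshot.tar", "devbox-snapshot")
    # Volumes aren't part of the image; tar them via a throwaway container.
    run("docker", "run", "--rm",
        "-v", f"{VOLUME}:/data", "-v", f"{CWD}:/backup", "alpine",
        "tar", "czf", "/backup/devbox-data.tgz", "-C", "/data", ".")

def restore():
    run("docker", "load", "-i", "devbox-snapshot.tar")
    run("docker", "run", "--rm",
        "-v", f"{VOLUME}:/data", "-v", f"{CWD}:/backup", "alpine",
        "tar", "xzf", "/backup/devbox-data.tgz", "-C", "/data")
    # (remove the old container first if the name is still taken)
    run("docker", "run", "-d", "--name", CONTAINER,
        "-v", f"{VOLUME}:/data", "devbox-snapshot")
```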
-
In general, as long as you have any kind of resources (compute, storage, networking, IP, backups, services, whatever) allocated to you, you'll be billed for it. That's how... all of them work.
-
gas?
-
@-red I fucking loved Children of Time, even though I'm an arachnophobe
I should read the others
-
@Fast-Nop couldn't be happier that blockchain is passé
-
"Google AI Searchengine machinelearning generative LLM Llama GPT assistant startup TPU quantized Search"
-
@Fast-Nop yeah, that's a good strategy (although again that's your particular mix of working set size and throughput/latency/occupancy requirements). I needed the absolute fastest mem throughput for one set of workloads, and large capacity and occupancy for another set, so the current small 64GB DDR5 system and bigger 128GB DDR4 systems worked out perfectly in price/perf. Sadly the working sets of both are too large for the big X3D caches to matter much, the L2/L3 cache is basically just acting as a prefetch/shmem buffer, so we needed to invest in RAM.
Although, DDR5 can go to 48GB/stick now (maybe even 64GB/stick in future? Not sure) so there will eventually be a capacity benefit too.
-
@Fast-Nop @cuddlyogre DDR4 128GB was pretty fiddly even with 1 kit of 4 sticks. We went through several kits and two mobos for one of the machines.
DDR5 is even more sensitive - one of the machines is stuck on 64GB because the 128GB kit we bought didn't work and we're just waiting for prices to drop now. And 2 kits of 2 sticks is basically not going to work, the timings are too tight. I'd be happy with DDR5-5200 with 4 sticks, never seen a stable DDR5-6000+ system with 4 sticks (consumer platform ofc).
If it's any consolation, you should look at the memory latency vs memory capacity tradeoff for your application. If your working set is small, fits in memory, and your application is memory bound, then better RAM timings might help. Otherwise it's a waste - if your application is not memory bound anyway, then faster RAM won't help, and if your working set is large, then larger mem is (generally) better than faster mem because you'll avoid paging to disk, which is painfully slow.
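Back-of-envelope version of that argument (all the numbers below are made-up assumptions, plug in your own):

```python
# Rough model: time to stream a working set once, RAM vs spilling to NVMe.
# Bandwidth/size numbers are illustrative assumptions, not measurements.
def stream_time_s(gigabytes, bandwidth_gb_s):
    return gigabytes / bandwidth_gb_s

ram_bw = 60.0   # GB/s, assumed dual-channel DDR5-ish
ssd_bw = 5.0    # GB/s, assumed fast PCIe 4.0 NVMe
ram_gb = 64.0   # installed RAM
ws_gb = 100.0   # working set size

spilled = max(0.0, ws_gb - ram_gb)
with_spill = stream_time_s(ws_gb - spilled, ram_bw) + stream_time_s(spilled, ssd_bw)
all_in_ram = stream_time_s(ws_gb, ram_bw)

print(f"spilling: {with_spill:.1f}s  all in RAM: {all_in_ram:.1f}s")
# ~8.3s vs ~1.7s: extra capacity wins long before a few percent of
# extra RAM frequency/timings would.
```
-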
13900k at max load under 90 deg C here. The only cooler I've tried that can pull it off reliably is DeepCool LT 720 (we have three machines with 13900k+LT720). It runs this chip at 315W according to hwinfo in Cinebench.
Tbh I usually gate it to 250W or lower, not much perf difference but way nicer for the system.
-
Oh, this is about code in papers.
The only thing it's guaranteed to do is reproduce the results in the paper :p (and if it doesn't, that's a big problem)
Expecting anything else is too much (which is where the engineers come in).
Data science/ML code can be pretty bad, but wait till you see systems code...
-
glm ftw
-
Why are data scientists being made to write libraries?
Most places I know have engineers specialized in taking models and implementing them in well-engineered, performant libraries.
-
@Demolishun "I picked the wrong week to stop sniffing glue!"
-
Samsung moment
-
I still go back to old stuff I've seen and it's fun, in a comfortable sort of way, like an old t-shirt. Sometimes the familiar is what you want/need.
I must've rewatched "Airplane!" at least ten times, still gets a good laugh
-
You could generate pre-connected rooms via cellular automata? That should be a lot more structured.
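Something like this as a starting point (the classic random-fill-then-smooth trick; all the parameters are guesses to tune):

```python
# Cellular-automata map sketch: random fill, then a few smoothing passes
# where a cell becomes wall iff most of its neighbors are walls.
import random

W, H = 60, 30
FILL = 0.45   # initial wall probability (tune)
STEPS = 5     # smoothing iterations (tune)

def wall_neighbors(grid, x, y):
    """Count wall cells in the 8-neighborhood; out-of-bounds counts as wall."""
    n = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if not (0 <= nx < W and 0 <= ny < H) or grid[ny][nx]:
                n += 1
    return n

grid = [[random.random() < FILL for _ in range(W)] for _ in range(H)]
for _ in range(STEPS):
    grid = [[wall_neighbors(grid, x, y) >= 5 for x in range(W)] for y in range(H)]

# To guarantee everything is connected, flood-fill the open cells afterwards
# and keep (or tunnel between) the largest regions.
for row in grid:
    print("".join("#" if c else "." for c in row))
```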
-
Just look up the relevant Project Farm video for whichever tool
-
@black-kite I'm genuinely impressed you caught that. How did you remember a post from 2016??
-
"because you don't do exactly what I told you" wow, that's...not how PhD is supposed to work. It seriously sucks when your advisor is not on your side. Are you able to switch to a different advisor in the same dept?
Btw, what does it mean to get a poor grade on your dissertation? Mine is just a binary pass/fail, and I've never heard of anyone caring about grades on a PhD resume.
-
Tbf I don't think most of these are bad per se (I love my smart lights and appliances) but it's the predatory "squeeze every last drop (especially by collecting data)" monetization that makes me hate it.
Thinking of switching all my smart stuff to a home server with "dumb" backup instead, but effort, sigh.
-
@Demolishun resident evil?
-
(hypothetically, if performance or power draw becomes a problem)
-
@Hazarth this.
@Demolishun instead of occlusion culling, it might actually be faster to batch-draw the whole thing, even redundant parts, for something this (graphically) simple. As long as you avoid synchronizing with the CPU on every draw call and the scene fits in GPU caches (which avoids a lot of data movement), you'll be very fast and power efficient. A few large chunks of (even redundant) work usually beat many small chunks of work, and the less you involve the CPU the better; see the sketch below.
Idk if Godot gives you that much control tho.
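Concept only (none of this is Godot API; the mesh/draw names are hypothetical):

```python
# Concept: pre-merge many small meshes into one vertex/index buffer so the
# whole scene is a single draw call with no per-object CPU work per frame.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list  # [(x, y, z), ...]
    indices: list   # triangle indices into vertices

@dataclass
class Batch:
    vertices: list = field(default_factory=list)
    indices: list = field(default_factory=list)

    def add(self, mesh: Mesh) -> None:
        # Re-base indices so they point into the combined vertex list.
        base = len(self.vertices)
        self.vertices.extend(mesh.vertices)
        self.indices.extend(i + base for i in mesh.indices)

# Build once on the CPU...
batch = Batch()
tri = Mesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [0, 1, 2])
for _ in range(100):  # 100 redundant triangles, who cares
    batch.add(tri)

# ...then per frame it's one upload (once) and one draw (hypothetical calls):
# upload(batch.vertices, batch.indices)
# draw_indexed(len(batch.indices))
```
-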
It's already happened - we are the monkeys.
-
@MammaNeedHummus I guess you're supposed to make a point. Powerfully.
Probably each one was done by a different team and they didn't care much as long as it was reasonably consistent. They are pretty good names I think, given how they've become iconic and stuck around for decades. I also think they sound pretty cool.
I don't think LibreOffice or Apple's office suite have as good names ("Pages" or "Writer" or "Impress", meh).
-
Neither C++ nor CUDA is (likely) going away anytime soon lol. Plus if you know CUDA you have GPU programming skills that'll transfer easily to any other API (OpenCL, Vulkan Compute, SYCL, whatever else they dream up).
-
Like Word is a word
-
@jestdotty you can (and almost always do) have physically correct graphics but a stylized art style. Current tech is more than good enough to show "real" violence in all its horrid details, but that's not done.
As a concrete example, the rendering engine behind Pixar movies (RenderMan) uses many physically correct or physically inspired models of materials, lighting, shading, geometry (fur, skin etc.) but I don't think anyone would call their style "realistic". A lot of very cool simulations of hair, particulate substances like sand, water, etc. are also used (look at "Piper" for a good example). These days a good fraction of 3D graphics engines use physically-based rendering.
Basically, I don't think graphics technology controls how we depict violence; that bar was crossed long ago.
-
Do you have discrete "rooms" with connections in between (so basically a graph of rooms with edges being connections) or is an "open connected space" the basic building block? I remember having to do a lot of sketchy code because I used a graph of rooms and visibility across rooms was a pain.
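For reference, the graph-of-rooms shape I mean (a minimal sketch, names made up):

```python
# Rooms are nodes, doors/corridors are edges; cross-room queries (like
# visibility) become bounded graph walks instead of geometry hacks.
from collections import deque

rooms = {"cellar", "hall", "armory", "crypt"}
doors = {("cellar", "hall"), ("hall", "armory"), ("hall", "crypt")}

adjacent = {r: set() for r in rooms}
for a, b in doors:
    adjacent[a].add(b)
    adjacent[b].add(a)

def rooms_within(start, max_doors):
    """All rooms reachable through at most max_doors connections (BFS)."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        room, depth = queue.popleft()
        if depth == max_doors:
            continue
        for nxt in adjacent[room]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen

print(rooms_within("cellar", 2))  # {'cellar', 'hall', 'armory', 'crypt'}
```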
-
@Fast-Nop honestly I enjoyed the darker, grittier DS9 as much, or more. Stronger writing overall imo. I know Trek is supposed to have the "hopeful, principled future" vibe but I liked how in DS9 they try to uphold the Federation in a much more difficult and murky context, with much more interesting "is this actually for the greater good or are we crossing a line" dilemmas.