Details
- About: Generic McGenericsson
- Skills: Yes
- Location: India / USA
- Joined devRant on 9/14/2017
-
@b2plane Save the state (container and volumes, etc., if using Docker) locally when you're done for the week, and restore it when you get back? These all appear in searches and have nothing to do with AWS or Azure.
-
In general as long as you have any kind of resources (compute, storage, networking, IP, backups, services, whatever) allocated to you, you'll be billed for it. That's how...all of them work.
-
gas?
-
@-red I fucking loved Children of Time, even though I'm an arachnophobe
I should read the others -
@Fast-Nop couldn't be happier that blockchain is passé
-
"Google AI Searchengine machinelearning generative LLM Llama GPT assistant startup TPU quantized Search"
-
@Fast-Nop yeah, that's a good strategy (although again that's your particular mix of working set size and throughput/latency/occupancy requirements). I needed the absolute fastest mem throughput for one set of workloads, and large capacity and occupancy for another set, so the current small 64GB DDR5 system and bigger 128GB DDR4 systems worked out perfectly in price/perf. Sadly the usage patterns of both are too large for the big X3D caches to matter much, the L2/L3 cache is basically just acting as a prefetch/shmem buffer, so we needed to invest in RAM.
Although, DDR5 can go to 48GB/stick now (maybe even 64GB/stick in future? Not sure) so there will eventually be a capacity benefit too. -
@Fast-Nop @cuddlyogre DDR4 128GB was pretty fiddly even with 1 kit of 4 sticks. We went through several kits and two mobos for one of the machines.
DDR5 is even more sensitive - one of the machines is stuck on 64GB because the 128GB kit we bought didn't work, and we're just waiting for prices to drop now. And 2 kits of 2 sticks is basically not going to work, the timings are too tight. I'd be happy with DDR5-5200 with 4 sticks; I've never seen a stable DDR5-6000+ system with 4 sticks (consumer platform ofc).
If it's any consolation, you should look at the memory latency vs memory capacity tradeoff for your application. If your working set is small and fits in memory and your application is memory bound, then better RAM timings will (might) help. Otherwise, it's a waste - if your application is not memory bound anyway, then faster RAM won't help, and if your working set is large, then larger mem is (generally) better than faster mem because you'll avoid paging to disk, which is painfully slow. -
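That memory-bound check can be sketched as a back-of-the-envelope roofline comparison. Everything below is a rough illustration, not a measurement tool - the machine numbers in the example are hypothetical placeholders:

```python
# Roofline-style sanity check: is a workload likely memory-bound?
# All machine numbers in the example below are hypothetical, not measurements.

def is_memory_bound(flops, bytes_moved, peak_gflops, peak_gbps):
    """Compare the workload's arithmetic intensity (FLOPs per byte moved)
    against the machine balance (peak compute / peak memory bandwidth).
    Below the balance point the memory system is the bottleneck, so
    faster RAM can help; above it, faster RAM mostly won't."""
    intensity = flops / bytes_moved    # FLOPs per byte for this workload
    balance = peak_gflops / peak_gbps  # FLOPs per byte at the roofline knee
    return intensity < balance

# Example: a streaming daxpy-like kernel does 2 FLOPs per 24 bytes moved.
# On a machine with ~1000 GFLOP/s compute and ~80 GB/s DRAM bandwidth,
# the balance point is 12.5 FLOPs/byte, so this kernel is memory-bound.
print(is_memory_bound(flops=2, bytes_moved=24,
                      peak_gflops=1000, peak_gbps=80))  # True
```

In practice you'd profile instead of guessing these numbers, but the direction of the comparison is what decides whether to buy faster RAM or more RAM. -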
13900K at max load under 90 deg C here. The only cooler I've tried that can pull it off reliably is the DeepCool LT720 (we have three machines with 13900K+LT720). It runs this chip at 315W according to HWiNFO in Cinebench.
Tbh I usually gate it to 250W or lower, not much perf difference but way nicer for the system. -
Oh, this is about code in papers.
The only thing it's guaranteed to do is reproduce the results in the paper :p (and if it doesn't, that's a big problem)
Expecting anything else is too much (which is where the engineers come in).
Data science/ML code can be pretty bad, but wait till you see systems code... -
glm ftw
-
Why are data scientists being made to write libraries?
Most places I know have engineers specialized in taking models and implementing them in well engineered, performant libraries. -
@Demolishun "I picked the wrong week to stop sniffing glue!"
-
Samsung moment
-
I still go back to old stuff I've seen and it's fun, in a comfortable sort of way, like an old t-shirt. Sometimes the familiar is what you want/need.
I must've rewatched "Airplane!" at least ten times, still gets a good laugh -
Just look up the relevant Project Farm video for whichever tool
-
@black-kite I'm genuinely impressed you caught that. How did you remember a post from 2016??
-
"Because you don't do exactly what I told you" - wow, that's...not how a PhD is supposed to work. It seriously sucks when your advisor is not on your side. Are you able to switch to a different advisor in the same dept?
Btw, what does it mean to get a poor grade on your dissertation? Mine is just a binary pass/fail, and I've never heard of anyone caring about grades on a PhD resume. -
Tbf I don't think most of these are bad per se (I love my smart lights and appliances) but it's the predatory "squeeze every last drop (especially by collecting data)" monetization that makes me hate it.
Thinking of switching all my smart stuff to a home server with "dumb" backup instead, but effort sigh. -
@Demolishun resident evil?
-
It's already happened - we are the monkeys.
-
@MammaNeedHummus I guess you're supposed to make a point. Powerfully.
Probably each one was done by a different team and they didn't care much as long as it was reasonably consistent. They are pretty good names I think, given how they've become iconic and stuck around for decades. I also think they sound pretty cool.
I don't think LibreOffice or Apple's office suite have as good names ("Pages" or "Writer" or "Impress", meh). -
Neither C++ nor CUDA is (likely) going away anytime soon lol. Plus if you know CUDA, you have GPU programming skills that'll transfer easily to any other API (OpenCL, Vulkan Compute, SYCL, whatever else they dream up).
-
Like Word is a word
-
@jestdotty you can (and almost always do) have physically correct graphics but a stylized art style. Current tech is more than good enough to show "real" violence in all its horrid details, but that's not done.
As a concrete example, the rendering engine behind Pixar movies (RenderMan) uses many physically correct or physically inspired models of materials, lighting, shading, geometry (fur, skin etc.) but I don't think anyone would call their style "realistic". A lot of very cool simulations of hair, particulate substances like sand, water, etc. are also used (look at "Piper" for a good example). These days a good fraction of 3D graphics engines use physically-based rendering.
Basically, I don't think graphics technology controls how we depict violence; that bar was crossed long ago -
What I'd like to see from graphics technology now is performance and efficiency rather than just showing off new effects. Give me good visuals *and* good performance with low power consumption, damnit. I want raytraced GI at 144fps on my phone. Unreasonable? Maybe not in a few hw and sw generations.
Doom Eternal is a great example of a performant engine that looks fantastic too.
(I'd say there's still a lot to do in graphics and technical animation/simulation but otherwise yes I agree). -
@Fast-Nop that's fair. I was going off the original post.
-
Do an analysis of the size of the teams and infrastructure that built the models you're supposed to surpass, and hand in an Excel sheet summarizing a lower bound on how much it cost. Include the cost of obtaining a dataset. That argument should work much better; it's something they'll understand.
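That lower-bound tally is just a sum of the big line items. A minimal sketch - every figure in the example is a placeholder to replace with real quotes, not actual pricing:

```python
# Lower-bound cost tally for reproducing a large model.
# Every number in the example is a placeholder assumption, not real pricing.

def training_cost_lower_bound(gpu_hours, gpu_hourly_rate,
                              engineer_months, monthly_salary,
                              dataset_cost):
    """Sum the three biggest line items: compute, people, data.
    This is strictly a lower bound - storage, networking, failed
    training runs, and evaluation are all extra on top."""
    compute = gpu_hours * gpu_hourly_rate
    people = engineer_months * monthly_salary
    return compute + people + dataset_cost

# Placeholder example: 100k GPU-hours at $2/h, 24 engineer-months
# at $15k/month, and $50k for dataset licensing.
total = training_cost_lower_bound(100_000, 2.0, 24, 15_000, 50_000)
print(f"${total:,.0f}")  # $610,000
```

Even with deliberately conservative placeholders, the bottom line tends to make the point on its own. -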
-
@Fast-Nop to me it reads more like an irresponsible disclosure of vulnerable data. If you read the points MS objects to, it's not that the group made the search tool, but that it was done improperly ("We recommend that any security company that wants to provide a similar tool follow basic measures to enable data protection and privacy").
It also appears that the researchers exaggerated the scope of the issue (and didn't respond to followup). Both of these are entirely fair game to be pissed at. Just because a security group says something doesn't make it true; publication of such findings should be done with verification from the affected company.
Security groups are usually pretty responsible about such things (the legit ones, not hackers). -
Oh boy, you should see SYCL errors...or rather, you won't, because it doesn't have robust validation for stuff like inappropriate synchronization, which can totally wreck your program.