Search - "image rendering"
-
I like memory hungry desktop applications.
I do not like sluggish desktop applications.
Allow me to explain (although this may already be obvious to quite a few of you).
Memory usage is stigmatized quite a lot today, and for good reason. Not only is it an indication of poor optimization, but not too many years ago, memory was a much more scarce resource.
And something that started as a joke in that era is true in this era: free memory is wasted memory. You may argue, correctly, that free memory is not wasted; it is reserved for future potential tasks. However, if you have 16GB of free memory and don't have any plans to begin rendering a 3D animation anytime soon, that memory is wasted.
Linux understands this. Linux actually has three states for memory to be in: used, free, and available. Used and free memory are the usual. However, Linux automatically caches files that you use and keeps them in RAM as "available" memory. Available memory can be claimed at any time by programs, simply dumping out whatever was previously occupying it.
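You can see the distinction on any Linux box; here's a quick Node/TypeScript sketch (Linux only, field names straight from /proc/meminfo):

```ts
import { readFileSync } from "node:fs";

// /proc/meminfo reports sizes in kB, one "Key:   value kB" pair per line
const meminfo = readFileSync("/proc/meminfo", "utf8");
const gib = (key: string): string => {
  const kb = Number(meminfo.match(new RegExp(`^${key}:\\s+(\\d+) kB`, "m"))?.[1] ?? 0);
  return (kb / 1024 / 1024).toFixed(1) + " GiB";
};

console.log("free:     ", gib("MemFree"));      // truly idle RAM
console.log("available:", gib("MemAvailable")); // free + reclaimable page cache
console.log("cached:   ", gib("Cached"));       // file cache, ready to be dumped
```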
And as you well know, RAM is much faster than even an SSD. Memory-heavy programs COULD (< important) be holding things in memory rather than having them sit on the HDD, waiting to be slowly retrieved. I'd much rather a web browser take up 4 GB of RAM than sit around waiting for it to read cached images off my hard drive.
Now, allow me to reiterate: unoptimized programs still piss me off. There's no need for that Electron-based webcam image capture app to take three gigs of memory upon launch. But I love it when programs use the hardware I spent money on to run smoother.
Don't hate a program simply because it's at the top of Task Manager.
-
Ah finally, the moment when being a web developer is full of joy.
☑️ Server-side rendering
☑️ Inline critical CSS
☑️ Add progressive image loading (sketched below)
☑️ Minify everything
☑️ Automate release process in CI
☑️ Lint everything
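That progressive image loading item boils down to very little code. A minimal sketch, assuming markup like <img class="blur" src="tiny-placeholder.jpg" data-src="full.jpg"> (the attribute and class names are mine, not from any particular library):

```ts
// swap an inlined low-res placeholder for the full image once it has loaded
document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
  const full = new Image();
  full.onload = () => {
    img.src = full.src;           // placeholder out, real image in
    img.classList.remove("blur"); // e.g. fade the blur away via a CSS transition
  };
  full.src = img.dataset.src!;    // kick off the real download
});
```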
Now that the structure is up, time to code the actual website. This is gonna be good!
-
Diary of an insane lead dev: day 447
PDF thumbnails that the app generates are now in S3 instead of saved on disk.
When they were on disk, we would read them from disk into a stream and then create a stream response to the client, which would then render the stream in the UI (hey, I didn't write it, I just had to support it).
One of my lazy-ass junior devs jumps on modifying it before I can; his solution is to retrieve the file from the cloud now, convert the stream into a base64-encoded string, and then shove that string into an already bloated viewmodel coming from the server to be rendered in the UI.
I'm like, "Why on earth are you doing that? Did you even test the result of this and notice that rendering those thumbnails now takes three times as long???"
Jr: "I mean, it works, doesn't it?"
Seriously, if the image file is already hosted in the cloud, and you can programmatically determine its URL, why wouldn't you just throw that in the src attribute of your HTML tag and call it a day? Why would you possibly think that the extra overhead of retrieving and converting the file before passing it off to the UI in an even larger payload than before would result in a good user experience for the client???
It took me all of 30 seconds to google and find out that the AWS SDK has a GetPreSignedURL method for a private file uploaded to S3, and you can set when it expires; the application is dead at the end of the year anyway.
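(The post is referring to the .NET SDK's GetPreSignedURL; the same idea in the AWS SDK for JavaScript v3 looks roughly like this. The bucket, key, and region are placeholders, not from the original app.)

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// a time-limited URL for a private object; the client can put this straight
// into an <img src="..."> instead of hauling base64 through a viewmodel
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "app-thumbnails", Key: "pdfs/1234/page-1.png" }),
  { expiresIn: 3600 } // seconds until the link stops working
);
```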
JFC. I hate trying to reason with these fuckheads by saying "you are paid for your brain, fucking USE IT" because clearly these code monkeys do not have brains.
-
Sometimes I just don't know what to say anymore
I'm working on my engine and I really wanna push high triangle counts. I'm doing a pretty cool technique called visibility rendering, and it's great because it kind of balances out some known causes of bad performance on GPUs (namely that pixels are always rasterized in quads, which is especially bad for small triangles).
So then I come across this post https://tellusim.com/compute-raster... which shows some fantastic results, and just for the fun of it I implement it. Like, not optimized or anything, just a quick and dirty toy demo to see what sort of performance I can get.
... I just don't know what to say. Using actual hardware-accelerated rasterization, which GPUs are literally designed to be good at, I render about 37 million triangles in 3.6 ms. Eh, fine but not great. Then I implement this guy's unoptimized(!) software rasterizer and I render the same scene in 0.5 ms?!
IT'S LITERALLY A COMPUTE SHADER. I rasterize the triangles manually IN SOFTWARE and write them out with 64-bit atomic image stores. HOW IS THIS FASTER THAN ACTUAL HARDWARE!???
AND BY LIKE AN ORDER OF MAGNITUDE AT THAT???
Like, I even tried doing some optimizations like backface cone culling on the meshlets, but doing that makes it slower. HOW. I'm rendering 37 million triangles without ANY fancy tricks. No hi-z depth culling, which a GPU would normally do. No backface culling, which a GPU would normally do. Not even damn clipping of triangles. I render ALL of them ALL the time. At 0.5 ms.
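(For the curious: the "64-bit atomic image store" part isn't magic. One common convention packs depth into the high bits and a triangle ID into the low bits of a single 64-bit value, so one atomic per pixel resolves the depth test and "which triangle is visible" at once. A CPU-side sketch of that encoding, not the actual shader:)

```ts
// high 32 bits = quantized depth, low 32 bits = triangle id; ordering the
// packed value orders by depth first, ids only break ties
function packDepthId(depth: number, triId: number): bigint {
  const depthBits = BigInt(Math.min(0xffffffff, Math.floor(depth * 0xffffffff)));
  return (depthBits << 32n) | BigInt(triId >>> 0);
}

// per-pixel equivalent of the shader's atomic min: nearer fragment wins
function visStore(buf: BigUint64Array, px: number, depth: number, triId: number): void {
  const candidate = packDepthId(depth, triId);
  if (candidate < buf[px]) buf[px] = candidate;
}

// visibility buffer cleared to "far plane, no triangle"
const visBuffer = new BigUint64Array(1920 * 1080).fill(0xffffffff_ffffffffn);
```
-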
Each time I try Firefox after somebody mentions it again or it shows up in my RSS feed, it still seems to never actually advance.
It's stuck, and either gets worse or goes back to its stable, non-improving level again. How come they still don't have a proper mobile responsive tester? Why are even the upgraded add-ons still suffering the same container and rendering bugs?
How is getting a bad image by implementing the Mr. Robot malware stunt more important than getting to an actually competitive level?
Why is it bloated by default with random Pocket add-on bullshit? Why did it begin to lag? ...
I remember when I used Firefox for a good portion of my life and laughed at how laggy Google Chrome was, but nowadays there's simply no competition to Chrome, its stability, and its developer tools.
I wish there was competition. The grid tools were a great start, but then nothing followed and they just went back to their never-improving flatline.
-
When the Khronos Group announced Vulkan back then, they also announced a whole set of software that can handle all the new formats they introduced.
One format in particular piqued my interest recently: KTX2. It's an image format that can be multilayered and supercompressed, has inline mipmaps, and, most importantly, can be streamed directly to the GPU, basically without involving the CPU at all.
Now here comes the kicker. If I want to use this format (mind you, Vulkan has been around for a while now) for creating skyboxes, there is only a single tool that can properly convert HDR images to KTX2, and it only works on Windows. Oh, and there are no binaries, so in every case you have to compile it yourself.
Ah, and then I thought: okay, what if I render the cubemap faces ahead of time and assemble them into the cubemap by hand, since _some_ KTX tools do work on Linux? That should work, right? Wrong. When assembling it, it turns out it's now a 2D image instead of a 2DArray image with one element (which apparently is not the same thing for skyboxes).
Why is this shit such a pain in the ass?
Like... I'm currently rendering equirectangular HDR images on my Linux machine, then moving these (usually 100 MB) files over to some Windows PC, converting them there into KTX2 cubemaps, and then moving them back. And every time I need to change the skybox, I have to repeat this whole nonsense. Ah... and the tool doesn't even work properly on Windows: you can't just disable mipmaps or change the filtering, because then the skybox is just black for some reason.
The funniest thing is that, at the end of the day, these KTX2 files work on Linux as well as Windows, macOS, and even mobile platforms, so there's really no reason for the conversion tool to only work on one of those systems.
But hey, at long last I got them working, and this stuff looks quite nice now 👌
-
Currently rendering an image on my laptop. It's going to take 11 hours. Any good ideas on how to kill the time?? :)
-
v0.0005a (alpha)
- class support added to Lua thanks to yonaba.
- rkUIs class created
- new panel class
- added drawing code for panel
- fixed bug where some sides of the UI's border were failing to draw (line rendering quirk)
v0.0014a (alpha) 11.30.2023 (~2 hours)
- successfully retrieving basic data from the save folder, loading text into Lua from files
- added 'props' property to Entity class
- added a props table to control what gets serialized and what doesn't (see the sketch after this changelog)
- added a save() base method for instances (has to be overridden to be useful beyond the basics)
- moved the lume.serialize() call into the :save() method on the base entity class itself
- serialized and successfully saved an entity's property table.
- fixed deserialization bugs involving wrong indexes (savedata[1], not savedata[2])
- moved deserialization from temp code into the line-loading loop itself (assuming each item is on one line)
- deserialized test data, init()'d a new player Entity using the freshly loaded data, and displayed the entity sprite
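(The engine side is Lua/LÖVE with lume.serialize, but the props-whitelist idea sketches the same way in any language; TypeScript here purely for illustration, every name hypothetical:)

```ts
interface Entity {
  x: number;
  y: number;
  hp: number;
  sprite: unknown;   // runtime-only handle, deliberately never saved
  props: string[];   // whitelist of fields that get serialized
}

// save(): dump only the whitelisted fields, one line per entity
function save(e: Entity): string {
  const data: Record<string, unknown> = {};
  for (const key of e.props) data[key] = (e as any)[key];
  return JSON.stringify(data); // stand-in for lume.serialize()
}

// load(): restore the whitelisted fields onto a freshly init()'d entity
function load(e: Entity, line: string): void {
  Object.assign(e, JSON.parse(line));
}
```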
All in all, not a bad session. Understanding file handling and how to interact with the directory system was the biggest hurdle I was worried about for building my tools.
Next steps will be defining some basic UI elements (with overridable draw code), and then loading and initializing the UI from Lua or JSON.
New projects can be set as subfolders in appdata, using setIdentity("appname/projectname") to keep things clean.
I'm not even dreading writing basic syntax highlighting!
Idea is to dogfood the whole process. The UI is rendered in-engine, just like you might see with Godot, Unity, or GameMaker, so I have maximum flexibility to style it the way I want. I'm familiar enough with constructing from polygons, on top of stenciling, on top of nine-slicing, on top of existing tweening and special effects, that I can achieve exactly what I want.
Idea is to build a really well-managed asset pipeline. Stencyl, as 'crappy' and 'for education' as it appeared, was a master class in how to do things the correct way; it was just horribly bloated while doing it.
Logical tilesets that you import, can rearrange through drag-n-drop, assign custom tile shapes to, physics materials, collision groups, names, and tag data, all in one editor? Yes please.
Every other 2D editor is basic-bitch, has you importing images, and at most generates different scales and does the slicing for you.
Code editor? Every behavior was in a component, with custom fields. All your code goes into a list of events, which you can toggle on and off with a proper toggle button, so you can explicitly experiment instead of commenting shit out (yes, git is better, but we're talking solo amateurs here; they're not gonna be using git out of the gate unless they already know what they're doing).
Components all have an image assignable to identify them, along with a description field, and they're arranged in a 2D grid for easy browsing, copying, and modifying.
The physics shape editor, the animation editor, the map editor, all of it was so bare bones and yet had things others didn't.
I want that, except without the historic ties to Flash, without the overhead of Java, and with sexier fucking in-engine rendering of the UI, plus support for modding and in-engine custom tools.
Not really doing it for anyone except myself, and doubt I'll get very far, but since I dropped looking for easy solutions, I've just been powering through all the areas I don't understand and doing the work.
I rediscovered my love of programming after 3-4 years of learning to hate it, and things are looking up.
-
Visibility rendering using traditional vertex/fragment shaders does 39 million tris in about 3.6 ms
With my newest renderer I can push 314 million triangles in about 6 ms right now
And this is just visibility; factoring in material evaluation, traditional deferred would be at least 10x worse. Meanwhile, everything expensive about materials is completely independent of geometric complexity in my renderer.
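(For scale, working out the quoted numbers: 39 million tris / 3.6 ms ≈ 11 billion tris/s, versus 314 million tris / 6 ms ≈ 52 billion tris/s, so roughly 4.8x the raw visibility throughput before material cost even enters the picture.)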
Literally me rn: https://youtube.com/watch/...
(Can't include an image because devRant doesn't want to.)
-
I have been spending all day optimizing a WordPress site for PageSpeed, looking into how I can optimize the custom scripts which block rendering, and I was learning some new things. It was hard, but I was making progress. Then comes the senior engineer, who installs a plugin, and the PageSpeed score goes from 60 to 90 on mobile. I was pretty shocked. Then it hit me. IT DELAYS THE LOADING OF EVERY SCRIPT AND IMAGE UNTIL USER INPUT, TRICKING THE SCORING SYSTEM. YOU GET A WHITE SCREEN IF YOU DON'T DO ANYTHING. I told him it's not really faster this way, and he agreed it is not "ethical", but the score is good.
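(The trick such plugins pull is roughly this; the attribute name is made up, not from any specific plugin. Until the first interaction fires, nothing loads, hence the white screen:)

```ts
// at render time the plugin rewrites <script src> into <script data-delayed-src>,
// so the browser fetches nothing; this restores them on the first user input
const restore = (): void => {
  document
    .querySelectorAll<HTMLScriptElement>("script[data-delayed-src]")
    .forEach((s) => {
      const real = document.createElement("script");
      real.src = s.dataset.delayedSrc!; // fetched + executed only now
      s.replaceWith(real);
    });
};

["mousemove", "keydown", "touchstart", "scroll"].forEach((ev) =>
  addEventListener(ev, restore, { once: true, passive: true })
);
```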
Am I still a naive idiot kid? There is a line between scamming people and quality work, but it keeps getting blurrier.
-
Messing with the three.js library. Now imagine I have a letter in 3D, from which I removed all the faces and replaced the vertices with white dots (plexus effect), and all around there are like thousands of dots. Now I wanted to put a single sprite somewhere on that letter, so I wrote a short script doing that. Fast forward an hour: I am trying to figure out why the FPS dropped to like 0.5. Checking various browsers, even downscaling the sprite from 512x512 to 64x64, checking the whole of Stack Overflow for why just one fucking sprite is causing so much trouble and such an FPS drop. Trying everything except... I had written that function inside the loop rendering those thousands of dots all around. Lol, my computer almost caught fire rendering that shit.
Must say, in Chrome it had 0.5 FPS; Firefox had around 15-20 FPS, which is A LOT better.
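(The fix, sketched: construct the sprite once at setup, never inside the render loop. The texture path and positions are placeholders:)

```ts
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// build the sprite ONCE, at setup time
const texture = new THREE.TextureLoader().load("sprite.png");
const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
sprite.position.set(0, 1, 0); // somewhere on the letter
scene.add(sprite);

function animate(): void {
  requestAnimationFrame(animate);
  // keep per-frame work cheap: no `new THREE.Sprite(...)` in here!
  renderer.render(scene, camera);
}
animate();
```
-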
Showing off this PC cluster rendering these gigantic 3D frames, a customer shakes his head: "Why is this so slow?" I asked why he thought it was slow and he said, "I have an old Windows laptop but if I double click on an image on the desktop, it opens immediately!"
-
OK, but this is cool: it's an image that renders differently on Apple and non-Apple devices. Not sure if devRant will process it, so it might not work, but this is cool. Also a huge vulnerability for Apple, but cool.
-
I write a quick prompt thinking nothing of it, just trying to generate an image to illustrate the next baddie in the dungeon, but I nail it on the first try and the resulting image PHYSICALLY affects me.
What the fuck, now I cannot unsee. I didn't think this was possible. I assumed the whole "cursed photo" thing was just kids being easily disturbed; turns out I was wrong. DO NOT fucking mix hyperrealistic 3D rendering and cryptids, do NOT fucking do it. The moment you get anything resembling a """face""" on the other side, you've just consumed years' worth of nightmare fuel.
Unsettling? Fuck you, this is beyond that. The eyes I saw on the other side of the screen made me RECOIL in fear. You think you know what NOPE means? Let me tell you, I have just *FELT* what __NOPE__ stands for.
I would hope to sleep well tonight, but I don't think I will. SHIIIIIIIIIIIIIIIIIIIIT
-
My badass dev moment was when I read a Valve white paper on text rendering and implemented a dynamic-text version of it in WebGL. That white paper was about signed distance fields and how to hack the alpha channel of an image to bake in some font smoothing data... Holy fuck, that felt good. Oh, and it looked good too!
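(The core of the trick, for the curious; a minimal sketch rather than that exact implementation. The atlas stores a signed distance to the glyph edge in the alpha channel, and the fragment shader smoothsteps around 0.5 for crisp antialiased edges at any scale. Uniform names are mine:)

```ts
// GLSL ES fragment shader, kept as a string for WebGL
const sdfFragmentShader = `
  precision mediump float;
  uniform sampler2D u_sdf;   // font atlas, distance baked into alpha
  uniform float u_smoothing; // roughly 1.0 / (sdf spread * render scale)
  varying vec2 v_uv;

  void main() {
    float dist = texture2D(u_sdf, v_uv).a;
    float alpha = smoothstep(0.5 - u_smoothing, 0.5 + u_smoothing, dist);
    gl_FragColor = vec4(vec3(1.0), alpha); // white glyphs, smooth edges
  }
`;
```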
-
I'm creating a bitmap font right now and wanted to automatically generate an image with some text so I can track how it looks as I progress. gnome-font-viewer displays it fine, but that's nothing compared to some real text. Well, how hard can it be?
First attempt: use ImageMagick to create an image and draw some text. I found a post in the ImageMagick forums from 2017 claiming incorrect rendering of BDF fonts, which was promised to be fixed. Yet convert does exactly nothing besides saying "couldn't read font".
Looking around, there is exactly one tool for the job I'm looking to get done: pbmtext. It works, but doesn't support Unicode. Egh.
Maybe I could write a short script to do it, then? Python's Pillow can import bitmap fonts (cairo can't). Halfway done, I notice it can't deal with anything outside the character range 0..256.
Using FreeType directly is out of the question, as that seems to be as much work as creating the font in the first place. I briefly tried SDL, but the font formats it understands are limited.
So how about converting the font, then, you ask? Everyone seems to be concerned only with the other direction (like OTF to BDF). I tried loading the font into FontForge and exporting an OTF or TTF, but couldn't get anything out of it that ImageMagick recognizes as a font.
It seems fucking impossible to render text to an image with an Unicode BDF font in some automated way.
To add insult to injury, my searches containing "bdf" are always interpreted as if they contained "pdf". I'm not even a Franconian; I can distinguish B and P!
-
I need to position circles on a curve in an HTML page. The number of circles is variable (a minimum of 3 and a maximum of 7) and they should be spaced equally. Each circle is a clickable element with some functionality, and just hovering over a circle should show a thumbnail right beside it.
https://i.stack.imgur.com/jC3gE.png
As shown in the above image, the whole design consists of two of these curves, one on the left and one on the right (I cannot put the whole design here, sorry). One approach I can think of is rendering the curve as an SVG and then positioning the beads (circles) on it, but I cannot think of any way to position those elements exactly on the SVG. Any kind of help or code which achieves this functionality would be a huge help; a rough sketch of that idea follows below.
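(A framework-agnostic sketch of the SVG approach, assuming the curve is a <path id="curve"> element; the selector, radius, and count are placeholders:)

```ts
// place n circles at equal arc-length intervals along the path
const path = document.querySelector<SVGPathElement>("#curve")!;
const n = 5; // anywhere from 3 to 7
const total = path.getTotalLength();

for (let i = 0; i < n; i++) {
  // (i + 1) / (n + 1) keeps the circles away from the path's endpoints
  const pt = path.getPointAtLength(((i + 1) / (n + 1)) * total);
  const circle = document.createElementNS("http://www.w3.org/2000/svg", "circle");
  circle.setAttribute("cx", String(pt.x));
  circle.setAttribute("cy", String(pt.y));
  circle.setAttribute("r", "8");
  circle.addEventListener("mouseenter", () => {
    /* show the thumbnail near (pt.x, pt.y) */
  });
  path.ownerSVGElement!.appendChild(circle);
}
```

In Angular, the same logic can run in ngAfterViewInit, with the path grabbed via @ViewChild instead of document.querySelector.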
Note: I am using Angular, so if there's a nice way to do the same in Angular, I'd be pretty delighted. I know that this isn't the right place to ask this, but any help would be greatly appreciated.
Thanks in advance!