Search - "render problem"
-
curl cheat.sh — get an instant answer to any question on (almost) any programming language from the command line
tl;dr: do curl cht.sh/go/execute+external+program to see how to execute an external program in Go.
And this question: why should I actually have to start a browser, which then has to download tons of JS, CSS and HTML and render it all, only to show me some small output:
some small text, a number, or maybe a plot? Why can't I do a trivial query from the command line
and instantly get what I want?
I decided to create a service that works the way I think such a service should work.
And that is how wttr.in was created.
Nowadays you probably know how to check the weather from the command line, but if not:
curl wttr.in
or
curl wttr.in/Paris
(if you want to know the weather in Paris)
After that, several other services were created (the point was to check how well the console
can solve this kind of task, so I tried to create services providing information
of various kinds: text, numbers, plots, pseudo-graphics, etc.):
curl rate.sx/btc # to check exchange rate of any (crypto)currency
curl qrenco.de/google.com # to QRenco.de any text
And now, last but not least, the gem of this collection: cheat.sh.
The original idea behind the service was simply to deliver various UNIX/Linux command-line cheat sheets via curl. There are several beautiful community-driven cheat sheet repositories, such as tldr, but the problem is that you have to install them before you can use them, and quite often you don't have time for that; you just want to quickly check a cheat sheet.
With cheat.sh you don't need to install anything, just do:
curl cheat.sh/tar (or whatever)
and you will get a cheat sheet for this command (if such a cheat sheet exists in one of the most popular community-driven cheat sheet repositories; and it almost surely does).
But then I thought: why show only existing cheat sheets? Why not generate cheat sheets, or better to say answers, on the fly? And that is how the next major update of cheat.sh came about.
Now you can simply do:
curl cht.sh/python/copy+files
curl cht.sh/go/execute+external+program
curl cht.sh/js/async+file+read
or even
curl cht.sh/python/копировать+файл
curl cht.sh/ruby/Datei+löschen
curl cht.sh/lua/复制文件
and get your question answered
(cht.sh is an alias for cheat.sh).
And it does not matter what human language you used to ask the question. In short, all (human language => programming language) pairs are supported.
One major advantage of console-oriented interfaces is that they are easily
programmable and can be integrated with various systems.
For example, Vim and Emacs plugins were created that let you
query the service directly from the editor, so you can just write your
question in the buffer and convert it into code with a keystroke.
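Even without a plugin, a tiny wrapper is enough to script it. A minimal sketch (the cht function name and the way it joins the query are my own, not something the service ships):

# first argument is the language/topic, the rest is the question;
# spaces become '+' the way the service expects
cht() {
    topic="$1"; shift
    query=$(printf '%s' "$*" | tr ' ' '+')
    curl -s "https://cht.sh/${topic}/${query}"
}

# usage:
#   cht go execute external program
#   cht python copy files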
The service is of course far from perfect;
there are plenty of things to be fixed and to be implemented,
but you can already see its contours, and the contours of this whole approach:
console-oriented services.
The service (as well as the other services mentioned above) is open source; its code is available here:
https://github.com/chubin/cheat.sh
What do you think about this service?
What do you think about this approach?
Have you already heard about these services before?
Have you used them?
If yes, what do you like about them and what are you missing?
-
I need to render point clouds in OpenGL for a project I'm working on. The problem is that it just becomes too damn slow when the point cloud size increases, so I'm trying to use LOD methods and nested octrees to speed things up. I also need to render text in my OpenGL scene, so now I'm stuck trying to show proper labels using Qt and OpenGL, which is completely pissing me off, by the way. And I'm not doing the actual speed-up work I was on earlier, and before that I was working on the actual functionality of my program. So I'm now deviated from the deviation.
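To give an idea of the LOD direction I'm aiming for, here's a rough sketch (hypothetical names, not my actual code): walk the octree and stop descending once a node looks small enough from the camera, so distant chunks get drawn from a coarser, subsampled level.

#include <cmath>
#include <vector>

struct OctreeNode {
    float center[3];
    float halfSize;                      // half the edge length of the node's cube
    std::vector<OctreeNode> children;    // empty for leaf nodes
    unsigned vbo = 0;                    // VBO holding this node's subsampled points
};

// Collect the nodes to draw this frame: descend only while a node still
// looks "big" from the camera's point of view.
void collectNodes(const OctreeNode& node, const float cam[3], float detailCutoff,
                  std::vector<const OctreeNode*>& out)
{
    const float dx = node.center[0] - cam[0];
    const float dy = node.center[1] - cam[1];
    const float dz = node.center[2] - cam[2];
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    const float importance = node.halfSize / (dist + 1e-3f);   // size over distance
    if (importance < detailCutoff || node.children.empty()) {
        out.push_back(&node);            // coarse enough (or a leaf): draw this level
        return;
    }
    for (const auto& child : node.children)
        collectNodes(child, cam, detailCutoff, out);
}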
Fuck my brain.
-
While writing a raytracing engine for my university project (a fairly long and complex program in C++), there was a subtle bug where, under very specific conditions, the ray energy calculation would return 0 or NaN, and the corresponding pixel would end up slightly dimmer than it should be.
Now you might think that this is a trifling problem, but when it happened to random pixels across the screen at random times it would manifest as noise, and as you might know, people who render stuff Absolutely. Hate. Noise. It wouldn't do. Not acceptable.
So I worked at that thing for three whole days and finally located the bug, a tiny gotcha-type thing in a numerical routine in one corner of the module that handled multiple importance sampling (basically, mixing different sampling strategies).
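For the curious, the class of bug looked roughly like this (a hypothetical reconstruction, not the actual engine code): the MIS weight divides by a sum of pdfs, and when one strategy hands back a zero or non-finite pdf, the weight silently turns into NaN or 0 and the pixel comes out dark.

#include <cmath>

// Balance heuristic with a guard: if the denominator is zero or non-finite,
// drop the sample instead of poisoning the pixel with NaN.
float misWeight(float pdfA, float pdfB)
{
    const float denom = pdfA + pdfB;
    if (!(denom > 0.0f) || !std::isfinite(denom))
        return 0.0f;
    return pdfA / denom;
}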
Frustrating, exhausting, and easily the most gruelling bug hunt I've ever done. Utterly worth it when I fixed it. And what's even better, I found and squashed two other bugs I hadn't even noticed, lol.
-
My boss wants to go asynchronous with PHP. Then, to make the backend async, he wants to use beanstalk with Python so it scales.
I said: we can use Node.js, it's already asynchronous. And we don't care about the language, PHP, Python...
Boss: Node.js isn't scalable, there's no security, it's not good enough, it's not safe enough. I've been coding PHP for 15 years and it's better than Node. Too many problems in Node.js version 0.12.
OK, BUT NOW WE HAVE NODE.JS VERSION 6 LTS. WAKE UP. OMG, I GIVE UP. LET'S GO.
-
A little late but whatever.
About half a year ago, I started working on setting up self-hosted (slippy) maps. For one, for privacy reasons; for two, because it'd be under my own control and I could, with enough knowledge, be entirely in control of how it all works.
While the process has been going on for hours every day for about half a year (with regular exceptions), I'll briefly lay out what I've accomplished.
I started with the OpenMapTiles project and tried to implement it myself. This went well but there were two major pitfalls:
1. It was Postgres-database-based. This is fine, but when you want to have the entire world... the queries took insanely long (minutes at lower zoom levels), and quite intimate Postgres/tooling knowledge was required, which I don't have.
2. Due to the long queries and such, the performance was so bad that the maps could take minutes to render, and when you want that in production... yeah, no.
After quite some time I finally let that idea go and started looking into the MBTiles approach: generating SQLite databases of GeoJSON features. Very fast data serving, but rendering the tiles can take quite some time.
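(For context on why serving is so fast: with the standard MBTiles layout, fetching a tile is a single indexed SQLite query. A minimal sketch; the file name and coordinates are just example values.)

import sqlite3

def read_tile(mbtiles_path, z, x, y):
    # MBTiles stores rows in TMS order, so flip y compared to the usual XYZ scheme
    tms_y = (2 ** z - 1) - y
    with sqlite3.connect(mbtiles_path) as db:
        row = db.execute(
            "SELECT tile_data FROM tiles "
            "WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?",
            (z, x, tms_y),
        ).fetchone()
    return row[0] if row else None  # the (usually gzipped) vector tile or image blob

# e.g. read_tile("europe.mbtiles", 12, 2138, 1420)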
After some more months, I finally got the hang of it to the point that I automated 50-70 percent of the entire process. The one problem? It takes a shitload of resources and time to generate a worldwide mbtiles database.
After endless trial and error, I figured out that one can divide a 'render' (an MBTiles, aka SQLite, database) into multiple layers (one for building data, one for water, one for roads, and so on), so I started doing renders that way.
Result? Styling became way easier and more logical, and one could pick specific data to display; only want to display the roads? It's way simpler this way. (Not impossible otherwise, but figuring out how that works... good luck.)
I started rendering all the countries, continents and such this way, and while it seemed like a great idea, the entire world is at 3-4 percent after about a month. And while 40-70 percent of it generates 10 times as fast, that's still way too slow.
Then I figured out that you can fetch data per individual layer/source. Thus, I could render every layer separately, which is way faster.
Tried that with a few very tiny datasets and bam, it works. (And still very fast).
So now I'm generating all layers per continent. I want to do it for the whole world, but I've figured out that that's just not manageable with my resources/budget.
Besides that, I'm working on an API which will have exactly the features I want/need!
-
(long post is long)
This one is for the .net folks. After evaluating the technology top to bottom and even reimplementing several examples I commonly use for smoke testing new technology, I'm just going to call it:
Blazor is the next Silverlight.
It's just beyond the pale in terms of being architecturally flawed, and yet they're rushing it out as hard as possible to coincide with the .Net 5 rebranding silo extravaganza. We are officially entering round 3 of "sacrifice .Net on the altar of enterprise comfort." Get excited.
Since we've arrived here, I can only assume the Asp.net Ajax fiasco is far enough in the past that a new generation of devs doesn't recall its inherent catastrophic weaknesses. The architecture was this:
1. Create a component as a "WebUserControl"
2. Any time a bound DOM operation occurs from user interaction, send a payload back to the server
3. The server runs the code to process the event; it spits back more HTML
Some client-side js then dutifully updates the UI by unceremoniously stuffing the markup into an element's innerHTML property like so much sausage.
If you understand that, you've adequately understood how Blazor works. There's some optimization like SignalR WebSockets for update streaming (the first and only time most Blazor devs will ever use WebSockets; I even see developers claiming that they're "using SignalR, IdentityServer4, gRPC, etc." because the template seeds it for them. The hubris.), but that's the gist. The astute viewer will have noticed a few things here, including the disconnect between repaints, the inability to blend update operations and transitions, and the potential for absolutely obliterative, connection-volatile, abusive transactional logic flying back and forth to the server. It's the bring-out-your-dead approach to seeing how much of your IT budget is dedicated to paying for bandwidth and CPU time.
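If you want the gist spelled out in code, the round trip amounts to something like this (a deliberately crude sketch with made-up endpoint names, not actual Blazor internals):

// Every bound DOM event ships its payload to the server; the server runs the
// handler, re-renders the control, and the client stuffs the markup into innerHTML.
async function onControlEvent(controlId, eventName, payload) {
  const resp = await fetch(`/postback/${controlId}/${eventName}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const html = await resp.text();
  document.getElementById(controlId).innerHTML = html;  // so much sausage
}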
Blazor goes a step further in the server-side render scenario and sends every DOM event it binds to the server for processing. These include millisecond-scale events like scroll, which, at least according to GitHub issues, devs are quickly realizing requires debouncing, though they aren't quite sure how to accomplish that. Since this immediately becomes an issue with tickets saying things like, "scroll event crater server, Ugg need help! You said Blazorclub good. Ugg believe, Ugg wants reparations!" the team chooses a great answer to many problems for the wrong reasons:
gRPC
For those who aren't familiar, gRPC has a substantial amount of compression primarily courtesy of a rather excellent binary format developed by Google. Who needs the Quickie Mart, or indeed a sound markup delivery and view strategy when you can compress the shit out of the payload and ignore the problem. (Shhh, I hear you back there, no spoilers. What will happen when even that compression ceases to cut it, indeed). One might look at all this inductive-reasoning-as-development and ask themselves, "butwai?!" The reason is that the server-side story is just a way to buy time to flesh out the even more fundamentally broken browser-side story. To explain that, we need a little perspective.
The relationship between Microsoft and its enterprise customers is your typical mutually abusive co-dependent relationship. Microsoft goes through phases of tacit disinterest, where it virtually ignores them. And rightly so; the enterprise customers tend to be weaksauce, mono-platform, mono-language types who come to work, collect a paycheck, and go home. They want to suckle on the teat of the vendor that enables them to get a plug-and-play experience for delivering their internal systems.
And that's fine. But it's also dull; it's the spouse that lets themselves go, it's the girlfriend in the distracted boyfriend meme. Those aren't the people who keep your platform relevant and competitive. For Microsoft, that crowd has always been the exploratory end of the developer community: alt.net, and more recently, the dotnet core community (StackOverflow 2020's most loved platform, for the haters). Alt.net seeded every competitive advantage the dotnet ecosystem has, and dotnet core capitalized on them. Like DI? You're welcome. Are you enjoying MVC? Your gratitude is understood. Cool serializers, gRPC/protobuf, first-class APIs, metadata-driven clients, code generation, micro ORMs, etc., etc., et al. Dear enterpriseur, you are fucking welcome.
Anyways, back to Blazor. So, the front-end (Blazor WebAssembly) story begins with the average enterprise FOMO. When enterprises get FOMO, they start to Karen/Kevin super hard, slinging around money, privilege, premier support tickets, etc., until Microsoft, the distracted boyfriend, eventually turns back and says, "sorry babe, wut was that?" You know, shit like managers unironically looking at cloud reps and demanding to know if "you can handle our load!" Meanwhile, any actual engineer hides under the table, facepalming and trying not to die from embarrassment.
-
I decided to do a daily Blender render just as a creative challenge for myself. I was so excited because I built an awesome computer this year and was eager to put it to use, only to be hit with seemingly random BSODs and crashes a couple seconds into the render.
As it turns out, my CPU cooler is woefully inadequate to handle 32 threads running at once, which was the cause of my crashes. Turning the render threads down to 8 keeps the temps low enough to finish the render.
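For anyone hitting the same thing, capping the thread count is also scriptable; a small sketch of what "turning the render threads down to 8" amounts to (I believe these are the right properties, but double-check them in your Blender version):

import bpy

render = bpy.context.scene.render
render.threads_mode = 'FIXED'  # the default 'AUTO' uses every core
render.threads = 8             # keep the 32-thread CPU from cooking itself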
In a way, it's a relief, because the alternative was a problem with my $900 GPU. I've ordered a bigger CPU cooler and well-reviewed heatsink paste, which will hopefully fix it. If not, I'm going to have to go water-cooled, which I really don't want to do.
-
<html><body>shit everywhere<meta>more shit</meta></meta><\meta>countles garbage code lines</body><head>[copy&pasted html code that actually works <img ... />]Tons of shitload</body></body></html>
Me: what are you reading?
PM: some email code that doesn't render well in the browser...
Me: let me see... OMFG!!!! who was the author of this garbage?
PM: Oh! it is not that bad! It was working well 'till today...
Me: But... but... this is really bad! you can't send this to customers!
PM: I think that the problem is the "/" at the img's end...
True story.
-
Another rant reminded me:
I've been at the same company for almost five years now, and I've seen the dev teams grow: several juniors came in, and a few even left, or more precisely, were let go. And for each of those that were let go, the root issue was always the same: not a lack of skills or of the ability to learn their trade on the job, no. It was always communication issues serious enough to render working with them impossible.
So remember, juniors: besides googling and problem solving, the most important skill to master in team-based dev environments is COMMUNICATION. You need to fucking be part of the team or you're out, no matter how good you are technically. If that's too hard, either this is the wrong line of work for you or you need to just go solo. It's that simple, boys, girls, and everyone else on the spectrum.
-
So apparently VMware has a blacklist of video drivers... and if you want to ignore the blacklist, you legit have to manually add a line to the .vmx file...
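For anyone hunting for it, I believe the line in question is the one below; treat the exact key as an assumption on my part and check the docs for your VMware version:

mks.gl.allowBlacklistedDrivers = "TRUE"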
I understand it's not a massive problem everyone will have, but at least add a GUI option...
Fuck me, if you're going to have something that can potentially render your product useless to some users out of the box, at least give them a simple way around it, even if it is just under the VM's advanced options -.-
-
Just a quick rant on JavaScript,
So there are a lot of people hating JavaScript, and not long ago I was one of them, but I've changed my opinion a little.
I think JavaScript is a great way to deal with website programming, as it is quick and efficient, but I would not say to program directly in it; use a JS-compilable language (CoffeeScript, TypeScript, Kotlin (I think), etc.). But then you might say: "Well, no need for JS then, compile it to bytecode". That would defeat the point of how I see web design/dev. The main intent behind webpages is to have an easy and fast way to send code to other computers so they can render it; that's why it is interpreted: "easy to send" and "*all* computers can handle it" with the proper browser. You sometimes need to be able to change the way a website is rendered and/or works, for diverse reasons: copy/pasting data, making it render properly, or using plugins/add-ons to change that code to suit your needs.
I think js should be kept as a “readable byte-code”, so that means: {
Keep comments when compiling the js-compilable code,
Add standardized machine-readable comments that indicate to smart code viewers how to show a particular thing (like having a higher-level function compiled to JS shown as minimized code with an explanation of the function),
Keep it nicely formatted and don't obfuscate (coz that's annoying),
Etc.
}
So you bypass the quirks and all that pesky JS stuff, while keeping its good sides.
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
Part 2:
Web design for non-web:
Ok so things like node.js, electron, react-native and all that stuff; I won’t say they’re bad but...
The reason we have this is that web designers wanted to make desktop apps and were like "Hey! Making web pages is easy! Let's port it to desktop". The problem is: web technologies were made to work on a restricted canvas, aka a browser. That's good on the web, for the reasons mentioned earlier and more. But it's not on the desktop! You're trying to push it outside of those boundaries. It's difficult to make it break out of that canvas and make something that really works! For social media clients and that kind of stuff that you want to make a little more inclusive, yes! It's a great idea (hello devRantron ;), but not if it's an exact copy of the website; just use the website. But for things that are supposed to really make use of YOUR computer: no!
I see those PWAs (progressive web apps, aka mobile apps that are really just an offline website), and I hold the same position: for social media and that sort of thing, yes, great idea! Games? 🤢.
I have way more to say, but I have trouble remembering it all while re-reading, so feel free to comment your thoughts.
Lol, “just a quick rant”
-
WHY DOES IT FUCKING THINK I AM NOT WRITING TO GL_POSITION, FOR FUCK'S SAKE, I JUST WANT TO RENDER MY OBAMA PRISM WITHOUT WORRIES! THIS IS A HUGE PROBLEM THAT I HAVE BEEN WORKING ON FOR 3 FUCKING DAYS NOW WITH NOTHING BUT HEADACHES AS THE REWARD! FUCK YOU, WHOEVER INVENTED OPENGL
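For the record, here's a minimal vertex shader that definitely writes gl_Position; if the driver still complains, the usual suspects are a failed compile/link (check the shader info log) or loading a different file than the one being edited:

#version 330 core
layout(location = 0) in vec3 aPos;
uniform mat4 uMVP;

void main()
{
    // the one line the linker keeps whining about
    gl_Position = uMVP * vec4(aPos, 1.0);
}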
-
Unpopular opinion:
Coding on paper exams actually does help at the beginner stages of learning to code.
It at least makes you think about how to write things simply, without overthinking the problem, makes you familiar with semicolons (so all you stupid fks won't complain that it has taken you 2 hours to find a missing semicolon (actually, who has ever encountered that problem outside of memes?)), makes you learn the syntax; just many benefits that spoiled OOP/FP starter kids can't see, because they rely on autocomplete so much.
God, I hate people who are trying to render things stupid just because they can't see the fking point -.-'
Losing my mind over who goes into "programming" and who calls himself a "developer" is just fueled by that.
-
"Warning: Functions are not valid as a React child. This may happen if you return a Component instead of <Component /> from render. Or maybe you meant to call this function rather than return it."
WHERE. TELL ME WHERE THE FUCK THE PROBLEM IS HAPPENING. OR SHUT THE FUCK UP.
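For the record, the usual trigger is passing the component function itself as a child instead of rendering it; a minimal sketch (made-up component name):

import React from "react";

function Greeting() {
  return <p>hi</p>;
}

// Wrong: React receives the function object as a child and warns.
const bad = <div>{Greeting}</div>;

// Right: render it as an element (or call it, if it's a plain helper function).
const good = <div><Greeting /></div>;

-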
I can't sign in to my wifi router on my phone because of a stupid JS bug. When you tap on the password box, it uses JS to check whether the username field is blank. If it is, it auto-focuses on that instead. The problem is that it doesn't think the username field has any content even when it does. So I can't enter my password! I tried blocking JS, but then it doesn't render anything at all. It has no accessibility whatsoever. Thanks a lot, TP-Link.
-
I spent the entire day trying to force WordPress to enqueue jQuery and fonts.googleapis.com in the footer. Why is this so hard to do? Why is this still a problem? Does Automattic just not know that PageSpeed Insights flagging render blocking is even a thing?
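For what it's worth, the closest thing I've found to a sanctioned way is below: a sketch for the theme's functions.php (the handle names and font URL are just examples, and note that wp_enqueue_style() has no footer option at all). The fifth argument of wp_register_script()/wp_enqueue_script() is $in_footer.

<?php
add_action( 'wp_enqueue_scripts', function () {
    // Re-register core jQuery so it loads in the footer ($in_footer = true).
    wp_deregister_script( 'jquery' );
    wp_register_script( 'jquery', includes_url( 'js/jquery/jquery.min.js' ), array(), null, true );
    wp_enqueue_script( 'jquery' );

    // The font request still lands in <head>; display=swap at least softens the render blocking.
    wp_enqueue_style( 'my-google-fonts',
        'https://fonts.googleapis.com/css2?family=Roboto&display=swap', array(), null );
} );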
-
Is anyone else having this weird browser render issue on Mac OS X El Capitan? As far as I know it happens on both Opera and Chrome (which use somewhat similar engines, of course).
Setup
Macbook Pro Early 2011
Mac OS X El Capitan
Latest Chrome / Opera
-
¡¡Good news!! I finally solved the image upload problem with Lumen and Angular. It turns out that, even though the $request in Lumen looked "empty", the actual image file was there as a binary object inside the Lumen $request variable; it just didn't render because the browser, Postman, and everything else I tried couldn't understand it (maybe something to do with the Content-Type). I figured it out and solved it, and now I can easily save, delete, and even modify images on the server side.
One more thing... My code was fine the whole time. I mean, like, 3 days of hunting a bug that doesn't exist, haha.
Every day we learn some new sh*t.
For those who don't know what I'm talking about, the story is right here:
http://stackoverflow.com/questions/...
PS: thanks guys, I really appreciate your comments: @champion01 @itsdaniel0 @dfox @joetj3