Search - "map"
-
I’m surrounded by idiots.
I’m continually reminded of that fact, but today I found something that really drives that point home.
Gather ‘round, everybody, it’s story time!
While working on a slow query ticket, I perused the code, finding several causes, and decided to run git blame on the files to see what dummy authored the mental diarrhea currently befouling my screen. As it turns out, the entire feature was written by mister legendary Apple golden boy “Finder’s Keeper” dev himself.
To give you the full scope of this mess, let me start at the frontend and work my way backward.
He wrote a javascript method that tracks whatever row was/is under the mouse in a table and dynamically removes/adds a “.row_selected” class on it. At least the js uses events (jQuery…) instead of a `setTimeout()` so it could be worse. But still, has he never heard of :hover? The function literally does nothing else, and the `selectedRow` var he stores the element reference in isn’t used elsewhere.
This function allows the user to better see the rows in the API Calls table, for which there is also a search feature — the very thing I’m tasked with fixing.
It’s worth noting that above the search feature are two inputs for a date range, with some helpful links like “last week” and “last month” … and “All”. It’s also worth noting that this table is for displaying search results of all the API requests and their responses for a given merchant… this table is enormous.
The search field for this table queries the backend on every single character the user types: no debouncing, no submit event, nothing. The actual request runs through a layer of abstraction to parse out and log the user-entered date range, figure out where the request came from, and map out some column names or add additional ones. It also does some hard-to-follow (and amazingly not injectable) ORM condition building. It’s a mess of functional ugly.
The important columns in the table this query ultimately searches are not indexed, despite it only looking for “create_order” records — the largest of twenty-some types in the table. It also uses partial text matching (again: on. every. single. keystroke.) across two varchar(255)s that only ever hold <16 chars — and of which users only ever care about one at a time. After all of this, it filters the results based on some uncommented regexes, and worst of all: instead of fetching only one page’s worth of results like you’d expect, it fetches all of them at once and then discards what isn’t included by the paginator. So not only is this a guaranteed full table scan with partial text matching for every query (over millions to hundreds of millions of records), it’s that same full table scan for every single keystroke while the user types, and all but 25 records (user-selectable) get discarded — and then requeried when the user looks at the next page of results.
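For the record, fetching one page at a time is all of ten lines; a rough sketch (Python with sqlite3 purely for illustration; the rant never shows the schema, so the table and column names here are made up):

```python
# Hypothetical sketch: fetch only the page the paginator will display.
# "api_calls", "request_body", "record_type" are assumed names.
import sqlite3

def search_page(conn: sqlite3.Connection, term: str, page: int = 1, per_page: int = 25):
    offset = (page - 1) * per_page
    # The parameterized LIKE stays injection-safe; the partial text match may
    # still scan, but millions of rows no longer get shipped out and discarded.
    return conn.execute(
        """
        SELECT id, request_body, created_at
        FROM api_calls
        WHERE record_type = 'create_order'
          AND request_body LIKE ?
        ORDER BY created_at DESC
        LIMIT ? OFFSET ?
        """,
        (f"%{term}%", per_page, offset),
    ).fetchall()
```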
What the bloody fucking hell? I’d swear this idiot is an intern, but his code does (amazingly) actually work.
No wonder this search field nearly crashed one of the servers when someone actually tried using it.
-
I've told the same story multiple times but the subject of "painfully incompetent co-worker" just comes up so often.
I have one coworker who never really grew out of the mindset of a college student who just took "Intro to Programming". If a problem couldn't be solved with a textbook solution, then he would waste several weeks struggling with it until eventually someone else would pick up the ticket and finish it in a couple days. And if he found a janky workaround for a problem, he'd consider that problem "solved" and never think about it again.
He lasted less than a year before he quit and went off to get a job somewhere else, leaving the rest of our team to comb through his messy code and fix it. Unfortunately, our team is mostly split across multiple projects and our processes were kind of a mess until recently, so his work was a black box of code that had never been reviewed.
I opened the box and found only despair and regret. He was using deprecated features from older versions of the language to work around language bugs that no longer existed. He overused constants to a ridiculous degree (hundreds of constants, all of which are used exactly once in the entire codebase, stored in a single mutable map variable named "values" because why not). He didn't really seem to understand DRY at all. His code threw warnings in the IDE and had weird errors that were difficult to reproduce because there was just a whole pile of race conditions.
I ended up having to take a figurative hacksaw to it, ripping out huge sections of unnecessary crap and modernizing it to use recent language features to get rid of the deprecation warnings and intermittent errors. And then I went through the same process again for every other project he'd touched.
Good riddance. -
I worked once in a company which had this tourist app that was supposed to show places on a map of the city. Unfortunately, loading more than a couple of places slowed the app to a crawl. Their solution was to limit the number of loaded places to ten and prohibit zooming out. I made it handle thousands of places at the same time.

The main reason for the performance issue was that they sent all the data they had about each place (big, big json objects with large text blobs) to the frontend. This part was easy: I instead sent only the data needed for the map, like coordinates and icon type, obviously.

But the backend still struggled hard with many objects from the DB, because they had built a really shitty ORM or whatever it was supposed to be: every line of data retrieved from the DB was immediately wrapped in some class which derived from another class which had some magic methods in it, which caused absurd loops over all the other objects and even more DB queries at unexpected moments, including in the fucking constructor. So it turned out that the map issue was only the tip of the iceberg, since using any data from the DB was extremely expensive. The hard part was understanding the insanity of this abomination and finding the bottlenecks.
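The frontend half of that fix is basically a projection; a tiny sketch of the idea (field names invented for illustration):

```python
# Send the map only what the map needs; leave the text blobs behind.
import json

def to_map_marker(place: dict) -> dict:
    # Assumed fields; the real schema isn't described in the rant.
    return {
        "id": place["id"],
        "lat": place["lat"],
        "lng": place["lng"],
        "icon": place["icon_type"],
    }

def map_payload(places: list[dict]) -> str:
    return json.dumps([to_map_marker(p) for p in places])
```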
-
Why the fuck do managers think that because a component was built by another developer a shit ton of time ago, we can still reuse that fucking code?
For fuck's sake, I had to rebuild a whole fucking map component that needed contextual filtering, and the fuckers just added extra functionality without consulting me. And gave me a tight schedule because the customer, who btw disappeared for 6 months, will be mad about wasting his precious fucking time.
Fuck these clients.
-
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within its lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover its data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited excel sheets that feed The Beast come from, since its handlers only know that "every other day a new one appears on this shared active directory location". WTF do people often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into an otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And its claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account was morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was used too often, and its purpose was only to give a rough estimate of the total balance in each JIRA issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why The Beast would send emails was still a mystery.
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks and Storms, to weave the signs of Map and Reduce and, with speed and precision, transform The Data into The Insertion Plan.
However, exactly how The Data is to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.
-
Everyone and their dog is making a game, so why can't I?
1. open world (check)
2. taking inspiration from metro and fallout (check)
3. on a map roughly the size of the u.s. (check)
So I thought what I'd do is pretend to be one of those deaf mutes. While also pretending to be a programmer. Sometimes you make believe so hard that it comes true, apparently.
For the main map I thought I'd automate laying down the base map before hand-tweaking it. It's been a bit of a slog. Roughly 1 pixel per square mile (okay, 1973 by 1067). The u.s. is 3.1 million square miles; this works out to 2.1 million square miles instead. Eh.
Wrote the script to filter out all the ocean pixels, based on the elevation map, and output the difference. Still had to edit around the shoreline but it sped things up a lot. Just attached the elevation map, because the actual one is an ugly cluster of death magenta to represent the ocean.
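The filtering step itself is simple; a rough sketch with Pillow, assuming an 8-bit grayscale elevation map where sea level and below reads as 0:

```python
# Hypothetical sketch: mark everything at/below sea level "death magenta",
# pass land heights through unchanged.
from PIL import Image

SEA_LEVEL = 0          # assumption about how the source encodes the ocean
OCEAN = (255, 0, 255)  # the ugly magenta mentioned above

src = Image.open("elevation.png").convert("L")
out = Image.new("RGB", src.size)
for x in range(src.width):
    for y in range(src.height):
        h = src.getpixel((x, y))
        out.putpixel((x, y), OCEAN if h <= SEA_LEVEL else (h, h, h))
out.save("landmask.png")
```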
A consequence of the filtering is that the shoreline is messy and not entirely representative of the u.s.
The preprocessing step also added a lot of inland 'lakes' that don't exist, in areas like death valley. Already expected that.
But the plus side is I now have map layers for both elevation and ecology biomes. Aligning them closely enough that the heightmap wasn’t displaced, and didn’t cut off the shoreline in the ecology layer (at export), was a royal pain, and super finicky. But thankfully that’s done.
Next step is to go through the ecology map, copy each key color, and write down the biome id, courtesy of the 2017 ecoregions project.
From there, I write down the primary landscape features (water, plants, trees, terrain roughness, etc), anything easy to convey.
Main thing I'm interested in is tree types, because those, as tiles, convey a lot more information about the hex terrain than anything else.
Once the biomes are marked, and the tree types are written, the next step is to assign a tile to each tree type, and each density level of mountains (flat, hills, mountains, snowcapped peaks, etc).
The reference ids, colors, and numbers on the map will simplify the process.
After that, I'll write an exporter with python, and dump to csv or another format.
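The exporter can be near-trivial; a sketch of the shape it might take (the color-to-tile table is made up):

```python
# Hypothetical exporter: one CSV row per pixel row, one tile ID per pixel.
import csv
from PIL import Image

COLOR_TO_TILE = {
    (34, 139, 34): 1,    # e.g. conifer forest
    (237, 201, 175): 2,  # e.g. desert
    # ...one entry per key color on the map
}

img = Image.open("biomes.png").convert("RGB")
with open("tiles.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for y in range(img.height):
        writer.writerow(
            COLOR_TO_TILE.get(img.getpixel((x, y)), 0)  # 0 = unknown color
            for x in range(img.width)
        )
```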
Next step is laying out the instances in the level editor that'll act as the tiles in question.
There are a few naive approaches:
Spawn all the relevant instances at startup, and load the corresponding tiles.
Or set up chunks of instances, enough to cover the camera and a buffer surrounding it. As the camera moves, reconfigure the instances to match the streamed-in tile data.
Instances make sense here because, if there's any simulation going on (and I'd like there to be), they can detect in event code when they are in the invisible buffer around the camera but not yet visible and be activated by the camera, or deactivate themselves after leaving the camera-and-buffer area.
The alternative is to let a global controller stream the data in as a series of tile IDs corresponding to the various tile sprites, and code global interaction like tile picking into a single event, which seems unwieldy and not at all manageable. I can see it turning into a giant switch case already.
So instances it is.
Actually, if I do 16^2 pixel chunks, it only works out to 124x67 chunks in all. A few thousand mostly inactive chunks is pretty trivial, and simplifies spawning and serializing/deserializing.
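The chunk bookkeeping is a few lines; a sketch (tile coordinates assumed, clamping to the chunk grid's edges omitted):

```python
# Which 16x16-tile chunks should be live for a camera rectangle,
# including a one-chunk buffer ring around the visible area.
CHUNK = 16

def live_chunks(cam_x: int, cam_y: int, view_w: int, view_h: int, buffer: int = 1):
    x0 = cam_x // CHUNK - buffer
    y0 = cam_y // CHUNK - buffer
    x1 = (cam_x + view_w) // CHUNK + buffer
    y1 = (cam_y + view_h) // CHUNK + buffer
    return {(cx, cy) for cx in range(x0, x1 + 1) for cy in range(y0, y1 + 1)}

# Per frame: activate (live - previously_live), deactivate the reverse.
```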
All of this doesn't account for
* putting back in lakes that aren't present
* lots of islands and parts of shores that would typically have bays and parts that jut out need to be reworked
* great lakes need refinement and corrections
* elevation key map is too blocky. Need a higher-resolution one while reducing color count.
This can be solved by introducing some noise into the elevations, varying, say, within one standard deviation.
* mountains will still require refinement to individual state geography. That's for later on.
* shoreline is too smooth, and needs to be less straight-line and less blocky. Fewer corners.
* rivers need to be added, not just large ones but smaller ones too
* available tree assets need to be matched, as well and as fully as possible, to the types of trees represented in the biome data, so that even if I don't have an exact match, I can still place *something* that's native or looks close enough to what you would expect in a given biome.
Ponderosa pines vs white pines for example.
This also doesn't account for: 1. major and minor roads, 2. artificial and natural attractions, 3. other major features people in any given state are familiar with, 4. named places, 5. infrastructure, 6. cities and buildings and towns.
Also I'm pretty sure I cut off part of florida.
Woops, sorry everglades.
Guess I'll just make it a death-zone from nuclear fallout.
Take that gators!
-
The meeting attendee added that Zuckerberg appeared red-eyed and told staff he might tear up during the meeting, not because of the topics being discussed but because he'd "scratched his eye," Bloomberg reported.
Isn't this soul-satisfying?
Iceberg losing billions in a few hours and pressuring the 'FAANG' bootlickers who joined Meta, while saying on video that he did not expect TikTok as a competitor.
LMAO. Fucking hilarious.
Map the normalisation curve for anything and it's always symmetrical. Facebook's downfall has started.
Source: https://businessinsider.com/mark-zu...
-
My God, is map development insane. I had no idea.
For starters, did you know there are a hundred different satellite map providers?
Just kidding, it's more than that.
Second, there appear to be tens of thousands of people whose *entire* job is either analyzing map data or making maps.
Hell this must be some people's whole *existence*. I am humbled.
I just got done grabbing basic land cover data for a neoscav style game spanning the u.s., when I came across the MRLC land cover data set.
One file was 17GB in size.
It worked out to 1px = 30 meters in their data set. I just need it at a one-mile resolution, so I need it in 54px chunks, which I'll have to average, or find medians on, or do some sort of reduction on.
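For land-cover classes a per-block majority vote makes more sense than averaging (class codes aren't ordinal); a numpy sketch, assuming the raster is already loaded as a 2-D array of small non-negative integer codes:

```python
# Reduce 30 m class-code pixels to ~1-mile tiles by majority vote.
import numpy as np

BLOCK = 54  # 54 px * 30 m is roughly one mile

def reduce_majority(grid: np.ndarray) -> np.ndarray:
    # Crop to a multiple of BLOCK, then carve into BLOCK x BLOCK tiles.
    h = (grid.shape[0] // BLOCK) * BLOCK
    w = (grid.shape[1] // BLOCK) * BLOCK
    blocks = (
        grid[:h, :w]
        .reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
        .swapaxes(1, 2)
        .reshape(h // BLOCK, w // BLOCK, -1)
    )
    out = np.empty(blocks.shape[:2], dtype=grid.dtype)
    for i in range(blocks.shape[0]):
        for j in range(blocks.shape[1]):
            # Most common class code in this tile wins.
            out[i, j] = np.bincount(blocks[i, j]).argmax()
    return out
```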
Ecoregions.appspot.com actually has a pretty good data set, but that's still manual. I ran it through gale and there are actually imperceptible thin-line borders that share a separate *shade* of their region colors with the region itself, so I ran it through a mosaic effect to remove the vast bulk of extraneous border colors. But I'll still have to hand-remove the oceans if I go with image sources.
It's not that I haven't done involved things like that before; naturally, I'm insane. It's just involved.
The reason for editing out the oceans is that they contain a metric boatload of shades of blue.
If I'm converting pixels to tiles, I have to break it down to one color per tile.
With the oceans, the boundary between the ocean and shore (not to mention depth information on the continental shelf) ends up sharing colors when I do a palette reduction, so that's a no-go. Of course, I could build the palette by hand from sampling the map, and then measure the distance of each sampled RGB color to every color in the palette to see which one it primarily belongs to, but as it stands, ecoregions' coloring of the regions has some of them *really close* in RGB value as it is.
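The distance check itself is one line of numpy; a sketch with placeholder key colors:

```python
# Snap a sampled pixel to the nearest palette entry by squared RGB distance.
import numpy as np

palette = np.array([
    (46, 89, 2),     # hypothetical key colors sampled from the map
    (170, 170, 85),
    (255, 0, 255),   # ocean magenta
])

def nearest_palette_index(rgb) -> int:
    d = ((palette - np.asarray(rgb)) ** 2).sum(axis=1)
    return int(d.argmin())
```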
Now what I also could do is write a script to parse the shape files, construct polygons in sdl or love2d, render them to a surface with simplified colors, and output that to bmp.
It's perfectly doable, but technically I'm on savings and supposed to be calling companies right now to see if I can get hired instead of being a bum :P
-
~Of open worlds and post apocalypses~
"Like dude. What if we made an open world game with a map the size of the united states?"
"But really, what if we put the bong down for a minute, and like actually did it?"
"With spiky armor and factions and cats?"
"Not cats."
"Why not though?"
"Cuz dude, people ate them all."
https://yintercept.substack.com/p/...
-
Data wrangling is messy
I'm doing the vegetation maps for the game today, maybe rivers if it all goes smoothly.
I could probably do it by hand, but there are something like 60-70 ecoregions to chart, each with their own species, both fauna and flora. And each has an elevation range it's found at in real life, so I want to use the heightmap to dictate that. Who has time for that? It's a lot of manual work.
And the night prior I'm thinking "oh this will be easy."
yeah, no.
(Also why does Devrant have to mangle my line breaks? -_-)
Laid out the requirements, how I could go about it, and the more I look the more involved it gets.
So what I think I'll do is automate it. I already automated some of the map extraction, so I don't see why I shouldn't just go the distance.
Also it means, later on, when I have access to better, higher-resolution geographic data, updating it will be a smoother process. And even though I'm only interested in flora at the moment, there's no reason I can't reuse the same system to extract fauna information.
Of course, in game design there are some things you'll want to fudge. When the players are exploring outside the rockies in a mountainous area, maybe I still want to spawn the occasional mountain lion as a mid-tier enemy, even though our survivor might be outside the cat's natural habitat. This could even be the prelude to a task you have to do: go take care of a dangerous creature outside its normal hunting range. And who knows why it is there? Wildfire? Hunted by something *more* dangerous? Poaching? Maybe a nuke plant exploded and drove all the wildlife from an adjoining region?
who knows.
Having the extraction mostly automated goes a long way to updating those lists down the road.
But for now, flora.
For deciding plants and other features of the terrain what I can do is:
* rewrite pixeltile to take file names as input,
* along with a series of colors as a key (which are put into a SET to check each pixel against)
* input each region, one at a time, as the key, and the heightmap as the source image
* output only the region in the heightmap that corresponds to the ecoregion in the key (rough sketch after this list)
* write a function to extract the palette from the outputted heightmap. (is this really needed?)
* arrange colors on the bottom or side of the image by hand, along with (in text) the elevation in feet for reference.
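A rough sketch of that pixeltile rework, assuming the biome and heightmap layers are already aligned to the same dimensions (file names and key colors are placeholders):

```python
# Keep only the heightmap pixels whose biome-map color is in the key set.
from PIL import Image

def extract_region(biome_path, height_path, key_colors, out_path):
    keys = set(key_colors)  # SET membership test per pixel, as planned
    biome = Image.open(biome_path).convert("RGB")
    height = Image.open(height_path).convert("RGB")
    out = Image.new("RGB", height.size, (0, 0, 0))
    for x in range(biome.width):
        for y in range(biome.height):
            if biome.getpixel((x, y)) in keys:
                out.putpixel((x, y), height.getpixel((x, y)))
    out.save(out_path)

extract_region("ecoregions.png", "heightmap.png", [(46, 89, 2)], "region_17.png")
```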
For automating this entire process I can go one step further:
* Do this entire process with the key colors I already snagged by hand, outputting region IDs as the file names.
* set up selenium
* selenium opens a link related to each elevation map of a specific biome and saves the text links (so I don't have to hand-open them)
* I'll save the species and text by hand (assuming elevation data isn't listed)
* once I have a list of species and other details, I save them to csv, json, or another format
* then selenium opens this list, opens wikipedia for each species, one at a time, and searches the text for elevation
* selenium saves out the species name (or an "unknown") and the elevation to a text file, along with the biome ID, and maybe the elevation code (from the heightmap) as a number or a color (probably a number, simplifies changing the heightmap later on) (rough sketch of this lookup after the list)
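For static article text, plain requests may be enough instead of selenium; a naive sketch of the lookup (the regex is an assumption about how Wikipedia happens to phrase elevations, so the output is for eyeballing, not trusting):

```python
# Pull short text windows around the word "elevation" from a species page.
import re
import requests

def elevation_snippets(species: str) -> list[str]:
    url = "https://en.wikipedia.org/wiki/" + species.replace(" ", "_")
    html = requests.get(url, timeout=10).text
    return re.findall(r".{0,80}elevation.{0,80}", html, flags=re.I)

for hit in elevation_snippets("Ponderosa pine"):
    print(hit)
```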
Having done all this, I can start to assign species types to specific world tiles. The outputs for each region act as reference.
The only problem with the existing biome map (you can see it below, it's ugly) is that it has a lot of "in-between" colors. There are a few things I can do here. I can treat those as a "mixing" between regions, dictating the chance of one biome's plants or the other's spawning. This seems a little complicated and dependent on a scraped-together standard rather than actual data. So I'm thinking instead what I'll do is implement biome transitions in code, which makes more sense and decouples it from relying on the underlying data. It also prevents species and terrain from generating in, say, towns on the borders of a region, where certain plants or terrain features would be unnatural. Part of what makes an ecoregion unique is that geography has led to relative isolation and evolutionary development in each region (usually thanks to mountains, rivers, and large impassable expanses like deserts).
Maybe I'll stuff it all into a giant bson file or maybe sqlite. Don't know yet.
As an entry level programmer I may not know what I'm doing, and I may be supposed to be looking for a job, but that won't stop me from procrastinating.
Data wrangling is fun.
-
me: builds a python script to transport data in .json format into a config file written in .xml for a coworker
my boss: "I am glad you have earned yourself a reputation as the 'programmer' in our team"
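A guess at the general shape of such a script (the real JSON and XML formats aren't described, so this just flattens top-level keys into entry nodes):

```python
# Hypothetical json -> xml config converter.
import json
import xml.etree.ElementTree as ET

with open("data.json") as f:
    data = json.load(f)

root = ET.Element("config")
for key, value in data.items():
    entry = ET.SubElement(root, "entry", {"key": key})
    entry.text = str(value)

ET.ElementTree(root).write("config.xml", encoding="utf-8", xml_declaration=True)
```
-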
Me to my peer: "Yo the code that they sent us works but it sucks and is insecure"
My peer: "Yo that sucks they should definitely change that, go submit a ticket so they change it up, that really sucks!"
Me: *prepares ticket, gets it checked by peer:
My peer: YOoOoO U cAnT tElL tHeM tO cHaNgE oR tElL tHeM hOw tO wRiTe tHeIr CoDe ThAt ThEy DeLiVeR tO uS!1!1!eleven
--
classics
-
Redux is absolute fucking insanity. There is no way in hell there isn't a better way to do this. Absolutely unintelligible, convoluted piece of garbage.
-
Devs who use the array map method for purposes other than generating a new array, and who use an empty return statement to satisfy the linter, should receive a slap in the face. A gentle one, but a slap nonetheless.
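The rant is about JavaScript's Array.prototype.map, but the same smell exists in Python, shown here to keep all the examples in one language:

```python
rows = ["a", "b", "c"]

# Smell: a throwaway list built purely to run print() for its side effect.
_ = [print(r) for r in rows]

# A plain loop states the intent and fakes nothing for the linter:
for r in rows:
    print(r)
```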
-
Clock: Friday, 15:45 - just go home now, idiot!
My stupid ass: Makes substantial changes to the code I have been working on and breaks it.
-
I had to contact technical support for an API. I’m pretty sure I was emailing with a bot because I was getting all sorts of stupid replies.
Me: I’m using your SDK for language X. It’s returned null for some properties. In the user portal, I can see there are values for those properties for the transaction. I don’t know why I’m not receiving them on my end.
Tech Support: Hi! I see the following was sent in the API response. [Sends api response to me.] You can also go the the portal to see those values.
Me: Yeah, I know. You just repeated everything I wrote to you. I don’t want to go to the portal. I told you I want to figure out why your SDK doesn’t seem to map those properties correctly when I receive the api response.
TS: Let me look at the docs. I think you need to send the properties you want in your request in order to get them back in the reply from the api. Such as <property>value</property> in the xml message.
🤨 The docs do not say that. They don’t even imply that.
Me: What the fuck?! That makes absolutely no sense. We have already established that the api **is** returning values for those properties. I want to troubleshoot why your SDK is mapping them as NULL. -
So, just to recap if you missed the last few episodes. I've been a web developer for years. But I decided to get a degree and go to uni.
Also, I am firmly on the fewer-comments side of the debate about self-documenting code, even though I usually rephrase it and say method and variable names are comments. Basic idea: if something is unclear, you should leave a comment. But before you leave a comment, take a good look at your method. Can you rename a variable? Maybe the method name? Maybe extract the method into smaller methods so it doesn't need a comment? Only if you fail to do so, leave a comment.
Alright, now that we rehashed that, uni coding makes no bloody sense.
There is code that is abbreviated to the max (or min).
And then, they need everything commented. I mean, why do that? Why call the parameters a and b instead of base and exponent? And then say:
"But write a whole article about it above the method". Like:
a is the base for a power operation.
b is the exponent for a power operation.
return int representing a to the power of b
How about just do this:
public static int power(int base, int exponent).
How is this not the same documentation?
Is it because we're at a uni, a place for smart people, and smart people shouldn't have an issue keeping a mental map between the variables and their meaning?
Or is it because they are all mathematicians? All respect to applied mathematics. I mean, the function for exponent calculation: I was not aware that it could be that effective. But on the other hand, keep mathematicians away from programming. I get it, writing maths by hand doesn't have intellisense, and therefore you don't want to write long variable names. It's an old tradition. Yada yada, yah.
But programming is not maths. And maths shouldn't be maths like that. Right naming makes it simpler. It might still be a while until we all LaTeX rather than handwrite and are able to give maths proper naming schemes, but programming is beyond that point. Calling the array you hand into a function A and the one you're returning D makes no fucking sense.
-
I have two laravel apps, both sharing one redis db. One has App/Post, the other has App/Models/Blog/Post. When I unserialize models from the redis cache saved by the other app, I get issues because it cannot find the right model to hydrate.
How would you build a custom map to get the right model?
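Not a Laravel answer, but the general shape of one, shown with Python's pickle since that's the language of the other sketches here: an explicit alias map consulted at deserialization time, so either app can hydrate blobs written under the other's namespace (class names below are hypothetical):

```python
# Remap serialized class paths while unpickling.
import io
import pickle

CLASS_MAP = {
    ("app", "Post"): ("app.models.blog", "Post"),
}

class AliasUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        module, name = CLASS_MAP.get((module, name), (module, name))
        return super().find_class(module, name)

def loads_with_aliases(blob: bytes):
    return AliasUnpickler(io.BytesIO(blob)).load()
```
-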
I can see people who fight biological limitations and deny dogma from a mile away. They shine from within.
If the immense work of creating a full map of pathological automatic thoughts is required to beat anxiety, I will do it, just because it constitutes pure conceptual beauty: the idea of reasoning beating primal biology. When I look in a mirror, I see myself shine.
There are no limitations to personal development other than the laws of physics.
-
My first experience with a computer was actually sitting on my dad's lap and watching him play world of warcraft - and damn, that hit hard
-
imagine you "manage" your application's firewall rules by writing them into spreadsheets and sending them to the fw-admins to implement
imagine they don't implement exactly what you tell them / implement rules for you that you did not ask for
also imagine it is crucial to have a reliable source of information about which firewall rules are and are not implemented for your application, because the firewall guys can't simply check and tell you what rules are implemented for it
:o
-
I am a stupid monkey that spilled coffee all over his keyboard, and now it seems to be EOL.
Looking for some "inspiration" - what keyboards (and mice; might also just buy a whole new set) are you guys using?