Search - "json"
-
"Hey, Root, someone screwed up and now all of our prod servers are running this useless query constantly. I know I already changed your priorities six times in the past three weeks, but: Go fix it! This is higher priority! We already took some guesses at how and supplied the necessary code changes in the ticket, so this shouldn't take you long. Remember, HIGH PRIORITY!"
1. I have no idea how to reproduce it.
2. They have no idea how to reproduce it.
3. The server log doesn't include queries.
4. The application log doesn't include queries.
5. The tooling intercepts and strips out some log entries the legendary devs considered useless. (Tangent: reading the logs now requires a tool, because entries are long JSON blobs instead of plain text.)
6. The codebase uses different loggers just about everywhere, uses a custom logger by default, and often overwrites that custom logger with the default one a few levels in. gg
7. The fixes shown in the ticket are pretty lame. (I've fixed these already, and added one they missed.)
8. I'm sick and tired and burned out and just can't bring myself to care. I'm only doing this so I don't get fired.
9. Why not have the person who screwed this up fix it? Did they quit? I mean, I wouldn't blame them.
Why must everything this company does be so infuriatingly complicated? -
Storytime!
Manager: Hey fullstackchris, the maps widget on our app stopped working recently...
Dev: (Skeptical, little did he know) Sigh... probably didn't raise quota or something stupid... Logs on to google cloud console to check it out...
Google Dashboard: Your bill.... $5,197 (!!!!!!) Payment method declined (you think?!)
Dev: 😱 WTF!?!?!! (Calls managers) Uh, we have HUGE problem, charges for $5000+ in our google account, did you guys remove the quota limits or not see any limit reached warnings!?
Managers: Uh, we didn't even know that an API could cost money, besides, we never check that email account!
Dev: 🤦‍♂️ yeah, obviously you get charged, especially when there have literally been millions of requests. Anyway, the bigger question is where or how our key got leaked. Someone started hammering one of the google APIs with one of our keys (Proceeds to hunt for usages of said API key in the codebase)
Dev: (sweating 😰) did I expose an API key somewhere? Man, I hope it's not my fault...
Terminal: grep found matches in... the CMS codebase!
Dev: ah, what do we have here, app.config, seems fine.... wait, why did they expose it to a PUBLIC endpoint?!
Long story short:
The previous consulting goons put our Angular CMS JSON config on a publicly accessible endpoint.
WITH A GOOGLE MAPS API KEY.
JUST CHILLING IN PLAINTEXT.
Though I'm relieved it wasn't my fault, my faith in humanity is still somewhat diminished. 🤷‍♂️
Oh, and it's only Monday. 😎
Cheers! -
It fucking infuriates me when the docs tell you to create a certain config file "xyz.json" but leave it completely to your imagination where the hell this fucking file should be placed!
Come on, I'm not getting paid for detective work, fucking imbeciles. -
I worked once in a company which had this tourist app that was supposed to show places on a map of the city. Unfortunately it slowed the app down to load more than a couple of places. Their solution was to limit the number of loaded places to ten and prohibit zooming out. I made it handle thousands of places at the same time.
The main reason for the performance issue was that they sent all the data they had about places (big, big JSON objects with large text blobs) to the frontend. This part was easy: I instead sent only the data needed for the map, like coordinates and icon type, obviously.
But the backend still struggled hard with many objects from the DB, because they had built a really shitty ORM or whatever it was supposed to be: every line of data retrieved from the DB was immediately wrapped in some class which derived from another class which had some magic methods in it, which caused some absurd loops over all other objects and even more DB queries in unexpected moments, and also in the fucking constructor. So it turned out that the map issue was only the tip of the iceberg, since using any data from the DB was extremely expensive. The hard part was to understand the insanity of this abomination and find the bottlenecks.
-
Once saw a sign that said "this isn't reality". Being a JS dev I pondered to myself, what context is 'this' in? Is the sign referring to the actual text? Or is it referring to the actual word 'this'? Or the environment we were in? Or the music that we were playing in the background? 🤔🤷‍♀️
-
"So Alecx, how did you solve the issues with the data provided to you by hr for <X> application?"
Said the VP of my institution in charge of my department.
"It was complex sir, I could not figure out much of the general ideas of the data schema since it came from a bunch of people not trained in I.T (HR) and as such I had to do some experiments in the data to find the relationships with the data, this brought about 4 different relations in the data, the program determined them for me based on the most common type of data, the model deemed it a "user", from that I just extracted the information that I needed, and generated the tables through Golang's gorm"
VP nodding and listening intently... "How did you make those relationships?" Me: "I started a simple pattern recognition module through supervised mach..." VP: "Machine learning! That sounds like A.I."
Me: "Yes sir, it was, but the problem was fairly easy for the schema to determ.." VP: A.I, at our institution, back in my day it was a dream to have such technology, you are the director of web tech, what is it to you to know of this?"
Me: "I just like to experiment with new stuff, it was the easiest rout to determine these things, I just felt that i should use it if I can"
VP: "This is amazing, I'll go by your office later"
Dude speaks wonders of me. The idea was simple: read through the CSV that was provided to me, have the parsing done in a notebook, make it determine the relationships in the data and spout out a bunch of JSON that I could use. Hook that up to a simple gorm Golang script and generate the tables from it. Much simpler than the bullshit that we have in PHP, which quite frankly sucks at this, imho. I used this to create a new database, since the previous application had issues. The app will still have a PHP frontend and backend, but now I don't leave the parsing of the data to PHP. The Python codebase will then create the JSON files through the predictive modeling (98% accuracy) and the Go program will populate the DB for me.
There are also some node scripts that help test the data since the data is json.
All in all a good day of work. The VP seems scared since he knows no one on this side of town knows about this kind of tech. Me? I am just happy I get to experiment. Y'all should have seen his face when I showed him a rather large app written in Clojure, the man just went 0.0 when he saw Lisp code.
I think I scare him.
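Not his actual code, obviously, but a toy version of that majority-vote type inference (file name and columns invented) might look like:

```python
import csv
import json
from collections import Counter

def infer_type(values):
    """Vote on a column's type from its raw string values."""
    def kind(v):
        for cast, name in ((int, "integer"), (float, "number")):
            try:
                cast(v)
                return name
            except ValueError:
                pass
        return "string"
    votes = Counter(kind(v) for v in values if v.strip())
    return votes.most_common(1)[0][0] if votes else "string"

with open("hr_export.csv", newline="") as f:  # hypothetical HR dump
    rows = list(csv.DictReader(f))

schema = {col: infer_type([row[col] for row in rows]) for col in rows[0]}
print(json.dumps(schema, indent=2))  # feed this to the table generator
```
-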
"Hey! We have an issue. Your API is returning the JSON with the decimal value as 1.07E instead of 10000000."
".... that's the correct representation of that value" - Me
"Oh. Well, we're a midpoint passing the value to someone else. We see that notation when we print it out, but when we pass it to them they're not getting the field at all, but we're passing it exactly the same to them so we don't understand why they're not able to process it..."
*Don't be a jerk.... don't be a jerk.... don't be a jerk*
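For what it's worth, exponent notation is a perfectly legal way to write a JSON number (RFC 8259); with made-up values, both spellings decode identically:

```python
import json

# Exponent notation is valid JSON number syntax, so these are the same value:
a = json.loads('{"value": 1.0E7}')["value"]
b = json.loads('{"value": 10000000}')["value"]
print(a == b)  # True -- if a downstream consumer drops the field,
               # their parser is the problem, not the notation
```
-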
Come on guys, use those JSON schemas properly. The number of times I see people going "err, few strings here, any other properties ok, no properties required, job done." Dahhh, that's pointless. Lock that bloody thing down as much as you possibly can.
I mean, the damn things can be used to fail fast whenever you misspell properties, miss required properties, format dates wrong - heck, even when you want to validate the set format of an array - and then libraries will throw back an error to your client (or logs if you're just on backend) and tell you *exactly what's wrong.* It's immensely powerful, and all you have to do is craft a decent schema to get it for free.
If I see one more person trying to validate their JSON manually in 500 lines of buggy code and throwing ambiguous error messages when it could have been trivially handled by a schema, I'm going to scream.
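A minimal sketch of a locked-down schema with Python's jsonschema library (schema and data invented for illustration); note that format checks like date-time only run when you pass in a format checker and its optional dependencies are installed:

```python
from jsonschema import FormatChecker, ValidationError, validate

schema = {
    "type": "object",
    "additionalProperties": False,          # misspelled properties fail fast
    "required": ["id", "created", "tags"],  # missing properties fail fast
    "properties": {
        "id": {"type": "integer"},
        "created": {"type": "string", "format": "date-time"},
        "tags": {"type": "array", "items": {"type": "string"}, "minItems": 1},
    },
}

try:
    validate({"id": 1, "craeted": "yesterday", "tags": []},
             schema, format_checker=FormatChecker())
except ValidationError as e:
    print(e.message)  # tells you *exactly what's wrong*
```
-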
JSON de/serializer for C++, recommendations?
I've used Boost.Serialization until now, and it's fine.
But I will need to send some stuff over a REST API to my future web GUI.
I would like something that is easy to build, not bloated, and could handle class serialization.
Shoot! -
I know streams are useful to enable faster per-chunk reading of large files (eg audio/video), and in Node they can be piped, which also balances memory usage (when done correctly). But suppose I have a large JSON file of 500MB (say from a scraper) that I want to run some string content replacements on. Are streams fit for this kind of purpose? How do you go about altering the JSON file 'chunks' separately when the Buffer.toString of a chunk would probably be invalid partial JSON? I guess I could rephrase as: what is the best way to read large, structured text files (json, html etc), manipulate their contents and write them back (without reading them into memory at once)?
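One stdlib-only way to do it (my sketch, not a library recipe): treat it as a plain text stream and carry a small tail between chunks so a match can't be cut in half; the output doesn't need to be valid JSON mid-stream, only once it's fully written:

```python
def stream_replace(fin, fout, old, new, chunk_size=1 << 20):
    """Replace old -> new in a huge text file without loading it fully."""
    carry = ""
    while True:
        chunk = fin.read(chunk_size)
        buf = carry + chunk
        if not chunk:                 # EOF: flush whatever is left
            fout.write(buf.replace(old, new))
            return
        # Hold back a tail so a match spanning two chunks isn't missed.
        safe_end = max(0, len(buf) - (len(old) - 1)) if len(old) > 1 else len(buf)
        last = buf.rfind(old)
        if last != -1 and last + len(old) > safe_end:
            safe_end = last + len(old)   # don't cut through a full match
        fout.write(buf[:safe_end].replace(old, new))
        carry = buf[safe_end:]           # raw tail, re-scanned next round

with open("scraped.json") as src, open("fixed.json", "w") as dst:
    stream_replace(src, dst, '"old_key":', '"new_key":')
```

For structure-aware edits (say, renaming a field only at a certain depth) you'd want an event-based, SAX-style JSON parser instead, so the chunk-boundary problem stops being yours to solve.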
-
FUCK YOU PHP, FUCK YOU SYMFONY AND DEFINITELY FUCK YOU SHOPWARE.
Don't get me wrong, PHP has evolved a lot, but the stuff people are building with it is just the biggest load of fucking shit I have ever seen: Shopware. Shopware is the most ass-sucking abomination to extend. It's nearly impossible to develop anything beyond "use the standard features and shut the fuck up" that is more sophisticated than a fucking calculator.
The architecture of this pile of crap is the worst bullshit ever: a mix of OOP, random use of non-OOP concepts and features, together with an unnecessarily HUGE amount of useless interfaces and classes. Sometimes I feel like it's 90% fucking shitty boilerplate shit.
And don't get me started with TWIG. It's a nice thought, but WHY THE BLOODY FUCK WOULD YOU NOT USE VUE IF YOU ARE ALREADY USING IT FOR A DIFFERENT PART OF SHOPWARE. This makes no fucking sense whatsoever and makes development of new features a huge pain in the ass. I can't comprehend how people actually like using this shit.
OH AND THE DATABASE. OH MY FUCKING GOD. This one is bad. Ever tried to figure anything out in a database where random strings, stored as text in JSON format (yes, MySQL, "relational", you might think), make up some objects or relations at runtime?? Why the fuck do you have foreign and primary keys if you don't use them properly??
Seriously you can't even figure out which data belongs to what because the architecture just sucks fucking ass. FUCK YOU Shopware wankers, you suck, your product sucks, your support sucks, your architecture sucks and you keep releasing new versions that regularly break shit even in minor versions.
I used to like PHP, but not in projects like these. -
Follow-up on https://devrant.com/rants/5001553/...
How the fuck are Jupyter notebooks so popular in research? Like, some dude had an idea to take perfectly good markdown and python code, add a whole lot of transitional properties to make version control impossible, encode it as JSON on the assumption that a human could somehow look at it and make sense of countless escaped characters and base64 encoded data, create dedicated software people need to install in order to read what used to be simple plain text, and think "This. This is what 99% of data researchers will use from now on." And somehow, the overwhelming majority of researchers agreed that this extremely inefficient data format is the best there is and they should develop all their tools around it.
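The under-the-hood view of that rant, with an invented file name: a .ipynb is just a JSON document, and stripping outputs before committing (what tools like nbstripout automate) is the usual workaround to make the format halfway diffable:

```python
import json

with open("analysis.ipynb") as f:       # hypothetical notebook
    nb = json.load(f)

for cell in nb["cells"]:
    if cell.get("cell_type") == "code":
        cell["outputs"] = []            # drop base64 blobs, plots, tracebacks
        cell["execution_count"] = None  # drop the churn that wrecks diffs

with open("analysis.ipynb", "w") as f:
    json.dump(nb, f, indent=1)
```
-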
The Zen Of Ripping Off Airtable:
(patterned after The Zen of Python, for all those shamelessly copying Airtable's basic functionality)
* Columns can be *reordered* for visual priority and ease of use.
* Rows are purely presentational, and mostly for grouping and formatting.
* Data cells are objects in their own right, so they can control their own rendering, and formatting.
* Columns (as objects) are where linkages and other column specific data are stored.
* Rows (as objects) are where row specific data (full-row formatting) are stored.
* Rows are views or references *into* columns, which hold references to the actual data cells. (A minimal sketch of this layout follows the list.)
* Tables are meant for managing and structuring *small* amounts of data (less than 10k rows) per table.
* Just as you might do "=A1:A5" to reference a cell range in google or excel, you might do "opt(table1:columnN)" in a column header to create a 'type' for the cells in that column.
* An enumeration is a table with a single column, useful for doing the equivalent of airtables options and tags. You will never be able to decide if it should be stored on a specific column, on a specific table for ease of reuse, or separately where it and its brothers will visually clutter your list of tables. Take a shot if you are here.
* Typing or linking a column should be accomplishable first through a command-driven type language, held in column headers and cells as text.
* Take a shot if you somehow ended up creating any of the following: an FSM, a custom regex parser, a new programming language.
* A good structuring system gives us options or tags (multiple select), selections (single select), and many other datatypes, and these should be available first-class, programmatically, through a simple command-driven language, like how commands are done in data cells in excel or google sheets.
* Columns are a means to organize data cells, and set constraints and formatting on an entire range.
* Row height can be overridden by the settings of a cell. If a cell overrides the row and column render/graphics settings, then it must be drawn last, drawing over the default grid.
* The header of a column is itself a datacell.
* Columns have no order among themselves. Order is purely presentational, and stored on the table itself.
* The last statement is because this allows us to pluck individual columns out of tables for specialized views.
* *Very* fast scrolling on large datasets, with row and cell height variability, is complicated. Thinking about it makes me want to drink. You should drink too before you embark on implementing it.
* Wherever possible, don't use a database.
If you're thinking about using a database, see the previous koan.
* If you use a database, expect to pick and choose among column-oriented stores and JSON, while factoring in platform support, API support, whether you want your front-end users to be forced to install and set up a full database, and if not, what file-based .so or .dll database engine is out there that also supports video, audio, images, and custom types.
* For each time you ignore one of these nuggets of wisdom, take a shot, question your sanity, quit halfway, and then write another koan about what you learned.
* If you do not have liquor on hand, for each time you would take a shot, spank yourself on the ass. For those who think this is a reward, for each time you would spank yourself on the ass, instead *don't* spank yourself on the ass.
* Take a sip if you *definitely* wildly misused terms from OOP, MVP, and spreadsheets.
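A minimal sketch of the rows-as-views koan above (my reading of it; all names invented): columns own the cells, and a "row" is nothing but an index peeking into every column:

```python
class Cell:
    """A data cell is an object in its own right (it could carry formatting)."""
    def __init__(self, value, fmt=None):
        self.value = value
        self.fmt = fmt

class Column:
    """Columns own the cells plus any column-specific typing/linkage info."""
    def __init__(self, name, type_spec="text"):
        self.name = name
        self.type_spec = type_spec   # e.g. a command like "opt(table1:columnN)"
        self.cells = []

class Table:
    def __init__(self, *columns):
        self.columns = {c.name: c for c in columns}
        self.column_order = [c.name for c in columns]  # order is presentational

    def append_row(self, **values):
        for name, col in self.columns.items():
            col.cells.append(Cell(values.get(name)))

    def row(self, i):
        """A row is a view into the columns, not a stored object."""
        return {name: col.cells[i] for name, col in self.columns.items()}

t = Table(Column("species"), Column("elevation", "number"))
t.append_row(species="mountain lion", elevation=2400)
print({name: cell.value for name, cell in t.row(0).items()})
```
-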
Do you prefer working remote or in the office?
I like to view these as equal choices. I don't think offices are as bad as some people make them out to be (it heavily depends on the environment and company, of course!). As opposed to working remote, offices can help you focus more on work and leave work problems "at work".
Meanwhile, if you're working remote, it's not unlikely for work and personal life to become so intertwined that it's hard to tell them apart anymore. It's hard to not think about work at home if home is where you work.
I believe the ideal is somewhere in between: not entirely remote, but not entirely office-focused either. Mixing and matching seems like the one approach where you get most of the benefits with the fewest negatives. It doesn't seem necessary to always be at the office, but it also doesn't seem good for you to always be cooped up at home. -
Data wrangling is messy
I'm doing the vegetation maps for the game today, maybe rivers if it all goes smoothly.
I could probably do it by hand, but there's something like 60-70 ecoregions to chart, each with their own species, both fauna and flora. And each has an elevation range it's found at in real life, so I want to use the heightmap to dictate that. Who has time for that? It's a lot of manual work.
And the night prior I'm thinking "oh this will be easy."
yeah, no.
(Also why does Devrant have to mangle my line breaks? -_-)
Laid out the requirements, how I could go about it, and the more I look the more involved it gets.
So what I think I'll do is automate it. I already automated some of the map extraction, so I don't see why I shouldn't just go the distance.
Also it means, later on, when I have access to better, higher resolution geographic data, updating it will be a smoother process. And even though I'm only interested in flora at the moment, theres no reason I can't reuse the same system to extract fauna information.
Of course in game design there are some things you'll want to fudge. When the players are exploring outside the Rockies in a mountainous area, maybe I still want to spawn the occasional mountain lion as a mid-tier enemy, even though our survivor might be outside the cat's natural habitat. This could even be the prelude to a task you have to do: go take care of a dangerous creature outside its normal hunting range. And who knows why it is there? Wildfire? Hunted by something *more* dangerous? Poaching? Maybe a nuke plant exploded and drove all the wildlife from an adjoining region?
who knows.
Having the extraction mostly automated goes a long way to updating those lists down the road.
But for now, flora.
For deciding plants and other features of the terrain, what I can do is (rough sketch after this list):
* rewrite pixeltile to take file names as input,
* along with a series of colors as a key (which are put into a SET to check each pixel against)
* input each region, one at a time, as the key, and the heightmap as the source image
* output only the region in the heightmap that corresponds to the ecoregion in the key.
* write a function to extract the palette from the outputted heightmap. (is this really needed?)
* arrange colors on the bottom or side of the image by hand, along with (in text) the elevation in feet for reference.
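A rough sketch of that first extraction step (assuming Pillow, invented file names and key colors, and that the biome map and heightmap line up pixel-for-pixel):

```python
from PIL import Image

# Hypothetical key: the RGB colors that mark one ecoregion on the biome map.
KEY_COLORS = {(34, 139, 34), (46, 160, 67)}

biomes = Image.open("biome_map.png").convert("RGB")
height = Image.open("heightmap.png").convert("RGB")
out = Image.new("RGB", height.size)          # black everywhere else

bp, hp, op = biomes.load(), height.load(), out.load()
width, rows = height.size
for y in range(rows):
    for x in range(width):
        if bp[x, y] in KEY_COLORS:           # set membership check per pixel
            op[x, y] = hp[x, y]

out.save("region_12_heights.png")            # one output per ecoregion ID
```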
For automating this entire process I can go one step further:
* Do this entire process with the key colors I already snagged by hand, outputting region IDs as the file names.
* setup selenium
* selenium opens a link related to each elevation-map of a specific biome, and saves the text links (so I don't have to hand-open them)
* I'll save the species and text by hand (assuming elevation data isn't listed)
* once I have a list of species and other details, I save them to CSV, JSON, or another format
* then selenium opens this list, opens wikipedia for each, one at a time, and searches the text for elevation
* selenium saves out the species name (or an "unknown") for the species, and elevation, to a text file, along with the biome ID, and maybe the elevation code (from the heightmap) as a number or a color (probably a number, simplifies changing the heightmap later on)
Having done all this, I can start to assign species types to specific world tiles. The outputs for each region act as reference.
The only problem with the existing biome map (you can see it below, it's ugly) is that it has a lot of "in-between" colors. There are a few things I can do here. I can treat those as a "mixing" between regions, dictating the chance of one biome's plants or the other's spawning. This seems a little complicated and dependent on a scraped-together standard rather than actual data. So I'm thinking instead what I'll do is implement biome transitions in code, which makes more sense and decouples it from the underlying data. It also prevents species and terrain from generating in, say, towns on the borders of a region, where certain plants or terrain features would be unnatural. Part of what makes an ecoregion unique is that geography has led to relative isolation and evolutionary development of each region (usually thanks to mountains, rivers, and large impassable expanses like deserts).
Maybe I'll stuff it all into a giant bson file or maybe sqlite. Don't know yet.
As an entry level programmer I may not know what I'm doing, and I may be supposed to be looking for a job, but that won't stop me from procrastinating.
Data wrangling is fun. -
Company A (mine) is building a site for company B, company B employs company C to manage their inventory database, company C exports inventory as JSON to company A, company C says this field (SKUs) will be an object (skus = {...}) when it only has 1 value, but an array (skus = [{}, {}, ...]) when there are multiple SKUs, company A (me) tells company B to tell company C to ensure it's always an array.... company B is scared of company C and company A (me) is always cleaning up company C's shit
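Until company C fixes it, the usual defence (a sketch; only the field name is taken from the rant) is to normalise at the boundary so the rest of the code only ever sees a list:

```python
def normalize_skus(payload: dict) -> list:
    """Company C sends an object for one SKU and an array for many;
    always hand the rest of the code a list."""
    skus = payload.get("skus", [])
    return skus if isinstance(skus, list) else [skus]

assert normalize_skus({"skus": {"id": 1}}) == [{"id": 1}]
assert normalize_skus({"skus": [{"id": 1}, {"id": 2}]}) == [{"id": 1}, {"id": 2}]
```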
-
Are you content with your job or always searching for greener pastures?
I'm split in between. Current pay is very decent and working conditions are flexible. However, the work itself is not always that great. I find it comedically true how "hard workers" don't get promoted or bonuses; they get more work. There has recently been a heavy influx of what I'd classify as "shit tickets" since a guy who was the main "shit ticket doer" left the company after burning out.
I work with a small-ish digital agency as a BE dev, so I'm mostly dealing with small to medium scale projects built with WordPress/WooCommerce, often with custom API/ERP integrations on top. I'm not a big fan of the stack as a developer, but as a contractor I can understand the business reasons why it is used. Part of me wants to find something else; part of me thinks I'm looking for a perfect company that doesn't exist and I should lower my expectations. I might find better work for sure, but with the same pay and conditions? It seems unlikely at the moment. The company was recently acquired, so I'm hopeful for the future. -
I rewrote a giant VBA workbook (lots of business formulas, custom pivots of the data) into Java apps/microservices so that new tabs and other reports can be easily added using (JSON) data from the other apps.
In general, I was the only dev on the team who understood that monoliths are hard to change or scale... -
I was today years old when I created my first node.js Hello World app. I managed to add some basic routing with express.js, and managed to read a MySQL table and some JSON. I'm starting to 'get' the node.js environment. Shows you're never too old to learn something new.
-
me: builds a Python script to transport data in .json format into a config file written in .xml for a coworker
my boss: "I am glad you have earned yourself a reputation as the 'programmer' in our team"
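A toy version of that kind of script (nothing like the real one; file names invented), using only the stdlib:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(data, tag="config"):
    """Recursively map parsed JSON onto XML elements.
    Caveat: dict keys must be valid XML tag names."""
    elem = ET.Element(tag)
    if isinstance(data, dict):
        for key, value in data.items():
            elem.append(json_to_xml(value, key))
    elif isinstance(data, list):
        for item in data:
            elem.append(json_to_xml(item, "item"))
    else:
        elem.text = "" if data is None else str(data)
    return elem

with open("settings.json") as f:            # hypothetical input
    root = json_to_xml(json.load(f))
ET.ElementTree(root).write("settings.xml")  # hypothetical output
```
-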
Holy shit! Why is it so hard to find a JSON viewer for android that doesn't absolutely suck ass?
I want a viewer that:
-reads json from the clipboard
-searches json for strings and shows them in context
-allows copying of values, not the key and the value put together
Major bonus points:
-JSONPath querying
-Free/pay version without ads -
Is there a portable DB format like sqlite that stores data like Mongo?
Each record contains key-value pairs.
I guess I could install Mongo again... but I kinda want to play with the data first. It pulls from a web API.
I guess the other alternative is to just save the JSON responses to disk in separate folders and files for now...
And abstract the DB layer behind an interface
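SQLite itself might already be enough: store the raw JSON in a TEXT column and query into it with the JSON1 functions (built into most modern SQLite builds; this sketch uses invented data):

```python
import json
import sqlite3

con = sqlite3.connect("scratch.db")
con.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, body TEXT)")

record = {"name": "widget", "price": 9.99, "tags": ["a", "b"]}  # one API response
con.execute("INSERT INTO docs (body) VALUES (?)", (json.dumps(record),))
con.commit()

# Key-value style querying without committing to a fixed schema:
rows = con.execute(
    "SELECT json_extract(body, '$.name') FROM docs "
    "WHERE json_extract(body, '$.price') < 10"
)
print([r[0] for r in rows])   # ['widget']
```
-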
Why does .NET keep using XML for its project files? Just wondering if there is something I'm missing and there is a good reason why you'd want to use XML instead of JSON.
-
I completely *detest* that the MongoDB *shell* is just a fucking JS interpreter with extra API calls sprinkled on top, and whoever came up with that idea should have all their commits reverted immediately. Working with that thing is a punishment!
I don't even know a way to parse and chew through the JSON it spits out in my own JSON viewers, as it's "Extended", and none of my editors understand that!
Ugh, haven't been this frustrated with a tool for a while...
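For the record, pymongo ships a converter for exactly this. A small sketch (document invented, assuming a reasonably recent pymongo) that flattens Mongo's Extended JSON into something any viewer can chew on:

```python
import json
from bson.json_util import loads   # comes with pymongo

extended = ('{"_id": {"$oid": "64b0c0ffee15c0ffee15c0ff"}, '
            '"when": {"$date": "2023-07-14T00:00:00Z"}}')
doc = loads(extended)              # yields ObjectId / datetime objects
plain = json.dumps(doc, default=str, indent=2)  # flatten for ordinary viewers
print(plain)
```
-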
Newbie here: is storing JSON in SQL (as column data) as weird as I think it is, or are there valid use cases?
The one I heard (didn't get the details) was something like "startups move fast". -
To all the M1 Macbook owners out there that use it for software development: do you regret your pick? If so, why (or what doesn't work)? -
-
I tried Appgyver over Christmas. Since it promised easy front-end (no-)coding, I was looking forward to getting rudimentary frontends done faster.
Well, the first real project that I wanted to start didn't compile anymore (internal error from the service); the page told me to reload and try again.
It failed again... And again.
Fine with me, I only spent 10 minutes on the project at this point.
I then searched for the bug-reporting page and found it. The sad thing is that when I wanted to open a ticket, the server crashed. It didn't even return an HTTP error, just JSON saying there was an error, plus a GUID.
I have to say, if a dev wanted holidays without new issues coming in, that's one way of getting it done. -
Thoughts on the Elastic stack? I.e., if you have used it and regretted it, please share your horror stories. Or, if you feel that it's great, share why that's the case or how it helped your company/business/product/whatever.