Search - "run json"
-
A guy on another team who is regarded by non-programmers as a genius wrote a python script that goes out to thousands of our appliances, collects information, compiles it, and presents it in a kinda sorta readable, but completely non-transferable format. It takes about 25 minutes to run, and he runs it himself every morning. He comes in early to run it before his team's standup.
I wanted to use that data for apps I wrote, but his impossible format made that impractical, so I took apart his code, rewrote it in perl, replaced all the outrageous hard-coded root passwords with public keys, and added concurrency features. My script dumps the data into a memory-resident backend, and my filterable, sortable, taggable web "frontend"(very generous nomenclature) presents the data in html, csv, and json. Compared to the genius's 25 minute script that he runs himself in the morning, mine runs in about 45 seconds, and runs automatically in cron every two hours.
Optimized!
-
I'm convinced code addiction is a real problem and can lead to mental illness.
Dev: "Thanks for helping me with the splunk API. Already spent two weeks and was spinning my wheels."
Me: "I sent you the example over a month ago, I guess you could have used it to save time."
Dev: "I didn't understand it. I tried getting help from NetworkAdmin-Dan, SystemAdmin-Jake, they didn't understand what you sent me either."
Me: "I thought it was pretty simple. Pass it a query, get results back. That's it"
Dev: "The results were not in a standard JSON format. I was so confused."
Me: "Yea, it's sort-of JSON. Splunk streams the result as individual JSON records. You only have to deserialize each record into your object. I sent you the code sample."
Dev: "Your code didn't work. Dan and Jake were confused too. The data I have to process uses a very different result set. I guess I could have used it if you wrote the class more generically and had unit tests."
<oh frack...he's been going behind my back and telling people smack about my code again>
Me: "My code wouldn't have worked for you, because I'm serializing the objects I need and I do have unit tests, but they are only for the internal logic."
Dev:"I don't know, it confused me. Once I figured out the JSON problem and wrote unit tests, I really started to make progress. I used a tuple for this ... functional parameters for that...added a custom event for ... Took me a few weeks, but it's all covered by unit tests."
Me: "Wow. The way you explained the project was; get data from splunk and populate data in SQLServer. With the code I sent you, sounded like a 15 minute project."
Dev: "Oooh nooo...its waaay more complicated than that. I have this very complex splunk query, which I don't understand, and then I have to perform all this parsing, update a database...which I have no idea how it works. Its really...really complicated."
Me: "The splunk query returns what..4 fields...and DBA-Joe provided the upsert stored procedure..sounds like a 15 minute project."
Dev: "Maybe for you...we're all not super geniuses that crank out code. I hope to be at your level some day."
<frack you ... condescending a-hole ...you've got the same seniority here as I do>
Me: "No seriously, the code I sent would have got you 90% done. Write your deserializer for those 4 fields, execute the stored procedure, and call it a day. I don't think the effort justifies the outcome. Isn't the data for a report they'll only run every few months?"
Dev: "Yea, but Mgr-Nick wanted unit tests and I have to follow orders. I tried to explain the situation, but you know how he is."
<fracking liar..Nick doesn't know the difference between a unit test and breathalyzer test. I know exactly what you told Nick>
Dev: "Thanks again for your help. Gotta get back to it. I put a due date of April for this project and time's running out."
APRIL?!! Good Lord he's going to drag this intern-level project for another month!
After he left, I dug around and found the splunk query, the upsert stored proc, and yep, in about 15 minutes I was done.
-
Buckle up kids, this one gets saucy.
At work, we have a stress test machine that tests tensile, puncture and breaking strength for different materials used (wood construction). It had a controller software update that was supposed to be installed. I was called into the office because the folks there were unable to install it; they told me the executable just crashed, and they wanted me to take a look as I am the most tech-savvy person there.
I go to the computer and open up the firmware download folder. I see a couple folders, some random VBScript file, and Installation.txt. I open the TXT, and find the first round of bullshit.
"Do not run the installer executable directly as it will not work. Run install.vbs instead."
Now, excuse me for a moment, but what kind of dick-cheese-sniffing cockmonger has end users run VBScript files to install something in 2018?! Shame I didn't think of opening it up and examining it for myself to find out what that piece of boiled dogshit did.
I suspend my cringe and run it, and lo and behold, it installs. I open the program and am faced with entering a license key. I'm given the key by the folks at the office, but quickly conclude no ways of entering it work. I reboot the program and there is an autofilled key I didn't notice previously. Whatever, I think, and hit OK.
The program starts fine, and I try with the login they had previously used. Now it doesn't work for some reason. I try it several times to no avail. Then I check the network inspector and notice that when I hit login, no network activity happens in the program, so I conclude the check must be local against some database.
I browse to the program installation directory for clues. Then I see a folder called "Databases".
"This can't be this easy", I think to myself, expecting to find some kind of JSON or something inside that I can crawl for clues. I open the folder and find something much worse. Oh, so much worse.
I find <SOFTWARE NAME>.accdb in the folder. At this point cold sweat is already running down my back at the sheer thought of using Microsoft Access for any program, but curiosity takes over and I open it anyway.
I find the database for the entire program inside. I also notice at this point that I have read/write access to the database, another thing that sent my alarm bells ringing like St. Paul's cathedral. Then I notice a table called "tUser" in the left panel.
Fearing the worst, I click over and find... And you knew it was coming...
Usernames and passwords in plain text.
Not only that, they're all in the format "admin - admin", "user - user", "tester - tester".
I suspend my will to die, log in to the program and re-add the account they used previously. I leave the office and inform the peeps that the program works as intended again.
I wish I was making this shit up, but I really am not. What is the fucking point of having a login system at all when your users can just open the database with a program that nowadays comes bundled with every Windows install and easily read the logins? It's not even like the data structure is confusing like minified JSON or something, it's literally a spreadsheet in a program that a trained monkey could read.
God bless them and Satan condemn the developers of this fuckawful program.
-
Rant++
Just want to mention this mother fucker named Allen. Allen is a fuckin' badass. This guy fucks.
This bad mother fucker like single-handedly wrote one of the best fuckin libraries for displaying tabular data, and threw in a shit ton of JSON capabilities just to make it that much fuckin' cooler.
And why? Because he fuckin fucks thats fucking why. I already told you.
And does this son of a fuck support his fucking product? You bet your sweet basement dwelling programming fucking ass that he does.
Dude works that support forum like he no doubt works that pussy. With full and complete knowledge and control, but with a gentle mature touch. Fuckin right.
Do you hate PHP? Well this fuck made a Node version? Do you hate Node? Use that shit with pure JS client side. This dude doesn't give a fuck. Don't have a table? Pass that shit JSON and GET A FUCKIN TABLE!!!
Some dipshit in your company needs to edit a database table but there's no way on sweet baby jesus's green earth you're giving that dumb fuck DB creds? Run that dumb fuck up a fully editable admin portal in like 5 fucking minutes because fuck him.
There are few things in my life I love. My corgi and my kids, and most days my wife.
But always fucking DATATABLES.
So, Allen Jardine... just wanted to give you and your product DataTables and Editor a fucking devRant shout out. It continues to be the one ray of light that works as expected and is extremely well supported when it doesn't, and some days I just need that fucking consistency in my life man. So thanks.
-
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with like any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and using duct tape to scrape something together.
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a difficult solution? You simplify it and break it down, not just disregard it completely; because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks aren't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
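To make the trend concrete, a tiny untested sketch of the style in question: every step is a pure function of its inputs, so two of these chains could be scheduled on different cores with zero contention (the fetchUser stub is hypothetical):

```js
// Each step only reads its input and returns a new value: no globals,
// no shared closure state, so independent chains are free to run in parallel.
const fetchUser = (id) => Promise.resolve({ id, name: 'ada' }); // stand-in for real I/O
const enrich = (user) => ({ ...user, greeting: `hello ${user.name}` });

Promise.all([1, 2, 3].map((id) => fetchUser(id).then(enrich)))
  .then((users) => console.log(users));
```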
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
Okay guys, this is it!
Today was my final day at my current employer. I am on vacation next week, and will return to my previous employer on January the 2nd.
So I am going back to full time C/C++ coding on Linux. My machines will, once again, all have Gentoo Linux on them, while the servers run Debian. (Or Devuan if I can help it.)
----------------------------------------------------------------
So what have I learned in my 15 months stint as a C++ Qt5 developer on Windows 10 using Visual Studio 2017?
1. VS2017 is the best ever.
Although I am a Linux guy, I have owned all Visual C++/Studio versions since Visual C++ 6 (1999) - if only to use for cross-platform projects in a Windows VM.
2. I love Qt5, even on Windows!
And QtDesigner is a far better tool than I thought. On Linux I rarely had to design GUIs, so I was happily surprised.
3. GUI apps are always inferior to CLI.
Whenever a colleague of mine and I had worked on the same parts in the same libraries, and hit the inevitable merge conflict resolving session, we played a game: Who would push first? Him, with TortoiseGit and BeyondCompare? Or me, with MinTTY and kdiff3?
Surprise! I always won! 😁
4. Only shortly into Application Development for Windows with Visual Studio, I started to miss the fun of coding on Linux for Linux.
No matter how much I like VS2017, I really miss Code::Blocks!
5. Big software suites (2,792 files) are interesting, but I prefer libraries and frameworks to work on.
----------------------------------------------------------------
For future reference, I'll answer a possible question I may have in the future about Windows 10: What did I use to mod/pimp it?
1. 7+ Taskbar Tweaker
https://rammichael.com/7-taskbar-tw...
2. AeroGlass
http://www.glass8.eu/
3. Classic Start (Now: Open-Shell-Menu)
https://github.com/Open-Shell/...
4. f.lux
https://justgetflux.com/
5. ImDisk
https://sourceforge.net/projects/...
6. Kate
Enhanced text editor I like a lot more than notepad++. Aaaand it has a "vim-mode". 👍
https://kate-editor.org/
7. kdiff3
Three way diff viewer, that can resolve most merge conflicts on its own. Its keyboard shortcuts (ctrl-1|2|3 ; ctrl-PgDn) let you fly through your files.
http://kdiff3.sourceforge.net/
8. Link Shell Extensions
Support hard links, symbolic links, junctions and much more right from the explorer via right-click-menu.
http://schinagl.priv.at/nt/...
9. Rainmeter
Neither as beautiful as Conky, nor as easy to configure or flexible. But it does its job.
https://www.rainmeter.net/
10. WinAeroTweaker
https://winaero.com/comment.php/...
Of course this wasn't everything. I also pimped Visual Studio quite heavily. Same question from my future self: What did I do?
1. AStyle Extension
https://marketplace.visualstudio.com/...
2. Better Comments
Simple patch that makes different comment styles look different, like obsolete ones shown struck through, or important ones in bold red and such stuff.
https://marketplace.visualstudio.com/...
3. CodeMaid
Open Source AddOn to clean up source code. Supports C#, C++, F#, VB, PHP, PowerShell, R, JSON, XAML, XML, ASP, HTML, CSS, LESS, SCSS, JavaScript and TypeScript.
http://www.codemaid.net/
4. Atomineer Pro Documentation
Alright, it is commercial. But there is no other tool that can keep doxygen-style comments updated. Without this, you have to do it by hand.
https://www.atomineerutils.com/
5. Highlight all occurrences of selected word++
Select a word, and all similar get highlighted. VS could do this on its own, but is restricted to keywords.
https://marketplace.visualstudio.com/...
6. Hot Commands for Visual Studio
https://marketplace.visualstudio.com/...
7. Viasfora
This ingenious invention colorizes brackets (aka "Rainbow brackets") and makes their inner space visible on demand. Very useful if you have to deal with complex flows.
https://viasfora.com/
8. VSColorOutput
Come on! 2018 and Visual Studio still outputs monochromatically?
http://mike-ward.net/vscoloroutput/
That's it, folks.
----------------------------------------------------------------
No matter how much fun it will be to do full time Linux C/C++ coding, and reverse engineering of WORM file systems and proprietary containers and databases, the thing I am most looking forward to is quite mundane: I can do what the fuck I want!
Being stuck in a project? No problem, any of my own projects is just a 'git clone' away. (Or fetch/pull more likely... 😜)
Here I am leaving a place where gitlab.com, github.com and sourceforge.net are blocked.
But I will also miss my colleagues here. I know it.
Well, part of the game I guess?
-
The more I use Go, the more I start to like it. I didn't realize how nice being able to generate binaries for every OS that matters was, until I had that power. It beats the hell out of trying to distribute a Python app, for sure.
Sure, it has its warts.
It’s overly bureaucratic in the same way Java is.
I hate that you can’t import something without using it (most people I’d wager preemptively import libraries they know they’re gonna need even if the code isn’t written yet)
I really wish there was a way to just say “See this JSON blob? All those keys and values are strings, trust me, you don’t need me to tell you the type of each one individually.”
Generics would be nice.
I’d kill for exceptions - any decently sized go program is going to have very many if err checks where most could be condensed down to a single try/catch in most other langs.
I wish the tooling was better. Dependency management was a solved problem when Go was released and yet they chose to ship without it. There’s still no standard. Many hours of time have been wasted dinking with this.
But ya know what? Even with those warts, it’s still easier to write than Java. It’s still write once run anywhere, it’s blazing fast, and doesn’t require your end user to install an entire freakin runtime.
<3 Go
-
I'm not much a fan of JavaScript. In fact, I am not very fond of any dynamic language, but JavaScript is one of my least favorites.
But this isn't about that. I use NodeJS for all of my web serving. Why would I do that? Am I a masochist? Yes.
But this isn't about that. I use NodeJS because having the same language on client and server side is something that web has never really seen before, not in this scale. Something I really really love with NodeJS is socket connections. There's no JSON parsing, no annoying conversion of data types. You can get network data and use it AS IS. If you transmit over socket using JSON, as soon as that data arrives on the server, it is available to use. It gets me so hard.
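Something like this untested sketch is what I mean, assuming socket.io on both ends:

```js
// server.js, assuming the socket.io package
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  socket.on('stats', (data) => {
    // `data` arrives as a ready-to-use object: no manual JSON.parse,
    // no annoying conversion of data types.
    console.log(data.cpu, data.uptime);
  });
});
```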
JavaScript is built to be single-threaded, and this is rooted deep in the language. NodeJS knows this isn't gonna work. And while there's still no way to multi-thread, they still try their best and allow certain operations (usually I/O) to run async as if you were using ajax.
With modern versions of the language, the server and client side can share scripts! With the inclusion of the import keyword, for the first time I have ever seen, client and server can use the same fucking code. That is mindblowing.
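For example (hypothetical file names), a single validation module imported verbatim by both sides:

```js
// shared/validate.mjs, the exact same file on client and server
export const isValidUsername = (s) => /^[a-z0-9_]{3,16}$/.test(s);
```

```js
// used identically in the browser bundle and in the Node server:
import { isValidUsername } from './shared/validate.mjs';
```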
Syntax is still fluffy and data types are still mushy, but the ability to use the same language on both sides is respectable. Can't wait for WebASM to go mainstream and open this opportunity up to more languages!
-
Kinda all other devs translate 'incompetent' as a lack of knowledge.
I would go with: not able to recognize one's own lack of knowledge.
Story 1:
once we had a developer, who was given the task to try out a REST/JSON API using Java
after a week he presented his solution,
2 classes with actual code and a micro-framework for parsing and generating JSON.
So I asked him why he didn't use a framework like Jackson or Gson; during this presentation he felt pretty offended by the question.
A couple of weeks later I met him and he was full of thanks for me, because I had shown him that there are frameworks like that, and he even said sorry for feeling offended.
- no incompetence here -
Story 2:
once I had a lead dev, who was so self-confident he refactored (for no reason but refactoring itself) half the app and committed without trying to compile or run the tests
but not only once, on a regular basis
as you may imagine, he broke the application multiple times and blamed the other devs
- incompetence warning -
Story 3:
once I had a dev who wanted to stay up to date with the latest versions of his libraries
npm update && commit, without compiling or testing, multiple times
- incompetence warning -
Story 4:
once I had a CTO who
* thought email marketing is cutting edge
* removed test systems completely to reduce costs
* liked wordpress
* set VMs to sleep without letting anyone know
- I guess incompetence alert -
-
In one of my teams there was this non-IT girl.
One morning, she asks out loud:
G - Can I run a Json?
Me - Wait! What are you trying to do?
G - I need to deploy my changes into the Dev server.
Suddenly I realized what she meant.
Me - It's Jenkins! Not Json. :D
-
Elasticsearch, from the bottom of my heart...
How can one ecosystem be so batshit crazy inconsistent?
Seemingly every agent does the same (e.g. filebeat vs journalbeat vs packetbeat)… yet there are subtle changes in configuration everywhere.
Plus YML. The shittiest markup language one can use, and the cockslubbing durps used it fucking everywhere.
Makes for great fun when you have complex stuff and need a Python Jinja-to-JSON-to-YML converter to be able to write it without the fucking migraine of counting whitespace with both hands like a stupid 4 year old...
To make it even more absurd: the ingest pipelines, which contain a lot of regular expressions / grok and are thus very prone to quoting issues... Yes. Let's do this in YML too.
If you need to add a fucking manual section on how to debug YML errors, you should have realized what a fucking stupid idea it was, morons.
Now I have the joy of having a python script regex quoting the shit for a Jinja template which then generates JSON which then generates YML.
Why the JSON part?
Yeah... Because ECS and changes in the upstream YML files / GitHub.
To be able to run diffs in a sane way (because in YML distinguishing things is pretty much impossible), JSON acts as an intermediary format, solely for converting the upstream YML to JSON and diffing it against the modified JSON ingest pipelines downstream.
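The conversion step itself is tiny in any language; an untested JS equivalent of what my Python does, assuming the js-yaml package and its v4 `load` API:

```js
const fs = require('fs');
const yaml = require('js-yaml'); // assumption: js-yaml v4

// Normalize a YML pipeline into pretty-printed JSON so a diff tool
// has something sane to chew on.
const doc = yaml.load(fs.readFileSync('upstream-pipeline.yml', 'utf8'));
fs.writeFileSync('upstream-pipeline.json', JSON.stringify(doc, null, 2));
```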
I fucking hate elasticsearch
-
So I've created this account specifically for this rant. I usually just browse anonymously.
I've recently been hired at a big company that is one of the biggest Microsoft users in the world, and my job essentially revolves around making it easier for our collaborators to work with SharePoint (and other MS software).
Never in my life have I hit that much of a roadblock. For the past week I've been trying to integrate what MS calls webparts. To modify the default webparts MS provides, you need to know their properties (or metadata). Except here's the big problem: these are NOT documented anywhere (unless I failed to find it; if you do know where it is documented, please HMU), so I've found myself trying to reverse engineer the JS scripts that are served with SharePoint to figure out what the webpart properties are called and what type of data they are! I've been going through endless github repos using the CSOM nuget package (it's the library everyone uses to interact with SharePoint) and I finally found out about this other library called PnP, which is a wrapper around CSOM that makes it easier to use. That wrapper has a way for me to load an existing page and look at the properties of existing webparts. So here I thought it was the end of my suffering and I could finally get an idea of what it should be. Turns out this method doesn't work, because one of its dependencies has had breaking changes and they still updated it even though it breaks their code! So for the past two days I've been trying random combinations of key values with different data types and JSON serialization methods.
Oh and yeah I've also looked at all the http calls via the chrome network tab, the metadata is not served as an individual file but is computed by Ms servers when they're serving you their html files.
So uh yeah, run from CSOM if you can...
-
Fuck...
I'm not getting that job then.
So I just had one of those interview coding tests on hacker rank and screwed it up big time.
I'm a C# guy and it was a Java position. I worked with Java, like 10 years ago, and they're pretty similar so I brushed up over the last week when I had free time.
Absolutely blew it. It's not like it was hard, I just got into one question (of 6) and it ate up all of my time. The task was simple, make a JSON call, read the data, check if you need more calls, pull out a data field from all the concatenated results and return it in a sorted list. ONE HOUR it took me. A combination of not knowing the API well enough, simple syntax errors and relatively slow compilation.
Godammit.
The next question was implement an Object hierarchy but since I'd run out of time, all I got was the class declarations before the timer ran out.
fuck, fuck, fuck.
I guess the test did its job and weeded out someone who can't contribute to the team...
-
Vodafone India is so shit omfg
Run npm install, ERROR json parse error due to ssl exception
Run pip install, again ssl exception
Run gradle build, again ssl exception!!!
Now every time I gotta make a new project or install a dependency in anything, I have to pray to the blood god that the cache contains a valid/uncorrupted package dependency, or else I'll have to nuke the cache and borrow internet from someone else.
Once I port it to some other operator, I am gonna incinerate this mf sim.
-
Moved all my configuration to json files from normal JS last night. It took me 10 mins to convert. Everything worked perfectly.
This morning I woke up to angry messages from everyone in the team. No one could run their code anymore. It took me a whole day to find out that those JSONs were the issue. I still don't know how though. 😥
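My best guess so far, sketched out: a JS config can compute values and read the environment, and JSON silently can't.

```js
// config.js: things a JS config can do that JSON cannot
module.exports = {
  port: process.env.PORT || 3000,                 // env lookup with fallback
  dbUrl: `postgres://${process.env.DB_HOST}/app`, // computed string
};
// A converted config.json freezes these into literals, e.g.
// {"port": 3000, "dbUrl": "postgres://undefined/app"}, which "works"
// on the machine it was generated on and breaks for everyone else.
```
-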
I know streams are useful for faster per-chunk reading of large files (e.g. audio/video), and in Node they can be piped, which also balances memory usage (when done correctly). But suppose I have a large JSON file of 500MB (say from a scraper) that I want to run some string content replacements on. Are streams fit for this kind of purpose? How do you go about altering the JSON file 'chunks' separately, when the Buffer.toString of a chunk would probably be invalid partial JSON? I guess I could rephrase as: what is the best way to read large, structured text files (json, html etc), manipulate their contents and write them back (without reading them in memory at once)?
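For plain string replacement, the trick is to not parse the JSON at all: keep a small carry-over between chunks so a match straddling a chunk boundary isn't missed. An untested sketch (file names and needle are placeholders):

```js
const fs = require('fs');
const { Transform } = require('stream');

// Streams text through, replacing `needle` with `replacement`; `tail`
// carries the last needle.length-1 chars across chunks so matches that
// straddle a boundary still get caught.
function replacer(needle, replacement) {
  let tail = '';
  return new Transform({
    transform(chunk, _enc, done) {
      const text = tail + chunk.toString('utf8');
      tail = text.slice(-(needle.length - 1));
      this.push(text.slice(0, text.length - tail.length).split(needle).join(replacement));
      done();
    },
    flush(done) {
      this.push(tail.split(needle).join(replacement));
      done();
    },
  });
}

// Passing 'utf8' here makes Node split chunks on character boundaries.
fs.createReadStream('scrape.json', 'utf8')
  .pipe(replacer('http://old.example', 'https://new.example'))
  .pipe(fs.createWriteStream('scrape.out.json'));
```

If the edits are structural rather than textual, a streaming JSON parser (the stream-json or JSONStream packages) is the usual next step.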
-
Finally made my node production server stable enough that I could focus on writing tests*. I start by setting up docker, mocking cognito, preparing the database and everything. Reading up on Node test suites and following a short tut to set up my first unit test. Didn't go smoothly, but it's local and there are no deadlines so who cares. 4 days later, first assert.equal(1+1, 2) passes and I'm happy.
I start writing all sorts of tests, installing everything required into "devDependancies," and getting the joy of having some tests pass on first try with all asserts set up, feels good!
I decide to make a small update to production, so I add a test, run and see it fail, implement the feature, re-run and, it passes!
I push the feature to develop, test it, and it works as intended. Merge that to master and subsequently to one of my ec2 production servers**, and lo and behold, production server is on a bootloop claiming it "Cannot find module `graphql`". But how? I didn't change any production dependencies, and my package lock json is committed so wth?
I google the issue, but can't find anything relevant. The only thing I could guess was that some dependencies (including graphql) were referenced*** in both prod and dev, and were omitted when installed with a prod NODE_ENV, but googling that specific issue yielded no results, and I would have thought npm would be clever enough to see that and would always install those dependencies (spoiler: it didn't for me).
With reduced production capacity (having one server down) I decided to npm uninstall all dev dependencies anyway and see what happens. Aaaaand it works.....
So now I have a working production server, but broken local tests, and I'm not sure why npm is behaving like this...
* Yes I see the irony.
** No staging because $$$, also this is a personal project.
*** I am not directly referencing the same thing twice; it's probably a subdependency somewhere.
-
To me this is one of the most interesting topics. I always dream about creating the perfect programming class (not aimed at absolute beginners though; in the end there should be some usable software artifact), because I had to teach myself at least half of the skills I need every day.
The goal of the class, which has at least to be a semester long, is to be able to create industry-ready software projects with a distributed architecture (i.e. client-server).
The important thing is to have a central theme over the whole class. Which means you should go through the software lifecycle at least once.
Let's say the class consists of 10 Units à ~3 hours (with breaks ofc) and takes place once a week, because that is the absolute minimum time to enable the students to do their homework.
1. Project setup, explanation of the whole toolchain. Init repositories, create SSH keys for github/bitbucket, git crash course (provide a cheat sheet).
Create a hello world web app with $framework. Run the web server, let the students poke around with it. Let them push their projects to their repositories.
The remainder of the lesson is for Q&A, technical problems and so on.
Homework: Read the docs of $framework. Do some commits, just alter the HTML & CSS a bit, give them your personal touch.
For the homework, provide a $chat channel/forum/mailing list or whatever for questions where not only the the teacher should help, but also the students help each other.
2. Setup of CI/Build automation. This is one of the hardest parts for the teacher/uni, because the university must provide the necessary hardware for it, which costs money. But the students' faces when they see that a push to master automatically triggers a build and deploys it to the right place, where they can reach it from the web, are priceless.
This is one recurring point over the whole course, as there will be more software artifacts beside the web app, which need to be added to the build process. I do not want to go deeper here, whether you use Jenkins, or Travis or whatev and Ansible or Puppet or whatev for automation. You probably have some docker container set up for this, because this is a very tedious task for initial setup, probably way out of proportion. But in the end there needs to be a running web service for every student which they can reach over a personal URL. Depending on the students interest on the topic it may be also better to setup this already before the first class starts and only introduce them to all the concepts in a theory block and do some more coding in the second half.
Homework: Use $framework to extend your web app. Make it a bit more user interactive with buttons, forms or the like. As we still have no backend here, you can output to alert or something.
3. Create a minimal backend with $backendFramework. Only to have something which speaks with the frontend so you can create API calls going back and forth. Also create a DB, relational or not. Discuss DB schema/model and answer student questions.
Homework: Create a form which gets transformed into JSON and sent to the backend, backend stores the user information in the DB and should also provide a query to view the entry.
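A minimal sketch of that homework's backend, assuming Express stands in for $backendFramework and an in-memory array stands in for the real DB:

```js
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

const users = []; // stand-in for the real database table

// Store the submitted form data.
app.post('/users', (req, res) => {
  users.push(req.body);
  res.status(201).json(req.body);
});

// The query to view the stored entries.
app.get('/users', (req, res) => res.json(users));

app.listen(3000);
```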
4. Introduce mobile apps. As it would probably too much to introduce them both to iOS and Android, something like React Native (or whatever the most popular platform-agnostic framework is then) may come in handy. Do the same as with the minimal web app and add the build artifacts to CI. Also talk about getting software to the app/play store (a common question) and signing apps.
Homework: Use the view API call from the backend to show the data on the mobile. Play around with the mobile project to display it in a nice way.
5. Introduction to refactoring (yes, really), if we are really talking about JS here, mention things like typescript, flow, elm, reason and everything with types which compiles to JS. Types make it so much easier to refactor growing codebases and imho everybody should use it.
Flowtype would make it probably easier to get gradually introduced in the already existing codebase (and it plays nice with react native) but I want to be abstract here, so that is just a suggestion (and 100% typed languages such as ELM or Reason have so much nicer errors).
Also discuss other helpful tools like linters, formatters.
Homework: Introduce types to all your API calls and some important functions.
6. Introduction to (unit) tests. Similar as above.
Homework: Write a unit test for your form.
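As an illustration, a unit test with Jest (one possible runner) against a hypothetical validator extracted from the form code:

```js
// validate.js, a hypothetical pure function pulled out of the form
const isValidEmail = (s) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s);
module.exports = { isValidEmail };
```

```js
// validate.test.js
const { isValidEmail } = require('./validate');

test('accepts a well-formed address', () => {
  expect(isValidEmail('ada@example.com')).toBe(true);
});

test('rejects a missing domain', () => {
  expect(isValidEmail('ada@')).toBe(false);
});
```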
(TBC)
-
WTF!!!! I come back from a 1 week vacation and nothing has been done and some things seem to have gone to shit...
I transferred the responsibility of running and supporting a report that's supposed to go live to someone else. I show up today and check, and well, none of the reports for the last 2 weeks were run (the first report was already late).
I sent out a few emails asking for feedback on a new JSON log I wanted to add so it can be used by ELK. The person I was asking (a senior dev on a sister team that shares ownership) never replied like he said he would.
-
I HATE WINDOWS' WINDOW MANAGEMENT. I have two monitors and nothing can be maximized. Windows' spaces are terrible as well.
I am building in the back end in VS Code.
I have three terminals open because I need them to run multiple parts of the app locally.
I have postman open to try requests.
I have firefox for the orm system's documentation.
I have my database tool running as well.
I have an ERD diagram floating in a window.
I have another VS Code window showing a diff of my JSON compared to the version I'm replacing.
Also all of my team communication tools.
I have never hated shuffling windows around so much. Would it kill us to use some command line tools for HTTP instead of Postman? Could we please get a decent shell in Windows? Could we get some simple ways to switch between virtual desktops? Click click click. I can't automate clicking. Why do we use the most clicky tools we can find?
-
Disclaimer: I should know what I'm doing but I don't. 😢
I'm a very experienced full stack dev (15+ years), but I don't know the more modern JS frameworks. I'm trying to learn React and I have a little project I'd like to do.
I have a database (in both SQLite and JSON form). I'd like to read from it, parse it and run various displays in a shared hosting environment (that doesn't have node). So webpack. And either an API to get the data or a React-compatible SQL component.
But dagnamit, I cannot find a tutorial or example with this kind of set up and I can't figure it out. What packages do I need and what kind of config?
I genuinely thought this would be a traditional and simple architecture but I'm obviously mistaken. And I'm about to turn in my developer card because I'm clearly a stupid twonk.
Has anyone done this? Do you know of any tutorials or examples of this kind of thing? Is there somewhere else I should ask this question? Thanks anyway...
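One low-ceremony route, sketched with a hypothetical db.json: since the build already goes through webpack, skip the SQLite file on the client and import the JSON version directly (webpack bundles JSON imports out of the box), so no API and no node server is needed at runtime:

```jsx
import React from 'react';
import db from './db.json'; // hypothetical: an array of { id, name, ... } rows

// Plain array filter/sort stands in for SQL's WHERE / ORDER BY.
export default function NameList({ startsWith }) {
  const rows = db
    .filter((r) => r.name.startsWith(startsWith))
    .sort((a, b) => a.name.localeCompare(b.name));
  return (
    <ul>
      {rows.map((r) => (
        <li key={r.id}>{r.name}</li>
      ))}
    </ul>
  );
}
```
-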
Hey, know that joke where people say it ran yesterday but for some reason it doesn't run the next day? The same thing happens to me here with Hecker (a Hacker News 'client' written in Go that I am currently working on)... Oh wait a second, it works again!
Btw, if you care about this, the error seems to be a JSON error, which means that one of the submissions the program scrapes has a wrong JSON format; its error is an invalid character error. Bruh.
-
So I have been thinking...
SQL is a lang that runs in specific software on the server, and helps create data stores (databases and tables) that can be queried & manipulated.
Is there a way to run SQL-like queries on the client side, with no interaction from the backend at all?
Say I have 5 interrelated data models. In a backend world, they would form nice little tables of a DB with all their joins and composite keys. From the server, I shall be querying them like "SELECT name from x where y=z & ..."
But what if I could store them like tables in browser memory and run the same query filters via a query language... is this possible?
I know this poses a certain security risk, but we already use cookies, local storage and a lot of JSON-based shitty client-side storage. Surely it might be possible to have less optimised SQL tables on the frontend with extremely good querying capabilities?
Or am I talking something far-fetched here?
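This exists in a few forms already: sql.js compiles SQLite to WebAssembly, and AlaSQL runs SQL over plain arrays in memory. An untested sketch of the idea with AlaSQL, treating the package and its `?` table-placeholder syntax as assumptions to verify:

```js
const alasql = require('alasql'); // or a <script> tag in the browser

// A hypothetical in-browser "table": just an array of row objects.
const users = [
  { name: 'ada', role: 'admin' },
  { name: 'bob', role: 'user' },
];

// AlaSQL lets a plain array stand in for a table via ?
const rows = alasql("SELECT name FROM ? WHERE role = 'admin'", [users]);
console.log(rows); // [ { name: 'ada' } ]
```
-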
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at this moment).
First question was: What to build first?
The first thing we had to do was to decide what we wanted to build as our first microservice. We went looking for a microservice that can be used read only, consumers could easily implement without overhauling production software and is isolated from other processes.
We’ve ended up with building a catalog service as our first microservice. That catalog service provides consumers of the microservice information of our catalog and its most essential information about items in the catalog.
By starting with building the catalog service the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were being created to replace existing functionality which were working fine.
Because we chose such an isolated functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops using a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was making use of Solr. Due to the use of Lucene it was performing very well as a search engine, but from an engineering perspective it lacked some functionality. It fell short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly and, last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor for Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind, provided out of the box. Managing Elasticsearch was easy, since there are REST APIs for configuration and, as a fallback, there are YAML configurations available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
Even bigger challenge? Which programming language will we use
The team responsible for developing this first microservice consisted of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and Javascript.
What we’ve noticed during researching various languages is that almost all actions done by the catalog service will boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
Actions that can easily be done in a parallel and asynchronous manner, and that mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should hold strong qualifications for those kinds of actions.
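That paradigm fits in a handful of lines of Node; a sketch with a hypothetical upstream URL, written against today's runtime (Node 18+ for the built-in fetch), not the version we started on:

```js
const http = require('http');

http.createServer(async (req, res) => {
  // 1. Execute a HTTP call to fetch some JSON
  const upstream = await fetch('http://catalog-source.internal/items'); // hypothetical source
  const items = await upstream.json();

  // 2. Transform the JSON to a desired output
  const output = items.map(({ id, name, price }) => ({ id, name, price }));

  // 3. Respond with the transformed JSON
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify(output));
}).listen(8080);
```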
Another thing to note is that some functionality built using the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality will trigger several requests to the catalog service per use by a user.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we had set. Eventually we decided to use Node.js, which is better suited for the things we were looking for, as described earlier. Node.js provides a non-blocking I/O model and, being event driven, helps us develop a high-performance microservice.
The leap to start programming in Node.js is relatively small, since it basically is Javascript, a language that was familiar to the developers around that time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service it is relatively easy to switch that choice for magic ‘X’ or component ‘Z’. By focussing on creating a solid API the components that are driving that API don’t matter that much. It should do what you ask of it and when it is lacking you just replace it.
Many more headaches to come later this year ;)
-
So I thought to myself.
Hey I'll go ahead and use python, it will make this easier than using c++.
So I start looking at python.
And I start looking at specific common functions that c/c++ and .net all offer.
Like writing a fucking png image.
And I start seeing 3rd party libs that are at version 0.2
And so I say, this is supposedly the language data people love. which would include searching gis data too right ?
Everybody touts this language for ai and machine learning and all this other bullshit but I can't even create a fucking image ? And every document points to this same lib where it comes to creating this image ? at version 0.2 ?? 20 years or more after PNG was created ?
So I look up geotiff, and see 0.4........ so..... what is this language good for again ? I can parse json in javascript and do the other things I want...
Oh scatterplot generation ? What is it being displayed in jpeg ? Maybe the jpeg implementation is good. because you know i just use scatterplots constantly. yup. most of the data I require to analyze uses scatterplots. not risk.
fun.
oh and look django.... who the fuck uses django ?
and omg it makes me format my text or the run bombs.....
jesus. rpg much ?
I'm just... I'm not seeing...
WHY ?????????
and then I have zimmermans voice buzzing in my head about just using goddamn .net
-
TLDR: RTFM...
My dad (taught me how to code when I was a kid) was stuck serializing a Java enum/class to XML.... The enum wasn't just a list of string values but more like a Map(String,Object>.
He tried to annotate it with XMLEnum but the moment I saw this enum, I'm thinking that's unlikely to work.... Mapping all that to just a string?
He tried annotating the Fields in it using XMLAttribute but clearly wasnt working...
Also he used XMLEnumValue, but from his test run I could clearly see it just replaced whatever the enum value would've been with some fixed String...
Me: Did you read the documentation or even the javadocs?
Dad: no, I don't like reading documentation and the samples didn't work.
I haven't done XML serialization for years, though I did use JSON, and my first instinct was... you need a TypeAdapter to convert the enum to a serializable class.
So I did some Googling, read the docs, then just played around with the code and figured out how to serialize a class and also how to implement XmlTypeAdapter.... 20 mins ...
Texted him back with screenshots and basically:
See, it's not that hard if you actually read up on the javadocs and realize your enum is more like a class, so probably the simple way won't work...
-
I've been helping a friend of mine with his postgraduate project the last 3 months.
It was a Java based program made in Processing. Though I am not a Java developer and I never used processing before, it wasn't that hard to write the logic of the program.
I noticed that sometimes Java made me use loops for almost everything.
Also I had to communicate between server and client via JSON, but I had to write it manually as a string due to the lack of native key/value (JSON) literals in Java.
The main trial though was with the logic of the project. It was supposed to be a framework to be extended by custom user classes. I had to change the core classes I made many times, because the user class had methods that should run while the parent class didn't have them declared. That could be my fault for not knowing how to write a desktop application framework, but you can't expect a framework to be extended in a compiled state, or so I think. Processing, on the other hand, doesn't seem to like the idea of an external java library. At least it didn't work out for me; it should be able to work normally.
In the end the project was never as completed as we wanted. It could run a basic sim, but we hadn't the time to test other possibilities.
-
After a 3 hour fight with a custom Ajax request, I have won the fight of getting a limiter to run on the output. My goal was to display only x results from a JSON response, and I finally got it working by putting everything in an if statement. Might not be the cleanest way, but it works.
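For future reference, a cleaner route than counting inside an if, assuming the response parses to an array (names are hypothetical):

```js
// Keep only the first x records, then render those.
const limited = JSON.parse(responseText).slice(0, x);
limited.forEach(renderRow); // renderRow is whatever builds the DOM
```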
-
Dialogflow documentation is ABSOLUTE TRASH. Trying to run the example code? It gives you a super helpful error: `Unexpected error determining execution environment`. Uh, yes, indeed. What it means? IT MEANS THAT YOU PROVIDED NO CREDENTIALS. Because, as we all know, providing no credentials should end in an error of 'determining execution environment', of fucking course.
You want to know how to provide credentials? Think again, all examples in the ENTIRE DOCUMENTATION assume that you're running the code... from their servers. Seriously. You wanna know how to authenticate your shit? NOT IN THIS DOCUMENTATION, LOSER. You want to know what exactly is happening when you're initializing your client with `new dialogflow.SessionsClient()`? Good luck, documentation is on another platform. For .NET. Because fuck you.
Also, you think you can store your auth info in a neat .env file? THINK AGAIN, because google is above such petty things as industry standards, you're getting a .json file and you're gonna like it, HAVE FUCKING FUN.
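For anyone landing here with the same rage, the two standard auth routes for these Google client libraries, hedged since the package name has shifted between `dialogflow` and `@google-cloud/dialogflow` over the years:

```js
const dialogflow = require('@google-cloud/dialogflow'); // name may differ by version

// Option 1: hand the client the dreaded .json key file directly
const client = new dialogflow.SessionsClient({
  keyFilename: './service-account.json', // hypothetical path
});

// Option 2: let the library discover it from the environment,
// set before the process starts:
// GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```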
Dear google, die in a fire.
Sincerely yours.
-
So, do any of you poor fuckers have the opportunity - nay, PRIVILEGE - of using the absolute clusterfuck piece of shit known as SQL Server Integration Services?
Why do I keep seeing articles about how "powerful" and "fast" it is? Why do people recommend it? Why do some think it's easy to use - or even useful?
It can't report an error to save its life. Its logging is fucked. Its logging API is also fucked. For example, depending on where you want to log a message, it's a totally different API, with a billion parameters, most of which you need to supply "-1" or "null" to just to get it to FUCKING DO SOMETHING. Also - you'll only see those messages if you run the job within the context of SQL FUCKING SERVER - good luck developing on your ACTUAL FUCKING MACHINE.
So apart from shitty logging, it has inherited Microsoft's insane need to make everything STATICALLY GODDAMN TYPED. For EVERY FUCKING COMPONENT you need to define the output fields, types and lengths - like this is 1994. Are you consuming a dynamic data structure, perhaps some EAV thing from a sales system? FUCK YOU. Oh - and you can't use any of the advances in .NET in the last 10 years - mainly, NuGet and modern C# language features.
Using a modern C# language feature REMOVES THE ABILITY TO FUCKING DEBUG ANYTHING. THE FUCKER WILL NOT STOP ON YOUR BREAKPOINTS. In addition - need a JSON parsing library? Want to import a SDK specific to what you're doing? Want to use a 3rd party date library? WELL FUCK YOU. YOU HAVE TO INDEPENDENTLY INSTALL THE ASSEMBLIES INTO THE GAC AND MAKE IT CONSISTENT ACROSS ALL YOUR ENVIRONMENTS.
While i'm at it - need to connect to anything? FUCK YOU, WE ONLY INCLUDE THE MOST BASIC DATABASE CONNECTORS. Need to transform anything? FUCK YOU, WRITE A SCRIPT TASK. Ok, i'd like to write a script task please. FUCK YOU IM GOING TO PAUSE FOR THE NEXT 10 MINUTES WHILE I FIRE UP A WHOLE FUCKING NEW INSTANCE OF VISUAL STUDIO JUST TO EDIT THE FUCKING SCRIPT. Heaven forbid you forget to click the "stop" button after running the package and open the script. Those changes you just made? HAHA FUCK YOU I DISCARDED THEM.
I honestly can't understand why anyone uses this shit. I guess I shouldn't really expect anything less from Microsoft - all of their products are average as fuck.
Why do I use this shit? I work for a bunch of fucks that are so far entrenched in Microsoft technologies that they literally cannot see outside of them (and indeed don't want to - because even a cursory look would force them to conclude that they fucked up, and if you're a manager thats something you can never do).
Ok, rant over. Also fuck you SSIS
-
Implemented a feature against a "restful" json api. The feature works, test-driven development ftw.
Yet when run against the live API: certain important fields all contain only the value `0`.
Confused, I asked around what's going on, expecting a bug in the API. Now I've been told that those fields never worked, and the relevant information has to be gathered either by querying a (deprecated!) MySQL database, or by using a different endpoint, increasing the HTTP request overhead by a factor of over 1000.
We call it teamwork.
-
Ugh, retrieving specific data fields nested within several arrays and objects in Javascript/Json jacks me up every fucking time!!!
Anyone ever fuck with the MapQuest geolocation/geoqueries api??
I'm trying to retrieve the lat/lng values out of responses generated from submitted address strings, and it's nested about 8 json layers deep.
I feel like I'm overthinking this?
I can access the values in my web console, and can reach them after using the console to assign them to a temp var, but can't get to the values from my actual js code. Only when I run some business logic from the console.
Here's a shitty example of me explaining the tree:
[{...}]
  0:
    locations: Array(3)
      0:
        latLng:
          lat: <data here>
          lng: <data here>
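Going only by the tree above, the hop list is response[0].locations[0].latLng; optional chaining keeps a missing level from throwing (names assumed from the tree):

```js
// `response` is the parsed body sketched above
const latLng = response?.[0]?.locations?.[0]?.latLng;
if (latLng) {
  const { lat, lng } = latLng;
  console.log(lat, lng);
}
```

And "works from the console but not from code" usually points at timing: the fields get read before the async geocode call resolves, so the destructure belongs inside the response callback/then.
-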
Been stuck a week with JSON serializer struggles on the backend I'm working on... First of all, this project has source code dating back to 2013, and the dudes back then decided to use three versions of json. So you have your usual application/json and then two custom ones.
Not happy with that, they decided to use two serializers, XStream and Jackson. One custom format and application/json run through XStream, and the other, more legacy custom JSON runs through Jackson. So this is a bloody mess.
But now they want application/json running through Jackson, and this is breaking all the regression tests. Have to reimplement all the type, field, alias and other kinds of mappings they made for XStream, and sort out all the regressions this causes.
And the dude who designed all of this is revered in the company, although he left a while back. Not sure if I'm too much of an idiot to understand the utter brilliance of the approach, or if it's just poorly designed... Fuck my life, those due dates just keep creeping closer and closer and this kinda crap just keeps coming :S
-
Huh, just created a job in Jenkins to run msbuild over some solutions. The job accepted a parameter called "Configuration", which was just some simple JSON.
Every time I ran the job, msbuild failed.
Then I realised that the Debug/Release info for a .NET solution gets put into an environment variable called "Configuration" that msbuild creates and relies on! >.<
-
I know I sound stupid but I need help. I create a repo on GitHub using the gh API:
```js
export async function createARepo({ name, description, token }) {
  const headers = {
    "Authorization": `token ${token}`,
    "Accept": "application/vnd.github.v3+json",
  };
  const { data } = await axios({
    method: "POST",
    url: "https://api.github.com/user/repos",
    data: { name, description, auto_init: true },
    headers,
  });
  return data;
  // console.log(res)
}
```
When I run this code it only creates an empty repo with a README, but I also want to add a file with a .html extension to the project. Can anybody help me with how to do this?
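The create-repo call can't add files; committing a file is a second request, PUT /repos/{owner}/{repo}/contents/{path} with a base64-encoded body. A sketch in the same style as the snippet above (axios and the token header reused; names are placeholders):

```js
export async function createAFile({ owner, repo, path, content, token }) {
  const headers = {
    "Authorization": `token ${token}`,
    "Accept": "application/vnd.github.v3+json",
  };
  const { data } = await axios({
    method: "PUT",
    url: `https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
    data: {
      message: `add ${path}`,
      content: Buffer.from(content).toString("base64"), // the API wants base64
    },
    headers,
  });
  return data;
}

// e.g. createAFile({ owner, repo, path: "index.html", content: "<!doctype html>...", token })
```
-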
I am trying to extract data from the Pub/Sub subscription, and once the data is extracted I want to do some transformation. Currently it's in bytes format. I have tried multiple ways to extract the data in JSON format using a custom schema; it fails with an error:
TypeError: __main__.MySchema() argument after ** must be a mapping, not str [while running 'Map to MySchema']
**readPubSub.py**
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
import json
import typing

class MySchema(typing.NamedTuple):
    user_id: str
    event_ts: str
    create_ts: str
    event_id: str
    ifa: str
    ifv: str
    country: str
    chip_balance: str
    game: str
    user_group: str
    user_condition: str
    device_type: str
    device_model: str
    user_name: str
    fb_connect: bool
    is_active_event: bool
    event_payload: str

TOPIC_PATH = "projects/nectar-259905/topics/events"

def run(pubsub_topic):
    options = PipelineOptions(
        streaming=True
    )
    runner = 'DirectRunner'
    print("I reached before pipeline")
    with beam.Pipeline(runner, options=options) as pipeline:
        message = (
            pipeline
            | "Read from Pub/Sub topic" >> beam.io.ReadFromPubSub(subscription='projects/triple-nectar-259905/subscriptions/bq_subscribe')  # .with_output_types(bytes)
            | 'UTF-8 bytes to string' >> beam.Map(lambda msg: msg.decode('utf-8'))
            | 'Map to MySchema' >> beam.Map(lambda msg: MySchema(**msg)).with_output_types(MySchema)
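            # NOTE: msg is still a JSON *string* here, which is why MySchema(**msg)
            # raises "argument after ** must be a mapping, not str";
            # json.loads(msg) would be needed before the ** unpacking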
| "Writing to console" >> beam.Map(print))
print("I reached after pipeline")
result = message.run()
result.wait_until_finish()
run(TOPIC_PATH)
If I use it directly, as below:
message = (
    pipeline
    | "Read from Pub/Sub topic" >> beam.io.ReadFromPubSub(subscription='projects/triple-nectar-259905/subscriptions/bq_subscribe')  # .with_output_types(bytes)
    | 'UTF-8 bytes to string' >> beam.Map(lambda msg: msg.decode('utf-8'))
    | "Writing to console" >> beam.Map(print))
I get output as
{
'user_id': '102105290400258488',
'event_ts': '2021-05-29 20:42:52.283 UTC',
'event_id': 'Game_Request_Declined',
'ifa': '6090a6c7-4422-49b5-8757-ccfdbad',
'ifv': '3fc6eb8b4d0cf096c47e2252f41',
'country': 'US',
'chip_balance': '9140',
'game': 'gru',
'user_group': '[1, 36, 529702]',
'user_condition': '[1, 36]',
'device_type': 'phone',
'device_model': 'TCL 5007Z',
'user_name': 'Minnie',
'fb_connect': True,
'event_payload': '{"competition_type":"normal","game_started_from":"result_flow_rematch","variant":"target"}',
'is_active_event': True
}
{
'user_id': '102105290400258488',
'event_ts': '2021-05-29 20:54:38.297 UTC',
'event_id': 'Decline_Game_Request',
'ifa': '6090a6c7-4422-49b5-8757-ccfdbad',
'ifv': '3fc6eb8b4d0cf096c47e2252f41',
'country': 'US',
'chip_balance': '9905',
'game': 'gru',
'user_group': '[1, 36, 529702]',
'user_condition': '[1, 36]',
'device_type': 'phone',
'device_model': 'TCL 5007Z',
'user_name': 'Minnie',
'fb_connect': True,
'event_payload': '{"competition_type":"normal","game_started_from":"result_flow_rematch","variant":"target"}',
'is_active_event': True
}
Please let me know if I'm doing something wrong while parsing the data to JSON. Also, I am looking for examples of how to do data masking and run some SQL within Apache Beam.
-
A prototype I'm working on has a feature that fetches thousands of DB rows. The feature is running 'slow' according to everyone except me and my PC. I can speed up one section slightly by splitting a JSON array into two parts to reduce calls for the third-party assets, because that's probably the cause.
Still doesn't 'work', say the hapless technoweenies. More troubleshooting.
The cause: Mozilla on a single computer chokes on a third-party API, in this case Mapbox.
How do I 'make mapbox run faster' on Mozilla?