Search - "step functions"
I've been fairly lucky with my bosses lately as I've progressed in my programming career. But my absolute worst boss was when I first started working in an office environment doing data entry. My boss at the time was terrible, and she was always against innovation or process improvement. She also constantly tried to make herself look good by taking credit for the accomplishments of others. If she screwed up, it was your fault, and she was "always buried in email" so she could never respond to PTO requests or escalations of issues between departments.
My whole family pretty much worked in various roles in the department, and after my mother left the company she fired my brother for no reason, saying he was "sleeping". But I worked right next to him, and he's tall and had to slouch just to comfortably see his computer screen, since the same manager refused to approve workstation improvements for him.
Our workflow was to receive daily spreadsheets of health care claims that we had to manually process and enter into the system. So, being the lazy innovator that I am, always trying to find ways to work efficiently, I delved into studying Visual Basic and programmed a few functions and tools in Excel to analyze, highlight, and process some of the data, since the claims on the spreadsheets always followed a specific pattern. This was all before I had any formal education in computer science, so the program was very basic and clunky, but it tripled my efficiency. When I brought it up to my boss to spread it among the rest of our team, who could have used it after a short 20-minute training, she struck it down, saying any training or use of it would be a waste of resources since it was too technical and complex to be used, and that if I kept improving or using it I would be fired. It was literally copy and paste from one spreadsheet to the other en masse, plus clicking a button to sort and fill in the blanks.
Eventually I showed it to the director of the department while working on a large data entry project with her, and I was later offered a job as a technical analyst, responsible for the codebase that generated the reports for the department, including all the reports my old boss used. I would occasionally mess with her to get back at all the crap she gave me and my brother: since all the reports were blind carbon copied to everyone, I would send out her reports on a delay while everyone else got them on time. It eventually got her in so much crap she had to step down as a manager. She still works at the same company, which I started working at again earlier this year, and like the many careers she's ruined, she eventually ruined her own within the company 😂
When I was in school, some guys walked up to me and asked:
G: Are you Feeno?
Me: Yes, what's up?
G: We need our FY project on school management system done.
Me: Okay?
G: How much will that cost us?
Me: *confused, because I was still a freshman. At that point the only programming language I knew was elementary QBasic. I couldn't even write a hello world program without the help of Google*
So I played along, because yes, we're talking about money here.
Me: It will cost you guys N amount of money (*improvised deep voice*).
G: Okay. Fair price.
* Right there they transferred half the requested amount to me. *
Holy moly! These guys aren't joking around. I don't know shit! They clearly mistook me for a senior student whose first name is Feeno; to me that was just a nickname my friends used.
I'm in this one for sure, and it's a do-or-die transaction, cus I'm returning no fucking money. I told my friends what had happened, and they insisted I return the money to the students and admit I couldn't deliver the project they were requesting.
Fuck all of yah! I'm keeping this money. That same afternoon I visited the school library with the intention of writing the code with the help of YouTube tutorials. I didn't find anything useful for QBasic, as I thought I could write a full-fledged school management system using QBasic.
I was lucky enough to find an existing source code on CodeProject (God bless that Indian guy). The source was in PHP, and the tutor gave a step-by-step guide to setting up XAMPP and MySQL. I really didn't know PHP, but I guess source code modification is a natural skill for all programmers, as I was able to modify the code to meet the requirements of the students (i.e. school name, logo, and other minor changes).
Most of what I learnt in programming came from modifying the source of that project. I learnt how to connect a PHP source to a MySQL database, I learnt about functions and their usage, I learnt the basics of HTML, I really learnt a lot and I would say that the speed at which I learnt was proportional to the amount of pressure I received to deliver.
That was how my journey as a full-stack developer started. By chance, maybe.
The nightmare continues.
Currently dealing with a code review from a “principal” dev (one step above senior), who is unironically called a “legendary dev” by some coworkers. It’s painfully obvious he didn’t read the code, and just started complaining and nitpicking.
It’s full of requests to do things that make absolutely no sense, and would make the code an unmaintainable mess.
• Ex: moving the logic and data collection from the module’s many callers into the module instead of just passing in the data.
• Ex: hiding api endpoint declarations by placing them in the module itself, and using magic instance variables to pass data to it. Basically: using global functions and variables instead of explicit declarations and calls.
• Ex: moving the logic to determine which api endpoint to use, for all callers, into the view.
More comments about methods being “too complex” (barely holds water) right next to comments saying “why are these separate? merge them together!”
Incredulously asking how many times I’m checking permissions and how ridiculous it all is. (The answer? Twice.)
Conflating my “permissions” param and method names with a supposedly forthcoming permissions system overhaul, and saying I shouldn’t use permissions because my code will all have to get rewritten. Even if that were true, and it’s likely not, the ticket still needs to use the current permissions. I can’t just ignore them because they might be rewritten someday.
Requests to revert some code cleanup because the reviewer thought the previous heavily-nested and uncommented versions (with code duplication) were easier to read. Unsurprisingly, he wrote them.
On the same ticket, my boss wants me to remove all styling and client-side validation, debouncing, and error messages from a form. Says “success” and “connection failed” messages are good enough. The form in question sends SMS and email using arbitrary user input for addresses. He also says it shouldn’t be debounced on the server, and doesn’t want me to bother checking permissions. Hello, spam!
Related: the legendary dev reviewer says he can’t think of a reason why we would want to disable the feature for consumers, so I should remove the consumer feature flag.
You can’t make this stuff up.
First I wanna say how grateful I am that devRant exists, because my friends either don’t understand this vocab or don’t care lol.
Last week I worked on a pretty large ticket, opened a PR with 54 file changes. Just to follow standards I set the PR milestone to a future release version, but the truth is I didn’t care which version this work ended up in— I just needed it to go into the develop branch asap.
Since it was a large PR there was some expected discussion that prolonged its merging, but in the meantime I started a second branch that depended on some of the work from this branch. I set the new branch’s upstream to develop, fully expecting my PR to merge into develop, since that’s what I set the PR base to.
I completed all the work I could in the new branch, and got two colleagues to approve the initial PR so it would be merged into develop and I could add the finishing touch, getting this work done seamlessly before the week was over. They approved, it got merged, I pulled develop, and… my work wasn’t there. I went to look at my PR and someone had changed the base branch to a release branch. It was my boss, who thought he was helping. (Our bosses don’t actually work on the same team as us, so he didn’t know. It’s weird. We have leads that keep track of our work instead.)
I messaged him and told him I really needed this in develop, knowing our release branch won’t be in develop for probably another week. I was very annoyed but didn’t wanna make him feel too bad, so I said I’d just merge the release branch into my new branch. So many conflicts I couldn’t see straight. His response was “yeah, and you’ll probably have a bunch of package manager conflicts too, because that’s in that release.” He was right: I have so many package manager conflicts that I can’t even see how many compiler conflicts there are. I considered cherry picking my changes, but the whole reason I set develop as my upstream was to avoid having any conflicts since I’m working in the same functions, and this would create more.
So I could spend the next (?) days making educated guesses on possibly a thousand conflict resolutions, or I could revert my release-branch merge and quietly step back and wait for the release branch to be merged into develop.
I’m sure cherry picking is the best option here, but I’m genuinely too annoyed lol, and fortunately my team does not care to notice if I step back and work on something else to kill time until it’s fixed automatically. But I’m still in dire need of a rant, because my entire plan was ruined by a well-meaning person who messed with my PR without asking, so here is that rant, and I thank you for your time.
I could bitch about XSLT again, as that was certainly painful, but that’s less about learning a skill and more about understanding someone else’s mental diarrhea, so let me pick something else.
My most painful learning experience was probably pointers, but not pointers in the usual sense of `char *ptr` in C and how they’re totally confusing at first. I mean, it was that too, but in addition it was how I had absolutely none of the background needed to understand them, not having any learning material (nor guidance), nor even a typical compiler to tell me what I was doing wrong — and on top of all of that, only being able to run code on a device that would crash/halt/freak out whenever I made a mistake. It was an absolute nightmare.
Here’s the story:
Someone gave me the game RACE for my TI-83 calculator, but it turned out to be an unlocked version, which means I could edit it and see the code. I discovered this later on by accident while trying to play it during class, and when I looked at it, all I saw was incomprehensible garbage. I closed it, and the game no longer worked. Looking back I must have changed something, but then I thought it was just magic. It took me a long time to get curious enough to look at it again.
But in the meantime, I ended up playing with these “programs” a little, and made some really simple ones, and later some somewhat complex ones. So the next time I opened RACE, I kind of understood what it was doing.
Moving on, I spent a year learning TI-Basic, and eventually reached the limit of what it could do. Along the way, I learned that all of the really amazing games/utilities that were incredibly fast, had greyscale graphics, lowercase text, no runtime indicator, etc. were written in “Assembly,” so naturally I wanted to use that, too.
I had no idea what it was, but it was the obvious next step for me, so I started teaching myself. It was z80 Assembly, and there were practically no documents or resources, nothing helpful online.
I found the specs, and a few terrible docs and other sources, but with only one year of programming experience, I didn’t really understand what they were telling me. This was before stackoverflow, etc., too, so what little help I found was mostly from forum posts, IRC (mostly got ignored or made fun of), and reading other people’s source when I could find it. And usually that was less than clear.
And here’s where we dive into the specifics. Starting with so little experience, and in TI-Basic of all things, meant I had zero understanding of pointers, memory and addresses, the stack, heap, data structures, interrupts, clocks, etc. I had mastered everything TI-Basic offered, which astoundingly included arrays and matrices, but it hid everything else except basic logic and flow control. (No, there weren’t even functions; it has labels and goto.) It has 27 numeric variables (A-Z and theta, which can store either float or complex numbers), 8 lists (numeric arrays), 6 matrices (2D numeric arrays), 10 strings, and a few other things like “equations” and literal bitmap pictures.
Soo… I went from knowing only that to learning pointers. And pointer math. And data structures. And pointers to pointers, and the stack, and function calls, and all that goodness. And remember, I was learning and writing all of this in plain Assembly, in notepad (or on paper at school), not in C or C++ with a teacher, a textbook, SO, and an intelligent compiler with its incredibly helpful type checking and warnings. Just raw trial and error. I learned what I could from whatever cryptic sources I could find (and understand) online, and applied it.
But actually using what I learned? If a pointer was wrong, it resulted in unexpected behavior, memory corruption, freezes, etc. I didn’t have a debugger, an emulator, etc. I had notepad, the barebones compiler, and my calculator.
Also, iterating meant changing my code, recompiling, factory resetting my calculator (removing the battery for 30+ sec) because bugs usually froze it or corrupted something, then transferring the new program over, and finally running it. It was soo slowwwww. But I made steady progress.
Painful learning experience? Check.
Pointer hell? Absolutely.
!rant && Announcement
I am working on a DEVRANT TOOLBOX for the Firefox Desktop Browser.
It includes a dark mode and some new functions (autoreload, colored notifs, image preview on mouseover).
The Alpha version is finished.
As a first step I created an experimental version just for the dark mode (no other functions).
It's alpha and experimental, so I haven't published it yet.
I need some testers for it. My email is temporarily available in my user profile for that.
When defining a range, let's say from 1 to 3, I expect:
[1, 2, 3]
Yet most range functions I come across, e.g. lodash, will do:
_.range(1, 3)
=> [1, 2]
And their definition will say: "Creates an array of numbers ... progressing from start up to, but not including, end."
Yet why the fuck not include the end? What don't I understand about the concept of a frigging range that you won't include the end?
The only thing I can come up with is that this is related to the arrays'-indexes-start-with-0 thing, and someone did not want to subtract 1 when preparing a for loop over a 10-item array with range(0,10), even though they do not want a range of 0 to 10; they want a range from 0 to 9. (And they should not use a for loop here to begin with, but a foreach construct anyway.)
So the length of your array does not match the final index of your array.
Boohoo.
Yet now we can have ranges with very weird steps, and now you always have to consider your proper maximum, leading to code like:
var start = 10;
var max = 50;
var step = 10;
_.range(start, max + step, step)
=> [10, 20, 30, 40, 50]
and during code review this would scream "bug!" in my face.
And it's not only lodash doing that, but also Python and Dart.
Except PHP. PHP's range is inclusive. Good job, PHP.
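For what it's worth, here's a minimal sketch of an inclusive-range helper (shown in Python, since its built-in range is exclusive too; the names are mine, not from any library):

def inclusive_range(start, stop, step=1):
    # Like range(), but the endpoint is included when the step lands on it.
    value = start
    while (step > 0 and value <= stop) or (step < 0 and value >= stop):
        yield value
        value += step

print(list(range(1, 3)))                  # [1, 2] -- the exclusive behavior above
print(list(inclusive_range(1, 3)))        # [1, 2, 3]
print(list(inclusive_range(10, 50, 10)))  # [10, 20, 30, 40, 50], no max + step trick

The clamped loop also avoids the overshoot risk of range(start, stop + step, step) when the step doesn't evenly divide the distance.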
WTF IS WRONG WITH ASSEMBLY LANGUAGE?!
I was just modifying an existing program for adding a sequence of numbers from the data section and through console input. I studied the code and started modifying it one step at a time. I needed to turn it into a multiplication program, so I started by changing the ADD instructions, replaced the result and buffer registers with bigger ones, and thought I was done. WELL GUESS WHAT? SHIT JUST GIVES ME A SEGMENTATION FAULT! NOW I HAVE TO REDO THE WHOLE THING! WHY DOESN'T IT TELL ME WHICH LINE OF THE CODE I FUCKED UP AT?! STUPID NASM ASSEMBLER.
If only they allowed us to write unit tests at work. It's not that it is forbidden, but we are not given time to do so :\
I've done my tests on my side project, and now I can happily move on to the next step.
Though I'd be happy if someone answers this:
1. When I have to execute functions in order, do I write all their code in one single function and divide it into regions (speaking of C#'s #region)?
OR
2. Or do I keep them split and implement the order attribute for xUnit?
My test case is basically just to make sure the CRUD methods inside my repositories are working as expected, nothing complex.
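Not an xUnit-specific answer, but here's a sketch of what option 1 can look like without regions (Python here, with invented names; the same shape maps onto xUnit as private step methods called from a single [Fact]): one test drives the CRUD steps in order, while each step stays its own small function.

import unittest

class InMemoryRepo:
    # Stand-in for a real repository, just to make the sketch runnable.
    def __init__(self): self.items = {}
    def create(self, key, value): self.items[key] = value
    def read(self, key): return self.items.get(key)
    def update(self, key, value): self.items[key] = value
    def delete(self, key): self.items.pop(key, None)

class CrudFlowTest(unittest.TestCase):
    def test_crud_in_order(self):
        repo = InMemoryRepo()
        self._create(repo)   # explicit call order, no test-runner ordering needed
        self._read(repo)
        self._update(repo)
        self._delete(repo)

    def _create(self, repo):
        repo.create("id1", "a")
        self.assertEqual(repo.read("id1"), "a")

    def _read(self, repo):
        self.assertEqual(repo.read("id1"), "a")

    def _update(self, repo):
        repo.update("id1", "b")
        self.assertEqual(repo.read("id1"), "b")

    def _delete(self, repo):
        repo.delete("id1")
        self.assertIsNone(repo.read("id1"))

This keeps the dependency between steps obvious in one place, instead of hiding it in a custom test-case orderer.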
Man, I'm learning that I'm not good at learning new languages. I get to the point where I have the basics of the language down (conditional statements, loops, functions, classes, structures, file manipulation, etc.), but I don't know what to do after that. Is this where I start learning libraries? Because I still get the feeling I'm not at that step yet.
Before you ask: yes, I know I am heavily overthinking this.
It's been a while, DevRant!
Straight back into it with a rant that no doubt many of us have experienced.
I've been in my current job for a year and a half & accepted the role on lower pay than I normally would as it's in my home town, and jobs in development are scarce.
My background is in full stack development, & I have a wealth of AWS experience, secure SaaS stacks etc.
My current role is PHP Systems Developer: a step down from the senior role I was in, but at a much bigger company, closer to home, with seemingly a lot more career progression.
My job role/description states the following as desired:
PHP, T-SQL, MySQL, HTML, CSS, JavaScript, Jquery, XML
I am also well versed in various JS frameworks, PHP Frameworks, JAVA, C# as well as other things such as:
Xamarin, Unity3D, Vue, React, Ionic, S3, Cognito, ECS, EBS, EC2, RDS, DynamoDB etc etc.
A couple of months in, I took on all of the external web sites/apps, which historically sit with our Marketing department.
This was all over the place, and I brought it into some sort of control. The previous marketing developer hadn't left an AWS access key, so our GitLab instance was buggered... that's one example of many many many that I had to work out and piece together, above and beyond my job role.
Done with a smile.
Did a handover to the new marketing dev, who still avoids certain work, meaning it gets put onto me. I have had many a conversation with my line manager about how this is above and beyond what I was hired for, and he agrees.
For the last 9 months, I have been working on a JAVA application with ML on the back end, completely separate from what the colleagues in my team do daily (tickets, reports, BI, MI etc.) and in a multi-threaded language, doing much more complicated work.
This is a prototype that had been in development for 2 years before I got my hands on it. I needed to redo the entire UI, as well as add in so many new features it was untrue (in 2 years there was no proper requirements gathering).
I was tasked initially with optimising the original code, which utilised a single model & controller :o then, after the first discussion with the product owner, it was clear they wanted a lot more features added in, and that no requirements gathering had ever been done effectively.
Throughout the last 9 months, arbitrary deadlines have been set, and I have pulled out all the stops, often doing work in my own time without compensation to meet deadlines set by our director (who is under the C-Suite: CEO, CTO etc.)
During this time, it became apparent that they want to take this product to market and make it a SaaS solution, so, given my experience, I was excited for this, and have developed quite a robust but high-level view of the infrastructure we need, the Lambda / serverless functions/services we would want to set up, how we would use an API gateway and Cognito with custom claims etc etc etc.
Tomorrow, I go to London to speak with a major cloud company (one of the big ones) to discuss potential approaches & ways to stream the data we require etc.
I love this type of work; however, it is 100% so far above my current job role, and the current level of pay we are given (junior/mid-level PHP dev at best) is nowhere near suitable for what I am doing, and have been doing for all this time: proven, consistent work.
In every conversation I have had with my line manager, he tells me how I'm his best employee, how he doesn't want to lose me, and how I am worth the pay rise (carrot dangling, maybe?).
Generally I do believe him, as I too have lived in the culture of this company and there is A LOT of technical debt. Especially so with our director, who has no technical background at all.
Appraisal/review time comes around. I put in a request for a pay rise, along with market rates, lots of details, and rates sourced from multiple places.
As well as that, I also had a job offer, and I rejected it despite it being for a lot more money for the same role as my job description (I rejected it due to certain things that didn't sit well with me during the interview).
I used this in my review, and stated I had already rejected it as this is where I want to be, but wanted to use this offer as part of my research for market rates for the role I am employed to do, not the one I am doing.
My pay rise would have been only a small one really (5k; we bring in millions), just to bring me in line with what is more suitable for my skills in the job I was employed to do alone.
This was rejected due to a period of sickness, despite my having made up ALL that time without compensation, as mentioned.
I'm now unsure what to do, as this was rejected by my director after my line manager agreed to it, before it even got to the COO etc.
He sits behind me, sees all the work I put in, and creates the arbitrary deadlines that I do uncompensated work to meet; yet because I was sick, I'm not allowed a pay rise (doctor's notes etc. supplied).
What would you do in this situation?
I had the idea that part of the problem in NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but would derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitutions is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs within themselves, switching between them dynamically via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await() style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself; just a few things need fleshing out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction. Dunno yet.
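Purely as a toy illustration of the selection loop described above (all names invented here, and not taken from the pastebin): treat candidate loss functions as swappable nodes, fit a trivial model under each one, score on held-out validation data, and let the best survive.

import random

def l1(p, t): return abs(p - t)
def l2(p, t): return (p - t) ** 2
def huber(p, t, d=1.0):
    a = abs(p - t)
    return 0.5 * a * a if a <= d else d * (a - 0.5 * d)

def fit(loss, data):
    # Toy model family y = w * x, trained by brute-force line search.
    best_w = min((w / 100.0 for w in range(-300, 301)),
                 key=lambda w: sum(loss(w * x, y) for x, y in data))
    return lambda x, w=best_w: w * x

def validation_error(loss, model, data):
    # Score with a fixed yardstick (absolute error), since different
    # losses can't be compared by their own raw values.
    return sum(abs(model(x) - y) for x, y in data) / len(data)

train = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 11)]
val   = [(x, 2 * x) for x in range(11, 16)]

population = [l1, l2, huber]
for generation in range(3):
    # Select on held-out error, not training loss -- the "verifier" step.
    scored = sorted(population, key=lambda L: validation_error(L, fit(L, train), val))
    survivors = scored[:2]
    population = survivors + [random.choice([l1, l2, huber]) for _ in survivors]
print("surviving loss:", scored[0].__name__)

Real subgraph substitution would mutate the loss node's internals rather than swap whole functions, but the control flow (substitute, train, score on validation, keep the winners) is the same shape.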
I saw one of my coworkers do a multi-step bus ticket purchase in one file (we use Angular 4): instead of using components, he just hid and showed the sections, resulting in a class that has about 2,000 lines of code, unused variables, unused functions or functions that just console.log something, and many many lines of variable declarations. I tried to fix that, but these crazy deadlines were fucking with me.
Finally finished an algo to check an image for groupings of pixels that form a rectangular area. I got the grouping to work on one image, but found it was utterly failing on another. I went through every step of the algo and still could not find the solution. The 128x128 image was working, but the 128x16 image was not. I knew it had something to do with the dimensions. Started thinking it was overflowing a buffer somewhere. So I started putting asserts in the functions that abstracted the buffer access. None of the numbers exceeded the proper bounds. It was close to bedtime, so I finally gave up. I was tired. Then I realized it wouldn't be until the next evening that I could look at this again. So I got up and started looking at the code again. I had a loop to check the output of my algo that did its own memory access on the buffer. It too was not fully filling my temp image to show how the algo was working. WTF!
Then I finally realized the flaw:
buffer[x+y*height]
And my test loop to test the algo:
buffer[x+y*ymax]
I kept overlooking the error because I was sure it was right. Also my asserts for the functions to access the buffers? They only checked the inputs x and y. So it didn't help that the math was wrong for reading and writing the buffers. It also worked fine on 128x128 images because the width and height were the same.
It is funny that I struggled with this part. The algo was actually surprisingly easy to formulate. I just looked through every point and checked a buffer to see if that point was used. If not, then I would attempt to grow a shape from that point in the x and y directions based upon pixel color. This was saved in a structure while growing that point. Then, when that rectangle could not be grown further, the inner loop would continue checking used points again.
I still have work to do to use the data this algo produces. I need to now figure out how to parent the rectangular areas to each other. I will probably use my check buffer to keep track of these rects by an index, then do adjacency checks to determine parenting. Eventually I will have to extend this algo to 3 dimensions, but that should not be difficult.
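For anyone following along, the trap here is that the row stride of a row-major buffer is the image width, not the height, and the bug is invisible whenever the two are equal. A tiny sketch of the same mistake (Python, helper names are mine):

def idx(x, y, width):
    # Row-major layout: advance one full row (width pixels) per unit of y.
    return x + y * width

w, h = 128, 16
x, y = 5, 3
print(idx(x, y, w))   # 389 -- the correct cell
print(x + y * h)      # 53  -- the cell the buggy x + y*height math hits

On the 128x128 image the two expressions agree for every (x, y), which is exactly why the first test image passed.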
More from my big black book of AI and neuroscience:
I think if trace theory is true to any degree it would go some distance in explaining phenomenal consciousness, assuming I haven't misunderstood anything.
In fuzzy trace theory (FTT) it is posited that people form two types of mental representations about a past event:
*verbatim traces: detailed representations of a past event.
*gist traces: fuzzy representations of a past event.
People can reason with verbatim *and* gist traces but prefer gists.
*vision was suggested to work similarly in 1999. With human vision, two processes could be used: one that aggregates local receptive fields and one that parses the local receptive spatial field. It was suggested that people used prior experience, gists, to decide which dominates a perceptual decision.
Gist processes form representations of events, semantic details, where verbatim reinstates the context found in the surface details of an event.
__notes__
Parallel storage: asserts encoding/storage of verbatim/gist traces operate in *parallel*, not in serial.
I like to think of verbatim traces as databases, and gists as queries constructed by recognition.
Several studies have found that the meaning (gist) of an item is encoded even *before* the surface details (verbatim).
This might be important as a survival mechanism, but should not be taken to mean strictly that gists are formed wholly *without* details or important and recognizable features of the item in question. It may well be that for high-level processing and classification efficiency this is an important reprocessing step, in the same way that many functions of the brain are duplicated throughout.
I'm reminded on a regular basis of how I might be wrong, despite my absolute certainty about how obviously wrong the other person is.
Lately I've been working on setting up this API with a fairly intricate database integration. One request can lead to multiple DB calls if we're not careful, so we have been polishing up the implementation to guard against DDoSing ourselves and to deal with thread-unsafe concurrency.
Someone on the team could happily report that they got rid of all async use so there should no longer be threading issues. "You mean it all runs sync now?" "I guess. It works at least".
I'm just internally pulling a surrender cobra. If this were pre-dev me, I would have let him and everyone know what a stupidpants he is, and that I'd thought he had some experience in API development. But let's not make an exception to the rule; I might be wrong. I mean I'm not, but let's pretend I could be. Let's pull down the changes and maybe set up a minimal example to demonstrate how this is a bad idea.
Funny story. He got rid of explicit calls to the database entirely. When resolving data, the query is instead constructed virtually and execution is deferred until the last step. Our functions are sync now because they don't call the database, and threading isn't an issue since there's only one call per request context.
Thank god I've learned to keep my mouth shut until I can prove with absolute, conclusive certainty that they are wrong. Here's to another day of not making an ass of myself.
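For anyone curious what "constructed virtually, executed at the last step" looks like, here is a rough sketch of the lazy-query-builder pattern (Python, invented names, not their codebase):

class Query:
    # Accumulates filters without touching the database.
    def __init__(self, table, filters=()):
        self.table, self.filters = table, tuple(filters)

    def where(self, **conditions):
        # Returns a new Query; still no I/O, so it's safe to pass around.
        return Query(self.table, self.filters + (conditions,))

    def execute(self, connection):
        # The single database call per request happens here, at the end.
        sql, params, clauses = "SELECT * FROM " + self.table, [], []
        for f in self.filters:
            for k, v in f.items():
                clauses.append(k + " = ?")
                params.append(v)
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return connection.execute(sql, params).fetchall()

q = Query("users").where(active=1).where(role="admin")
# q can be built, composed, and returned from plain sync functions;
# only q.execute(conn) ever hits the database.

Because building the query is pure computation, the calling code really can be synchronous without creating per-request threading hazards.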
research 10.09.2024
I successfully wrote a model verifier for xor. So now I know it is in fact working, and the thing is doing what was previously deemed impossible: calculating xor on a single hidden layer.
Also made it generalized, so I can verify it for any type of binary function.
The next step would be to see if I can either train for combinations of logical operators (or+xor, and+not, or+not, xor+and+..., etc) or chain the verifiers.
If I can it means I can train models that perform combinations of logical operations with only one hidden layer.
Also wrote a version that can sum a binary vector every time, but I still have to write a verification table for that.
If chaining verifiers or training a model to perform compound functions of multiple operations is possible, I want to see about writing models that can do neighborhood max pooling themselves in the hidden layer, or other nontrivial operations.
Lastly, I need to adapt the algorithm to work with values other than binary, which means divorcing the clamp function from the entire system. In fact I want to turn the clamp and activation into a type of bias, so a network that can learn to do binary operations can also automatically learn to do non-binary functions as well.
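The original verifier code isn't shown here, so this is only a guess at its shape: exhaustively compare a single-hidden-layer net against the truth table of whatever binary function you're testing. (The model stub below is a conventional two-unit hidden layer just to make the sketch runnable; it says nothing about the training scheme in the post.)

import itertools, math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(w1, b1, w2, b2, x):
    # One hidden layer, one thresholded output unit.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return 1 if sum(w * h for w, h in zip(w2, hidden)) + b2 > 0.5 else 0

def verify(model, fn, n_inputs=2):
    # Binary inputs are finite, so verification can be exhaustive.
    for x in itertools.product((0, 1), repeat=n_inputs):
        if model(x) != fn(*x):
            return False, x
    return True, None

w1, b1 = [[20, 20], [-20, -20]], [-10, 30]   # OR-like and NAND-like units
w2, b2 = [20, 20], -30                        # ANDing them yields xor
ok, counterexample = verify(lambda x: forward(w1, b1, w2, b2, x),
                            lambda a, b: a ^ b)
print(ok)  # True

The nice property for chaining is that verify() doesn't care where the model came from; compound functions like or+xor just mean a different fn lambda and the same loop.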
(I'm not completely sure of what I'm saying here, so don't take this too seriously)
Settling on a language to write the api for ranterix is hard.
I'm finding a lot of things about elixir to be insanely good for a stable api.
But I'm having a lot of gripes with the most important elixir web framework, phoenix.
Take a look at this piece of code from the phoenix docs:
defmodule Hello.Repo.Migrations.CreateUsers do
  use Ecto.Migration

  def change do
    create table(:users) do
      add :name, :string
      add :email, :string
      add :bio, :string
      add :number_of_pets, :integer

      timestamps()
    end
  end
end
Jesus christ, I hate this shit.
Wtf are create, add, and timestamps? add is somehow valid inside the create; how the fuck is that considered good code? What happens if you call timestamps twice? It's all obscure "trust me, it works" code.
It appears to be written by a child.
js may have a million problems. But one thing I like about CJS (require) or ESM (import) is that there's nothing unexplained. You know where the fuck most things come from.
You default export an eatShit() function in one file and import it from another, and what do you get?
The goddamn actual eatShit function.
require is a function the same way toString is a function and it returns whatever the fuck you had exported in the target file.
Meanwhile some dynamic langs are like "oh, I'll just export only some lang construct that I expect you to specify and put that shit in the fucking global of the importing file".
Js is about the fucking freedom. It won't decide for you what things will files export, you can export whatever the fuck you want, strings, functions, classes, objects or even nothing at all, thanks to module.exports object or export statement.
And in js, you can spy on anything external, for example with (...args) => { debugger; return fnToSpyOn(...args); }
You can spoof console.log this way to see what the fuck is calling it (note: monkey patching for debugging = GOOD, for actual programming = DOGSHIT)
To be fair though, that is possible because js is a dynamic lang, and elixir is kind of a hybrid typed lang, fair enough.
But here's where i drop the shit.
Phoenix takes it one step further by following the braindead ruby style of code and pretty DSLs.
I fucking hate DSLs, I fucking hate abstraction addiction.
Get this, we're not writing fucking poetry here. We're writing programs for machines for them to execute.
Machines are not humans with emotions, creativity, or feelings.
We need some level of abstraction to save time understanding source code, sure.
But there has to be a balance. Languages can be ergonomic for humans, but they also need to be ergonomic for algorithms and machines.
Some of the people that write "beautiful" "zen" code are the folks that think that everyone who doesn't push the pretty code agenda is a code elitist that doesn't want "normal" people to get into programming.
Programming is hard, man, there's no fucking way around it.
Sometimes operating system or even hardware details bleed into code.
DSLs are one easy way to make code really, really easy to understand, but they also make it really fucking hard to debug, and easy to lose "programming meaning".
!rant (I got downvoted for this on Stack Overflow, so I'm trying to discuss the issue with a more professional crowd.)
In a Software Engineering class, we had an assignment to read Parnas' seminal paper on modularization [0]. In this paper, two approaches to dividing software into modules are discussed:
Traditional Approach: A flow chart is drawn to work out the single processing steps and the program's high-level flow. Then every processing step is turned into a module. This approach doesn't yield very good results.
New Approach: Every design decision will be turned into a module by the means of information hiding. This approach leads to much better results.
My personal interpretation of the term design decision is that the modules are identified as data structures rather than as processing steps of an algorithm. This makes sense, because data structures are much more suitable for information hiding than processing steps of an algorithm. (The information inside a data structure is hidden behind functions, whereas a function only hides more detailed processing steps and no information; the information is actually passed in as arguments.)
Why does the second approach work so much better than the first approach? Here comes my second interpretation: The single processing steps of an algorithm are not replaceable (and thus not reusable), whereas it's possible to convert data structures into other data structures.
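A toy illustration of the contrast (mine, not Parnas's): the module below is organized around a design decision, namely how a symbol table stores its entries, rather than around a processing step, and that decision is hidden behind the interface.

class SymbolTable:
    # Hidden design decision: entries live in a dict. Swapping in a trie
    # or a sorted list later would not change any caller.
    def __init__(self):
        self._entries = {}

    def define(self, name, info):
        self._entries[name] = info

    def lookup(self, name):
        return self._entries.get(name)

Callers depend only on define() and lookup(); the representation never leaks, which is what makes the module replaceable and reusable, unlike a module that merely wraps step 3 of some flow chart.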
And here's my question: Could that be the reason why software development using workflow engines (based on BPMN, for example) never really took off?
My personal experience is that the activities created in such workflows are hardly ever reused, but there often are big data structures passed around all the involved activities, even if most of the activities use only one or two of them.
My question exaggerated: Could we get rid of all those clumsy workflow engines by giving managers Parnas' paper to read?
[0]: On the criteria to be used in decomposing systems into modules (Parnas 1972)
You work in a team, and for a team to move forward successfully, the team should work in sync. A team always has a goal and a plan to get to it. There are times when the team needs to take a different direction; therefore the set path should always be open to change, because our environments dictate it.
We all have different styles of working and different opinions on how things should work. Sometimes one is wrong and the other is right, and sometimes both are wrong, or actually sometimes both are right. However, at the end of it all, the next step is a decision for the team, not an individual, and moving forward means doing it together. #KickAssTeam
The end result cannot come in at the beginning, but only at the end of an implementation; and sometimes, if you're lucky, during implementation you can smell the shit before it hits the fan. So as humans, we will at times make mistakes by taking the wrong decisions, and when this happens, a strong team will pull things in the right direction quickly and together. #KickAssTeam
Having a team with different opinions does not mean not being able to work together. It actually means a strong team! #kickAssTeam It does come with challenges, however. This calls for having processes in place that allow team members to be heard and new knowledge to take the lead. This space requires discipline in listening and in interrogating opinions without attachment to ideas, always knowing that YOUR opinion is a suggestion, not a solution, until it is taken on by the team. #KickAssTeam We all love our own thinking. However, re-learning and changing opinions when faced with new information should become second nature.
Now, I am no expert at this; however, through my years of development I have found the following strategy to work in a team of developers. It's a few questions you ask yourself before every commit, when faced with working in a new team, and possibly as a suggestion when trying to align other team members with the team.
The point of this article: the questions to ask yourself!
Am I following the formatting standard set?
Is what I have written in line with official documentation?
Is what I am committing a technical conversion of the business requirement?
Have I duplicated functionality the framework already offers?
If I have introduced a methodology, library, or heavily reusable component to the system, have I had a discussion with the team before implementing it?
Are my methods and functions truly responsible for one thing?
Will someone I will never get to talk to, or my future self, have documentation of my work?
(Either via point number 2, domain-specific documentation, or business requirements documentation.)
Am I thinking too far into the future with my solution?
Will future-proofing have a great chance of complicating the current use case?
Remember: you can never write perfect code that cures every future problem, but what you can do perfectly is serve the current business problem you are facing. After doing that for decades, you will have had a perfect line of development success.
I love serverless functions, but I'm so tired of complex orchestration, juggling event parameters, and now scipy+numpy+pandas exceeding the 250 MB size limit..
I feel like cramming it all into a monolith like the geezers of yore and being done with it.
What order of magnitude of time step would be sufficient for time-based physics functions? Direct solutions to multivariate, codependent problems usually can't be found until you first update things like position, velocity, and/or energy over a very short time period. Examples being gravitation or electrostatics.
Like nanoseconds?
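A sketch of why the honest answer is "small relative to the system's own timescale, then halve it until the result stops changing" rather than a fixed magnitude like nanoseconds. Semi-implicit (symplectic) Euler for gravity, with back-of-envelope values for a low Earth orbit (my numbers, for illustration only):

G, M = 6.674e-11, 5.972e24   # gravitational constant, Earth's mass (SI units)

def step(pos, vel, dt):
    # Update velocity from acceleration first, then position (semi-implicit).
    x, y = pos
    r2 = x * x + y * y
    r = r2 ** 0.5
    ax, ay = -G * M * x / (r2 * r), -G * M * y / (r2 * r)   # a = -GM/r^2 * r_hat
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = (6.771e6, 0.0), (0.0, 7672.0)   # ~400 km altitude, circular speed
for _ in range(5544):                       # one ~92-minute orbit at dt = 1 s
    pos, vel = step(pos, vel, 1.0)

Here the orbital period is roughly 5,500 s, so dt = 1 s is already a few thousandths of a percent of the characteristic timescale and tracks the orbit well; nanosecond steps would buy nothing but about twelve extra orders of magnitude of work. Faster dynamics (close encounters, stiff electrostatics) shrink the characteristic timescale, which is why adaptive step sizes beat any fixed answer.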
!rant
Looking for help starting with DevOps.
Does anyone know of a site or forum where you can talk about general coding/scripting patterns rather than just asking specific questions?
Bear with me, this may be a bit longer than most posts here.
I'm a self-taught admin/tech working with one colleague (who's also mostly self-taught) at a high school, managing both clients and servers.
We've been doing most things manually, but I'm looking into converting as much work as possible into more of a DevOps setup, with PowerShell scripts for multi-step tasks.
I want to do this for a number of reasons. Having a script doing a number of steps would cut down on time spent on individual tasks and minimize the risk that a step is missed or, perhaps even worse, mistyped. Also it's important that I actually learn what I'm doing, why something works and why something fails.
As an example, I have a PowerShell script which moves a student from one year to another (basically they have user names with a two-digit prefix based on the year they started, and a suffix with two letters from their first name and four from their last name) if they need to repeat a grade.
It basically renames the account in AD with the correct year prefix, changes the samAccountName, renames the Home and Profile directories on disk, changes the paths on the profile tab in AD, moves the user into a new OU and security group, etc.
It works as intended if the user account to be renamed exists and there's no name conflict with the new name. But I'd like for the script to validate that there's no problem with user names, source and target security groups and OUs etc. and eventually split the script up into smaller clearly defined functions for better readability.
However, I don't want someone to just write the script for me; I'd prefer to be able to discuss script flow and come to my own conclusions and solutions.
I rolled out a feature at one of my previous organizations. It looked awesome. I couldn't wait to receive all the praise and appreciation, but instead I was bombarded with bugs and issues. Well, I tested the feature on Chrome, but little did I know that the users used IE and Safari. This is where polyfills in JavaScript step in. Here I've assembled a list of some important polyfills. Do read it and let me know your opinions.
https://readosapien.com/polyfills-o...