Search - "case switching"
-
Unpopular opinion about Microsoft buying GitHub.
Just putting it out there that when you made your GitHub repos you did so under their privacy policy and terms and will be protected under those in the future, and that both GitHub and Microsoft are corporations with the goal of making money.
Are people seriously mad that their code has gone from one capitalist corporation to another, with no foreseeable change in privacy or data policy? I have respect for those that switched to self hosted long ago since that's going from corporate to private, but if you throw away the UX and community GitHub has developed because a multinational corporation (with so many branches, products and divisions, which happens to have a few products you don't like) will soon own it, are you actually making a rational, guided decision?
Also just throwing it out there that GitLab is also a company. They've also had issues with keeping data intact in the past. They do, however, have free private repos (although I can't ever trust someone who gives me "free" privacy) as well as built-in CI. There are some definite upsides to it, although the UX has a ton of differences. If you're expecting the same dashboard and workflow you've used on GitHub, don't; GitLab has cool features, but the bells and whistles aren't exactly the same.
If you're switching to GitLab solely because of Microsoft, step back and think: regardless of how popular hating Microsoft might make you, is it really worth changing your development ecosystem to go from one corporate entity to another solely because you don't like the company?
I use GitLab and GitHub as well as Bitbucket and self-hosted git on a daily basis. They each have their upsides and downsides, but I think switching from one to the other solely because of Microsoft is not only totally irrational, it also makes light of/disrespects the amazing tools and UX the teams behind each one have carefully developed. Pick your Git hosting based on features and what works out for your use case, not based on which corporate overlord has their name plastered on it.
(Also just throwing it out there that lots of devs love VS Code, and that's Microsoft-owned too... They also built and pioneered a bunch of really cool shit for devs, including TypeScript, so it's not like they're evil or incapable in any sense?)
-
sprint retros with the PM are a fucking farce; it cannot possibly get any more grotesque.
they are held like this:
- in the meeting, PM asks each team member directly what they found good and bad
- only half of the team gives real negative feedback directed towards the PM or the process, because they are intimidated or just not that confrontational
- when they state a bad point, he explains to them that their opinion is just wrong or that they just need to learn more about the scrum process; in any case he didn't do anything wrong and he is always right
- when people stand up against this behavior, he bullshits his way out, e.g. using platitudes like "it's a learning process for the whole team", switching the topic, or simply repeating what he had just said, acting like everybody agreed on the topic, and then continuing to talk
- he writes everything down where the team can't see it
- after the meeting he mostly remembers to send a mail to the team which "summarizes" the retro. it contains funny points like "good: living the agile approach" (something he must have obviously hallucinated during the meeting)
- for each bad point from team members, he adds a long explanation of why it is wrong, why he is doing everything right, and why it's the team's fault
- after that comes the second part of the retro, where colleagues from the team start arguing with him via mail because they don't feel understood or strongly disagree with his summary. of course he can parry all their criticism again with his perfectly valid arguments, causing even longer debates
- repeated criticism from colleagues about the poor retro quality, and suggestions that we might want to use a retro tool, are also parried with arguments such as "obviously you still have to learn a lot about the scrum process, the agile manifesto states 'individuals and interactions over processes and tools', so using a tool won't improve our sprint retros" and "having anonymous feedback violates the principles of scrum"
- when people continue arguing with him, he writes them privately that they are not allowed to criticize or confront him.
i must say, there is one thing that i really like about PM's retro approach:
you get an excellent paper trail of our poor retro quality and of how the PM tries to enforce his idiocratic PM dictatorship on the team with his manipulative bullshit.
independently of each other, my colleague and I decided to send this paper trail to our boss, and he is veeeery interested.
so shit is hitting the fan, and the fan is accelerating. stay tuned シ
-
* Recruiter says he has a nice proposition
* I say that I'm not comfortable switching jobs yet, but I'd be up for a short phone interview to hear him out, out of pure interest
* Recruiter explains a lot about the company, and then asks if I am up for "a short Teams introduction with the team lead to hear more"
* I say yes, though still stating that I do not intend on switching, but want to know more in case of a future possibility
* Recruiter says I need to send my full CV / Resumé plus grades from every school I ever attended (including the early ones that don't even matter)
* I say no since 1) I'd have to dig them out from the basement, 2) I am not looking for a job right now, and 3) This request is absurd to me, and NOT a norm in my part of the world when I am not applying.
* He says I HAVE to, since I could be lying
(I am mostly self-taught and have very little actual education, so this logic made NO sense to me)
* I continue to say no, stating that it's simply not worth the time finding the old grades in the basement for a job I will not be taking, and that I am mostly self-taught so grades wouldn't matter
* He starts getting angry, accusing me of "purposefully wasting his time", and says he'll warn the company about me.
Fair point. I'll warn my contacts about you then. Have a nice day, you f*cking prick :)
-
I hate setting up case statements because it's hard to cover every case. What if a virus puts a gun to my program's head? What if my program is at a cache party and Chrome offers it weed? What if my program isn't gay, but $20 is $20?
-
Obvious wisdom from me:
1. HR is not your friend. HR was created to protect companies from employees, not to protect employees from companies. HR serves the company and upper-level management.
2. If you are a victim of mobbing, keep a mobbing diary with exact quotes. Nothing more, nothing less, no speculation. Build an airtight case for the future.
3. If you want to leave because of mobbing, just find a new job. Do not, I repeat, DO NOT talk to HR about the mobbing before you have another job offer at the ready.
4. Present HR with the mobbing diary during your exit, and imply that you will talk to the CEO and take legal action if you don't get a satisfactory last laugh on the mobber.
5. Do not accept a counter-offer from your company, regardless of whether there is a mobbing case or not. You considered switching to another company, so you are branded now and you will be axed at the first chance. A counter-offer is not guaranteed employment at your company.
-
Switching from a camel-cased standard (JS) to a snake-cased standard (Rust), a tutorial:
pub fn do_shit() { ... }
...
pub fn doMoreShit()
*Notice wrong case*
To self:
Aaarghbkflahvflw. Why can't you fucking get the damn case right! And you call yourself a fucking senior programmer, you piece of useless shit.
#existentialCrisis
#questioning_life
pub fn do_other_shit()
...
-
Until today, I had assumed deploying stuff to prod would NOT be one of my responsibilities in this company. Apparently that's not the case.
Had to deploy my code and pray it didn't break anything. Why is this a big deal at all?
Well you see, there is no repository. At all. No git, no svn, not even duplicate folders. No tests, no pipeline. Just a bunch of CPanels.
Had to manually copy files and folders from the development site to the production site and partially copy a database. "Just drag and drop" were the instructions I was given.
As if using CakePHP2, PHP5 and having to parse fucking Excel files wasn't bad enough, now I have to deal with one of the worst ways to deploy code.
Fuck it, I'm switching on the looking-for-a-job flag on LinkedIn.
-
From a perfectly working scrum team to... I don't know what it is now...
Long story short: our SM left the company and our team is going through a "reorganization". Our tester is leaving at the end of this month, and we will probably be without a tester for the next month...
I don't mind the reorganization itself, that's a normal thing, but... it looks like the team is slowly collapsing under bad decisions from the higher-ups (one of them is the reason our tester is leaving)... Multiple "side" projects/tasks for people in the team, problems with delivering sprint tasks on time because of it, context switching, etc.
I fucking like this project; it gives me a lot of opportunities to learn new things and design new features - it's up to us how we implement them. The client is satisfied with our work and we have worked for their trust for a long time. But if things keep going the way they are now, we will probably lose it.
What do you think, is it worth trying to stay with this project? Or should I update my CV just in case?
-
Back in https://devrant.com/rants/5492690 @Nihil75 referred to SlickVPN with a link, where you can buy a lifetime licence for $20. I thought: what the hell... I don't need a public VPN right now, but $20 for a lifetime licence - I'll take it, in case I ever need one.
I had some trouble signing up - the confirmation email never reached my inbox. So I got in touch with support. And they... generated and sent me a password in plain text.
And there isn't even any nagging requirement to change the password after signing in for the first time!
IDK... For a service claiming to be security-oriented, the first interaction already screams "INSECURE".
Well... it should still be OK for IP switching, to unlock Netflix content I guess. I don't need anything secure for that 🤷
-
Context: https://devrant.com/rants/7767049
OOF
It's been a full month. Today's my last with Debian.
Funnily enough, I was so looking forward to switching off from Ubuntu, but I'm almost sad switching away from Debian.
Which is kinda weird for me; before that I kinda assumed they'd be the same thing, and that if you see one you've seen all the rest.
Apparently I was wrong. I thought Ubuntu being "Debian based" basically just means "Debian with extra steps"
But holy fuck was Debian just more stable and less annoying.
Tomorrow: Elementary OS. I have a few friends who are Apple fans and use a MacBook with macOS as their main system, so I wanna try elementary to see if it's worth suggesting in case they ever get tired of Apple.
-
How do I switch from Linux to Windows?
There is a lot of discussion online about how to switch the other way around, but none that covers my case. As I am switching jobs, I will have to work on a Windows machine. Any tips on how to feel more at home?
-
My CS exam today had a case study question that, and I quote, talked about "Chernobyl in japan switching to manual monitoring due to the wannacry virus" xD wtf. I'm fucking done xD
-
[linux distro stuff]
Hey guys!
I'm considering switching to Linux because:
My MacBook does not support Mojave and the new ones are expensive af.
Windows 10 is bloated and not a great user experience (removing stuff from the Control Panel and adding it to the very stripped-down Settings app, privacy, etc.).
I love open source software
However, I haven't used Linux in a long time; back then I used Ubuntu and SUSE.
My considerations:
Debian - because .deb on them haters
OpenSUSE - because I used it in the past and it seemed very stable and fast
Arch - I heard from a lot of sources that it's "da best"
My use case is game development and 3D modeling. I use GIMP, Blender, VS Code and Unity (the game engine). At work I sometimes use Autodesk stuff (MotionBuilder, 3ds Max) because of FBX.
For audio stuff I use Audacity.
So overall I'm looking for a distro that is fast and lightweight, that I can develop on (mostly 3D stuff), and that can occasionally run some games.
Does anyone have experience with the mentioned distros? What distro would you use for this?
-
Linux is great - to tinker, to pull in all your FOSS, mess around...
But it's so fucked up if you actually build and maintain a product on it, i.e. try to distribute something in binary form, for money even. It's just not intended for that. If you offer your code for free, you can always say: "Ah, just compile it yourself. You might need these 29 dependencies, of which 2 are not even checked by configure, oops, and now it crashes, maybe the qt library version you picked still has a bug?.. you know, it worked on my machine, sorry."
But if you sell it, it had better install and run! And even if you target only the main distros of all that fragmented Linuverse - let's say Debian, Ubuntu, RHEL, CentOS, Fedora, and if you're in Germany OpenSuSE and SLES - you'll start to see the pile of work you're in for. What you could try is to orchestrate a docker fleet with one container per distro, where you take the oldest version you still support, compile a newer gcc there (to at least have C++11) and all your third-party libs, and then hope the resulting binary runs on all the newer versions of that distro, too.
(You could even be so brave as to try to pick a deb and rpm distro to build for all other distros.)
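Roughly, that one-container-per-distro loop could look something like the sketch below. It's a minimal sketch only; the image tags and the build.sh script are placeholders, not an actual setup:

# Rough sketch of the "one build container per distro" idea from above.
# The image tags and build.sh are placeholders, not a real project setup.
import os
import subprocess
import sys

# Oldest version of each distro we still want to support (hypothetical tags).
DISTRO_IMAGES = [
    "debian:9",
    "ubuntu:16.04",
    "centos:7",
    "fedora:28",
    "opensuse/leap:15.0",
]

def build_in_container(image):
    # Mount the checkout into an old-distro container and build inside it.
    # The container is assumed to already have a newer gcc and the third-party libs.
    cmd = [
        "docker", "run", "--rm",
        "-v", os.getcwd() + ":/src",
        "-w", "/src",
        image,
        "./build.sh",          # placeholder build script
    ]
    return subprocess.run(cmd).returncode == 0

if __name__ == "__main__":
    failed = [img for img in DISTRO_IMAGES if not build_in_container(img)]
    if failed:
        print("build failed on:", ", ".join(failed))
        sys.exit(1)
    print("built against the oldest supported release of every distro")

The point of building against the oldest supported release is that glibc and friends are backwards compatible, so a binary linked against old versions usually runs on newer ones, never the other way around.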
But ABI incompatibility can still bite you. For instance, we once had the insane case that our GUI would no longer start just because the window manager was switched to KDE.
-
Without a doubt it has to be the internal company search engine/file finding tool @thewamz and I wrote.
The company has a wide UNC network with files scattered all over the place, and they need a way to keep track of where the files get moved to (they can and do get moved). The original tool was written in Java/Tomcat and didn't use any frameworks or utilities beyond custom-written ones, no ORM, and the SQL was just raw strings. The program didn't take into account that files might be moved or deleted, so it never removed anything from the database; it just kept adding files.
It however never stores files itself, just links to files elsewhere on the UNC network.
It took six months to get it into what might be a stable beta or release candidate state. The user interface is good, very simple and intuitive. The whole thing was rewritten in Python/Django. There were issues with UTF-8 (and MySQL not fully supporting UTF-8 in its own utf8 mode). We added a regex search mode (which was sorely lacking). The search used to take up to fifteen minutes, but we sped it up to less than a minute (the worst case being when a user simply puts "^$" as the regex search). It has a multi-threaded design which does some checks to ensure it doesn't spawn too many threads and get stuck in constant GIL switching. There are still some bugs to fix, like moving the processing of results returned by the server into a web worker so that the content widget doesn't lock up while processing millions of search results; moving the back end to asynchronous Python might also bring a performance boost.
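A toy sketch of that capped-thread regex search idea is below. This is not the actual Django code; the paths, the worker cap, and the function names are made up:

# Toy sketch of the capped-thread regex search; the real tool is a Django app,
# so the path list, thread cap, and function names here are all made up.
import re
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 8  # hard cap so we don't spawn too many threads and thrash the GIL

def search_paths(paths, pattern):
    # Return every indexed UNC path that matches the regex.
    regex = re.compile(pattern)
    # Split the index into one chunk per worker.
    chunks = [paths[i::MAX_WORKERS] for i in range(MAX_WORKERS)]

    def scan(chunk):
        return [p for p in chunk if regex.search(p)]

    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        parts = pool.map(scan, chunks)
    return [p for part in parts for p in part]

if __name__ == "__main__":
    index = [r"\\fileserver\projects\spec_v2.docx",
             r"\\fileserver\archive\old_spec.docx",
             r"\\fileserver\hr\holidays.xlsx"]
    print(search_paths(index, r"spec.*\.docx$"))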
However, the annoying bit is how to actually get the new system online. While I am responsible for the development of tools and their maintenance, I am not responsible for their initial deployment, and that means I have no idea when (or even if) my new tool will ever be released :/
-
So I figure since I straight up don't care about the Ada community anymore, and my programming focus is languages and language tooling, I'd rant a bit about some stupid things the language did. Necessary disclaimer though, I still really like the language, I just take issue with defense of things that are straight up bad. Just admit at the time it was good, but in hindsight it wasn't. That's okay.
For the many of you unfamiliar, Ada is a high security / mission critical focused language designed in the 80's. So you'd expect it to be pretty damn resilient.
Inheritance is implemented through "tagged records" rather than contained in classes, but dispatching basically works as you'd expect. Only problem is, there's no sealing of these types. So you, always, have to design everything with the assumption that someone can inherit from your type and manipulate it. There's also limited accessibility modifiers and it's not granular, so if you inherit from the type you have access to _everything_ as if they were all protected/friend.
Switch/case statements are only checked to ensure that all valid values are handled. Read that carefully. All _valid_ values are handled. You don't need a "default" (what Ada calls "when others"). Unchecked conversions, view overlays, deserialization, and more can introduce invalid values. The default case is meant to handle this, but Ada just goes "nah you're good bro, you handled everything you said would be passed to me".
Like I alluded to earlier, there's limited accessibility modifiers. It uses sections, which is fine, but not my preference. But it also only has three options and it's bizarre. One is publicly in the specification, just like "public" normally. One is in the "private" part of the specification, but this is actually just "protected/friend". And one is in the implementation, which is the actual" private". Now Ada doesn't use classes, so the accessibility blocks are in the package (namespace). So guess what? Everything in your type has exactly the same visibility! Better hope people don't modify things you wanted to keep hidden.
That brings me to another bad decision. There is no "read-only" protection. Granted this is only a compiler check and can be bypassed, but it still helps prevent a lot of errors. There is const and it works well, better than in most languages I feel. But if you want a field within a record to not be changeable? Yeah too bad.
And if you think properties could fix this? Yeah no. Transparent functions that do validation on superficial fields? Nah.
The community loves to praise the language for being highly resilient and "for serious engineers", but oh my god. These are awful decisions.
Now again there's a lot of reasons why I still like the language, but holy shit does it scare me when I see things like an auto maker switching over to it.
The leading Ada compiler is literally the buggiest compiler I've ever used in my life. The leading Ada IDE is literally the buggiest IDE I've ever used in my life. And they are written in Ada.
Side note: good resilient systems are a byproduct of knowledge, diligence, and discipline, not the tool you used.
-
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple, you do the standard thing of generating random vectors for your dictionary of tokens, we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens, and putting them into a list.
Next, you do the same for the output you want to map to; let's call it the decoder embedding.
You then loop and generate a 'noise embedding'; for each vector (individual token) in the context embedding, you subtract that token's noise value from that token's embedding value, i.e. its specific weight.
You find the weight index in the weight dictionary (one entry per word or token in your token dictionary) that's closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the key of each key:value pair in your token dictionary. When doing this you align all random-numbered keys in the dictionary (a uniform sample from 0 to 1) and look at the hamming distance between the context embedding + noise embedding (called the encoder embedding) and the canonical keys, with each digit from left to right being penalized by some factor f (because digits further left are larger magnitudes), and then penalize or reward based on the numeric closeness of any given individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word/token in the token dictionary and compare it to the word at the current index of the training output to match against.
Of course by switching to the hash version the lookup is significantly faster, but I digress.
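A toy sketch of that subtract-noise-and-snap step is below. Brute-force nearest neighbour stands in for the cuckoo-hash/hamming-distance lookup, and the vocabulary, vector size, and noise scale are all made up:

# Toy reconstruction of the "subtract a noise embedding, then snap to the nearest
# canonical weight" step described above. Brute-force nearest neighbour stands in
# for the cuckoo-hash / hamming-distance lookup; vocab, dimensions and noise scale
# are all made up.
import random

DIM = 8
VOCAB = ["the", "cat", "sat", "on", "mat"]

# One random weight vector per token; these canonical vectors are also the lookup keys.
weights = {tok: [random.random() for _ in range(DIM)] for tok in VOCAB}

def dist(a, b):
    # Squared euclidean distance between two vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def snap(vector):
    # Return the token whose canonical weight is closest to the given vector.
    return min(VOCAB, key=lambda tok: dist(weights[tok], vector))

def encode(tokens, noise_scale=0.05):
    # Subtract a fresh noise value from each token's weights, then snap the noisy
    # vector back to the nearest canonical weight / token.
    out = []
    for tok in tokens:
        noisy = [w - random.uniform(0, noise_scale) for w in weights[tok]]
        out.append(snap(noisy))
    return out

if __name__ == "__main__":
    print(encode(["the", "cat", "sat"]))  # with small noise this usually round-trips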
That introduces a problem.
If each input token matches one output token, how do we get variable-length outputs? How do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-markovian processes, where there's one node, A, with two links to itself, B and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used to as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value)?
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post for some reason).
In any case I wasn't sure if I was dreaming or if I was off in left field, so I went and built the damn thing, the autoencoder part, wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH
-
I solved the Monty Hall problem once and for all! Suckers. Of course a computer can't decide if switching or keeping is the best choice. Even Wikipedia states that switching wins. NEVER. And even if that were the case, it's purely how you arranged the labels that determines which one wins. If everyone actually wrote their own code, the conclusion wouldn't be what it is now. Many people probably just changed their code until that false result came out, or had it from the beginning due to lack of experience.
Here is a GOOD implementation: https://pastebin.com/dRiTWQpw
It gives a 50%-ish chance on a choice, as is mathematically correct.
The problem is in the computer simulations: using > or < to check which choice has won. But actually, often no one has won (it's a tie) after running it x times, so you have to filter out the ==.
Then you get the right results. My first version also had a bias, but I refused to accept it and spent 45 minutes on the code instead of 15. This is the end result. And no, with a double ?: in a printf statement I don't expect a prize.
It was a lot of fun actually, I did not expect this from such a stupid 'problem'.
-
After switching distros roughly every 6 months for years, I came to the conclusion that one of the main factors in whether I like a distro or not is its package manager...
Not saying that some are better or worse than others, just that I have my preferences...
How important is the package manager to you guys? Do you even use it via the terminal, or are you using a GUI (in which case it doesn't really matter, does it?)...
Kind of a random question, but it would be interesting for me to know...
I like pacman, not even sure why, it just feels right to me, and apt-get just because I know it best 😅