Okay, I have a desktop and a laptop. I don't think that's surprising.
I do sync the contents of both via git. Also not surprising.
But I thought, hmm, I hate having to do temporary git commits. Stuff like
git add .
git commit -m temp
git push
Just so I can remove it later via
git reset HEAD^
I hate it because it forces me to force push. So, how do I sync stuff I do not want to commit yet?
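For what it's worth, the force-push pain goes away if the temp commits live on a scratch branch instead of the real one. A sketch, with the remote and branch names being assumptions:

```shell
# park work-in-progress on a throwaway branch and push it normally
git switch -c wip-sync
git add -A
git commit -m "wip: sync state"
git push -u origin wip-sync

# on the other machine:
#   git fetch && git switch wip-sync

# once the work is real, squash it into one proper commit on main
git switch main
git merge --squash wip-sync
git commit -m "proper commit message"
git push origin --delete wip-sync   # drop the scratch branch, no force push needed
```

Still commits, though, which is exactly what I wanted to avoid.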
Well, I just set up an instance of owncloud. Was easy. 20 minutes and everything is running. Can recommend. But...
For some reason it doesn't work. It syncs stuff just fine... But it also syncs my .git directory... I thought it wouldn't be a problem.
Saves me a pull. Don't have to pull what's synced, right? Also setting up new projects should be terribly simple. Just add it normally. So, git just versions and does pipelines. And I copy everything inside the git directory over.
Also allows me to have more private .git/info/exclude files and hooks...
But for some reason... everything is synced. Dot-files are being synced as well. Everything works... But running git status on one side tells me everything is committed... Doing it on the other side, it tells me there are new files.
How is that possible??? I kind of expected that even a branch checkout would be synced... Was curious if that would lead to issues, but I didn't expect it just not recognizing changes. Git doesn't hold projects in memory, does it? Nah, that doesn't make any sense. So, why does git status disagree? Git log is identical... Git status is not...
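One suspect worth testing: git status doesn't only look at the object store, it also reads .git/index. If the sync clobbers or lags behind on that one file, you get exactly this split view. Easy to fake locally (throwaway repo, made-up file name):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
echo hello > file.txt
git add file.txt
git -c user.email=a@b.c -c user.name=me commit -qm init

rm .git/index        # simulate a sync that dropped or clobbered the index
git status --short   # now: "D  file.txt" (staged delete) plus "?? file.txt" (new file)
git log --oneline    # ...while the log stays completely identical
```

Same log, different status, no magic required.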
It makes no bloody sense.
-
I hate copyright disclaimer on the top of code files.
I also hate lint-exclude comments.
For the same reason: both have nothing to do with the code. Both are talking to a system that wraps around our code. Either the linter, which is not part of our code but part of our continuous integration pipeline, or the legal system that governs how we live together in whatever country we're in.
But for the linter comments and code checker comments and so on, I get it. They are functional. And I don't know a better way to attach information to a line of code than writing it into the code.
But... Does anyone know if the Copyright claim is even valid? Is it functional?
Don't get me wrong. I understand that code has copyright, but is writing it in code even a good method for it?
First I copyright it to myself or the company that I am working for. But that's the norm, isn't it? It's not like someone can look through my code and say: "He forgot to write it on top of this file, we can steal this file."
So if it was missing it wouldn't change anything. It's just there so it is harder for people to claim they didn't know it was copyrighted... Was that even a legal defense in the first place? "Your honor, I was unaware that I violated a valid copyright" seems like the words with which you'd lose a trial.
But then you upload it to Github. And you choose a license. And... it contradicts what you've written in the file. Now that sounds like a good legal defense: "I was looking at that statement, not at this statement." And if you change your mind, you have to find hundreds of copyright notices to change.
Not to speak about the legal system. Is the code protected in the USA? China? Netherlands? Germany? Different legal systems... Maybe different rules...
I know little about law, but I cannot imagine that copyright notices at the top of code files have any real legal power. And it is strangely enough not a topic I find a lot about.
I'm starting to believe it's just like when you draw a nice looking vase and somehow scribble van Gogh on it.
-
Alright, I sometimes... Alright, often... almost every night while trying to fall asleep... imagine applications on an NP computer. No, I don't claim there is an NP computer. But still...
Alright, if you don't wanna think of an NP computer... Think of non-deterministic Turing machines, which are NP computers...
Quick recap about the NP rules:
- If you have a problem with a space of candidate solutions, your computer will guess a right answer in O(1), if one exists.
- After guessing an answer, you have to confirm, with a normal deterministic check, that the answer is correct. No unconfirmed answers, no ambiguity.
Anyway... Data compression in an NP computer. I will make a claim that I don't wanna look up or calculate, but think it is correct:
1. There is a number n. If we have any number of bits smaller than or equal to n, we cannot find two combinations of bits such that combination 1 and combination 2 both evaluate to the same md5 hash and have the same length.
2. The given number n is really large, so that at least a few gigabytes, if not terabytes, can be described by it. (Hash collisions are generally allowed, just not between two inputs of the same length within the bit amount of n.)
Now it is possible to send a whole file by just sending its md5 hash and how many bits are in the file (as long as the file is smaller than or equal to n, otherwise slice it). Because the other side can just decompress it by guessing the right bits and confirming the guess by hashing it again.
This would be compressed in O(n) and decompressed in O(n). So it would be extremely fast.
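The whole "archive format" is just digest plus length, and that half even works on a P machine (the file is a made-up example):

```shell
f=$(mktemp)
head -c 1024 /dev/urandom > "$f"   # any file at most n bits long

md5sum "$f" | awk '{print $1}'     # 128-bit digest, always 32 hex chars
wc -c < "$f"                        # plus the length: that is the entire archive

# decompression is the part that needs the NP guess-and-verify step,
# so a deterministic machine has to stop right here
```

Compression ratio: anything down to 128 bits plus one length field.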
I mean, sometimes it is a pity that we don't have NP computers. But given that, with enormous amounts of computing power and/or enough memory, every NP program can be run on a P computer, we can conclude that technically md5 is compression. Even though our computers are far too slow to actually use it as such.
Obviously this is not limited to md5. It's true for other hashes. Just n changes.
-
Something that annoys me about AI discussions:
We often have this explanation that it is not real intelligence. It lacks an inner life. It doesn't wonder.
But most of those arguments are based on a belief. The belief that we have real intelligence. That we wonder.
Just as an example: https://youtube.com/watch/... (two videos from CGP Grey about split brains)
To the best of my understanding, we usually do not produce decisions and reasons at the same time. We decide. When we are asked why, we invent a reason.
This can be shown via contrast MRI. It is also shown in the above video about split brains.
There is this hypothesis that reason developed as a way of non-hierarchical decision-making in groups. Two group members make different decisions. No reason given. They find out they disagree with each other, and their brains come up with defenses for their decisions. Now they can decide which arguments are better. Those decisions are now reasoned.
A different study found that it usually takes up to 15 seconds before the rational part of the brain is activated when hearing an argument you're opposed to. When hearing (or making) an argument agreeing with your own opinion, the rational part of your brain turns on immediately. Also in support of a group communication hypothesis.
Our brains evolved to fool us into believing we make rational decisions based upon reasons. That we are one entity. And that it belongs together. Because that was best for our survival. We take ownership of our decisions.
But in the end, it just makes us believe this intelligent thought has happened.
Now, we examine our inner self, which, as just explained, fools us. And we assume that others have similar inner selves. And we arrive at the belief that we are terribly complicated and have an amazing sense of self.
Oh, and humans run on wetware. Meaning, it is probably very unreliable. We must experience the equivalent of bit flips all the time. We must have great error correction systems. And lots of our human-like tendencies might just be error corrections. (Some forms of creativity, for instance.) It would be simple to add bad internal communication to simulate errors.
Emotions are bad communication as well. But for a different reason. Imagine you have to squeeze a few million states into a few states. Well, you mix and add, but in the end you just get as close as possible. And then an intelligent observer has to find out why you feel dread. Maybe your lizard brain saw a snake. Maybe you realised you're late for an assignment. Same flag. Our cognition has to work out why the flag was raised. Fewer-than-required states are also easy to simulate.
I think the thesis of this rant is: there is a good likelihood that we fool ourselves into thinking our own intelligence is special, and AI is actually far closer to human-level intelligence than we think.
Or in other words, we are internally a Chinese room and we have decided we actually speak Chinese.
Disclaimer: I freely mixed my hypotheses with scientific results. But hey, this is not a thesis, it's a rant.
-
!dev
Feed a man a fish and he'll eat for a meal. Feed a man a chicken and he'll eat for the whole day.
At least if there is no floss around. Why the fuck am I always running out of floss at the most inopportune times?
-
Why is no one ever talking about cli flags in OS X?
For instance: "ls / -lahtr" doesn't work in OS X.
The flags must be directly after the command.
And I have no fucking clue why. The only reason I can think of is that there must be some best practice paper, and for the OS X version they compile a version that complies with those best practices.
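As far as I can tell the real reason is less mysterious: GNU getopt permutes the argument list so options may appear anywhere, while the BSD userland that macOS ships stops option parsing at the first non-option. GNU tools even let you reproduce the Mac behavior on Linux:

```shell
ls / -d                    # fine with GNU coreutils, which reorders the arguments; prints "/"

# POSIXLY_CORRECT turns the permutation off, i.e. behaves like macOS:
POSIXLY_CORRECT=1 ls / -d  # "-d" is now treated as a file name and the command fails
```

So it's not a style-guide thing per program; it's which getopt the whole userland was built against.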
I mean, not all programs do that. But a surprising amount.
And it is so annoying when I use a Mac. I have the habit of first saying what I want to do and finally adding flags at the end as an afterthought.
Recently, my wife couldn't get a curl command from a tutorial to work... Turns out the curl command was written for a Linux system and ended with a "-o filename" argument. Solution: move the argument to the front. Worked like a charm. But if you don't know about this, it might stop you dead in your tracks...
And the strangest thing... I never read anyone talking about it. Complaining about it. Or at least warning about it. Something like "hey, when you're on a Mac, make sure to put the flags in front". Just acknowledging the existence of this.
Why is it that this quirk never made it into public debate?
-
I just googled the "It's not the fall that kills you, it's the sudden stop at the bottom" quote and learned it's from Douglas Adams...
I knew it from the Riddick games... That means Richard B. Riddick once in his life sat down and read "The Hitchhiker's Guide to the Galaxy" or "The Long Dark Tea-Time of the Soul".
That left me mind blown...
And for all of you who have no idea what I am talking about: Old people stuff...
-
Alright,
I recently installed pi-hole...
Everything was immediately perfect.
So, about two days later, I install a linux system... Hadn't had one when I setup my pi-hole. (Well, no Linux with desktop environment...)
So... Now I had error messages in Chrome... Connection change detected. The page didn't load, 3 seconds later it loaded. Many pages had to be reloaded.
And I focused my Google-Fu on issues connecting to pi-hole. Some issues were there, referring to Safari and pi-hole, but none for Chrome or/and Linux.
But what's a pi-hole? A DNS resolver/non-authoritative server and a DHCP server...
Maybe I haven't turned off my router's DHCP server correctly. So, wireshark... "bootp or dns" filter...
All dns communication is perfect, via UDP and from the pi-hole to my machine, not from the router. No DHCP messages from my router either...
Almost accidentally I found a page speaking about this issue. It had nothing to do with the pi-hole. The timing was a coincidence. It had everything to do with IPv6. Somehow the connection keeps switching over. Even worse, after reading that, I remembered I had the same issue in the past. I just forgot.
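In case anyone lands here with the same symptom, this is the Linux off-switch; root is required, and the persistence file name is just one common choice:

```shell
# immediately, for the running system:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# survive reboots:
printf '%s\n' 'net.ipv6.conf.all.disable_ipv6=1' \
              'net.ipv6.conf.default.disable_ipv6=1' |
  sudo tee /etc/sysctl.d/99-disable-ipv6.conf > /dev/null
```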
Turning off IPv6 was the solution. And fuck. Let this be a PSA: "Confirm your bloody assumptions when troubleshooting/debugging or waste time like an idiot... Just like me..."
-
If you use:
apt upgrade -y ; sudo !!
Is that chaotic neutral or simply madness?
Asking for a friend...
-
Alright, I was a little bored...
Rewatching some old flicks and thought I'd write a little script.
Mind you, I had no idea how to do any of the following, but as it turns out it's quite simple.
I wanted to take an image. A background image. Automatically write some text to it. The text, I would take from the RSS feed of my favourite blogger. It's the three latest blog entries, or at least the first 50 characters of their titles.
Then set the newly created background image with text as my desktop background.
And finally do all of that every 5 minutes.
So, I looked up a library for the image manipulation. Looked up how to set a background image from python.
What would you wager was the hardest part of the exercise? If you said every step takes like 5 minutes, except setting up the Windows Task Scheduler, which takes more than an hour, then you're dead on.
That thing seems so... useful, yet so hard to understand. I found a StackOverflow post in which people argued about what the settings meant.
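For the record, the whole GUI can apparently be bypassed with one schtasks line; the task name and script path here are made up:

```shell
# Windows cmd, one line instead of the whole Task Scheduler GUI:
schtasks /Create /SC MINUTE /MO 5 /TN "FeedWallpaper" /TR "pythonw C:\scripts\wallpaper.py"

# the cron equivalent on Linux, for contrast (crontab -e):
#   */5 * * * * /usr/bin/python3 "$HOME/wallpaper.py"
```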
Maybe I am just no Windows person, but hey, that thing seems needlessly complicated.
-
There was a boom and my computer was dead. No power to the motherboard at all. Strong burning stench.
And I have no spare parts to test if the motherboard or the PSU is broken... My money is on the PSU. No visible marks anywhere. But could be both.
It has been roughly ten years... GPU was updated. But besides that, same computer.
Let's see. I'd best order a new PSU and see if it works; if it doesn't, I'll salvage the GPU and build a new computer around it. But hey, that sucks.
-
I am stupid as fuck.
This just happened:
I walk by the kitchen. I think to myself, I should get a cup of coffee. Let's quickly go to my office and grab my mug. I go to my office, grab my full mug, go to the kitchen, look down into my full mug and realise it is full of coffee.
I look around the kitchen for a moment, wondering why I am here when I already have some coffee... Figure out that I am stupid as fuck. Go back to write this rant. And now I'll continue with my work.
-
So, just to recap if you missed the last few episodes: I've been a web developer for years, but I decided to get a degree and go to uni.
Also, I am firmly on the fewer-comments side of the debate about self-documenting code. Even though I usually rephrase it and say method and variable names are comments. Basic idea: if something is unclear, you should leave a comment. But before you leave a comment, take a good look at your method. Can you rename a variable? Maybe the method name? Maybe extract the method into smaller methods so it doesn't need a comment? And only if you fail to do so, leave a comment.
Alright, now that we rehashed that, uni coding makes no bloody sense.
There is code that is abbreviated to the max (or min).
And then, they need everything commented. I mean, why do that? Why call the parameters a and b instead of base and exponent, and then say:
"But write a whole article about it above the method". Like:
a is the base for a power operation.
b is the exponent for a power operation.
return int representing a to the power of b
How about just do this:
public static int power(int base, int exponent).
How is this not the same documentation?
Is it because we're at a uni, a place for smart people and smart people shouldn't have an issue keeping a mental map between the variables and their meaning?
Or is it because they are all mathematicians? All respect to applied mathematics. I mean, that function for exponent calculation, I was not aware it could be that effective. But on the other hand, keep mathematicians away from programming. I get it, writing maths by hand doesn't have IntelliSense, and therefore you don't want to write long variable names. It's an old tradition. Yada yada, yah.
But programming is not maths. And maths shouldn't have to be maths like that. Right naming makes things simpler. It might still be a while until we all LaTeX rather than handwrite and can give maths proper naming schemes, but programming is past that point. Calling the array you hand into a function A and the one you return D makes no fucking sense.
-
I noticed something...
I drink a lot of coffee. I also drink tea. But usually when I want something sweet. So, lots of sweetener.
Sometimes I go and make me a cup of coffee and a cup of tea at the same time. And then, then it happens.
When I want to sip from my cup of tea and I reach for my tea, everything is fine. Same goes when I want a sip of coffee.
But woe is me when I want a sip of tea and accidentally take a sip of coffee. You see, it's not the rapid switching between coffee and tea, it's the expectation of taking a sip of sweet tea and getting bitter black coffee. Suddenly coffee is the most disgusting thing I've ever drunk.
But give me half a minute and then I drink a sip of coffee on purpose and I like it again.
Yet, while I don't expect coffee, I feel like a ten-year-old who stole a sip of coffee from his parents' mug.
So I surmise my frontal lobe has marked coffee as something good and must override the fact that I don't like the taste, or something like that. But to do so it must anticipate coffee. Anyone willing to experiment with that to figure out if that's normal or if I am just weird?
-
Really minor gripe, but how come that on a fresh install of Regolith ping is not installed but mtr is?
I think it's not limited to Regolith... But that's weird, isn't it?
-
My cat is powerless.
I am perfectly capable of operating my computer with my keyboard only. I am perfectly capable of operating most of it with my mouse only.
But my cat can only block one or the other.
Got two cats, though. Hope they won't team up against me.
-
In a continuation of my previous rant:
Alright, KVM is running.
First devRant post from my Windows inside my Linux, with a dedicated pass-through graphics card. So far, it looks like it's working.
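The one sanity check worth running before handing the card to the VM: the GPU (plus its HDMI audio function) should sit in its own IOMMU group. A sketch, assuming the kernel was booted with intel_iommu=on or amd_iommu=on:

```shell
# list every device per IOMMU group; the GPU should share its group with nothing else
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue       # no groups at all: IOMMU is off or unsupported
    group=${dev#/sys/kernel/iommu_groups/}
    printf 'group %s: %s\n' "${group%%/*}" "${dev##*/}"
done
```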
Installing Horizon Zero Dawn to gauge gaming performance. But it looks promising. With a stylish button on the desk that switches between Windows and Linux.
-
I am so tired of Windows.
Latest story. I am doing homework for uni. I write it in LaTeX.
My LaTeX editor is vscode. Because there are great LaTeX plugins which can use a docker container for LaTeX. Also vscode has a vim plugin.
I wanted to synchronize my progress, so I installed GDrive Sync and pointed it to my homework directory.
And suddenly, compiling regularly crashes. And it's Windows' fault.
This is how the plugin uses LaTeX: first it creates some auxiliary files, then it creates the pdf, then it deletes the auxiliary files.
But sometimes GDrive finds the auxiliary files first. Then it opens a file in read mode and uploads the contents. And here's the problem: while it's open, it cannot be deleted. This crashes the plugin. Could have been programmed better, but hey, on Linux it could have been deleted.
Files in Linux are garbage collected. Well, not really, but it has the same effect. When a file is deleted, it disappears immediately, but it is actually only freed once no process has it open anymore. Meaning you could delete something that is being uploaded. It would continue to be uploaded until GDrive is done, at which point the file data is freed. GDrive would see the change and delete the auxiliary file remotely.
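That "garbage collection" is easy to watch from a shell; here fd 3 plays the role of GDrive holding the file open:

```shell
cd "$(mktemp -d)"
echo "being uploaded" > aux.log

exec 3< aux.log   # an "uploader" still has the file open
rm aux.log        # succeeds immediately: the directory entry is gone
ls aux.log        # fails, the name no longer exists

cat <&3           # still prints "being uploaded": the inode lives on
exec 3<&-         # last open handle closed, now the data is actually freed
```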
So, it is inherently better at throwing multiple applications together without them conflicting with each other.
Yesterday, I was finally fed up with all of that and installed Regolith on my system. But I am worried. I don't know what my uni will throw at me. Stuff like Zoom breakout sessions. There is no guarantee that someone won't need something that's only possible in Windows (or only possible with reasonable effort in Windows). Even if it's just turning in an assignment as a PowerPoint presentation.
Plus I want to game. And I have more than just steam games.
Well, anyway. Today is the day where my KVM-switch and second graphics card arrives. Think I have that covered.
Also gives me the opportunity to spin up a separate windows for applications I don't trust.
So, I guess my setup just made a huge leap to a better state.
-
Expectation.
Separate theoretical computer science from practical computer science.
Honestly, create a new speciality: the computer mathematician. Fill it with theoretical computer science, algorithmics and applied mathematics. So, the core of a pure computer science curriculum.
People wouldn't be surprised about what they get.
And then you can have some more application-oriented specialities. I mean, we already have those. And they advertise themselves quite accurately. But pure computer science does not.
-
I should be happy that I am bored, but I am bored, not happy. The paradox of understanding at university.
-
It's great. I'm constantly talking to customers at the moment, because I quit my job and became a student and a freelancer. I love to see them gulp when I finally name my price, and some give me work. And I make in twenty hours what I made in a month before.
That's because I'm expensive now but was dreadfully underpaid before.
-
First day of uni. Quit my job as a developer to become a computer science student.
So many people told me that makes no sense and they're right financially. But I want to.
One day down, 7 more years to go. Old man's learning again.
-
Just thought about FizzBuzz and how, while being dead simple, there is no good solution to it.
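For reference, the boring baseline; "no good solution" mostly means "no elegant one":

```shell
for i in $(seq 1 15); do
    out=""
    [ $((i % 3)) -eq 0 ] && out="Fizz"
    [ $((i % 5)) -eq 0 ] && out="${out}Buzz"
    echo "${out:-$i}"   # falls back to the number itself
done
```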
And then I thought there must be a library for it. And yes, there is: https://pypi.org/project/...
I was never asked in an interview to write FizzBuzz, but if I am, I'll try to use the library and see the look on their faces.
-
I finally have a pair of Bluetooth headsets that fit perfectly in my ears. A cheap knock-off of Apple Earpods.
And they are so amazingly comfortable and hold perfectly in my ear. Just like it is supposed to be.
They became my main headset. Doesn't matter if on my phone or my laptop.
Now, when I stand up from my computer, I always have to do this little dance. It mostly goes like this:
Step 1: Lock my screen
Step 2: Pull out my phone
Step 3: Realising I have forgotten to end the Bluetooth connection with my Laptop
Step 4: Curse bloody murder, because it's surely the ravens' fault
Step 5: Unlock my screen.
Step 6: Open config and disconnect my headset
Step 7: Lock screen again
Step 8: Connect my phone
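On the laptop side at least, steps 5 through 7 already collapse into one command; the MAC address here is made up, `bluetoothctl devices` shows the real one:

```shell
# kick the headset off the laptop so the phone can grab it
bluetoothctl disconnect AA:BB:CC:DD:EE:FF

# and the way back, later:
#   bluetoothctl connect AA:BB:CC:DD:EE:FF
```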
It's such an annoyance. Why does Bluetooth not support force-connecting to a previously connected device? Next vacation, I will write an application that lets me remotely disconnect and connect Bluetooth devices.
-
Why are all of my Bluetooth headsets saying "Your device is disconnect"?
When I connect them, they use the grammatically correct phrase: "Your device is connected."
But on disconnects it says "disconnect" instead of "disconnected." Different headsets from different manufacturers and all of them are doing that on different devices.
Is there something in the Bluetooth specs that requires that grammar error, similar to the orthographic mistake in the Referer header?
-
My private lappy is ageing.
Funny thing about modern computers: it's hardly noticeable. At least in performance. The SSD makes it start fast. And as long as I am not playing any games, I can do everything just as quickly.
But then, there is the keyboard. Some keys getting annoyingly unresponsive. Need to open those keys up. Oh, and yesterday when I unplugged a USB connector, part of the plastic around the port broke away.
But there is one thing I cannot fix: the screen resolution. It's 1366x768. And so many applications already don't show up completely. Or I have to scroll. Or they're unnaturally pressed together.
i3 is a godsend for dealing with screen limitations, but it cannot perform miracles. How did we use screens in the past? I mean, I know intellectually that developers used to develop on smaller screens, hence applications were optimised for smaller screens, yet whenever a modal opens that is bigger than my screen I am kind of amazed this is a thing.
Cannot wait for my new XPS.