Search - "kernel error"
-
Does anyone else have that one guy or gal you work with that's ALWAYS the one to find the weirdest, inexplicable bugs possible? Yup. That's me. Here's some fun examples.
*Unplugs monitor from laptop, causing kernel panic*
*Mouse moves in reverse when inside canvas*
*Program fails to compile, yet compiler blames a syntax error that doesn't exist*
*malloc on the first line of a program causes a segfault*
And here's how the conversation usually goes:
Me: "[coworker], mind taking a look at this?"
Coworker: "Sure.This better not be another one of 'your bugs'. ... ... ... Well, if you need me I'll be at my desk."
Me: "So you know what's causing it?"
Coworker: "Nope. I've accepted that you're cursed and you should do the same."8 -
fork() can fail: this is important
Ah, fork(). The way processes make more processes. Well, one of them, anyway. It seems I have another story to tell about it.
It can fail. Got that? Are you taking this seriously? You should. fork can fail. Just like malloc, it can fail. Neither of them fail often, but when they do, you can't just ignore it. You have to do something intelligent about it.
People seem to know that fork will return 0 if you're the child and some positive number if you're the parent -- that number is the child's pid. They sock this number away and then use it later.
Guess what happens when you don't test for failure? Yep, that's right, you probably treat "-1" (fork's error result) as a pid.
That's the beginning of the pain. The true pain comes later when it's time to send a signal. Maybe you want to shut down a child process.
Do you kill(pid, signal)? Maybe you do kill(pid, 9).
Do you know what happens when pid is -1? You really should. It's Important. Yes, with a capital I.
...
...
...
Here, I'll paste from the kill(2) man page on my Linux box.
If pid equals -1, then sig is sent to every process for which the calling process has permission to send signals, except for process 1 (init), ...
See that? Killing "pid -1" is equivalent to massacring every other process you are permitted to signal. If you're root, that's probably everything. You live and init lives, but that's it. Everything else is gone gone gone.
Do you have code which manages processes? Have you ever found a machine totally dead except for the text console getty/login (which are respawned by init, naturally) and the process manager? Did you blame the oomkiller in the kernel?
It might not be the guilty party here. Go see if you killed -1.
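A minimal sketch of the defensive pattern (plain POSIX C; the code is mine, not from the original article):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == -1) {
        /* fork failed: report and bail instead of stashing -1 as a "pid" */
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* child: sit around until the parent signals us */
        pause();
        _exit(0);
    }
    /* parent: pid is guaranteed > 0 here, so this can never become kill(-1, ...) */
    if (kill(pid, SIGTERM) == -1)
        perror("kill");
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}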
Unix: just enough potholes and bear traps to keep an entire valley going.
Source: https://rachelbythebay.com/w/2014/...
-
!rant
So this year I had a subject at university called "Linux internal architecture", and for the last assignment I had to write a kernel module and interact with it with a separate program written in C.
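For context, "interacting with it" boils down to plain syscalls on a device node; a minimal sketch (the node name is hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];
    int fd = open("/dev/mymodule", O_RDWR);  /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello", 5);                   /* write data into the module */
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("module says: %s\n", buf); }
    close(fd);
    return 0;
}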
Once I had finished and tested the driver, I went on to write the other program, which was supposed to use system calls to read and write data to the module. While debugging this program (~500 lines of code) I reached the level of frustration where you just start printing absurd messages everywhere in your code to see what's wrong. So for example instead of printing "This error happened in this function", my error messages were more like "Fuck this fucking function it doesn't fucking work".
Guess who forgot to delete all those messages before sending the code to the teacher...
Also, if a specific mode is selected, the program enters a while(1) that, apart from doing what it's expected to do, also creates a file in the user's home directory called something like 'motherfucker' and appends the words 'fuck this shit' to it. INFINITELY.
I really really hope this teacher doesn't try to run the program in his own computer, or he's in for a big surprise.
-
Installing Ubuntu in VMWare. After the installation, proceeded to install VMWare tools to get the full resolution.
Shitloads of errors. Kernel build failing, gcc exiting with error code other than 0, all the copying failed. At the end of the process the executable says:
Enjoy,
---The VMWare Team
What the fuck am I supposed to enjoy? My broken fucking Ubuntu in a VM?
-
*gets countless amounts of shit with Windows because of my "nonstandard use"*
WanBLowS fanbois: "Cheap hardware!! Hardware error!! Unstable drivers!!! Can't be anything else, this OS is rock solid."
I really wish that I had your ignorance. I know when I see a shitty OS in front of me. And mind you, it actually ran Linux a while back.. just that I couldn't use my Nvidia GPU in it and had to compile a kernel with all of that crap excluded to make it work decently.. fuck Nvidia. And you know what, it actually did run fucking rock solid!!! But over time I lost the config and X.org doesn't like my dualhead setup all that much, especially the ultrawide display.
So, how about we address this issue for what it is already. THE OS FUCKING SUCKS!!!
-
So I just wanted to log back into windows. Typed in the password. Wrong password...
Then I tried being super accurate while typing and also checked keyboard layout, etc. Still, wrong password.
Then I noticed that the letter p is not working. Shit, keyboard seems to be broken.
On screen keyboard -> p is not working...
What the hell? What kind of error is this?
NT Kernel code has to be something like this:
if (timeSinceLastError > someValue)
    keyboard.p.enable = false;
I guess you could also replace the keyboard error with some random error.
If you encounter this, restart Windows.
-
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use a Deep Learning AMI with a K80 on AWS, totally updated and ready to go, super great, always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get a bunch of CUDA errors, piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize have no space left on 75gb machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
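For anyone fighting the same mismatch errors, a sanity check worth running first (my own sketch, assuming the CUDA runtime and cuDNN dev packages are installed) is to print what's actually loaded before trusting any framework:

#include <stdio.h>
#include <cuda_runtime.h>
#include <cudnn.h>

int main(void) {
    int runtime = 0, driver = 0;
    cudaRuntimeGetVersion(&runtime);  /* version of the runtime library in use */
    cudaDriverGetVersion(&driver);    /* latest CUDA version the installed driver supports */
    printf("CUDA runtime %d, driver supports up to %d\n", runtime, driver);
    printf("cuDNN compiled against %d, loaded %zu\n", (int)CUDNN_VERSION, cudnnGetVersion());
    return 0;
}

If the runtime number is higher than the driver number, or the two cuDNN numbers disagree, no amount of recompiling Flownet will save you.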
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s please throw me a lifeline, in exchange you can have my soul or what little is left of it
-
tl;dr:
The Debian 10 live disc and installer say: Heavens me, just look at the time! I’m late for my <segmentation fault
—————
tl:
The Debian 10 live cd and its new “calamares” installer are both complete crap. I’ve never had any issues with installing Debian prior to this, save with getting WiFi to work (as expected). But this version? Ugh. Here are the things I’ve run into:
Unknown root password; easy enough to get around as there is no user password; still annoying after the 10th time.
Also, the login screen doesn’t work off-disc because it won’t accept a blank password, so don’t idle or you’ll get locked out.
The lock screen is overzealous and hard-locks the computer after a while; not even the magic kernel keys work!
The live disc doesn’t have many standard utilities, or a graphical partition editor. Thankfully I’m comfortable with fdisk.
The graphical installer (calamares) randomly segfaults, even from innocuous things like clicking [change partition] when you don’t have a partition selected. Derp.
It also randomly segfaults while writing partitions to disk — usually on the second partition.
It strangely seems less likely to segfault if the partitions are already there, even if it needs to “reformat” (recreate) them.
It also defaults to using MBR instead of GPT for the partition table, despite the tooltip telling you that MBR is deprecated and limited, and that GPT is recommended for new systems. You cannot change this without doing the partitions manually.
If you do the partitions manually and it can’t figure out where to install things, it just crashes. This is great because you can’t tell it where to install things, and specifying mount points like /boot, /, and /home don’t seem to be enough.
It also tries installing 32bit grub instead of 64bit, causing the grub installer to fail.
If you tell it to install grub on /boot, it complains when that partition isn’t encrypted — fair — but if you tell it to encrypt /boot like it wants you to, it then tries installing grub on the encrypted partition it just created, apparently without decrypting it, so that obviously fails — specific error: cannot read file system.
On the rare chance that everything else goes correctly, the install process can still segfault.
The log does include entries for errors, but doesn’t include an error message. Literally: “ERROR: Installation failed:” and the log ends. Helpful!
If the installer doesn’t segfault and the install process manages to complete, the resulting install might not even boot, even when installed without any drive encryption. Why? My guess is it never bothered to install Grub, or put it in the wrong place, or didn’t mark it as bootable, or who knows what.
Even when using the live disc that includes non-free firmware (including Ath9k) it still cannot detect my wlan card (that uses Ath9k).
I’ve attempted to install thirty plus times now, and only managed to get a working install once — where I neglected to include the Ath9k firmware.
I’m now trying the cli-only installer option instead of the live session; it seems to behave at least. I’m just terrified that the resulting install will be just as unstable as the live session.
All of this to copy the contents of my encrypted disks over so I can use them on a different system. =/
I haven’t decided which I’m going with next, but likely Arch, Void, or Gentoo. I’d go with Qubes if I had more time to experiment.
But in all seriousness, the Debian devs need some serious help. I would be embarrassed if I released this quality of hot garbage.
(This same system ran both Debian 8 and 9 flawlessly for years)
-
I propose that the study of Rust, and therefore the application of said programming language and all of the technology that comprises it, should be undertaken because the language is actually really fucking good. Reading and studying how it manages to manipulate and otherwise use memory without a garbage collector is something to be admired, illuminating in its own right.
BUT going for it because it is a "beTter C++" should not constitute a basis for its study.
Let me expand through anecdotal evidence, which is really not to be taken seriously, but is at the same time what I am using as the basis for my reasoning; please feel free to correct me if I am wrong. For I am a software engineer, yes, I do have academic training through a B.S. in Computer Science, yes, BUT my professional life has been solely dedicated to web development, which admittedly I do not go on about the technical details of with you all because (1) I am not allowed to, and (2) it is better for me to bitch and shit over other petty development related details.
Anecdotal and otherwise non-statistically-supported evidence: I have seen many motherfuckers doing shit in both C and C++ who ADMIT to not covering their mistakes through the use of a debugger. Mostly because (A) using a debugger and a proper IDE is for pendejos, debugging is for putos, GDB is too hard and the VS IDE is waaaaaa "I onLy NeeD Vim", and (B) "if an error had registered then it would not have compiled, no?", thus giving me the idea that the most common issues in the C father/son languages come from user error, no formal training in the language, and a nice cusp of "fuck it, it runs" that leaves behind all sorts of issues stemming from manipulating the realm of the Gods: memory.
EVERY manual and book, going all the way back to K&R, talks about memory and the way in which developers of these 2 languages are able to manipulate and work on it. EVERY new standard of the ISO implementations of these languages deals, through community effort and standard documentation, with new features concerning MODERN usage (meaning, no, the shit you learned 20 years ago won't fucking cut it).
THUS if your ass is not constantly checking what the scalpel of electrical/circuitry/computational representation of algorithms CONDONES in what you are doing then YOU are the fucking problem.
Rust is thus no different from the original idea of the developers behind Go when they stated that their developers are not efficient enough to deal with X language. Rust protects you, because it knows that you are a fucking moron, so the compiler, advanced and well made as it is, will give you warnings about your own idiotic tendencies, which would not have been required had you not been.....well....a fucking idiot.
Rust is a good language, but I feel it is one that came out of the necessity of people writing system level software like a bunch of fucking morons.
This speaks a lot more of our academic endeavors and current documentation than anything else. But to me, the idea of adopting Rust as a better C++ should come from a different point of view.
Do I agree with Linus's point of view on C++? Fuck no, I do not; he is a kernel engineer, a damn good one at that, regardless of what Dr. Tanenbaum believes(ed), but not everyone writes kernels, and sometimes that everyone requires OOP and additions to the language that they use. Else I would be a fucking moron for dabbling in the dictionary of languages that I use professionally.
BUT in terms of C++ being unsafe and insecure and a horrible alternative to Rust, I personally do not believe so. I see it as a powerful white canvas, on which you are able to paint software to the best of your ability, WHICH then requires thorough scrutiny from the entire team. NOT a quick replacement for something that protects you from your own stupidity BY impeding the use of what are otherwise unknown "safe" features.
To be clear: I am not diminishing Rust as the powerhouse of a language that it is; I myself am quite invested in the language. But I do not feel the reasoning behind articles claiming it as the C++ killer.
I am currently heavily invested in C++ since I am trying a lot of different things for a lot of projects, and I have been able to discern multiple pain points and unsafe features. The main reason for this is documentation (your mother knows C++) and tooling: IDE support, debugging operations, the plethora of resources that come with it; and I have been able to push a lot of good dealings out to my secret project. WHICH I will eventually replicate with Rust to see the main differences.
Online articles stating that one will delimit or otherwise kill the other are, well....wrong to me. And not the proper approach.
Anyways, I like big tits and small waists.
-
I found this on a wiki with Haskell Humor... it's interesting...
How to Shoot Your Self in the Foot With Haskell: Putting the unsafe in unsafePerformIO!
You shoot the gun, but the bullet gets trapped in the IO monad.
Couldn't match expected type 'Deer' against inferred type 'Foot'.
While compiling your program the compiler produces a type error long enough to overflow a kernel buffer, overwrite the trigger control register and shoot you in the foot.
After trying to decipher the type errors from the compiler, your head explodes.
After you've finally found a way to circumvent the type system and shoot yourself in the foot, Oleg appears out of nothing and shoots you in the foot for coming up with it before him.
You shoot the gun but nothing happens (Haskell is pure, after all).
Your foot is fine, until you try to walk on it, at which point it becomes mangled.
You have a shootFoot function which you've proven correct. QuickCheck validates it for arbitrary you-like values. It will be evaluated only when you end up at the hospital. You hope this doesn't come to pass, as it actually returns a bullet-ridden copy of yourself and you don't want to be garbage-collected.
foreign import ccall "shootparts.h shootfoot" shoot_foot :: Gun -> Programmer -> IO ()
shootSelfInFoot = unsafePerformIO . shoot . foot $ self -- Shoot self in foot 0 or more times depending on evaluation order
No instance for (Target Foot)
arising from use of `shoot' at SelfInflictedInjury.hs:1:0
Possible fix: add an instance declaration for (Target Foot)
In the expression: shoot foot
You go to shoot yourself in the foot but the bullet is in the ST monad and the gun is in the IO monad, so you can't.
You ask Haskell to shoot you in the foot but by the rules of lazy evaluation you don't need the result yet so it doesn't happen.
You decide to shoot yourself in the foot but get distracted devising a ballistics algebra and wondering if you can do the calculations in the type system.
You want to shoot yourself in the foot but realize there is no Gun datatype so use Arrows instead.
You shoot in the direction of your foot, but since you are inside the STM monad you can just retry until you figure out what to do.
You shoot yourself in the foot, but you are perfectly fine as long you just don't evaluate the foot.
You shoot yourself in the foot, but nothing happens unless you start walking.
Don't forget about memory consumption! If you don't look, the bullet causes heap overflow. If you look, the bullet causes stack overflow.
You *appear* to have deliberately shot yourself in the foot, and yet your program actually runs perfectly OK due to lazy evaluation. (So long as you remember to not look at your foot...)
You aim the gun at your foot, pull the trigger and remove the clip. When you look at your undamaged foot, the hammer clicks on an empty barrel.1 -
OpenCL...
Okay so I'm completely new to OpenCL and I just put some stuff together to get a simple GPU kernel running. Well, that worked pretty well.
The reason I got into OpenCL was that I wanted to do some simple SHA1 cracking on my GPU. What I did was, I got a fast implementation of SHA1 from the internet, which works perfectly in normal C++, but for OpenCL I had to rewrite some things. So I replaced all the memset and memcpy and so on with simple for loops and it still worked. Now, this should work in OpenCL too, I thought. God, I was wrong!
Somehow the clBuildKernel got executed normally, but when I try to access the returned value (the error code) I get an Access Violation? It just doesn't make any sense to me?
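For what it's worth, the flow I should probably be using (a sketch of the standard clBuildProgram / clGetProgramBuildInfo pattern, not my actual code) looks something like this:

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Build a program and dump the build log if the compile fails. */
static void build_or_die(cl_program prog, cl_device_id dev) {
    cl_int err = clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    if (err != CL_SUCCESS) {
        size_t len = 0;
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, 0, NULL, &len);
        char *log = malloc(len);
        if (log) {
            clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, len, log, NULL);
            fprintf(stderr, "build failed (%d):\n%s\n", (int)err, log);
            free(log);
        }
        exit(EXIT_FAILURE);
    }
}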
Well, I will try some stuff tomorrow again and I will find a solution for sure, but still, until now I just don't understand it.
-
Users running Linux on laptops with Intel processors should avoid Linux kernel 5.19.12 due to an error that might physically harm the display. Fortunately, kernel 5.19.13 has already fixed the issue. Versions 6.0 and 6.1 have also begun rolling out with many significant changes.
-
Avoid ACPICA if at all possible. It's one garbage tier cluster fuck of bad design, horrible documentation and downright misleading and wrong code
It's meant to consist of an ASL compiler, disassembler, debugger, dumper, various user space utilities and a kernel resident OSPM implementation *if* you can figure out what belongs to what. Even just compiling this pile of trash is a mystery in itself. Think you need the source files in source/common? EEEEH, wrong. Well, at least partially, since most of them seem to be for the user space stuff..? Other ones *are* needed on the other hand. At least the disassembler and/or debugger and/or dumper components seem to reference them. Not that I could figure out how to compile those anyways. The real path to your goal seems to be to ignore a seemingly arbitrary subset of source and header files until your linker stops complaining
There's also a bunch of configuration defines, some of which *you* define, some defined *for* you, based on again others. Of course most of them do stupid shit. Enabling the debugger automatically enables debug logging. Enabling the disassembler force enables debug allocation tracking... What?
The code itself isn't of much help either. Looking in "os_specific/service_layers" you find what looks to be reference implementations of acpica functions in certain os' like windows and unix. Of course I had a look because AcpiOsReadMemory is supposed to read physical memory and I don't know how I would even implement that. But hey, osunixxf.c (xf for interface... of course) should tell me. I'll let you see for yourself in the attached image. Apparently it does fuck all and just returns AE_OK. No error, no logging, no nothing. Just ok. As you can imagine, AcpiOsWriteMemory doesn't do much more either.
...okay so maybe physical memory accesses aren't actually used and these functions are some sort of relic from past times? Nope! They are absolutely necessary for doing low level device interaction. WTF. So finally I went to the linux source and checked how *they* implemented them, and just as I thought, these functions are anything but no-ops...
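For contrast, a hedged sketch of what a working implementation roughly has to do (map_phys/unmap_phys are hypothetical stand-ins for whatever your kernel provides, e.g. ioremap/iounmap on Linux):

#include "acpi.h"

ACPI_STATUS AcpiOsReadMemory(ACPI_PHYSICAL_ADDRESS Address, UINT64 *Value, UINT32 Width)
{
    /* map the physical range, do one width-sized volatile read, unmap */
    volatile void *virt = map_phys(Address, Width / 8);  /* hypothetical helper */
    if (!virt)
        return AE_ERROR;
    switch (Width) {
    case 8:  *Value = *(volatile UINT8 *)virt;  break;
    case 16: *Value = *(volatile UINT16 *)virt; break;
    case 32: *Value = *(volatile UINT32 *)virt; break;
    case 64: *Value = *(volatile UINT64 *)virt; break;
    default: unmap_phys(virt, Width / 8); return AE_BAD_PARAMETER;
    }
    unmap_phys(virt, Width / 8);
    return AE_OK;
}

Anything, in other words, except returning AE_OK and walking away.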
...So for what fucking reason do these stupid interface implementations even exist but to purposefully mislead you?? They aren't used for fucking anything! As far as I know Windows doesn't even *use* ACPICA and Linux have their own fork with working implementations... They just sit there, just to tell you how to NOT do it
So that's some of my thoughts about ACPICA. Note that I haven't even used it as a library yet, I just got it to compile and link and it already fucked with me this much.
There's also so much more I didn't mention, like that you *have* to modify the ACPICA source in order to get your own platform header working (else #error) even though the docs explicitly instruct you not to, but you get the point
Don't use ACPICA if you don't have to. Save your sanity for something that's worth it.
-
WTF is wrong with Manjaro, every package I install gets "error while loading shared libraries". VSCode, chromium, even yaourt?! I mean this just doesn't even fucking work.
I downgraded my kernel and now I get the same errors and when I -Syu, I get a thousand "Warning: x is newer than y"
Linux.
-
I don't understand some developer's thought processes when they fix a bug/issue.
Let's say the error is -> "Cannot read property id of undefined".
My first thought is to add a check for undefined and null and figure out if further code should be executed if a null or undefined is encountered, depending on what the code is supposed to do.
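In C terms, that first instinct is nothing more than a guard before the dereference (a sketch; the names here are hypothetical):

struct item { int id; };

int handle_item(const struct item *it) {
    /* bail out early instead of dereferencing a null pointer */
    if (it == NULL)
        return -1;  /* or log and skip, depending on what the caller expects */
    return it->id;
}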
But some devs are like, "Yesterday the sunrise was at 5:30 AM, Earth's rotational axis is tilted at 15 degrees to the left, my aunt asked me how I am doing today, so therefore the bug fix is required at line 65,456 of this particular kernel file".
And they implement it, and it WORKS.
Weird.
-
Well, the company I work for was too cheap to buy a new graphics card, so we got an old one (Nvidia) and had to get it running with working drivers.
We checked, and the drivers had been End Of Life since, I think, 2015 or 2011, so we downgraded the hell out of the Ubuntu 20.04 kernel and compilers, and the driver installation still failed. We had to get gcc 7.2, but there weren't any PPAs with it available anymore, and the installation still failed without giving a proper error message. 11 hours later we decided to go home.
The next day we got an email saying we had wasted the company's time and money
but we were asked to do this.... (two working students getting minimum wage)
-
I really hate PHP frameworks.
I also often write my own frameworks, but proprietary ones. I have two decades of experience doing without frameworks, writing frameworks and using frameworks.
Virtually every PHP framework I've ever used has causes more headaches than if I had simply written the code.
Let me give you an example. I want a tinyint in my database.
> Unknown column type "tinyint" requested.
Oh, Doctrine doesn't support it, and won't fix. Doctrine is a library that takes a perfectly good, feature-rich, powerful-enough database system and nerfs it to the capabilities of MySQL 1.0.0, for portability and because the devs don't actually have the time to create a full ORM library. Sadly it's also the de facto choice for certain filthy disgusting frameworks whose name I shan't speak.
So I add my own type class. Annoying but what can you do.
I have to try to use it and to do so I have to register it in two places like this (pseudo)...
Types::add(Tinyint::class);
Doctrine::add(Tinyint::class);
Seems simple enough, so I run it and see...
> Type tinyint already exists.
So I assume it's doing some magic loading based on the directory, and comment out the Type::add line to see.
> Type to be overwritten tinyint does not exist.
Are you fucking kidding me?
At this point I figure out it must be running twice. It's booting twice. Do I get a stack trace by default from a CLI command? Of course not because who would ever need that?
I take a quick look at parent::boot(). HttpKernel is the standard for Cli Commands?
I notice it has state, uses a protected booted property but I'm curious why it tries to boot so many times. I assume it's user error.
After some fiddling around I get a stack trace but only one boot. How is it possible?
It's not user error, the program flow of the framework is just sub par and it just calls boot all over the place.
I use the state variable and I have to do it in a weird way...
> $booted = $this->booted;
> parent::boot();
> if (!$booted) {
>     doStuffOnceThatDependsOnParentBootage();
> }
A bit awkward, but not life and death. I could probably just return, but believe it or not the parent is doing some crap if already booted. A common ugly practice, but one that works, is to call doSomething anyway and have the actual work gated on the state.
The thing is, Doctrine does use TINYINT for bool, and it gets all super confused now running commands like updates. It keeps trying to push changes when nothing changed. I'm building my own schema differential system for another project and it doesn't have these problems out of the box. It's not clever enough to handle ambiguous reverse mappings when single types are defined, and it should be possible to match the right one; heck, both are fine in this case. I'd expect ambiguity to be a problem when reverse engineering, not when comparing a schema to an exact schema.
This is numpty country. Changing TINYINT UNSIGNED to TINYINT UNSIGNED. It can't even compare two before-and-after strings.
There are a few other boot hooks I could use, but who cares. The internet seems to want to use that boot function. There are also init stages missing. Believe it or not, there's a shutdown and reboot for the kernel. It might not be obvious, but the Type::add line wants to go not in the boot method but in the top-level scope along with the class definition. The top-level scope is run only once.
I think people using OOP frameworks forget that there's a scope outside of the object in PHP. It's not ideal, but it does the trick, given the functionality is confined to static only. The register command appears to have its own check and no-op, or simply overwrite, if the command is issued twice, making things more confusing, as registering the type was working before to merely alias a type to an existing type so that it could be detected from SQL when reverse engineering.
I start to wonder if I should just use columnDefinition.
It's like this constantly, on a daily basis, using these pretentious stuck-up frameworks and libraries.
It's not just the palaver, which in this case is relatively mild compared to some of the headaches that arise. It's that if you use a framework you expect basic things out of the box, like, oh I don't know, support for the byte/char/tinyint/int8 type, and a differential command that's able to compare two strings to see if they're different.
Some people might say you're using it wrong. There is such a thing as a learning curve, and this one goes down: learning all the things it can't do. It's cripplesauce.
-
1. Update some packages
2. Linux machine stops working soon after
3. Panic
4. Go get windows machine to help me troubleshoot the issue
5. Starts windows update on startup
6. Panic some more
-
Anyone else getting a bug with every OpenGL application on Linux? Gives me an error something like "X error of failed request: BadValue".
Weird - maybe it happened in the latest kernel update? I'm using 4.14
-
Been trying to install myself a Gentoo, but it's been more like the mode of broken packages than the godmode of Linux... I mean, I see that some packages break if I am trying to compile via musl (not fully supported yet) or via uclibc. But please. CAN'T YOU JUST FUCKING TEST THE PACKAGES BEFORE PUSHING TO LIVE? Seriously. I just wanna install a system with i3 and lightdm for a start. But do you think I could build even the first 20 packages WITHOUT A FUCKING ERROR MESSAGE?! FUCK NO. I mean it's a clean install - nothing should be blocking - let's wait a day.
*one day later*
Fuck. Shit doesn't work now either.
*gets himself a new tarball*
Wow now it works.... Or not. 4 packages later it failed again. And like that it continues.
Gentoo isn't even running on that new software. BUT IT STILL WON'T BUILD ANYTHING TO EVEN LET ME CONTINUE BUILDING A FUCKING KERNEL AND SETTING THAT SHIT UP.
Now I am totally frustrated - deleted my efivars once because I forgot to unmount /sys from the chroot - after a few days of trying. I tell myself: why not just Arch? It always worked.
Okay, then reboot to Windows and get an Arch live system.... if only my Windows boot entry didn't disappear again.
-
So I salvaged a computer that was about to be thrown out by IT, because it works perfectly fine and would be a waste to lose.
The (current) problem is, there is no built-in Wi-Fi adapter, so I had to order a USB adapter to plug into the machine.
Fortunately enough it supports Linux, but it comes with a CD containing the _source_ of the driver, which we are supposed to build. Now what's the problem with that?
First problem: building needs all sorts of build tools, starting with gcc and make. Since it's a fresh install, though, I _cannot_ install those normally because -- you guessed it -- I don't have a WiFi connection, which is why I needed the bloody adapter in the first place, so I spent hours trying to fetch the binaries from the apt repositories using another computer and bringing them over via USB.
Once that was (more or less) successful, the next problem came around. Second problem: the instructions clearly state to run make, but there is no fucking makefile anywhere, so that obviously fails spectacularly. What _is_ there are some bash scripts, so I try running those.
Now, just when I think it's finally done (one of the scripts has been running for a while and seems to work) the compiler dies with an error: the fucking driver won't build for the current kernel version. And not just that, but it is clear nobody is using basic things like include guards because gcc kept screaming at me about the same macros being defined over and over due to header file re-inclusion. Like, seriously? Come on!
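For reference, the missing fix is a three-line classic; the macro name below is hypothetical:

/* vendor_wifi.h - an include guard stops the macro-redefinition spam */
#ifndef VENDOR_WIFI_H
#define VENDOR_WIFI_H

/* ... declarations go here ... */

#endif /* VENDOR_WIFI_H */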
Long story short, the fucking adapter is going back to the seller; let's see if the next one I order is more civilized.
-
my biggest lol moment was talking to some hardcore always-bring-your-own-algos-and-ds-to-the-table, always-going-to-the-core-of-the-world devs, better-than-thou my-shit-is-better-than-yours asses, my-point-of-view-is-the-best-in-the-world devs, cite-papers-and-algos-at-you devs, shit like that, who were making way less money than some dudester Ruby on Rails dev sitting at the conf sipping on his drink.
Really, all that comp sci shit is legit and fun as fuck. But if you are not getting the green for it and living the life, then what is the fucking point? Even then, those that are are normally fucking morons. This shit ain't some art, or a personality trait; it is a job.
Fuck me, I am so tired of the whole Hacker News reddit-ass SO mentality of devs; then again, I am also tired of mfkers with no knowledge of actual engineering publishing Medium articles left and right.
As long as you cannot take human error out of this computer equation, you will always have a shitfest of opinions, because regardless of correctness you will always have a shitfest as long as some dickwad has a difference of opinion in an otherwise young-ass scientific field such as computer science.
Language wars, framework wars, editor wars, you name it. This field is so fucking broken and so full of shit it ain't funny, made less comedic by the fact that it runs the world.
If we are going to die, it will be by some massive kernel panic made possible because somewhere, some morons could not merge a repo due to a conflict in ideas. As if being right was going to bring you closer to not being an ugly fat nerd and getting pussy, or dick, whatever your flavor is, you fucking losers.
-
Let me start this off by stating I'm a Java dev, and a noob with C++.
Thought it'd be cool to learn some OpenCL, since I want to do some maths stuff and why not learn something new.
So I sat down, installed the Nvidia proprietary drivers, broke my X.org server, purged, reinstalled, rebooted, and after a while I got stuff sorted out.
Then on to my IDE. I use CLion and it uses CMake. C++ noob knows shit about CMake, so I struggle for two hours trying to figure out wtf is going on with the OpenCL libs and why they're only partially detected. Fml.
Finally, everything is configured and I'm set. I start working on a Hello World program using OpenCL. Finish it in 20 mins, all good. No output. Do some googling, check my program a million times. Nothing wrong here. Check the kernel, everything as in the tutorial.
After a while I start checking the error codes reported by OpenCL (which I had no clue was a thing) and I get some code saying the program was not created properly (to run the kernel). No fucking clue what's up with that. Google around, find another tutorial, rewrite my code in case I'm using outdated code or something. Nothing.
Fast forward an hour, I find out that OpenCL has logs! So I grab some code from the website I found it on, and voila, I finally get some info on what's going on.
Get a load of this bs.
In the kernel file, so that OpenCL knows that it's a function to run, you have to put __kernel. But in all the places I read, it said to put it as _kernel.
Add the underscore, compile, run and everything is perfect.
Then I tried just putting 'kernel'. Also compiles and runs fine.
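Illustrated (assuming any standard OpenCL C compiler; both spellings below are real keywords, the tutorials' single-underscore one is not):

/* valid: double-underscore qualifier */
__kernel void works(__global int *out) { out[0] = 42; }

/* also valid: OpenCL C accepts the bare keyword too */
kernel void also_works(__global int *out) { out[0] = 43; }

/* "_kernel" (single underscore) is not a keyword, so the compiler sees
   an unknown identifier and the program fails to build */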
Two hours, and my program was fixed by adding an underscore.
Then again, it was OpenCL that was being shitty with its styling enforcement or whatever the hell the underscore business is. But screw it. C++ eats shit too for this. Sure, maybe Java babies you by giving you the exact error and position that the error took place at. But at least that way you don't waste hours of your life chasing invisible bugs 😠😠
I'm going to eat some food... Too much energy was consumed fighting the system... Then I'll get back to OpenCL because 😇 but that doesn't make it less bs.
-
finally got a Powerline set, so I can actually *use* my desktop upstairs.
...wait, my ethernet isn't working.
look for the chipset's proper driver package...?
"oh it installs the wrong driver by default, which doesn't work on kernel 5.x. Use <other driver, DKMS>"
"oh it won't see your device? downgrade to <version>"
DKMS error: "<snip>/linux-headers-5.10<whatever>/Documentation/Makefile" doesn't exist
fuck it, plug laptop into powerline adapter
less useful than current situation
I'm going to fucking cry
-
Urgh... No exceptions in Rust annoys me. Now you only have the choice between "this didn't work, please handle this error, thank you ^-^" and "you fool, prepare for annihilation". So basically, if anything remotely serious happens, your program's dead and there's nothing you can do about it.
I don't get why people have this hate for exceptions. Every time a new language gets made it's always either "ew, it has exceptions" or "it's so nice, it doesn't even have exceptions". NOOO! They can deal with serious situations in the best possible way, and they can be statically checked (so no "but they're so complex and unpredictable" stuff, please). If you can expect an exception they shouldn't be used in the first place (even though they are absolutely no less good than Option return types or whatever, just different), but in cases when it's impossible to predict an error they really shine. And not having them makes your language worse.
If a device driver accesses illegal memory it should throw an exception, so instead of the computer shitting the bed, first the offending function has a chance to resolve the problem at its root, then a few functions up the call stack the general control functions of the device driver can handle it and restart the operation if applicable, and even if the driver fails to handle it, the OS can jump in, restart the driver, log an error and do whatever. It's absolutely beautiful: this hierarchical ramp from near the accident site up to more high-level operations code ensures the error can be caught at the right level of abstraction without introducing a lot of boilerplate. If everything fails and nobody can handle it *then* the program or kernel or whatever can panic.
-
Ok so that's my plan: find a kernel with a HUGE amount of drivers and a high version.
I built a small OS based on Linux
-- kernel version 5.0.2 from Plop Linux,
many libraries added 'by hand' -- packages from the apt repos of Debian & Ubuntu, unpacked into the system with ArchiveManager,
has a GUI, but it's called XFree86 (looks strange when a very old app is running on kernel 5)
So, without compiling, I can make an OS.
But I found that Plop didn't compile the rtl8188eu module, which makes Linux support some specific network cards.
I have no professional compiler, only a tiny C/C++ compiler called TinyCC (aka tcc), but for my PC (CPU freq = 800MHz) it seems impossible to compile the module by myself.
And then I downloaded a 5.2 kernel with modules from kernel.ubuntu.com, but when I tried to mount my disk (vfat partition) I got some errors like "IO charset not found", and then I replaced it with the Xanmod kernel, but it also reported an error that said "Invalid Arguments" - but I checked /proc/filesystems, and it supports it.
So what can I do? Are there any pre-compiled kernels & modules with 'full common support'?
I tried kernel 4.4 (from Ubuntu 16.04 LTS) just now, but the driver crashed when wpa_supplicant tried to initialize the device.
-
I’m new to using gdb to debug. It could detect the places where I was getting SIGSEGV randomly. But there are also rare cases when the program just gets segmentation fault as soon as it starts, and it when that happens, even my gdb gets frozen and I have to shut down my terminal altogether.
What’s with this ? Is this some sort of kernel error (like Bus error) ? I’m using macOS Catalina and gdb 9.1.2 -
Why do I get the message "Input/Output Error" when mounting /dev/mmcblk0p2....
Nothing crashed until I booted twice to enter different systems, Android and MeeGo (via Ubiboot). And when I powered off my Android and tried to boot into MeeGo, it said "Boot OS/kernel selection failed! \n Please run a maintenance boot"
I already ran a maintenance boot...
And I entered Ubiboot's telnet (low battery), and found that /dev/mmcblk0p2 failed to mount with an "Input/Output error", but I dd'd this partition (copying the data into a file) and everything was fine...
And after that, my phone turned blank...
So can anybody tell me how to fix this? It's not like a hardware problem; I think there's something wrong with the data in the partition, like the header...