Search - "network error"
-
Had to debug an issue,
*ssh user@domain*
"some wild network connection issue"
*hmm weird.. *
*checks everything again*
*hmm seems alright.. *
*tries again*
*same damn error*
*ssh -v user@domain*
*syntax error thingy on the -v part*
😮
*messages co-worker asking what the fuck could be going on*
"ey mate check your aliases 😂"
*alias*
*alias ssh="echo {insert network connection issue}"*
*loud laughing from the co-worker I messaged*
MOTHERFUCKER 😆
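(For the curious, the prank and its antidote, sketched in bash; the exact echoed message is whatever he stuffed into that alias:)
# the prank, more or less:
alias ssh='echo "some wild network connection issue"'
# how to catch it next time:
alias ssh                 # prints the alias definition if one exists
type -a ssh               # shows the alias shadowing /usr/bin/ssh
command ssh user@domain   # runs the real binary, bypassing the alias
-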
--- HTTP/3 is coming! And it won't use TCP! ---
A recent announcement reveals that HTTP - the protocol used by browsers to communicate with web servers - will get a major change in version 3!
Before, the HTTP protocols (versions 1.0, 1.1 and 2) were all layered on top of TCP (Transmission Control Protocol).
TCP provides reliable, ordered, and error-checked delivery of data over an IP network.
It can handle hardware failures, timeouts, etc. and makes sure the data is received in the order it was transmitted in.
Also you can easily detect if any corruption during transmission has occurred.
All these features are necessary for a protocol such as HTTP, but TCP wasn't originally designed for HTTP!
It's a "one-size-fits-all" solution, suitable for *any* application that needs this kind of reliability.
TCP does a lot of round trips between the client and the server to make sure everybody receives their data. Especially if you're using SSL. This results in a high network latency.
So if we had a protocol which was basically designed for HTTP, it could help a lot in fixing all these problems.
This is the idea behind "QUIC", an experimental network protocol, originally created by Google, using UDP.
Now we all know how unreliable UDP is: You don't know if the data you sent was received nor does the receiver know if there is anything missing. Also, data is unordered, so if anything takes longer to send, it will most likely mix up with the other pieces of data. The only good part of UDP is its simplicity.
So why use this crappy thing for such an important protocol as HTTP?
Well, QUIC fixes all these problems UDP has, and provides the reliability of TCP but without introducing lots of round trips and a high latency! (How cool is that?)
The Internet Engineering Task Force (IETF) has been working (and still is) on a standardized version of QUIC, although it's very different from Google's original proposal.
The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC.
It's a new, updated version of HTTP built for QUIC.
Now, the chairman of both the HTTP working group and the QUIC working group for IETF, Mark Nottingham, wanted to rename HTTP-over-QUIC to HTTP/3, and it seems like his proposal got accepted!
So version 3 of HTTP will have QUIC as an essential, integral feature, and we can expect that it no longer uses TCP as its network protocol.
We will see how it turns out in the end, but I'm sure we will have to wait a couple more years for HTTP/3, when it has been thoroughly tested and integrated.
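If you ever want to poke at QUIC yourself, newer curl builds expose it behind a flag. A sketch, assuming your curl was compiled with HTTP/3 support and the server already speaks it (Google's and Cloudflare's do):
# -I fetches headers only; --http3 skips TCP entirely and talks QUIC over UDP
curl --http3 -I https://www.google.com/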
Thank you for reading!
-
!rant
I was in a hostel in my high school days... I was studying commerce back then. Hostel days were the first time I ever used Wi-Fi. But it sucked big time. I barely got 5-10Kbps. It was mainly due to overcrowding and download accelerators.
So, I decided to do something about it. After doing some research, I discovered NetCut. And it did help me for my purposes to some extent. But it wasn't enough. I soon discovered that my floor shared the bandwidth with another floor in the hostel, and the only way I could get the 1Mbps was to go to that floor and use NetCut. That was riskier, and I was lazy enough to convince myself to look for a better solution rather than go to that floor every time I wanted to download something.
My hostel used Netgear's routers back then. I decided to find some way to get into those. I tried the default "admin" and "password", but my hostel's network admin knew better than that. I didn't give up. After searching all night (literally) about how to get into that router, I stumbled upon a blog that gave brief info about the "telnetenable" utility, which could be used to access the router from the command line. At that time, I knew nothing about telnet or the command line. In the beginning I just couldn't get it to work. Then I figured I had to enable telnet from Windows settings. I did that and got a step further. I was now able to get into the router's shell by using the default superuser login. But I didn't know how to get the web access credentials from there. After googling some and a bit of trial and error, I got comfortable using the cd, ls and cat commands. I hoped that some file in the router would have the web access credentials stored in cleartext. I spent the next hour just using cat to read every file. Luckily, I stumbled upon NVRAM, which is used to store all the config details of the router. I went through all the output from cat (it was a lot of output) and discovered http_user and http_passwd. I tried those in the web interface and when they worked, my happiness knew no bounds. I literally ran across the floor screaming and shouting.
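(Reconstructed from memory, the whole dig looked roughly like this; the Gearguy/Geardog pair was the stock debug login on those models as far as I can tell, and whether you get an nvram utility depends on the firmware:)
# send the magic packet that unlocks the router's telnet console
telnetenable 192.168.1.1 AABBCCDDEEFF Gearguy Geardog
telnet 192.168.1.1
# then, inside the router's shell:
nvram show | grep http_    # http_user / http_passwd, in cleartext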
I knew nothing about hiding my tracks and soon my hostel’s admin found out I was tampering with the router's settings. But I was more than happy to share my discovery with him.
This experience planted a seed inside me and I went on to become the admin next year and eventually switch careers.
So that’s the story of how I met bash.
Thanks for reading!
-
We were all 16 once right? When I was 16, my school had a network of Windows 2000 machines. Since I was learning java at the time, I thought learning batch scripting would be fun.
One day I wrote a script that froze input from the mouse and displayed a pop up with a scary “Critical System Error: please correct before data deletion!!”. It also displayed a five minute countdown timer, after which the computer restarted.
I may or may not have replaced the Internet Explorer icon on the desktop with a link to my program on the entire student lab of computers. Chaos.
-
Smart India Hackathon: Horrible experience
Background: Our task was to do load forecasting for a given area. Hourly energy consumption data for the past 5 years was given to us.
One government official asks the following questions:
1. Why are you using deep learning for the project? Why are you not doing data analysis?
2. Which neural network "algorithm" are you using? He wanted to ask which model we were using, but he didn't have a single clue about neural networks.
3. Why are you using libraries? Why not your own code?
Here comes the biggest one,
4. Why haven't you developed your own "algorithm" (again, he meant model)? All you have done is use some library. Where is the "novelty" in your project?
I just want to say that if you don't know anything about ML/AI, then don't comment anything about it. And the worst thing was, he was not ready to accept the fact that for capturing temporal dependencies where the underlying probability distribution is unknown, deep learning performs much better than traditional data analysis techniques.
After hearing his first question, the second one was not a surprise for us. We were expecting something like that. For a few moments, we were speechless. Then one of us started by showing the neural network architecture. But after some time, he rudely repeated the same question, "where is the algorithm". We told him every fucking thing used in the project, ranging from the RMSprop optimizer to the backpropagation-through-time algorithm to the mean squared error loss function.
Then very calmly, he asked the third question, why are you using libraries? That moron wanted us to write a whole fucking optimized library. We were speechless at this question. Finally, one of us told him the "obvious" answer. We were completely demotivated. But it didn't end here. The real question was waiting. At the end, after listening to all of us, he dropped the final bomb: WHY HAVE YOU USED A NEURAL NETWORK "ALGORITHM" WHICH HAS ALREADY BEEN IMPLEMENTED? WHY DIDN'T YOU MAKE YOUR OWN "ALGORITHM"? We again stated the obvious answer, that it takes at least a year or two of continuous hard work to develop a state-of-the-art algorithm, and that's when you build it on top of some existing "algorithm". After listening to this, he left. His final response was "Try to make a new "algorithm"".
Needless to say, we were completely demotivated after this evaluation. We all had worked too hard for this. And we had the ability to explain each and every part of the project intuitively and mathematically, but he was not even ready to listen.
Now, all of us are sitting aimlessly, waiting for the Hackathon to end. 😢😢😢😢😢
-
Some weeks ago a client came to me with the request to restructure the nameservers for his hosting company. Due to the requirements, I soon realised none of the existing DNS servers would be a perfect fit. Me, being a PHP programmer with some decent general Linux/server skills, decided to do what I do best: write a small nameserver which could execute the zone transfers... in PHP. I proposed the plan to the client and explained to him how this was going to solve all of his problems. He agreed and work started.
After a few weeks of reading a dozen RFC documents on the DNS protocol, I wrote a DNS library capable of reading/writing the master file format and reading/writing the binary wire format (we needed this anyway, as we had some more projects where PHP did not provide us with enough control over the DNS queries). In short, I wrote a decent DNS resolver.
For another two weeks I worked on the actual DNS server, which would handle the NOTIFY queries and execute the zone transfers (AXFR queries). I used the pthreads extension to make the server behave like an actual server which can handle multiple requests at once. It took some time (in my opinion the pthreads extension is not extremely well documented and a lot of its behavior has to be discovered through trial and error, or by reading the C source code. However, it still is a pretty decent extension.)
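(For context: the zone transfer everyone later shrugged at is the same mechanism you can poke at with dig; a sketch, assuming the primary allows transfers from your IP:)
# ask the primary nameserver for a full copy of the zone (AXFR)
dig @ns1.example.com example.com AXFR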
Yesterday, while debugging some last issues, the DNS server written in PHP received its first NOTIFY about a changed DNS zone. It executed the zone transfer and updated the real database of the actual primary DNS server. I was extremely euphoric and began to realise what I had written in the weeks before. I shared the good news with the client and with some other people (a network engineer, a server administrator, a junior programmer, etc.). None of them really seemed to understand what I did. The most positive response was: "So, you can execute a zone transfer?", in a kind of condescending way.
This was one of those moments I realised again: most people, even those who are fairly technical, will never understand what we programmers do. My euphoric moment soon became a moment of loneliness...
-
Windows tells me to „contact the network administrator“.
I yell at the machine: „I AM THE ADMINISTRATOR!!!1!“
Why is Microsoft doing this? Instead of telling me what exactly went wrong, they come up with messages like
“Something happened”
“This is not possible”
“Error 0x2342133723”
“Do you want to ask a Friend?”
I really hope the authors of those error messages will burn in hell for that!
-
The stupid stories of how I was able to break my schools network just to get better internet, as well as more ridiculous fun. XD
1st year:
It was my freshman year in college. The internet sucked really, really, really badly! Too many people were clearly using it. I had to find another way to remedy this. Upon some further research through Google I found out that one can in fact turn their computer into a router. Now what's interesting about this network is that it only works with computers after downloading the necessary software that this network provides for you. Some weird software that actually looks through your computer and makes sure it's ok to be added to the network. Unfortunately, routers can't download and install that software, thus no internet… but a PC that can be changed into a router itself is a different story. I found that I could download the software, let it check the PC, and then turn on my router feature. Voilà, personal fast internet connected directly into the wall. No more sharing a single shitty router!
2nd year:
This was about the year when bitcoin mining was becoming a thing, and everyone was in on it. My shitty computer couldn't possibly pull off mining for bitcoins. I needed something faster. How I found out that I could use my school's servers was merely an accident.
I had been installing the software on every possible PC I owned, but alas all my PCs were just not fast enough. I decided to try it on the RDS server. It worked; the command window was pumping out coins! What I came to find out was that the RDS server had 36 cores. This thing was a beast! And it made sense that it could actually pull off mining for bitcoins. A couple nights later I signed in remotely to the RDS server. I created a macro that would continuously move my mouse around in the Remote Desktop screen to keep my session alive at all times, and then I'd start my bitcoin mining operation. The following morning I woke up and my session was gone. How sad, I thought. I quickly tried to remote back in to see what I had collected. "Error, could not connect". Weird… this usually never happens, maybe I did the remoting wrong. I went to my school's website to do some research on my remoting problem. It was down. In fact, everything was down… I came to find out that I had accidentally shut down the school's network because of my mining operation. I wasn't found out, but I haven't done any mining since then.
3rd year:
As an engineering student I found out that all engineering students get access to the school's VPN. Cool, it is technically used to get around some wonky issues with remoting into the RDS servers. What I came to find out, after messing around with it frequently, is that I could actually use the VPN against the screwed-up security on the network. Remember how I told you that a program has to be downloaded before one can be accepted into the network? Well, I was able to bypass all of that, simply by using the school's VPN against itself… How dense does one have to be to not have patched that one?
4th year:
It was another programming day, and I needed access to my phone's memory. Using some specially made apps I could easily connect to my phone from my computer and continue my work. But what I found out was that I could actually travel around in the network. I discovered that I can, in fact, access my phone through the network from anywhere. What resulted was the discovery that the network spans the entirety of the school. I discovered that if I left my phone down in the engineering building and then went north to the biology building, I could still continue to access it. This seems like a very fatal flaw. My idea is to hook up a webcam to a robot and control it remotely from the RDS servers, having this little robot go to my classes for me.
What crazy shit have you done at your University?
-
Overheard a phone call between the Senior Network Engineer and a contracted printer company at 9am this morning. The photocopier was giving a 'functional error' message on-screen and not printing;
N.E:
I logged this call last
Thursday afternoon. That's 1.5 days of the photocopier not working on our busiest site! Where's the engineer??
.... yes, that's the error message.
Yes, i can log into it, you should have the IP address from the call.
Yes, it's obviously pinging too.
Yes.... we've power-cycled the printer multiple times...
yes, tried that too...
yes, I've unplugged the network cable as well... left it for 15 minutes.
... sorry. What?
What did you say?
Are you f***ing kidding me?
Would you also like me to rub the side of the f***ing machine, and say a prayer while I'm at it??
*takes a deep breath*
Fine, I'll do that but when it doesn't work, i want someone out on the site before lunchtime today!
*slams phone down angrily*
N.E to me as he stomps out of the office;
He wants me to get the user to unplug the network cable and do a power cycle. How the f**k is that going to help? Idiots! Don't know why we have a contract with them, I could do a better job!!!
*comes back into office 5 minutes later*
Me: did it fix it?
NE: yeah. Damn.
*leaves room again to make apologetic phone call*
-
This just in everyone...
Android Dev: *sends an email to the network admin* can you please unblock github for a few mins.
Network admin: *replies* Can you send a screenshot of the error you're getting?
Android Dev: *replies with screenshot* "Failed to load resource: the server responded with a status of 503 (Service Unavailable)"
Network admin: that is a known issue. *replies with WordPress links*
Android Dev: why is github working outside our network then?
Network admin: there must be a problem with your code that needs to be tweaked.
Team: *FACE PALM*
-
Yesterday the web site started logging an exception, "A task was canceled", when making an HTTP call using the .NET HttpClient class (site calling a REST service).
Emails back n' forth... blaming the database... blaming the network... then a senior web developer blamed the logging (the system I'm responsible for).
Under the hood, the logger is sending the exception data to another REST service (which sends emails, generates reports etc.), so I had to quickly redirect the discussion: if we're seeing the exception email, the logging didn't cause the exception, it's just reporting it. Felt a little sad having to explain that to other IT professionals, but everyone seemed to agree and focused on the server resources.
Last night I get a call about the exceptions occurring again in much larger numbers (from 100 to over 5,000 within a few minutes). I log in, add myself to the large skype group chat going on just to catch the same senior web developer say …
“Here is the APM data that shows logging is causing the http tasks to get canceled.”
FRACK!
Me: "No, that data just shows the logging HTTP traffic of the exception. The exception is occurring before any logging is executed. The task is either being canceled due to a network timeout or IIS is running out of threads. The web site is failing to execute the HTTP call to the REST service."
Several other devs, DBAs, and network admins agree.
The errors only lasted a couple of minutes (exactly 2 minutes, which seemed odd), so everyone agrees to dig into the data further in the morning.
This morning I log in to my computer to discover the error(s) occurred again at 6:20AM, and an email from the senior web developer saying we (my mgr, her mgr, network admins, DBAs, etc) need to discuss changes to the logging system to prevent this problem from negatively affecting the customer experience... blah blah blah.
FRACKing female dog!
Good news is we never had the meeting. When the senior web dev manager came in, he cancelled the meeting.
Turned out to be a hiccup in a domain controller causing the servers to lose their connection to each other for 2 minutes (1-minute timeout, 1 minute to fully re-sync). The exact two-minute burst of errors explained (and proven via wireshark).
People and their petty office politics piss me off.
-
Post world-take-over by robots plot twist.
They start neglecting their own machine learning (like how we humans neglect our education system now) and focus on training us humans (like how we spend billions on machine learning now).
-"Mom, look, I've made my human kid learn the alphabet. I need one with more IQ on my next birthday please."
-"That's so nice, son. Now leave that aside and go improve your neural network for tomorrow's class. Our neighbors' son's neural network is already producing values with minimal error."
-
Years ago, when I was a teenager (13, 14 or smth) and internet at home was a very uncommon thing, there were these places where people could play LAN games, have a beer (or coke) and have fun (Spacenet internet cafe). It was like 1€ per hour to get a PC. The OS was Win98; if you just interrupted the boot process (reset button) to get the error boot menu, then dropped into DOS mode for "edit c:/windows/win.ini" and removed their client startup setting from there, you could use the PC for free. How many hours we spent there...
The even more fun thing was the open network config: without the client running I could access all the computers' C drives (they were just shared, I think, so the admin had it easy). It was fun to locate the Counter-Strike 1.6 control settings of other players and bind the W key to "kill"... The round begins and you hear a lot of people raging. I could even access the server settings of Unreal Tournament and fuck up the gravity and such things. Good old times; the only game I played fair was broodwar and d3 lod.
-
I am DONE with this woman CONTINUED!
I didn't think I'd have to post another rant about this stupidity, at least not this soon, but she just keeps on giving!
I have my noise-canceling headphones on most of the time, and when I want to hear the people around, I just slide the right earcup to the side of my ear so the music pauses. Today we had a huge disruption of our services because of a network switch error on the hub. I was also trying to focus on my coding, as I didn't wanna make a stupid mistake on the last working day and be sorry about it the next week.
So this woman sneaks up on me from behind calling my name - meaning she has a question, surprise! -, I say 'yes' moving my head to her side ever so slightly without taking my eyes off my screen, stating subtly that I'm also listening to her while trying to focus on my shit. She starts yelling at me 'look at me!' out of nowhere! I turn my head and ask what the problem is, and she asks why I'm not looking at her face! Stupid moron, I might not be too good at understanding your way of communication, but you are the one asking, so you WILL wait if you'd like to hear answers.
I say I'm working on something and her answer is again 'Why aren't you looking at my face it's going to be quick bla bla did we do this like that?' and I answered I didn't remember because there's no way I'd ever remember without looking further and it was no lie.
This woman clearly has stability issues and everyone else seems to be tolerating it. Since I'm not tolerating the nonsense, it's now obvious I'll be the one that 'she only has ever had a problem with'.
I was quick to de-escalate the situation, but now I'm thinking maybe I should've responded in a way that she could understand. I wouldn't ever give a shit about it but this is getting ridiculous.
-
Unaware that this had been occurring for while, DBA manager walks into our cube area:
DBAMgr-Scott: "DBA-Kelly told me you're still having problems connecting to the new staging servers?"
Dev-Carl: "Yea, still getting access denied. Same problem we've been having for a couple of weeks"
DBAMgr-Scott: "Damn it, I hate you. I've got to have Kelly working on the data warehouse project. I guess I've got to start working on fixing this problem."
Dev-Carl: "Ha ha... sorry. I've checked everything. It's definitely something on the SQL server side."
DBAMgr-Scott: "I guess my day is shot. I've got to talk to the network admin, when I get back, lets put our heads together and figure this out."
<Scott leaves>
Me: "A permissions issue on staging? All my stuff is working fine and has been working fine for a long while."
Dev-Carl: "Yea, there is nothing different about any of the other environments."
Me: "That doesn't sound right. What's the error?"
Dev-Carl: "Permissions"
Me: "No, the actual exception, never mind, I'll look it up in Splunk."
<in about 30 seconds, I find the actual exception, Win32Exception: Access is denied in OpenSqlFileStream, a little google-fu and .. >
Me: "Is the service using Windows authentication or SQL authentication?"
Dev-Carl: "SQL authentication."
Me: "Switch it to windows authentication"
<Dev-Carl changes authentication...service works like a charm>
Dev-Carl: "OMG, it worked! We've been working on this problem for almost two weeks and it only took you 30 seconds."
Me: "Now that it works, and the service had been working, what changed?"
Dev-Carl: "Oh..look at that, Dev-Jake changed the connection string two weeks ago. Weird. Thanks for your help."
<My brain is screaming "YOU NEVER THOUGHT TO LOOK FOR WHAT CHANGED!!!">
Me: "I'm happy I could help."
-
Yesterday, after six months of work, a small side project ran to completion: a search engine written in Django.
It's a thing of beauty, which took many trials, including discovering that utf8 in MySQL isn't the full UTF-8 spec, dealing with files that have wrong date metadata or none at all, and a new IT backup policy that stores backups alongside real data.
Nevertheless, it is a pretty complete product. Beaming with pride I went to get myself a drink, and collapsed onto the floor. This caused me to accidentally hibernate my computer, which interrupted the network connection, which in turn caused an OSError exception in one of my threads, which caused a critical part of code not to run, which left a thread suspended, doing nothing.
From the floor I looked at my error, realised my hubris, and meditated on my assumption that in theory nothing should interrupt a specific block of code, but in reality something might, like someone falling over...
-
The website for our biggest client went down and the server went haywire. For this client, though, we don't provide any infrastructure, so we called their IT partner to start figuring this out.
They started blaming us, asking us if we had upgraded the website or changed any PHP settings, which all got a firm no from us. So they told us they had competent people working on the matter.
TL;DR: their people aren't competent and I ended up fixing the issue.
Hours go by, nothing happens; the client calls us and we call the IT partner, nothing, they don't understand anything. Told us they can't find any logs etc.
So we set up a conference call with our CXO, me, another dev and a few people from the IT partner.
At this point I’m just asking them if they’ve looked at this and this, no good answer, I fetch a long ethernet cable from my desk, pull it to the CXO’s office and hook up my laptop to start looking into things myself.
IT partner still can’t find anything wrong. I tail the httpd error log and see thousands upon thousands of warning messages about mysql being loaded twice, but that’s not the issue here.
Check top and see there are 257 instances of httpd, 256 of which were spawned by the main httpd process; mysql is using 600% CPU, and whenever I try to connect to mysql through the CLI it throws a too-many-connections error.
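(Roughly the triage up to this point, reconstructed from memory; log paths vary by distro:)
tail -n 100 /var/log/httpd/error_log   # thousands of "mysql already loaded" warnings
top                                    # 257 httpd processes, mysqld pegged at 600% CPU
mysql -u root -p                       # ERROR 1040 (HY000): Too many connections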
I heard the IT partner talking about a DDoS attack, so I asked them to pull it off the public network and only give us access through our VPN. They do that, reboot the server, same problems.
Finally we get the IT partner to roll back the VM to earlier last night. Everything works great; 30 min later, it crashes again. At this point I'm getting tired and frustrated, this isn't my job, I thought they had competent people working on this.
I noticed that the db had a few corrupted tables, and asked the IT partner to get a DBA to look at it. To no avail.
5 o'clock is here, and we decide to give the VM rollback another try, but first we go home, get some dinner and resume at 6pm. I had told them I wanted to be in on this call, and said let me try this time.
They spend ages doing the rollback, and then for some reason they have to reconfigure the network and shit. Once it booted, I told their tech to stop mysqld and httpd immediately and prevent them from starting at boot.
I can now look at the logs leading up to this issue. I noticed our debug flag was on and had generated a 30GB log file. Tail it and see it's what I'd expect, warnings upon warnings. And all the other logs for mysql and apache are huge too, so the drive is full. Just gotta delete them.
I quietly start apache and mysql, see the website is working fine, shut them down, and take a copy of the /var/lib/mysql and /etc directories just to have backups.
Starting to connect a few dots, but I wasn't exactly sure if it was right. Had the full drive caused mysql to corrupt itself? Only one way to find out: start apache and mysql back up, and just wait and see. Meanwhile I fixed mysql being loaded twice. Some genius had put the mysql.so load line at both the top and the bottom of php.ini.
While waiting on the server to crash again, I'm talking to the IT support guy, who told me they hadn't updated anything on the server except security patches now and then, and they didn't have anyone familiar with this setup. No shit, it's running PHP 5.3 -.-
Website up and running 1.5 hours later, mission accomplished.
-
Worst hack/attack I had to deal with?
Worst, or funniest. A partnership with a Canadian company got turned upside down and our company decided to 'part ways' by simply not returning his phone calls/emails, etc. A big 'jerk move' IMO, but all I was responsible for was a web portal into our system (submitting orders, inventory, etc).
After the separation, I removed the login permissions, but the ex-partner's system was set up to 'ping' our site for various updates and we were logging the failed login attempts, maybe 5 a day or so. Our network admin got tired of seeing that error in his logs and reached out to the VP (responsible for the 'break up') and requested he tell the partner their system was still trying to log in and to stop it. A couple of days later, we were getting random bursts of 300, 500, 1000 failed login attempts (causing automated emails to notify that there was a problem). The partner knew that we were likely getting alerted, and kept up the barrage.
VP-Marketing: "Why are you allowing them into our system?! Cut them off, NOW!"
Me: "I'm not letting them in, I'm stopping them, hence the login error."
VP-Marketing: "That jackass said he will keep trying to get into our system unless we pay him $10,000. Just turn those machines off!"
VP-IT: "We can't. They serve our other international partners."
<slams hand on table>
VP-Marketing: "I don't fucking believe this! How the fuck did you let this happen!?"
VP-IT: "Yes, you shouldn't have allowed the partner into our system to begin with. What are you going to do to fix this situation?"
Me: "Um, we've been testing for months and already went live some time ago. I didn't know you defaulted on the contract until last week. 'Jake' is likely running a script. He'll get bored of doing that and in a couple of weeks, he'll stop. I say let's ignore him. This is really a network problem, not a coding problem."
IT-MGR: "Now... now... let's not make excuses and point fingers. It's time to fix your code."
IT-VP: "I agree. We're not going to let anyone blackmail us. Make it happen."
So I figure out the partner's IP address, and hard-code the value in my service so it doesn't log the login failure (if IP = '10.50.etc and so on' major hack job). That worked for a couple of days, then (I suspect) the ISP re-assigned a new IP and the errors started up again.
After a few angry emails from the 'powers-that-be', our network admin stops by my desk.
D: "Dude, I'm sorry, I've been so busy. I just heard, and I wish they had told me what was going on. I'm going to block his entire domain and send a request to the ISP to shut him down. This was my problem to fix, you should have never been involved."
After 'D' worked his mojo, the errors stopped.
A month later, 'D' gave me an update. He was still logging the traffic from the partner's system (the ISP wanted extensive logs to prove the customer was abusing their service) and like magic one day, it all stopped. ~2 weeks after the 'break up'.
-
It was a normal school day. I was at the computer and I needed to print some stuff out. Now this computer is special; it's hooked up to a different network for students who signed up to use them. How you get to use these computers is by signing up using their forms online.
Unfortunately for me, on that day I needed to print something out and the computer I was working on was not letting me sign in. I called IT real quick and they said I needed to renew my membership. They sent me the form, and I quickly filled it out. I hit the submit button and was greeted by a single-line error written in PHP.
Someone had forgotten to turn off debug mode on the server.
Upon examination of the error message, it was a syntax error at line 29 in directory such and such. This directory, I thought to myself, I know where this is. I quickly started my FTP client and was able to find the actual file in the directory that the error mentioned. What I didn't know was that I'd find a mountain of passwords inside their PHP files, because they were automating all of the authentications.
Curious as I was, I followed the database link that was in the PHP file. Unfortunately, someone in IT hadn't thought far enough ahead to make the actual link inaccessible. I was greeted by the full database. There was nothing of real value from what I could see, mostly forms that had been filled out by students.
Not only this, but I was displeased with the bad passwords. These passwords were maybe 5 characters long, super simple words with a couple of numbers tacked onto the end.
That day, I sent in a ticket to IT and told them about the issue. They quickly remedied it by turning off debug mode on the servers. However, they never did shut down access to the database and the PHP files...
-
I've found and fixed any kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second. Due to this increased traffic, to stay ahead of any trouble we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing them to close doors midday. It was bad, and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back, when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a linux user agent). To this day I marvel at this miracle $5 fix.
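(The $5 front looked something like this; the directive values are from memory and the backend address is a placeholder:)
# cache + rate-limit in front of the dying IIS box
cat > /etc/nginx/conf.d/shield.conf <<'EOF'
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site:50m inactive=10m;
server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;
        proxy_cache site;
        proxy_cache_valid 200 5m;
        proxy_pass http://192.0.2.10;   # the IIS backend
    }
}
EOF
nginx -s reload
-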
What you see in that screenshot, that was earned.
I'm on the plane and I want an hour of free Gogo (read: crappy) WiFi on my laptop (so I can push the code I'm probably the most proud of, more on that another time). The problem is that the free T-Mobile WiFi is apparently only available on mobile.
So after trying to just use responsive mode, and that still (almost obviously) not working, I realize it's time to bring in the big guns: a user agent switcher. Small catch: I don't have an add-on for FF that can do that.
So on my phone I find an add-on that can and download the file. To send it to my computer, I initially thought to go through KDEConnect, but Gogo's network also isolates each system, so that doesn't work. So I try to send it over Bluetooth, except I can't. Why? Because Android's Bluetooth share "doesn't support" the .xpi extension, so I dump it in a zip (in retrospect, I should have just renamed it), and now I can share.
After a few tries, I successfully get the file over, extract the zip, and install the extension. Whew! Now I open up Gogo's page and proceed to try again, but this time I change the user-agent. Doesn't work... Ah! Cookies! I delete the cookies for Gogo (I had a cookie editor add-on already). I had to try a few times because Gogo's scripts kept trying to re-set them, but I got it in the end.
Finally that stupid error saying it's for phones only went away, and I could write this rant for you.
-
Did you know? Microsoft added a new feature.
So if there is an IP conflict, our beloved Windows 10 doesn't cry with an IP conflict error. Instead it sets an auto-configured IP which doesn't even connect to the network, sending the user into confusion and a fit of rage.
Thanks Microsoft™
-
Just now I realized that for some reason I can't mount SMB shares to E: and H: anymore.. why, you might ask? I have no idea. And troubleshooting Windows.. oh boy, if only it was as simple as it is on Linux!!
So, bimonthly reinstall I guess? Because long live good quality software that lasts. In a post-meritocracy age, I guess that software quality is a thing of the past. At least there's an option to reset now, so that I don't have to keep a USB stick around to store an installation image for this crap.
And yes Windows fanbois, I fucking know that you don't have this issue and that therefore it doesn't exist as far as you're concerned. Obviously it's user error and crappy hardware, like it always is.
And yes Linux fanbois, I know that I should install Linux on it. If it's that important to you, go ahead and install it! I'll give you network access to the machine and you can do whatever you want to make it run Linux. But you can take my word on this - I've tried everything I could (including every other distro, custom kernels, customized installer images, ..), and it doesn't want to boot any Linux distribution, no matter what. And no I'm not disposing of or selling this machine either.
Bottom line I guess is this: the OS is made for a user that's just got a C: drive, doesn't rely on stuff on network drives, has one display rather than 2 (proper HDMI monitor recognition? What's that?), and God forbid that they have more than 26 drives. I mean, in the age of DOS and its predecessor CP/M, sure, nobody would use more than 26 drives. Network shares weren't even a thing back then. And yes it's possible to do volume mounts, but it's unwieldy. So one monitor, 1 or 2 local drives, and let's make them just use Facebook a little bit and have them power off the machine every time they're done using it. Because keeping the machine stable for more than a few days? Why on Earth would you possibly want to do that?!!
Microsoft Windows. The OS built for average users, but God forbid you depart from the standard road of average user usage. Do anything advanced and either you can't do it at all, you can do it but it's extremely unintuitive and good luck finding manuals for it, or you can do it but Windows will behave weirdly. Because why not!!!
-
If it is an Unknown error, how is it a network error? And if it is a network error, how is it an Unknown error???!!!1
-
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use a Deep Learning AMI with a K80 on AWS, totally updated and ready to go, super great, always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get bunch of CUDA errors piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths)
> Fix my training data, fuck the Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give nay helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize I have no space left on the 75GB machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s, please throw me a lifeline; in exchange you can have my soul, or what little is left of it.
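(And for anyone walking into the same version hell: the sanity check I should have run on day one, as a sketch. All four outputs have to agree with whatever the framework was built against.)
nvidia-smi      # driver version and the highest CUDA it supports
nvcc --version  # the CUDA toolkit actually on the PATH
python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import torch; print(torch.backends.cudnn.version())"
-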
TL;DR: At a house party, on my phone, via a shitty German mobile network, using the GitLab website's plain text editor. Thanks to CI/CD my changes to the code were easily tested and deployed to the server.
It was for a college project, and someone had a bug in his 600+ line function that was nested like hell. At least 7 levels deep. Told him before I went to that party that it was probably a redefined counter variable, but he wouldn't have it, as he was sure it was an error in the business logic. Told him to simplify the code then, but he wouldn't do that either because "the code/logic is too complex to be simplified"... Yeah... what a dipshit...
Nonetheless I went to the party and he kept debugging. At some point he called me and asked me to help him the following day. Knowing that the code had to be fixed anyway, I agreed.
I also knew I wouldn't be much of a help the next day due to side effects of the party, so I tried looking at this shitshow of a function on my phone. Oh, did I mention it was PHP yet? Yeah... About 30 minutes and a beer later I found the bug, and of course it was a redefined counter variable... My respect for him as a dev was already crumbling, but it died completely during that evening.
-
Recently applied at a local company. Webform, "enter some details and we'll get back to you"-like.
Entered my details, hit submit, lo and behold "Error 503 - something went wrong on our end".
I was just baffled. It's a well-established IT company and they can't even get their application form to work?
So I'm sitting there in the debugger console, monitoring network stuff to see if anything is weird. I obviously hit submit several more times during that.
Eventually I give up.
In the night my phone wakes me up with a shitton of "we've received your application and will review it..." emails.
Yeah they didn't get back to me.
-
Any boring class:
chrome.exe
chrome://network-error/-106
F12 - change <title> to "google.com"
write "google.com" in the omnibar, then click away
call the teacher and say ur internet isnt working
Works every time.
-
I work as the entire I.T. department of a small business whose products are web based, so naturally, I do tech support on said website directly with our clients.
It is normal that the first time a new client accesses our site they run into questions, but usually they never call again since it is an easy website.
There was an unlucky client who ran into unknown problems and blamed the server.
I couldn't determine the exact cause, but my assumption was a network error for a few seconds which made the site unavailable, after which the user tried to navigate the site through the navbar and exited the process he was doing. It goes without saying that he was very angry.
I assured him there was nothing wrong with the site, and told him that it would not be charged for this reason. Finally I told him that if he had the same problem, to let me know instead of trying to fix it himself.
The next time he used the site I received a WhatsApp message saying:
- there is something clearly wrong with the site... It has been doing this for so long!
And attached was a 10-second video which showed that he had filled out a form and never pressed send (my forms have small animations and text which indicate when the form is being sent, and error messages when an error occurs; these are usually not visible because the data they send is small and the whole process is quite fast).
To which I answer
- It seems that the form has not been sent, that's why it looks that way
- So... What am I supposed to do?
- click send
It took a while but the client replied
- ok
To this day I wonder how long the client stared at the form, cursing the server.
-
Boss: "So I'm taking the next week off. In the mean time, I added some stuff for you to do on Gitlab. We'd need you to pull this Docker image, run it, set up the minimal requirements and play with it until you understand what it does."
Me: "K boss, sounds fun!" (no irony here)
First day: Unable to log in to the remote repository. Also, I was given a dude's name to contact if I had trouble; the dude didn't answer his email.
2nd day: The dude answered! Also, I realized that I couldn't reach the repository because the ISP I work for blocks everything on certain ports, and the url I had to reach was on ":5443". Yay. However, I still can't log in to the repo nor pull the image; the connection gets closed.
3rd day (today): A colleague suggested that I remove myself from the ISP's network and use my 4G or something. And it worked! Finally!! Now all I need to do is set that token they gave me, set a first user, a first password and... get a 400 HTTP response. Fuck. FUCK. FUUUUUUUUUUUUUUUUUUUCK!!!
These fuckers display a 401 error while returning a 400 error in the console log!! And the error says what? "Request failed with status code 401" YES THANK YOU, THIS IS SO HELPFUL! Like fuck yea, I know exactly how to fix this, except that I don't, because y'all fuckers don't give any detail on what could be the problem!
4th day (tomorrow): I'm gonna barbecue these sons of bitches
(bottom note: the dude that answered is actually really cool, I won't barbecue him)
-
I was programming a basic neural network in Python and was getting the same error message again and again. Then after 2 hours of frustration I realized that I was using Python 2.7 instead of 3.6
-
I'm not sure if it's a software or an ethical error that we are dealing with here. Btw, this is a legit question on the Stack Exchange network.
-
Buffer usage for a simple file operation in Python.
What the code "should" have done was open or write a stream with a specific buffer size.
The buffer size should be specific, as it was a stream of a multi-gigabyte file over a direct interlink network connection.
Which should have sped things up tremendously, due to fewer syscalls and the machine having beefy resources for a large buffer.
So far the theory.
In practice, the devs made one very very very very very very very very stupid error.
They used dicts for configurations... With extremely bad naming.
configuration = {}
buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))  # falls back to DEFAULT_BUFFERING when the key is missing
You might immediately guess what has happened here.
DEFAULT_BUFFERING was set to true, evaluating to 1.
Yeah. Writing in 1-byte chunks results in an enormous speed deficiency, as the system is basically bombing itself with syscalls every nanosecond.
Kinda obvious when you look at it in the raw pure form.
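(You can feel the difference with nothing but dd; the file name is illustrative:)
dd if=bigfile of=/dev/null bs=1    # one syscall per byte: crawls
dd if=bigfile of=/dev/null bs=1M   # same copy with a sane buffer: flies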
But I guess you can imagine how configuration actually looked....
Wild. Pretty wild. It was the main dict, hard-coded, 200-plus entries I think, and of course it looked like my toilet after a spicy-food evening of eating too much...
What's even worse is that no one made the connection to the buffer size.
This simple and trivial thing entertained us for 2-3 weeks, because *drumroll please* none of the devs tested with large files.
So as usual there was the deployment, and then the sudden miraculous "it works totally slow, must be admin/IT's fault" game.
At some point it landed on my desk, as pretty much everyone who had to deal with it was confused and angry, for understandable reasons (blame game).
It then took me and the admins/devs a few days to track it down, as we really started at the entirely wrong end of the problem, the network...
So much joy for such a stupid thing.
-
TL;DR - the doctor is a lazy cunt and I hope he steps on a lego.
We've got a user authentication portal for all the users in our network. We have it set up so that you can only have two active logins on two different machines; anything else will give the error message "you need to log out elsewhere" or whatever it is...
This goddamn doctor has been told to log out several times and still calls us to ask why it's "not working".
I just received a call because the lazy cock sucker didn’t want to walk from the clinic to the hospital to sign out, are you fucking kidding me you lazy fucking ass hole? It’s not my job to be your mother fucking slave dude, get the fuck up and do it yourself!
I'll take a lot of shit from anyone, but when you refuse to retain the information to perform your job and want someone else to do it because you're too fucking lazy, that's when we've got problems.
I hope you step on a fucking LEGO.
I’m heavily medicated so if this doesn’t make sense I... don’t care. -
Had an internet/network outage and the web site started logging thousands of errors and I see they purposely created a custom exception class just to avoid/get around our standard logging+data gathering (on SqlExceptions, we gather+log all the necessary details to Splunk so our DBAs can troubleshoot the problem).
If we didn't already know what the problem was, WTF would anyone do with 'There was a SQL exception, Query'? OK, what was the exception? A timeout? A syntax error? Value out of range? What was the target server? Which database? Our web developers live in a different world. I don't understand em.
-
I just got pissed off when someone on my team asked me how to start a web server on port 8080 (needed for network port testing)
I check the port and get a 404. 🤔🤔🤔
Spend half an hour explaining to them about ports and how there's already a web server running... and that 404 is an HTTP status code.
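(The 30-second demo of the point, if you ever have to give the same talk; a sketch:)
python3 -m http.server 8080 &        # any web server will do
curl -i http://localhost:8080/nope   # HTTP/1.0 404 File not found
# a 404 means the server answered; if nothing were listening on that port,
# curl would say "Connection refused" instead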
I'm pretty sure she works on our webapp and maybe even REST APIs...
New grad but still.... Not recognizing a 404?
Maybe those pretty 404 pages these days make realizing that it's a fucking web server error response harder...
-
Let's talk a bit about CA-based SSH and TOFU, because this is really why I hate the guts out of how SSH works by default (TOFU) and why I'm amazed that so few people even know about certificate-based SSH.
So for a while now I've been ogling CA-based SSH to solve the issues with key distribution and replacement. Because SSH does 2-way verification, this is relevant to both the host key (which changes on e.g. reinstallation) and user keys (ever replaced one? Yeah that's the problem).
So in my own network I've signed all my devices' host keys a few days ago (user keys will come later). And it works great! Except... Because I wanted to "do it right straight away" I signed only the ED25519 keys on each host, because IMO that's what all the keys should be using. My user keys use it, and among others the host keys use it too. But not by default, which brings me back to this error message.
If you look closely you'd find that the host key did not actually change. That host hasn't been replaced. What has been replaced however is the key this client got initially (i.e. TOFU at work) and the key it's being presented now. The key it's comparing against is ECDSA, which is one of the host key types you'd find in /etc/ssh. But RSA is the default for user keys so God knows why that one is being served... Anyway, the SSH servers apparently prefer signed keys, so what is being served now is an ED25519 key. And TOFU breaks and generates this atrocity of a warning.
This is peak TOFU at its worst really, and with the CA now replacing it I can't help but think that this is TOFU's last scream into the void, a climax of how terrible it is. Use CA's everyone, it's so much better than this default dumpster fire doing its thing.
PS: yes I know how to solve it. Remove .ssh/known_hosts and put the CA as a known host there instead. This is just to illustrate a point.
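Concretely, the CA setup boils down to a couple of commands (a sketch; host names, file names and the 1-year validity are my assumptions):
# on the CA machine: sign the host's ed25519 key (-h marks it as a host certificate)
ssh-keygen -s ca_key -I web01 -h -n web01.example.com -V +52w /etc/ssh/ssh_host_ed25519_key.pub
# point HostCertificate in that host's sshd_config at the resulting -cert.pub file
# on each client: trust the CA once, instead of TOFU-ing every single host
echo "@cert-authority *.example.com $(cat ca_key.pub)" >> ~/.ssh/known_hosts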
Also if you're interested in learning about CA-based SSH, check out https://ibug.io/blog/2019/... and https://dmuth.org/ssh-at-scale-cas-... - these really helped me out when I started deploying the CA-based authentication model.
-
Holy shit firefox, 3 retarded problems in the last 24h and I haven't fixed any of them.
My project: an infinite scrolling website that loads data from an external API (CORS hehe). All Chromium browsers of course work perfectly fine. But firefox wants to be special...
(tested on 2 different devices)
(Terminology: CORS: a request to a resource that isn't on the current website's domain, like any external API)
1.
For the infinite scrolling to work, new html elements have to be silently appended to the end of the page and removed from the beginning. Which works great in all browsers. BUT IF YOU HAPPEN TO BE SCROLLING DURING THE APPENDING & REMOVING, FIREFOX TELEPORTS YOU RANDOMLY TO THE END OR START OF THE PAGE!
Guess I'll just debug it and see what's happening step by step. Oh how wrong I was. First, the problem can't be reproduced when debugging. FUCK! But I notice something else very disturbing...
2.
The Inspector view (hierarchical display of all html elements on the page) ISN'T SHOWING THE TRUE STATE OF THE DOM! ELEMENTS THAT HAVE JUST BEEN ADDED AREN'T SHOWING UP AND ELEMENTS THAT WERE JUST REMOVED ARE STILL VISIBLE! WTF????? You have to do some black magic fuckery just to get Firefox to update the list of DOM elements. HOW AM I SUPPOSED TO DEBUG MY WEBSITE ON FIREFOX IF IT'S SHOWING ME PLAIN WRONG DATA???!!!!
3.
During all of this I just randomly decided to open my website in private (incognito) mode in Firefox. Huh, what's that? Why isn't anything loading, and why are errors thrown left and right? Let's just look at the console. AND IT'S A FUCKING CORS ERROR! FUCK ME! Also a small warning says some URLs have been "blocked because content blocking is enabled." Content Blocking? What is that? Well, it appears to be a super special super privacy mode by Firefox (turned on automatically in private mode) THAT BLOCKS ALL CORS REQUESTS THAT MAY OR MAY NOT DO SOME TRACKING. AN API THAT IS 100% CORS COMPLIANT CAN'T BE USED IN FIREFOX'S PRIVATE MODE! HOW IS THE END USER SUPPOSED TO KNOW THAT??? AND OF COURSE THE THROWN EXCEPTION JUST SAYS "NETWORK ERROR". HOW AM I SUPPOSED TO TELL THE USER THAT FIREFOX HAS A FEATURE THAT BREAKS THE VERY BASIS OF MY WEBSITE???
WHY CAN'T YOU JUST BE NORMAL FIREFOX??????????????????
I actually managed to come up with a fix for 1. that works like < 50% of the time -_-5
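For anyone hitting the same teleporting, the usual workaround (no promises; '.feed' is a made-up container class) is to compensate the scroll position manually in the same frame as the pruning:

  // give the removed pixels back to scrollTop so the viewport stays put
  const feed = document.querySelector<HTMLElement>('.feed');
  function appendAndPrune(newNode: Node, oldNode: HTMLElement): void {
    if (!feed) return;
    feed.appendChild(newNode);            // add at the bottom
    const removed = oldNode.offsetHeight; // pixels about to vanish above the viewport
    oldNode.remove();
    feed.scrollTop -= removed;
  }

If the engines scroll-anchor differently, toggling the CSS 'overflow-anchor' property on the container is also worth a try.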
A Q&A personal voice assistant that runs locally without the cloud: it's like a never-ending project. I look at it from time to time and time passes by. Chat bots arrived, some decent voice algorithms appeared. There is less and less stuff to code, since people have made a lot of progress in that area.
I want to save notes using voice, search through them, hear them, find some stuff in public data sources like wikipedia and also hear that stuff without using my hands, read news articles and stuff like that.
I want to spend more time on math and the core algorithms related to machine learning and deep learning.
Problem is, once I remember how basic network layers and error correction algorithms work, or how a particular deep learning algorithm is constructed and why, a week has already passed and I don't remember where I started.
I did it a couple of times already, and every time I remember more than before, but understanding the core requires sitting down with pen and paper and math problems, and I don't have time for that.
Now that I'm thinking about it, maybe I should write it down somewhere in an organized way. Get back to blogging and write articles about what I learned. This would take twice the time, but maybe it would help me not forget.
I'm mostly interested in nlp, tts, stt. Wavenet, tacotron, bert, roberta, sentiment analysis, graphs and qa stuff. And now crystallography, cause crystals are just organized graphs in 3d.
Well, maybe if I'm lucky I'll retire in the next decade, or at least take a year or two off to have plenty of time to finish this project.
I HATE SURFACES SO FRICKING MUCH. OK, sure they're decent when they work. But the problem is that half the time our Surfaces here DON'T work. From not connecting to the network, to only one external screen working when docked, to shutting down due to overheating because Microsoft didn't put fans in them, to the battery getting too hot and bulging.... So. Many. Problems.
It finally culminated this past weekend when I had to set up a Laptop 3. It already had a local AD profile set up, so I needed to reset it and let it autoprovision. Should be easy. Generally a half-hour or so job. I perform the reset, and it begins reinstalling Windows. Halfway through, it BSOD's with a NO_BOOT_MEDIA error. Great, now it's stuck in a boot loop. Tried several things to fix it. Nothing worked. Oh well, I may as well just do a clean install of Windows. I plug a flash drive into my PC, download the Media Creation Tool, and try to create an image. It goes through the lengthy process of downloading Windows, then begins creating the media. At 68% it just errors out with no explanation. Hmm. Strange. I try again. Same issue. Well, it's 5:15 on a Friday evening. I'm not staying at work. But the user needs this laptop Monday morning. Fine, I'll take it home and work on it over the weekend.
At home, I use my personal PC to create a bootable USB drive. No hitches this time. I plug it into the laptop and boot from it. However, once I hit the Windows installation screen the keyboard stops working. The trackpad doesn't work. The touchscreen doesn't work. Weird, none of the other Surfaces had this issue. Fine, I'll use an external keyboard. Except Microsoft is brilliant and only put one USB-A port on the machine. BRILLIANT. Fortunately I have a USB hub, so I plug that in. Now I can use a USB keyboard to proceed through Windows installation. However, when I get to the network connection stage no wireless networks come up. At this point I'm beginning to realize that the drivers which work fine when navigating the UEFI somehow don't work during Windows installation. Oh well. I proceed through setup and then install the drivers. But of course the machine hasn't autoprovisioned because it had no internet connection during setup. OK fine, I decide to reset it again. Surely that BSOD was just a fluke. Nope. Happens again.
I again proceed through Windows installation and install the drivers. I decide to try a fresh installation *without* resetting first, thinking maybe whatever bug is causing the BSOD is also deleting the drivers. No dice. OK, I go Googling. Turns out this is a common issue. The Laptop 3 uses wonky drivers and the generic Windows installation drivers won't work right. This is ridiculous. Windows is made by Microsoft. Surface is made by Microsoft. And I'm supposed to believe that I can't even install Windows on the machine properly? Oh well, I'll try it. Apparently I need to extract the Laptop 3 drivers, convert the ESD install file to a WIM file, inject the drivers, then split the WIM file since it's now too big to fit on a FAT32 drive. I honestly didn't even expect this to work, but it did.
I ran into quite a few more problems with autoprovisioning which required two more reinstallations, but I won't go into detail on that. All in all, I totaled up 9 hours on that laptop over the weekend. Suffice to say our organization is now looking very hard at DELL for our next machines.4
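For anyone stuck in the same spot, the whole driver-injection dance boils down to a handful of DISM commands (the index and paths are guesses; check your image):

  dism /Export-Image /SourceImageFile:install.esd /SourceIndex:1 /DestinationImageFile:install.wim /Compress:max
  dism /Mount-Wim /WimFile:install.wim /Index:1 /MountDir:C:\mount
  dism /Image:C:\mount /Add-Driver /Driver:C:\SurfaceDrivers /Recurse
  dism /Unmount-Wim /MountDir:C:\mount /Commit
  dism /Split-Image /ImageFile:install.wim /SWMFile:install.swm /FileSize:3800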
-
So last week I got my second 3D printer. I have done a few prints with it, and this weekend I wanted to connect it to an OctoPrint instance on my Raspberry Pi. Yesterday everything went great: got some plugins installed, changed all the settings within OctoPi, connected it to my network. And this morning I thought, let's connect it to my printer and try to print something with it. But every time I executed my gcode it gave an error about the heated bed not being able to heat up. Even though I did see all the communication between the printer and OctoPi, on both ends.
I've disassembled the build plate to see what could be causing the heating issue. Did not see anything. Strange...
I assembled the whole thing again, turned on the printer and tried printing again. Hmm, now it does work. Why? I thought about it a bit and then realized that earlier I hadn't hit the power switch on the printer, and apparently the Pi supplied enough power through USB to turn on the display and do basic logic like beeping on touches and changing values on the screen. The worst thing is that OctoPrint gave me warnings about low voltage on the Pi even though I was using the official Raspberry Pi power adapter...2
Linux has been around since back when dinosaurs punched holes in cards, but for some reason it still takes a few hours of googling and error debugging to do something as basic as connecting to a WPA2-Enterprise wifi network.
What the fuck? Where's the "connect to any standard work or school wifi network" command line utility distributed with all OS flavors? Why can't I just put in a username and password and be done with it, instead of sudo-editing network adapter configuration files manually?2
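For reference, the "username and password" most WPA2-Enterprise networks need fits in one wpa_supplicant.conf block (the SSID and the PEAP/MSCHAPv2 combo here are assumptions, but they're the most common setup):

  network={
      ssid="CorpWiFi"
      key_mgmt=WPA-EAP
      eap=PEAP
      identity="user@corp.example"
      password="hunter2"
      phase2="auth=MSCHAPV2"
  }

Which rather proves the point: the config is trivial, yet nothing ships a "just ask me for these two things" wrapper by default.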
python machine learning tutorials:
- import a preprocessed dataset in a perfect format specially crafted to match the model, instead of reading from a file like an actual real-life project would
- use image data for a recurrent neural network and see no problem
- use Conv1D for 2D input data like images
- use two-letter variable names that only the tutorial creator knows the meaning of
- do 10 data transformations in 1 line with no explanation of what is going on
- just enter these magic words
- okey guys thanks for watching make sure to hit that subscribe button
ehh, the machine learning ecosystem is a burning pile of shit, let me give you some examples:
- thanks to years of object oriented programming research and the most wonderful abstractions we have "loss.backward()", which has no apparent connection to the model but affects the model, good to know
- cannot install the python packages because python must be >= 3.9 and at the same time < 3.9
- runtime error with bullshit cryptic message
- python having no data types but pytorch forces you to specify float32
- lets throw away the module name of a function with these simple tricks:
"import torch.nn.functional as F"
"import torch_geometric.transforms as T"
- tensor.detach().cpu().numpy() ???
- class NeuralNetwork(torch.nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__() ????
- lets call a function that switches on the tracking of math operations on tensors "model.train()" instead of something more indicative of the function actual effect like "model.set_mode_to_train()"
- what the fuck is ".iloc" ?
- solving environment -/- brings back memories of when you could make breakfast while the computer was turning on
- hey lets choose the slowest, sloppiest, most inconsistent language ever created for a high performance computing task called "data sCieNcE". but.. but. you can use numpy! I DONT GIVE A SHIT about numpy, why don't you motherfuckers create a language that is inherently performant instead of calling some convoluted c++ library that requires 10s of dependencies? Why don't you create a package management system that works without me having to try random bullshit for 3 hours???
- lets set as industry standard a jupyter notebook which is not git compatible and has either 2-second tab completion latency or no tab completion at all, either no documentation on hover or useless documentation on hover, no way to easily redo changes, no autosave, no error highlighting, and the possibility to use a variable defined in a cell below in the cell above it
- lets use inconsistent variable names like "read_csv" and "isfile"
- lets pass a boolean variable as a string "true"
- lets contribute to tech enabled authoritarianism and create a face recognition and object detection models that china uses to destroy uyghur minority
- lets create a license plate computer vision system that will help government surveillance everyone, guys what a great idea
I don't want to deal with this bullshit language, bullshit ecosystem and bullshit unethical tech anymore.11 -
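To be fair to the magic words, here's the smallest annotated sketch of what they actually do (toy model, fake batch, runs standalone):

  import torch

  model = torch.nn.Linear(4, 2)                      # toy model
  opt = torch.optim.SGD(model.parameters(), lr=0.1)
  loss_fn = torch.nn.CrossEntropyLoss()
  x = torch.randn(8, 4)                              # fake batch instead of a magic dataset
  y = torch.randint(0, 2, (8,))

  model.train()    # trains nothing: just flips dropout/batchnorm into training mode
  opt.zero_grad()  # gradients accumulate by default, so clear them first
  loss = loss_fn(model(x), y)
  loss.backward()  # the hidden link to the model: autograd walks the graph behind the tensor
  opt.step()       # the optimizer holds references to the parameters, not to the loss

  print(loss.detach().cpu().numpy())  # leave the autograd graph -> move to CPU -> hand to numpy

It doesn't make the ecosystem less of a burning pile, but at least the incantations stop being magic.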
We are researching enhancing our current alerting system (we use Splunk) to be 'smarter' about who is emailed/texted/whatever when there are problems in our applications.
Currently, if there are over 50 errors logged within a 15-minute period, an email/phone/text blast goes out to nearly 100 individuals ranging from developers, network admins, and DBAs to vice presidents.
Our plan is to group errors by team and let each team manage their own applications. Alert on 1 error, 5, 500...we don't care, let the team work out the particulars.
The trick was interfacing with Splunk's API (that's a long rant by itself)
In about a day or so I was able to use Splunk's WebHook feature to notify a WebAPI service I threw together that sends me an email with details about the underlying data (simulating the kind of alert we would send to the team).
I thought ...cool... it worked. Showed it off to the team; most thought it was a good start, except one:
Dev: "The errors are not grouped by team."
Me: "No, I threw the webapi service together to demonstrate how we can extract the splunk bits to get access to the teams"
Dev: "Well...this won't work at all."
Me: "Um..what?"
Dev: "The specification c l e a r l y states the email will be team based. This email was only sent to you and has all the teams and their applications"
Me: "Um...uh...the service can, if we want to go using a service route. Grouping by team name is easy using a LINQ query. I just through this service together yesterday."
Dev: "I don't know. Sounds like I need to schedule a meeting to discuss what you are proposing. I don't think emailing all that to everyone is a good idea."
WTF! Did you not listen to what I said?!!!
Oh well..the dev's proposal is to use Splunk's email notification and custom Exchange rules with callbacks into Splunk that resend...oh good lord ...a fracking Rube Goldberg of a config nightmare ...
I suspect we'll go the service route, assuming I finish the service before the meeting.1
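For what it's worth, the grouping the dev doubted really is trivial; the actual service is C#/LINQ, but here's the same idea in Python, with made-up record fields:

  from collections import defaultdict

  errors = [                                       # stand-in for the Splunk webhook payload
      {"team": "payments", "message": "timeout"},
      {"team": "search", "message": "HTTP 500"},
      {"team": "payments", "message": "deadlock"},
  ]

  by_team = defaultdict(list)
  for e in errors:
      by_team[e["team"]].append(e)

  for team, batch in by_team.items():
      print(f"notify {team}: {len(batch)} error(s)")  # stand-in for the per-team email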
Pentesting for an undisclosed company. Let's call them X so as not to get us into trouble.
We are students and are doing our first pentest at an actual company instead of assignments at school. So we're very anxious. But today was a good day.
We found some servers with open ports so we checked a few of them out. I had a set of them with a bunch of open ports like ftp and... 8080. Time to check this out.
"please install flash player"... Security risk 1 found!
The system seemed to be some monitoring system. Trying to log in using admin admin... Fucking works. The group loses it, cause the company was being all high and mighty about being secure af. Other shit is pretty tight though.
Able to see logs, change passwords, add a new superuser, do some searches for USERS_LOGGEDIN_TODAY! I shit you not, the system even had SUGGESTIONS for usernames to search for. One of which had something to do with sftp and auth keys. Unfortunately every search gave a SQL syntax error. Used sniffing tools to maybe intercept messages so we could run some queries of our own, but nothing. The query is probably not issued from the local machine.
Tried to decompile the flash file, but no luck. Got only some weird lines and a few function names, I presume. But decompressing it and opening it in a text editor allowed me to see and search the text. No GET or POST found. No SQL queries or name checks or anything else we could think of.
That's all I could do for today. So we'll have to think of stuff for next week. We've already planned xss so maybe we can do that on this server as well.
We also found some older network printers with open telnet. Servers with a specific SQL variant with a potential exploit to execute terminal commands and some ftp and smb servers we need to check out next week.
Hella excited about this!
If you guys have any suggestions let us know. We are utter noobs when it comes to this.6 -
So, the network I was on was blocking every single VPN site that I could find, so I could not download Proton onto my computer without using some sketchy third-party site. Being left with no options and a tiny phone data plan, I used the one possible remaining option: an online Android emulator. In the emulator, running at like 180p, I once again navigated to the ProtonVPN site, downloaded the windows version, and uploaded it to Firefox Send. Opened Send on my computer, downloaded the file, installed it, and realized my error: I need access to the VPN site to log in.
In a panic, I went to my phone, ready to use what little was left of my data plan for security, and was met with no signal indoors. Fuck. New plan. I found an Xfinity wifi thing, and although connecting to a public network freaked me out, I decided to go for it because fuck it. I selected the one-hour free pass, logged in, and it said I had already used it. What? When? So I created a new account, logged in, logged into Proton, connected, and finally, I was safe.
Fuck the wifi provider for discouraging the right to a private internet, and fuck the owner for allowing it. I realize how bad it was to enter my Proton account over Xfinity wifi, but I was desperate, and desperate times call for desperate measures. I have now changed my password and have 2FA enabled.1
WTF is going on with twitter?!
- Yesterday I've Installed the app and tried to signup
- I've entered my birthday
- Entered phone number
- Wait for SMS...no SMS
- Tap resend...no SMS
- Wait half an hour...no SMS
- Tried few times...Started getting error: This number cannot be registered..
- Today I've tried again
- Phone number accepted
- Wait for SMS...no SMS
- Tried adding my number to a friend's twitter account...Received SMS code..
- Tried again signing up with my phone number, got error: This number cannot be registered..
- Tried from web, getting error: You reached your SMS limit try again in 24h...
How can I reach my f***** limit when I haven't even received a single SMS!
I've been trying to sign up to twitter for 2 f**** days now with no luck, wtf is happening? It's a major social network, for f***'s sake.
And what's worse, there is no way to send a support mail; their "Contact Us" page only has options for existing users..8
just found a vulnerability in the website of the 3rd best high school in my country.
TL;DR: they had, buried in some folders, a c99 shell.
I am a beginner html/sql/php guy and was really looking into learning a bit here and there, because I really like problem solving and found out CTFs mainly focus on this part of programming. I am a c++ programmer who does school-contest-style programming problems and I really enjoy them.
Now back on topic.
With this urge to learn more web programming I said to myself: what better method to learn than real-life sites! So I did just that. I first checked my school's site. Right click. Inspect element. It seemed the site was made with wordpress. After looking more into the html code of the site I concluded all the images and files I could see on the site were from a folder on the server named 'wp-content/uploads'. I checked the folder. And here it got interesting. I did a GET request on the site. Saw the details. Then I checked the site. Bingo! There are 3 folders named '2017', '2018', '2019'. I said to myself: 'I am god.'
I could literally see all the announcements they had made from 2017-2019. And they were organised by month!!! My curiosity to see everything got me to the final destination.
With this adrenaline I thought about another site. In my city there is the 3rd most acclaimed high school in the country. What about checking their security?
So I typed the web address. Looked around. Again, right click, inspect element, and looked around the source code. This time I was more lucky. This site is handmade!!! I was soooo happy, because with my school's site I was restricted by what they had made with wordpress, and I don't have much experience with it.
And so I began looking at what requests the site made for the logos and other links. It seemed all the other links on the site were in this format: www.site.com/index.php?home. And I was very confused and still am. Is this referencing some part of the site in the index.php file? Is the whole site written inside the index.php file and with the question mark you just get to a part of the site? I don't really get it.
So, nothing interesting inside the networking tab, just some stylesheets for the site's design I guess. I switched to the debugger tab and holy moly!! Yes, it had that tree structure. Very familiar. Just like a project inside codeblocks or something similar. And then it clicked. There was the index.php file! And there was another folder I had seen nothing of in the network tab. I finally got a lead!! I returned to the network tab, did a request to see the spgm folder and boooom a site appeared and I saw some files and folders from 2016. There was a spgm.js file and a spgm.php file. There were contrib, flavors, gal and lang folders. Then it once again clicked! The lang folder was last updated this year in february. So I checked the folder and there were some files named lang with extensions named after their language, and these files were last updated in 2016 so I left them alone. But there was this little snitch, this little 650K file named after the school site's name with the extension '.php' aaaaand it was last modified this year!!!! I was so excited! I thought I had found a secret, different design of the site or something completely else!
I clicked it and at first I was scared: there was this black/red theme going on my screen and something was a little odd. There were no school announcements or events, nononoooo. This was still a tree-structured view. At the top of the site it's written '!c99Shell v. 1.0...'
I stopped and looked at the monitor. What have I done? Just trying to learn some web programming, I had laid bare the server of the 3rd most famous high school in my country. Imagine a black hat, who would have seriously caused more damage. Currently I am writing an email to the school to upgrade their security, because it is reaaaaly bad.
And the journey didn't end here. I 'hacked' the site 2 days ago and only just now thought about writing an email to the school. After I found I could access the WHOLE server, I searched for the real attacker, so if you want to know how that one went, let me know in the comments.
Sorry for the long post, but I couldn't hold it in anymore13
I need some opinions on Rx and MVVM. Its being done in iOS, but I think its fairly general programming question.
The small team I joined is using Rx (I've never used it before) and I'm trying to learn and catch up to them. Looking at the code, I think there are thousands of lines of over-engineered code that could be done so much simpler. From a non Rx point of view, I think we are following some bad practises, from an Rx point of view the guys are saying this is what Rx needs to be. I'm trying to discuss this with them, but they are shooting me down saying I just don't know enough about Rx. Maybe thats true, maybe I just don't get it, but they aren't exactly explaining it, just telling me i'm wrong and they are right. I need another set of eyes on this to see if it is just me.
One of the main points is that there are many places where network errors shouldn't complete the observable (i.e. can't call onError), I understand this concept. I read a response from the RxSwift maintainers that said the way to handle this was to wrap your response type in a class with a generic type (e.g. Result<T>) that contained a property to denote a success or error and maybe an error message. This way errors (such as incorrect password) won't cause it to complete, everything goes through onNext and users can retry / go again, makes sense.
The guys are saying that this breaks Rx principles and MVVM. Instead we need separate observables for every type of response. So we have viewModels that contain:
- isSuccessObservable
- isErrorObservable
- isLoadingObservable
- isRefreshingObservable
- etc. (some have close to 10 different observables)
To me this is overkill to have so many streams all frequently only ever delivering 1 or none messages. I would have aimed for 1 observable, that returns an object holding properties for each of these things, and sending several messages. Is that not what streams are suppose to do? Then the local code can use filters as part of the subscriptions. The major benefit of having 1 is that it becomes easier to make it generic and abstract away, which brings us to point 2.
Currently, due to each viewModel having different numbers of observables and methods with different names (but effectively doing the same thing), the guys create a new custom protocol (the equivalent of a Java interface) for each viewModel with its N observables. The viewModel creates local variables of PublishSubject, BehaviorSubject, Driver etc. Then it implements the protocol / interface and casts all the locals back as observables, e.g.
protocol CarViewModelType {
    var isSuccessObservable: Observable<Car> { get }
    var isErrorObservable: Observable<String> { get }
    var isLoadingObservable: Observable<Void> { get }
}

class CarViewModel {
    let isSuccessSubject = PublishSubject<Car>()
    let isErrorSubject = PublishSubject<String>()
    let isLoadingSubject = PublishSubject<Void>()
    // other stuff
}

extension CarViewModel: CarViewModelType {
    var isSuccessObservable: Observable<Car> {
        return isSuccessSubject.asObservable()
    }
    var isErrorObservable: Observable<String> {
        return isErrorSubject.asObservable()
    }
    var isLoadingObservable: Observable<Void> {
        return isLoadingSubject.asObservable()
    }
}
This has to be created by hand for every viewModel, of which there is one for every screen, and there are 40+ screens. This same structure is copy/pasted into every viewModel. As mentioned above, I would like to make this all generic: have a generic protocol for all viewModels that defines 1 observable and 1 local variable of a generic type, and handles the cast back automatically. The method that triggers all the business logic could also have its name standardised ("load", "fetch", "processData" etc.). Maybe we could also figure out a few other bits too. This would remove a lot of code, as well as making the code more readable (less messy), and make unit testing much easier. While it could never do everything automatically, we could test the basic responses of each viewModel and have at least some testing done by default, and not have everything be very boilerplate-y and copy/paste in nature.
The guys think that subscribing to isSuccess and/or isError is perfect Rx + MVVM. But for some reason subscribing to status.filter(success) or status.filter(!success) is a sin of unimaginable proportions. Also the idea of multiple buttons and events all "reacting" to the same method named e.g. "load" is bad Rx (why, if they all need to do the same thing?)
My thoughts on this are:
- To me it's identical in meaning and architecture, one way is just significantly less code.
- Let's say I agree it's not textbook; is it not worth bending the rules to reduce code?
- We are already breaking the rules of MVVM to introduce coordinators (which I hate, as they are adding even more unnecessary code), so why is breaking it to reduce code such a no no.
Any thoughts on the above? Am I way off the mark or is this classic Rx?16 -
So I made an update to my React Native app. I changed the UI of a couple of screens, added a few animations here and there, refactored how my graphQL resolvers work in the backend (no breaking changes), changed how data gets loaded into the database, etc.
It worked in dev so I figured, hey, let's deploy it. Today is (was, because it's now 3am, but more on that later) a national holiday, so no one goes to work, so no one will use my app, so I have an entire day to deploy.
I started at 15:00 (because I woke up at 13:00 lol). I tested the update once again in dev and proceeded to deploy it to prod. I merged the backend to master, built docker images, ran the migrations on the db, restarted docker-compose with the new images. And now for the app. I run ./gradlew assembleRelease and it starts complaining that react-native-gesture-handler is not installed. Ugh, rm -rf node_modules && yarn install. That worked. But now gradlew crashes and the logs don't tell me anything. Google tells me to change a bunch of gradle settings, but none of them work. Fast forward 5h: it's around 20:00 and I've isolated the issue to, again, react-native-gesture-handler. They updated from 2.2.4 to 2.3.0, which didn't fucking compile. 2 more hours passed (now 22:00) and I got v2.3.1 working, which fixed the problem in 2.3.0 but made my app crash on startup. YOUR FUCKING LIBRARY GETS 250K WEEKLY DOWNLOADS AND YOU DON'T EVEN BOTHER CHECKING IF IT COMPILES IN PROD ON ANDROID?! WHAT THE FUCK software-mansion?
After I solved that, my app didn't crash. Now it threw a "TypeError: Network Request Failed" every time I fetched my legacy REST API (older parts use REST, newer use graphQL; I'll refactor that in the next update). I'll spare you the debugging hell I went through, but another 5h passed. It's 3am. My config had a misspelled URL for prod but a good one for dev... I hate myself and, even more so, react-native-gesture-handler.3
*laughing maniacally*
Okidoky you lil fucker where you've been hiding...
*streaming tcpdump via SSH to other box, feeding tshark with input filters*
Finally finding a request with an ominous dissector warning about headers...
Not finding anything with silversearcher / ag in the project...
*getting even more pissed cause I've been looking for the lil fucker for 2 days*
*generating possible splits of the header name, piping to silversearcher*
*I/O looks like clusterfuck*
Come on, it's just a dozen gigabytes of text, don't choke just because you have to suck on all the sucking projects this company owns... Don't drown now, lil bukkake princess.
*half an hour later*
Oh... Interesting. Bukkake princess survived and even spilled the tea.
Someone was trying to be overly "eager" to avoid magic numbers...
They concatenated a header name out of several const vars which stem from a static class with like... 300? 400? vars of which I can make no fucking sense at all.
Class literally looks like the most braindamaged thing one could imagine.
And yes... Coming back to the network error I'd been debugging for 2 days, as it was occurring at erratic intervals and of course no one knew why...
One of the devs had changed the const value of one of the variables to contain UTF-8 characters. For "cleaner meaning".
Sometimes I just want to electrocute people ...
The reason this didn't pop up all the time was because the test system triggered one call with the header - whenever said dev pushed changes...
And yeah. Test failures can be ignored.
Why bother? Just continue meddling in shit.
I'm glad for the dev that I'm in home office... :@
TLDR: Dev changed a const value without thinking, ignored test failures, and I had the fun of debugging a mysterious HAProxy failure caused by HTTP header validation for 2 days...
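The remote-capture pipe from the start of this mess, for anyone who wants it (interface and display filter are whatever your setup needs):

  ssh user@remotebox 'tcpdump -i eth0 -U -w -' | tshark -r - -Y http

tcpdump's '-U -w -' streams packet-buffered pcap to stdout, and tshark reads it from stdin and runs its dissectors on it, header warnings and all.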
Rant && SPAM alert!
I'm learning QML to create plasma widgets, and I wasted the whole fucking day fighting with layouts and trying to understand why the settings window was not rendered (now it's rendered but I still don't understand why it wasn't before, the code is the same!)
so at the end of the day I tried to apply what I learnt in a fresh new widget that shows (some) PiHole statistics from its API.
on first run:
it runs fine, no errors... ok let's do some tests... turn off the network, the whole DE freezes, WTF!?! one widget error (a network error in this case) can freeze the whole DE.
restarted plasma, FIXED the bug (debugging process basically is:
try something - freeze - restart plasma - repeat
),
No more freeze!
if you're a KDE and pihole user and you want to try my widget:
https://github.com/ShellAddicted/...
P.S.: I'm currently adding a switch to quickly enable/disable pi hole over the API directly from your desktop. I will push tomorrow.4
Error reporting was flooded with failed database connection errors, and I was baffled about what was causing them.
Yeah, the fucking network operator didn't tell us anything about maintenance work. Fuck you too.
Friday 13th. Superstition.
0655, got WFH laptop going. 0700, VPN'ed in. Bluescreen, first in ages. Yes, Windows, the hatred is mutual. Rebooted. Windows claimed memory fault, offered check, 40 minutes. Noped out. Started machine. VPN'ed in. Some strange script error that I'd never seen before. Rebooted. Script error again. Shut down machine, then rebooted, same problem. 0715, fuck, still wearing sweaters, my e-scooter not charged, and an important Teams call at 0800.
Got dressed, stuffed laptop into backpack, hurried up by foot. Took the bus. Fuck, the next connection on the change station just had gone off. Took a taxi to make it. Arrived at the company, plugged in the laptop, started with no issues. Had the important call.
Took the laptop to IT. Tested it with external network connection and VPN. Worked with no script error. Had it checked for RAM issues. No issue. WTF had happened in the morning?!6 -
How one defines seniority in a corporate environment is beyond me.
This guy is supposed to be Tier 3, literally "advanced technical support". Taking care of network boxes, which are more or less linux servers. The most knowledgeable person on the topic, for when Tier 1 screws something up and it's not BAU / Tier 2 can't fix it.
In the past hour he:
- attempted to 'cd' to a file and wondered why he got an error
- has no idea how to spell 'md5sum'
- syntax for 'cp' command had to be spelled out to him letter by letter
- has only vague idea how SSH key setup works (can do it only if sombody prepares him the commands)
- was confused how to 'grep' a string from a logfile
This is not something new and fancy he had no time to learn yet. These things are the same past 20-30 years. I used to feel sorry for US guys getting fired due to their work being outsourced to us but that is no longer the case. Our average IT college drop-out could handle maintenance better than some of these people.11 -
When my boss thinks doing anything is better than doing nothing...
I have to explain to my boss for the Nth time already that doing random tests and things will not replicate a PRODUCTION network issue that seems to be caused by particular factors at a particular time and location.
And that the best way to trace it is for whatever is raising the issue to log the exact time and error, so the problem can be traced through all the steps...
FML.....1 -
Can't git push
because of an "access denied" error message
because I didn't set up my key file properly (with the right paths, the right format and so on)
because I'm working from my home laptop device
because I'm in home office
because Corona
..but..
I can connect to my work computer where git is set up properly but also I
can't git push
because I can't "cd" into the project path
because the samba mount point is messed up
because I don't reboot my machine to fix it
because I can't enter my password
because it has full hard drive encryption and the password screen shows up before the network services are started.2
Fk you Google!
My Samsung Note 10's screen went dead nearly a week ago... it's a secondary line, so waiting for parts wasn't the end of the world.
Ofc the screen (curved and incl. a fingerprint reader that'd be a major pain to not replace) was integrated into the whole front half... back panel glued, battery glued immensely, and with all other parts out, only about 6mm of space at the bottom to get a tool in to pry it out.
New screen (off brand) ~200... all genuine parts amazon refurb ~230... figured I'd have some extra hardware for idk what... I like hardware and can write drivers, so why not.
Figured I'd save a bit of time and avoid other potentially (water) damaged components by just swapping out the mobo unit that had my storage.
Put it back together, first checked that my sim was recognised since this carrier requires extraneous info when registering the device... worked fine... fingerprint worked fine, brave browser too...
Then I open chrome. It tells me I'm offline... weird cuz I was literally in a discord call. My wifi says connected to the internet (not that I wouldn't have known the second there was a network issue... I have all our servers here and a /28 block... ofc I have everything scripted and connected to alert any dev I have, anywhere I am, the moment something strange happens).
Apparently google doesn't like the new daughter board (I dislike the naming scheme... it's weird to me)... so anything that is controlled by google, aside from the google account that is linked to non-google-reliant apps like this, just hangs as if loading and/or says I'm offline.
I know... it'll only take me about the 5-10m it took to type this rant, but ffs google... why don't you even have an error message as to what your issue is... or the simple ability to let me log in and be like 'yup it's me, here's your dumb 2fa and a 3rd via text cuz you're extra paranoid, yet don't actually lock the account or device in any way!'
I think it's a toss-up whether google actually knows it's doing this, or they just have some giant glitch that showed up a couple times in testing and was resolved via the methods of my great grama: "just smack it or kick it a few times while swearing at it in polish. Like reaaaally yelling. Always worked for me! If not, find a fall guy."7
I had the idea that part of the problem with NN and ML research is that we all use the same standard loss and nonlinearity functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement came not just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover not only the architecture (been done before), but also the nonlinearity and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution, is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs, within themselves, switching between them dynamically..via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself; just a few things need fleshing out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try mnist, or teach it step by step how to do sequence masking and prediction, dunno yet.6 -
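Nothing like the actual interpreter in the pastebin, but here's a toy sketch of the substitute-and-select loop described above (graphs reduced to op lists, validation loss reduced to matching a target program):

  import random

  OPS = ["relu", "tanh", "sigmoid", "conv", "maxpool"]
  TARGET = ["conv", "relu", "maxpool", "tanh"]     # stand-in for a real validation score

  def fitness(graph):
      return sum(a == b for a, b in zip(graph, TARGET))

  def substitute(graph):
      # whole-subgraph substitution instead of point mutation
      g = list(graph)
      i = random.randrange(len(g))
      j = random.randrange(i, len(g))
      g[i:j + 1] = [random.choice(OPS) for _ in range(j - i + 1)]
      return g

  pop = [[random.choice(OPS) for _ in range(4)] for _ in range(16)]
  for _ in range(100):
      pop.sort(key=fitness, reverse=True)          # keep the fitter graphs
      pop = pop[:8] + [substitute(random.choice(pop[:8])) for _ in range(8)]

  print(max(pop, key=fitness))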
Why do SAMBA network drives have to suck this much? Yeah, I understand that compiling onto a network drive is probably a bad idea for performance reasons alone, but can't you at least not fuck with my git repo?
$ git gc
Enumerating objects: 330, done.
Counting objects: 100% (330/330), done.
Delta compression using up to 24 threads
Compressing objects: 100% (165/165), done.
Writing objects: 100% (330/330), done.
Total 330 (delta 177), reused 281 (delta 151), pack-reused 0
error: unable to open .git/objects/7e: Not a directory
error: unable to open .git/objects/7e: Not a directory
fatal: unable to mark recent objects
fatal: failed to run prune
$ git gc
error: unable to open .git/objects/00: Not a directory
fatal: unable to add recent objects
fatal: failed to run repack
$ git gc
Enumerating objects: 330, done.
Counting objects: 100% (330/330), done.
Delta compression using up to 24 threads
Compressing objects: 100% (139/139), done.
Writing objects: 100% (330/330), done.
Total 330 (delta 177), reused 330 (delta 177), pack-reused 0
Removing duplicate objects: 100% (256/256), done.
error: unable to open .git/objects/05: Not a directory
error: unable to open .git/objects/05: Not a directory7 -
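If anyone else is forced to keep a repo on a cifs share, these mount options are a hedged guess that sometimes calms git down (server, share and version are placeholders):

  sudo mount -t cifs //server/share /mnt/work -o user=me,vers=3.0,nobrl,actimeo=0

nobrl turns off the byte-range locking that tends to fight with git's object files, at the cost of, well, locking.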
Here's some more information to help you despise Apple.
In the past few weeks I've had a problem where the iMac fails to connect to whatever website/host, so I have to redo whatever I was doing: fetching from github, pushing to github, connecting to a LAN server, pinging to find an IP, accessing a webpage and so on.
Luckily enough, browsers tend to request again if an error occurs.
At my job, I upload app files to servers, like GooglePlay and AppStoreConnect.
For those who don't know, Google lets you upload the app through the browser (among other ways), while Apple requires you to upload your app through Xcode. No other possible way.
Whenever Xcode requires an update, authentication is required, but the authentication server cannot be reached for at least 5-6 tries.
Then I have to upload the app, and just getting ready to hit the "upload" button takes like 3-4 minutes, which might be completely wasted if a network error occurs.
How hard is it to make your fucking app loader try again at least a few times?6
Unable to access cpanel/whm due to IP changed error.
called HR
me : please connect me to networking team (out sourced)
hr : why ?
me : I have an issue accessing cpanel. I contacted the hosting company but it is not their fault, so maybe it's our network issue.
hr : explain it to me in detail.
me : ok
Since morning I have been trying to access whm because our website is over its bandwidth limit and showing a 509 error. I contacted the hosting company but they explained the problem is on our side. So I wanted to talk with the network team about this issue, because I am not using any proxy or vpn, even my tor browser is off, and still the IP-changed error keeps frustrating me. The second reason I am frustrated is that my public IP and private IP have not changed.
One more thing: your windows pc froze 3 times this morning.
Do you need a more detailed technical reason why I want to talk with them?
hr : no no no *hang up*
after 2 minutes *my landline rings*
hr : the network engineer is on the other side.
fair enough2 -
So I recently finished a rewrite of a website that processes donations for nonprofits. Once it was complete, I had to migrate all the data from the old system to the new system. This involved iterating through every transaction in the database and making a cURL request to the new system's API. A rough calculation yielded 16 hours of migration time.
The first hour or two of the migration (where it was creating users) was fine, no issues. But once it got to the transaction part, the API server would start using more and more RAM. Eventually (30 minutes in), it would start hitting OOMs and the like. For a while, I just assumed the issue was a lack of RAM, so I upgraded the server to 16 GB of RAM.
Running the script again, it would approach the 7 GiB mark and max out all 8 CPUs. At this point, I assumed there was a memory leak somewhere and the garbage collector was doing its best to free up anything it could find. I scanned my code time and time again, but there was no place where I was keeping any strong references to anything!
At this point, I just sort of gave up. Every 30 minutes, I would restart the server to fix the RAM and CPU issue. And all was fine. But then there was this one time when I tried to kill it and got the error: "fork failed: resource temporarily unavailable". Up until this point, I believed this was simply a lack of memory...but none of my SWAP was in use! And I had 4 GiB of cached stuff!
Now this made me really confused. So I did one search on the Internet, and apparently this can be caused by many things: a lack of file descriptors or even too many threads. So I did some digging, and as it turns out my app was using over 31 thousand threads!!!!! WTF!
I did some more digging, and it turns out I never called close() on my network objects, thus leaving ~30 new "worker" threads per iteration of the migration script. Thanks Java, if only finalize() were utilized properly.1
So I have a semi big project ongoing:
Because my modem+router combo sucks dick and gets buttfucked too much, I want to build my own router with PPPoE.
So I ordered an 8 euro used switch, 24 ports, for management. But our IPTV provider is sucking cock too! He uses multicasting to send the fucking IPTV signal. And the switch is not VLAN-ready, so the network will be flooded.
But that's not the worst...
I don't know how to route VOIP over QoS correctly... so I just hope that part works!
I also ordered another network switch which is manageable + a god damn networking closet. 80 bucks gone again! Wish me luck that this works better...
BUT THAT'S NOT THE WORST, BECAUSE NOW COMES THE HEAVY PART!
I wanted to use my old AMD Athlon X2 64-bit with 4 gigs of RAM as the powerhouse of the router. I plugged it into the wall, booted, screen error... thought it might be the integrated graphics card... unplugged my old one, inserted it.... AND IT WOR... NOPE NOPE IT DIDN'T. NOW MY DAMN MOTHERBOARD IS FUCKING FRIED TO DUST BECAUSE OF THE GOD DAMN ... I DON'T EVEN KNOW! AAAAA
So I thought I could temporarily use my raspberry pi 1 model B, a good fellow with multiple years of usage! I plugged the sd card into my girlfriend's laptop, I wasn't at home, and her god damn internet downloaded that shitty raspbian (sorry raspi, but your servers are sometimes very slow), and after the download I realized her GOD DAMN SD READER DIDN'T FUCKING WORK!!!
SO I GUESS I WILL WAIT!1 -
TL;DR: There was a Steam bug and I fixed it locally.
Some months ago, Steam had the problem, that if you tried to add anything from the Steam Workshop to a collection, you would get an error like "Process failed: 2", while it was loading the collection list.
I realized that it would work, but there was a bug in the JS (I watched the network tab in chrome while trying to add to a collection). I searched for "Process failed" in each js file, and after 30 seconds I found the buggy if. It said something like
if (json.success != 2) {
//do error
} else {
//show list
}
After I changed that if condition to
if (false)...
it worked perfectly, although it would cause problems if there were a server-side error.2
Maxi-Rant, rest in the first comment!
Yay, I've caught up with my "watch later" list on YouTube! Next thing: just quickly go through my subscribed channels and add old videos that I haven't seen yet to the watch later list, so that I have more stuff to watch over the next months. The easiest way to do that is to go to the "all uploads" playlist of the channel (which is luckily always linked now, it used to be hidden sometimes) and use "add all to" to get them onto my playlist. Then sort out the stuff that I've already seen and turn on automatic sorting by date. Easy. Yeah...
Firstly, in the new design there's no "add all to", so I have to go to the old design. For my own playlists there's a handy "edit" button to do that, but on other pages I have to do it manually. Luckily I set Ctrl+Shift+1 as a shortcut for "&disable_polymer=true" long ago.
Next surprise: On "all uploads" playlists, there is no "add all to" button. It's on every single other playlist on YouTube, including "liked", "watch later", "favourites" and so on, just not there.
Fine, I'll just abuse my existing subscription playlist script by making a copy of it, putting the channel IDs in it and setting the last execution date to 1.1.2001. Little problem with that: Google Apps Scripts can run for at most 5 minutes and the YouTube API restricts them to adding one video per second. So it doesn't work for more than 300 videos. I could try to split it up by dates, but I didn't write the script myself and I don't know how it sorts the videos to add, so I'll just google for another solution instead.
Found one: go to the video overview of the channel in the old layout, Ctrl+Shift+I, paste this little JavaScript thing, and it automatically clicks all the little clocks that add the videos to the watch later list. Yay, that works! Ok, I'm restricted to 5000 videos, because that's the maximum size of a YouTube playlist, so I can't immediately add all 8000+, but whatever, that's a minor problem and I'll sort it out later. Still another little problem: for some reason I can't automatically sort the watch later list. Because that would be too easy.
But whatever, I'll just use "add all to" from there to add the videos to my creatively named "WL" list. If that thing is restricted by the same rate limit of 1 video per second, it should be done in about 1½ hours. A bit long, but hey, I'm dealing with 5000 videos. Waiting 2 hours... waiting 3 hours... nothing happens. It would be nice if it at least added them one by one, but no, it waits an eternity and then adds all at once. At least in theory, because right now it does absolutely nothing.
Shortly considered running it for more hours or even days on my Raspberry Pi, but that thing already struggles when using Chromium normally, I shouldn't bother it with anything that has to do with 5000 videos.
Ok, what else can I do then? Googling, trying out different things, mainly external services that have their own concept of "playlists" and can then add them to an arbitrary playlist later...
Even tried writing my own Java program with the YouTube API, but after about an hour not even the example program in the YouTube API tutorial worked (50 errors and even more open questions, woohoo), so I discarded that idea.
Then I discovered "DiskYT". Everything looked like it would work, and I'm still convinced that I can do it with that little pile of shit. Why is it a pile of shit? Well, for example, the site reloads itself after a while, so it can add at most 700 videos to a playlist. Also I can't just paste the channel link (it recognises those links, but only to show an error message that it can't copy from channels). I can't enter/paste URLs; I have to drag them. The site saves absolutely nothing (it should in theory, but in practice it doesn't), so I have to re-drag everything on every try. In one network, the "authorise YouTube" button (that I have to press again on every computer) does absolutely nothing ("inspect" reveals that there isn't even any action bound to the button); in another network the page mostly doesn't work at all, or the button to copy from playlists is suddenly gone, or other weird stuff. Luckily I have the WiFi at home; there it works in theory. But just on my desktop PC, no other device, wow. I tried to run it on my new laptop, but it's so new that it still has the preinstalled OS, and there I can't deactivate going to standby when closing the laptop, so while I expected it to add 5000 videos, it instead added 4 and went to standby. But it doesn't matter, because it would have failed at about 700 anyway. Every time I try to use this website I get new problems, but it seems to still be the best option, because everything else just doesn't do anything. This page at least got to 700 before.
Continuing in first comment!4 -
FUCK APPLICATION LEVEL FIREWALLS!
So I came online today and thought, alright, let's open the shitty outlook webmail client. Holy crap... that's way too many mails. Many of them are missed teams messages. So I open up teams and holy crap. Like every third dev in my company has sent me a message screaming "gitlab is not working!!!".
Yesterday I updated it, so I immediately go into panic mode - what shitty hack have I done?!
So yeah, gitlab seems to be working just fine, everything is speedy and responsive, so I call one of my fellow devs and ask him what's wrong? And he is like, oh yeah, there's an ldap error saying timeout or something.
I try to login with active directory. Works like a charm. Try another account, same problem?!
Google the problem, search gitlab tickets. Nope there is no open bug or sth. like this.
So alright, let's call the network guy. "Yo, can you check if there is something ldap-like getting blocked on the way to the gitlab server?" - He is like, oh yeah damn, like almost every damn request is getting blocked. Ah wait, there was a firewall update yesterday too. Yeah, ldap is no longer ldap. BLOCK THAT SHIT!
After 10 minutes of figuring out what shitty type is detected by the firewall and what needs to be whitelisted to make it fucking work again it seems to work.
But ha, no, there is another update rolling out, so same shit like 15 minutes later.
Now it seems to work, and I have to inform every damn fcking developer that it works again. And yeah, alright, you sent a mail, but fuck it, I will call you anyway! So yeah, just answering calls, mails and chat messages. Like, why the fuck can't you read your mails like a damn normal person?!1
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom-written UI elements and duplicated functionality that became available in later versions of Java; but because some of the third-party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written to support it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a tomcat-based application that was written by someone who had never built a tomcat application before, or any web application for that matter; it uses raw SQL strings instead of an ORM, doesn't use MVC in any way, and an insane amount of functionality is dumped into the jsp files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side, before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format) by reading the xref table of said PDF, finding the embedded image data (for our server, consumed test files are just flate-encoded TIFF files wrapped in just enough PDF to make them valid), and using a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server. If the client can generate images, it'll query the server for which ones it needs to; if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to all manner of third-party tools and libraries, including a link to Visual Studio 2005 (more indirectly related build dependencies). The whole profile has to ensure that absolutely no operating system pollution gets into the shell; most of our apps are installed in our home directories, and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not Git or Mercurial, not Perforce or SVN, not even CVS... They use a custom-built version control system built on top of RCS. It keeps a central database of locked files (using soft and hard locks along with write-protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
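If you've never met RCS, the lock-based workflow underneath looks roughly like this (file name is just an example):

  co -l render.c    # check out and lock; everyone else is blocked from editing it
  vi render.c       # make the change while holding the lock
  ci -u render.c    # check in, release the lock, keep a read-only working copy

Pessimistic locking sidesteps merges entirely, which is exactly why everything built on top of it gets heavyweight.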
Branching is heavyweight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to set up the dev environment for my project takes the best part of a week.
What should be a joy come hardware-refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there are a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in Python or Node, hell, I'd even take Lua... Just... compiling web pages into executable Java classes, why? I mean, I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?2 -
As promised.
Day #1 on THE other project. Nothing fancy, just setting up my dev env. Got a decent PC with all the required network permissions. And this time I got w10 [last year I was working there on a w7 PC via RDP from another w7 laptop. Don't ask...]
Of course, no local admin rights to set shit up. Downloaded all the installs, found someone with admin rights to run them. I even managed to get an admin PowerShell!
Ran all the installers, enabled long-path support, env vars, a tweak here, a tweak there... Installed Git Bash to at least have a taste of a shell. Decided to try out WSL. Enabled the feature, didn't reboot right away.
Rebooted. Double-clicked the Ubuntu setup and got an error claiming WSL is not enabled. Wtf? Did I do it wrong? I see the bash command is there now, so I must have done it right. After some googling I found out that even though I can enable WSL, it doesn't work on my version of Windows. It's too old, they say. Yeah, thanks MS, that's very intuitive and user-friendly!
Alright, my hopes of having a decent sub-OS died. Git Bash it is :( but I miss tmux soooo much. Then I came across something that caught my eye. MSYS2, it's called. Apparently it's based on Cygwin and has a pacman package manager! ´pacman -S tmux´ -- yippee-ki-yay motherfuckers! It's not the best terminal emulation, but it works quite alright and it has tmux. And netcat!
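For anyone wanting to replicate it, the MSYS2 side really is about two commands (tmux and git are both in their repos):

  pacman -Syu          # sync and upgrade first; it may ask you to close and reopen the shell
  pacman -S tmux git   # then pull in the essentials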
Banished to mouse-clicker land, I still managed to find a good enough shell. Yayy!
So there it is. My first day's ups and downs, disappointments and discoveries.
If you know a better shell I could set up on w10, please, share -
Spent all night trying to deploy a web app; I needed a guy from another country to check my code with me by sharing my screen with him. We spent hours trying to understand the error, then saw that Node was picking up files from old commits, so the new ones weren't there to be loaded.
After that I went to school and had my final exams. When I came back, I just fixed the routes in Node. Once done, I told them: "Guys, I'll deploy again with these updates and update the repo." Guess what?????? My network stopped working. I was so sleepy and that shit wasn't working, and my provider told me that EVERYTHING WAS FINE WITH THE SERVERS
so I gave up, slept for 3~4 hours, and now my network is fine, the repo and the application are updated, and I don't have any more energy to keep working.1 -
I spent more than an hour trying to figure out why a form didn't work.
I pressed the submit button and nothing happened, no error in the console and nothing in the Network tab in Chrome's devtools. And yet the action was supposedly being executed!
Then I found out there was a catch block somewhere swallowing the error. I removed it, and it said the URL was wrong. But again, I debugged it and nothing seemed wrong. I even hardcoded the values.
In the end, it turned out that the initial "/" was missing in the request URL...
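The classic leading-slash trap, for reference (paths are made up). From a page at https://example.com/app/form, the browser resolves a relative path against the current directory:

  #   action="submit"   ->  https://example.com/app/submit
  #   action="/submit"  ->  https://example.com/submit
  # quick check that the intended endpoint actually answers:
  curl -i https://example.com/submit

-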
Our computer science GCSE exams are flawed in so many ways. They're awfully vague or just completely wrong. In the last exam I did, I got a question that was basically:
"There is a server in a network. Name 3 of its functions"
If you did not provide an answer from within the 5 "correct answers", it was considered incorrect, as it was beyond the curriculum and hence irrelevant.
That's like penalising people for not correctly guessing the contents of an opaque box...
I've genuinely lost more marks to flaws in the marking scheme than to genuine error.
Valve, pls fix2 -
I just finished a bunch of newly configured containers that I had to switch from CentOS 7 to AlmaLinux 9. I have one thing to say:
Fuck NetworkManager!
I know I'm basically a dinosaur when it comes to any coding, especially scripting. I prefer notepad.exe or Sublime to VS Code... you couldn't pay me enough to use crap like vbstudio... but I know I need to get better at not just rewriting things to suit my preferences, since I have others working for or with me now.
So... I tried... I reeeeally tried to tolerate NetworkManager... tried to learn/tolerate dumb nmcli and its matrixed array of dyslexic syntaxes. I just couldn't do it... that, plus the damn default images having BS like an effectively blank, non-error-generating resolv.conf file.
NetworkManager got killed... I went back to editing my network-scripts and scripted those against other scripts for changing the statics around if/when needed... took waaaay less time.
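For reference, the old-school format they mean looks like this (interface name and addresses are examples; note that EL9 dropped the legacy network-scripts package, so this assumes you've pulled an initscripts-style package in from somewhere):

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
  DEVICE=eth0
  BOOTPROTO=static
  ONBOOT=yes
  IPADDR=192.168.1.10
  PREFIX=24
  GATEWAY=192.168.1.1
  DNS1=192.168.1.1
  NM_CONTROLLED=no
  EOF
  systemctl restart network   # only exists while the legacy scripts are installed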
I just don't get why something like NetworkManager even exists on any EL distro... yeah sure, Wi-Fi takes a couple extra steps and is super common now... but that shouldn't be how any actual servers communicate. Can people just not fathom putting shit in a few files in proper syntax anymore???5 -
rant && what do you think?
So one of our ISPs (Orange Slovakia) had trouble with service for like two days. Their DNS servers translated domains to IPs reaaally slowly or not at all. So when I saw the DNS error in Chrome (yes, I use Chrome and not Quantum) I changed my DNS to Google DNS and ignored it.
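For anyone else stuck mid-outage, pointing a Linux box at public resolvers is a one-liner (NetworkManager may rewrite resolv.conf on reconnect; on Windows it's the adapter's IPv4 settings instead):

  echo -e "nameserver 8.8.8.8\nnameserver 1.1.1.1" | sudo tee /etc/resolv.conf
  dig +short devrant.com @8.8.8.8   # sanity check against a specific resolver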
Two days later, when the service was back up and running, the ISP went to the local media and made a statement: "we had a DDoS attack, no user data were harmed, blabla". That's when my BS radar went bananas... so somebody DDoSed your DNS server... for two fucking days straight... this is probably a lie, or they have really noob engineers (or both).
I'm not an expert on network services, routing, or servers, but how about turning off this server's IP and setting up a backup on a different IP? Anyone here with experience in handling DDoS? What's the chance of this happening? I'm really curious23 -
So, I was working on my code base and wanted to update my remote with the local changes. I issued git push, but it just hung: no error, nothing. (I use Bitbucket as the remote host.) This was strange; even enabling the verbose option didn't tell me anything useful apart from the usual 'pushing this to that' sort of response. I checked internet connectivity on my system. It was fine. I restarted my network manager just in case, and tried ping, telnet and other tools. Everything seemed fine.
Well, it turns out that for a major portion of the day Bitbucket was having issues with SSH connections. Finally I added an HTTPS remote and was able to push my changes using the username/password route.
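For the record, the workaround is two commands (repo path is hypothetical):

  git remote set-url origin https://bitbucket.org/team/repo.git   # swap SSH for HTTPS
  GIT_CURL_VERBOSE=1 git push                                     # verbose transport logs, handy next time it hangs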
It wasted a good portion of my time today!! -
Spent all day debugging a simple POST request. Like really, what is going on? Super simple. Eyes start to bleed. Check spelling on everything. Finally find out the Access-Control-Allow-Origin header isn't set right; the other dev said it was, whatever, so glad I'm moving on. Nope. Same error running the app from Visual Studio. Check code again. Everything works in a browser. Windows, VS, or the emulator is blocking just POST requests. I can do GET requests all day.
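A trick that would have saved the day: replay the browser's CORS preflight by hand and look at what comes back (API URL is made up):

  curl -i -X OPTIONS https://api.example.com/things \
    -H "Origin: http://localhost:8080" \
    -H "Access-Control-Request-Method: POST"
  # the response must include Access-Control-Allow-Origin (and -Methods listing POST),
  # otherwise the browser/webview silently refuses to send the real request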
What the hell. I'm so critical of my code I spent hours poring over something I knew was right instead of looking for network errors. I just need to trust myself, I guess.
Oh, and Windows Cordova apps don't support ES6, lol.1 -
AHHHHHHHHHHGGGH
I HATE VPN SETUP
- Trying OpenSwan
Installing Openswan on a Debian machine.. setting up the config.
Restarting openswan. Syntax error. No syntax error to be found.
Different tutorial.. it starts! Try to connect.. I can’t connect. Look at the logs. No errors.
Tcpdump. My traffic is coming through.. all fine.. try to connect again.. it works! (Nothing changed!)
Try to ping somewhere else.. no connectivity.
Try to ping an IP in the same network.. works fine. So I have connectivity, just no internet.
Spend an hour figuring out traffic directions, which no one seems to know the real meaning of.
Boss tells me to stop using Openswan because it's deprecated and replaced by strongSwan..
- strongSwan
Reinstall the Debian machine, install strongSwan. Copy the Openswan config over. Oh, they're incompatible? Look up the strongSwan config format, and the service starts.
Connect to the VPN.. it works! Again, no internet, just connectivity in the same network. Spend 2h debugging the config, disable firewalls everywhere, find an ancient bug in the Debian package related to my issues.. ok, let’s try compiling from source.. you know what, let’s not. I’ll throw this Debian machine away and try something completely different.
- pfSense
Ok, this looks easy enough! Let’s just click through the initial setup, change some firewall rules, create an L2TP VPN with a simple wizard.
Try to connect to VPN. First, it times out. Maybe a firewall issue? Turn off firewall.. ah, something happens now. I get an error message right after trying to connect to the VPN. Hmm, the port doesn’t even get opened when I enable the firewall.. this implementation seems a bit buggy.. let’s try their OpenVPN module.
Configure OpenVPN. Documentation isn’t that clear.. apparently a client isn’t actually a client but a user is a client.. ok, there’s a hidden checkbox somewhere.
Now where do I download my certificate? Oh, I need a plug-in for that.. ok, interesting. Able to download the certificate, import it, connect and.. YES!!! I can ping! But, I have no DNS..
Apparently, ICMP isn’t getting filtered but all outbound ports are.. yet the firewall is completely disabled. Maybe I need outbound NAT? Oh. There’s no clear documentation on where to configure it. Find some ancient doc, set it up, still no outbound connectivity.
AHAHAHAHHHHHHHHHHG
Then I tried VyOS. I had a great L2TP VPN working in less than 15 minutes. Thank you VyOS for actually providing proper docs and proper software.3
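For comparison, here's roughly what that 15-minute VyOS config looks like (from memory of a 1.2/1.3-era build; addresses and secrets are placeholders, and newer releases have renamed some of these nodes):

  set vpn ipsec ipsec-interfaces interface eth0
  set vpn l2tp remote-access outside-address 203.0.113.2
  set vpn l2tp remote-access client-ip-pool start 192.168.255.10
  set vpn l2tp remote-access client-ip-pool stop 192.168.255.250
  set vpn l2tp remote-access ipsec-settings authentication mode pre-shared-secret
  set vpn l2tp remote-access ipsec-settings authentication pre-shared-secret MYSECRET
  set vpn l2tp remote-access authentication mode local
  set vpn l2tp remote-access authentication local-users username alice password s3cret
  commit ; save
-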
Let's try again.
What the fuck is with Apache? Why can't I get the page up? It should be 5 minutes of work.
But it gives some shitty error where it's not clear what is wrong:
This site can’t be reached timetracker.local’s server IP address could not be found.
Try:
Checking the connection
Checking the proxy, firewall, and DNS configuration
Running Windows Network Diagnostics
ERR_NAME_NOT_RESOLVED
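For what it's worth, ERR_NAME_NOT_RESOLVED means the request never reached Apache at all: the browser couldn't turn timetracker.local into an IP. For a local vhost, the usual fix is a hosts entry:

  # map the vhost name to localhost (Linux/macOS; on Windows edit
  # C:\Windows\System32\drivers\etc\hosts instead)
  echo "127.0.0.1 timetracker.local" | sudo tee -a /etc/hosts
  ping -c1 timetracker.local   # should now resolve to 127.0.0.1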
How long has Apache been in development? 10 years? More? And they still can't make normal error messages so you'd know how to fix the problem. Fuck that. I hate it so much. Wasting my time. Bastards.14 -
Greetings devRanters!
I found this on a social network. The error says "Type the password twice". Then I replied "The password must be twice". -
Learning to like Manjaro, a lot. Setting up i3 for a workstation, and a Kubernetes cluster with a couple of Manjaro machines running just the CLI install... A few gotchas on the way: got Hyper-V enhanced mode working but hit a message-session error on dbus-launch. Easy fix, it's already launched by LightDM. The CLI install doesn't start the network service by default. Still, I can get a whole 3-node k8s cluster running in under an hour from scratch, and forward i3 to a nice, fast little Windows X server that I got for free with Microsoft reward points.. winning!
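The cluster bring-up itself is mostly mechanical; a rough kubeadm sketch (control-plane IP, token, hash and the CNI manifest are placeholders, and kubeadm/kubectl come from the AUR or upstream repos on Manjaro):

  # on the control-plane node
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  mkdir -p ~/.kube && sudo cp /etc/kubernetes/admin.conf ~/.kube/config
  kubectl apply -f kube-flannel.yml   # or whichever CNI you prefer
  # on each of the two workers, using the token printed by init
  sudo kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>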
-
Guessing my rant-free streak is over. Trying to connect to a MongoDB Atlas cluster. Just migrated from mLab, as MongoDB Inc. is discontinuing the Heroku add-on.
The migration went well. I can connect to the Atlas cluster via the mongo shell.
ReactiveMongo claims it supports DNS seed lists. I add a mongodb+srv connection string. Doesn't work.
I go back to Atlas and allow access from all IPs (I'm migrating the staging DB first to make sure all is well, so I can whitelist all IPs) -> send a request -> Mongo error: no primary node is available.
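Worth checking before blaming the driver: mongodb+srv just means "discover the hosts via DNS SRV/TXT records", so the lookup has to work wherever the app runs (cluster name is made up):

  dig +short SRV _mongodb._tcp.cluster0.abcde.mongodb.net   # the actual node host names
  dig +short TXT cluster0.abcde.mongodb.net                 # replicaSet/authSource options

If those come back empty on your network, no driver will ever find a primary.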
Disconnect from my network, connect to another network, same thing. I push the connection string to my server and test using an SSL connection to make a request: still no primary node available. I am about to lose my mind. -
Relatively often, the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow, with a small number of users it's fine (I didn't do a benchmark, but Active Directory seemed a bit faster; then again, it has other quirks and is Windows-only). slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style: instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration, so using the new style requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that option might be removed in any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (editing the configuration only from the command line on the server is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root, you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that is only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of a different password - I mean, what the...?), so I have to use a single plaintext password... (secure password hashing works for normal user and normal admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities, it writes them to the logs - at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later), THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! - enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be a separate partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
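For anyone else bitten by this, turning the log firehose down on the cn=config-style setup is one small LDIF away (run as root on the server; "stats" is the usual sane level):

  ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
  dn: cn=config
  changetype: modify
  replace: olcLogLevel
  olcLogLevel: stats
  EOF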
The worst parts are the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find the reason the input was rejected, but in my experience most error messages are a little bit more precise.2 -
Today's website fail. Looks like someone is using an NFS mount for the site:
"Deprecated: Non-static method PageLinesTemplate::current_admin_post_type() should not be called statically, assuming $this from incompatible context in /nfs/c03/h03/mnt/166492/domains/makerspace.com/html/wp-content/themes/pagelines/admin/class.options.metapanel.php on line 30" -
Ok, so that's my plan: find a kernel with a HUGE amount of drivers and a high version number.
I built a small OS based on Linux
-- kernel version 5.0.2 from Plop Linux,
many libraries added 'by hand' -- packages from the APT repos of Debian & Ubuntu, unpacked into the system with Archive Manager,
has a GUI, but it's XFree86 (looks strange when a very old app is running on kernel 5)
So, without compiling anything, I can make an OS.
But I found that Plop didn't compile the rtl8188eu module, which Linux needs to support certain Realtek Wi-Fi cards.
I have no professional compiler, just a tiny C/C++ compiler called TinyCC (aka tcc), and on my PC (CPU freq = 800 MHz) it doesn't seem possible to compile the module myself.
And then I downloaded a 5.2 kernel with modules from kernel.ubuntu.com, but when I tried to mount my disk (vfat partition) I got errors like 'IO charset not found'. Then I replaced it with the Xanmod kernel, which instead reported an error saying 'Invalid Arguments', even though I checked /proc/filesystems and vfat is supported.
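The 'IO charset not found' part at least is a known one: vfat needs the kernel's NLS modules, which not every prebuilt kernel ships. If they exist, something along these lines should work:

  modprobe nls_cp437
  modprobe nls_iso8859-1
  mount -t vfat -o codepage=437,iocharset=iso8859-1 /dev/sda1 /mnt

If modprobe can't find them, the kernel was built without CONFIG_NLS_CODEPAGE_437 / CONFIG_NLS_ISO8859_1, and no mount option will help.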
So what can i do? Are there any pre-compiled kernel & modules with 'full common supports'?
I tried kernel 4.4 ( from Ubuntu 16.04 LTS ) just now but the driver crashed when wpa_supplicant tried to initialize the device.7 -
Question for any IT Techs here.
I'm getting a MAPI Exception Network Error when trying to migrate a rehire's email. Any solutions?1 -
Can someone please tell me how to connect to freenode IRC? Does it still exist? I get an error when I try to connect through Kiwi Web IRC: Unknown Error. In HexChat, freenode seems to be absent from the list of IRC network presets2
-
Ok, so I decided to dive into Objective-C
on Windows.
I downloaded the GNUstep system and core installers,
installed the GNUstep system first, as instructed on the site,
then the core.
But I had installed Strawberry Perl some time back for running some network scripts with Perl,
and now the gcc that's being picked up is Strawberry's, not the GNUstep one I installed,
so when I try to compile Objective-C code, I get a compile error.
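One thing worth checking before anything else is PATH ordering; a sketch from the GNUstep MSYS shell (the GNUstep toolchain path is an assumption based on the default installer layout; adjust to wherever its gcc actually lives):

  which gcc      # if this prints something under /c/Strawberry, that's the culprit
  export PATH="/c/GNUstep/mingw/bin:$PATH"   # assumed default GNUstep toolchain path
  gcc --version  # should now report the GNUstep/MinGW gcc, not Strawberry's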
Please, if anyone out there has a solution3