Search - "make the server your home"
As a developer, sometimes you hammer away on some useless solo side project for a few weeks. Maybe a small game, a web interface for your home-built storage server, or an app to turn your living room lights on and off.
I often see these posts and graphs here about motivation, about a desire to conceive perfection. You want to create a self-hosted Spotify clone "but better", or you set out to make the best todo app for iOS ever written.
These rants and memes often highlight how you start with this incredible drive, how your code is perfectly clean when you begin. Then it all oscillates between states of panic and surprise, sweat, tears and euphoria, and ends in a disillusioned stare at the tangled mess you created, left to gather dust forever in some private repository.
Writing a physics engine from scratch was harder than you expected. You needed a lot of ugly code to get your admin panel working in Safari. Some other shiny idea came along, and you decided to bite, even though you feel a burning guilt about the ever growing pile of unfinished failures.
All I want to say is:
No time was lost.
This is how senior developers are born. You strengthen your brain, the calluses on your mind provide you with perseverance to solve problems. Even if (no, *especially* if) you gave up on your project.
Eventually, giving up is good; it's a sign of wisdom and flexibility to focus on the broader domain again.
One of the things I love about failures is how varied they tend to be, how they force you to start seeing overarching patterns.
You don't notice the things you take back from your failures, they slip back sticking to you, undetected.
You get intuitions for strengths and weaknesses in patterns. Whenever you're matching two sparse ordered indexed lists, there's this corner of your brain lighting up on how to do it efficiently. You realize it's not the ORMs which suck, it's the fundamental object-relational impedance mismatch existing in all languages which causes problems, and you feel your fingers tingling whenever you encounter its effects in the future, ready to dive in ever so slightly deeper.
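That sparse-list trick, for what it's worth, is just a two-pointer merge join. A minimal sketch (TypeScript; the record shape and names are mine, purely illustrative), O(n + m) instead of a nested O(n * m) scan:
"""
type Row = { id: number; value: string };

// Both inputs must be sorted by id. Walk them once, advancing
// whichever pointer lags; ids missing on either side are skipped.
function matchSorted(a: Row[], b: Row[]): Array<[Row, Row]> {
  const matches: Array<[Row, Row]> = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    if (a[i].id === b[j].id) {
      matches.push([a[i], b[j]]);
      i++;
      j++;
    } else if (a[i].id < b[j].id) {
      i++; // a is behind; this id has no partner in b
    } else {
      j++; // b is behind
    }
  }
  return matches;
}
"""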
You notice you can suddenly solve completely abstract data problems using the pathfinding logic from your failed game. You realize you can use vector calculations from your physics engine to compare similarities in psychological behavior. You never understood trigonometry in high school, but while building a deficient robotic Arduino abomination it suddenly started making sense.
You're building intuitions, continuously. These intuitions are grooves which become deeper each time you encounter fundamental patterns. The more variation in environments and topics you expose yourself to, the more permanent these associations become.
Failure is inconsequential, failure even deserves respect, failure builds intuition about patterns. Every single epiphany about similarity in patterns is an incredible victory.
Please, for the love of code...
Start and fail as many projects as you can.
I was lead developer at a small startup; I was hiring and had a budget to add 3 new people to my team to develop a new product for the company.
Some context first and then the rant!
Candidate 1 - Amazing, a dev I worked with before who was underutilized at the previous company. Still a junior, but she was a quick learner and eager to expand her knowledge; never an issue.
Candidate 2 - Kickass dev with back-end skills and extras; he was always eager to work a bit more than what was expected. I used to send him home early to annoy him. haha!
Candidate 3 - Let's call him P.
In the interview he answers every question perfectly, he asks all the right questions and suggests some things I hadn't even thought of. The CTO goes ahead and says we should skip the technical test and just hire the guy - he's smart and knows what he's talking about. I agree and we hire him. (We were a bit desperate at this stage as well.)
He comes in a week early to pick up his work laptop to get set up before he starts the next week, awesome! This guy is going to be an asset to the company, can't wait to have him join the team. The CTO at this stage is getting ready to leave the company, and I will be taking over the division and need someone to take over the lead position - he seems like the guy to do it.
The guy starts the next week. He comes in, and the laptop we gave him is now a local server for testing; he will be working off his own laptop. No issue - we are small, so a testing stack was welcome, though it wasn't really needed since we had procedures in place for this already.
Here is where everything goes wrong!!! First day goes great... Next day he gets in early, 6:30am (Nice! NO!), and he absolutely smells, no, stinks, of weed. Not a light smell - the entire fucking office smells of weed! (I have no problem with weed, just don't make it my problem to deal with.) I get called by my boss and told to sort this out, people are complaining! I drive to the office and have a meeting with him; he says it's all good, he understands. (This was Friday.)
Monday comes around - I get a call from the boss at 7:30am. Whole office smells like weed, please talk to P again, this cannot happen again. I drive to the office again, and he again says it won't happen again; he has some issues with back pain and the weed helps.
Tuesday - Same fucking thing! And now he doesn't want to sign for the laptop ("server") that was given to him, and has moved to code in the boardroom, WHERE OUR FUCKING CLIENTS WILL BE VIEWING A DEMO OF THE PRODUCT THAT DAY!! Now that whole room smells like weed, FML!
Wednesday - We send P a formal letter that he is under probation. P calls me to have a meeting. In the meeting he blames me for not understanding "new age" medicine. I ask for his doctor's prescription and ask why he didn't tell me this in the interview so I could make arrangements - we don't care if you are stoned, just do good work and be considerate to your co-workers. P can't provide these and keeps ranting. I suggest he take painkillers instead; he has none of it, only "new age" medicine for him.
Thursday - I ask him to rather "work" from home till we can get this sorted; he comes in for code reviews for 2 weeks. I can clearly see he has no idea how the system works but is trying. I thought I would dive deeper and look at all of his code. It's a mess, nothing makes sense, and 50% of it is hardcoded (we are building a decentralized API for huge data sets, so this makes no sense).
Friday - In code review I confront him about this; he has excuses for everything. I start asking him harder questions about the project and ask him to explain what we are building - he goes quiet and quits on the spot with a shitty apology.
From what I could make out, he was really smart when it came to theory, but translating the theory into actual practice wasn't possible for him. It probably would have been easier if he wasn't high all the time.
I hate interview code tests, but I learned a valuable lesson that day! Always test for some code knowledge as well, even if you hate doing it; ask the right questions and be careful who you hire! You can only bullshit for so long in coding before someone figures out that you are a fraud.
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs were tasked with converting an ASPX service to WCF - a service with one responsibility: returning product data (description, price, availability, etc... simple stuff).
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head, "Wait a sec... doesn't WCF transmit everything via SOAP, which is XML?" - yes, but in their minds SOAP wasn't XML. That's not the worst WTF of this story.
The team - 3 developers, 2 DBAs, network administrators, several web developers - worked on the conversion for about 9 months using the Waterfall method (3~5 months of which were mostly meetings and very basic prototyping) and a test-first approach (their own flavor of TDD). The 'go live' was to occur at 3:00AM, and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he had made a last-minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and got a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogged-up server bandwidth and a sprinkle of an N+1 scaling problem. The manager lets everyone go home, but be back in the office at 8:00AM to develop a plan so this *never* happens again.
About 2 months later, a 'real' development+integration environment was stood up (previously, any and all integration tests ran on the developer's machine), and the team scheduled a 6:00AM deployment - but at a much, much smaller scale, with just the 3 development team members.
Why? Because the manager had 'frozen' changes to the ASPX service while the web team still needed various enhancements, so they bypassed the service entirely (not using the ASPX service at all), wrote their own SQL scripts that hit the database directly, and utilized AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and an envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Several others and I just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been apart of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same, and expletive, sentiments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
Worst thing you've seen another dev do? So many things. Here is one...
The lead web developer kept a config.txt in the root of their web application (e.g. http://OurPublicSite/config.txt) that contained passwords, because they felt the web.config was not secure enough. Any and all applications off of the root could access the file to retrieve their credentials (SQL Server logins, network share passwords, etc.).
When I pointed out the security flaw, the developer accused me of 'hacking' the site.
I get called into the vice-president's office; he was 'deeply concerned' about my ethical behavior and whether we needed to make any personnel adjustments (grown-up speak for "Do I need to fire you over this?").
Me:"I didn't hack anything. You can navigate directly to the text file using any browser."
Dev: "Directory browsing is denied on the root folder, so you hacked something to get there."
Me: "No, I knew the name of the file so I was able to access it just like any other file."
Dev: "That is only because you have admin permissions. Normal people wouldn't have access"
Me: "I could access it from my home computer"
Dev:"BECAUSE YOU HAVE ADMIN PERMISSIONS!"
Me: "On my personal laptop where I never had to login?"
VP: "What? You mean ...no....please tell me I heard that wrong."
Dev: "No..no...its secure....no one can access that file."
<click..click>
VP: "Hmmm...I can see the system administration password right here. This is unacceptable."
Dev: "Only because your an admin too."
VP: "I'll head home over lunch and try this out on my laptop...oh wait...I left it on...I can remote into it from here"
<click..click..click..click>
VP: "OMG...there it is. That account has access to everything."
<in an almost panic>
Dev: "Only because it's you...you are an admin...that's what I'm trying to say."
Me: "That is not how our public web site works."
VP: "Thank you, but Adam and I need to discuss the next course of action. You two may go."
<Adam is her boss>
Not even 5 minutes later a company-wide email was sent from Adam:
"I would like to thank <Dev> for finding and fixing the security flaw that was exposed on our site. She did a great job in securing our customer data and a great asset to our team. If you see <Dev> in the hallway, be sure to give her a big thank you!"
The "fix"? She moved the text file from the root to the bin directory, where technically, the file was no longer publicly visible.
That 'pattern' was used heavily until she was promoted to upper management, and the younger webdev bucks (and does) felt storing admin-level passwords that way was unethical and found more secure ways to authenticate.
HO. LY. SHIT.
So this gig I got myself into, they have a whitelist of IP addresses that are allowed to access their web server. It's work-at-home. We just got a new internet provider, and it looks like I get a different public IP address every time I disconnect and reconnect to the WiFi. And the way they work on their codebase is that you either edit the files right on the server, or you download the files that you need to work on, make the changes, and then re-upload the files back to the server and refresh the website to see the changes. So now I can't access the server because I keep getting different IP addresses, and it's highly inconvenient to keep emailing them to add IP addresses to the whitelist.
No source control, just straight-up download/upload from/to the server. Like, srsly. So that also means debugging is extremely hard for me because one, they use ColdFusion and I've never used that shit before and two, how the hell do you debug with this style of work?
I just started this last Tuesday, and I already want to call it quits. This is just a pain in the ass and not worth my time. I'll be glad to just go back to driving Lyft/Uber to make money while I look for a full-time, PROPER job.
By the way, can I do that with a contracting job? Just call it quits when you haven't even finished your first task? How does this work?
So last year in high school, everybody had to make a website project with stores and stuff. Everybody was in groups, and you could plan your time on your own, working in class or at home.
So my friend spent hours at home designing his website for his group, and to be honest, it looked amazing.
There was only one problem: all files were located on a server which was accessible by all groups, without the teacher even being able to know who had accessed which file.
There was this one group which just spent their time in class playing stupid browser games - 3 people in a group, one of them looking at kpop and puppies for what seemed like hours at a time.
Well to get to the point, about 1-2 days before deadline, they noticed they hadn't done shit.
SO THOSE FUCKING BASTARDS JUST COPIED ALL THE FUCKING CODE MY FRIEND HAD MADE.
AT PRESENTATION TIME THEIR "PROJECT" CAME UP FIRST AND MY FRIEND TURNED TO ME LIKE "OH NO THOSE FUCKERS DIDN'T JUST COPY MY WHOLE WEBSITE".
ANYWAYS, THEY CLAIMED MY FRIEND HAD COPIED THEM AND GOT AWAY WITH IT!!! (They got an A on the project; my friend got a C because the teacher thought he copy-pasted the design.)
I spoke to diz dude who copied the code - we knew who it was, because the others in the group probably didn't even know what the copy and paste keys were.
He laughed at me and said, "C'mon, it's not a big deal.."
IF YOU ARE TOO INCOMPETENT TO WRITE A SINGLE LINE OF CODE, THEN DON'T FUCKING STEAL SHIT FROM OTHERS WHO PUT IN HOURS OF WORK, AND MESS UP THEIR GRADE. TAKE YOUR FUCKING F AND LEAVE THE CLASS!!!
Finally got myself a Lytro Illum!
I've been wanting to buy one since it came out, but the company that made it closed down in 2015..
Those fuckers just threw everything in the trash and set it on fire - software, firmware, mobile app, etc. No open source, no archives. Your expensive camera is now a paperweight! You're welcome!
So I got myself a new hobby and started reverse-engineering the fuck out of it. Luckily it's based on Android (API 17), I have adb, and it's running a hidden DHCP server too, so it's coming along nicely :D
I’m planning to make a camera control mobile app for it and maybe some faster image processing, wifi sharing etc..
I love being in home office :D
Long rant ahead.. so feel free to refill your cup of coffee and have a seat 🙂
It's completely useless. At least in the school I went to, the teachers were worse than useless. It's a bit of an old story that I've told quite a few times already, but I had a dispute with said teachers at some point, after which I was neither able nor willing to fully do the classes anymore.
So, just to set the stage.. le me, die-hard Linux user, already reasonably initiated in networking and security, to the point that I really only needed half an ear to follow along with the classes, while most of the time I was just working on my own servers to pass the time instead. I noticed that the Moodle website that the school was using for a big chunk of the course material wasn't TLS-secured. So whenever class began and everyone logged in to the Moodle website..? Yeah.. it wouldn't be hard for anyone in that class to steal everyone else's credentials, including the teacher's (as they were using the same network).
So I brought it up a few times in the first year, teacher was like "yeah yeah we'll do it at some point". Shortly before summer break I took the security teacher aside after class and mentioned it another time - please please take the opportunity to do it during summer break.
Coming back in September.. nothing had happened. Maybe I needed to bring in more evidence that this is a serious issue, so I asked the security teacher: can I make a proper PoC using my machines in my home network to steal the credentials of my own Moodle account, and mail a screencast to you as a private disclosure? She said, "yeah sure, that's fine".
Pro tip: make the people involved sign a written contract for this!!! It'll cover your ass when they decide to be dicks.. which spoiler alert, these teachers decided they wanted to be.
So I made the PoC, mailed it to them, yada yada yada... Soon after, next class, I noticed that my VPN server was blocked. Now, I used my personal VPN server at the time mostly to access a file server at home to securely fetch documents I needed in class, without having to carry an external hard drive with me all the time. However, it was also used for gateway redirection (i.e. the main purpose of commercial VPNs, le new IP for "le anonymity"). I mean, for example, if some douche in that class had decided to ARP poison the network and steal credentials, my VPN connection would've prevented that.. it was a decent workaround. But now it was for some reason causing Moodle to throw some type of 403.
I asked the teacher I had routers and switches class from at the time: why is my VPN server blocked? He replied with the statement that "yeah, we blocked it because you can bypass the firewall with that and watch porn in class".
Alright, fair enough. I can indeed bypass the firewall with that. But watch porn.. in class!? I mean, I'm a bit of an exhibitionist too, but in a fucking class!? And why right after that PoC, while I'd been using that VPN connection for over a year?
Not too long after that, I prematurely left that class out of sheer frustration (I remember browsing devRant with the intent to write about it while the teacher was watching 😂), and left while looking that teacher dead in the eyes.. and never have I been that cold to someone while calling them a fucking idiot.
Shortly after, I also received an email from them in which they stated that they wanted compensation for "the disruption of good service". They actually thought that I had hacked into their servers. Security teachers - ostensibly technical people, if I may add. Never seen anyone more incompetent than those 3 motherfuckers who plotted against me to save their own asses for making such a shitty infrastructure. Regarding that mail, I not so friendly replied that they could settle it in court if they wanted to.. but that I already knew who would win that case. Haven't heard from them since.
So yeah. That's why I regard those expensive shitty pieces of paper as such. The only thing they prove is that someone somewhere, with some unknown degree of competence, confirms that you know something. I think there are far too many unknowns in there.
Nowadays I'm putting my bets on a certification from the Linux Professional Institute - a renowned and well-regarded certification body in sysadmin land. Last February at FOSDEM I did half of the LPIC-1 certification exam; next year I'll do the other half. With the amount of reputation the LPI has behind it, I believe that's a far better route to go than some random school somewhere.
This rant is particularly directed at web designers and front-end developers. If that's you, please do take a few minutes to read it - and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst webdesigners seems to be "the client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VMs and LXC containers. The VMs take the most RAM by far, while the LXCs are just glorified chroots (which I nonetheless find very intriguing due to their ability to run unprivileged). The average LXC takes just 60MB of RAM - the amount for an init, the shell and the service(s) running in it. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate, on 4 cores and 8GB RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. Once, I was able to build a Linux which was just a kernel and busybox, backed by the musl C library. The thing consumed only 13MB of RAM, which for a VM meant its whole 13MB of RAM consumption was dedicated entirely to the kernel. I could probably optimize it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that said setup in a chroot would border on mere kBs of RAM consumption. The busybox shell would be its most important RAM consumer, which is negligible.
I don't want to settle for 20-30 VMs. I want to settle for hundreds or even thousands of LXCs on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in webdesign too. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, are looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB. And then you've got modern web browsers which allocate their own process for each tab (up to a point - it seems to be limited to about 20-30 processes - but still).. and all the memory required to render your site has to fit into your designated 80MB. Let's say that 10MB is available for the website itself at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB: the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and webdesign, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be - after all, you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters.
My school just tried to hinder my revision for finals. Just today, they denied me access to SSH into my home computer. Vim & a filesystem are soo much better than pen and paper.
So I went up to the sysadmin about this. His response: "We're not allowing it any more". That's it - no reason. Now let's just hope that the sysadmin was dumb enough to only block port 22, not my IP address, so I can just pick another port to expose at home. To be honest, I was surprised that he even knew what SSH was. I mean, sure, they're hired as sysadmins, so they should probably know that stuff, but the sysadmins in my school are fucking brain dead.
For one, they used to block Google and every other HTTPS site on their WiFi network because of an invalid certificate. Now it's even more difficult to access Google, as you need to know the proxy settings.
They switched over to forcing me to remote desktop to access my files at home, instead of the old, faster, better shared web folder (Windows server 2012 please help).
But the worst of it includes apparently having no password on their SQL server, STORING FUCKING PASSWORDS IN PLAIN TEXT, allowing someone to hijack my session, and just leaving a file unprotected with a shitload of people's names, parents, and home addresses. That's some super sketchy, illegal shit.
So if you sysadmins happen to be reading this on devRant: INSTEAD OF WASTING YOUR FUCKING TIME BLOCKING MORE WEBSITES THAN THERE ARE LIVING HUMANS, HOW ABOUT TRY UPPING YOUR SECURITY. PASSWORDS LIKE "", "", AND "gryph0n" ARE SHIT - MAKE IT BETTER, SO US STUDENTS CAN ACTUALLY BROWSE MORE FREELY. I THINK I WANT TO PASS, NOT HAVE EVERY OTHER THING BLOCKED.
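And it's not like proper password storage is hard. A minimal sketch (TypeScript with Node's built-in crypto; the surrounding app is hypothetical, not their actual stack) of what "not plain text" looks like:
"""
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Store "salt:hash" instead of the password itself.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 32);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Recompute with the stored salt and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 32);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
"""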
Thankfully I'm leaving this school in 3 weeks, after my last exam. Sure, I could stay on with this "highly reputable" school, but I don't want to be fucking lied to about computer studies, and I don't want to have to work around your shitty methods of blocking. As far as I can tell, half of the reputation comes from cheating. The students and sysadmins shouldn't have to have an arms race between circumventing restrictions and blocking those circumventions. Just make your shit work for once.
**On second thought, actually keep it like that. Most of the people I see in the school are c***s anyway - they deserve to have half of everything they try to do censored. I won't be around to care soon.**
I've recently received another invitation to Google's Foobar challenges.
A while ago someone here on devRant (who I believe works at Google, and whose support I deeply appreciate) sent me a couple of links to it too. Unfortunately, back then I didn't take the time to learn the programming languages (Python or Java) that Google requires for these challenges. This time I'm putting everything on Python, as it's the easiest language to learn when coming from Bash.
But at the end of the day.. I am a sysadmin, not a developer. I don't know a single thing about either of these languages. Yet I can't take these challenges as the sysadmin I am. Instead, I have to learn a new language which, chances are, I'll never need again outside of some HR dickhead's interview with lateral thinking questions and whiteboard programming - probably prohibited from using Google search, like every sane programmer and/or sysadmin would for practical challenges that actually occur in real life.
I don't want to do that. Google is a once-in-a-lifetime opportunity, I get that. Many people would probably even steal that foobar link from me if they could. But I don't think that for me it's the right thing to do. Google has made a serious difference by actually challenging developers with practical scenarios, and that's vastly superior to whatever an HR person at any other company could cobble together for an interview. But there's one thing they don't seem to realize. A company like Google consists of more than just developers. Not only that, it probably consists - even within their developer circles - of more than just Python and Java developers. If any company should know about more optimized languages such as C, it would be Google, which has to leverage that performance to be able to deliver their services.
I'll be frank here. Foobar has its own issues that I don't like. But if Google were a nice company, I'd go for it all the way nonetheless - after all, they are arguably the single biggest tech company in the world, and the tech industry itself is one of the biggest ones in the world nowadays. It's safe to say that there's likely no opportunity like working at Google. But I don't think it's the right thing. Even if I did know Python or Java... Even if I did. I don't like Google's business decisions.
I've recently flashed my OnePlus 6T with LineageOS. It's now completely Google-free, except for a stock Yalp account (that I'm too afraid to replace with my actual Google account, because oh dear, third-party app stores, oh dear, that could damage our business and has to be made highly illegal!1!). My contacts on that phone are all gone. They're all stored on a Google server somewhere (except for some, like @linuxxx's, that I consciously stored on device storage and thus lost a while back), waiting for me to log back in and sync them back. I never asked for this. If Google had explicitly told me that they'd sync all my contacts to my Google account, and had offered feasible alternatives, I'd probably have given more priority to building a CalDAV and CardDAV server of my own. Because I do have the skills and desire to maintain that myself. I don't want Google to do this for me.
Move fast and break things. I've even got a special Termux script on my home screen, aptly named Unfuck-Google-Play. Every other day I have to use it. Google Search. When I open it on my Nexus 6P - which was Google's foray into hardware, and in which they failed quite spectacularly; I've almost bent and killed the thing tonight, after cursing at that piece of shit every goddamn day - the Google app opens, I type some text into it.. and then it just jumps back to the beginning of whatever I was typing. A preloader of sorts. The app is a fucking web page parser, or heck, probably even just an API parser. How does that in any way justify such shitty preloaders? How does that in any way justify such crappy performance on anything but the most recent flagships? I could go on about this all day... I used to run modern Linux on a 15-year-old laptop, smoothly. So don't you, Google, tell me that a - probably trillion-dollar - company can't do that shit right. When (commercialized) community projects like DuckDuckGo do things a million times better than you - yet can't compete with you, due to your shit being preloaded on every phone and tablet and impossible to remove without rooting - don't tell me that you, Google, can't do that and a lot more.
I'm sorry. I'd love to work at Google and taste the diversity that this company has to offer. But there's *a lot* wrong with it at the business end too. That is something that - in that state - I don't think I want to contribute to, despite it being pretty much a lottery ticket that I've been fortunate enough to draw twice.
Maybe I should just start my own company.
@JoshBent and @nikola1402 requested a tutorial for installing i3wm on the Windows Subsystem for Linux. Here it is. I have to say though, I'm no expert in Windows nor Linux, and all I'm going to put here is the result of duckduck searches, reddit and documentation. As you will see, it isn't very difficult.
First things first: Install WSL. It's easy and there's a ton of good tutorials on this. I think I used this one: https://msdn.microsoft.com/en-us/...
Once you've got it installed, I guess it would be better to run "sudo apt-get update" to make sure we don't encounter many problems.
Install a Windows X server: X is what handles the graphical interface on Linux, and it works with the client/server paradigm. So what we'll do is provide the Linux client we want to use (in this case i3wm) with an X server on Windows. I guess any X server will do the work, but I highly recommend vcXsrv. You can download it here:
https://sourceforge.net/projects/...
Install i3: just "sudo apt-get install i3".
Configurations to make stuff work:
Open your ~/.bashrc file ("nano ~/.bashrc"; vim is cool too). You'll have to add the following lines to the end of it:
"""
export DISPLAY=:0.0 # Point our Linux GUI clients at the Windows X server
export XDG_RUNTIME_DIR=$HOME/xdg # Temporary runtime directory X will use
export RUNLEVEL=3
sudo mkdir -p /var/run/dbus # Part of the dbus fix (-p avoids an error once the dir exists)
pgrep -x dbus-daemon > /dev/null || sudo dbus-daemon --config-file=/usr/share/dbus-1/system.conf # Part of the dbus fix; only start the bus if it isn't already running
"""
Ok, so after this we'll have a functional X client/server configuration. You'll just have to install your desktop environment of choice. I only installed i3wm, but I've seen Unity and Xfce working on the WSL too. There are still some files that X will miss, though.
*** Here we'll add some files X would otherwise miss:
With "nano ~/.xinitrc" edit the xinitrc to your liking. I only added this:
"""
#!/usr/bin/env bash
exec i3
"""
Then run "sudo chmod +x ~/.xinitrc" to make it an excecutable.
Then, to make a linking file named xsession, run:
"ln -s ~/.xinitrc ~/.xsession"
Now you'll be able to run whatever you put in ~/.xinitrc with:
"dbus-launch --exit-with-session ~/.xsession"
There's a ton of personalisation to be done, but that would be a whole new tutorial. I'll just share a github repo with my dotfiles so you can see them here:
https://github.com/DanielVZ96/...
SHIT I ALMOST FORGOT:
Every time you open any graphical interface, you'll need to have the X server running. With vcXsrv, you can use XLaunch. Choose the options with no other programs running on the X server. I recommend using "one window without title bar".
A few days ago Aruba Cloud terminated my VPS's without notice (shortly after my previous rant about email spam). The reason behind it is rather mundane - while slightly tipsy I wanted to send some traffic back to those Chinese smtp-shop assholes.
Around half an hour later I found that e1.nixmagic.com had lost its network link. I logged into the admin panel at Aruba and connected to the recovery console. In the kernel log there was a mention of the main network link being unresponsive. Apparently Aruba Cloud's automated systems had cut it off.
Shortly afterwards I got an email about the suspension, requesting that I get back to them within 72 hours.. despite the email being from a noreply address. Big brain right there.
Now, one server wasn't yet a reason to consider this a major outage. I did have 3 edge nodes, all of which had equal duties and importance in the network. However, an hour later I found that Aruba had also shut down the other 2 instances, despite those doing nothing wrong. Another hour later I found my account limited, unable to log in to the admin panel. Oh, and did I mention that for anything in that admin panel, you have to log in to the customer area first? And that the account ID used to log in there is more secure than the password? Yeah, their password security is that good. Normally my passwords would be 64 random characters.. not there.
So with all my servers now gone, I immediately considered it an emergency. Aruba's employees had already left the office, and wouldn't get back to me until the next day (on-call be damned I guess?). So I had to immediately pull an all-nighter and deploy new servers elsewhere and move my DNS records to those ASAP. For that I chose Hetzner.
Now at Hetzner I was actually very pleasantly surprised at just how clean the interface was, how it puts the project front and center in everything, and just tells you "this is what this is and what it does", nothing else. Despite being a sysadmin myself, I find the hosting part of it insignificant. The project - the application that is to be hosted - that's what's important. Administration of a datacenter on the other hand is background stuff. Aruba's interface is very cluttered, on Hetzner it's super clean. Night and day difference.
Oh and the specs are better for the same price, the password security is actually decent, and the servers are already up despite me not having paid for anything yet. That's incredible if you ask me.. they actually trust a new customer to pay the bills afterwards. How about you Aruba Cloud? Oh yeah.. too much to ask for right. Even the network isn't something you can trust a long-time customer of yours with.
So everything has been set up again now, and there are some things I would like to stress about hosting providers.
You don't own the hardware. While you do have root access, you don't have hardware access at all. Remember, therefore, that you can't store anything on it that you can't afford to lose, have stolen, or otherwise compromised. This is something I kept in mind when I built my servers. The edge nodes do nothing but reverse proxy the services from my LXC containers at home. Therefore the edge nodes could go down while the worker nodes kept running - all that was necessary was a new set of reverse proxies. On the other hand, if e.g. my Gitea server were hosted directly on those VPSes, losing that would've been devastating. All my configs, projects, mirrors and shit are hosted there.
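For the curious, each edge node is conceptually nothing more than this - a bare-bones sketch with Node's http module (TypeScript; the host and port are placeholders, not my actual setup, and a real deployment would add TLS and timeouts):
"""
import http from "node:http";

const BACKEND = { host: "home.example.net", port: 8080 }; // placeholder

// Pipe every incoming request to the worker at home and stream
// the response straight back; the edge node itself stores nothing.
http
  .createServer((req, res) => {
    const upstream = http.request(
      { ...BACKEND, path: req.url, method: req.method, headers: req.headers },
      (up) => {
        res.writeHead(up.statusCode ?? 502, up.headers);
        up.pipe(res);
      }
    );
    upstream.on("error", () => {
      res.writeHead(502);
      res.end("backend unreachable\n");
    });
    req.pipe(upstream);
  })
  .listen(80);
"""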
Also remember that your hosting provider can terminate you at any time, for any reason. Server redundancy is not enough. If you can afford multiple redundant servers, get them at different hosting providers. I've looked at Aruba Cloud's Terms of Use and this is indeed something they were legally allowed to do. Any reason, any time, no notice. They covered all their bases. Make sure you do too, and hope that you'll never need it.
Oh, right - this is a rant - Aruba Cloud, you are a bunch of assholes. Kindly take a 1Gbps DDoS attack up your ass in exchange for that termination without notice, will you?
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead, design it around all the latest trends and badges you want to put on your resume, because they're frequent keywords on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database and then using a cache, let's put it all into the cache - and by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason - queues are all the rage.
Have you tried: let's use a newfangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table - a simple log table with a few GB - to CSV or, heck, whatever generic format it supports. That's it.
So I run the export-table-to-file command and off it goes, only for timeout errors to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client - no change. Then another timeout setting on the client - no change. Then I try to put it in the client configuration file - no change. Then I set the timeout on the export query - no change. Then I finally bump the timeouts in the server config - no change. Then I find someone has downloaded it from both tucows and apt, but they're using the tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute - it's still timing out after a minute.
In the end I had to make my own, and this involved working out how to parse non-standard binary-formatted data structures. It's the umpteenth time I've had to do this.
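To give an idea of the slog, here's a sketch of that kind of parser (TypeScript on Node; the record layout - 8-byte LE timestamp, 4-byte LE length, UTF-8 payload - is entirely made up, since the real format was undocumented):
"""
// Assumed layout per record: [u64 LE timestamp][u32 LE length][payload]
function readRecords(buf: Buffer): Array<{ ts: bigint; line: string }> {
  const records: Array<{ ts: bigint; line: string }> = [];
  let off = 0;
  while (off + 12 <= buf.length) {
    const ts = buf.readBigUInt64LE(off); // 8-byte timestamp
    const len = buf.readUInt32LE(off + 8); // 4-byte payload length
    if (off + 12 + len > buf.length) break; // truncated trailing record
    records.push({ ts, line: buf.toString("utf8", off + 12, off + 12 + len) });
    off += 12 + len;
  }
  return records;
}
"""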
These aren't no-name solutions, and that really terrifies me. All this is doing is taking some access logs, storing them in one place, then indexing them by timestamp. These things are all meant to be blazing fast, but grep is often faster. How the hell does such a trivial thing turn into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs anymore, because I can't access them anymore.
The terror of this isn't that it's so awful, it's that all the little kiddies doing all this jazz for the first time and using all these shit wipe buzzword driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire tens of thousands to million line enterprise systems with a few hundred lines of code that's faster, more reliable and better in virtually every measurable way time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer - it's the industry standard. It's all over open source software and all over dev shops. Everything is becoming exponentially more bloated and difficult than it needs to be. I'm seeing people pull up a hundred cloud instances for things that'll be happy at home with a few minutes' to a week's optimisation effort. Queries that are O(N*N) and would take only a few minutes to turn into O(log N), but instead people rent out a fuck-off huge SQL cluster that not only costs gobs of money but takes a ton of time to maintain and configure - which isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be I can see how they'd feel that way. There's so bloody much you need to do that you don't need to do these days that you either can't get anything done right or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dishwasher that made washing dishes even harder than it was before, you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud, or anything of the like, I bang my head on the table. One of these days I'm going to lose my fucking tits.
"There's more to it"
This is something that has been bugging me for a long time now, so <rant>.
Yesterday in one of my chats on Telegram I had a question from someone wanting to make their laptop completely bulletproof, privacy-respecting, yada yada.. down to the MAC address being randomized. Now I am a networking guy.. or at least I like to think I am.
So I told him: routers must block any MAC addresses from leaking out, so the MAC address is only relevant inside the network you're in. IPv6 changes this, and there is network discovery involved with fandroids and cryphones where WiFi remains turned on as you leave the house (price of convenience, amirite?) - but I'll get back to that later.
Now for a laptop MAC address randomization isn't exactly relevant yet I'd say.. at least in something other than Windows where your privacy is right out the window anyway. MAC randomization while Nadella does the whole assfuck, sign me up! /s
So let's assume Linux. No MAC randomization, not necessary, privacy respecting nonetheless. MAC addresses do not leak outside of the network in traditional IPv4 networking. So what would you be worried about inside the network? A hacker inside Starbucks? This is the question I asked him, and argued that if you don't trust the network (and with a public hotspot I personally don't) you shouldn't connect to it in the first place. And since I recall MAC randomization being discussed on the ISC's dhcp-users mailing list a few months ago (http://isc-dhcp-users.2343191.n4.nabble.com/...), I linked that in as well. These are the hardcore networking guys, on the forum of one of the granddaddies of the internet. They make BIND which pretty much everyone uses. It's the de facto standard DNS server out there.
The reply to all of this was simply the "don't connect to it if you don't trust it" part - I guess that's all the privacy nut could argue with. And here we get to the topic of this rant: the almighty rebuttal "there's more to it than that!1! HTTPS doesn't require trust anymore!1!"
... An encrypted connection to a website, meaning that you could connect to just about any hostile network. Are you fucking retarded? Ever heard of SSL stripping? Yeah, HSTS solves that, but only a handful of websites use it, and it doesn't scale up properly, since its preload list is pretty much hardcoded into web browsers. And you know what? Yes, "there's more to it"! There's more to networking than just web browsing. There are 65 thousand ports available on both TCP and UDP, and there you go narrowing your understanding of networking down to just 2 of them - 80 and 443. Yes, there's a lot more to it. But not exactly the kind of thing you're arguing about.
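For what it's worth, HSTS itself is a single response header - a minimal sketch (TypeScript on Node; the certificate paths are placeholders). Note the browser only honors it after a first successful TLS connection, which is exactly the window SSL stripping abuses:
"""
import https from "node:https";
import { readFileSync } from "node:fs";

https
  .createServer(
    {
      key: readFileSync("/etc/ssl/example.key"), // placeholder paths
      cert: readFileSync("/etc/ssl/example.crt"),
    },
    (req, res) => {
      // Tell the browser: HTTPS only, for a year, subdomains included.
      res.setHeader(
        "Strict-Transport-Security",
        "max-age=31536000; includeSubDomains"
      );
      res.end("ok\n");
    }
  )
  .listen(443);
"""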
Enjoy your cheap-ass Xiaomeme phone where the "phone" part means phoning home to China, and keep raging about the Google apps on there. Then try to solve problems that aren't actually problems, breaking pretty vital network components, just because it's an identifier.
</rant>
P.S. I do care a lot about privacy. My web and mail servers, for example, do not know where my visitors are coming from. All they see is some reverse proxies that they think are the whole internet. So yes, I care about my own and others' privacy. But you know.. I'm old-fashioned. I like to solve problems with actual solutions.
Any other Screeps players here?
for the people running into a "Screeps is not defined":
Screeps is an MMO RTS where you code your "army" to do stuff in JavaScript (a la NodeJS).
Code how your harvesters should behave, how your soldiers should behave, how your builders should behave etc. etc.
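For the curious, a role script is this kind of thing - a rough sketch (TypeScript, assuming the community @types/screeps typings; the spawn name and the logic are mine, not from the game's tutorial):
"""
// Runs every tick for one creep: harvest until full, then haul to spawn.
export function runHarvester(creep: Creep): void {
  if (creep.store.getFreeCapacity(RESOURCE_ENERGY) > 0) {
    const source = creep.pos.findClosestByPath(FIND_SOURCES_ACTIVE);
    if (source && creep.harvest(source) === ERR_NOT_IN_RANGE) {
      creep.moveTo(source); // not adjacent yet, walk over
    }
  } else {
    const spawn = Game.spawns["Spawn1"]; // assumed spawn name
    if (creep.transfer(spawn, RESOURCE_ENERGY) === ERR_NOT_IN_RANGE) {
      creep.moveTo(spawn);
    }
  }
}
"""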
So far it is quite a fun game, though my (Intel Nehalem-based) laptop has issues handling it (thanks to an awfully slow GPU...), so it's difficult for me to play at the moment (I'm on holiday; my home PC is a lot faster).
It costs about 15 euros on Steam, and if you're into this stuff, it's well worth it.
Just make sure you finish the tutorial first... I didn't and I regretted it when I bought the game (it's a huge pain in the buttocks to get started if you don't understand the API and such).
Currently I'm just playing on my own locally hosted private server to discover how the game works and such, but I will be setting up a public server later down the road to play with others.
Though it would be nice if Screeps allowed for "team-based" gameplay as well, so it'd be slightly harder for early players to bully the newer ones.
!Dev
TL;DR: Debugging a piece of software I barely know was slow, it ended up breaking in the shop it is used in, and reverting the changes does not solve the problem.
I asked my father a few days ago why he was buying a dedicated server for his ERP software instead of using a client computer as his server, which is what he is doing in his shop currently. He said that it was slow on the other computers in the LAN, which is wired. The solutions given by the company that made it did not work. Big bills, which take around 30 minutes to make, would also sometimes disappear. So when he brought the computer home during lockdown, I pulled up the debugging guide from the company, which summed up to: check latency, check RAM, and add these files to the exclusion list of your antivirus. Latency was kinda high at first when pinging another computer on the LAN, but I was testing on WiFi, so that could be pretty inaccurate. The computer met the RAM requirements, so that was not a problem. I checked the data path by opening the software and accidentally typed something, but I did not worry since the changes needed to be manually accepted. I added the files to the Windows Defender exclusion list and shut it down.
Next day: My father calls me up and says the software is working on the server but is broken on the other computers. So I check if the changes were automatically accepted for some reason - and yes, that happened. So I pull up a guide to configure the software in multi-user mode and replace the mistyped setting with the correct one. It still does not work. My father asks me to undo everything using AnyDesk. I remove all the exclusions I added to Windows Defender and disable the Windows firewall. Still does not work. Restart the computer and the software. Still does not work. Check permissions on the data folder. They are correct.
WTF - I reverted all the changes I made, and the software still does not work on the other computers.
I am responsible for the Atlassian Suite at work: I maintain the systems, set them up, and stuff.
One day, our Crowd (the authentication and authorization application) just went crazy. At like lunch time it could not connect to the AD anymore. No reason. Throwing XSRF errors (cross-site request forgery), because http would connect to https. "Won't do it, fuck you," it told me. Out of the blue. No one changed anything. And yeah, seriously: no one did.
It just refused to connect - and since connecting to AD goes through its own API, it was effectively refusing to talk to itself. It runs behind a proxy, hence the http/https mix. Well, this had worked for years. But out of the blue, not anymore.
Yea. Fuck you.
It was reported some hours later, at like 3pm, as people could not log in to the applications using Crowd as their authentication and authorization server.
Tried to debug the system - where nothing had been changed - to make it work. It had picked the best possible time to fail.
First workaround: if you are logged into one of the other Atlassian applications, just refresh the site, so your SSO token gets refreshed and you are signed on again.
Then I searched more and more. And more.
But nothing worked, nothing helped.
So I announced an emergency maintenance: take down the whole Suite and restart Crowd, to apply some changes to its settings - not knowing what would happen then, because all SSO sessions would be released. Sent out the mail like 30 minutes beforehand.
While waiting for the window, I just typed my credentials... and retyped them, and retyped them, just to have something to type out of boredom.
Three minutes before the window...
It just worked again.
Well. Wtf. Seriously?
It just came back.
No intrusion, no changes at all. It just came back, as if nothing had happened.
Kind of the best part of this story... A headhunter messaged me on my way home to offer me a job as an Atlassian Suite SysAdmin for another company, at about double my salary.
At first I was thinking of going there, and then, whenever someone asked me something about Atlassian, just starting to laugh and leaving, still laughing...
But then I very nicely responded that I don't want to cry at work, and wished him the best of luck.
I am doing some bad upgrades now on our Suite. Very painful.
And I looked into the start scripts. Some look like an untalented intern told another one to write them. Seriously, wtf.
Today I followed the guide to update a Confluence and switch its database to Postgres. Didn't work - Postgres error.
Try it again: jQuery won't load. Next try: Tomcat not starting anymore. Did the same thing. Every fucking time.
Yea. Maintenance window to get a nice new export soon. Will only take an hour.
To switch databases in Confluence, you need to set it up completely fresh and then import your export.
An export takes an hour on our system.
Importing maybe the same time. Hope it will work (hint: Nope).
Oh, it can be nice too. Just tell Bitbucket to migrate databases - there is a fucking setting for it. Enter the new database: ready, go, finished.
At least they don't raise costs very much every year or so.
Oh sorry - yes, they do.
Last weekend I was working on a small project for a friend of mine: a dockerized webapp, plus API backend and DB. I had some problems with the installation on the vps and had to try out different images and never really did a complete setup of my usual dotfiles. Got it running on an Ubuntu distro. Everything great.
It was the first release, so I still had to check that every configuration worked OK - the letsencrypt companion container, the reverse proxy, and all that stuff - so I decided to clone the whole project onto the server to make the changes there and then commit them from there.
Docker compose, 10 lines of code, change the hosts and password. Boom everything working. Great... Except for the images in the webapp.
WTF? Check the repo - here they are, all OK. I try different build tactics. Nothing. Even building the app in another Docker container: always the same. Checked browser cache; all the correct ports are open. I even thought that maybe React was still using some weird websocket I didn't know about, but no.
Damn, I spent 5 hours checking why the f*** the server wouldn't send the images out.
Then, finally, the realization...
I hadn't installed the f******* git-lfs plugin, and all I was working with were stupid symbolic links! Webpack never even threw an error for any of the stupid images, and the browser would only show a corrupted image when decoding the base64 string.
Literally the solution took 5 minutes.
F*** changes on production; now I do everything through a fully automated CI.
-- Have you ever self-hosted a Linux/FreeBSD server at home?
-- What was the maintenance like in terms of operation and cost compared to an online service?
My partner and I are planning to self-host our Ubuntu server locally, because even though we currently spend less than $1100 a month on Azure with moderate CPU usage, we plan to scale out and believe the server cost might skyrocket.
We made a budget of $25k for the setup which includes cost of hardware, bandwidth and power.
We also did some research into the most-used hardware for home servers, because we really are newbies when it comes to hardware. What we found were options built around Intel Xeon CPUs; some others say use a NAS, while some results were more advertising than advice.
$25k on the desk,
we care more about speed than space. How can we make the setup totally worth it? You don't have to spare us any change, just some headlights and a way to go.
Your advice is needed. Thank you.