Boss: "I looked at a testing suite. It is $2,500 a license and I'm buying 60 licenses. You should probably get familiar with it."
LeadDev: "Um, we already use NUnit, and it's free."
Boss: "Hmm...I'd better add Pluralsight training in the budget so you can learn about the new program."
LeadDev: "Oh, no...we need new laptops more than we need software."
Boss: "New laptops? Not my budget. When we buy this new software, everyone is going to use it."
LeadDev: "Everyone? How will you monitor its usage?"
Boss: "I'll have networking send me captures of all the running tasks on the dev machines. The test suite better be running. Writing good tests will be our #1 priority."
LeadDev: "Um, we already write tests using NUnit."
Boss: "I don't understand what you are saying. I need something I can visualize. This UI testing suite is exactly what I need."
LeadDev: "Maybe the testing suite would be better suited for you and QA?"
<click..click>
Boss: "Submitted the budget. There will be a test server available for you to configure. This whole project costs over $100,000, so don't screw it up. Any questions?"
LeadDev: "Oh...well...what server ..."
Boss: "Dang...sorry, I'm taking off the rest of the afternoon. We'll talk about this more on Monday. Get started on those Pluralsight videos. I'll expect a full training and deployments by next week. Have a great weekend!"
-
The reason why aliens are avoiding earth:
Me: Guys, the CI/CD pipeline is ready. ci.yaml is our config file, so don't remove it as the deployments will fail.
**10 seconds later**
slack: BUILD FAILED
Me: *Looks at git commits* "Brian removed ci.yaml"
Wtf BRIAN! 🖕🖕🖕🖕
-
First rant.
Managing an app in Canada, came back home to Thailand to visit my parents.
No deployments while you're gone, just bug fixes, boss said.
Landed at 3am, "hey I know we only support desktop but we got new customers only on iPad, make it responsive in a day and deploy." Wtf.
Haven't even seen my parents. In Starbucks.
-
OH MY FUCKING GOD!
What is the point in separating us into backend/frontend developers if everyone has to learn/do everything?
And now this FUCKING DUMBASS that is leaving!!! The company convinced my FUCKING STUPID boss to start using react with nodejs on the new platforms ...
Did anyone think about talking to the fucking devops that maintain the fucking deployments about this????
By the way, this sucker is me.
And now I have one month to: deploy a new app... ALONE!! Learn fucking react (please kill me) and probably merge it into a clusterfuck of unseparated backend/frontend, because fuck it.
Oh, and figure out a way to make deployment automated and easy for me at least.
I'm about to rant in real life...
-
One of our clients deploys their own server app. So this happened after a prod deployment. (4 AM)
*Cellphone rings while sleeping*
Client : we need you on the conference call now. URGENT!
*Gets on conference call*
*Client explain the problem*
*Explaining to the client that the problem is on their side (https connection not working, either network or certificate problem)*
*Client doesn't believe it and pushes me for a fix that I have no control on*
*4 hours later in a heated conversation*
Client : ok problem is on our side. We used our SSL certificate from staging with production and thought it would work.
Me:
-
This is why you DON'T deploy on a Friday!
Now can we all agree to stop disagreeing!
PS: I love CommitStrip sometimes.
-
After our Head of Software was terminated,
I started to take control of our development crew. And in this year I did more than the old head did in the last 6 years.
- Swapped from plain old SVN to Gitlab.
- Built a complete autonomous deployment with Gitlab.
- Introduced code reviews.
- Started to refactor the legacy product with 500.000 lines of code...
- Learned how to use Confluent Apache Kafka and Kubernetes to split the legacy project into many small and maintainable ones (not done yet).
- Over the last 3 weeks I learned how to use the Elastic Stack with Kibana and co. so that we aren't blind anymore. Big dashboards are now shown in the middle of the room :) and I maybe convinced my coworker that we should use Unity3D for our business application because of the support for all devices, the same design on all of them, and offline capabilities. (Don't know if this was my best idea.)
When I look back, I'm proud to did that much in one year alone. And my coworkers are happy too that they have less work with deployments and everything.
But I can't decide what the title for this should be. System or Software Architect, because I literally did both :/
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours the repo-wrangling behemoth had inconsistent data being presented to users, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos after a failing 100G network link was being replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used, it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance - Github's "Orchestrator" software did exactly what it was meant to do; It hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to the east coast DC due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - Monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to reality, that was a bit more "on-fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner, the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move i can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort, By this point hair was turning grey in real time - The fail-over DB cluster had been processing user data for nearly 40 minutes, every second that passed made the inevitable untangling process exponentially more difficult. Not long after this Github made the call to pause webhooks and Github Pages builds in an attempt to prevent further data loss, causing disruption to those of us using Github as a way of kicking off our deployment processes (myself included, I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "And then things were still broken" sections of the post mortem; Clever engineers with their heads screwed on the right way successfully executed what i can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. Github was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy, In this case a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Feeling blessed, left the army after 5 years, 3 deployments, to attend 3 years of computer science school and get a stable lifestyle.
I've got a supporting girlfriend, mentally never felt better, and for the next 3 years, this is my seat and view in class !
Happy coding all, and have a nice day ✌️
-
My team handles infrastructure deployment and automation in the cloud for our company, so we don't exactly develop applications ourselves, but we're responsible for building deployment pipelines, provisioning cloud resources, automating their deployments, etc.
I've ranted about this before, but it fits the weekly rant so I'll do it again.
Someone deployed an autoscaling application into our production AWS account, but they set the maximum instance count to 300. The account limit was less than that. So, of course, their application gets stuck and starts scaling out infinitely. Two hundred new servers spun up in an hour before hitting the limit and then throwing errors all over the place. They send me a ticket and I login to AWS to investigate. Not only have they broken their own application, but they've also made it impossible to deploy anything else into prod. Every other autoscaling group is now unable to scale out at all. We had to submit an emergency limit increase request to AWS, spent thousands of dollars on those stupidly-large instances, and yelled at the dev team responsible. Two weeks later, THEY INCREASED THE MAX COUNT TO 500 AND IT HAPPENED AGAIN!
And the whole thing happened because a database filled up the hard drive, so it would spin up a new server, whose hard drive would be full already and thus spin up a new server, and so on into infinity.
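For what it's worth, the guardrail itself is tiny. A rough sketch (using boto3; the cap value is made up and this is not our actual tooling) that would have flagged and clamped any group with an insane MaxSize:

```python
# Sketch only: audit every Auto Scaling group and clamp any MaxSize
# above a sane cap. Assumes boto3 with working AWS credentials; the
# cap of 20 is a made-up number, not a real policy.
import boto3

SANE_MAX_SIZE = 20

def clamp_oversized_groups(cap=SANE_MAX_SIZE):
    autoscaling = boto3.client("autoscaling")
    paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for group in page["AutoScalingGroups"]:
            name = group["AutoScalingGroupName"]
            if group["MaxSize"] > cap:
                print(f"{name}: MaxSize {group['MaxSize']} exceeds {cap}, clamping")
                autoscaling.update_auto_scaling_group(
                    AutoScalingGroupName=name,
                    MaxSize=cap,
                )

if __name__ == "__main__":
    clamp_oversized_groups()
```

Run that on a schedule and a "300 instances, why not" deployment at least can't take the whole account hostage.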
Thats probably the only WTF moment that resulted in me actually saying "WTF?!" out loud to the person responsible, but I've had others. One dev team had their code logging to a location they couldn't access, so we got daily requests for two weeks to download and email log files to them. Another dev team refused to believe their server was crashing due to their bad code even after we showed them the logs that demonstrated their application had a massive memory leak. Another team arbitrarily decided that they were going to deploy their code at 4 AM on a Saturday and they wanted a member of my team to be available in case something went wrong. We aren't 24/7 support. We aren't even weekend support. Or any support, technically. Another team told us we had one day to do three weeks' worth of work to deploy their application because they had set a hard deadline and then didn't tell us about it until the day before. We gave them a flat "No" for that request.
I could probably keep going, but you get the gist of it.
-
Manager: Why are we missing our deadlines?
Me: Cause we don't own any of the codebase that we work on and have to literally beg the other team for code reviews and deployments, for which it takes long mail chains and meetings. And even before that we (devs) have to explain to them what/why we are doing things, because our Product managers are a bunch of NoGood AHoles. And after all that we finally do some development, in whatever measly time we have left.
Manager: I know all that, tell me why are we missing our deadlines?
-
It's enough. I have to quit my job.
In December last year I started working for a company doing finance. Since it was a serious-sounding field, I thought I'd be better off than with my previous employer, which was kind of a family agency where you can do pretty much anything you want without any real consequences, nor structures. I liked it, but the professionalism was missing.
Turns out, they do operate more professionally, but the internal mood and commitment are awful. They all pretty much bash on each other. And the root cause of this, and why it will stay like this, is simply the Project Lead.
The plan was that I was positioned as glue between Design/UX and Backend to then make the best Frontend for the situation, since that is somewhat new and has the most potential to get better. Besides, this is what the customer sees every day.
After just two months, a retrospective and a hell of a lot of communication with co-workers, I've decided that there is no way other than to leave.
I had a weekly productivity of 60h+ (work and private, sometimes up to 80h). I had no problems with that, I was happy to work, but since working in this company, my weekly productivity dropped to 25~30h. Not only can I not work for a whole proper work-week, this time still includes private projects. So in hindsight, I efficiently work less than 20h for my actual job.
The Product Lead just wants feature on top of feature, our customers don't want to pay for concepts, but also won't give us exact specifications on what they want.
Refactoring is forbidden since we get too many issues/bugs on a daily basis, so we won't get time.
A re-design is forbidden because that would mean that all Screens have to be re-designed.
The product should be responsive, but none of the components feel finished on Desktop - don't talk about mobile, it doesn't exist.
The Designer next to me has to make 200+ Screens for Desktop and Mobile JUST so we can change the primary colors for a potential new customer, nothing more. Remember that we don't have responsiveness? Guess what, that should be purposely included in the Designs (and it looks awful).
I may hate PHP, but I can still work with it. But not here, this is worse than any ecommerce. I have to fix legacy backend code that has no test coverage. But I haven't touched PHP for 4 years, let alone wrote SQL (I hate it). There should be no reason whatsoever to let me do this kind of work, as FRONTEND ARCHITECT.
After a (short) analysis of the Frontend, I conclude that it is required to be rewritten to 90%. There have been no performance checks for the Client/UI, therefore not only do the components behave badly, but the whole system is slow as FUCK! Back in my days I wrote jQuery, but even that shit was faster than the architecture of this React multi-instance app. Nothing is shared, most of the AppState correlates to other instances.
The Backend. Oh boy. Not only do we use a shitty outdated open-source project with tons of XSS possibilities as a base, no, we clone that shit and COPY OUR SOURCES ON TOP. But since these people also don't want to write SQL, they thought using Symfony as a base on top of the base would be a good idea.
Generally speaking (and done right), this is true. But not when there is no time and nothing is properly checked. As I said, I'm working on legacy code. And the more I look into it, the more bugs I find. Nothing too bad, but it's still a bad sign that the webservices are buggy in general. And therefore, the bugginess has to travel into the frontend.
And now the last goodies:
- Composer itself is committed to the repo (the fucking .phar!)
- Deployments never work and every release is done manually
- We commit an "_TRASH" folder
- There is a secret ongoing refactoring in the root of the Project called "_REFACTORING" (right, no branches)
- I cannot test locally, nor have just the Frontend locally connected to the Staging webservices
- I am required to upload the sources I write to an in-house server that gets shared with the other coworkers
- This is the only Linux server here and all of the permissions are fucked up
- We don't have versions, nor builds; we use the current date as the build number, but nothing simple to read, nonono. It has to be a German date, with only numbers, and it always has to end with "00"
- They take security "super serious" but disable the ability to unlock your device with your fingerprint sensor ON PURPOSE
My brain hurts, maybe I'll post more on this shit fucking cuntfuck company. Sorry to be rude, but this triggers me sooo much!
-
**Web Host Rant**
I can't believe how saturated the market is. I also can't believe how many Web hosts do not know a thing about development. You would think you'd want to read up on development practices before going into the business since developers are your customers.
Not to mention that a lot of hosting services are resellers of resellers of resellers. It's to the point where a 15 year old with their mom's credit card can start doing Web hosting. The problem is... they don't know how to answer actual development questions... they won't be in a conference call with you while you do deployments.
It infuriated me to the point where I've started my own hosting company. Completely managed and using the most advanced technologies aimed towards developers. Not only that but an advanced management package that will teach proper deployment procedures and be there to hold your hand when you do deploy.
Oh and did I mention git will be available to even shared hosting? Oh and did I also mention that we are currently setting up our own git server?
-
The 5 whys
So... we can't deploy
Why? > We had to take our deployment tool offline
Why? > Because random people from the internet started deployments
Why? > Because we had no authentication and so it was publicly available
Why? > Boss said auth was no priority (we told him every day)
Why? > ¯\_(ツ)_/¯
-
!rant
Observed a full deployment the other day and discovered it's extremely inefficient. I proposed an idea to fix it, and was shot down by a senior dev on the team. I was ranting about how asinine the process was and how my process could reduce the amount of time and training required to do deployments without any additional cost or overhead. A senior dev from another department overheard me, found my workspace and told me (in a nutshell), "write up a document about why the current process is garbage and how yours is better, and how it works, I'll review it and we'll get it worded and formatted right. When we finish the document, I'll forward it to the CTO of your department with your name on it and my recommendation for review." Fuck yeah. 😈😎
-
Ok, it’s been a loooong fucking day.
28 hours later from starting work yesterday, preparing for a giant deployment over multiple systems, Doing deployments overnight... on a Friday night of all days and finally, finally seeing everything working is just a beautiful thing.
Good night devRant!
-
Them: "Automated builds and deployments are a waste of time, they do strange things you didn't tell them to and they make mistakes"
Everyone else: \/
-
Fuck yeah I love Thursdays! Deployments went well in the office, freelance clients are satisfied as well and I am drunk with my wife... what more can a dev(ranter) ask for?!
-
Let me tell you a story.
Our company has a homegrown monitoring solution. Keeps track of our deployments and alerts us when something is broken. Really nice for the most part, except a little issue where we get up to 25 alerts PER DAY that our PRODUCTION ENVIRONMENT IS DOWN. Including weekends.
With this many false positives, we quickly learn to ignore the alerts and miss real incidents.
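For context, the fix would not even be hard: a dumb debounce that only pages after several consecutive failed checks would cut most of the noise. Rough sketch only (not their monitor's code; the check function, threshold and interval are made up):

```python
# Sketch of a consecutive-failure debounce: only raise the alert after
# N failed checks in a row, and clear it on the first success.
# is_production_up() and page_oncall() are stand-ins, not a real API.
import time

FAILURES_BEFORE_ALERT = 3
CHECK_INTERVAL_SECONDS = 60

def monitor(is_production_up, page_oncall):
    consecutive_failures = 0
    alerted = False
    while True:
        if is_production_up():
            consecutive_failures = 0
            alerted = False
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_ALERT and not alerted:
                page_oncall(f"Production down for {consecutive_failures} checks")
                alerted = True
        time.sleep(CHECK_INTERVAL_SECONDS)
```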
So we approached this team, remember it's our own tool, and told them about the problem. Turns out it is a known issue. And here's the kicker: they aren't planning on fixing it!
It gets better. Rather than fix this glaring issue, their solution is to make ANOTHER ALERT that lets us know the monitoring is misbehaving.
To recap, we can now expect to get up to 25 false positive alerts per day that our production is down, followed immediately by more alerts that the monitor is broken, which means we can ignore the previous alert.
As our PM said when he heard this: fuck that noise. We are escalating the shit out of this!
-
Used to pay $5/mo on a small instance for my personal site. Then I discovered Kubernetes and realized my site didn't scale! No canary deployments! So I upgraded and pay $200/mo now. Took weeks to configure. Millions of people can now read my resume. Damn, it's never looked better
-
Alright boys, let me tell you how someone fucked up so hard they got their deployment schedule delayed "indefinitely".
Being security, we get to oversee most deployments, and we especially get to oversee all deployments that are on IT-managed tech. Knowing fullwell about this fact, some dumb motherfuckers woke up and thought to themselves "You know what would be good fun? To piss on security's asshole and then try and ream them up the backside before they notice the piss!"
Well let me tell you, we noticed. And our boss noticed. And his boss noticed. And the CIO noticed. Thus it came down the chain that this particular lie-spurting, baseless-accusation-leveling group of developers would have their deployments put on hold. How long? "A while."
I have never quite heard my higher-ups this mad before, but damn if I don't share in their enthusiasm to stick it to entitled cunts.
-
Just a moral story.
It's been a few years now that I've been using Linux for deployments.
And currently I'm working on a project that has Windows on the server, so I'm working on the necessary installations and configurations, and I caught myself actually reading everything in the installation and configuration dialogs. And I'm having this urge just to click next and get it over with.
But thank you Penguin almighty for thee hath introduced patience and knowledge into mine soul. Or else...
... I would've fucked the whole system by a click lol
-
1. Was asked to look into a high severity production incident at the end of the day.
2. needed fix in ui.
3. fixed and deployed in 1 hour.
4. issue remained. debugging began.
5. gave up at 1 AM and went to sleep.
6. woke up at 6 and after debugging for 2 hours, identified to be a back end issue.
7. worked with back end team for the fix, and 6 hours and 3 deployments later, it worked.
8. third party vendor reported they are still not receiving one parameter from us.
9. back end team realised they forgot to ask ui to send another parameter.
10. added the parameter in ui, redeployed ui.
11. build and deployment tool broke down. got it fixed. delay of 1.5 hours.
12. finally things are in place. total time 26 hours.
13. found half a bottle of vodka, leftover from last weekend. *Priceless*
-
My company has two offices in separate cities but they treat the devs at each location very, very differently.
In one office the devs get full power to experiment with whatever tech they want; they just stomp their feet and management gives them whatever they ask for: freedom of choice regarding anything they are working on, being allowed to do greenfield work or experimental stuff.
But in my office we are forced to do ONLY bug fixing and refactoring shitty code from over a decade ago; our tech is ancient and we are not allowed to do shit.
Anything we ask for is denied,
and improvements to our process are shut down with the reasoning that whatever we've got works, so why meddle??
For us, management is solely focused on making sure we respond to support calls, deployments, configurations and little bug fixes. Basically they only care that we manage to finish for our next delivery.
No new work whatsoever!
If there is any hint of something new to be implemented, the golden boys from the other office just stomp their feet till they get it, or just go off and start working on it and then seek permission afterwards. With their much larger team they obviously get further than we do by the time management hears about it, so they end up taking over the work since they already have more done.
My manager decided to push us to attend a company devCon to share ideas with our devs from our other location. This rapidly turned into a sour experience
Basically we do all shitty boring work which puts money on the table which goes straight to those idiots to play with...
They have the guts to laugh when we mentioned that we never get anything interesting to work on
Never seen so many of our devs looking up job sites on the bus back...
This is gonna blow up in management's face...
-
Soooo it's Monday........ 🤯
@C0D4 started the day fixing current projects defects (4 tickets smashed before coffee 💪)
Then after coffee, run a test coverage report and see a significant decline over the past few months, so spends a couple hours adding more tests to get some areas filled in - meh, nothing like 50+ lines per test... to test a if() statement but whatever - complex scenarios will be complex to get too, but no my tests break and I'm missing data I didn't know about🤦♂️
So let's comment all that out, and go to lunch ... mmmm lunch.
Get back, start working on those again, and then get handed a new issue, so comment that all back out again, ( ok I know what you're thinking, but I'm working in an environment that does not use git for deployments - don't ask, real pain in the ass I haven't had time to invest into yet - but as code versioning only) anywho, starts to workout this new issue but don't figure it out, enter a 30 minute meeting.................. yea that was 2 hours later but was a very practical whiteboard session only to work out I have something like 16-20 weeks of work over 4-5 projects to get out in like 6 weeks... hahahahahahaha fml..... oh and that's excluding another project which had a 6 weeks of work in the pipeline to get to somehow.... I'm not seeing this one happening, and probably conflicting projects needed on top of that down the track... but we'll leave those out for now!
Whoot is fucking home time!!!
🤷♂️I'm starting to think I'm like a team of 5-10 devs right now, maybe I should start asking for 5-10x more 😏
#letsBringOnTuesday!!!!
-
You lousy fucking test class of an ass wipe,
TLDR; it fails and it passes all at the same time.
So during a deployment, one of the pre-deployment test classes fails. Not something anyone has worked on, so I figured I'd run it manually to see what's going on, but no, the shit of a thing passed the second time around.
Now, because we can't deploy without 100% of the test classes passing, I have to organise another deployment, which fails again. Fuck this,
Unprotects master
Git checkout master
Git merge dev
Git push master -f
Protects master
Skrew this!
Well would you look at that, it works now 😰
-
It's Sunday. A day for prayer. Today I am praying that others don't mess up the code base when I get to work on Monday. Amen.
-
I was wondering, how do you guys deploy code to servers? In my company we use Organist, an open source tool that runs deployments based on Ruby scripts. How is it done with you guys?
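Haven't used Organist, but for comparison, a lot of shops just script it over SSH. A minimal pull-based deploy looks roughly like this (a sketch assuming the Fabric library; host names, paths and the service name are placeholders):

```python
# Sketch of a pull-based deploy over SSH with Fabric; everything here
# (hosts, repo path, service name) is a placeholder, not a real setup.
from fabric import Connection

HOSTS = ["web1.example.com", "web2.example.com"]

def deploy(ref="main"):
    for host in HOSTS:
        conn = Connection(host)
        # Update the checkout and restart the app service on each box.
        conn.run(f"cd /srv/app && git fetch && git checkout {ref} && git pull")
        conn.run("sudo systemctl restart app.service")

if __name__ == "__main__":
    deploy()
```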
-
A coworker changed the application deployment process. He told all three of the other developers who need deployments, but not me. We sit six feet away from each other and I've run/managed deployments for a year longer than him.
His new process doesn't work and he's blaming the dev ops team for not following it. The new process clearly doesn't fit their workflow and never could have.
The lack of deployments has caused production issues and he still won't ping dev ops to remind them about the deployment because "it's not in the new workflow".
He's been painting dev ops as incompetent at the last three retrospectives without having ever personally reminded the deployment guy.
Ugggh.
-
Boss: we need to standardize the CMS we use.
Me: well 90% of what we build are custom Wordpress deployments...
Boss: yeah but Wordpress is best suited for all our clients.
Me: well yeah, I know...we could use Django or Rails and give the clients more customized solutions...
Boss: yeah but not all of our developers know those frameworks, and they require maintenance...
Me: -_- we could really use Jekyll for most sites we build
Boss: yeah, but what about our clients that want a blog?
Me: ...we can build a blog with anything...
Boss: ...we just need to standardize what we use.
-
Leading a team of 10 people, 5 environments (3 non prod 2 prod) to support, 25 formal deployments per week, and all I have is one fucking repository in fucking svn.
-
My current job at the release & deploy mgmt team:
Basically this is the "theoretically sound flow":
* devs shit code and build stuff => if all tests in pipeline are green, it's eligible for promotion
* devs fill in desired version number build inside an excel sheet, we take this version number and deploy said version into a higher environment
* we deploy all the thingies and we just do ONE spec run for the entire environment
* we validate, and then go home
In the real world however:
* devs build shit and the tests are failed/unstable ===> disable test in the pipeline
* devs write down a version number but since they disabled the tests they realize it's not working because they forgot thing XYZ, and want us to deploy another version of said application after the code-freeze deadline
* deployments fail because said developers don't know jack shit about flyway database migrations, they always fail, we have to point out to them where they went wrong, we even gave them the tooling to check such schemas, but they never use it
* a deploy fails, we send feedback, they request a NEW version, with the same bug still in it, because working with git is waaaaay too progressive
* We enable all the tests again (we basically regenerate all the pipeline jobs) And it turns out some devs have manually modified the pipelines, causing the build/deploy process to fail. We urged Mgmt to seal off the jenkins for devs since we're dealing with this fucking nonsense the whole time, but noooooo , devs are "smart persons that are supposed to have sense of responsibility"...yeah FUCK THAT
* Even after new versions received after deadline, the application still ain't green... What happens is basically doing it all over again the next day...
This is basically what happens when you:
* have no standards and rules in regards to conventions
* have very poor solution-ed work flow processes that have "grown organically"
* have management that is way too permissive in allowing breaking stuff and pleasing other "team leader" asscracks...
* have very bad user/rights mgmt on the LDAP side (which unfortunately we cannot do anything about, because that is in the ownership of some dinosaur fossil that strangely enough is alive and walks around in here... If you ask/propose solutions that person goes into sulking mode. He (correctly) fears his only reason for existence (LDAP) will be gone if someone dares to touch it...)
This is a government agency mind you!
More and more thinking daily that i really don't want to go to office and make a ton of money.
So the only motivation right now is... the money, which I find abhorrent.
And also more stuff, but now that I am writing this down it makes me really, really sad. I don't want to feel sad, so I stop being sad and feel awesome instead.
-
Working 18 hours per day was tough; at the beginning coffee helped a lot. However I started losing friends, and the little free time I had, I spent drinking, lonely in anonymous pubs, trying to socialise.
Workload increased and stress started to affect me, so I began smoking weed to relax.
To recover and work with renewed energy coffee was not enough anymore, I started with pills, amphetamines, coke, crack. After the biggest deployments I would disappear for days in an opium den.
Work, it's a gateway drug.
-
Management directed a 2-month project including 5 developers, 3 DBAs, plus QA to replace a SOAP service that retrieved data from a single table. End result: the project lasted 9+ months, 5 spectacular failed 3:00 AM deployments, and a WCF service that retrieved data from a single table. Justification? Management wanted to eliminate SOAP, because SOAP uses XML and XML is slow. Thank goodness no one opened up Fiddler to see how WCF communicates.
-
So it's Monday and it's a holiday in my country. But I have a deployment to do. Anyway I have this buddy accompanying me, which is nice.
-
Oh boy, converting the whole codebase from vb.net to c#
Pain point 1: CType all over the place (Convert.To*)
Pain point 2: almost everything is static!
Pain point 3: "I learned about DI just 3 months ago..."
Pain point 4: deployments only ever happened by hand!
But I'm happy to be there because the guy who's running the thing is a very nice one and he's absolutely grateful for every learning lesson I give him.
-
During one of our 'pop-up' meetings last week.
Ralph: "The test code the developers are checking in is a mess. They don't know what they are doing."
ex.
var foo = SomeLibrary.GetFoo();
Assert.IsNotNull(foo);
Fred: "Ha ha..someone should talk to HR about our hiring practices. These people are literally driving the company backwards."
Me: "I think unit testing is complete waste of time."
- You could almost see the truck hit the wall and splatter watermelon everywhere.. took Ralph and Fred a couple of seconds to respond
Fred: "Uh..unit testing is industry best practice. There is scientific evidence that proves testing reduces bugs and increases code quality"
Ralph: "Over 90% of our deployments are rolled back because of bugs. Unit testing will eliminate that."
Me: "Sorry, I disagree."
- Stepping on kittens wouldn't have gotten a worse look from Fred and Ralph
Fred: 'Pretty sure if you ask any professional developer, they'll tell you unit testing and code coverage reduces bugs.'
Me: "I'm not asking anyone else, I'm asking you. Find one failed deployment, just one, over the past 6 months that unit testing or code coverage would have prevented."
- good 3 seconds of awkward silence.
Ralph: "Well, those rollbacks are all mostly due to server mis-configurations. That's not a fair comparison."
Me: "I'm using your words. Unit tests reduces bugs and lack of good tests is the direct reason why we have so many failed deployments"
Boss: "Yea, Ralph...you and Fred kinda said that."
Fred: "No...we need to write good tests. Not this mess."
Me: "Like I said, show me one test you've written that would have prevented a rollback. Just one."
Ralph: "So, what? We do nothing?"
Me: "No, we have to stop worshiping this made up 80% code coverage idol. If not, developers are going to keep writing useless test code just to meet some percent. If we wrote device drivers or frameworks for other developers maybe, but we write CRUD apps. We execute a stored procedure or call a service. This 80% rule doesn't fit for code we write."
Fred: "If the developers took their head out of their ass.."
Me: "Hey!..uh..no, they are doing exactly what they are being told. Meet the 80% requirement, even if doesn't make sense."
Ralph: "Nobody told them to write *that* code."
Boss: "My gosh, what have you and Fred been complaining about for the past hour?"
- Ralph looks at his monitor and brilliantly changes the subject
Ralph: "Oh my f-king god...Trump said something stupid again ..."
At that point I put my headphones on and went back to what I was doing. I'm pretty sure Fred and Ralph spent the rest of the day messaging back-n-forth, making fun of me or some random code I wrote 3 years ago (lots of typing and giggling). How can highly educated grown men (one has a masters in CS) get so petty and insecure?
-
Boss just told me that he thought him making changes directly on production was "OK", because it always seemed to automatically update the other environments when his changes appeared "magically" a week later.
Yeah, because the frequent deployments I do, during which I come across your unapproved changes that I'm constantly flagging up, have nothing to do with it!!
*facepalm*
-
TLDR: I need advice on reasonable salary expectations for sysadmin work in the rural United States.
I need some community advice. I’m the sysadmin at a small (35 employee) credit card processing company. I began as an intern and have now become their full time sysadmin/networking specialist. Since I was hired in January I have:
-migrated their 2007 Exchange server to Office 365
-Upgraded their ailing Windows server 2003 based architecture to 2012R2
-Licensed their unlicensed VMware ESXi servers (which they had already paid for license keys for!!!) and then upgraded them to 6.5 while preventing downtime on hosted VMs using tricky transfers and deployments (without vMotion!)
-Deployed a vCenter server to manage said ESXi servers easier
-Fixed a three month gap in their backups by implementing Veeam, and verifying its functionality
-Migrated a ‘no downtime’ fileserver to a new hypervisor host, implemented a ‘hot standby’ server as a backup kept up to date by the minute with DFS replication.
-Replaced failing hard drives in a RAID array underlying their one ‘business critical’ fileserver, which had no backups for 3 months at that time
-Reorganized Active Directory and Group Policy deployment from a nightmare spiderweb of OUs and duplicate policies
-Documented the entire old network and now the new one as I’ve been upgrading this
-Audited the developers AWS instances and removed redundant machines, optimized load balancing on front end Nginx servers, joined developer run Fedora workstations to the AD domain and implemented centralized syslog monitoring on them.
-Performed network scans and rewrote firewall exceptions to tighten security
There’s more, but you get the idea. I’ve now been tasked with taking point on an upcoming PCI audit which will be my first.
I’m being paid $16/hr US, with marginal health benefits. This is roughly $32,000 a year, before taxes.
I have two years previous work experience managing a third party Apple repair facility (SimplyMac) and every Apple certification for warranty repair and software troubleshooting. I have a two year degree in general sciences, with about 4 years of college credit (two years of a physics education and two years of computer science after I switched focus). I'm actively pursuing a CCNA and MCSA Server 2016 with exams paid for and scheduled.
I’m going into a salary negotiation in two months. What is a reasonable salary to request, from your perspective, for someone in my position?
Thanks in advance!
-
Crazy deadlines> Director: "You need to design a new architecture that has failover, multi-AZ, automated deployments, CI/CD pipeline, automated builds/tests as well, for our new SaaS product. You have 3 days to complete it"
Me: "Ok cool. Do we have the new product developed? Can I have the spec docs of the new software, libs and packages required for the env?"
Product Lead: "No we dont have anything yet. The POC is on my local PC, but I dont know what packages are needed to run it"
Me: "So I cant design anything unless I have the minimum requirements to run the new software"
Director: "Just get it up and running in a live environment and we'll take it from there"
Me: *sigh*..this is going to be a big mistake
-
Take over responsibility you fucking morons!
We are the engineering team and we cannot know how you operate our product in every detail. And for god's sake don't blame us when shit happens in production when you don't test upcoming deployments by yourself!
-
A loooong time ago...
I've started my first serious job as a developer. I was young yet enthusiastic, as well as kind of a greenhorn. It was my first time working in a business, working with a team full of experienced ultra-seniors who were waiting to teach me everything about software engineering.
Kind of.
Besides one senior, who was the team lead as well, there were two other devs. One of them was very experienced and a pretty nice guy; I could ask him anytime and he would sit down with me and give me advice. I've learned a lot from him.
Fast forward three months (yes, three months).
I was not that full kind of greenhorn anymore and people started to give me serious tasks. I had some experience in doing deployments and stuff from my other job as a sysadmin before so I was soon known as the "deployment guy", setting up deployments for our projects the right way and monitoring as well as executing them. But as it should be in every good team we had to share our knowledge so one can be on vacation or something and another colleague was able to do the task as well.
So now we come to the other teammate. The one I was not talking about till now. And that for a reason.
He was very nice too and had a couple of years as a dev on his CV, but...yeah...like...
When I switched some production systems to Linux he had to learn something about Linux. Every time he encountered an error message he turned around and asked me how to fix it. Even. For. The. Simplest. Error. He. Could. Google. Up.
I mean okay, when one's new to a system it's not that easy, but when you have an error message which prints out THE SOLUTION FOR THE ERROR and he asks me how to fix it...excuse me?
This happened over 30 times.
A. Week.
Later on I had to introduce him to the deployment workflow for a project, so he could eventually deploy the staging environment and the production environment by himself.
I introduced him. Not for 10 minutes. I explained the whole workflow and the main techniques and tools to him for like two hours. Every now and then I stopped and asked him if he had any questions. He hadn't! Wonderful!
Haha. Oh no.
So he had to do his first production deployment. I sat by his side to monitor everything. He did well. One or two questions but he did well.
The same when he did his second prod deploy. Everythings fine.
And then. It. Frikkin. Begins.
I was working on the project, did some changes to the code. Okay, deploy it to dev, time for testing.
Hm.
Error checking out git. Okay, awkward. Got to investigate...
On the dev server were some files changed. Strange. The repo was all up to date. But these changes seemed newer because they were fixing at least one bug I was working on.
This doubles the strangeness.
I want over to my colleague's desk.
I asked him about any recent changes to the codebase.
"Yeah, there was a bug you were working on right? But the ticket was open like two days so I thought I'll fix it"
What the Heck dude, this bug was not critical at all and I had other tasks which were more important. Okay, but what about the changed files?
"Oh yeah, I could not remember the exact deployment steps (hint from the author: I wrote them down into our internal Wiki, he wrote them done by hisself when introducing him and after all it's two frikkin commands), so I uploaded them via FTP"
"Uhm... that's not how we do it buddy. We have to follow the procedure to avoid..."
"The boss said it was fine so I uploaded the changes directly to the production servers. It's so much easier via FTP and not this deployment crap, sorry to say that"
You. Did. What?
I could not resist and asked the boss about this. But this had not Effect at all, was the long-time best-buddy-schmuddy-friend of the boss colleague's father.
So in the end I sat there reverting, committing and deploying.
Yep
It's soooo much harder this deployment crap.
Years later, a long time after I quit the job and moved to another company, I get to know that the colleague now is responsible for technical project management.
Hm.
Project Management.
Karma's a bitch, right? -
Live deployment next week Monday but Product Owner really needs this new fix in.
Can you code, push to test, test it, push to acceptance test, UAT it before Friday...
It's Wednesday today 😐
Project manager says ok 😱
-
this really happened:
Interface Team Lead: "hey I want any time deployments and better QA"
Me: "ok sure. I have CI/CD, but you need to work in feature branches / tags, and make sure your code passes automated builds and unit tests"
Team Lead: "I don't have time to test, it makes me unproductive! And creating a branch is an extra step which is going to set me back. I'm telling the boss you are impacting performance!"
Me: "you want better deployments and QA, but you can't even create a branch or test your work?"
Team Lead: "We have deadlines!"
-
Tl;Dr I'm one of the few in my area who sees that sftping as the prod service account shouldn't be a deployment process. And the ONLY ONE THAT CARES THAT THIS IS GONNA BREAK A BUNCH OF SHIT AT SOME POINT.
The non tl;dr:
For a whole year I've been trying to convince my area that sshing as the production service account is not the proper way to deploy and/or develop batch code. My area (my team and 3 sister teams) has no concept of using version control for our various Unix components (shell scripts and configuration files) that are CRITICAL for our team's ongoing success. Most develop in a "prodqa like" system and the remainder straight in production. Those that develop straight in prodqa have no "test" deployment, so they ssh files straight to actual production. Our area has no concept of continuous integration and automated build checking. There are no "test cases", no "systems testing" or "regression testing". No gate checks for changing production are enforced. There is a standing "approved" deployment process by the enterprise (my company is Whyyyyyyyyyy bigger than my area) but no one uses it. In fact idk anyone in my area who knows HOW to deploy using the official deployment method. Yes, there is privileged access management on the service account. Yes, the managers get notified every time someone accesses the privileged production account. The managers don't see fixing this as a priority. In fact I think I've only talked to ONE other person in my area who truly understands how terrible it is that we have full production change access on a daily basis. I've brought this up so many times and so many times nothing has been done, and I've tried to get it changed yet nothing has happened, and I'm just SO FUCKING SICK that no one sees how big of a deal this is. I mean, overall I love the area I work in, I love the people, yet this one glaring deficiency causes me so much fucking stress cause it's so fucking simple to fix.
We even have a newer enterprise deployment method leveraging a product called "Urban Code Deploy" (UCD) to deploy a git repository. JUST FUCKING GIT WITH THE PROGRAM!!!!..... IT WAS RELEASED FUCKING 12 YEARS AGO......
Please..... Please..... I just want my otherwise normally awesome team to understand the importance and benefits of version control and approved/revertable deployments.
-
JBoss deployments, because nothing ever works if you don't completely restructure your project for every single version, and my company can't decide on which version to use.
This is pretty much the main reason I dislike backend development.
-
Hoo laddie.
I write web software that gets sold to enterprise customers. A major part of the work flow is running reports that get exported as PDFs that users have to keep track of for compliance purposes. Just under a week ago, a select few reports quit printing. Once the issue worked its way through the red tape and eventually got to the point where a developer (me) could/had to look at it and pull server logs, I noticed that the report was trying to access a column that I had just created a week or so ago.
We have a six week release cycle. Six is a bigger number than one.
Turns out the production reports server was pointed at the preview environment which has a release cycle of whatever the fuck we want. To compound the problem, our operations team had a national holiday, so running reports was broken a full day before anything could be done. Then the next day, when the ops person got into the office, it took a few hours to convince them that yes this is a problem and yes this needs to be fixed.
But of course midday deployments/restarts of anything ever is out of the question. Chalk up another day of downtime. And of course we *just* sold to a new major customer.
Happy onboarding week guys.
-
Today at work I started doing a 1-month-old task with a production problem.
First of all why now ?
Because I already fixed all the other urgent production problems I had during the last month, and did about 4 deployments for those super urgent errors.
Now I can start with the non-trivial ones that have been pending for quite some time.
I am the only backend developer in this project ...
This is a DTP application and the problem is that we are not verifying whether all fonts are embedded in customer-provided PDF files.
We have been generating high quality images of those PDFs for printing just fine from the beginning, but now we need a valid PDF with all fonts embedded in it. (Don't ask me why I am only a hammer in this process.)
After running a simple test using a Python script against the database, it turned out we have over 500 broken PDF files without embedded fonts.
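The check itself doesn't have to be fancy. Roughly what such a script can do (a sketch using the pypdf library, not the actual script I ran; it ignores edge cases like composite Type0 fonts):

```python
# Sketch: list fonts in a PDF that have no embedded font file.
# Assumes the pypdf library; an embedded font normally carries
# /FontFile, /FontFile2 or /FontFile3 in its /FontDescriptor.
import sys
from pypdf import PdfReader

EMBEDDED_KEYS = ("/FontFile", "/FontFile2", "/FontFile3")

def unembedded_fonts(path):
    missing = set()
    for page in PdfReader(path).pages:
        resources = page.get("/Resources")
        if resources is None:
            continue
        fonts = resources.get_object().get("/Font")
        if fonts is None:
            continue
        for name, ref in fonts.get_object().items():
            font = ref.get_object()
            descriptor = font.get("/FontDescriptor")
            # Unembedded and base-14 fonts have no font file stream.
            if descriptor is None or not any(
                key in descriptor.get_object() for key in EMBEDDED_KEYS
            ):
                missing.add(str(font.get("/BaseFont", name)))
    return missing

if __name__ == "__main__":
    for pdf in sys.argv[1:]:
        bad = unembedded_fonts(pdf)
        if bad:
            print(f"{pdf}: not embedded -> {', '.join(sorted(bad))}")
```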
So I guess I have just one sentence to say about it.
Fuck you, PDF format, for not being strict and allowing this shit.
-
Most fun I had was reverse engineering LG Tone & Talk, where my headset would vibrate and talk to me on deployments or when something happened
-
A top food chain client wants a feature Fx
and has a deadline on Friday.
We are still working on it and already estimated hours and set deployment on Monday.
(No deployments on Friday)
And the business/sales guy comes up with a new deadline to submit it on Friday morning.
And he was only discussing it with one of my team members already working on it. And I knew more hours were required for testing and we needed a pre-deployment phase (staging or dev).
I was overhearing the conversation between them and I got pissed off and jumped in and said Not Possible at all.
He tries to argue about us giving something to him. I said we can give it to you but will not guarantee anything. Now the project manager jumps in. The PM and my team already know that we will be delivering on Monday.
He argues that if the Fx is not ready then he will call the client's developer to the office to test it directly on my team member's laptop.
I said, No way. We are not ready and haven't finished yet. Major work will be on Thursday and on Friday we will be testing till the end of the day.
PM explains blah blah stuff to him.
He calms down and says no worries, we will check the status on Friday afternoon and roll out something to the client.
PM, developer and I looked at each other and I said, sure, we will deploy but will not guarantee anything. He goes back to his desk.
Seriously.
WE ALREADY ESTIMATED F* MAN HOURS AND WILL BE READY ON MONDAY. MONDAY MEANS MONDAY. DON'T F* BUILD MORE PRESSURE ON US. F* SALES
-
Yesterday, for a whole 12 hours, we were working on the deployment of a feature X that had its deadline yesterday itself.
Everything damn perfectly running on Test env but not on Prod.
We made Prod into a Dev/Test/Fucking garbage env. Haha.
I was laughing to myself at same time crying hard in my deep heart.
Business guys chasing PM
PM chasing us
And from morning till night we were in the same room. Had lunch and dinner there, only went out for the toilet and to refill water bottles.
And at the same time we found that feature Y, which is related to our feature X, is not working. Fucking hell, we wasted hours on it.
One of my devs got so fucked up emotionally that he messed up the code (not his fault) and he didn't have his lunch and dinner. Had to console him later that it's not his fault. Poor guy, not sure whether he slept or not; will find out in a few hours.
Anyways reported a bug.
But that bug assigned to us for fixing.
Are you fucking kidding me.
Anyways no choice. Had to do it.
Hope today everything goes well or horribly bad. FYI no deployments on Friday, damn, we are in a stalemate till Monday.
Fuck that bug
Or
Maybe fuck our stupidity for making mistakes.
-
So I'm making a dashboard app for the TV in our IT dept. just as an accessory and convenience to show important dates and the status of our environments. They want a scrolling banner for alerts on failed deployments, or a similarly functioning notification system. The plan is to have the TV play audio when an alert is introduced. I personally hear the stereotypical piano falling from a skyscraper.. I thought it would be pretty fun to start a thread of what people "hear" when a build or deployment fails
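The alert part is basically just a poll loop against the build server. Something like this toy sketch (assuming the Jenkins JSON API; the URL, job name and the falling-piano hook are made up) is all the banner/audio trigger needs:

```python
# Toy sketch: poll a Jenkins job's last build and fire the TV alert
# when it flips to FAILURE. URL and job name are placeholders.
import time
import requests

JENKINS = "https://jenkins.example.com"
JOB = "nightly-deploy"

def last_build_result():
    resp = requests.get(f"{JENKINS}/job/{JOB}/lastBuild/api/json", timeout=10)
    resp.raise_for_status()
    return resp.json().get("result")  # "SUCCESS", "FAILURE", or None while building

def watch(play_falling_piano, interval=30):
    previous = None
    while True:
        result = last_build_result()
        if result == "FAILURE" and previous != "FAILURE":
            play_falling_piano()  # stand-in for the scrolling banner + audio
        previous = result
        time.sleep(interval)
```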
-
Does anyone of you fellow devs ever pushes to production during working hours?
I have the luxury to do so and at first was uncomfortable, as this of course takes the system offline for a few seconds, and next web requests from a user are painful due to cold start of web server (and we have 40-100 active users at any given time)...
...but you know what? They all complain SharePoint is slow (it is) anyway, so. I do it.
Sometimes it fucking fails, so I do have all of the historic deployments handy, ready to revert. :)
What are your favorite emoticons for working on automated cloud deployments or new open source integrations?
-
!Rant
Tldr: great spike to solve deployment problem may be a wasted effort.
Deployments of an ancient electron application need to be done in CodeDeploy to deploy the latest build. Customer hour restrictions cause this to be done only after midnight, and manually checked.
The whole team knows this is the wrong method of deployment and that there are many other operational problems with the project.
A few other senior team members get together and decide to spike out a way to use electron auto-deployment to accomplish this without using code-deploy at all.
After a shallow dive into this subject, we all get pulled aside to handle a change in another part of the software ecosystem. It happens. We leave the spike behind.
A junior-intermediate developer on the team picks the project up and gets a good spike going in a day and a half! We are all high fives and beers. This is Friday.
By Monday there is a pull request in for code review and it looks solid. Seems like it will make deployments a lot better.
Preparing the last deployment (hopefully) with CodeDeploy ever...
Marketing team members inform us that they are running an ad system on the customer devices and to do it they are using Linux.
The current application being deployed is using Windows 10 (yeah, another problem).
They say they have made plans to move our application over to Linux. This means we may not be able to launch the junior dev's great spike and the old deployment method may stay for the time being.
Meetings soon to find out how all of this will hash out.
End of rant. I hope I'm doing this right
-
I work at a small company (4 devs, CTO, a senior, me: mid level, and a new junior dev). Junior and I handle the client projects and the Senior and CTO handle the overall platform and server deployments and such. Our senior dev just gave his 2 weeks' notice. I was told they are not replacing him and now ALL of his tasks have been pushed onto me on top of my already full plate. My issue is, although I am excited to learn about the upper management and deployment stuff, they (CTO and CEO) just dumped all these tasks onto me without even asking if I wanted the added responsibility and also told me there is no monetary bonus for taking it all on. Am I right in being a little mad that I was not even asked if I wanted it and it was just assumed I would handle it all without any bonus or monetary promotion?
-
So Docker is pretty amazing, but I'm finding myself immensely frustrated at all the stupid shit devs do with their Dockerfiles and stacks. Like the surprise of finding out Jenkins clients aren't set up for SSH, or stacks opening up 5 public ports when all they really need are a bunch of private ports. Or how Jenkins deployments expect crazy tags so I have to add some really stupid tags to my own nodes.
How is it so hard for devs to comprehend Docker? It's so easy that I'm in utter bliss when I stop trying to use 3rd party stacks.
Thought I'd post this for my friend in QA, because she's been having a horrible week at work.
So we were supposed to have production deployments last night (Tuesday) and tonight (Wednesday). We were told these dates a week ago, which is fine. The QA support cleared their after-office schedules on those dates to accommodate, since the deployments would be happening at 10pm.
Last Monday they moved the deployments to Thursday and Friday, because our "project managers" want to cram as many fixes and resolutions as possible. So of course, we devs are being rushed to speed these additional tasks through to being included (bypassing a LOT of quality checks).
Of course, the QA team finds defects (we devs were expecting that, so no big) and the PMs start blaming them for the delays. Which is just stupid. And my QA friend? They're trying to make her a scapegoat by throwing her under the bus with business.
Fortunately, she's a smart cookie and not only has all communications with the PMs documented, she also has the other QAs backing her up by running the same tests.
tldr; Fuck those project managers who suck up to business and don't give a shit about the people who do the actual work. May they burn in hell and their souls rot in a cesspool of acidic farts for all eternity.
-
Been developing a FAAS backend for a mobile app while going back and forth to work on the train, constantly losing wifi and failing deployments; it's like waterboarding for geeks4
-
Gotta love the client-forced deployments, making the team work all weekend. Having the push to live at 9pm and then, with 10 minutes left, cancelling the whole thing. With a lovely "good job but we are not ready yet"
-
The computers and network seem to be CRAWLING today. Which is great, gives me plenty of time to imagine the many ways I could get myself fired for doing the deployments the way I am. 😅💀
-
For me, it was when I was on a team doing government work. We had an entire team devoted to deployments etc., which were handled via Ansible.
Ansible was fairly new at the time (~2015, they had just been bought by RedHat) but the team was definitely doing a great job picking it up and creating install playbooks for _every_ piece of our distributed infrastructure (load balancers, application servers, queues, databases, everything).
I luckily left before stuff got too hairy, but last I heard they are more than 6 months behind schedule. They STILL can't get a reproducible install process with the Ansible playbooks! And it's all due to tech debt, i.e. not giving any time to fix things, so it's just band-aid after band-aid.
It's really sad to hear because the system itself was pretty cool, completely horizontally scalable and definitely miles ahead of the program they've been using for the last 20 years. -
We're busy upgrading our application and our hosting servers from their current legacy setup. In a meeting today I suggested that we rebuild our stack with Packer and Ansible and automate deployments with Rundeck. Our head engineer said that automation is too risky and we should continue doing things manually. Right now he just ssh'd into a production machine to do a git pull...2
-
TL;DR As time goes by, I'm falling deeply in love with Linux. An infatuation? :D
Before, I really didn't care how the file system works, permission setup, library installation, etc. as long as I finished my project (back then, like 90% of the time, I copy-pasted cmds). But now, after many hair-pulling debugging sessions, crying-while-rolling-on-the-floor moments, and painful production deployments (wtf! it's working on my machine/dev server rants), it has helped me clearly realize how amazing it is. I might be relatively new to the OS compared to others, so maybe what I feel now is like having a crush on someone on a bus :). But still, I just wanted to say thank you to all who are giving their time to developing/improving Linux distros - you are heroes!
I'm hoping that I can contribute something soon :)
senti_mode off1 -
Good times: migrating a Jenkins build pipeline, patched together out of Groovy, Python, Bash, awk and Perl scripts and God knows what else (I have only scratched the surface so far), from Maven to Gradle while not breaking day-to-day builds, integrations and deployments of features, hotfixes and releases. I'm actually enjoying the challenge, but it's taking forever due to several issues:
- Jenkins breaks/hangs randomly because it's Jenkins
- Gradle can't handle sets of version ranges but Maven can
- Maven can't handle Gradle style version ranges
- Gradle doesn't have a concept of parent poms; you need to write a plugin and apply stuff programmatically (a minimal sketch follows after this list). But plug-ins, being part of the buildscript{}, don't fall under dependency management rules :clap:
- Meme incompatibility issues of BSD vs GNU versions of CLI tools like sed, grep etc1 -
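For anyone hitting the same parent-POM gap from the rant above: a minimal sketch of what that "write a plugin" workaround can look like, as a precompiled script plugin in buildSrc using the Gradle Kotlin DSL. The plugin name, group and dependency coordinates below are illustrative assumptions, not details from the pipeline being migrated.
```kotlin
// buildSrc/src/main/kotlin/company-conventions.gradle.kts
// Hypothetical convention plugin standing in for a Maven parent POM.
// Assumes buildSrc/build.gradle.kts applies the `kotlin-dsl` plugin.
plugins {
    java
}

group = "com.example" // illustrative group, not the real one

repositories {
    mavenCentral()
}

dependencies {
    // "parent"-style dependencies that every module inherits
    testImplementation("org.junit.jupiter:junit-jupiter:5.10.0")
}

tasks.withType<Test>().configureEach {
    useJUnitPlatform()
}
```
Each module then just declares plugins { id("company-conventions") } and inherits the shared setup; but, as the rant notes, the plugin classpath itself still lives in buildscript{}/buildSrc territory and escapes the normal dependency management rules.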
Spend weeks porting everything to Docker and automating deployments so we can be cloud provider agnostic...
Hosting providers all want long term contracts.2 -
If you compare a software developer's job with another, let's say a doctor or a lawyer, the former doesn't require mastery; there is a continuous chase after fast-changing version numbers or an entirely new platform coming out. The former innovates without question and gets burned out in the process, while the latter demands mastery of certain fields and the specialization isn't as diverse as the former's. Yet the pay for the latter might be higher. What pros and cons have you felt as a developer, and how do you cope with them internally? Is it just the thrill and excitement of new things coming out? What fulfillment do we get aside from the satisfaction of clean code, unit tests and successful deployments? How much impact have we really made? And is there a place for developers to finally settle down? Don't get me wrong; I won't stop until death, probably, but I hope adulting responsibilities won't make us break.
-
Docker deployment
Wondering how you guys are doing docker deployment (angular, php, whatever) on self hosted servers from a private gitlab instance ?
Also, the most recent gitlab release seems very promising for this.
There's a lot of info on deploying to AWS or Google, but not much (at least not clearly) on this case.
Would like to hear from you about your setups1 -
Not exactly related to the topic, but the key thing is chilling the fuck out.
I was always anxious and completely paranoid about minor bugs in my application during prod deployments (that was back when I didn't know about testing utils and so on), to the point that I couldn't fix a minor bug in the CSS and puked 5 times over.
Those were rough times, but then I got over it and it really helped me a lot.
I know bugs are really not the kind of thing you'd want to see in any application, but they will arise in every application :3
This invite to an ElasticSearch webinar is epic:
webinars/proven-architectural-patterns-for-mature-elastic-stack-deployments?ultron=reference-architecture-webinar&blade=invite&hulk=email -
Do you guys have people in your office that just REFUSE to cooperate, or people who tell you they'll cooperate, but then they literally do anything except for cooperate?
I'm having trouble with the latter; I've been trying to get one of our less experienced members to work on our deployment. He's successfully configured at least 4 other deployments, and this one is the EXACT SAME as the other ones. The issue is that the person who is in control of this particular master console is someone higher up than me, but they don't know how to delegate. Thus everything that they touch becomes their own little pet project that no one else can dare touch, because they'll "mess it up" (not do it the right way according to his limited bible of best practices).
So now I'm stuck here, trying to convince HIS BOSS to get him access, but even HE can't get him to do it! Now I'm sitting here waiting, getting more and more fed up with this guy, because like I said, it's his MO: I'm on two other projects with him, and they're all moving at a glacier's pace.
Seriously, if you don't have the time for a project, put it on the backburner; don't start it and make your other projects suffer.6
Hey there devRant, how many of you have experience with large scale meteorJS deployments?
Is it reliable? What was your approach in structuring your app and infrastructure?
Thanks in advance :)1 -
Just heard someone saying it's bad security practice to have composer and git on a production server for deployments.... did I miss the memo?1
-
I guess these days I work with Golang, gRPC, and Kubernetes. I guess that's a dev stack. Or turning into one at the very least. The only thing that annoys me about this stack is how deployments for Kubernetes differ between CSPs. And the fact that setting up a Kubernetes/Golang dev environment takes a lot of time and effort. And gRPC can be a pain in the ass to work with as well. Since it's fairly new in large-scale enterprise use, finding best practices can be pretty hard, and everything is "feet in the fire" and "trial by error" when dealing with gRPC.
And Golang channels can get very hairy and complicated really really fast. As well as the context package in Golang. And Golang drama with package managers. I wish they would just settle on GoDeps or vgo and call it a day.
And for the love of God, ADD FUCKING GENERICS! Go code can be needlessly long and wordy. The alternative "struct function members" can be pretty clunky at times. -
I just pulled an all-nighter on some homework for grad school with a good friend, and now I have 2 deployments today. Guess I can come off my coffee hiatus because I need it! This day is going to be long, unlike my patience.
-
Just now while having dinner, we saw Troy was on TV. The part where Achilles' younger brother takes it upon himself to go to war disguised as Achilles... even when Achilles said we're going home.
In my mind, I was seeing it as: that's how a junior developer fucks up when he is overfilled with enthusiasm and patriotism towards the company and deploys to the server with his senior's credentials, even though the senior said "NO DEPLOYMENTS ON FRIDAYS"... and now everybody has to deal with this shit.
What's the fucking purpose of our company's dev, test and prod environments? Dev always only has a single instance. Sometimes clustered services run as a cluster on test, producing headaches because the clustering behaviour couldn't be seen on a single instance, and prod lacks all the nice deployment tools of dev/test. Fuck thinking you could go dev, then test, then prod without any major reconfiguration and headaches. And all because storage costs are RETARDEDLY expensive, because they back up EVERYTHING with ridiculous overkill. That results in headaches when requesting new servers. Took an old workstation from the shelves and made it my VM slave so at least I could reliably deploy to test.. Fuck this process
-
Designers use Dreamweaver, developers use Visual Studio... only our stuff is in TFS. Beyond Compare gets a workout on deployments.
-
Work as an SOE Engineer and have a lot of custom application deployments managed via PowerShell. A colleague came over today and suggested that I include a few more "sleep" breaks, as newer processors run code too fast and can skip over cmdlets.
I can't even.2 -
I was planning on migrating my Mastodon instance to a new node, but then I looked carefully at my deployment scripts. I had built in support for multiple servers, but not everything supports it, and the configuration is messy now that I think about it.
Now I need to write a bunch of tests, and then refactor a bunch of my code. Hopefully I can get this done before I run out of space on my Mastodon instance. It's gonna be a fun day. :-P -
Bit of a stupid oopsie I had today that someone might appreciate.
We’re working on a microservice project in Spring Boot, running in a docker swarm. Past few days I get a Spring Cloud config server going in separate stack, create an overlay network, and get CI deployments to use the right profiles etc. It’s looking great, and the first component is working spectacularly.
Now just to do the other 6. Move config files to the Git repo, tweak CI, all the other faffing and hoohas; and deploy. Health checks keep failing, the containers are murdering themselves and resurrecting ad infinitum. They’re doing this so quickly that by the time I get the container ID to exec in and curl health, it’s no longer running. Cue frustration, increased caffeine and nicotine consumption; my sanity is slipping.
No errors in the logs, because from experience the Cloud Config errors are at debug level. Whhhyyyy?? Some time later (way longer than it should have been) I realize I had never actually included the Spring Cloud Config starter. Boot 101: get your starter!
Since the config client is just additional setup in properties.yml, there's no issue if the dep isn't there; it just doesn't try to get the config.
The containers are still unhealthy, I can hear them screaming. But now at least it’s about something else... -
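For reference, the forgotten piece from the rant above boils down to one dependency. A minimal sketch in Gradle Kotlin DSL, assuming a Gradle build and an illustrative Spring Cloud BOM version; the rant doesn't say which build tool or release train the project actually uses.
```kotlin
// build.gradle.kts (sketch): without this starter on the classpath,
// the spring.cloud.config.* client properties are silently ignored and
// the app never contacts the config server, exactly as described above.
dependencies {
    // Pin Spring Cloud versions via the BOM (version is an assumption)
    implementation(platform("org.springframework.cloud:spring-cloud-dependencies:2021.0.8"))
    // The config client starter that was never actually included
    implementation("org.springframework.cloud:spring-cloud-starter-config")
}
```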
When a production roll out goes better than expected and no issue happens.
https://goo.gl/images/XwxfJp -
So at my last job we had an AM deployment and a PM deployment. We had code reviews, QA, a slow roll process (deployed to three servers), monitoring process, and once everything checked out we fast rolled to the other servers.
At my current job we have a QA process, and we deploy once every three weeks.
My first job I deployed as needed, with no QA at all (I was the only web dev there).
I'm currently at a major e-commerce site, my last job was more of a click-bait site (though it still made millions in revenue each year).
So my question is: is there a "normal" as far as deployment schedules? I realize that each business type is going to have their own needs, but what's the "average" time between deployments?