Comments
-
No idea if you are launching many instances or using an ASG, but invest some time in Packer and/or a configuration manager plus Terraform. AWS instances fail all the time, keep that in mind: DNS, network, disk, etc.
-
@elgringo the error is happening inside Packer. Packer beats AWS to the sources.list update.
-
@devphobe you can always customize the mirrors and maybe run a local repo, but that might be overkill; it depends on what you are doing
-
@devphobe
APT priority, pinning by origin:
You could add a sources.list.d entry containing a mirror and pin it to the highest priority.
Otherwise, if you want to avoid using the AWS mirror entirely:
an /etc/hosts entry (an otherwise unresolvable name like apt-mirror) pointing to a fixed IP address.
I could come up with some cruder ideas...
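A rough sketch of that pinning idea, assuming Ubuntu focal and the stock archive.ubuntu.com mirror (file names, priorities, and the example IP are illustrative, not anything from the actual setup):

```
# Sketch only: pin the stock Ubuntu mirror above whatever cloud-init swaps in.
cat <<'EOF' > /etc/apt/sources.list.d/pinned-mirror.list
deb http://archive.ubuntu.com/ubuntu focal main universe
deb http://archive.ubuntu.com/ubuntu focal-updates main universe
EOF

# Give that origin the highest priority so apt prefers it over the EC2 regional mirror.
cat <<'EOF' > /etc/apt/preferences.d/pinned-mirror
Package: *
Pin: origin archive.ubuntu.com
Pin-Priority: 1001
EOF

# Cruder alternative: hard-wire a mirror hostname to a fixed IP in /etc/hosts.
# 203.0.113.10 is a placeholder documentation address.
echo "203.0.113.10 apt-mirror" >> /etc/hosts

apt-get update
```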
-
@IntrusionCM crude indeed! I'm trying to stay as close to "idiomatic" as possible. I think I can wait for cloud-init to finish and just proceed.
-
I can relate to feeling 'dirty' after spending hours trying to figure out why 'thing' is happening, and then once you've figured it out, your fix is a hacky line of code.
-
@devphobe I'm considering moving to EKS; orchestration might help you enjoy a good coffee 😬
I'm a DevOps engineer. It's my job to understand why this type of shit is broken, and when I finally figure it out, I get so mad at bullish players like AWS.
It's simple. Install Python3 from apt.
`apt-get update && apt-get install -y python3-dev`
I've done this thousands of times, and it just works.
Docker? Yup.
AWS AMI? Yup.
Automation? Nope.
WTF? Let's waste 2.5 hours this morning figuring out why.
In Docker, `apt-cache policy python3-dev` shows us:
python3-dev:
http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
But on the AWS instance, we see we're reading from "http://us-east-1.ec2.archive.ubuntu.com/... focal/main" instead!
Ah, but why does it fail? AWS is just using a mirror, right? Not quite.
When the automation script runs, it beats AWS to the apt mirror update! My instance, running on AWS, is trying to access the same archive.ubuntu.com that the Docker container used. "python3-dev" was not a candidate for installation! WTF Amazon? Shouldn't that just work, even if I'm not using your mirror?
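The quickest way to see the race is to poke at the box from inside the failing run (a diagnostic sketch; none of this is part of the original automation):

```
# Which mirror is apt actually pointed at right now?
grep -h '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null

# Has cloud-init finished its first boot (and its sources.list rewrite) yet?
cloud-init status

# And what does apt think about the package at this exact moment?
apt-cache policy python3-dev
```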
So I try again, and again, and again. It works, on average, 1 out of every 5 times. I'm assuming this means we're seeing some strange shit configuration between EC2 racks, where some are configured to redirect archive.ubuntu.com to the EC2 mirror and others are configured to block it. I haven't dug that far into the issue yet, because by the time I can SSH into the machine after automation, the apt list has already received its blessed update from EC2.
Now I have to build a graceful delay into my automation while I wait for AWS to mangle, I mean "fix up" my apt sources list to their whim.
After completely blowing my allotted time on this task, I just shipped a "sleep" statement in my code. I feel so dirty. I'm going to go brew some more coffee to be okay with my life. Then figure out a proper wait statement.
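For what it's worth, the "proper wait" will probably end up looking something like this (a sketch, assuming cloud-init is what rewrites the sources on first boot):

```
#!/usr/bin/env bash
set -euo pipefail

# Block until cloud-init has finished first-boot configuration
# (including the sources.list rewrite), instead of guessing with a fixed sleep.
cloud-init status --wait

# By now the mirror is whatever AWS settled on, so this should behave like it does in Docker.
apt-get update
apt-get install -y python3-dev
```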
rant
smoke and mirrors
aws
automation