Recently some of our Elastic Beanstalk deployments suddenly started failing. It turned out to be a change to EB's configuration validation, which produced some misleading error messages.
With the release of
MySQL 5.7, there is no longer a default user account with an empty password. For security reasons the
root account is now assigned a random temporary password when MySQL is installed, which is written to a log file.
You then look up the password in the log file, and use it to log in and change it to something else...
$ sudo grep 'temporary password' /var/log/mysqld.log
$ mysqladmin -u root --password=RANDOM_PASSWORD_FROM_LOG password myNewSuperSecretPassword1!
This is all well and good, unless you are automating the deployment of MySQL. We use
ansible to spin up our local dev servers, and as soon as we upgraded MySQL, all our MySQL commands started failing because they could no longer authenticate.
There was no obvious way to install with a predefined password (or no password at all), so we came up with the following to automate setting up MySQL.
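The core of the automation can be sketched as a small shell script (the log path and the new password are illustrative, and the variable names are mine): pull the temporary password out of the log, then use it once to set a known password.

```shell
# Sketch: extract the temporary root password and replace it.
# LOG_FILE and NEW_PASSWORD are illustrative defaults.
LOG_FILE="${LOG_FILE:-/var/log/mysqld.log}"
NEW_PASSWORD="${NEW_PASSWORD:-myNewSuperSecretPassword1!}"

# The temporary password is the last field of the "temporary password" log line.
TEMP_PASSWORD=$(grep 'temporary password' "$LOG_FILE" 2>/dev/null | tail -1 | awk '{print $NF}')

# Use the temporary password once to set the real one (requires mysqladmin).
if [ -n "$TEMP_PASSWORD" ] && command -v mysqladmin >/dev/null 2>&1; then
  mysqladmin -u root --password="$TEMP_PASSWORD" password "$NEW_PASSWORD"
fi
```

In ansible this maps naturally onto a `shell` task guarded by a check that the password has not already been changed.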
So, last week I was writing a post about cleaning up after docker, and I got a bit carried away and ended up deleting a data container. The container was no longer running, but its docker volume was still in use by other running containers.
Thankfully, I didn't use the
-v flag (which removes the container's volumes as well, although volumes still in use won't be removed), so I still had all the data; I had just lost the container that created it.
The problem is that docker references the volume via the container when mounting it as a shared volume. As I had deleted the container, I could no longer reference the volume.
Thankfully there is a very simple solution to get the volume mapped back to the data container.
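One way to do this (an assumption on my part, not necessarily the post's exact fix): find the volume's host-side path from a container that still mounts it, then bind that path into a fresh data container. The JSON below is a trimmed sample of what `docker inspect <container>` returns; the paths and names are illustrative.

```shell
# Trimmed sample of "docker inspect" output from a container still using the volume.
INSPECT_JSON='[{"Mounts":[{"Source":"/var/lib/docker/volumes/abc123/_data","Destination":"/var/lib/mysql"}]}]'

# Pull out the host-side path of the volume.
HOST_PATH=$(printf '%s' "$INSPECT_JSON" | sed -n 's/.*"Source":"\([^"]*\)".*/\1/p')
echo "$HOST_PATH"

# Bind that path into a new data container so the volume has a name again
# (needs a running docker daemon, so only shown as a comment here):
#   docker run -v "$HOST_PATH":/var/lib/mysql --name datacontainer busybox true
```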
We have some
ansible scripts that deploy
docker containers to various environments. All had been going fine until this week, when we fell foul of the untidy teenager that is docker!
Our deployment was failing as the box had run out of disk space. Turns out you need to tidy up after docker...
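The usual tidy-up from that era (my assumption, not necessarily the post's exact commands; this predates `docker system prune`) is to remove stopped containers and dangling images:

```shell
# Guarded so the script is a no-op on a box without docker installed.
if command -v docker >/dev/null 2>&1; then
  # Remove stopped containers.
  docker ps -aq -f status=exited | xargs -r docker rm

  # Remove dangling (untagged) images left behind by repeated builds.
  docker images -qf dangling=true | xargs -r docker rmi
fi
```

`xargs -r` (GNU) skips the command entirely when there is nothing to delete.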
Sometimes EB fails to deploy and times out with a generic error:
Unsuccessful command execution on instance id(s) 'i-********'. Aborting the operation.
As yet there is no clean way to get out of this, but here are the top three ways we have got it working, depending on your needs...
As part of our deployment process, we include the short git SHA as a build number against our SemVer version number.
We wanted this available in our application UI to help identify running versions. Easy, I thought: Beanstalk sets this as the
Version Label when deploying, so we should be able to access it from the environment vars in PHP.
Unfortunately not; this is one of the many EB values you can't easily get at. After a lot of failed attempts, here is how we managed it as a one-liner in the
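For context, the build number itself comes straight from git; a minimal sketch of composing the label (the SemVer value and the `unknown` fallback are illustrative):

```shell
# Compose a SemVer + short-SHA build label; "1.2.3" is illustrative.
SEMVER="1.2.3"
BUILD_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
VERSION_LABEL="${SEMVER}+${BUILD_SHA}"
echo "$VERSION_LABEL"   # e.g. 1.2.3+a1b2c3d
```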
Had this little annoying error with docker the other day.
We had this error the other day when running our vagrant script (CentOS box):
[Errno -1] repomd.xml does not match metalink for epel
Trying other mirror.
It tried and failed on every single mirror, which effectively hung our deployment.
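The workaround we've seen most often (an assumption on my part, not necessarily the post's exact fix) is to bypass the metalink and enable the direct baseurl in epel.repo, then clear the yum cache. Demonstrated below against a throwaway sample fragment; on a real box the file is `/etc/yum.repos.d/epel.repo`.

```shell
# Work on a throwaway copy of a sample epel.repo fragment.
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
EOF

# Uncomment the baseurl and comment out the mirrorlist.
sed -i -e 's/^#baseurl/baseurl/' -e 's/^mirrorlist/#mirrorlist/' "$REPO_FILE"

grep '^baseurl' "$REPO_FILE"
rm -f "$REPO_FILE"
# On the real file, follow up with: sudo yum clean all
```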
UPDATE: EB CLI tools v3 now support multiple AWS profiles
AWS makes it very simple to push to different environments of an application, but what if you need to push to a completely separate AWS account?
It's not really possible out of the box, but you can shoehorn it with the following setup...
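Per the update above, the shoehorn now rests on named profiles in the shared AWS credentials file (the key values below are placeholders), with EB CLI v3 pointing a deploy at either account via `--profile`:

```ini
; ~/.aws/credentials -- one named profile per AWS account
[default]
aws_access_key_id     = AKIA...MAIN
aws_secret_access_key = ...

[other-account]
aws_access_key_id     = AKIA...OTHER
aws_secret_access_key = ...
```

Then, assuming the second account's environment already exists: `eb deploy --profile other-account`.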