
AWS S3 CLI – some tips for automation


Amazon S3 is very cheap cloud storage. Below are some examples that will help with automation. On S3 you can upload files that you want to access over the internet, whether they are used by your site or by the people visiting it.

On AWS S3, you can also store files that you want to use in automation processes.

Install AWS CLI

First, if you don't have it yet, install the AWS CLI on your machine, preferably version 2:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
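
After the installation you can confirm that the CLI is available and check which version was installed:

aws --version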

Public S3 bucket

If you have an S3 bucket that is available to the public, you can browse and download its contents without providing any credentials; just add the --no-sign-request parameter as in the examples below:

#viewing content:
aws s3 ls s3://awstest-test1/ --no-sign-request
aws s3 ls s3://awstest-test1/ --no-sign-request --recursive
#copying 1 file:
aws s3 cp s3://awstest-test1/fajny_skrypt.sh ./ --no-sign-request
#copying a directory:
aws s3 cp s3://awstest-test1/test  ./ --recursive --no-sign-request

Thanks to this, you can always access publicly available objects without providing any credentials. Of course, you can also reach a public file over HTTPS by using its direct address, but this post is about the CLI.
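
For completeness, a direct HTTPS download of a public object might look like the sketch below; the exact hostname depends on the bucket's region, so treat it as an assumption:

#direct download over HTTPS (region-specific endpoints use <bucket>.s3.<region>.amazonaws.com):
curl -O https://awstest-test1.s3.amazonaws.com/fajny_skrypt.sh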

Private S3 bucket

If the S3 bucket is not public (the recommended option) and you want to download something from it, you need to find the access key ID and secret access key of a user with the right permissions and save them in the ~/.aws/credentials file. Note that the example below appends to that file, so if a [default] profile already exists there, edit it instead.

mkdir -p ~/.aws
cat >> ~/.aws/credentials << EOF
[default]
aws_access_key_id=AKIAXDKQAR234U6EXXZO
aws_secret_access_key=deqCL/Ru7h+n2BT10jSrfgdsC6GXWU1swL5o0z/M
EOF

If you substituted the ID and key correctly in the example, and the user has S3 permissions, you can easily download files that are not available to the public.
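
Before downloading anything, you can quickly verify that the credentials are picked up correctly; this call only returns the identity of the configured user:

aws sts get-caller-identity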

#copying 1 file:
aws s3 cp s3://awstest-test1/fajny_skrypt.sh ./ 
#copying a directory:
aws s3 cp s3://awstest-test1/test  testdirectory --recursive 
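
The same cp command also works in the other direction; here is a minimal sketch that uploads a local file back to the example bucket (reusing the file name from earlier):

#uploading a file to the bucket:
aws s3 cp ./fajny_skrypt.sh s3://awstest-test1/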

You can also enter the user configuration interactively using:

aws configure
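
For automation it can be safer not to touch the default profile; a sketch using a named profile (the name "automation" is just an example) could look like this:

aws configure --profile automation
aws s3 ls s3://awstest-test1/ --profile automation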

Other useful AWS S3 commands

Summary of S3 File Sizes:

aws s3 ls s3://awstest-test1/ --recursive --human-readable --summarize

Moving files that are not in .jpg format (s3 mv needs both a source and a destination; here everything except .jpg files is moved to the second example bucket):

aws s3 mv s3://awstest-test1/ s3://awstest-test2/ --recursive --exclude "*.jpg"

Moving between two S3 buckets:

aws s3 mv s3://awstest-test1/ s3://awstest-test2/ --recursive

The last, very handy command synchronizes files and can be run, for example, from cron:

#from S3 to VM
aws s3 sync s3://awstest-test1/test /syncfolder

#from VM to S3
aws s3 sync /syncfolder s3://awstest-test1/test 
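
As a sketch of the cron idea, an hourly crontab entry could look like the line below; the schedule, the absolute path to aws and the log file are assumptions to adapt to your system:

0 * * * * /usr/local/bin/aws s3 sync /syncfolder s3://awstest-test1/test >> /var/log/s3sync.log 2>&1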

More about S3 can be found in the AWS documentation; I encourage you to browse it if you need something specific.

If you liked this short post about AWS, I encourage you to leave a comment and take a look at the other articles in the AWS category.