Bash with AWS CLI - unable to locate credentials
Solution 1
sudo changes the $HOME directory (and therefore ~) to /root, and strips most environment variables, including AWS_CONFIG_FILE. Make sure you do everything with aws either as root or as your user; don't mix the two.
Make sure you ran sudo aws configure, for example. And try:
sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'
You might prefer to remove all the sudo from inside the script, and just sudo the script itself.
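One pitfall worth checking in scripts like the one in the question: tilde is not expanded inside double quotes, so a line like AWS_CONFIG_FILE="~/.aws/config" sets the variable to the literal string ~/.aws/config. A minimal sketch of the expansion rules (plain shell, no AWS calls):

```shell
# Tilde inside double quotes is NOT expanded by the shell, so a tool
# reading this variable would look for a file literally named ~/.aws/config:
bad="~/.aws/config"
echo "$bad"             # prints the literal string ~/.aws/config

# An unquoted tilde (or $HOME) does expand to the real home directory:
good=$HOME/.aws/config
echo "$good"            # e.g. /home/youruser/.aws/config

# Remember that under sudo, $HOME is /root, so even a correctly expanded
# path resolves against root's home directory, not yours.
```

Using $HOME explicitly (or no quotes around the tilde) avoids the silent mismatch.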
Solution 2
While you might have your credentials and config file properly located in ~/.aws, it might not be getting picked up by your user account.
Run this command to see if your credentials have been set:
aws configure list
To set the credentials, run aws configure and then enter the credentials that are specified in your ~/.aws/credentials file.
Solution 3
Answering in case someone stumbles across this based on the question's title.
I had the same problem, whereby the AWS CLI was reporting unable to locate credentials. I had removed the [default] set of credentials from my credentials file, as I wasn't using them and didn't think they were needed. It turns out they are. I then reformed my file as follows and it worked:
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
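If you want a script to detect this situation before it fails, here is a small sketch; the has_default_profile helper is hypothetical, not part of the AWS CLI:

```shell
# has_default_profile FILE -- succeeds if FILE contains a [default] section
has_default_profile() {
  grep -q '^\[default\]' "$1" 2>/dev/null
}

# usage: warn before running commands that rely on the default profile;
# AWS_SHARED_CREDENTIALS_FILE overrides the standard location if it is set
creds="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"
if ! has_default_profile "$creds"; then
  echo "warning: no [default] profile in $creds" >&2
fi
```

This only checks that the section header exists; the CLI itself validates the keys inside it.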
Solution 4
The unable to locate credentials error usually occurs when working with different aws profiles and the current terminal can't identify the credentials for the current profile.
Notice that you don't need to fill in all the credentials via aws configure each time - you just need to reference the relevant profile that was configured once.
From the Named profiles section in AWS docs:
The AWS CLI supports using any of multiple named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files. The following example shows a credentials file with two profiles. The first, [default], is used when you run a CLI command with no profile. The second is used when you run a CLI command with the --profile user1 parameter.
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file, you can select that profile per command:
aws ec2 describe-instances --profile user1
Or export it to terminal:
$ export AWS_PROFILE=user1
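In a script, exporting the variable once keeps every later command on the same profile. A sketch using the user1 profile from the example above; the aws calls are shown as comments since they need real credentials:

```shell
#!/bin/sh
# select the named profile once for everything that follows
export AWS_PROFILE=user1

# every subsequent CLI call now uses user1's credentials, e.g.:
# aws ec2 describe-instances
# aws s3 ls

# an explicit --profile flag on a single command still overrides
# AWS_PROFILE for that command only
```

This avoids sprinkling --profile through a long script.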
Solution 5
This isn't necessarily related to the original question, but I came across this when googling a related issue, so I'm writing it up in case it helps anyone else. I set up aws for a specific user and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-east-1 config_file ~/.aws/config
I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other packages (docutils, botocore, rsa, s3transfer, jmespath, python-dateutil, pyasn1, futures), but it resulted in things starting to work properly:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************WXYZ shared-credentials-file
secret_key ****************wxyz shared-credentials-file
region us-east-1 config-file ~/.aws/config
Smajl
Software developer with focus on Java and cloud technologies.
Updated on July 09, 2022
Comments
-
Smajl almost 2 years
I have a shell script which is supposed to download some files from S3 and mount an ebs drive. However, I always end up with "Unable to locate credentials".
I have specified my credentials with the aws configure command, and the commands work outside the shell script. Could somebody please tell me (preferably in detail) how to make it work? This is my script:
#!/bin/bash
AWS_CONFIG_FILE="~/.aws/config"
echo $1
sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test
sudo aws s3 sync s3://backup-test-s3 /s3-backup/test
du -h /s3-backup-test
Thanks for any help!
-
Smajl almost 9 years: Thanks, you are absolutely right... this was the root of the problem (running the script with sudo but specifying the credentials as a normal user).
-
alexanderdavide almost 3 years: Had this problem in a VS Code Remote Container: aws configure ran as root, but an npm script executing aws codeartifact login couldn't access the ~/.aws/credentials file. Specifying the containerUser helped.
-
Mohit Sharma almost 3 years: I completely agree with the export approach. The reason is that in some scripts you can't pass the AWS profile every time, so it's better to use export AWS_PROFILE.
-
Smitty over 2 years: My issue was similar. I had created a rake task in Rails that utilized some bash commands, and I had removed the [default] block from my .aws/credentials file. I had previously been invoking a specific profile and it was working, so I just moved the credentials from that profile under [default] (which I added back).
-
Metro Smurf over 2 years: The key here is that the default profile was missing. I too had deleted my default profile so I wouldn't accidentally run a command without a named profile. 🤷🏼♀️