Amazon S3 avoid overwriting objects with the same name
Solution 1
My comment from above doesn't work. I thought the WRITE
ACL would apply to objects as well, but it only works on buckets.
Since you enabled versioning, your objects aren't overwritten. But if you don't specify a version in your GET request or URL, the latest version is returned. This means that when you put an object into S3, you need to save the version ID from the response in order to retrieve the very first object.
See Amazon S3 ACL for read-only and write-once access for more.
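To make the versioning semantics concrete, here is a minimal in-memory model of how a versioned bucket behaves (this is an illustration, not the S3 API itself): every put keeps the old versions, a plain get returns the latest, and the version ID you saved at upload time retrieves the original.

```python
import uuid

class VersionedBucket:
    """Toy model of S3 versioning semantics: puts never overwrite,
    a plain get returns the latest version, and a saved version id
    retrieves the original object."""

    def __init__(self):
        self._versions = {}  # key -> list of (version_id, body)

    def put(self, key, body):
        version_id = uuid.uuid4().hex  # S3 generates this for you
        self._versions.setdefault(key, []).append((version_id, body))
        return version_id  # save this to retrieve the first upload later

    def get(self, key, version_id=None):
        versions = self._versions[key]
        if version_id is None:
            return versions[-1][1]  # no version given: latest wins
        return next(body for vid, body in versions if vid == version_id)

bucket = VersionedBucket()
first = bucket.put("report.txt", "v1")
bucket.put("report.txt", "v2")
print(bucket.get("report.txt"))         # prints "v2" (latest)
print(bucket.get("report.txt", first))  # prints "v1" (original, via saved id)
```

With the real API, the same pattern applies: keep the `VersionId` from the `put_object` response and pass it back in `get_object`.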
Solution 2
You can also configure an IAM user with limited permissions. Writes are still writes (i.e., updates), but using an IAM user is a best practice anyway.
The owner (i.e., your "long-term access key and secret key") always has full control unless you go completely out of your way to disable it.
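As a sketch of what "limited permissions" might look like, here is a hypothetical IAM policy (expressed as a Python dict; the bucket name is an assumption). Note the caveat from above: S3 has no separate "create" versus "overwrite" action, so `s3:PutObject` still lets the user replace existing objects.

```python
# Hypothetical policy for a limited "uploader" IAM user. s3:PutObject
# covers both new writes and overwrites, so this alone does not prevent
# replacing an existing object -- it only narrows what else the user can do.
uploader_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",  # assumed bucket name
        }
    ],
}
```

You would attach a policy like this to the IAM user instead of handing out the account owner's long-term keys.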
Solution 3
Here is my suggestion if you are using a DB to store the key of every file in your S3 bucket.
Generate a random key and try to insert it into your DB, in a field with a UNIQUE constraint (allowing NULL entries). If the insert fails, the key has already been used; repeat until you get a unique key.
Then put your file on S3 with the key you know is unique.
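The loop above can be sketched like this (using SQLite as a stand-in for your real DB; the table and column names are placeholders):

```python
import secrets
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for your real database
conn.execute("CREATE TABLE files (s3_key TEXT UNIQUE)")

def reserve_unique_key(conn, key_bytes=16):
    """Generate a random key and claim it in the DB; the UNIQUE
    constraint guarantees no two uploads ever share a key."""
    while True:
        key = secrets.token_hex(key_bytes)
        try:
            conn.execute("INSERT INTO files (s3_key) VALUES (?)", (key,))
            conn.commit()
            return key  # safe to use as the S3 object key now
        except sqlite3.IntegrityError:
            continue  # collision (vanishingly rare); try another key

key = reserve_unique_key(conn)
# ...then upload to S3 under `key`, keeping the original filename
# in another DB column if you need it for display or download.
```

The database, not S3, is the arbiter of uniqueness here, so two concurrent uploaders can never race into the same object key.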
CyberJunkie
Updated on November 08, 2020
Comments
-
CyberJunkie over 3 years
If I upload a file to S3 with a filename identical to that of an object already in the bucket, it overwrites it. What options exist to avoid overwriting files with identical filenames? I enabled versioning in my bucket thinking it would solve the problem, but objects are still overwritten.
-
CyberJunkie over 11 years
Thanks, I hadn't thought of that. A user who can't update/overwrite would be ideal if I can set it up in AWS.
-
Ryan Parman over 11 years
You'll have to double-check the documentation. I don't know if S3 understands the difference between a write and an update. I know that by default (i.e., full permissions), writes and updates are treated as the same thing.