Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?

Solution 1

It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:

s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME

That copies all of the files to your S3 bucket while maintaining the folder structure. The -P flag keeps read permissions open for everyone (needed if you're using your bucket as a web server). The sync feature makes this a great solution, because it compares all of your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
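
As a hypothetical illustration, an "Execute shell" build step built around this command could look something like the sketch below; the bucket name and exclude pattern are placeholders, and s3cmd is assumed to already be configured with your AWS credentials (e.g. via s3cmd --configure):

#!/bin/bash
# Sync the whole Jenkins workspace to the bucket, preserving the folder structure.
# YOUR_BUCKET_NAME is a placeholder; --exclude skips files you don't want published.
# Drop -P if the bucket should stay private.
s3cmd sync -r -P --exclude '.git/*' "$WORKSPACE/" s3://YOUR_BUCKET_NAME/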

Solution 2

I have never worked with the S3 plugin for Jenkins (but now that I know it exists, I might give it a try). Looking at the code, though, it seems you can only do what you want using a workaround.

Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for readability:

class hudson.plugins.s3.S3Profile, method upload:

final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);

Now, if you take a look at hudson.FilePath.getName()'s JavaDoc:

Gets just the file name portion without directories.

Now, take a look at the hudson.plugins.s3.Destination constructor:

public Destination(final String userBucketName, final String fileName) {

    if (userBucketName == null || fileName == null) 
        throw new IllegalArgumentException("Not defined for null parameters: "+userBucketName+","+fileName);

    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}

The Destination class JavaDoc says:

The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.

Conclusion: the filePath.getName() call strips off any prefix you add to the file (S3 does not have directories, only key prefixes; see this thread and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a specific prefix that contains a slash (/)), I suggest you add this prefix to the end of your bucket name, as described in the Destination class JavaDoc.
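
As a hypothetical illustration of that workaround: if you set the plugin's bucket field to mybucket/assets and upload the workspace file css/style.css, the directory part (css/) is dropped by filePath.getName() and only the prefix from the bucket field survives, so the result is roughly what this s3cmd command would produce:

# mybucket and assets are placeholder names; the resulting object key is
# assets/style.css, not assets/css/style.css, because only the file name
# portion of the source path is kept.
s3cmd put css/style.css s3://mybucket/assets/style.css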

Solution 3

Yes, this is possible.

It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.

"Source" is the file you're uploading.

"Destination bucket" is where you place your bucket name and path (prefix).

Solution 4

  1. Set up your Git plugin.

     (screenshot)

  2. Set up your Bash script (a rough sketch follows this list).

     (screenshot)

  3. Everything in your folder marked as "*" will go to the bucket.
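
The screenshots aren't reproduced here, but as a rough sketch (assuming the same s3cmd approach as Solution 1, with a placeholder bucket name), the Bash build step from step 2 could be as simple as:

#!/bin/bash
# After the Git plugin has checked the repository out into $WORKSPACE,
# push everything in the workspace to the bucket.
s3cmd sync -r "$WORKSPACE/" s3://YOUR_BUCKET_NAME/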

Solution 5

Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the job configuration screen in the UI rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.

Author: Kris Anderson

Updated on September 19, 2020

Comments

  • Kris Anderson
    Kris Anderson over 3 years

    I'm running Jenkins and I have it successfully working with my Github account, but I can't get it working correctly with Amazon S3.

    I installed the S3 plugin, and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js, and so on), but all of the files in those folders from GitHub end up in the root of my S3 bucket.

    Is it possible to get the S3 plugin to upload and retain the folder structure?

  • Aron
    Aron over 10 years
    I would suggest using the JClouds plugin now.
  • aendra
    aendra about 10 years
    JClouds isn't great either; it keeps failing with an obscure error about streaming. Having to fall back to a CLI tool for this is ludicrous...
  • Bruce Edge
    Bruce Edge over 9 years
    s3cmd is not optimal as it requires a separate store for the AWS credentials. There's merit to the S3 plugin's use of S3 profiles for authentication. Agreed that preserving the artifact hierarchy would be preferable to the current behavior.
  • mohamnag
    mohamnag about 9 years
    That plugin is not worth trying! It is OK for uploading one file, but it does not keep the directory structure.
  • Alex Nauda
    Alex Nauda over 8 years
    It's not entirely recursive, but you can upload to a subdirectory within the s3 bucket.