Read file from AWS S3 bucket using Node fs


Solution 1

You have a couple of options. You can include a callback as the second argument, which will be invoked with any error and the response object. This example is straight from the AWS documentation:

s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
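
Since the goal here is to split the file into lines, note that data.Body arrives as a Buffer in the callback. A minimal sketch for the asker's use case (reusing the bucket and key names from the question) might look like:

var AWS = require('aws-sdk');

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey.csv'};

s3.getObject(params, function(err, data) {
  if (err) return console.log(err, err.stack);
  // Body is a Buffer; convert it to a string before splitting
  var myLines = data.Body.toString('utf-8').split('\n');
  console.log(myLines.length + ' lines read');
});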

Alternatively, you can convert the response into a readable stream. There's also an example of this in the AWS documentation:

var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');
s3.getObject(params).createReadStream().pipe(file);
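
One caveat worth adding: if the key doesn't exist or the request fails, createReadStream() surfaces the failure as an 'error' event on the stream, so a slightly more defensive sketch would attach a handler before piping:

s3.getObject(params).createReadStream()
    .on('error', function(err) { console.log(err); }) // e.g. NoSuchKey
    .pipe(file);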

Solution 2

This will do it:

new AWS.S3().getObject({ Bucket: this.awsBucketName, Key: keyName }, function(err, data) {
    if (err) {
        console.log(err, err.stack);
    } else {
        console.log(data.Body.toString());
    }
});
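
If you prefer promises, the v2 SDK's request object also exposes a .promise() method, so the same read can be written with async/await. A sketch, using placeholder bucket and key names:

const AWS = require('aws-sdk');

async function readObject() {
    const s3 = new AWS.S3();
    const data = await s3.getObject({ Bucket: 'myBucket', Key: 'myKey.csv' }).promise();
    return data.Body.toString('utf-8'); // Body is a Buffer
}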

Solution 3

Since you seem to want to process an S3 text file line by line, here is a Node version that uses the standard readline module and AWS's createReadStream():

const readline = require('readline');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const params = {Bucket: 'myBucket', Key: 'myKey.csv'}; // placeholder names

const rl = readline.createInterface({
    input: s3.getObject(params).createReadStream()
});

rl.on('line', function(line) {
    console.log(line);
})
.on('close', function() {
    // all lines have been read
});
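
On Node 11.14 and later, the same readline interface is async-iterable, so the loop above can also be written with for await...of (a sketch, not part of the original answer):

async function printLines() {
    const rl = readline.createInterface({
        input: s3.getObject(params).createReadStream(),
        crlfDelay: Infinity // treat \r\n as a single line break
    });

    for await (const line of rl) {
        console.log(line);
    }
}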

Solution 4

Here is an example I used to retrieve and parse JSON data from S3.

var params = {Bucket: BUCKET_NAME, Key: KEY_NAME};
new AWS.S3().getObject(params, function(err, json_data) {
    if (!err) {
        // Body is a Buffer; Buffer.from avoids the deprecated new Buffer()
        var json = JSON.parse(Buffer.from(json_data.Body).toString('utf8'));

        // PROCESS JSON DATA
        // ...
    }
});
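
Note that JSON.parse throws on malformed input, so in practice you may want to wrap it. A defensive sketch, not part of the original answer:

new AWS.S3().getObject(params, function(err, json_data) {
    if (err) return console.log(err, err.stack);
    try {
        var json = JSON.parse(json_data.Body.toString('utf8'));
        // PROCESS JSON DATA
    } catch (parseErr) {
        console.log('Object is not valid JSON:', parseErr);
    }
});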

Solution 5

I haven't figured out why yet, but the createReadStream/pipe approach didn't work for me. I was trying to download a large CSV file (300 MB+) and got duplicated lines. It seemed like a random issue; the final file size varied with each download attempt.

I ended up using another way, based on AWS JS SDK examples:

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');

s3.getObject(params)
    .on('httpData', function(chunk) { file.write(chunk); })
    .on('httpDone', function() { file.end(); })
    .send();

This way, it worked like a charm.
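
For completeness, the request object also emits an 'error' event, so a hedged extension of this approach would close the file on failure as well:

s3.getObject(params)
    .on('httpData', function(chunk) { file.write(chunk); })
    .on('httpDone', function() { file.end(); })
    .on('error', function(err) { console.log(err); file.end(); })
    .send();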


Comments

  • Joel, almost 2 years ago

    I am attempting to read a file that is in an AWS S3 bucket using

    fs.readFile(file, function (err, contents) {
      var myLines = contents.Body.toString().split('\n')
    })
    

    I've been able to download and upload a file using the node aws-sdk, but I am at a loss as to how to simply read it and parse the contents.

    Here is an example of how I am reading the file from s3:

    var s3 = new AWS.S3();
    var params = {Bucket: 'myBucket', Key: 'myKey.csv'}
    var s3file = s3.getObject(params)