Docker: mount S3 bucket in container


Solution 1

There doesn't seem to be out-of-the-box support for Amazon S3 in popular container storage solutions like Flocker and EMC REX-Ray. However, if you're open to storing your data on Amazon EBS volumes, EMC REX-Ray lets you create, mount, and take snapshots of your volumes.

Of course, the approach you suggested works perfectly well too. You can install the AWS CLI on the host running your containers and write a simple cron job that copies the data from the host directory mapped to your container volume to your S3 bucket, as sketched below.
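For example, a minimal crontab sketch of that idea could use aws s3 sync; the host path and bucket name here are placeholders, not values from the question:

# Hypothetical crontab entry: sync the host directory backing the container
# volume to an S3 bucket once an hour
0 * * * * aws s3 sync /data/my-container-volume s3://your-s3-bucket-name/backup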

Solution 2

There are different approaches depending on what you want to accomplish, but here is how I did it using s3fs-fuse.

I created a Docker image based on Ubuntu that also includes some useful dependencies (based on my requirements):

- Install the AWS CLI
- Install s3fs-fuse
- Mount S3 in a directory

Dockerfile

FROM ubuntu:18.04

## Some utilities
RUN apt-get update -y && \
    apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev pkg-config libssl-dev mime-support automake libtool wget tar git unzip
RUN apt-get install lsb-release -y  && apt-get install zip -y && apt-get install vim -y

## Install AWS CLI
RUN apt-get update && \
    apt-get install -y \
        python3 \
        python3-pip \
        python3-setuptools \
        groff \
        less \
    && pip3 install --upgrade pip \
    && apt-get clean

RUN pip3 --no-cache-dir install --upgrade awscli

## Install S3 Fuse
RUN rm -rf /usr/src/s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse/ /usr/src/s3fs-fuse
WORKDIR /usr/src/s3fs-fuse 
RUN ./autogen.sh && ./configure && make && make install

## Create folder
WORKDIR /var/www
RUN mkdir s3

## Set Your AWS Access credentials
ENV AWS_ACCESS_KEY=YOURAWSACCESSKEY
ENV AWS_SECRET_ACCESS_KEY=YOURAWSSECRETACCESSKEY

## Set the directory where you want to mount your s3 bucket
ENV S3_MOUNT_DIRECTORY=/var/www/s3

## Replace with your s3 bucket name
ENV S3_BUCKET_NAME=your-s3-bucket-name

## S3fs-fuse credential config
RUN echo $AWS_ACCESS_KEY:$AWS_SECRET_ACCESS_KEY > /root/.passwd-s3fs && \
    chmod 600 /root/.passwd-s3fs

## change workdir to /
WORKDIR /

## Entry Point
ADD start-script.sh /start-script.sh
RUN chmod 755 /start-script.sh 
CMD ["/start-script.sh"]

and the start script it references should be:

start-script.sh

#!/bin/bash
# Run s3fs in the foreground (-f) so the container keeps running while the bucket is mounted
exec s3fs "$S3_BUCKET_NAME" "$S3_MOUNT_DIRECTORY" -f

Then build and run your image (see the sketch below). If you create a file in the mounted directory, it should also show up in the S3 console, and vice versa.
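A rough build-and-run sketch, assuming the Dockerfile and start-script.sh above sit in the current directory and using a placeholder image name; note that FUSE mounts generally need extra privileges inside the container:

# Build the image
docker build -t s3fs-demo .

# FUSE needs /dev/fuse and the SYS_ADMIN capability (or --privileged)
docker run -d --name s3fs-demo \
    --cap-add SYS_ADMIN --device /dev/fuse \
    s3fs-demo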

I have a more detailed explanation here with a working example: https://github.com/skypeter1/docker-s3-bucket


Comments

  • Adam, almost 2 years ago:

    What is your best practice for mounting an S3 bucket inside a Docker host? Is there a way to do this transparently? Or do I rather need to mount a volume to the host drive using the VOLUME directive, and then back up files to S3 with cron manually?