Python script to use data from an Azure Storage Blob as a stream, and update the blob as a stream, without reading or uploading a local file


The documentation is still a work in progress, and I think it is getting better and better... Useful link:

To download a file as a stream from blob storage, you can use BytesIO:

from azure.storage.blob import BlockBlobService
from io import BytesIO
from shutil import copyfileobj

with BytesIO() as input_blob:
    with BytesIO() as output_blob:
        block_blob_service = BlockBlobService(account_name='my_account_name', account_key='my_account_key')

        # Download as a stream
        block_blob_service.get_blob_to_stream('mycontainer', 'myinputfilename', input_blob)

        # Rewind the input stream: get_blob_to_stream leaves the position at the
        # end of the downloaded data, so reading it without seek(0) yields nothing
        input_blob.seek(0)

        # Do whatever you want to do - here I am just copying the input stream to the output stream
        copyfileobj(input_blob, output_blob)
        ...

        # Rewind the output stream before uploading it
        output_blob.seek(0)

        # Create a new blob
        block_blob_service.create_blob_from_stream('mycontainer', 'myoutputfilename', output_blob)

        # Or overwrite the same blob (a blob with the same name is replaced)
        block_blob_service.create_blob_from_stream('mycontainer', 'myinputfilename', output_blob)
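Since the question is specifically about reading a CSV into pandas and writing a dataframe back, here is a minimal sketch of that round trip. It assumes the same legacy azure-storage-blob 2.x SDK (BlockBlobService) and placeholder account, container, and blob names that you would replace with your own:

import pandas as pd
from io import BytesIO
from azure.storage.blob import BlockBlobService

block_blob_service = BlockBlobService(account_name='my_account_name', account_key='my_account_key')

# Download the blob into an in-memory stream, then rewind it so pandas
# reads from the beginning of the buffer rather than from its end
input_stream = BytesIO()
block_blob_service.get_blob_to_stream('mycontainer', 'myinputfilename.csv', input_stream)
input_stream.seek(0)
df = pd.read_csv(input_stream)

# ... process df here ...

# Serialize the dataframe back to CSV in memory (including the header row)
# and upload it; create_blob_from_stream overwrites an existing blob
output_stream = BytesIO(df.to_csv(index=False).encode('utf-8'))
block_blob_service.create_blob_from_stream('mycontainer', 'myoutputfilename.csv', output_stream)

Writing the dataframe with to_csv(index=False) keeps the column names as the first row of the uploaded blob, so the header is not lost on the next download.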
Pepin Peng

Updated on June 26, 2022

Comments

  • Pepin Peng almost 2 years

    I have Python code for data processing, and I want to use an Azure block blob as the data input for the code, specifically a CSV file stored in block blob storage. Downloading the CSV from Azure blob to a local path, and uploading it the other way around, works fine when the code runs locally, but my code runs on an Azure virtual machine because it is too heavy for my MacBook Air, so pandas read_csv from a local path does not work in this case. I therefore have to download, upload, and update the CSV files in Azure Storage by stream, without saving them locally. Both the downloaded and uploaded CSVs are quite small, far below the block blob size limits.

    There aren't many tutorials that explain how to do this step by step, and the MS Docs are generally not great at explaining it either. My minimal code is as follows:

    For downloading from Azure blob storage:

    from azure.storage.blob import BlockBlobService
    import pandas as pd

    storage = BlockBlobService(account_name='myname', account_key='mykey')
    # here I don't know how to make a csv stream that could be used in the next steps
    file = storage.get_blob_to_stream('accountname', 'blobname', 'stream')
    df = pd.read_csv(file)
    # df for later steps


    For uploading and updating a blob, by stream, from a dataframe generated by the code:

    df  # dataframe generated by the code
    # I don't know how to do the preparation steps for df and the final upload operation
    storage.put_blob_to_list_by_stream('accountname', 'blobname', 'stream')


    Can you please make a step-by-step tutorial for me? For people with experience with Azure blob, this should not be very difficult.

    Or, if you have a better solution than using blob storage for my case, please drop some hints. Thanks.

  • Pepin Peng about 6 years
    Hi, I want to ask about the create_blob_from_stream method: if it is repeatedly executed without changing the filename, will it replace the blob with the same name in Azure Storage, or will it append the changes to the existing blob? Also, what is the best way to solve the header-loss problem when downloading a blob?
  • Thomas about 6 years
    create_blob_from_stream will replace the file. What do you mean by header-loss problems?
  • Pepin Peng about 6 years
    Hi, thanks again. As far as I understand, Azure blob storage does not keep column headers. That's why when I upload a CSV to blob storage and then download it, I find the CSV has lost its header and all the column names are gone.
  • Pepin Peng about 6 years
    Also, I tried using the method you provided to download a CSV into a stream. The read_csv method failed with an empty-value error, and using copyfileobj to create a new blob also produced an empty blob, which means no data was downloaded into the stream. Why does that happen? Why does this have to be so complicated to use?
  • Thomas about 6 years
    It is not related to blob storage. Blob storage stores files of any type. Are you just copying the file? My guess is that you need to copy the header as well when you read the file. Does the example I gave you work? Check the read_csv method; you may need to copy the header too.
  • Pepin Peng about 6 years
    Hi Thomas, you are my hope here. The script you gave me works for uploading a stream to a blob, but it cannot download properly. There is no error, but input_blob seems to end up empty, because output_blob is empty after executing your script with my blob account and an actual CSV file stored there.
  • Pepin Peng about 6 years
    block_blob_service.get_blob_to_stream('flowshop', 'datatest1.csv', input_blob); df = pd.read_csv(input_blob, header=None); print(df). The error message I get from this is pandas.parser.TextReader.__cinit__ (pandas\parser.c:6171) pandas.io.common.EmptyDataError: No columns to parse from file. (A sketch of the seek(0) fix for this appears after the comments.)
  • Thomas about 6 years
    Could you check how your file looks in your storage account? Sounds weird.
  • Pepin Peng about 6 years
    It looks normal. I can download the test CSV file directly from Azure Storage Explorer and there is data inside. I tried other CSV files, and no data actually downloaded into input_blob either. It's really weird...
  • Pepin Peng about 6 years
    I have posted a new question with code and a picture, please check it out.
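
For the EmptyDataError and the empty output_blob reported in the comments above, the usual cause is that get_blob_to_stream leaves the stream position at the end of the downloaded data, so any subsequent read starts there and sees nothing. A minimal sketch of the fix, assuming the same legacy BlockBlobService SDK and the container/blob names used in the comments ('flowshop', 'datatest1.csv'):

import pandas as pd
from io import BytesIO
from azure.storage.blob import BlockBlobService

block_blob_service = BlockBlobService(account_name='my_account_name', account_key='my_account_key')

input_blob = BytesIO()
block_blob_service.get_blob_to_stream('flowshop', 'datatest1.csv', input_blob)

# Without this seek(0) the buffer is read from its end, so read_csv sees
# no data and raises EmptyDataError ("No columns to parse from file")
input_blob.seek(0)

df = pd.read_csv(input_blob)
print(df.head())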