Save hourly file from cURL response

curl -s -u twitterusername:twitterpassword https://stream.twitter.com/1/statuses/sample.json -o "somefile $(date +format).txt"

where format can be any of the following:

%a : abbreviated weekday name (Sun..Sat)
%b : abbreviated month name (Jan..Dec)
%B : full month name (January..December)
%d : day of month (01..31)
%e : day of month, blank padded ( 1..31)
%m : month (01..12)
%Y : year
%H : hour, 24-hour format (00..23)
%I : hour, 12-hour format (01..12)
%M : minute (00..59)
%j : day of year (001..366)
%D : date; same as %m/%d/%y
%F : full date; same as %Y-%m-%d
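For example, the specifiers above can be combined in a single format string (the filenames below are only illustrative):

```shell
# Print the full date; %F is shorthand for %Y-%m-%d
date +%F

# Hour and minute, as used later in this answer
date +%H:%M

# Build a timestamped filename the same way the curl command does
echo "somefile $(date +%F_%H).txt"
```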

So, in your case, this will save the file and dynamically add the current hour (%H) and minute (%M) to its name:

curl -s -u twitterusername:twitterpassword https://stream.twitter.com/1/statuses/sample.json -o "somefile $(date +%H:%M).txt"

Because you want curl to collect data for 1 hour, save it to a file, and then start over again, you need at least a small script. This one will do the job:

#!/bin/bash

while true; do
    curl -s -m 3600 -u twitterusername:twitterpassword https://stream.twitter.com/1/statuses/sample.json -o "somefile $(date +%H:%M).txt"
done

While you leave the script running, it will execute the command repeatedly: every 3600 seconds (1 hour, the -m 3600 parameter) curl will close and the command will be executed again.
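If you would rather have each file cover clock hours instead of arbitrary 3600-second windows starting from whenever the script was launched, a small variation is possible. This is only a sketch, not part of the original answer, and it assumes the same credentials and URL as above (the alignment is to UTC hour boundaries, which matches local clock hours in whole-hour timezones):

```shell
#!/bin/bash
# Sketch: restart curl at the top of each hour rather than every 3600 s
# from script start. Credentials/URL are placeholders from the answer.
while true; do
    # Seconds remaining until the next full hour (epoch time modulo 3600)
    secs_left=$(( 3600 - ( $(date +%s) % 3600 ) ))
    curl -s -m "$secs_left" -u twitterusername:twitterpassword \
        https://stream.twitter.com/1/statuses/sample.json \
        -o "somefile $(date +%H:%M).txt"
done
```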

Note that this does not just split the stream: it actually closes curl and re-opens it. I do not think it is possible to split the stream while curl is running.

You need to save the script somewhere, e.g. ~/curl_script.sh, and make it executable with chmod 755 ~/curl_script.sh before using it. Then, in a terminal, move to the folder where the script was saved and just type ./curl_script.sh.

To interrupt the script press Ctrl+c.

If you interrupt the script and resume it within the same minute, it will by default overwrite the previously collected data, so beware.
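One way to avoid that (again only a sketch, under the same assumptions as above) is to include the full date and the seconds in the filename, so every run writes to a unique file:

```shell
# Sketch: a filename like "somefile 2022-09-18_14-05-31.txt" is unique
# per run, so restarting within the same minute cannot clobber old data.
curl -s -m 3600 -u twitterusername:twitterpassword \
    https://stream.twitter.com/1/statuses/sample.json \
    -o "somefile $(date +%F_%H-%M-%S).txt"
```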

Let me know if you want to make other modifications to the script. For further curl parameters I recommend reading the curl man page (man curl in a terminal).

Have fun.


Author: Btibert3

Updated on September 18, 2022

Comments

  • Btibert3
    Btibert3 almost 2 years

    I am following along with Twitter's developer and want to do a very basic call against their filter real-time API. The code below almost gets me there:

    curl -s -u twitterusername:twitterpassword https://stream.twitter.com/1/statuses/sample.json -o "somefile.txt"
    

    My hope is dynamically name the file such that hourly logs of the data are captured.

    EDIT: It is worth noting that I am hoping this command remains open, and that the data I receive are continuous. I am looking to redirect the output every hour to different files.

    I am completely new to command line and ubuntu, so I don't even know where to start. Any help will be much appreciated.

  • Btibert3
    Btibert3 over 12 years
    Thanks for this great response, but this is creating a file with the data, not an hourly file. I am hoping to create hourly files from data that is streaming into my computer using the curl command.
  • Bruno Pereira
    Bruno Pereira over 12 years
    What happens if you run the command on a terminal? Can you give more output on why it is not what you are looking for?
  • Btibert3
    Btibert3 over 12 years
    When I run the command, the data file is saved with the time info in the file name, which is great, but the file continues to collect data well after the hour is completed. I just want the command to run non-stop, with separate files created every hour or so.
  • Bruno Pereira
    Bruno Pereira over 12 years
    Sorry, you're 100% right. It's fixed, drop a comment when you have tested it.
  • Btibert3
    Btibert3 over 12 years
    Great, thanks for your help. Fantastic response. I am really excited about learning Ubuntu.