How to successfully parse the output of FFMpeg in NodeJS


Solution 1

An update on this: I worked with one of the guys from the #ffmpeg IRC channel on Freenode. The answer was to send the progress output to stdout via a pipe.

For example, I appended the following to the FFmpeg command:

-progress pipe:1

The -progress flag emits information about the stream roughly once a second, so it is essentially the same data you would normally scrape from stderr, but delivered on stdout in a format that is easy to parse. The following is taken from the documentation:

-progress url (global) Send program-friendly progress information to url. Progress information is written approximately every second and at the end of the encoding process. It is made of "key=value" lines. key consists of only alphanumeric characters. The last key of a sequence of progress information is always "progress".
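
For reference, a progress block on stdout looks roughly like this (the values here are illustrative, and the exact set of keys varies with the ffmpeg build and options). The final block ends with progress=end instead of progress=continue:

frame=4852
fps=30.00
bitrate=1545.1kbits/s
total_size=31238144
out_time=00:02:41.740000
speed=1x
progress=continue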

Here is an example of the code I used to parse the stream information:

ffmpeg.stdout.on('data', function (data) {

    // Each chunk holds one or more "key=value" lines from -progress.
    var tLines = data.toString().split('\n');
    var progress = {};
    for (var i = 0; i < tLines.length; i++) {
        var item = tLines[i].split('=');
        if (typeof item[0] != 'undefined' && typeof item[1] != 'undefined') {
            progress[item[0].trim()] = item[1].trim();
        }
    }

    // 'progress' is now an object mapping field names to values,
    // e.g. progress.frame, progress.fps, progress.out_time
    console.log(progress);

});
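
For completeness, here is a minimal sketch of how that handler hooks up to the spawned process (the input and destination are placeholders; substitute your own command):

var spawn = require('child_process').spawn;

// '-progress pipe:1' sends the key=value feed to stdout (file descriptor 1),
// while ffmpeg's normal log output continues to go to stderr.
var ffmpeg = spawn('ffmpeg', [
    '-progress', 'pipe:1',
    '-i', 'input.mp4',              // placeholder input
    '-f', 'mpegts',
    'http://localhost:8081/stream'  // placeholder destination
]);

With that in place, the stdout handler above receives a fresh block of key=value pairs roughly every second.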

Thanks to all who commented!

Solution 2

In the spirit of not reinventing the wheel, you might want to try using fluent-ffmpeg. It dispatches a progress event with a number of useful fields:

'progress': transcoding progress information

The progress event is emitted every time ffmpeg reports progress information. It is emitted with an object argument with the following keys:

  • frames: total processed frame count
  • currentFps: framerate at which FFmpeg is currently processing
  • currentKbps: throughput at which FFmpeg is currently processing
  • targetSize: current size of the target file in kilobytes
  • timemark: the timestamp of the current frame in seconds
  • percent: an estimation of the progress percentage
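
A minimal sketch of wiring up that event (the file names and codec are placeholders):

var ffmpeg = require('fluent-ffmpeg');

ffmpeg('input.mp4')
    .videoCodec('libx264')
    .on('progress', function (progress) {
        // e.g. "frames: 4852, 42.5% done"
        console.log('frames: ' + progress.frames + ', ' + progress.percent + '% done');
    })
    .on('end', function () {
        console.log('transcoding finished');
    })
    .save('output.mp4');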

If you're curious about how they do this, you can read the source, starting from here

FFmpeg uses stderr to output log info because stdout is reserved for piping the encoded output to other processes. The stuff on stderr is actually just debug information, not the actual output of the process.
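
You can see that separation in a plain child_process sketch (paths are placeholders):

var spawn = require('child_process').spawn;
var fs = require('fs');

var ff = spawn('ffmpeg', ['-i', 'input.mp4', '-f', 'mpegts', 'pipe:1']);

ff.stdout.pipe(fs.createWriteStream('output.ts'));  // the actual media output
ff.stderr.on('data', function (d) {
    process.stderr.write(d);                        // just the log/debug feed
});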


BONUS ROUND

I've seen some hacky video players that use websockets to stream video, but that approach has a number of issues. I'm not going to go over them here, but I will explain why I think you should use hls.js instead.

Support is pretty good: it works basically everywhere except old IE. It uses Media Source Extensions (MSE) to upgrade the standard video element, so you don't have to wrestle with building a custom player.

Here are the docs for the hls format flag

Here's some code that I'm using to stream from an IPTV box to a web page.

// assumes: var FFmpeg = require('fluent-ffmpeg'), request = require('request'), path = require('path')
this.ffmpeg = new FFmpeg()
this.ffmpeg.input(request(this.http_stream))
    .videoCodec('copy')               // pass the source video through untouched
    .audioCodec('copy')               // same for the audio
    .outputOptions([
        '-f hls',
        '-hls_list_size 6',           // keep the last 6 segments in the playlist
        '-hls_flags delete_segments'  // delete segments that fall off the list
    ])
    .output( path.join(this.out_dir, 'video.m3u8') )
    .run()

It generates a .m3u8 manifest file along with segmented mpeg-ts video files. All you need to do after that is load the m3u8 file into the hls.js player and you have a live stream!
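
On the browser side, attaching that manifest with hls.js is only a few lines (the element id and manifest URL are assumptions based on the output path above):

var video = document.getElementById('video');

if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource('/video.m3u8');   // the manifest ffmpeg writes above
    hls.attachMedia(video);
    hls.on(Hls.Events.MANIFEST_PARSED, function () {
        video.play();
    });
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = '/video.m3u8';       // Safari plays HLS natively
}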

If you're going to re-encode the stream, you will probably see some low fps and glitchiness. I'm lucky since my source stream is already encoded as mpeg-ts.


Comments

  • Dahknee
    Dahknee almost 2 years

    So I have seen a lot of topics on FFmpeg, and it's a great tool I learnt about today, but I have spent the day perfecting the command and now am a little stuck with the NodeJS part.

    In essence the command does the following: it takes input from a Mac OS X webcam and then streams it to a websocket. Now, I looked at a lot of the NodeJS libraries, but I couldn't find one that did what I need, or I couldn't work out how to make it do so. Here is an example of the command that I am using:

    ffmpeg -f avfoundation -framerate 30 -video_size 640x480 -pix_fmt uyvy422 -i "0:1" -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 http://localhost:8081/stream
    

    This does everything I need for the streaming side of things, but I wish to call it via NodeJS and then be able to monitor the log and parse the data that comes back, for example:

    frame= 4852 fps= 30 q=6.8 size=   30506kB time=00:02:41.74 bitrate=1545.1kbits/s speed=   1x    \r
    

    and use it to get a JSON array back for me to output to a webpage.

    Now all I am doing is working on ways of actually parsing the data. I have looked at lots of other answers for things like this, but I can't seem to split/replace/regex it; I can't get anything but one long string out of it.

    Here is the code I am using (NodeJS):

    var ffmpeg = require('child_process').spawn('/usr/local/Cellar/ffmpeg/3.3.1/bin/ffmpeg', [
        '-f', 'avfoundation', '-framerate', '30', '-video_size', '640x480',
        '-pix_fmt', 'uyvy422', '-i', '0:1',
        '-f', 'mpegts', '-codec:v', 'mpeg1video', '-s', '640x480',
        '-b:v', '1000k', '-bf', '0',
        'http://localhost:8081/test'
    ]);
    
    ffmpeg.on('error', function (err) {
        console.log(err);
    });
    
    ffmpeg.on('close', function (code) {
        console.log('ffmpeg exited with code ' + code);
    });
    
    ffmpeg.stderr.on('data', function (data) {
        // console.log('stderr: ' + data);
        var tData = data.toString('utf8');
        // var a = tData.split('[\\s\\xA0]+');
        var a = tData.split('\n');
        console.log(a);
    });
    
    ffmpeg.stdout.on('data', function (data) {
        // Buffer.from() replaces the deprecated new Buffer() constructor
        var frame = Buffer.from(data).toString('base64');
        // console.log(frame);
    });
    

    I have tried splitting on new lines, carriage returns, spaces and tabs, but I just can't seem to get a basic array of pieces that I can work with.
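
    (Purely illustrative: a small regex sketch that would break the sample status line above into named fields:)

    var line = 'frame= 4852 fps= 30 q=6.8 size=   30506kB time=00:02:41.74 bitrate=1545.1kbits/s speed=   1x';
    var stats = {};
    var re = /(\w+)=\s*(\S+)/g;  // key, optional spaces, then the value token
    var m;
    while ((m = re.exec(line)) !== null) {
        stats[m[1]] = m[2];      // e.g. stats.frame === '4852', stats.time === '00:02:41.74'
    }
    console.log(JSON.stringify(stats));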

    Another thing to note: you will notice the log comes back via stderr. I have seen this online and apparently it happens for a lot of people, so I am not sure what the deal is with that, but that is why the parsing code is in the stderr callback.

    Any help is very much appreciated, as I am truly confused about what I am doing wrong.

    Thanks.

  • Dahknee
    Dahknee almost 7 years
    Thanks for this, do you know how to map the command I am currently using onto this library? I looked at it but wasn't sure how to actually get it to run the command with my inputs and then stream them to a websocket; I couldn't find an example, and the examples seemed to only cover video files...? If so I will definitely use this!!
  • posit labs
    posit labs almost 7 years
    There is an example of streaming to hls format here: github.com/fluent-ffmpeg/node-fluent-ffmpeg/blob/master/…
  • Dahknee
    Dahknee almost 7 years
    Where it says .save(), do you think I can put the URL for the websocket there? And hmm, not sure what hls is? The .addOption etc., is that just to add more to the command?
  • posit labs
    posit labs almost 7 years
    Yeah, I think addOption will just add a flag in place. I'm not sure. Anyway, I updated the answer with some more info.
  • Dahknee
    Dahknee almost 7 years
    Ahh okay, thank you! And what do you mean by your bonus stuff? The websocket way seems fairly stable; I can't see a better way of doing this, for broadcasting to a large base of users anyway. We tested it internally at work and were able to get 50 people on it with less than a second of latency, and that was on an external server. The only thing we had an issue with was audio in Safari on iOS... :/
  • posit labs
    posit labs almost 7 years
    I'm still new to this, but if I were going to production with a video streamer, I'd probably upload the mpeg-ts and m3u8 files to cloud storage and serve from there. I imagine a static file server will perform much better than websockets. Your websocket method might be ok, but how are you handling playback? Canvas and WebAudio? Surely the native player is more efficient.
  • Dahknee
    Dahknee almost 7 years
    The issue with this is we need to live stream the audio, it's not specifically a web player, but actually a live webcam and audio stream. Definitely with you though for serving static videos, but not for a live stream. For the live stream I am using jsmpeg and canvas.
  • user1742529
    user1742529 about 6 years
    hls.js is not realtime; it has a delay of a few seconds because of buffering and the playlist.
  • posit labs
    posit labs about 6 years
    That shouldn't stop you from using it. Really, no stream is realtime. The delay from the truck to the satellite to the office was 11 seconds on my last shoot, and that was BEFORE re-encoding for broadcast.