FFmpeg decoding .mp4 video file


See the "detailed description" in the muxing docs. You:

  1. set ctx->oformat using av_guess_format
  2. set ctx->pb using avio_open2
  3. call avformat_new_stream for each stream in the output file. If you're re-encoding, that means adding one output stream for each stream of the input file.
  4. call avformat_write_header
  5. call av_interleaved_write_frame in a loop
  6. call av_write_trailer
  7. close the file (avio_close) and clean up all allocated memory (a sketch of these steps follows the list)
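
A minimal sketch of those seven steps in C, assuming a straight stream copy from an already-opened input context (here called in_fmt_ctx), using the codec-parameters API found in current FFmpeg, and with most error handling omitted:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static int remux(AVFormatContext *in_fmt_ctx, const char *out_filename)
    {
        AVFormatContext *out_ctx = avformat_alloc_context();
        if (!out_ctx)
            return AVERROR(ENOMEM);

        /* 1. pick the container format from the output file name */
        out_ctx->oformat = av_guess_format(NULL, out_filename, NULL);

        /* 2. open the output file */
        if (avio_open2(&out_ctx->pb, out_filename, AVIO_FLAG_WRITE, NULL, NULL) < 0)
            return -1;

        /* 3. one output stream per input stream */
        for (unsigned i = 0; i < in_fmt_ctx->nb_streams; i++) {
            AVStream *out_st = avformat_new_stream(out_ctx, NULL);
            avcodec_parameters_copy(out_st->codecpar, in_fmt_ctx->streams[i]->codecpar);
            out_st->codecpar->codec_tag = 0;   /* let the muxer pick its own tag */
        }

        /* 4. write the container header */
        if (avformat_write_header(out_ctx, NULL) < 0)
            return -1;

        /* 5. shovel packets across, rescaling timestamps into each
         *    output stream's time base */
        AVPacket *pkt = av_packet_alloc();
        while (av_read_frame(in_fmt_ctx, pkt) >= 0) {
            AVStream *in_st  = in_fmt_ctx->streams[pkt->stream_index];
            AVStream *out_st = out_ctx->streams[pkt->stream_index];
            av_packet_rescale_ts(pkt, in_st->time_base, out_st->time_base);
            av_interleaved_write_frame(out_ctx, pkt);  /* takes ownership of pkt's data */
        }
        av_packet_free(&pkt);

        /* 6. write the trailer, then 7. close the file and free everything */
        av_write_trailer(out_ctx);
        avio_closep(&out_ctx->pb);
        avformat_free_context(out_ctx);
        return 0;
    }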

Comments

  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot almost 2 years

    I'm working on a project that needs to open an .mp4 file, read its frames one by one, decode them, re-encode them with a better type of lossless compression and save them into a file.

    Please correct me if I'm wrong about the order of doing things, because I'm not 100% sure how this particular thing should be done. From my understanding it should go like this (see the sketch after the list):

    1. Open input .mp4 file
    2. Find stream info -> find video stream index
    3. Copy codec pointer of found video stream index into AVCodecContext type pointer
    4. Find decoder -> allocate codec context -> open codec
    5. Read frame by frame -> decode the frame -> encode the frame -> save it into a file
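
    A minimal sketch of steps 1-5 on the decode side (using the send/receive decode API in current FFmpeg; the older avcodec_decode_video2() loop is equivalent), with the re-encoding/saving of each frame left as a placeholder:

        #include <libavformat/avformat.h>
        #include <libavcodec/avcodec.h>

        static int decode_file(const char *filename)
        {
            AVFormatContext *fmt_ctx = NULL;

            /* 1. open the input .mp4 */
            if (avformat_open_input(&fmt_ctx, filename, NULL, NULL) < 0)
                return -1;

            /* 2. find stream info and pick the video stream */
            avformat_find_stream_info(fmt_ctx, NULL);
            int vidx = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
            if (vidx < 0)
                return vidx;

            /* 3 + 4. find the decoder, fill a codec context from the stream's
             * parameters, then open it */
            const AVCodec *dec =
                avcodec_find_decoder(fmt_ctx->streams[vidx]->codecpar->codec_id);
            AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
            avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[vidx]->codecpar);
            avcodec_open2(dec_ctx, dec, NULL);

            /* 5. read packet by packet, decode into frames
             * (flushing the decoder at EOF omitted for brevity) */
            AVPacket *pkt   = av_packet_alloc();
            AVFrame  *frame = av_frame_alloc();
            while (av_read_frame(fmt_ctx, pkt) >= 0) {
                if (pkt->stream_index == vidx) {
                    avcodec_send_packet(dec_ctx, pkt);
                    while (avcodec_receive_frame(dec_ctx, frame) == 0) {
                        /* ...re-encode / save the frame here... */
                    }
                }
                av_packet_unref(pkt);
            }

            av_frame_free(&frame);
            av_packet_free(&pkt);
            avcodec_free_context(&dec_ctx);
            avformat_close_input(&fmt_ctx);
            return 0;
        }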
    

    So far I've encountered a couple of problems. For example, if I want to save a frame using the av_interleaved_write_frame() function, I can't open the input .mp4 file with avformat_open_input(), since that populates the filename part of the AVFormatContext structure with the input file name and therefore I can't "write" into that file. I've tried a different solution using av_guess_format(), but when I dump the format with dump_format() I get nothing, so I can't find stream information about which codec it is using.

    So if anyone has any suggestions, I would really appreciate them. Thank you in advance.

    • Eugene
      Eugene about 8 years
      Are you trying to convert a .mp4 to a series of lossless still images?
    • Sir DrinksCoffeeALot
      Sir DrinksCoffeeALot about 8 years
      Well, I'm trying to split an .mp4 file into frames, compress them and send them over a network. On the other end those frames will get concatenated back into an .mp4 file.
    • Eugene
      Eugene about 8 years
      What's the point of that? And compressing lossless images is pointless. The mp4 format is already designed as an efficient container. This process is slow, processor intensive, bandwidth inefficient, etc.
    • Sir DrinksCoffeeALot
      Sir DrinksCoffeeALot about 8 years
      It's because I need to find out the differences between multiple codecs; the main goal is to split 4K video into frames, compress them and send them over a 1 Gbit connection. So I need to find out which codec would be the best compression-wise. As an example I have an HD file on which I need to do the experiments.
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    I'm not using a static build of ffmpeg, I'm using their API in my own project.
  • Eugene
    Eugene about 8 years
    FFmpeg is the command line utility. Which library and language are you using? Libavcodec?
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    I'm using libavcodec/format/device/filter/util/postproc/swscale. I'm writing in C. Basically it's an old dev build of FFmpeg.
  • Gyan
    Gyan about 8 years
    FFmpeg is a project containing a bunch of libraries for A/V manipulation and an optional set of binaries such as ffmpeg, ffplay, etc., which allow you to use the functions available in the libraries.
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    Do you know how I can copy the contents of an AVFrame structure to a char* buffer, so I can send that buffer using winsock2's send() function to a different process? I'm trying to create local server-client communication to measure the time needed to send a frame using different compression methods.
  • Ronald S. Bultje
    Ronald S. Bultje about 8 years
    for (y = 0; y < height; y++) memcpy(ptr + y * width, frame->data[0] + y * frame->linesize[0], width). You can also use avcodec_encode_video2() to compress frames (instead of gzip or so, which is what you're probably trying to do).
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    Hmm, if I do it like that rather than copying the whole AVFrame structure, considering I'm copying raw video frames, would I be able to encode that frame (based only on frame->data[0] and frame->linesize[0]) on the recipient side using the avcodec_encode_video2() function?
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    About compression, I was planning to use LZW. I will take a closer look at avcodec_encode_video2() compression-wise. Btw, thank you for your responses, I really appreciate it.
  • Ronald S. Bultje
    Ronald S. Bultje about 8 years
    You would do the same for data[1-2] with linesize[1-2] (assuming some planar YUV format), sorry, forgot to mention that; or rather, it depends on the number of planes for your pixfmt (see pixdesc).
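
    As a sketch of that per-plane copy (the helper below and its names are just illustrative), libavutil's imgutils functions do the row-by-row memcpy for every plane of whatever pixel format the frame uses, which avoids special-casing the chroma planes yourself:

        #include <libavutil/frame.h>
        #include <libavutil/imgutils.h>
        #include <libavutil/mem.h>

        /* Flatten a decoded AVFrame into one contiguous buffer suitable for send(). */
        static uint8_t *frame_to_buffer(const AVFrame *frame, int *out_size)
        {
            int size = av_image_get_buffer_size(frame->format, frame->width,
                                                frame->height, 1);
            if (size < 0)
                return NULL;

            uint8_t *buf = av_malloc(size);
            if (!buf)
                return NULL;

            /* Copies every plane row by row, dropping the per-row padding that
             * frame->linesize[] may include. */
            av_image_copy_to_buffer(buf, size,
                                    (const uint8_t * const *)frame->data,
                                    frame->linesize,
                                    frame->format, frame->width, frame->height, 1);
            *out_size = size;
            return buf;
        }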
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    I succeeded in sending a single decoded frame from one process to another using the send/recv functions. I determined the size of the char* buffer needed to store a frame of type PIX_FMT_RGB24 using avpicture_get_size(). Copying frame->data[0] was done like this: memcpy(buffer + y * frame->linesize[0], frame->data[0] + y * frame->linesize[0], width * 3); and writing to a .ppm file on the client side was done with: fwrite(buffer + y * width * 3, 1, width * 3, pf);. Basically I just needed to multiply the width from your code by 3 (since it's RGB24) or use the exact value stored in frame->linesize[0].
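
    For reference, a cleaned-up sketch of that RGB24-to-.ppm path (assuming the frame has already been converted to AV_PIX_FMT_RGB24, e.g. with swscale; names are illustrative):

        #include <stdio.h>
        #include <libavutil/frame.h>

        /* Write one AV_PIX_FMT_RGB24 frame as a binary PPM (P6) file. */
        static int write_ppm(const AVFrame *frame, const char *filename)
        {
            FILE *f = fopen(filename, "wb");
            if (!f)
                return -1;

            fprintf(f, "P6\n%d %d\n255\n", frame->width, frame->height);

            /* Each row carries width * 3 payload bytes, but rows inside
             * frame->data[0] are frame->linesize[0] bytes apart. */
            for (int y = 0; y < frame->height; y++)
                fwrite(frame->data[0] + y * frame->linesize[0], 1, frame->width * 3, f);

            fclose(f);
            return 0;
        }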
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    I have one more question before I hopefully stop bugging you about FFmpeg: if I want to use the LZW compression method which is already implemented in FFmpeg, how would I do it using the avcodec_encode_video2() function? I guess I need to populate the AVCodecContext variable with different parameters before calling the encode function?
  • Ronald S. Bultje
    Ronald S. Bultje about 8 years
    ffmpeg has no vanilla lzw encoder. There's a tiff and a gif encoder, which use lzw internally...
  • Sir DrinksCoffeeALot
    Sir DrinksCoffeeALot about 8 years
    Yeah, I saw that tiff uses LZW internally; I was hoping I could somehow incorporate their source into mine. I'll try to do that today.
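
    One possible sketch of that route: instead of pulling the LZW code out of the TIFF encoder, open the TIFF encoder itself and request LZW through its private option (the "compression_algo" option name is an assumption based on current FFmpeg builds; check "ffmpeg -h encoder=tiff" for your version):

        #include <libavcodec/avcodec.h>
        #include <libavutil/opt.h>

        /* Open a TIFF encoder configured for LZW compression. */
        static AVCodecContext *open_tiff_lzw(int width, int height)
        {
            const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_TIFF);
            if (!codec)
                return NULL;

            AVCodecContext *ctx = avcodec_alloc_context3(codec);
            if (!ctx)
                return NULL;

            ctx->width     = width;
            ctx->height    = height;
            ctx->pix_fmt   = AV_PIX_FMT_RGB24;     /* one of the formats TIFF accepts */
            ctx->time_base = (AVRational){1, 25};  /* required even for still images */

            /* assumed private option; "lzw" selects LZW compression */
            av_opt_set(ctx->priv_data, "compression_algo", "lzw", 0);

            if (avcodec_open2(ctx, codec, NULL) < 0) {
                avcodec_free_context(&ctx);
                return NULL;
            }
            /* feed frames with avcodec_send_frame()/avcodec_receive_packet(),
             * or avcodec_encode_video2() on older builds */
            return ctx;
        }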