 @chapter Muxers
 @c man begin MUXERS
 
 Muxers are configured elements in FFmpeg which allow writing
 multimedia streams to a particular type of file.
 
 When you configure your FFmpeg build, all the supported muxers
 are enabled by default. You can list all available muxers using the
 configure option @code{--list-muxers}.
 
 You can disable all the muxers with the configure option
 @code{--disable-muxers} and selectively enable / disable single muxers
 with the options @code{--enable-muxer=@var{MUXER}} /
 @code{--disable-muxer=@var{MUXER}}.
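
 For example, a build limited to just a couple of muxers could be
 configured along these lines (the muxer names are only illustrative):
 @example
 ./configure --disable-muxers --enable-muxer=matroska --enable-muxer=mp4
 @end example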
 
 The option @code{-formats} of the ff* tools will display the list of
 enabled muxers.
 
 A description of some of the currently available muxers follows.
 
 @anchor{crc}
 @section crc
 
 CRC (Cyclic Redundancy Check) testing format.
 
 This muxer computes and prints the Adler-32 CRC of all the input audio
 and video frames. By default audio frames are converted to signed
 16-bit raw audio and video frames to raw video before computing the
 CRC.
 
 The output of the muxer consists of a single line of the form:
 CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
 8 digits containing the CRC for all the decoded input frames.
 
 For example to compute the CRC of the input, and store it in the file
 @file{out.crc}:
 @example
 ffmpeg -i INPUT -f crc out.crc
 @end example
 
 You can print the CRC to stdout with the command:
 @example
 ffmpeg -i INPUT -f crc -
 @end example
 
 You can select the output format of each frame with @command{ffmpeg} by
 specifying the audio and video codec and format. For example to
 compute the CRC of the input audio converted to PCM unsigned 8-bit
 and the input video converted to MPEG-2 video, use the command:
 @example
 ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
 @end example
 
 See also the @ref{framecrc} muxer.
 
 @anchor{framecrc}
 @section framecrc
 
 Per-packet CRC (Cyclic Redundancy Check) testing format.
 
 This muxer computes and prints the Adler-32 CRC for each audio
 and video packet. By default audio frames are converted to signed
 16-bit raw audio and video frames to raw video before computing the
 CRC.
 
 The output of the muxer consists of a line for each audio and video
 packet of the form:
 @example
 @var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
 @end example
 
 @var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
 CRC of the packet.
 
 For example to compute the CRC of the audio and video frames in
 @file{INPUT}, converted to raw audio and video packets, and store it
 in the file @file{out.crc}:
 @example
 ffmpeg -i INPUT -f framecrc out.crc
 @end example
 
 To print the information to stdout, use the command:
 @example
 ffmpeg -i INPUT -f framecrc -
 @end example
 
 With @command{ffmpeg}, you can select the output format to which the
 audio and video frames are encoded before computing the CRC for each
 packet by specifying the audio and video codec. For example, to
 compute the CRC of each decoded input audio frame converted to PCM
 unsigned 8-bit and of each decoded input video frame converted to
 MPEG-2 video, use the command:
 @example
 ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
 @end example
 
 See also the @ref{crc} muxer.
 
 @anchor{framemd5}
 @section framemd5
 
 Per-packet MD5 testing format.
 
 This muxer computes and prints the MD5 hash for each audio
 and video packet. By default audio frames are converted to signed
 16-bit raw audio and video frames to raw video before computing the
 hash.
 
 The output of the muxer consists of a line for each audio and video
 packet of the form:
 @example
 @var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
 @end example
 
 @var{MD5} is a hexadecimal number representing the computed MD5 hash
 for the packet.
 
 For example to compute the MD5 of the audio and video frames in
 @file{INPUT}, converted to raw audio and video packets, and store it
 in the file @file{out.md5}:
 @example
 ffmpeg -i INPUT -f framemd5 out.md5
 @end example
 
 To print the information to stdout, use the command:
 @example
 ffmpeg -i INPUT -f framemd5 -
 @end example
 
 See also the @ref{md5} muxer.
 
 @anchor{image2}
 @section image2
 
 Image file muxer.
 
 The image file muxer writes video frames to image files.
 
 The output filenames are specified by a pattern, which can be used to
 produce sequentially numbered series of files.
 The pattern may contain the string "%d" or "%0@var{N}d", which
 specifies the position of the characters representing a numbering in
 the filenames. If the form "%0@var{N}d" is used, the string
 representing the number in each filename is 0-padded to @var{N}
 digits. The literal character '%' can be specified in the pattern with
 the string "%%".

 If the pattern contains "%d" or "%0@var{N}d", the first filename will
 contain the number 1, and all the following numbers will be sequential.
 
 The pattern may contain a suffix which is used to automatically
 determine the format of the image files to write.
 
 For example the pattern "img-%03d.bmp" will specify a sequence of
 filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
 @file{img-010.bmp}, etc.
 The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
 form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
 etc.
 
 The following example shows how to use @command{ffmpeg} for creating a
 sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
 taking one image every second from the input video:
 @example
 ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
 @end example
 
 Note that with @command{ffmpeg}, if the format is not specified with the
 @code{-f} option and the output filename specifies an image file
 format, the image2 muxer is automatically selected, so the previous
 command can be written as:
 @example
 ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
 @end example
 
 Note also that the pattern does not necessarily have to contain "%d" or
 "%0@var{N}d"; for example, to create a single image file
 @file{img.jpeg} from the input video you can employ the command:
 @example
 ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
 @end example
 
 The image muxer supports the .Y.U.V image file format. This format is
 special in that each image frame consists of three files, one for
 each of the YUV420P components. To read or write this image file format,
 specify the name of the '.Y' file. The muxer will automatically open the
 '.U' and '.V' files as required.
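
 For example, a command along roughly the following lines could be used
 to write one .Y/.U/.V file triplet per frame (the output pattern is
 purely illustrative, and you may need to force a planar YUV420P pixel
 format explicitly):
 @example
 ffmpeg -i in.avi -f image2 -pix_fmt yuv420p frame-%d.Y
 @end example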
 
 @anchor{md5}
 @section md5
 
 MD5 testing format.
 
 This muxer computes and prints the MD5 hash of all the input audio
 and video frames. By default audio frames are converted to signed
 16-bit raw audio and video frames to raw video before computing the
 hash.
 
 The output of the muxer consists of a single line of the form:
 MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
 the computed MD5 hash.
 
 For example to compute the MD5 hash of the input converted to raw
 audio and video, and store it in the file @file{out.md5}:
 @example
 ffmpeg -i INPUT -f md5 out.md5
 @end example
 
 You can print the MD5 to stdout with the command:
 @example
 ffmpeg -i INPUT -f md5 -
 @end example
 
 See also the @ref{framemd5} muxer.
 
 @section MOV/MP4/ISMV
 
 The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
 file has all the metadata about all packets stored in one location
 (written at the end of the file; it can be moved to the start for
 better playback using the @command{qt-faststart} tool). A fragmented
 file consists of a number of fragments, where packets and metadata
 about these packets are stored together. Writing a fragmented
 file has the advantage that the file is decodable even if the
 writing is interrupted (while a normal MOV/MP4 is undecodable if
 it is not properly finished), and it requires less memory when writing
 very long files (since writing normal MOV/MP4 files stores info about
 every single packet in memory until the file is closed). The downside
 is that it is less compatible with other applications.
 
 Fragmentation is enabled by setting one of the AVOptions that define
 how to cut the file into fragments:
 
 @table @option
 @item -moov_size @var{bytes}
 Reserves space for the moov atom at the beginning of the file instead of placing the
 moov atom at the end. If the space reserved is insufficient, muxing will fail.
 @item -movflags frag_keyframe
 Start a new fragment at each video keyframe.
 @item -frag_duration @var{duration}
 Create fragments that are @var{duration} microseconds long.
 @item -frag_size @var{size}
 Create fragments that contain up to @var{size} bytes of payload data.
 @item -movflags frag_custom
 Allow the caller to manually choose when to cut fragments, by
 calling @code{av_write_frame(ctx, NULL)} to write a fragment with
 the packets written so far. (This is only useful with other
 applications integrating libavformat, not from @command{ffmpeg}.)
 @item -min_frag_duration @var{duration}
 Don't create fragments that are shorter than @var{duration} microseconds.
 @end table
 
 If more than one condition is specified, fragments are cut when
 one of the specified conditions is fulfilled. The exception to this is
 @code{-min_frag_duration}, which has to be fulfilled for any of the other
 conditions to apply.
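
 For example, an illustrative command (the output name is chosen only
 for demonstration) that starts a new fragment at every video keyframe
 or at the latest after 10 seconds, while never producing fragments
 shorter than one second, could look like:
 @example
 ffmpeg -i INPUT -c copy -movflags frag_keyframe -frag_duration 10000000 -min_frag_duration 1000000 fragmented.mp4
 @end example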
 
 Additionally, the way the output file is written can be adjusted
 through a few other options:
 
 @table @option
 @item -movflags empty_moov
 Write an initial moov atom directly at the start of the file, without
 describing any samples in it. Generally, an mdat/moov pair is written
 at the start of the file, as a normal MOV/MP4 file, containing only
 a short portion of the file. With this option set, there is no initial
 mdat atom, and the moov atom only describes the tracks but has
 a zero duration.
 
 Files written with this option set do not work in QuickTime.
 This option is implicitly set when writing ismv (Smooth Streaming) files.
 @item -movflags separate_moof
 Write a separate moof (movie fragment) atom for each track. Normally,
 packets for all tracks are written in a moof atom (which is slightly
 more efficient), but with this option set, the muxer writes one moof/mdat
 pair for each track, making it easier to separate tracks.
 
 This option is implicitly set when writing ismv (Smooth Streaming) files.
 @end table
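
 As a minimal sketch, a fragmented MP4 with an initial sample-less moov
 atom could be written with a command of the following form (the output
 name is illustrative; remember that such files do not play in
 QuickTime):
 @example
 ffmpeg -i INPUT -c copy -movflags empty_moov+frag_keyframe fragmented.mp4
 @end example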
 
 Smooth Streaming content can be pushed in real time to a publishing
 point on IIS with this muxer. Example:
 @example
 ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
 @end example
 
 @section mpegts
 
 MPEG transport stream muxer.
 
 This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
 
 The muxer options are:
 
 @table @option
 @item -mpegts_original_network_id @var{number}
 Set the original_network_id (default 0x0001). This is the unique identifier
 of a network in DVB. Its main use is in the unique identification of a
 service through the path Original_Network_ID, Transport_Stream_ID.
 @item -mpegts_transport_stream_id @var{number}
 Set the transport_stream_id (default 0x0001). This identifies a
 transponder in DVB.
 @item -mpegts_service_id @var{number}
 Set the service_id (default 0x0001), also known as the program in DVB.
 @item -mpegts_pmt_start_pid @var{number}
 Set the first PID for PMT (default 0x1000, max 0x1f00).
 @item -mpegts_start_pid @var{number}
 Set the first PID for data packets (default 0x0100, max 0x0f00).
 @end table
 
 The recognized metadata settings in this muxer are @code{service_provider}
 and @code{service_name}. If they are not set, the default for
 @code{service_provider} is "FFmpeg" and the default for
 @code{service_name} is "Service01".
 
 @example
 ffmpeg -i file.mpg -c copy \
      -mpegts_original_network_id 0x1122 \
      -mpegts_transport_stream_id 0x3344 \
      -mpegts_service_id 0x5566 \
      -mpegts_pmt_start_pid 0x1500 \
      -mpegts_start_pid 0x150 \
      -metadata service_provider="Some provider" \
      -metadata service_name="Some Channel" \
      -y out.ts
 @end example
 
 @section null
 
 Null muxer.
 
 This muxer does not generate any output file; it is mainly useful for
 testing or benchmarking purposes.
 
 For example to benchmark decoding with @command{ffmpeg} you can use the
 command:
 @example
 ffmpeg -benchmark -i INPUT -f null out.null
 @end example
 
 Note that the above command does not read or write the @file{out.null}
 file, but specifying the output file is required by the @command{ffmpeg}
 syntax.
 
 Alternatively you can write the command as:
 @example
 ffmpeg -benchmark -i INPUT -f null -
 @end example
 
 @section matroska
 
 Matroska container muxer.
 
 This muxer implements the Matroska and WebM container specs.
 
 The recognized metadata settings in this muxer are:
 
 @table @option

 @item title=@var{title name}
 Name provided to a single track.

 @item language=@var{language name}
 Specifies the language of the track in the Matroska languages form.

 @item stereo_mode=@var{mode}
 Stereo 3D video layout of two views in a single video track.
 @table @option
 @item mono
 video is not stereo
 @item left_right
 Both views are arranged side by side, Left-eye view is on the left
 @item bottom_top
 Both views are arranged in top-bottom orientation, Left-eye view is at bottom
 @item top_bottom
 Both views are arranged in top-bottom orientation, Left-eye view is on top
 @item checkerboard_rl
 Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
 @item checkerboard_lr
 Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
 @item row_interleaved_rl
 Both views are arranged in row-based interleaving, Right-eye view is the first row
 @item row_interleaved_lr
 Both views are arranged in row-based interleaving, Left-eye view is the first row
 @item col_interleaved_rl
 Both views are arranged in column-based interleaving, Right-eye view is the first column
 @item col_interleaved_lr
 Both views are arranged in column-based interleaving, Left-eye view is the first column
 @item anaglyph_cyan_red
 All frames are in anaglyph format viewable through red-cyan filters
 @item right_left
 Both views are arranged side by side, Right-eye view is on the left
 @item anaglyph_green_magenta
 All frames are in anaglyph format viewable through green-magenta filters
 @item block_lr
 Both eyes laced in one Block, Left-eye view is first
 @item block_rl
 Both eyes laced in one Block, Right-eye view is first
 @end table
 @end table
 
 For example a 3D WebM clip can be created using the following command line:
 @example
 ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
 @end example
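
 Per-track title and language metadata can be set in a similar way
 using per-stream metadata options; for instance (the stream specifier
 and the tag values below are purely illustrative):
 @example
 ffmpeg -i INPUT -c copy -metadata:s:a:0 title="Commentary" -metadata:s:a:0 language=eng out.mkv
 @end example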
 
 @section segment
 
 Basic stream segmenter.
 
 The segmenter muxer outputs streams to a number of separate files of nearly
 fixed duration. The output filename pattern can be set in a fashion similar to
 @ref{image2}.
 
 Every segment starts with a video keyframe, if a video stream is present.
 The segment muxer works best with a single constant frame rate video.
 
 Optionally it can generate a flat list of the created segments, one segment
 per line.
 
 @table @option
 @item segment_format @var{format}
 Override the inner container format; by default it is guessed from the filename
 extension.
 @item segment_time @var{t}
 Set segment duration to @var{t} seconds.
 @item segment_list @var{name}
 Also generate a listfile named @var{name}.
 @item segment_list_size @var{size}
 Overwrite the listfile once it reaches @var{size} entries.
 @item segment_wrap @var{limit}
 Wrap around segment index once it reaches @var{limit}.
 @end table
 
 @example
 ffmpeg -i in.mkv -c copy -map 0 -f segment -segment_list out.list out%03d.nut
 @end example
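
 An illustrative variant (the values are chosen only for demonstration)
 that cuts segments of about 10 seconds each and reuses at most 5
 output names by wrapping the segment index:
 @example
 ffmpeg -i in.mkv -c copy -map 0 -f segment -segment_time 10 -segment_wrap 5 out%03d.nut
 @end example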
 
 @section mp3
 
 The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
 optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported; the
 @code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
 not written by default, but may be enabled with the @code{write_id3v1} option.
 
 For seekable output the muxer also writes a Xing frame at the beginning, which
 contains the number of frames in the file. It is useful for computing the
 duration of VBR files.
 
 The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
 are supplied to the muxer in the form of a video stream with a single packet.
 There can be any number of those streams, each of which will correspond to a
 single APIC frame.
 The stream metadata tags @var{title} and @var{comment} map to APIC
 @var{description} and @var{picture type} respectively. See
 @url{http://id3.org/id3v2.4.0-frames} for allowed picture types.
 
 Note that the APIC frames must be written at the beginning, so the muxer will
 buffer the audio frames until it gets all the pictures. It is therefore advised
 to provide the pictures as soon as possible to avoid excessive buffering.
 
 Examples:
 
 Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
 @example
 ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
 @end example
 
 Attach a picture to an mp3:
 @example
 ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" \
 -metadata:s:v comment="Cover (Front)" out.mp3
 @end example
 
 @c man end MUXERS