    2021-05-23 17:11:38
    ffplay Source Code Analysis

    -------------------------------------------------------------------Decoding

    A brief walkthrough of the 4.4 source. It does not dig into every detail; the focus is on flow and principles, written as a study record, in the hope of getting somewhere in the audio/video field. On Windows, ffplay renders with SDL; the rest of the flow (decoding, synchronization, etc.) is essentially the same across platforms. ffplay.c is a bit over 3000 lines, not too many, so with my limited C skills I will attempt a reading.

    Key calls in the entry function main

    int main(int argc, char **argv)
    {
    //on Windows, restrict the DLL search path (SetDllDirectory) before any libraries load
    init_dynload();
    
    //configure log flags and level
    av_log_set_flags(AV_LOG_SKIP_REPEATED);
    parse_loglevel(argc, argv, options);
    
    //initialize option defaults, install signal handlers, parse command-line options
    init_opts();
    signal(SIGINT , sigterm_handler); /* Interrupt (ANSI).    */
    signal(SIGTERM, sigterm_handler); /* Termination (ANSI).  */
    parse_options(NULL, argc, argv, options, opt_input_file);
    /**
    SDL window/renderer creation, event handling setup, etc. omitted here
    ...
    */
    
    //open the input and start decoding -- the key call
    VideoState *is;
    is = stream_open(input_filename, file_iformat);    
    
    //event loop: playback commands, window events, refresh
    event_loop(is);
    }
    

    1. Opening the stream: stream_open(input_filename, file_iformat)

    //Returns a pointer to VideoState, which holds all state for the current input: nearly every queue, clock, and control field used for decoding and playback
    static VideoState *stream_open(const char *filename,
                                   const AVInputFormat *iformat)
    {
    //allocate and zero VideoState, initialize fields ...
    VideoState *is;
    is = av_mallocz(sizeof(VideoState));
    //...
        
    /* start video display: two key helpers here, frame_queue_init and packet_queue_init */
    
        
        if (frame_queue_init(&is->pictq, &is->videoq, VIDEO_PICTURE_QUEUE_SIZE, 1) < 0)
            goto fail;
        if (frame_queue_init(&is->subpq, &is->subtitleq, SUBPICTURE_QUEUE_SIZE, 0) < 0)
            goto fail;
        if (frame_queue_init(&is->sampq, &is->audioq, SAMPLE_QUEUE_SIZE, 1) < 0)
            goto fail;
    
        if (packet_queue_init(&is->videoq) < 0 ||
            packet_queue_init(&is->audioq) < 0 ||
            packet_queue_init(&is->subtitleq) < 0)
            goto fail;
    
    //...initialize clocks, volume, etc.
    
    //spawn the read thread, which runs read_thread
    is->read_tid = SDL_CreateThread(read_thread, "read_thread", is);
    return is;
    }
    
    
    

    1.1 Initializing the frame queue: frame_queue_init

    //frame_queue_init initializes a FrameQueue, allocating an AVFrame for each slot.
    //Frame wraps an AVFrame plus its presentation metadata; FrameQueue is a ring buffer of Frames.
    typedef struct FrameQueue {
        Frame queue[FRAME_QUEUE_SIZE];
        int rindex;       //read index: the next frame to consume
        int windex;       //write index: the next free slot
        int size;
        int max_size;
        int keep_last;
        int rindex_shown; //0 at first, then stays 1 once a frame has been shown
        SDL_mutex *mutex;
        SDL_cond *cond;
        PacketQueue *pktq;
    } FrameQueue;

    typedef struct Frame {
        AVFrame *frame;
        AVSubtitle sub;
        int serial;
        double pts;           /* presentation timestamp for the frame */
        double duration;      /* estimated duration of the frame */
        int64_t pos;          /* byte position of the frame in the input file */
        int width;
        int height;
        int format;
        AVRational sar;
        int uploaded;
        int flip_v;
    } Frame;
        
    static int frame_queue_init(FrameQueue *f, PacketQueue *pktq, int max_size, int keep_last)
    {
        int i;
        memset(f, 0, sizeof(FrameQueue));
        if (!(f->mutex = SDL_CreateMutex())) { //create the mutex
            av_log(NULL, AV_LOG_FATAL, "SDL_CreateMutex(): %s\n", SDL_GetError());
            return AVERROR(ENOMEM);
        }
        if (!(f->cond = SDL_CreateCond())) { //create the condition variable
            av_log(NULL, AV_LOG_FATAL, "SDL_CreateCond(): %s\n", SDL_GetError());
            return AVERROR(ENOMEM);
        }
        f->pktq = pktq;
        f->max_size = FFMIN(max_size, FRAME_QUEUE_SIZE);
        f->keep_last = !!keep_last;
        for (i = 0; i < f->max_size; i++)
            if (!(f->queue[i].frame = av_frame_alloc()))
                return AVERROR(ENOMEM);
        return 0;
    }
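The ring-buffer behaviour of FrameQueue can be modelled in a few lines of plain C. This is a simplified sketch under stated assumptions: the real queue stores Frame structs, handles keep_last/rindex_shown, and blocks on an SDL mutex/condition variable instead of returning errors; all names below are illustrative, not FFmpeg's.

```c
#include <assert.h>

#define RB_CAP 16

// Simplified model of FrameQueue: windex is where the producer writes
// the next element, rindex is where the consumer reads.
typedef struct {
    int queue[RB_CAP];
    int rindex;   // read index (next element to consume)
    int windex;   // write index (next free slot)
    int size;     // number of elements currently queued
    int max_size;
} RingBuf;

static int rb_push(RingBuf *b, int v)
{
    if (b->size >= b->max_size)
        return -1;                  // full: the real code waits on a cond var
    b->queue[b->windex] = v;
    if (++b->windex == b->max_size) // wrap around, as frame_queue_push does
        b->windex = 0;
    b->size++;
    return 0;
}

static int rb_pop(RingBuf *b, int *v)
{
    if (b->size == 0)
        return -1;                  // empty: the real code waits instead
    *v = b->queue[b->rindex];
    if (++b->rindex == b->max_size)
        b->rindex = 0;
    b->size--;
    return 0;
}
```

Pushing when full and popping when empty return errors here; ffplay instead sleeps on f->cond until the peer side makes progress.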
    

    1.2 Initializing the packet queue: packet_queue_init

    //New in 4.4: pkt_list, a ring buffer created with av_fifo_alloc from libavutil/fifo.c
    static int packet_queue_init(PacketQueue *q)
    {
        memset(q, 0, sizeof(PacketQueue));
        q->pkt_list = av_fifo_alloc(sizeof(MyAVPacketList));
        if (!q->pkt_list)
            return AVERROR(ENOMEM);
        q->mutex = SDL_CreateMutex();
        if (!q->mutex) {
            av_log(NULL, AV_LOG_FATAL, "SDL_CreateMutex(): %s\n", SDL_GetError());
            return AVERROR(ENOMEM);
        }
        q->cond = SDL_CreateCond();
        if (!q->cond) {
            av_log(NULL, AV_LOG_FATAL, "SDL_CreateCond(): %s\n", SDL_GetError());
            return AVERROR(ENOMEM);
        }
        q->abort_request = 1;
        return 0;
    }
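The av_fifo created above is a byte-oriented FIFO that packet_queue_put enlarges on demand when it is full. A minimal model of that grow-on-write behaviour, storing fixed-size records the way the 4.4 PacketQueue stores MyAVPacketList entries (the names and the compact-or-double strategy below are illustrative, not FFmpeg's implementation):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Growable FIFO of fixed-size records.
typedef struct {
    unsigned char *buf;
    size_t elem_size;
    size_t head, tail, capacity;  // element indices into a linear buffer
} Fifo;

static Fifo *fifo_alloc(size_t elem_size)
{
    Fifo *f = calloc(1, sizeof(*f));
    if (!f) return NULL;
    f->elem_size = elem_size;
    f->capacity  = 1;             // av_fifo_alloc(sizeof(...)) holds one element
    f->buf = malloc(elem_size);
    if (!f->buf) { free(f); return NULL; }
    return f;
}

static int fifo_write(Fifo *f, const void *elem)
{
    if (f->tail == f->capacity) {
        if (f->head > 0) {        // reclaim consumed space first
            memmove(f->buf, f->buf + f->head * f->elem_size,
                    (f->tail - f->head) * f->elem_size);
            f->tail -= f->head;
            f->head = 0;
        } else {                  // out of room: grow, like av_fifo_grow
            size_t ncap = f->capacity * 2;
            unsigned char *nb = realloc(f->buf, ncap * f->elem_size);
            if (!nb) return -1;
            f->buf = nb;
            f->capacity = ncap;
        }
    }
    memcpy(f->buf + f->tail * f->elem_size, elem, f->elem_size);
    f->tail++;
    return 0;
}

static int fifo_read(Fifo *f, void *elem)
{
    if (f->head == f->tail) return -1;   // empty
    memcpy(elem, f->buf + f->head * f->elem_size, f->elem_size);
    f->head++;
    return 0;
}
```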
    

    1.3 The read thread: read_thread

    The function is very long. In short: it sets up the FFmpeg demuxing context, starts a decoder thread for each selected stream, then loops -- blocking when the queues are full, flushing the packet queues on seek requests, and otherwise reading packets and putting them on the queues.

    /* this thread gets the stream from the disk or the network */
    static int read_thread(void *arg)
    {
    
        AVPacket *pkt = NULL;
        pkt = av_packet_alloc();
    //allocate the demuxer context
        ic = avformat_alloc_context();
    
        ic->interrupt_callback.callback = decode_interrupt_cb;
        ic->interrupt_callback.opaque = is;
        if (!av_dict_get(format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE)) {
            av_dict_set(&format_opts, "scan_all_pmts", "1", AV_DICT_DONT_OVERWRITE);
            scan_all_pmts_set = 1;
        }
    //open the input file
        err = avformat_open_input(&ic, is->filename, is->iformat, &format_opts);
    
        if (scan_all_pmts_set)
            av_dict_set(&format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE);
    
        is->ic = ic;
        
        av_format_inject_global_side_data(ic);
        
        if (find_stream_info) {
            AVDictionary **opts = setup_find_stream_info_opts(ic, codec_opts);
            int orig_nb_streams = ic->nb_streams;
        
            err = avformat_find_stream_info(ic, opts);
        
            for (i = 0; i < orig_nb_streams; i++)
                av_dict_free(&opts[i]);
            av_freep(&opts);
    
        }
        
        if (ic->pb)
            ic->pb->eof_reached = 0; // FIXME hack, ffplay maybe should not use avio_feof() to test for the end
        
        if (seek_by_bytes < 0)
            seek_by_bytes = !!(ic->iformat->flags & AVFMT_TS_DISCONT) && strcmp("ogg", ic->iformat->name);
        
        is->max_frame_duration = (ic->iformat->flags & AVFMT_TS_DISCONT) ? 10.0 : 3600.0;
        
        if (!window_title && (t = av_dict_get(ic->metadata, "title", NULL, 0)))
            window_title = av_asprintf("%s - %s", t->value, input_filename);
        
        /* if seeking requested, we execute it */
        if (start_time != AV_NOPTS_VALUE) {
            int64_t timestamp;
        
            timestamp = start_time;
            /* add the stream start time */
            if (ic->start_time != AV_NOPTS_VALUE)
                timestamp += ic->start_time;
            ret = avformat_seek_file(ic, -1, INT64_MIN, timestamp, INT64_MAX, 0);
    
        }
        
        is->realtime = is_realtime(ic);
    
        
        if (!video_disable)
            st_index[AVMEDIA_TYPE_VIDEO] =
                av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO,
                                    st_index[AVMEDIA_TYPE_VIDEO], -1, NULL, 0);
        if (!audio_disable)
            st_index[AVMEDIA_TYPE_AUDIO] =
                av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO,
                                    st_index[AVMEDIA_TYPE_AUDIO],
                                    st_index[AVMEDIA_TYPE_VIDEO],
                                    NULL, 0);
        if (!video_disable && !subtitle_disable)
            st_index[AVMEDIA_TYPE_SUBTITLE] =
                av_find_best_stream(ic, AVMEDIA_TYPE_SUBTITLE,
                                    st_index[AVMEDIA_TYPE_SUBTITLE],
                                    (st_index[AVMEDIA_TYPE_AUDIO] >= 0 ?
                                     st_index[AVMEDIA_TYPE_AUDIO] :
                                     st_index[AVMEDIA_TYPE_VIDEO]),
                                    NULL, 0);
        
        is->show_mode = show_mode;
        if (st_index[AVMEDIA_TYPE_VIDEO] >= 0) {
            AVStream *st = ic->streams[st_index[AVMEDIA_TYPE_VIDEO]];
            AVCodecParameters *codecpar = st->codecpar;
            AVRational sar = av_guess_sample_aspect_ratio(ic, st, NULL);
            if (codecpar->width)
                set_default_window_size(codecpar->width, codecpar->height, sar);
        }
        
    /*The three stream_component_open calls below open the audio, video, and
    subtitle streams and start their decoder threads -- more on that later.
    */
        if (st_index[AVMEDIA_TYPE_AUDIO] >= 0) {
            stream_component_open(is, st_index[AVMEDIA_TYPE_AUDIO]);
        }
        
        ret = -1;
        if (st_index[AVMEDIA_TYPE_VIDEO] >= 0) {
            ret = stream_component_open(is, st_index[AVMEDIA_TYPE_VIDEO]);
        }
        if (is->show_mode == SHOW_MODE_NONE)
            is->show_mode = ret >= 0 ? SHOW_MODE_VIDEO : SHOW_MODE_RDFT;
        
        if (st_index[AVMEDIA_TYPE_SUBTITLE] >= 0) {
            stream_component_open(is, st_index[AVMEDIA_TYPE_SUBTITLE]);
        }
        
    
        
        if (infinite_buffer < 0 && is->realtime)
            infinite_buffer = 1;
    //main read loop: handle pause, seek, and packet reads
        for (;;) {
            if (is->abort_request)
                break;
            if (is->paused != is->last_paused) {
                is->last_paused = is->paused;
                if (is->paused)
                    is->read_pause_return = av_read_pause(ic);
                else
                    av_read_play(ic);
            }
    
    #if CONFIG_RTSP_DEMUXER || CONFIG_MMSH_PROTOCOL
            if (is->paused &&
                    (!strcmp(ic->iformat->name, "rtsp") ||
                     (ic->pb && !strncmp(input_filename, "mmsh:", 5)))) {
                /* wait 10 ms to avoid trying to get another packet */
                /* XXX: horrible */
                SDL_Delay(10);
                continue;
            }
    #endif
            if (is->seek_req) {
                int64_t seek_target = is->seek_pos;
                int64_t seek_min    = is->seek_rel > 0 ? seek_target - is->seek_rel + 2: INT64_MIN;
                int64_t seek_max    = is->seek_rel < 0 ? seek_target - is->seek_rel - 2: INT64_MAX;
    // FIXME the +-2 is due to rounding being not done in the correct direction in generation
    //      of the seek_pos/seek_rel variables
    
                
    //on a seek request: seek the demuxer, then flush the packet queues
                ret = avformat_seek_file(is->ic, -1, seek_min, seek_target, seek_max, is->seek_flags);
                if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR,
                           "%s: error while seeking\n", is->ic->url);
                } else {
                    if (is->audio_stream >= 0)
                        packet_queue_flush(&is->audioq);
                    if (is->subtitle_stream >= 0)
                        packet_queue_flush(&is->subtitleq);
                    if (is->video_stream >= 0)
                        packet_queue_flush(&is->videoq);
                    if (is->seek_flags & AVSEEK_FLAG_BYTE) {
                       set_clock(&is->extclk, NAN, 0);
                    } else {
                       set_clock(&is->extclk, seek_target / (double)AV_TIME_BASE, 0);
                    }
                }
                is->seek_req = 0;
                is->queue_attachments_req = 1;
                is->eof = 0;
                if (is->paused)
                    step_to_next_frame(is);
            }
            if (is->queue_attachments_req) {
                if (is->video_st && is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC) {
                    if ((ret = av_packet_ref(pkt, &is->video_st->attached_pic)) < 0)
                        goto fail;
                    packet_queue_put(&is->videoq, pkt);
                    packet_queue_put_nullpacket(&is->videoq, pkt, is->video_stream);
                }
                is->queue_attachments_req = 0;
            }
        
            /* if the queue are full, no need to read more */
            if (infinite_buffer<1 &&
                  (is->audioq.size + is->videoq.size + is->subtitleq.size > MAX_QUEUE_SIZE
                || (stream_has_enough_packets(is->audio_st, is->audio_stream, &is->audioq) &&
                    stream_has_enough_packets(is->video_st, is->video_stream, &is->videoq) &&
                    stream_has_enough_packets(is->subtitle_st, is->subtitle_stream, &is->subtitleq)))) {
                /* wait 10 ms */
                SDL_LockMutex(wait_mutex);
                SDL_CondWaitTimeout(is->continue_read_thread, wait_mutex, 10);
                SDL_UnlockMutex(wait_mutex);
                continue;
            }
            if (!is->paused &&
                (!is->audio_st || (is->auddec.finished == is->audioq.serial && frame_queue_nb_remaining(&is->sampq) == 0)) &&
                (!is->video_st || (is->viddec.finished == is->videoq.serial && frame_queue_nb_remaining(&is->pictq) == 0))) {
                if (loop != 1 && (!loop || --loop)) {
                    stream_seek(is, start_time != AV_NOPTS_VALUE ? start_time : 0, 0, 0);
                } else if (autoexit) {
                    ret = AVERROR_EOF;
                    goto fail;
                }
            }
            ret = av_read_frame(ic, pkt);
            if (ret < 0) {
                if ((ret == AVERROR_EOF || avio_feof(ic->pb)) && !is->eof) {
                    if (is->video_stream >= 0)
                        packet_queue_put_nullpacket(&is->videoq, pkt, is->video_stream);
                    if (is->audio_stream >= 0)
                        packet_queue_put_nullpacket(&is->audioq, pkt, is->audio_stream);
                    if (is->subtitle_stream >= 0)
                        packet_queue_put_nullpacket(&is->subtitleq, pkt, is->subtitle_stream);
                    is->eof = 1;
                }
                if (ic->pb && ic->pb->error) {
                    if (autoexit)
                        goto fail;
                    else
                        break;
                }
                SDL_LockMutex(wait_mutex);
                SDL_CondWaitTimeout(is->continue_read_thread, wait_mutex, 10);
                SDL_UnlockMutex(wait_mutex);
                continue;
            } else {
                is->eof = 0;
            }
            /* check if packet is in play range specified by user, then queue, otherwise discard */
            stream_start_time = ic->streams[pkt->stream_index]->start_time;
            pkt_ts = pkt->pts == AV_NOPTS_VALUE ? pkt->dts : pkt->pts;
            pkt_in_play_range = duration == AV_NOPTS_VALUE ||
                    (pkt_ts - (stream_start_time != AV_NOPTS_VALUE ? stream_start_time : 0)) *
                    av_q2d(ic->streams[pkt->stream_index]->time_base) -
                    (double)(start_time != AV_NOPTS_VALUE ? start_time : 0) / 1000000
                    <= ((double)duration / 1000000);
            if (pkt->stream_index == is->audio_stream && pkt_in_play_range) {
                packet_queue_put(&is->audioq, pkt);
            } else if (pkt->stream_index == is->video_stream && pkt_in_play_range
                       && !(is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC)) {
                packet_queue_put(&is->videoq, pkt);
            } else if (pkt->stream_index == is->subtitle_stream && pkt_in_play_range) {
                packet_queue_put(&is->subtitleq, pkt);
            } else {
                av_packet_unref(pkt);
            }
        }
        
        ret = 0;
    
     fail:
        if (ic && !is->ic)
            avformat_close_input(&ic);
    
        av_packet_free(&pkt);
        if (ret != 0) {
            SDL_Event event;
        
            event.type = FF_QUIT_EVENT;
            event.user.data1 = is;
            SDL_PushEvent(&event);
        }
        SDL_DestroyMutex(wait_mutex);
        return 0;
    
    }
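The seek handling above rests on ffplay's serial mechanism: every packet is stamped with the serial that was current when it was put, packet_queue_flush bumps the queue's serial, and consumers can then recognize anything stamped with an older serial as pre-seek data. The idea in isolation (a sketch with no locking; all names are illustrative):

```c
#include <assert.h>

#define QCAP 64

// Each queued packet carries the serial current at put time; a flush
// (seek) bumps the serial, so older packets are identifiable as stale.
typedef struct { int data; int serial; } Pkt;

typedef struct {
    Pkt pkts[QCAP];
    int head, tail;
    int serial;            // bumped on every flush
} PktQueue;

static void q_put(PktQueue *q, int data)
{
    q->pkts[q->tail].data   = data;
    q->pkts[q->tail].serial = q->serial;   // stamp with the current serial
    q->tail = (q->tail + 1) % QCAP;
}

static void q_flush(PktQueue *q)
{
    q->head = q->tail;     // drop everything already queued
    q->serial++;           // packets put from now on carry the new serial
}

// Return the next packet whose serial matches the queue's, skipping stale
// ones -- the same shape as the do/while loop in decoder_decode_frame.
// (After a flush the queue itself is empty; the serial check matters for
// consumers still holding an older pkt_serial.)
static int q_get_fresh(PktQueue *q, int *data)
{
    while (q->head != q->tail) {
        Pkt p = q->pkts[q->head];
        q->head = (q->head + 1) % QCAP;
        if (p.serial == q->serial) { *data = p.data; return 0; }
    }
    return -1;             // nothing fresh queued yet
}
```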
    

    2. Opening a stream and starting its decoder thread: stream_component_open

    /* open a given stream. Return 0 if OK */
    static int stream_component_open(VideoState *is, int stream_index)
    {
        AVFormatContext *ic = is->ic;
        AVCodecContext *avctx;
        const AVCodec *codec;
        const char *forced_codec_name = NULL;
        AVDictionary *opts = NULL;
        AVDictionaryEntry *t = NULL;
        int sample_rate, nb_channels;
        int64_t channel_layout;
        int ret = 0;
        int stream_lowres = lowres;
    
        if (stream_index < 0 || stream_index >= ic->nb_streams)
            return -1;
    
    //allocate the codec context
        avctx = avcodec_alloc_context3(NULL);
        if (!avctx)
            return AVERROR(ENOMEM);
        
        ret = avcodec_parameters_to_context(avctx, ic->streams[stream_index]->codecpar);
        if (ret < 0)
            goto fail;
        avctx->pkt_timebase = ic->streams[stream_index]->time_base;
    
    //find the decoder for this codec id
        codec = avcodec_find_decoder(avctx->codec_id);
    
        switch (avctx->codec_type) {
            case AVMEDIA_TYPE_AUDIO   : is->last_audio_stream    = stream_index; forced_codec_name =    audio_codec_name; break;
            case AVMEDIA_TYPE_SUBTITLE: is->last_subtitle_stream = stream_index; forced_codec_name = subtitle_codec_name; break;
            case AVMEDIA_TYPE_VIDEO   : is->last_video_stream    = stream_index; forced_codec_name =    video_codec_name; break;
        }
        if (forced_codec_name)
            codec = avcodec_find_decoder_by_name(forced_codec_name);
        if (!codec) {
            if (forced_codec_name) av_log(NULL, AV_LOG_WARNING,
                                          "No codec could be found with name '%s'\n", forced_codec_name);
            else                   av_log(NULL, AV_LOG_WARNING,
                                          "No decoder could be found for codec %s\n", avcodec_get_name(avctx->codec_id));
            ret = AVERROR(EINVAL);
            goto fail;
        }
        
        avctx->codec_id = codec->id;
        if (stream_lowres > codec->max_lowres) {
            av_log(avctx, AV_LOG_WARNING, "The maximum value for lowres supported by the decoder is %d\n",
                    codec->max_lowres);
            stream_lowres = codec->max_lowres;
        }
        avctx->lowres = stream_lowres;
        
        if (fast)
            avctx->flags2 |= AV_CODEC_FLAG2_FAST;
        
        opts = filter_codec_opts(codec_opts, avctx->codec_id, ic, ic->streams[stream_index], codec);
        if (!av_dict_get(opts, "threads", NULL, 0))
            av_dict_set(&opts, "threads", "auto", 0);
        if (stream_lowres)
            av_dict_set_int(&opts, "lowres", stream_lowres, 0);
        if ((ret = avcodec_open2(avctx, codec, &opts)) < 0) {
            goto fail;
        }
        if ((t = av_dict_get(opts, "", NULL, AV_DICT_IGNORE_SUFFIX))) {
            av_log(NULL, AV_LOG_ERROR, "Option %s not found.\n", t->key);
            ret =  AVERROR_OPTION_NOT_FOUND;
            goto fail;
        }
        
        is->eof = 0;
        ic->streams[stream_index]->discard = AVDISCARD_DEFAULT;
        switch (avctx->codec_type) {
        case AVMEDIA_TYPE_AUDIO:
    
    #if CONFIG_AVFILTER
            {
                AVFilterContext *sink;
    
                is->audio_filter_src.freq           = avctx->sample_rate;
                is->audio_filter_src.channels       = avctx->channels;
                is->audio_filter_src.channel_layout = get_valid_channel_layout(avctx->channel_layout, avctx->channels);
                is->audio_filter_src.fmt            = avctx->sample_fmt;
                if ((ret = configure_audio_filters(is, afilters, 0)) < 0)
                    goto fail;
                sink = is->out_audio_filter;
                sample_rate    = av_buffersink_get_sample_rate(sink);
                nb_channels    = av_buffersink_get_channels(sink);
                channel_layout = av_buffersink_get_channel_layout(sink);
            }
    
    #else
            sample_rate    = avctx->sample_rate;
            nb_channels    = avctx->channels;
            channel_layout = avctx->channel_layout;
    #endif
    
            /* prepare audio output */
            if ((ret = audio_open(is, channel_layout, nb_channels, sample_rate, &is->audio_tgt)) < 0)
                goto fail;
            is->audio_hw_buf_size = ret;
            is->audio_src = is->audio_tgt;
            is->audio_buf_size  = 0;
            is->audio_buf_index = 0;
        
            /* init averaging filter */
            is->audio_diff_avg_coef  = exp(log(0.01) / AUDIO_DIFF_AVG_NB);
            is->audio_diff_avg_count = 0;
            /* since we do not have a precise anough audio FIFO fullness,
               we correct audio sync only if larger than this threshold */
            is->audio_diff_threshold = (double)(is->audio_hw_buf_size) / is->audio_tgt.bytes_per_sec;
        
            is->audio_stream = stream_index;
            is->audio_st = ic->streams[stream_index];
        
            if ((ret = decoder_init(&is->auddec, avctx, &is->audioq, is->continue_read_thread)) < 0)
                goto fail;
            if ((is->ic->iformat->flags & (AVFMT_NOBINSEARCH | AVFMT_NOGENSEARCH | AVFMT_NO_BYTE_SEEK)) && !is->ic->iformat->read_seek) {
                is->auddec.start_pts = is->audio_st->start_time;
                is->auddec.start_pts_tb = is->audio_st->time_base;
            }
            if ((ret = decoder_start(&is->auddec, audio_thread, "audio_decoder", is)) < 0)
                goto out;
            SDL_PauseAudioDevice(audio_dev, 0);
            break;
        case AVMEDIA_TYPE_VIDEO:
            is->video_stream = stream_index;
            is->video_st = ic->streams[stream_index];
        
            if ((ret = decoder_init(&is->viddec, avctx, &is->videoq, is->continue_read_thread)) < 0)
                goto fail;
            if ((ret = decoder_start(&is->viddec, video_thread, "video_decoder", is)) < 0)
                goto out;
            is->queue_attachments_req = 1;
            break;
        case AVMEDIA_TYPE_SUBTITLE:
            is->subtitle_stream = stream_index;
            is->subtitle_st = ic->streams[stream_index];
        
            if ((ret = decoder_init(&is->subdec, avctx, &is->subtitleq, is->continue_read_thread)) < 0)
                goto fail;
            if ((ret = decoder_start(&is->subdec, subtitle_thread, "subtitle_decoder", is)) < 0)
                goto out;
            break;
        default:
            break;
        }
        goto out;
    
    fail:
        avcodec_free_context(&avctx);
    out:
        av_dict_free(&opts);
    
        return ret;
    
    }
    

    Starting the decoder threads: decoder_start runs video_thread and audio_thread

    static int decoder_start(Decoder *d, int (*fn)(void *), const char *thread_name, void* arg)
    {
        packet_queue_start(d->queue);
        d->decoder_tid = SDL_CreateThread(fn, thread_name, arg);
        if (!d->decoder_tid) {
            av_log(NULL, AV_LOG_ERROR, "SDL_CreateThread(): %s\n", SDL_GetError());
            return AVERROR(ENOMEM);
        }
        return 0;
    }
    

    Both the audio and the video decoder thread call decoder_decode_frame on their Decoder to produce frames.

    static int decoder_decode_frame(Decoder *d, AVFrame *frame, AVSubtitle *sub) {
        int ret = AVERROR(EAGAIN);
        
    
        for (;;) {
            AVPacket pkt;
            
            /*
             If the queue's serial differs from the serial of the packets the
             decoder has been consuming, a seek happened: packets from before
             the seek must be discarded. On the first call the two differ.
             */
            if (d->queue->serial == d->pkt_serial) {
                do {
                    if (d->queue->abort_request)
                        return -1;
                    
                    switch (d->avctx->codec_type) {
                        case AVMEDIA_TYPE_VIDEO:
                            ret = avcodec_receive_frame(d->avctx, frame);
                            if (ret >= 0) {
                                if (decoder_reorder_pts == -1) {
                                    frame->pts = frame->best_effort_timestamp;
                                } else if (!decoder_reorder_pts) {
                                    frame->pts = frame->pkt_dts;
                                }
                            }
                            break;
                        case AVMEDIA_TYPE_AUDIO:
                            ret = avcodec_receive_frame(d->avctx, frame);
                            if (ret >= 0) {
                                AVRational tb = (AVRational){1, frame->sample_rate};
                                if (frame->pts != AV_NOPTS_VALUE)
                                    frame->pts = av_rescale_q(frame->pts, d->avctx->pkt_timebase, tb);
                                else if (d->next_pts != AV_NOPTS_VALUE)
                                    frame->pts = av_rescale_q(d->next_pts, d->next_pts_tb, tb);
                                if (frame->pts != AV_NOPTS_VALUE) {
                                    d->next_pts = frame->pts + frame->nb_samples;
                                    d->next_pts_tb = tb;
                                }
                            }
                            break;
                    }
                    if (ret == AVERROR_EOF) {
                        //fully drained: mark the Decoder finished with the current pkt_serial
                        d->finished = d->pkt_serial;
                        avcodec_flush_buffers(d->avctx);
                        return 0;
                    }
                    if (ret >= 0)
                        return 1;
                } while (ret != AVERROR(EAGAIN));
            }
            
            //loop until we get a packet whose serial matches the queue's current serial, i.e. discard pre-seek packets
            do {
                if (d->queue->nb_packets == 0)
                    SDL_CondSignal(d->empty_queue_cond);
                if (d->packet_pending) {
                    av_packet_move_ref(&pkt, &d->pkt);
                    d->packet_pending = 0;
                } else {
                    //packet_queue_get blocks while the queue is empty; otherwise it returns the head packet and removes it from the queue
                    if (packet_queue_get(d->queue, &pkt, 1, &d->pkt_serial) < 0)
                        return -1;
                }
            } while (d->queue->serial != d->pkt_serial);
            
            //a flush packet means a seek happened: reset the decoder's internal state
            if (pkt.data == flush_pkt.data) {
                avcodec_flush_buffers(d->avctx);
                d->finished = 0;
                d->next_pts = d->start_pts;
                d->next_pts_tb = d->start_pts_tb;
            } else {
                if (d->avctx->codec_type == AVMEDIA_TYPE_SUBTITLE) {
                    int got_frame = 0;
                    ret = avcodec_decode_subtitle2(d->avctx, sub, &got_frame, &pkt);
                    if (ret < 0) {
                        ret = AVERROR(EAGAIN);
                    } else {
                        if (got_frame && !pkt.data) {
                            d->packet_pending = 1;
                            av_packet_move_ref(&d->pkt, &pkt);
                        }
                        ret = got_frame ? 0 : (pkt.data ? AVERROR(EAGAIN) : AVERROR_EOF);
                    }
                } else {
                    if (avcodec_send_packet(d->avctx, &pkt) == AVERROR(EAGAIN)) {
                        av_log(d->avctx, AV_LOG_ERROR, "Receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
                        d->packet_pending = 1;
                        av_packet_move_ref(&d->pkt, &pkt);
                    }
                }
                av_packet_unref(&pkt);
            }
        }
    
    }
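The loop structure above follows the avcodec send/receive contract: drain frames with receive until it returns EAGAIN, then feed one packet with send; send may itself return EAGAIN while output is pending, in which case the packet is kept for the next iteration. A mock decoder makes the contract concrete (MOCK_EAGAIN, MockDec, etc. are illustrative stand-ins, not FFmpeg API):

```c
#include <assert.h>

enum { MOCK_OK = 0, MOCK_EAGAIN = -11 };

// Mock of the avcodec contract: one packet in -> one frame buffered;
// send fails with EAGAIN while a frame is still waiting to be received.
typedef struct {
    int has_frame;
    int frame_val;
} MockDec;

static int mock_send_packet(MockDec *d, int pkt)
{
    if (d->has_frame)
        return MOCK_EAGAIN;        // must receive pending output first
    d->has_frame = 1;
    d->frame_val = pkt * 2;        // "decode" the packet
    return MOCK_OK;
}

static int mock_receive_frame(MockDec *d, int *frame)
{
    if (!d->has_frame)
        return MOCK_EAGAIN;        // nothing decoded yet, feed more input
    *frame = d->frame_val;
    d->has_frame = 0;
    return MOCK_OK;
}

// Same shape as decoder_decode_frame: try to receive; on EAGAIN, send a
// packet and try again.
static int decode_one(MockDec *d, int pkt, int *frame)
{
    for (;;) {
        if (mock_receive_frame(d, frame) == MOCK_OK)
            return 1;
        if (mock_send_packet(d, pkt) == MOCK_EAGAIN)
            return -1;             // both sides EAGAIN: the "API violation" ffplay logs
    }
}
```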
    

    avcodec_send_packet has a surprisingly deep call stack underneath. I looked into it out of curiosity but did not fully understand it, so for now I can only attach a link and come back to it later.

    avcodec_send_packet-> decode_receive_frame_internal-> decode_simple_receive_frame-> decode_simple_internal-> ff_decode_get_packet, ff_thread_decode_frame-> submit_packet-> ff_attach_decode_data

    https://blog.csdn.net/weixin_43360707/article/details/115953843?utm_medium=distribute.pc_relevant.none-task-blog-baidujs_baidulandingword-4&spm=1001.2101.3001.4242

    2021-05-24 00:57:03
    ffplay Source Code Analysis

    -------------------------------------------------------------------Audio/Video Synchronization

    1. A/V sync: main-> event_loop-> refresh_loop_wait_event-> video_refresh

    In main, event_loop blocks waiting for GUI events; the wait inside it periodically calls video_refresh, which synchronizes audio and video and displays the video frame.

    static void video_refresh(void *opaque, double *remaining_time)
    {
        VideoState *is = opaque;
        double time;
        
        Frame *sp, *sp2;
        
        if (is->video_st) {
        retry:
            if (frame_queue_nb_remaining(&is->pictq) == 0) {
                // nothing to do, no picture to display in the queue
            } else {
                double last_duration, duration, delay;
                Frame *vp, *lastvp;
                
                //peek the frame displayed last
                lastvp = frame_queue_peek_last(&is->pictq);
                //peek the current frame (the next one to display)
                vp = frame_queue_peek(&is->pictq);
                
                /*If this frame's serial differs from the video packet queue's
                serial, keep advancing to the next frame until they match --
                i.e. after a seek, only frames from after the seek are shown.
                 */
                if (vp->serial != is->videoq.serial) {
                    frame_queue_next(&is->pictq);
                    goto retry;
                }
                
                /*
                 If this frame's serial differs from the previous frame's,
                 reset is->frame_timer to the current system time.
                 is->frame_timer is the time at which the previous frame was displayed.
                 */
                if (lastvp->serial != vp->serial)
                    is->frame_timer = av_gettime_relative() / 1000000.0;
                
                //paused: keep displaying the previous frame
                if (is->paused)
                    goto display;
                
                //vp.pts - lastvp.pts: the display duration of the previous frame
                last_duration = vp_duration(is, lastvp, vp);
                /*
                 Compute the delay; adjusting it displays this frame earlier or later.
                 If video lags audio, delay shrinks and this frame is shown early
                 (before last_duration has elapsed) -- see the code below for details.
                 If video is ahead of audio, delay grows (beyond last_duration),
                 keeping the previous frame on screen a little longer.
                 */
                delay = compute_target_delay(last_duration, is);
                
                time= av_gettime_relative()/1000000.0;
                /*
                 current time < (previous frame's display time + delay) means this
                 frame's display time has not arrived yet: update remaining_time
                 and keep showing the previous frame.
                 */
                 */
                if (time < is->frame_timer + delay) {
                    *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
                    goto display;
                }
                //advance frame_timer to this frame's display time; it now refers to the "previous" frame
                is->frame_timer += delay;
                
                //if the current time has drifted past the frame timer by more than the threshold, resync frame_timer to the current time
                if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
                    is->frame_timer = time;
                
                //update the video clock, which compute_target_delay compares against the master (audio) clock
                SDL_LockMutex(is->pictq.mutex);
                if (!isnan(vp->pts))
                    update_video_pts(is, vp->pts, vp->pos, vp->serial);
                SDL_UnlockMutex(is->pictq.mutex);
                
                //if frame dropping is enabled (framedrop) and video lags (the next frame's display time has already passed), drop this frame
                if (frame_queue_nb_remaining(&is->pictq) > 1) {
                    Frame *nextvp = frame_queue_peek_next(&is->pictq);
                    duration = vp_duration(is, vp, nextvp);
                    if(!is->step && (framedrop>0 || (framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) && time > is->frame_timer + duration){
                        is->frame_drops_late++;
                        frame_queue_next(&is->pictq);
                        goto retry;
                    }
                }
                //移动读索引rindex,使本帧变为上一帧
                frame_queue_next(&is->pictq);
                is->force_refresh = 1;
                
                if (is->step && !is->paused)
                    stream_toggle_pause(is);
            }
        display:
            /* 显示视频帧*/
            if (!display_disable && is->force_refresh && is->show_mode == SHOW_MODE_VIDEO && is->pictq.rindex_shown)
                video_display(is);
        }
        is->force_refresh = 0;
        
    }
    
    
    static double compute_target_delay(double delay, VideoState *is)
    {
        double sync_threshold, diff = 0;
        
        /* update delay to follow master synchronisation source */
        if (get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER) {
            
            //clock difference = pts of the last displayed video frame - pts of the master clock's last frame (only the audio master clock is considered here)
            diff = get_clock(&is->vidclk) - get_master_clock(is);
            
            //sync threshold
            sync_threshold = FFMAX(AV_SYNC_THRESHOLD_MIN, FFMIN(AV_SYNC_THRESHOLD_MAX, delay));
            if (!isnan(diff) && fabs(diff) < is->max_frame_duration) {
                if (diff <= -sync_threshold)    //video lags audio by more than the threshold
                    /*
                     diff is negative and delay is positive.
                     If delay + diff < 0 (the video would still be behind even after waiting), set delay = 0 to catch up with audio as fast as possible.
                     Otherwise delay = delay + diff, so that after delay seconds the video is exactly back in sync with audio.
                     */
                    delay = FFMAX(0, delay + diff);
                else if (diff >= sync_threshold && delay > AV_SYNC_FRAMEDUP_THRESHOLD)
                    delay = delay + diff;
                else if (diff >= sync_threshold)
                    //video is ahead of audio by more than the threshold: simply double the delay and wait for audio
                    delay = 2 * delay;
            }
        }
        
        av_log(NULL, AV_LOG_TRACE, "video: delay=%0.3f A-V=%f\n",
               delay, -diff);
        
        //if diff stays within sync_threshold, audio and video are considered in sync and delay is returned unchanged
        return delay;
    }
    
    

    My analysis only goes so far; for a more thorough treatment by earlier authors, see https://www.cnblogs.com/leisure_chn/p/10307089.html


    The previous post covered ffplay's main function and the creation of its three main threads; this post covers ffplay's queues.

    Two queues: PacketQueue and FrameQueue

    The ffplay source is built around two queues, PacketQueue and FrameQueue. Before discussing them we need to be clear about two FFmpeg data structures: AVPacket and AVFrame.

    AVPacket

    AVPacket holds compressed data, i.e. data before decoding or after encoding. For player development that means the data after demuxing and before decoding; for video recording it is the compressed data after encoding.

    AVFrame

    AVFrame is the decoded form of an AVPacket. av_read_frame produces compressed packets, generally of three kinds (video, audio and subtitle), all represented by AVPacket; the send/receive decoding calls then turn an AVPacket into an AVFrame.
    In ffplay, PacketQueue and FrameQueue are therefore related as follows:
    (figure omitted)
    This post focuses on PacketQueue.
    ffplay uses PacketQueue to store the demuxed, not-yet-decoded data.

  • ffplay source code analysis (2)

    2020-11-22 11:33:55

    ffplay source code analysis (1)

    1 Audio/video sync basics

    Audio and video are decoded and output on different threads, and in some sources the pts values themselves drift, so an audio/video sync mechanism is needed. A/V sync in one sentence: "wait when slow, drop when fast."

    In ffplay, the audio and video clocks are constantly re-aligned with the system time (set_clock). A pts_drift variable records the difference between the stream time and the system time: on set_clock(), pts_drift = pts - os time.

    On get_clock(), the returned pts = os_time + pts_drift.

    ffplay offers three sync strategies:

    1 Sync video to audio: compare the video pts against the master clock; wait when slow, drop when fast.

    2 Sync audio to video: compare the audio pts against the master clock and steer the sample output through the resampling library.

    3 Sync both to an external clock: reuses the first two strategies; effectively their combination.

    What is the master clock?

    The clock everything else is measured against. For example, when syncing video to audio, the audio pts serves as the master clock.

    /* get the current master clock value */
    static double get_master_clock(VideoState *is)
    {
        double val;
    
        switch (get_master_sync_type(is)) {
            case AV_SYNC_VIDEO_MASTER:
                val = get_clock(&is->vidclk);
                break;
            case AV_SYNC_AUDIO_MASTER:
                val = get_clock(&is->audclk);
                break;
            default:
                val = get_clock(&is->extclk);
                break;
        }
        return val;
    }
    

     

    2 Syncing video to audio

    Video-to-audio sync is done in video_refresh() in ffplay.c. A frame_timer decides whether the current frame has been displayed long enough: wait when slow, drop when fast.

     /* compute nominal last_duration */
     last_duration = vp_duration(is, lastvp, vp);//how long the previous frame should be displayed
     delay = compute_target_delay(last_duration, is);//previous frame's duration adjusted by the A/V pts diff
    
     time= av_gettime_relative()/1000000.0;
     if (time < is->frame_timer + delay) { //has the previous frame been shown long enough?
        *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
        goto display;// not yet: we are slow, wait and keep showing the previous frame
        }
        is->frame_timer += delay;//long enough: advance to the current frame's display time
                    ............
     /* frame-dropping logic below */
     if (frame_queue_nb_remaining(&is->pictq) > 1) {
                    Frame *nextvp = frame_queue_peek_next(&is->pictq);
                    duration = vp_duration(is, vp, nextvp);
                    //if system time > current frame's display time + its duration, drop it
                    if(!is->step && (framedrop>0 || (framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) && time > is->frame_timer + duration){
                        is->frame_drops_late++;
                        frame_queue_next(&is->pictq);//we are fast: skip to the next frame, dropping this one
                        goto retry;//retry: this frame is never displayed
                    }

     

    3 Syncing audio to video

    Unlike video-to-audio sync, audio cannot simply "wait when slow, drop when fast", because the human ear is very sensitive to glitches. Instead, audio-to-video sync uses the resampling library to control when audio samples are output.

    In ffplay, the audio-to-video sync logic lives in audio_decode_frame().

    static int audio_decode_frame(VideoState *is)
    {
     ......
     //compare against the video clock and compute the number of samples that should be output
     wanted_nb_samples = synchronize_audio(is, af->frame->nb_samples);
    //check whether resampling is needed
    if (af->frame->format        != is->audio_src.fmt            ||
            dec_channel_layout       != is->audio_src.channel_layout ||
            af->frame->sample_rate   != is->audio_src.freq           ||
            (wanted_nb_samples       != af->frame->nb_samples && !is->swr_ctx)) {
            swr_free(&is->swr_ctx);
    //reconfigure the resampler
            is->swr_ctx = swr_alloc_set_opts(NULL,
                                             is->audio_tgt.channel_layout, is->audio_tgt.fmt, is->audio_tgt.freq,
                                             dec_channel_layout,           af->frame->format, af->frame->sample_rate,
                                             0, NULL);
            ......
            }
    ..........
           //use the resampler to insert or delete samples
           len2 = swr_convert(is->swr_ctx, out, out_count, in, af->frame->nb_samples);
    ............
    }

    4 Syncing audio and video to an external clock

    The sync mode is freely selectable, and a close look at the code shows that choosing a sync mode really just means choosing which clock serves as the master clock. When the external clock is selected, get_clock(&is->extclk) is used, and both the audio and the video output are compared against extclk. Syncing to the external clock is therefore equivalent to video-to-audio sync and audio-to-video sync applied together, as the switch in get_master_clock() shows.
