  • FFmpeg screen recording

    2016-04-16 00:48:52
  • ffmpeg screen recording

    2020-10-31 15:33:31

    Download link for screen recorder: https://www.it610.com/article/1290547546384572416.htm
    Download link for ffmpeg: https://pan.baidu.com/s/1JH_NMM09A-ezMY0A6mUWGg

    Article 1: https://www.cnblogs.com/vczf/p/13471865.html

    Audio and video recording
    Record video (default parameters)
    Desktop: ffmpeg -f dshow -i video="screen-capture-recorder" v-out.mp4
    Webcam: ffmpeg -f dshow -i video="Integrated Webcam" -y v-out2.flv (substitute your own webcam's device name)

    Record audio (default parameters)
    System audio: ffmpeg -f dshow -i audio="virtual-audio-capturer" a-out.aac
    System + microphone audio: ffmpeg -f dshow -i audio="麦克风(Realtek Audio)" -f dshow -i audio="virtual-audio-capturer" -filter_complex amix=inputs=2:duration=first:dropout_transition=2 a-out2.aac (the microphone device name depends on your system and locale)

    Record audio and video at the same time (default parameters)
    ffmpeg -f dshow -i audio="麦克风(Realtek Audio)" -f dshow -i audio="virtual-audio-capturer" -filter_complex amix=inputs=2:duration=first:dropout_transition=2 -f dshow -i video="screen-capture-recorder" -y av-out.flv

    List the options supported by a video capture device
    ffmpeg -f dshow -list_options true -i video="screen-capture-recorder"
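    The dshow device names quoted above must match what Windows reports exactly. If unsure, FFmpeg can enumerate the available DirectShow devices (this only works on Windows, and "screen-capture-recorder" / "virtual-audio-capturer" appear only after installing the screen-capture-recorder package linked above):

    ```shell
    # List all DirectShow video and audio capture devices by name
    ffmpeg -list_devices true -f dshow -i dummy
    ```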

    Article 2: https://blog.csdn.net/tanhuifang520/article/details/79623978

  • ffmpeg screen recording

    2019-04-03 10:28:14

    ffmpeg -y -f gdigrab -t 25 -r 15 -i desktop -vcodec libx265 net.mkv

    This records the screen with libx265, requesting a duration of 25 seconds and a frame rate of 15 fps. The resulting duration is usually not exact: as the actual capture frame rate fluctuates, the recorded length drifts with it. If you need the recording to last exactly, say, 30 seconds, drop the -r option.
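    gdigrab also accepts input options for capturing only part of the desktop; the options below (framerate, offset_x, offset_y, video_size) come from FFmpeg's gdigrab documentation. Note that -framerate before -i is an input option, unlike the -r output option discussed above:

    ```shell
    # Capture a 1280x720 region starting at (100, 100) for 30 seconds (Windows only)
    ffmpeg -f gdigrab -framerate 15 -offset_x 100 -offset_y 100 -video_size 1280x720 -i desktop -t 30 out.mkv
    ```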
  • Code for recording the screen with ffmpeg on Linux, written in C++.
  • Qt+FFmpeg screen and audio recording

    2019-03-06 23:56:47

    Source: Qt+FFmpeg screen and audio recording

    • Recording controls supported: start, pause, stop.
    • The FFmpeg API is wrapped with Qt/C++, avoiding deprecated FFmpeg APIs.
    • Main thread: the Qt GUI thread; a recording UI can be attached later.
    • MuxThreadProc: muxing thread; starts the audio and video capture threads, opens the input/output streams, then reads frames from the FIFO buffers and encodes them into the output container.
    • ScreenRecordThreadProc: video capture thread; reads frames from the input stream, scales them, and writes them into the FIFO buffer.
    • SoundRecordThreadProc: audio capture thread; reads samples from the input stream, resamples them, and writes them into the FIFO buffer.
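    The producer/consumer handoff described above (capture threads fill a bounded FIFO, the mux thread drains it, with "not full"/"not empty" condition variables) can be sketched in plain C++. This is an illustrative stand-in with made-up names (BoundedFifo, Frame), not the AVFifoBuffer-based implementation below:

    ```cpp
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>

    // Stand-in for the raw frame buffers the real code stores in AVFifoBuffer.
    struct Frame { int id; };

    // Bounded FIFO guarded by one mutex and two condition variables,
    // mirroring the m_cvVBufNotFull / m_cvVBufNotEmpty pair in ScreenRecordImpl.
    class BoundedFifo {
    public:
        explicit BoundedFifo(std::size_t cap) : m_cap(cap) {}

        // Called by a capture thread: blocks while the FIFO is full.
        void push(Frame f) {
            std::unique_lock<std::mutex> lk(m_mtx);
            m_notFull.wait(lk, [this] { return m_q.size() < m_cap; });
            m_q.push(f);
            m_notEmpty.notify_one();
        }

        // Called by the mux thread: blocks while the FIFO is empty.
        Frame pop() {
            std::unique_lock<std::mutex> lk(m_mtx);
            m_notEmpty.wait(lk, [this] { return !m_q.empty(); });
            Frame f = m_q.front();
            m_q.pop();
            m_notFull.notify_one();
            return f;
        }

    private:
        std::size_t m_cap;
        std::queue<Frame> m_q;
        std::mutex m_mtx;
        std::condition_variable m_notFull, m_notEmpty;
    };
    ```

    The real code keeps separate mutex/condvar pairs for the video and audio FIFOs, so the two capture threads never contend with each other.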

    ScreenRecordImpl.h

    #pragma once
    #include <Windows.h>
    #include <atomic>
    #include <QObject>
    #include <QString>
    #include <QMutex>
    #include <condition_variable>
    
    #ifdef	__cplusplus
    extern "C"
    {
    #endif
    struct AVFormatContext;
    struct AVCodecContext;
    struct AVCodec;
    struct AVFifoBuffer;
    struct AVAudioFifo;
    struct AVFrame;
    struct SwsContext;
    struct SwrContext;
    #ifdef __cplusplus
    };
    #endif
    
    class ScreenRecordImpl : public QObject
    {
    	Q_OBJECT
    private:
    	enum RecordState {
    		NotStarted,
    		Started,
    		Paused,
    		Stopped,
    		Unknown,
    	};
    public:
    	ScreenRecordImpl(QObject * parent = Q_NULLPTR);
    	void Init(const QVariantMap& map);
    
    	private slots:
    	void Start();
    	void Pause();
    	void Stop();
    
    private:
    	//Read audio/video frames from the FIFO buffers, write to the output streams, and mux into the file
    	void MuxThreadProc();
    	//Read frames from the video input stream and write them into the FIFO buffer
    	void ScreenRecordThreadProc();
    	//Read samples from the audio input stream and write them into the FIFO buffer
    	void SoundRecordThreadProc();
    	int OpenVideo();
    	int OpenAudio();
    	int OpenOutput();
    	QString GetSpeakerDeviceName();
    	//Get the microphone device name
    	QString GetMicrophoneDeviceName();
    	AVFrame* AllocAudioFrame(AVCodecContext* c, int nbSamples);
    	void InitVideoBuffer();
    	void InitAudioBuffer();
    	void FlushVideoDecoder();
    	void FlushAudioDecoder();
    	//void FlushVideoEncoder();
    	//void FlushAudioEncoder();
    	void FlushEncoders();
    	void Release();
    
    private:
    	QString				m_filePath;
    	int					m_width;
    	int					m_height;
    	int					m_fps;
    	int					m_audioBitrate;
    
    	int m_vIndex;		//input video stream index
    	int m_aIndex;		//input audio stream index
    	int m_vOutIndex;	//output video stream index
    	int m_aOutIndex;	//output audio stream index
    	AVFormatContext		*m_vFmtCtx;
    	AVFormatContext		*m_aFmtCtx;
    	AVFormatContext		*m_oFmtCtx;
    	AVCodecContext		*m_vDecodeCtx;
    	AVCodecContext		*m_aDecodeCtx;
    	AVCodecContext		*m_vEncodeCtx;
    	AVCodecContext		*m_aEncodeCtx;
    	SwsContext			*m_swsCtx;
    	SwrContext			*m_swrCtx;
    	AVFifoBuffer		*m_vFifoBuf;
    	AVAudioFifo			*m_aFifoBuf;
    
    	AVFrame				*m_vOutFrame;
    	uint8_t				*m_vOutFrameBuf;
    	int					m_vOutFrameSize;
    
    	int					m_nbSamples;
    	RecordState			m_state;
    	std::condition_variable m_cvNotPause;	//the two capture threads block on this while paused
    	std::mutex				m_mtxPause;
    	std::condition_variable m_cvVBufNotFull;
    	std::condition_variable m_cvVBufNotEmpty;
    	std::mutex				m_mtxVBuf;
    	std::condition_variable m_cvABufNotFull;
    	std::condition_variable m_cvABufNotEmpty;
    	std::mutex				m_mtxABuf;
    	int64_t					m_vCurPts;
    	int64_t					m_aCurPts;
    };

    ScreenRecordImpl.cpp

    #ifdef	__cplusplus
    extern "C"
    {
    #endif
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    #include "libavdevice/avdevice.h"
    #include "libavutil/audio_fifo.h"
    #include "libavutil/imgutils.h"
    #include "libswresample/swresample.h"
    #include <libavutil/avassert.h>
    #ifdef __cplusplus
    };
    #endif
    
    #include "ScreenRecordImpl.h"
    #include <QDebug>
    #include <QAudioDeviceInfo>
    #include <thread>
    #include <fstream>
    
    #include <dshow.h>
    
    using namespace std;
    
    int g_vCollectFrameCnt = 0;	//video frames captured
    int g_vEncodeFrameCnt = 0;	//video frames encoded
    int g_aCollectFrameCnt = 0;	//audio frames captured
    int g_aEncodeFrameCnt = 0;	//audio frames encoded
    
    ScreenRecordImpl::ScreenRecordImpl(QObject * parent) :
    	QObject(parent)
    	, m_fps(30)
    	, m_vIndex(-1), m_aIndex(-1)
    	, m_vFmtCtx(nullptr), m_aFmtCtx(nullptr), m_oFmtCtx(nullptr)
    	, m_vDecodeCtx(nullptr), m_aDecodeCtx(nullptr)
    	, m_vEncodeCtx(nullptr), m_aEncodeCtx(nullptr)
    	, m_vFifoBuf(nullptr), m_aFifoBuf(nullptr)
    	, m_swsCtx(nullptr)
    	, m_swrCtx(nullptr)
    	, m_state(RecordState::NotStarted)
    	, m_vCurPts(0), m_aCurPts(0)
    {
    }
    
    void ScreenRecordImpl::Init(const QVariantMap& map)
    {
    	m_filePath = map["filePath"].toString();
    	m_width = map["width"].toInt();
    	m_height = map["height"].toInt();
    	m_fps = map["fps"].toInt();
    	m_audioBitrate = map["audioBitrate"].toInt();
    }
    
    void ScreenRecordImpl::Start()
    {
    	if (m_state == RecordState::NotStarted)
    	{
    		qDebug() << "start record";
    		m_state = RecordState::Started;
    		std::thread muxThread(&ScreenRecordImpl::MuxThreadProc, this);
    		muxThread.detach();
    	}
    	else if (m_state == RecordState::Paused)
    	{
    		qDebug() << "continue record";
    		m_state = RecordState::Started;
    		m_cvNotPause.notify_one();
    	}
    }
    
    void ScreenRecordImpl::Pause()
    {
    	qDebug() << "pause record";
    	m_state = RecordState::Paused;
    }
    
    void ScreenRecordImpl::Stop()
    {
    	qDebug() << "stop record";
    	RecordState state = m_state;
    	m_state = RecordState::Stopped;
    	if (state == RecordState::Paused)
    		m_cvNotPause.notify_one();
    }
    
    int ScreenRecordImpl::OpenVideo()
    {
    	int ret = -1;
    	AVInputFormat *ifmt = av_find_input_format("gdigrab");
    	AVDictionary *options = nullptr;
    	AVCodec *decoder = nullptr;
    	av_dict_set(&options, "framerate", QString::number(m_fps).toStdString().c_str(), NULL);
    
    	if (avformat_open_input(&m_vFmtCtx, "desktop", ifmt, &options) != 0)
    	{
    		qDebug() << "Can not open video input stream";
    		return -1;
    	}
    	if (avformat_find_stream_info(m_vFmtCtx, nullptr) < 0)
    	{
    		printf("Couldn't find stream information.\n");
    		return -1;
    	}
    	for (int i = 0; i < m_vFmtCtx->nb_streams; ++i)
    	{
    		AVStream *stream = m_vFmtCtx->streams[i];
    		if (stream->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
    		{
    			decoder = avcodec_find_decoder(stream->codecpar->codec_id);
    			if (decoder == nullptr)
    			{
    				printf("Codec not found.\n");
    				return -1;
    			}
    			//Copy parameters from the video stream into the codec context
    			m_vDecodeCtx = avcodec_alloc_context3(decoder);
    			if ((ret = avcodec_parameters_to_context(m_vDecodeCtx, stream->codecpar)) < 0)
    			{
    				qDebug() << "Video avcodec_parameters_to_context failed,error code: " << ret;
    				return -1;
    			}
    			m_vIndex = i;
    			break;
    		}
    	}
    	if (avcodec_open2(m_vDecodeCtx, decoder, nullptr) < 0)
    	{
    		printf("Could not open codec.\n");
    		return -1;
    	}
    
    	m_swsCtx = sws_getContext(m_vDecodeCtx->width, m_vDecodeCtx->height, m_vDecodeCtx->pix_fmt,
    		m_width, m_height, AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    	return 0;
    }
    
    static char *dup_wchar_to_utf8(wchar_t *w)
    {
    	char *s = NULL;
    	int l = WideCharToMultiByte(CP_UTF8, 0, w, -1, 0, 0, 0, 0);
    	s = (char *)av_malloc(l);
    	if (s)
    		WideCharToMultiByte(CP_UTF8, 0, w, -1, s, l, 0, 0);
    	return s;
    }
    
    static int check_sample_fmt(const AVCodec *codec, enum AVSampleFormat sample_fmt)
    {
    	const enum AVSampleFormat *p = codec->sample_fmts;
    
    	while (*p != AV_SAMPLE_FMT_NONE) {
    		if (*p == sample_fmt)
    			return 1;
    		p++;
    	}
    	return 0;
    }
    
    int ScreenRecordImpl::OpenAudio()
    {
    	int ret = -1;
    	AVCodec *decoder = nullptr;
    	qDebug() << GetMicrophoneDeviceName();
    
    	AVInputFormat *ifmt = av_find_input_format("dshow");
    	QString audioDeviceName = "audio=" + GetMicrophoneDeviceName();
    
    	if (avformat_open_input(&m_aFmtCtx, audioDeviceName.toStdString().c_str(), ifmt, nullptr) < 0)
    	{
    		qDebug() << "Can not open audio input stream";
    		return -1;
    	}
    	if (avformat_find_stream_info(m_aFmtCtx, nullptr) < 0)
    		return -1;
    
    	for (int i = 0; i < m_aFmtCtx->nb_streams; ++i)
    	{
    		AVStream * stream = m_aFmtCtx->streams[i];
    		if (stream->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
    		{
    			decoder = avcodec_find_decoder(stream->codecpar->codec_id);
    			if (decoder == nullptr)
    			{
    				printf("Codec not found.\n");
    				return -1;
    			}
    			//Copy parameters from the audio stream into the codec context
    			m_aDecodeCtx = avcodec_alloc_context3(decoder);
    			if ((ret = avcodec_parameters_to_context(m_aDecodeCtx, stream->codecpar)) < 0)
    			{
    				qDebug() << "Audio avcodec_parameters_to_context failed,error code: " << ret;
    				return -1;
    			}
    			m_aIndex = i;
    			break;
    		}
    	}
    	if (0 > avcodec_open2(m_aDecodeCtx, decoder, NULL))
    	{
    		printf("can not find or open audio decoder!\n");
    		return -1;
    	}
    	return 0;
    }
    
    int ScreenRecordImpl::OpenOutput()
    {
    	int ret = -1;
    	AVStream *vStream = nullptr, *aStream = nullptr;
    	const char *outFileName = "test.mp4";
    	ret = avformat_alloc_output_context2(&m_oFmtCtx, nullptr, nullptr, outFileName);
    	if (ret < 0)
    	{
    		qDebug() << "avformat_alloc_output_context2 failed";
    		return -1;
    	}
    
    	if (m_vFmtCtx->streams[m_vIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
    	{
    		vStream = avformat_new_stream(m_oFmtCtx, nullptr);
    		if (!vStream)
    		{
    			printf("can not new stream for output!\n");
    			return -1;
    		}
    		//The first stream created on an AVFormatContext gets index 0, the second gets index 1
    		m_vOutIndex = vStream->index;
    		vStream->time_base = AVRational{ 1, m_fps };
    
    		m_vEncodeCtx = avcodec_alloc_context3(NULL);
    		if (nullptr == m_vEncodeCtx)
    		{
    			qDebug() << "avcodec_alloc_context3 failed";
    			return -1;
    		}
    		m_vEncodeCtx->width = m_width;
    		m_vEncodeCtx->height = m_height;
    		m_vEncodeCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    		m_vEncodeCtx->time_base.num = 1;
    		m_vEncodeCtx->time_base.den = m_fps;
    		m_vEncodeCtx->pix_fmt = AV_PIX_FMT_YUV420P;
    		m_vEncodeCtx->codec_id = AV_CODEC_ID_H264;
    		m_vEncodeCtx->bit_rate = 800 * 1000;
    		m_vEncodeCtx->rc_max_rate = 800 * 1000;
    		m_vEncodeCtx->rc_buffer_size = 500 * 1000;
    		//Set the GOP size; a larger gop_size yields a smaller file
    		m_vEncodeCtx->gop_size = 30;
    		m_vEncodeCtx->max_b_frames = 3;
    		//Set H.264-related parameters; without them avcodec_open2 fails
    		m_vEncodeCtx->qmin = 10;	//2
    		m_vEncodeCtx->qmax = 31;	//31
    		m_vEncodeCtx->max_qdiff = 4;	//3
    		m_vEncodeCtx->me_range = 16;	//0
    		m_vEncodeCtx->qcompress = 0.6;	//0.5
    
    		//Find the video encoder
    		AVCodec *encoder;
    		encoder = avcodec_find_encoder(m_vEncodeCtx->codec_id);
    		if (!encoder)
    		{
    			qDebug() << "Can not find the encoder, id: " << m_vEncodeCtx->codec_id;
    			return -1;
    		}
    		m_vEncodeCtx->codec_tag = 0;
    		//Required so that SPS/PPS are set correctly (global header)
    		m_vEncodeCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    		//Open the video encoder
    		ret = avcodec_open2(m_vEncodeCtx, encoder, nullptr);
    		if (ret < 0)
    		{
    			qDebug() << "Can not open encoder id: " << encoder->id << "error code: " << ret;
    			return -1;
    		}
    		//Copy parameters from the codec context to the output stream
    		ret = avcodec_parameters_from_context(vStream->codecpar, m_vEncodeCtx);
    		if (ret < 0)
    		{
    			qDebug() << "Output avcodec_parameters_from_context,error code:" << ret;
    			return -1;
    		}
    	}
    	if (m_aFmtCtx->streams[m_aIndex]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
    	{
    		aStream = avformat_new_stream(m_oFmtCtx, NULL);
    		if (!aStream)
    		{
    			printf("can not new audio stream for output!\n");
    			return -1;
    		}
    		m_aOutIndex = aStream->index;
    
    		AVCodec *encoder = avcodec_find_encoder(m_oFmtCtx->oformat->audio_codec);
    		if (!encoder)
    		{
    			qDebug() << "Can not find audio encoder, id: " << m_oFmtCtx->oformat->audio_codec;
    			return -1;
    		}
    		m_aEncodeCtx = avcodec_alloc_context3(encoder);
    		if (nullptr == m_aEncodeCtx)
    		{
    			qDebug() << "audio avcodec_alloc_context3 failed";
    			return -1;
    		}
    		m_aEncodeCtx->sample_fmt = encoder->sample_fmts ? encoder->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
    		m_aEncodeCtx->bit_rate = m_audioBitrate;
    		m_aEncodeCtx->sample_rate = 44100;
    		if (encoder->supported_samplerates) 
    		{
    			m_aEncodeCtx->sample_rate = encoder->supported_samplerates[0];
    			for (int i = 0; encoder->supported_samplerates[i]; ++i)
    			{
    				if (encoder->supported_samplerates[i] == 44100)
    					m_aEncodeCtx->sample_rate = 44100;
    			}
    		}
    		m_aEncodeCtx->channel_layout = AV_CH_LAYOUT_STEREO;
    		if (encoder->channel_layouts) 
    		{
    			m_aEncodeCtx->channel_layout = encoder->channel_layouts[0];
    			for (int i = 0; encoder->channel_layouts[i]; ++i) 
    			{
    				if (encoder->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
    					m_aEncodeCtx->channel_layout = AV_CH_LAYOUT_STEREO;
    			}
    		}
    		m_aEncodeCtx->channels = av_get_channel_layout_nb_channels(m_aEncodeCtx->channel_layout);
    		aStream->time_base = AVRational{ 1, m_aEncodeCtx->sample_rate };
    
    		m_aEncodeCtx->codec_tag = 0;
    		m_aEncodeCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    
    		if (!check_sample_fmt(encoder, m_aEncodeCtx->sample_fmt)) 
    		{
    			qDebug() << "Encoder does not support sample format " << av_get_sample_fmt_name(m_aEncodeCtx->sample_fmt);
    			return -1;
    		}
    
    		//Open the audio encoder; frame_size is set once it is open
    		ret = avcodec_open2(m_aEncodeCtx, encoder, 0);
    		if (ret < 0)
    		{
    			qDebug() << "Can not open the audio encoder, id: " << encoder->id << "error code: " << ret;
    			return -1;
    		}
    		//Copy parameters from the codec context to the audio output stream
    		ret = avcodec_parameters_from_context(aStream->codecpar, m_aEncodeCtx);
    		if (ret < 0)
    		{
    			qDebug() << "Output audio avcodec_parameters_from_context,error code:" << ret;
    			return -1;
    		}
    
    		m_swrCtx = swr_alloc();
    		if (!m_swrCtx)
    		{
    			qDebug() << "swr_alloc failed";
    			return -1;
    		}
    		av_opt_set_int(m_swrCtx, "in_channel_count", m_aDecodeCtx->channels, 0);	//2
    		av_opt_set_int(m_swrCtx, "in_sample_rate", m_aDecodeCtx->sample_rate, 0);	//44100
    		av_opt_set_sample_fmt(m_swrCtx, "in_sample_fmt", m_aDecodeCtx->sample_fmt, 0);	//AV_SAMPLE_FMT_S16
    		av_opt_set_int(m_swrCtx, "out_channel_count", m_aEncodeCtx->channels, 0);	//2
    		av_opt_set_int(m_swrCtx, "out_sample_rate", m_aEncodeCtx->sample_rate, 0);	//44100
    		av_opt_set_sample_fmt(m_swrCtx, "out_sample_fmt", m_aEncodeCtx->sample_fmt, 0);	//AV_SAMPLE_FMT_FLTP
    
    		if ((ret = swr_init(m_swrCtx)) < 0) 
    		{
    			qDebug() << "swr_init failed";
    			return -1;
    		}
    	}
    
    	//Open the output file
    	if (!(m_oFmtCtx->oformat->flags & AVFMT_NOFILE))
    	{
    		if (avio_open(&m_oFmtCtx->pb, outFileName, AVIO_FLAG_WRITE) < 0)
    		{
    			printf("can not open output file handle!\n");
    			return -1;
    		}
    	}
    	//Write the file header
    	if (avformat_write_header(m_oFmtCtx, nullptr) < 0)
    	{
    		printf("can not write the header of the output file!\n");
    		return -1;
    	}
    	return 0;
    }
    
    QString ScreenRecordImpl::GetSpeakerDeviceName()
    {
    	char sName[256] = { 0 };
    	QString speaker = "";
    	bool bRet = false;
    	::CoInitialize(NULL);
    
    	ICreateDevEnum* pCreateDevEnum;//enumerate all audio render (speaker) devices
    	HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum,
    		NULL,
    		CLSCTX_INPROC_SERVER,
    		IID_ICreateDevEnum,
    		(void**)&pCreateDevEnum);
    
    	IEnumMoniker* pEm;
    	hr = pCreateDevEnum->CreateClassEnumerator(CLSID_AudioRendererCategory, &pEm, 0);
    	if (hr != NOERROR)
    	{
    		::CoUninitialize();
    		return "";
    	}
    
    	pEm->Reset();
    	ULONG cFetched;
    	IMoniker *pM;
    	while (hr = pEm->Next(1, &pM, &cFetched), hr == S_OK)
    	{
    
    		IPropertyBag* pBag = NULL;
    		hr = pM->BindToStorage(0, 0, IID_IPropertyBag, (void**)&pBag);
    		if (SUCCEEDED(hr))
    		{
    			VARIANT var;
    			var.vt = VT_BSTR;
    			hr = pBag->Read(L"FriendlyName", &var, NULL);//other properties, such as the description, are also available
    			if (hr == NOERROR)
    			{
    				//get the device name
    				WideCharToMultiByte(CP_ACP, 0, var.bstrVal, -1, sName, 256, "", NULL);
    				speaker = QString::fromLocal8Bit(sName);
    				SysFreeString(var.bstrVal);
    			}
    			pBag->Release();
    		}
    		pM->Release();
    		bRet = true;
    	}
    	pCreateDevEnum = NULL;
    	pEm = NULL;
    	::CoUninitialize();
    	return speaker;
    }
    
    QString ScreenRecordImpl::GetMicrophoneDeviceName()
    {
    	char sName[256] = { 0 };
    	QString capture = "";
    	bool bRet = false;
    	::CoInitialize(NULL);
    
    	ICreateDevEnum* pCreateDevEnum;//enumerate all audio capture devices
    	HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum,
    		NULL,
    		CLSCTX_INPROC_SERVER,
    		IID_ICreateDevEnum,
    		(void**)&pCreateDevEnum);
    
    	IEnumMoniker* pEm;
    	hr = pCreateDevEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory, &pEm, 0);
    	if (hr != NOERROR)
    	{
    		::CoUninitialize();
    		return "";
    	}
    
    	pEm->Reset();
    	ULONG cFetched;
    	IMoniker *pM;
    	while (hr = pEm->Next(1, &pM, &cFetched), hr == S_OK)
    	{
    
    		IPropertyBag* pBag = NULL;
    		hr = pM->BindToStorage(0, 0, IID_IPropertyBag, (void**)&pBag);
    		if (SUCCEEDED(hr))
    		{
    			VARIANT var;
    			var.vt = VT_BSTR;
    			hr = pBag->Read(L"FriendlyName", &var, NULL);//other properties, such as the description, are also available
    			if (hr == NOERROR)
    			{
    				//get the device name
    				WideCharToMultiByte(CP_ACP, 0, var.bstrVal, -1, sName, 256, "", NULL);
    				capture = QString::fromLocal8Bit(sName);
    				SysFreeString(var.bstrVal);
    			}
    			pBag->Release();
    		}
    		pM->Release();
    		bRet = true;
    	}
    	pCreateDevEnum = NULL;
    	pEm = NULL;
    	::CoUninitialize();
    	return capture;
    }
    
    AVFrame* ScreenRecordImpl::AllocAudioFrame(AVCodecContext* c, int nbSamples)
    {
    	AVFrame *frame = av_frame_alloc();
    	int ret;
    
    	frame->format = c->sample_fmt;
    	frame->channel_layout = c->channel_layout ? c->channel_layout: AV_CH_LAYOUT_STEREO;
    	frame->sample_rate = c->sample_rate;
    	frame->nb_samples = nbSamples;
    
    	if (nbSamples)
    	{
    		ret = av_frame_get_buffer(frame, 0);
    		if (ret < 0) 
    		{
    			qDebug() << "av_frame_get_buffer failed";
    			return nullptr;
    		}
    	}
    	return frame;
    }
    
    void ScreenRecordImpl::InitVideoBuffer()
    {
    	m_vOutFrameSize = av_image_get_buffer_size(m_vEncodeCtx->pix_fmt, m_width, m_height, 1);
    	m_vOutFrameBuf = (uint8_t *)av_malloc(m_vOutFrameSize);
    	m_vOutFrame = av_frame_alloc();
    	//Point the AVFrame's data pointers at buf; the data itself is written into buf later
    	av_image_fill_arrays(m_vOutFrame->data, m_vOutFrame->linesize, m_vOutFrameBuf, m_vEncodeCtx->pix_fmt, m_width, m_height, 1);
    	//Allocate a 30-frame FIFO
    	if (!(m_vFifoBuf = av_fifo_alloc_array(30, m_vOutFrameSize)))
    	{
    		qDebug() << "av_fifo_alloc_array failed";
    		return;
    	}
    }
    
    void ScreenRecordImpl::InitAudioBuffer()
    {
    	m_nbSamples = m_aEncodeCtx->frame_size;
    	if (!m_nbSamples)
    	{
    		qDebug() << "m_nbSamples==0";
    		m_nbSamples = 1024;
    	}
    	m_aFifoBuf = av_audio_fifo_alloc(m_aEncodeCtx->sample_fmt, m_aEncodeCtx->channels, 30 * m_nbSamples);
    	if (!m_aFifoBuf)
    	{
    		qDebug() << "av_audio_fifo_alloc failed";
    		return;
    	}
    }
    
    void ScreenRecordImpl::FlushVideoDecoder()
    {
    	int ret = -1;
    	int y_size = m_width * m_height;
    	AVFrame	*oldFrame = av_frame_alloc();
    	AVFrame *newFrame = av_frame_alloc();
    
    	ret = avcodec_send_packet(m_vDecodeCtx, nullptr);
    	if (ret != 0)
    	{
    		qDebug() << "flush video avcodec_send_packet failed, ret: " << ret;
    		return;
    	}
    	while (ret >= 0)
    	{
    		ret = avcodec_receive_frame(m_vDecodeCtx, oldFrame);
    		if (ret < 0)
    		{
    			if (ret == AVERROR(EAGAIN))
    			{
    				qDebug() << "flush EAGAIN avcodec_receive_frame";
    				ret = 1;
    				continue;
    			}
    			else if (ret == AVERROR_EOF)
    			{
    				qDebug() << "flush video decoder finished";
    				break;
    			}
    			qDebug() << "flush video avcodec_receive_frame error, ret: " << ret;
    			return;
    		}
    		++g_vCollectFrameCnt;
    		sws_scale(m_swsCtx, (const uint8_t* const*)oldFrame->data, oldFrame->linesize, 0,
    			m_vEncodeCtx->height, newFrame->data, newFrame->linesize);
    
    		{
    			unique_lock<mutex> lk(m_mtxVBuf);
    			m_cvVBufNotFull.wait(lk, [this] { return av_fifo_space(m_vFifoBuf) >= m_vOutFrameSize; });
    		}
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[0], y_size, NULL);
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[1], y_size / 4, NULL);
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[2], y_size / 4, NULL);
    		m_cvVBufNotEmpty.notify_one();
    	}
    	qDebug() << "video collect frame count: " << g_vCollectFrameCnt;
    }
    
    //void ScreenRecordImpl::FlushVideoEncoder()
    //{
    //	int ret = -1;
    //	AVPacket pkt = { 0 };
    //	av_init_packet(&pkt);
    //	ret = avcodec_send_frame(m_vEncodeCtx, nullptr);
    //	qDebug() << "avcodec_send_frame ret:" << ret;
    //	while (ret >= 0)
    //	{
    //		ret = avcodec_receive_packet(m_vEncodeCtx, &pkt);
    //		if (ret < 0)
    //		{
    //			av_packet_unref(&pkt);
    //			if (ret == AVERROR(EAGAIN))
    //			{
    //				qDebug() << "flush EAGAIN avcodec_receive_packet";
    //				ret = 1;
    //				continue;
    //			}
    //			else if (ret == AVERROR_EOF)
    //			{
    //				qDebug() << "flush video encoder finished";
    //				break;
    //			}
    //			qDebug() << "flush video avcodec_receive_packet failed, ret: " << ret;
    //			return;
    //		}
    //		pkt.stream_index = m_vOutIndex;
    //		av_packet_rescale_ts(&pkt, m_vEncodeCtx->time_base, m_oFmtCtx->streams[m_vOutIndex]->time_base);
    //
    //		ret = av_interleaved_write_frame(m_oFmtCtx, &pkt);
    //		if (ret == 0)
    //			qDebug() << "flush Write video packet id: " << ++g_vEncodeFrameCnt;
    //		else
    //			qDebug() << "video av_interleaved_write_frame failed, ret:" << ret;
    //		av_free_packet(&pkt);
    //	}
    //}
    
    void ScreenRecordImpl::FlushAudioDecoder()
    {
    	int ret = -1;
    	AVPacket pkt = { 0 };
    	av_init_packet(&pkt);
    	int dstNbSamples, maxDstNbSamples;
    	AVFrame *rawFrame = av_frame_alloc();
    	AVFrame *newFrame = AllocAudioFrame(m_aEncodeCtx, m_nbSamples);
    	maxDstNbSamples = dstNbSamples = av_rescale_rnd(m_nbSamples,
    		m_aEncodeCtx->sample_rate, m_aDecodeCtx->sample_rate, AV_ROUND_UP);
    
    	ret = avcodec_send_packet(m_aDecodeCtx, nullptr);
    	if (ret != 0)
    	{
    		qDebug() << "flush audio avcodec_send_packet  failed, ret: " << ret;
    		return;
    	}
    	while (ret >= 0)
    	{
    		ret = avcodec_receive_frame(m_aDecodeCtx, rawFrame);
    		if (ret < 0)
    		{
    			if (ret == AVERROR(EAGAIN))
    			{
    				qDebug() << "flush audio EAGAIN avcodec_receive_frame";
    				ret = 1;
    				continue;
    			}
    			else if (ret == AVERROR_EOF)
    			{
    				qDebug() << "flush audio decoder finished";
    				break;
    			}
    			qDebug() << "flush audio avcodec_receive_frame error, ret: " << ret;
    			return;
    		}
    		++g_aCollectFrameCnt;
    
    		dstNbSamples = av_rescale_rnd(swr_get_delay(m_swrCtx, m_aDecodeCtx->sample_rate) + rawFrame->nb_samples,
    			m_aEncodeCtx->sample_rate, m_aDecodeCtx->sample_rate, AV_ROUND_UP);
    		if (dstNbSamples > maxDstNbSamples)
    		{
    			qDebug() << "flush audio newFrame realloc";
    			av_freep(&newFrame->data[0]);
    			ret = av_samples_alloc(newFrame->data, newFrame->linesize, m_aEncodeCtx->channels,
    				dstNbSamples, m_aEncodeCtx->sample_fmt, 1);
    			if (ret < 0)
    			{
    				qDebug() << "flush av_samples_alloc failed";
    				return;
    			}
    			maxDstNbSamples = dstNbSamples;
    			m_aEncodeCtx->frame_size = dstNbSamples;
    			m_nbSamples = newFrame->nb_samples;
    		}
    		newFrame->nb_samples = swr_convert(m_swrCtx, newFrame->data, dstNbSamples,
    			(const uint8_t **)rawFrame->data, rawFrame->nb_samples);
    		if (newFrame->nb_samples < 0)
    		{
    			qDebug() << "flush swr_convert failed";
    			return;
    		}
    
    		{
    			unique_lock<mutex> lk(m_mtxABuf);
    			m_cvABufNotFull.wait(lk, [newFrame, this] { return av_audio_fifo_space(m_aFifoBuf) >= newFrame->nb_samples; });
    		}
    		if (av_audio_fifo_write(m_aFifoBuf, (void **)newFrame->data, newFrame->nb_samples) < newFrame->nb_samples)
    		{
    			qDebug() << "av_audio_fifo_write";
    			return;
    		}
    		m_cvABufNotEmpty.notify_one();
    	}
    	qDebug() << "audio collect frame count: " << g_aCollectFrameCnt;
    }
    
    //void ScreenRecordImpl::FlushAudioEncoder()
    //{
    //}
    
    void ScreenRecordImpl::FlushEncoders()
    {
    	int ret = -1;
    	bool vBeginFlush = false;
    	bool aBeginFlush = false;
    
    	m_vCurPts = m_aCurPts = 0;
    
    	int nFlush = 2;
    
    	while (1)
    	{
    		AVPacket pkt = { 0 };
    		av_init_packet(&pkt);
    		if (av_compare_ts(m_vCurPts, m_oFmtCtx->streams[m_vOutIndex]->time_base,
    			m_aCurPts, m_oFmtCtx->streams[m_aOutIndex]->time_base) <= 0)
    		{
    			if (!vBeginFlush)
    			{
    				vBeginFlush = true;
    				ret = avcodec_send_frame(m_vEncodeCtx, nullptr);
    				if (ret != 0)
    				{
    					qDebug() << "flush video avcodec_send_frame failed, ret: " << ret;
    					return;
    				}
    			}
    			ret = avcodec_receive_packet(m_vEncodeCtx, &pkt);
    			if (ret < 0)
    			{
    				av_packet_unref(&pkt);
    				if (ret == AVERROR(EAGAIN))
    				{
    					qDebug() << "flush video EAGAIN avcodec_receive_packet";
    					ret = 1;
    					continue;
    				}
    				else if (ret == AVERROR_EOF)
    				{
    					qDebug() << "flush video encoder finished";
    					//break;
    					if (!(--nFlush))
    						break;
    					m_vCurPts = INT_MAX;
    					continue;
    				}
    				qDebug() << "flush video avcodec_receive_packet failed, ret: " << ret;
    				return;
    			}
    			pkt.stream_index = m_vOutIndex;
    			//Rescale pts from the encoder time base to the muxer time base
    			av_packet_rescale_ts(&pkt, m_vEncodeCtx->time_base, m_oFmtCtx->streams[m_vOutIndex]->time_base);
    			m_vCurPts = pkt.pts;
    			qDebug() << "m_vCurPts: " << m_vCurPts;
    
    			ret = av_interleaved_write_frame(m_oFmtCtx, &pkt);
    			if (ret == 0)
    				qDebug() << "flush Write video packet id: " << ++g_vEncodeFrameCnt;
    			else
    				qDebug() << "flush video av_interleaved_write_frame failed, ret:" << ret;
    			av_packet_unref(&pkt);	//av_free_packet is deprecated
    		}
    		else
    		{
    			if (!aBeginFlush)
    			{
    				aBeginFlush = true;
    				ret = avcodec_send_frame(m_aEncodeCtx, nullptr);
    				if (ret != 0)
    				{
    					qDebug() << "flush audio avcodec_send_frame failed, ret: " << ret;
    					return;
    				}
    			}
    			ret = avcodec_receive_packet(m_aEncodeCtx, &pkt);
    			if (ret < 0)
    			{
    				av_packet_unref(&pkt);
    				if (ret == AVERROR(EAGAIN))
    				{
    					qDebug() << "flush EAGAIN avcodec_receive_packet";
    					ret = 1;
    					continue;
    				}
    				else if (ret == AVERROR_EOF)
    				{
    					qDebug() << "flush audio encoder finished";
    					/*break;*/
    					if (!(--nFlush))
    						break;
    					m_aCurPts = INT_MAX;
    					continue;
    				}
    				qDebug() << "flush audio avcodec_receive_packet failed, ret: " << ret;
    				return;
    			}
    			pkt.stream_index = m_aOutIndex;
    			//Rescale pts from the encoder time base to the muxer time base
    			av_packet_rescale_ts(&pkt, m_aEncodeCtx->time_base, m_oFmtCtx->streams[m_aOutIndex]->time_base);
    			m_aCurPts = pkt.pts;
    			qDebug() << "m_aCurPts: " << m_aCurPts;
    			ret = av_interleaved_write_frame(m_oFmtCtx, &pkt);
    			if (ret == 0)
    				qDebug() << "flush write audio packet id: " << ++g_aEncodeFrameCnt;
    			else
    				qDebug() << "flush audio av_interleaved_write_frame failed, ret: " << ret;
    			av_packet_unref(&pkt);	//av_free_packet is deprecated
    		}
    	}
    }
    
    void ScreenRecordImpl::Release()
    {
    	if (m_vOutFrame)
    	{
    		av_frame_free(&m_vOutFrame);
    		m_vOutFrame = nullptr;
    	}
    	if (m_vOutFrameBuf)
    	{
    		av_free(m_vOutFrameBuf);
    		m_vOutFrameBuf = nullptr;
    	}
    	if (m_oFmtCtx)
    	{
    		avio_close(m_oFmtCtx->pb);
    		avformat_free_context(m_oFmtCtx);
    		m_oFmtCtx = nullptr;
    	}
    	//if (m_vDecodeCtx)
    	//{
        //  // FIXME: why does this crash?
    	//	avcodec_free_context(&m_vDecodeCtx);
    	//	m_vDecodeCtx = nullptr;
    	//}
    	if (m_aDecodeCtx)
    	{
    		avcodec_free_context(&m_aDecodeCtx);
    		m_aDecodeCtx = nullptr;
    	}
    	if (m_vEncodeCtx)
    	{
    		avcodec_free_context(&m_vEncodeCtx);
    		m_vEncodeCtx = nullptr;
    	}
    	if (m_aEncodeCtx)
    	{
    		avcodec_free_context(&m_aEncodeCtx);
    		m_aEncodeCtx = nullptr;
    	}
    	if (m_vFifoBuf)
    	{
    		av_fifo_freep(&m_vFifoBuf);
    		m_vFifoBuf = nullptr;
    	}
    	if (m_aFifoBuf)
    	{
    		av_audio_fifo_free(m_aFifoBuf);
    		m_aFifoBuf = nullptr;
    	}
    	if (m_vFmtCtx)
    	{
    		avformat_close_input(&m_vFmtCtx);
    		m_vFmtCtx = nullptr;
    	}
    	if (m_aFmtCtx)
    	{
    		avformat_close_input(&m_aFmtCtx);
    		m_aFmtCtx = nullptr;
    	}
    }
    
    void ScreenRecordImpl::MuxThreadProc()
    {
    	int ret = -1;
    	bool done = false;
    	int vFrameIndex = 0, aFrameIndex = 0;
    
    	av_register_all();
    	avdevice_register_all();
    	avcodec_register_all();
    
    	if (OpenVideo() < 0)
    		return;
    	if (OpenAudio() < 0)
    		return;
    	if (OpenOutput() < 0)
    		return;
    
    	InitVideoBuffer();
    	InitAudioBuffer();
    
    	//Start the audio and video capture threads
    	std::thread screenRecord(&ScreenRecordImpl::ScreenRecordThreadProc, this);
    	std::thread soundRecord(&ScreenRecordImpl::SoundRecordThreadProc, this);
    	screenRecord.detach();
    	soundRecord.detach();
    
    	while (1)
    	{
    		if (m_state == RecordState::Stopped && !done)
    			done = true;
    		if (done)
    		{
    			unique_lock<mutex> vBufLock(m_mtxVBuf, std::defer_lock);
    			unique_lock<mutex> aBufLock(m_mtxABuf, std::defer_lock);
    			std::lock(vBufLock, aBufLock);
    			if (av_fifo_size(m_vFifoBuf) < m_vOutFrameSize &&
    				av_audio_fifo_size(m_aFifoBuf) < m_nbSamples)
    			{
    				qDebug() << "both video and audio fifo buf are empty, break";
    				break;
    			}
    		}
    		if (av_compare_ts(m_vCurPts, m_oFmtCtx->streams[m_vOutIndex]->time_base,
    			m_aCurPts, m_oFmtCtx->streams[m_aOutIndex]->time_base) <= 0)
    	/*	if (av_compare_ts(vCurPts, m_vEncodeCtx->time_base,
    			aCurPts, m_aEncodeCtx->time_base) <= 0)*/
    		{
    			if (done)
    			{
    				lock_guard<mutex> lk(m_mtxVBuf);
    				if (av_fifo_size(m_vFifoBuf) < m_vOutFrameSize)
    				{
    				qDebug() << "video write done";
    				//break;
    				//m_vCurPts = 0x7ffffffffffffffe;	//near-maximum signed int64_t
    				m_vCurPts = INT_MAX;
    					continue;
    				}
    			}
    			else 
    			{
    				unique_lock<mutex> lk(m_mtxVBuf);
    				m_cvVBufNotEmpty.wait(lk, [this] { return av_fifo_size(m_vFifoBuf) >= m_vOutFrameSize; });
    			}
    			av_fifo_generic_read(m_vFifoBuf, m_vOutFrameBuf, m_vOutFrameSize, NULL);
    			m_cvVBufNotFull.notify_one();
    
    			//Set the video frame parameters
    			//m_vOutFrame->pts = vFrameIndex * ((m_oFmtCtx->streams[m_vOutIndex]->time_base.den / m_oFmtCtx->streams[m_vOutIndex]->time_base.num) / m_fps);
    			m_vOutFrame->pts = vFrameIndex++;
    			m_vOutFrame->format = m_vEncodeCtx->pix_fmt;
    			m_vOutFrame->width = m_vEncodeCtx->width;
    			m_vOutFrame->height = m_vEncodeCtx->height;
    
    			AVPacket pkt = { 0 };
    			av_init_packet(&pkt);
    			ret = avcodec_send_frame(m_vEncodeCtx, m_vOutFrame);
    			if (ret != 0)
    			{
    				qDebug() << "video avcodec_send_frame failed, ret: " << ret;
    				av_packet_unref(&pkt);
    				continue;
    			}
    			ret = avcodec_receive_packet(m_vEncodeCtx, &pkt);
    			if (ret != 0)
    			{
    				qDebug() << "video avcodec_receive_packet failed, ret: " << ret;
    				av_packet_unref(&pkt);
    				continue;
    			}
    			pkt.stream_index = m_vOutIndex;
    			//Rescale pts from the encoder time base to the muxer time base
    			av_packet_rescale_ts(&pkt, m_vEncodeCtx->time_base, m_oFmtCtx->streams[m_vOutIndex]->time_base);
    
    			m_vCurPts = pkt.pts;
    			qDebug() << "m_vCurPts: " << m_vCurPts;
    
    			ret = av_interleaved_write_frame(m_oFmtCtx, &pkt);
    			if (ret == 0)
    				qDebug() << "Write video packet id: " << ++g_vEncodeFrameCnt;
    			else
    				qDebug() << "video av_interleaved_write_frame failed, ret:" << ret;
    			av_packet_unref(&pkt);	//av_free_packet is deprecated
    		}
    		else
    		{
    			if (done)
    			{
    				lock_guard<mutex> lk(m_mtxABuf);
    				if (av_audio_fifo_size(m_aFifoBuf) < m_nbSamples)
    				{
    					qDebug() << "audio write done";
    					//m_aCurPts = 0x7fffffffffffffff;
    					m_aCurPts = INT_MAX;
    					continue;
    				}
    			}
    			else
    			{
    				unique_lock<mutex> lk(m_mtxABuf);
    				m_cvABufNotEmpty.wait(lk, [this] { return av_audio_fifo_size(m_aFifoBuf) >= m_nbSamples; });
    			}
    
    			int ret = -1;
    			AVFrame *aFrame = av_frame_alloc();
    			aFrame->nb_samples = m_nbSamples;
    			aFrame->channel_layout = m_aEncodeCtx->channel_layout;
    			aFrame->format = m_aEncodeCtx->sample_fmt;
    			aFrame->sample_rate = m_aEncodeCtx->sample_rate;
    			aFrame->pts = m_nbSamples * aFrameIndex++;
    			//Allocate the frame's data buffers
    			ret = av_frame_get_buffer(aFrame, 0);
    			av_audio_fifo_read(m_aFifoBuf, (void **)aFrame->data, m_nbSamples);
    			m_cvABufNotFull.notify_one();
    
    			AVPacket pkt = { 0 };
    			av_init_packet(&pkt);
    			ret = avcodec_send_frame(m_aEncodeCtx, aFrame);
    			if (ret != 0)
    			{
    				qDebug() << "audio avcodec_send_frame failed, ret: " << ret;
    				av_frame_free(&aFrame);
    				av_packet_unref(&pkt);
    				continue;
    			}
    			ret = avcodec_receive_packet(m_aEncodeCtx, &pkt);
    			if (ret != 0)
    			{
    				qDebug() << "audio avcodec_receive_packet failed, ret: " << ret;
    				av_frame_free(&aFrame);
    				av_packet_unref(&pkt);
    				continue;
    			}
    			pkt.stream_index = m_aOutIndex;
    
    			av_packet_rescale_ts(&pkt, m_aEncodeCtx->time_base, m_oFmtCtx->streams[m_aOutIndex]->time_base);
    
    			m_aCurPts = pkt.pts;
    			qDebug() << "aCurPts: " << m_aCurPts;
    
    			ret = av_interleaved_write_frame(m_oFmtCtx, &pkt);
    			if (ret == 0)
    				qDebug() << "Write audio packet id: " << ++g_aEncodeFrameCnt;
    			else
    				qDebug() << "audio av_interleaved_write_frame failed, ret: " << ret;
    
    			av_frame_free(&aFrame);
    			av_packet_unref(&pkt);	//av_free_packet is deprecated
    		}
    	}
    	FlushEncoders();
    	av_write_trailer(m_oFmtCtx);
    	Release();
    	qDebug() << "parent thread exit";
    }
    
    void ScreenRecordImpl::ScreenRecordThreadProc()
    {
    	int ret = -1;
    	AVPacket pkt = { 0 };
    	av_init_packet(&pkt);
    	int y_size = m_width * m_height;
    	AVFrame	*oldFrame = av_frame_alloc();
    	AVFrame *newFrame = av_frame_alloc();
    
    	int newFrameBufSize = av_image_get_buffer_size(m_vEncodeCtx->pix_fmt, m_width, m_height, 1);
    	uint8_t *newFrameBuf = (uint8_t*)av_malloc(newFrameBufSize);
    	av_image_fill_arrays(newFrame->data, newFrame->linesize, newFrameBuf,
    		m_vEncodeCtx->pix_fmt, m_width, m_height, 1);
    
    	while (m_state != RecordState::Stopped)
    	{
    		if (m_state == RecordState::Paused)
    		{
    			unique_lock<mutex> lk(m_mtxPause);
    			m_cvNotPause.wait(lk, [this] { return m_state != RecordState::Paused; });
    		}
    		if (av_read_frame(m_vFmtCtx, &pkt) < 0)
    		{
    			qDebug() << "video av_read_frame < 0";
    			continue;
    		}
    		if (pkt.stream_index != m_vIndex)
    		{
    			qDebug() << "not a video packet from video input";
    			av_packet_unref(&pkt);
    			continue;	//without this the unreferenced packet falls through to the decoder
    		}
    		ret = avcodec_send_packet(m_vDecodeCtx, &pkt);
    		if (ret != 0)
    		{
    			qDebug() << "video avcodec_send_packet failed, ret:" << ret;
    			av_packet_unref(&pkt);
    			continue;
    		}
    		ret = avcodec_receive_frame(m_vDecodeCtx, oldFrame);
    		if (ret != 0)
    		{
    			qDebug() << "video avcodec_receive_frame failed, ret:" << ret;
    			av_packet_unref(&pkt);
    			continue;
    		}
    		++g_vCollectFrameCnt;
    		sws_scale(m_swsCtx, (const uint8_t* const*)oldFrame->data, oldFrame->linesize, 0,
    			m_vEncodeCtx->height, newFrame->data, newFrame->linesize);
    
    		{
    			unique_lock<mutex> lk(m_mtxVBuf);
    			m_cvVBufNotFull.wait(lk, [this] { return av_fifo_space(m_vFifoBuf) >= m_vOutFrameSize; });
    		}
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[0], y_size, NULL);
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[1], y_size / 4, NULL);
    		av_fifo_generic_write(m_vFifoBuf, newFrame->data[2], y_size / 4, NULL);
    		m_cvVBufNotEmpty.notify_one();
    
    		av_packet_unref(&pkt);
    	}
    	FlushVideoDecoder();
    
    	av_free(newFrameBuf);
    	av_frame_free(&oldFrame);
    	av_frame_free(&newFrame);
    	qDebug() << "screen record thread exit";
    }
    
    void ScreenRecordImpl::SoundRecordThreadProc()
    {
    	int ret = -1;
    	AVPacket pkt = { 0 };
    	av_init_packet(&pkt);
    	int nbSamples = m_nbSamples;
    	int dstNbSamples, maxDstNbSamples;
    	AVFrame *rawFrame = av_frame_alloc();
    	AVFrame *newFrame = AllocAudioFrame(m_aEncodeCtx, nbSamples);
    
    	maxDstNbSamples = dstNbSamples = av_rescale_rnd(nbSamples, 
    		m_aEncodeCtx->sample_rate, m_aDecodeCtx->sample_rate, AV_ROUND_UP);
    
    	while (m_state != RecordState::Stopped)
    	{
    		if (m_state == RecordState::Paused)
    		{
    			unique_lock<mutex> lk(m_mtxPause);
    			m_cvNotPause.wait(lk, [this] { return m_state != RecordState::Paused; });
    		}
    		if (av_read_frame(m_aFmtCtx, &pkt) < 0)
    		{
    			qDebug() << "audio av_read_frame < 0";
    			continue;
    		}
    		if (pkt.stream_index != m_aIndex)
    		{
    			qDebug() << "not an audio packet";
    			av_packet_unref(&pkt);
    			continue;
    		}
    		ret = avcodec_send_packet(m_aDecodeCtx, &pkt);
    		if (ret != 0)
    		{
    			qDebug() << "audio avcodec_send_packet failed, ret: " << ret;
    			av_packet_unref(&pkt);
    			continue;
    		}
    		ret = avcodec_receive_frame(m_aDecodeCtx, rawFrame);
    		if (ret != 0)
    		{
    			qDebug() << "audio avcodec_receive_frame failed, ret: " << ret;
    			av_packet_unref(&pkt);
    			continue;
    		}
    		++g_aCollectFrameCnt;
    
    		dstNbSamples = av_rescale_rnd(swr_get_delay(m_swrCtx, m_aDecodeCtx->sample_rate) + rawFrame->nb_samples,
    			m_aEncodeCtx->sample_rate, m_aDecodeCtx->sample_rate, AV_ROUND_UP);
    		if (dstNbSamples > maxDstNbSamples) 
    		{
    			qDebug() << "audio newFrame realloc";
    			av_freep(&newFrame->data[0]);
    			//nb_samples*nb_channels*Bytes_sample_fmt
    			ret = av_samples_alloc(newFrame->data, newFrame->linesize, m_aEncodeCtx->channels,
    				dstNbSamples, m_aEncodeCtx->sample_fmt, 1);
    			if (ret < 0)
    			{
    				qDebug() << "av_samples_alloc failed";
    				return;
    			}
    
    			maxDstNbSamples = dstNbSamples;
    			m_aEncodeCtx->frame_size = dstNbSamples;
    			m_nbSamples = newFrame->nb_samples;	//1024
    			/*
    			 * m_nbSamples = dstNbSamples;		//22050
    			 * If m_nbSamples is set to dstNbSamples instead, av_audio_fifo_write
    			 * fails - it is unclear why. 22050 seems like the better value, since
    			 * with 1024 each frame the encoder thread encodes carries very few
    			 * samples, and yet the audio produced with 1024 sounds fine.
    			 * Should the fifo instead be reallocated to match the captured nb_samples?
    			*/
    		}
    
    		newFrame->nb_samples = swr_convert(m_swrCtx, newFrame->data, dstNbSamples,
    			(const uint8_t **)rawFrame->data, rawFrame->nb_samples);
    		if (newFrame->nb_samples < 0)
    		{
    			qDebug() << "swr_convert error";
    			return;
    		}
    		{
    			unique_lock<mutex> lk(m_mtxABuf);
    			m_cvABufNotFull.wait(lk, [newFrame, this] { return av_audio_fifo_space(m_aFifoBuf) >= newFrame->nb_samples; });
    		}
    		if (av_audio_fifo_write(m_aFifoBuf, (void **)newFrame->data, newFrame->nb_samples) < newFrame->nb_samples)
    		{
    			qDebug() << "av_audio_fifo_write failed";
    			return;
    		}
    		m_cvABufNotEmpty.notify_one();
    	}
    	FlushAudioDecoder();
    	av_frame_free(&rawFrame);
    	av_frame_free(&newFrame);
    	qDebug() << "sound record thread exit";
    }
    

    ScreenRecordTest.h

    #pragma once
    #include <QObject>
    #include <QVariant>
    
    class ScreenRecord : public QObject
    {
    	Q_OBJECT
    public:
    	ScreenRecord(QObject *parent = Q_NULLPTR);
    
    private:
    	QVariantMap m_args;
    };

    ScreenRecordTest.cpp

    #include "ScreenRecordTest.h"
    #include "ScreenRecordImpl.h"
    #include <QTimer>
    
    ScreenRecord::ScreenRecord(QObject *parent) :
    	QObject(parent)
    {
    	ScreenRecordImpl *sr = new ScreenRecordImpl(this);
    	QVariantMap args;
    	args["filePath"] = "test.mp4";
    	//args["width"] = 1920;
    	//args["height"] = 1080;
    	args["width"] = 1440;
    	args["height"] = 900;
    	args["fps"] = 30;
    	args["audioBitrate"] = 128000;
    
    	sr->Init(args);
    
    	QTimer::singleShot(1000, sr, SLOT(Start()));
    	//QTimer::singleShot(5000, sr, SLOT(Pause()));
    	QTimer::singleShot(11000, sr, SLOT(Stop()));
    }
    

    main.cpp

    #include <QApplication>
    #include "ScreenRecordImpl.h"
    #include "ScreenRecordTest.h"
    
    int main(int argc, char *argv[])
    {
    	QApplication a(argc, argv);
    
    	ScreenRecord sr;
    
    	return a.exec();
    }
    

     

  • Capture the screen with FFmpeg and send it out over UDP; the receiver can play it directly or save it: 1) play: ffplay -f h264 udp://<local IP>:6666 2) save: ffmpeg -i udp://<local IP>:6666 -c copy dump.flv Since the stream is sent over UDP, whether the receiver is online...
  • This is an intermediate test project from an audio/video project I was researching, built with VS2015 and adapted from several other resources, posted to earn a few points. ...Features: screen and audio recording on Windows via FFmpeg; video RGB converted to YUV and encoded to H.264 with libx264; PCM encoded to AAC with faac; then muxed into MP4 with mp4v2
  • FFMPEG screen recorder 1.0

    2019-11-08 23:25:49
    A Windows screen recording tool built on ffmpeg; it records the desktop, microphone, and system audio simultaneously and muxes them into an MP4 file. Requires Visual Studio 2015 or later; change the local audio capture device name before starting.
  • Today I saw an article using a Raspberry Pi + ffmpeg to... so I spent a whole day fiddling with ffmpeg. To record the screen and stream it, this one command is enough: ffmpeg -f pulse -i alsa_output.xxxxxxxxxxxxx.0.analog-stereo.monitor -f x11grab -s 1920x1080 -framerate 15 ...

    Today I came across an article that used a Raspberry Pi + ffmpeg to stream TV shows to Bilibili, and it struck me that I could use a Raspberry Pi to build a music station, maybe with a digital photo frame as well. So I spent a whole day fiddling with ffmpeg.

    To record the screen and stream it, this single command is enough:

    ffmpeg -f pulse -i alsa_output.xxxxxxxxxxxxx.0.analog-stereo.monitor -f x11grab -s 1920x1080 -framerate 15 -i :0.0 -preset ultrafast -pix_fmt yuv420p -s 1280x720 -threads 0 -f flv "rtmp://balabala"

    Here 1920x1080 is the original resolution and 1280x720 is the scaled output.

    -f pulse -i alsa_output

    selects the device that outputs the system's internal audio.

    As for how to obtain that

    alsa_output.xxxxxxxxxxxxx.0.analog-stereo.monitor

    string, it goes like this:

    pactl list | grep -A2 'monitor'

    // you will see output like this

    Monitor Source: alsa_output.pci-0000_00_1b.0.analog-stereo.monitor

    Latency: 24504 usec, configured 24988 usec

    Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY

    --

    Name: alsa_output.pci-0000_00_1b.0.analog-stereo.monitor

    Description: Monitor of Built-in Audio Analog Stereo

    Driver: module-alsa-card.c

    --

    device.class = "monitor"

    alsa.card = "1"

    alsa.card_name = "HDA Intel PCH"

    There it is - just run the command.

    # EOF.

    References: 《Linux下流畅地录屏》 (smooth screen recording on Linux)

    FFmpeg Wiki

  • Copyright notice: this is the blogger's original article; without the blogger's... Qt+FFmpeg screen and audio recording. Recording supports: start, pause, stop. The FFmpeg API is wrapped with Qt + C++, without using deprecated FFmpeg APIs. Main thread: the Qt GUI thread, to which a recording UI can later be attached. MuxThreadProc: the mux...
  • Recently I have been working on LAN screen sharing: recording one device's screen and pushing it in real time to one or more... 1. Start the crtmp streaming server 2. Invoke the FFmpeg program from the command line with the following commands (the parameters differ slightly; use one at a time): ffmpeg ...
  • C++ screen recording, FFmpeg screen recording, recording format conversion

    1,000+ views 2017-07-21 16:48:37
    I needed to record the client area of my application, but after much searching found nothing easy to integrate, until I discovered that calling ffmpeg.exe via a cmd command implements screen recording, including recording-format conversion - quite interesting. Key points: 1. What is FFmpeg? FFmpeg...
  • Qt+FFmpeg screen and audio recording. Recording supports: start, pause, stop. The FFmpeg API is wrapped with Qt + C++, without using deprecated FFmpeg APIs. Main thread: the Qt GUI thread, to which a recording UI can later be attached. MuxThreadProc: the muxing thread, which starts the audio/video capture threads and opens the input/...
  • Screen recording with FFMPEG

    2021-03-06 23:34:13
    ffmpeg -f gdigrab -r 10 -i desktop output.mp4 To limit capture to a region and show the grabbed region: ffmpeg -f gdigrab -r 10 -offset_x 10 -offset_y 20 -s 640x480 -show_region 1 -i desktop output.mp4 ...
  • A desktop screen-recording demo using AForge.Video.FFMPEG. AForge.Video.FFMPEG is an open-source project; its source code can be downloaded from http://www.aforgenet.com/framework/downloads.html
  • Is ffmpeg screen capture unable to record at a size as large as 5760*1080? I have tried up to 4000, but any larger and the recorded video file is invalid. Is recording at that size really impossible? D:\\TGQ\\hzlh\\VS2017\\Qt\\project2\\Project1\\bin\\ffmpeg.exe -f gdigrab -...
  • ffmpeg screen recording commands

    1,000+ views 2019-08-20 09:55:47
    Windows (with dshow installed) ...ffmpeg -rtbufsize 100M -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -vcodec libx264 -preset veryfast -crf 22 -tune:v zerolatency -pix_fmt yuv420p ...
  • ffmpeg screen and audio recording

    2019-11-11 17:39:33
    -f fmt (input/output) forces the input or output file format. The input format is normally detected automatically and the output format guessed from the file extension, so this option is rarely needed. -i url (input) input. -y (global) overwrite output files without asking...
  • Qt+FFmpeg screen recording

    1,000+ views 2019-02-27 23:54:47
    The FFmpeg API is wrapped with Qt + C++, without using deprecated FFmpeg APIs. Main thread: the Qt GUI thread, to which a recording UI can later be attached. Parent (reader) thread: ScreenRecordThreadProc() opens the input/output streams, creates the child threads, then reads frames from fifoBuffer and encodes them into various...
  • Screen recording with ffmpeg on Linux

    1,000+ views 2019-09-08 20:08:23
    Using ffmpeg on a Linux system for screen recording and screenshots. Record the framebuffer image of the /dev/fb0 device as video: ffmpeg -f fbdev -framerate 10 -i /dev/fb0 out.avi (the default encoding frame rate is 25 fps). Record a single frame of the /dev/fb0 framebuffer...
  • ffmpeg screen recording (graphedt)

    2020-08-06 12:57:15
    A capture tool must be installed first for screen recording. ffmpeg -rtbufsize 1500M -f dshow -i video="screen-capture-recorder" -f dshow -i audio="virtual-audio-capturer" -r 5 -vcodec libx264 -preset:v ultrafast -tune:v zerolatency -...
  • ffmpeg screen and audio recording commands

    2019-06-18 21:18:29
    1. Screen/audio capture and streaming command, without GPU acceleration: ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i :0.0+0,0 -f alsa -ac 2 -i default -vcodec libx264 -acodec libmp3lame -ar 44100 -b:a 128k -f mpegts udp://...
  • ffmpeg screen recording (Windows)

    2020-12-13 21:24:19
    https://zhuanlan.zhihu.com/p/38229790
