  • Android audio

    2018-04-24 10:31:46
    Mainly the Audio application framework on the Android TV platform, focused on UEC.
  • Android Audio Framework

    2019-03-01 23:55:24
    An analysis of the Android Audio architecture, from the upper layers down to the lower ones.
  • Android audio (2): AudioRecord analysis, part 1

    2019-04-21 11:27:19

    Android audio (1): source code paths

    Android audio (2): AudioRecord analysis, part 1

    Android audio (3): AudioRecord analysis, part 2

    Android audio (4): AudioTrack analysis, part 1

    Android audio (5): AudioTrack analysis, part 2

    Android audio (6): an AudioRecord / AudioTrack capture and playback example

    Android's audio capture class: AudioRecord

    Files:

    frameworks/base/media/java/android/media/AudioRecord.java

    frameworks/base/core/jni/android_media_AudioRecord.cpp

    frameworks/av/media/libmedia/AudioRecord.cpp

     

    To create a capture thread in an app, an AudioRecord object must be instantiated first; the analysis below starts from that instantiation.

    private AudioRecord audiorecord = new AudioRecord(......);

     

    The AudioRecord source below defines three constructors:

    The first instantiates an AudioRecord object and is the one applications call; it delegates to the second.

    The second, annotated @SystemApi, is a system API; it calls native_setup to instantiate the native AudioRecord object.

    The third, package-private constructor wraps an already-created native (C++) AudioRecord and is used by the AudioRecordRoutingProxy subclass.

        // call through JNI to create the native audiorecord instance
        int initResult = native_setup(new WeakReference<AudioRecord>(this),
                                      mAudioAttributes, sampleRate, mChannelMask, mChannelIndexMask,
                                      mAudioFormat, mNativeBufferSizeInBytes,
                                      session, ActivityThread.currentOpPackageName(), 0 /*nativeRecordInJavaObj*/);
    // frameworks/base/media/java/android/media/AudioRecord.java
    //---------------------------------------------------------
    // Constructor, Finalize
    //--------------------
    /**
     * Class constructor.
     * Though some invalid parameters will result in an {@link IllegalArgumentException} exception,
     * other errors do not.  Thus you should call {@link #getState()} immediately after construction
     * to confirm that the object is usable.
     * @param audioSource the recording source.
     *   See {@link MediaRecorder.AudioSource} for the recording source definitions.
     * @param sampleRateInHz the sample rate expressed in Hertz. 44100Hz is currently the only
     *   rate that is guaranteed to work on all devices, but other rates such as 22050,
     *   16000, and 11025 may work on some devices.
     *   {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} means to use a route-dependent value
     *   which is usually the sample rate of the source.
     *   {@link #getSampleRate()} can be used to retrieve the actual sample rate chosen.
     * @param channelConfig describes the configuration of the audio channels.
     *   See {@link AudioFormat#CHANNEL_IN_MONO} and
     *   {@link AudioFormat#CHANNEL_IN_STEREO}.  {@link AudioFormat#CHANNEL_IN_MONO} is guaranteed
     *   to work on all devices.
     * @param audioFormat the format in which the audio data is to be returned.
     *   See {@link AudioFormat#ENCODING_PCM_8BIT}, {@link AudioFormat#ENCODING_PCM_16BIT},
     *   and {@link AudioFormat#ENCODING_PCM_FLOAT}.
     * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is written
     *   to during the recording. New audio data can be read from this buffer in smaller chunks
     *   than this size. See {@link #getMinBufferSize(int, int, int)} to determine the minimum
     *   required buffer size for the successful creation of an AudioRecord instance. Using values
     *   smaller than getMinBufferSize() will result in an initialization failure.
     * @throws java.lang.IllegalArgumentException
     */
    public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
                       int bufferSizeInBytes)
    throws IllegalArgumentException
    {
        this((new AudioAttributes.Builder())
             .setInternalCapturePreset(audioSource)
             .build(),
             (new AudioFormat.Builder())
             .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,
                             true/*allow legacy configurations*/))
             .setEncoding(audioFormat)
             .setSampleRate(sampleRateInHz)
             .build(),
             bufferSizeInBytes,
             AudioManager.AUDIO_SESSION_ID_GENERATE);
    }
    
    /**
     * @hide
     * Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
     * @param attributes a non-null {@link AudioAttributes} instance. Use
     *     {@link AudioAttributes.Builder#setAudioSource(int)} for configuring the audio
     *     source for this instance.
     * @param format a non-null {@link AudioFormat} instance describing the format of the data
     *     that will be recorded through this AudioRecord. See {@link AudioFormat.Builder} for
     *     configuring the audio format parameters such as encoding, channel mask and sample rate.
     * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is written
     *   to during the recording. New audio data can be read from this buffer in smaller chunks
     *   than this size. See {@link #getMinBufferSize(int, int, int)} to determine the minimum
     *   required buffer size for the successful creation of an AudioRecord instance. Using values
     *   smaller than getMinBufferSize() will result in an initialization failure.
     * @param sessionId ID of audio session the AudioRecord must be attached to, or
     *   {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
     *   time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
     *   construction.
     * @throws IllegalArgumentException
     */
    @SystemApi
    public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
                       int sessionId) throws IllegalArgumentException
    {
        mRecordingState = RECORDSTATE_STOPPED;
    
        if(attributes == null)
        {
            throw new IllegalArgumentException("Illegal null AudioAttributes");
        }
        if(format == null)
        {
            throw new IllegalArgumentException("Illegal null AudioFormat");
        }
    
        // remember which looper is associated with the AudioRecord instantiation
        if((mInitializationLooper = Looper.myLooper()) == null)
        {
            mInitializationLooper = Looper.getMainLooper();
        }
    
        // is this AudioRecord using REMOTE_SUBMIX at full volume?
        if(attributes.getCapturePreset() == MediaRecorder.AudioSource.REMOTE_SUBMIX)
        {
            final AudioAttributes.Builder filteredAttr = new AudioAttributes.Builder();
            final Iterator<String> tagsIter = attributes.getTags().iterator();
            while(tagsIter.hasNext())
            {
                final String tag = tagsIter.next();
                if(tag.equalsIgnoreCase(SUBMIX_FIXED_VOLUME))
                {
                    mIsSubmixFullVolume = true;
                    Log.v(TAG, "Will record from REMOTE_SUBMIX at full fixed volume");
                }
                else     // SUBMIX_FIXED_VOLUME: is not to be propagated to the native layers
                {
                    filteredAttr.addTag(tag);
                }
            }
            filteredAttr.setInternalCapturePreset(attributes.getCapturePreset());
            mAudioAttributes = filteredAttr.build();
        }
        else
        {
            mAudioAttributes = attributes;
        }
    
        int rate = format.getSampleRate();
        if(rate == AudioFormat.SAMPLE_RATE_UNSPECIFIED)
        {
            rate = 0;
        }
    
        int encoding = AudioFormat.ENCODING_DEFAULT;
        if((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0)
        {
            encoding = format.getEncoding();
        }
    
        audioParamCheck(attributes.getCapturePreset(), rate, encoding);
    
        if((format.getPropertySetMask()
                & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_INDEX_MASK) != 0)
        {
            mChannelIndexMask = format.getChannelIndexMask();
            mChannelCount = format.getChannelCount();
        }
        if((format.getPropertySetMask()
                & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0)
        {
            mChannelMask = getChannelMaskFromLegacyConfig(format.getChannelMask(), false);
            mChannelCount = format.getChannelCount();
        }
        else if(mChannelIndexMask == 0)
        {
            mChannelMask = getChannelMaskFromLegacyConfig(AudioFormat.CHANNEL_IN_DEFAULT, false);
            mChannelCount =  AudioFormat.channelCountFromInChannelMask(mChannelMask);
        }
    
        audioBuffSizeCheck(bufferSizeInBytes);
    
        int[] sampleRate = new int[] {mSampleRate};
        int[] session = new int[1];
        session[0] = sessionId;
        //TODO: update native initialization when information about hardware init failure
        //      due to capture device already open is available.
    	// call the native method to create the native audiorecord instance
        int initResult = native_setup(new WeakReference<AudioRecord>(this),
                                      mAudioAttributes, sampleRate, mChannelMask, mChannelIndexMask,
                                      mAudioFormat, mNativeBufferSizeInBytes,
                                      session, ActivityThread.currentOpPackageName(), 0 /*nativeRecordInJavaObj*/);
        if(initResult != SUCCESS)
        {
            loge("Error code "+initResult+" when initializing native AudioRecord object.");
            return; // with mState == STATE_UNINITIALIZED
        }
    
        mSampleRate = sampleRate[0];
        mSessionId = session[0];
    
        mState = STATE_INITIALIZED;
    }
    
    /**
     * A constructor which explicitly connects a Native (C++) AudioRecord. For use by
     * the AudioRecordRoutingProxy subclass.
     * @param nativeRecordInJavaObj A C/C++ pointer to a native AudioRecord
     * (associated with an OpenSL ES recorder). Note: the caller must ensure a correct
     * value here as no error checking is or can be done.
     */
    /*package*/ AudioRecord(long nativeRecordInJavaObj)
    {
        mNativeRecorderInJavaObj = 0;
        mNativeCallbackCookie = 0;
        mNativeDeviceCallback = 0;
    
        // other initialization...
        if(nativeRecordInJavaObj != 0)
        {
            deferred_connect(nativeRecordInJavaObj);
        }
        else
        {
            mState = STATE_UNINITIALIZED;
        }
    }

     

    Instantiating AudioRecord calls the native_setup method, which crosses into the native layer.

    The native method interface table is listed below:

    // frameworks/base/core/jni/android_media_AudioRecord.cpp
    static const JNINativeMethod gMethods[] = {
        // name,               signature,  funcPtr
        {"native_start",         "(II)I",    (void *)android_media_AudioRecord_start},
        {"native_stop",          "()V",    (void *)android_media_AudioRecord_stop},
        {"native_setup",         "(Ljava/lang/Object;Ljava/lang/Object;[IIIII[ILjava/lang/String;J)I",
                                          (void *)android_media_AudioRecord_setup},
        {"native_finalize",      "()V",    (void *)android_media_AudioRecord_finalize},
        {"native_release",       "()V",    (void *)android_media_AudioRecord_release},
        {"native_read_in_byte_array",
                                 "([BIIZ)I",
                                         (void *)android_media_AudioRecord_readInArray<jbyteArray>},
        {"native_read_in_short_array",
                                 "([SIIZ)I",
                                         (void *)android_media_AudioRecord_readInArray<jshortArray>},
        {"native_read_in_float_array",
                                 "([FIIZ)I",
                                         (void *)android_media_AudioRecord_readInArray<jfloatArray>},
        {"native_read_in_direct_buffer","(Ljava/lang/Object;IZ)I",
                                           (void *)android_media_AudioRecord_readInDirectBuffer},
        {"native_get_buffer_size_in_frames",
                                 "()I", (void *)android_media_AudioRecord_get_buffer_size_in_frames},
        {"native_set_marker_pos","(I)I",   (void *)android_media_AudioRecord_set_marker_pos},
        {"native_get_marker_pos","()I",    (void *)android_media_AudioRecord_get_marker_pos},
        {"native_set_pos_update_period",
                                 "(I)I",   (void *)android_media_AudioRecord_set_pos_update_period},
        {"native_get_pos_update_period",
                                 "()I",    (void *)android_media_AudioRecord_get_pos_update_period},
        {"native_get_min_buff_size",
                                 "(III)I",   (void *)android_media_AudioRecord_get_min_buff_size},
        {"native_setInputDevice", "(I)Z", (void *)android_media_AudioRecord_setInputDevice},
        {"native_getRoutedDeviceId", "()I", (void *)android_media_AudioRecord_getRoutedDeviceId},
        {"native_enableDeviceCallback", "()V", (void *)android_media_AudioRecord_enableDeviceCallback},
        {"native_disableDeviceCallback", "()V",
                                            (void *)android_media_AudioRecord_disableDeviceCallback},
        {"native_get_timestamp", "(Landroid/media/AudioTimestamp;I)I",
                                           (void *)android_media_AudioRecord_get_timestamp},
    };

    From the JNI method-name mapping we know:

    native_setup  <-->  android_media_AudioRecord_setup

    Next, analyze android_media_AudioRecord_setup.

    android_media_AudioRecord_setup first checks whether a native record object (nativeRecordInJavaObj) already exists.

    If it does not exist:

          lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));

    If it does exist:

         lpRecorder = (AudioRecord*)nativeRecordInJavaObj;

         The long value is converted back into an AudioRecord pointer, and then setAudioRecord(env, thiz, lpRecorder) is called.

    In C++, the static_cast and reinterpret_cast templates are normally used to convert between types.

    Here a C-style cast, (AudioRecord*), forces the pointer conversion.

    // ----------------------------------------------------------------------------
    static jint
    android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
                                    jobject jaa, jintArray jSampleRate, jint channelMask, jint channelIndexMask,
                                    jint audioFormat, jint buffSizeInBytes, jintArray jSession, jstring opPackageName,
                                    jlong nativeRecordInJavaObj)
    {
    	......
        audio_attributes_t *paa = NULL;
        sp<AudioRecord> lpRecorder = 0;
        audiorecord_callback_cookie *lpCallbackData = NULL;
    
        jclass clazz = env->GetObjectClass(thiz);
        if(clazz == NULL)
        {
            ALOGE("Can't find %s when setting up callback.", kClassPathName);
            return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
        }
    
        // if we pass in an existing *Native* AudioRecord, we don't need to create/initialize one.
        if(nativeRecordInJavaObj == 0)
        {
    		......
    
            // create an uninitialized AudioRecord object
            lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));
    
            // read the AudioAttributes values
            paa = (audio_attributes_t *) calloc(1, sizeof(audio_attributes_t));
            const jstring jtags =
                (jstring) env->GetObjectField(jaa, javaAudioAttrFields.fieldFormattedTags);
            const char* tags = env->GetStringUTFChars(jtags, NULL);
            // copying array size -1, char array for tags was calloc'd, no need to NULL-terminate it
            strncpy(paa->tags, tags, AUDIO_ATTRIBUTES_TAGS_MAX_SIZE - 1);
            env->ReleaseStringUTFChars(jtags, tags);
            paa->source = (audio_source_t) env->GetIntField(jaa, javaAudioAttrFields.fieldRecSource);
            paa->flags = (audio_flags_mask_t)env->GetIntField(jaa, javaAudioAttrFields.fieldFlags);
            ALOGV("AudioRecord_setup for source=%d tags=%s flags=%08x", paa->source, paa->tags, paa->flags);
    
            audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE;
            if(paa->flags & AUDIO_FLAG_HW_HOTWORD)
            {
                flags = AUDIO_INPUT_FLAG_HW_HOTWORD;
            }
            // create the callback information:
            // this data will be passed with every AudioRecord callback
            lpCallbackData = new audiorecord_callback_cookie;
            lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
            // we use a weak reference so the AudioRecord object can be garbage collected.
            lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
            lpCallbackData->busy = false;
    
            const status_t status = lpRecorder->set(paa->source,
                                                    sampleRateInHertz,
                                                    format,        // word length, PCM
                                                    localChanMask,
                                                    frameCount,
                                                    recorderCallback,// callback_t
                                                    lpCallbackData,// void* user
                                                    0,             // notificationFrames,
                                                    true,          // threadCanCallJava
                                                    sessionId,
                                                    AudioRecord::TRANSFER_DEFAULT,
                                                    flags,
                                                    -1, -1,        // default uid, pid
                                                    paa);
    
            if(status != NO_ERROR)
            {
                ALOGE("Error creating AudioRecord instance: initialization check failed with status %d.",
                      status);
                goto native_init_failure;
            }
        }
        else     // end if nativeRecordInJavaObj == 0)
        {
            lpRecorder = (AudioRecord*)nativeRecordInJavaObj;
    
            // create the callback information:
            // this data will be passed with every AudioRecord callback
            lpCallbackData = new audiorecord_callback_cookie;
            lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
            // we use a weak reference so the AudioRecord object can be garbage collected.
            lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
            lpCallbackData->busy = false;
        }
    	......
        // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
        // of the Java object
        setAudioRecord(env, thiz, lpRecorder);
    
        // save our newly created callback information in the "nativeCallbackCookie" field
        // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
        env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);
    
        return (jint) AUDIO_JAVA_SUCCESS;
    
        // failure:
        native_init_failure:
        env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
        env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
        delete lpCallbackData;
        env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);
    
        // lpRecorder goes out of scope, so reference count drops to zero
        return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }
    

    Next, analyze the setAudioRecord function, which stores the native AudioRecord pointer into the Java object and returns the previously stored instance.

    // frameworks/base/core/jni/android_media_AudioRecord.cpp
    // The members of this struct matter: they cache the jmethodID/jfieldID
    // handles for the Java-side AudioRecord object, the event callback method
    // and the callback cookie, which makes Java <--> C++ cross-calls cheaper.
    struct audio_record_fields_t {
        // these fields provide access from C++ to the...
        jmethodID postNativeEventInJava; //... event post callback method
        jfieldID  nativeRecorderInJavaObj; // provides access to the C++ AudioRecord object
        jfieldID  nativeCallbackCookie;    // provides access to the AudioRecord callback data
        jfieldID  nativeDeviceCallback;    // provides access to the JNIDeviceCallback instance
    };
    
    
    static sp<AudioRecord> setAudioRecord(JNIEnv* env, jobject thiz, const sp<AudioRecord>& ar)
    {
        Mutex::Autolock l(sLock);
        sp<AudioRecord> old =
                (AudioRecord*)env->GetLongField(thiz, javaAudioRecordFields.nativeRecorderInJavaObj);
        if (ar.get()) {
            ar->incStrong((void*)setAudioRecord);
        }
        if (old != 0) {
            old->decStrong((void*)setAudioRecord);
        }
        env->SetLongField(thiz, javaAudioRecordFields.nativeRecorderInJavaObj, (jlong)ar.get());
        return old;
    }

     

    Having analyzed setup, next look at the start and stop functions; their JNI entry points simply call into the native AudioRecord object.

    The main capture logic lives in AudioRecord.cpp, which the next part analyzes.

    // frameworks/base/core/jni/android_media_AudioRecord.cpp
    // ----------------------------------------------------------------------------
    static jint
    android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
    {
        sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
        if (lpRecorder == NULL ) {
            jniThrowException(env, "java/lang/IllegalStateException", NULL);
            return (jint) AUDIO_JAVA_ERROR;
        }
    
        return nativeToJavaStatus(
                lpRecorder->start((AudioSystem::sync_event_t)event, (audio_session_t) triggerSession));
    }
    
    
    // ----------------------------------------------------------------------------
    static void
    android_media_AudioRecord_stop(JNIEnv *env, jobject thiz)
    {
        sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
        if (lpRecorder == NULL ) {
            jniThrowException(env, "java/lang/IllegalStateException", NULL);
            return;
        }
    
        lpRecorder->stop();
        //ALOGV("Called lpRecorder->stop()");
    }

     

  • android audio system

    2012-11-29 10:36:28
    android audio system
  • A summary of Android audio knowledge; fairly detailed, worth reading for anyone studying audio/video
  • A short write-up of simple experience with android AudioRecorder, for reference
  • Android audio player

    2012-11-02 11:00:56
    Android audio player for lossless (FLAC/APE/MPC/WV) and other files
  • Android audio (6): an AudioRecord / AudioTrack capture and playback example

    Android audio (1): source code paths

    Android audio (2): AudioRecord analysis, part 1

    Android audio (3): AudioRecord analysis, part 2

    Android audio (4): AudioTrack analysis, part 1

    Android audio (5): AudioTrack analysis, part 2

    Android audio (6): an AudioRecord / AudioTrack capture and playback example

     

    This example uses AudioRecord and AudioTrack from C++ to implement capture and playback, to deepen understanding of the Android audio native-layer code.

    Download link

        sp <AudioTrack> pTrack;
        sp <AudioRecord> pRecord;
    // create the capture AudioRecord instance
        pRecord = new AudioRecord(AUDIO_SOURCE_DEFAULT,
                                  48000,
                                  AUDIO_FORMAT_PCM_16_BIT,
                                  AUDIO_CHANNEL_IN_MONO,
                                  pAck,
                                  recordframeSize,
                                  NULL,
                                  NULL,
                                  0,
                                  AUDIO_SESSION_ALLOCATE,
                                  AudioRecord::TRANSFER_SYNC,
                                  AUDIO_INPUT_FLAG_NONE,
                                  -1,
                                  -1,
                                  NULL);
    
    // create the playback AudioTrack instance
        pTrack = new AudioTrack( AUDIO_STREAM_MUSIC,
                                 48000,
                                 AUDIO_FORMAT_PCM_16_BIT,
                                 0x01,
                                 frameCount * 2,
                                 AUDIO_OUTPUT_FLAG_NONE,
                                 NULL,
                                 NULL,
                                 0,
                                 AUDIO_SESSION_ALLOCATE,
                                 AudioTrack::TRANSFER_SYNC,
                                 NULL,
                                 -1,
                                 -1,
                                 NULL,
                                 false,
                                 1.0f);
    
    // capture and playback run in the same thread
        do
        {
            // num_read = fread(pBuffer, 1, 1024, wavFile);
            num_read = pRecord->read(pBuffer, 2048, true);
            if (num_read > 0)
            {
                adLOGI("num_read %d", num_read);
                pTrack->write(pBuffer, num_read, 1);
            }
        }
        while (capturing && (num_read > 0));

     

  • Android audio (1): source code paths

    2019-04-20 10:46:21

    Android audio (1): source code paths

    Android audio (2): AudioRecord analysis, part 1

    Android audio (3): AudioRecord analysis, part 2

    Android audio (4): AudioTrack analysis, part 1

    Android audio (5): AudioTrack analysis, part 2

    Android audio (6): an AudioRecord / AudioTrack capture and playback example

    Android 7

    audio source paths:

    frameworks/base/media/java/android/media/
    frameworks/base/core/jni/
    frameworks/av/services/audioflinger   
    frameworks/av/services/audiopolicy
    frameworks/av/media/audioserver
    frameworks/av/media/libstagefright
    frameworks/av/media/libmedia/   # AudioTrack AudioRecord

    external/tinyalsa/
    kernel/sound/

     

    Important classes in the Android audio framework

    • AudioFlinger  # the audio isolation layer: serves the upper layers (providing access interfaces) and manages audio devices below through the HAL. The core, and the hardest part, of the whole audio system.
    • AudioRecord  # capture class
    • AudioTrack   # playback class
    • AudioPolicy

     

    A note for later: this is where the broadcast was located while debugging a USB microphone; a later part will cover the hotplug event reporting flow for USB microphones.

    frameworks/base/services/usb/java/com/android/server/usb/UsbSettingsManager.java

     

  • The android Audio mechanism

    2015-11-23 16:25:58

    The android Audio mechanism

    Preface

    This article contains my recent notes from studying android audio; corrections from experts are welcome.
    The content is based on android 5.0.

    Contents

    1. Hardware architecture

    (1). The codec


    2. Software architecture

    (1). The kernel driver mechanism for Audio
    (2). The startup flow of AudioFlinger and AudioPolicyService, the core services of the Audio mechanism
    (3). The startup of AudioService inside SystemServer
    (4). The application interfaces: MediaPlayer, AudioTrack and AudioRecord.
    

    1. Hardware architecture

    [figure: audio hardware architecture]

    1. The figure above shows the audio hardware architecture of a phone.

    (1). The PM8916 contains the hardware codec, whose main job is to sample
        the analog audio signal and convert it to PCM format.
    (2). The MODEM DSP is the modem part; the captured PCM data can be
        processed by the modem's DSP. Note that no separate DSP is integrated
        for audio here, which saves cost.
    (3). The application processor is the main processor, which on phones
        today is mostly ARM.
    Summary:
    As the architecture diagram shows, audio data travels over the I2S
    interface between the Audio Interface and the Digital codec core, while
    commands reach the hardware codec over the SPMI interface. Note that this
    diagram is only one example; in practice different vendors may use other
    interfaces (this lives in the hardware-specific part of the driver),
    e.g. some vendors use I2C or SPI.

    2. Software architecture

    [figure: audio software architecture]

    Software modules:

    (1). ALSA is the core of audio management in the kernel; an audio driver
        typically calls snd_soc_register_card() to register the sound card
        abstracted in software with the audio core.
    (2). In user space, android provides tinyalsa for operating the driver
        conveniently.
    (3). The audio HAL layer; this part is vendor-specific and is discussed later.
    (4). mediaservice: the media process in the figure is a native process
        started from init.rc. In the android version used here this service
        also starts the camera service; only the audio-related AudioFlinger
        and AudioPolicyService are covered here.
    (5). After mediaservice starts, the zygote process starts, and zygote then
        starts android's core systemserver process, which registers
        AudioService. Although AudioService is called a service, it reaches
        the audio implementation indirectly, obtaining the AudioFlinger proxy
        through AudioSystem; applications in turn usually operate through a
        proxy of AudioService. So although android's binder IPC is powerful,
        it is not necessarily the most efficient IPC mechanism: reading the
        current volume through AudioManager in an application first obtains
        the AudioService proxy over binder, and AudioService then obtains the
        AudioFlinger proxy through AudioSystem before the real implementation
        is reached. One call thus uses two binder round trips, clearly less
        efficient than a plain IPC mechanism, though much more powerful.
    (6). Applications use interfaces such as AudioTrack, AudioRecord,
        AudioManager and MediaPlayer to actually operate the audio system.

    Analysis of the individual modules

    1. ALSA kernel part. Code locations:
      kernel/sound/sound_core.c (not fully read yet)
      kernel/sound/soc/soc-core.c (analyzed here)
      kernel/sound/soc/soc-pcm.c
      kernel/sound/soc/soc-compress.c
      kernel/sound/core/pcm_lib.c
      kernel/sound/core/pcm_native.c
      1. soc-core.c registers a platform driver. This code is
      hardware-independent; it mainly builds several important in-memory
      linked lists that track the registered audio drivers.
      2. soc-pcm.c, soc-compress.c, pcm_lib.c and pcm_native.c mainly
      implement the registration framework for the user-facing audio PCM
      read/write operations (the actual registration is done by the concrete
      driver; these files only manage it).
      3. The platform driver, cpu driver, codec driver and machine driver in
      the framework diagram contain the concrete audio implementations
      (vendors implement them differently, but the framework is the same).
      4. The diagram also shows a volume-key driver, which I made up for
      illustration; in practice the volume-key function is small enough that
      it may be folded into another driver. From the diagram: when a volume
      key is pressed, the key event goes through the input subsystem to
      InputManagerService and on to PhoneWindowManager. PhoneWindowManager
      and AudioService both live in the systemserver process, yet
      PhoneWindowManager still obtains the AudioService proxy through binder
      and then performs the real volume setting as described above. This
      shows binder is not only for cross-process communication: even within
      a single process, as long as a service is registered with
      servicemanager, it can be reached over binder.
      2. tinyalsa. Code locations:
      1. external/tinyalsa/
      The two main files here are pcm.c and mixer.c: the interfaces for PCM
      playback, capture and mixing.
      2. external/tinycompress/
      The important file here is compress.c, used for playing compressed
      audio formats such as MP3 and AAC. This requires the hardware codec to
      support decoding that format (hardware decoding); software decoding is
      mentioned later.
      3. Audio HAL
      This layer mainly calls tinyalsa plus vendor-specific implementations;
      being vendor-specific, it is skipped here.

    4. mediaservice
    1. The code is in framework/av/media/mediaservice/main_mediaservice.c.
    The init process parses init.rc; during on boot, class_start main starts
    the mediaserver process. Here is an excerpt from init.rc:
    service media /system/bin/mediaserver
    class main
    user media
    ...
    With the process started, let's look at the mediaservice code:
    int main(int argc __unused, char** argv)
    {
        signal(SIGPIPE, SIG_IGN);
        char value[PROPERTY_VALUE_MAX];
        bool doLog = (property_get("ro.test_harness", value, "0") > 0) && (atoi(value) == 1);
        pid_t childPid;
        // FIXME The advantage of making the process containing media.log service the parent process of
        // the process that contains all the other real services, is that it allows us to collect more
        // detailed information such as signal numbers, stop and continue, resource usage, etc.
        // But it is also more complex.  Consider replacing this by independent processes, and using
        // binder on death notification instead.
        if (doLog && (childPid = fork()) != 0) {
            // media.log service
            //prctl(PR_SET_NAME, (unsigned long) "media.log", 0, 0, 0);
            // unfortunately ps ignores PR_SET_NAME for the main thread, so use this ugly hack
            strcpy(argv[0], "media.log");
            sp<ProcessState> proc(ProcessState::self());
            MediaLogService::instantiate();
            ProcessState::self()->startThreadPool();
            for (;;) {
                siginfo_t info;
                int ret = waitid(P_PID, childPid, &info, WEXITED | WSTOPPED | WCONTINUED);
                if (ret == EINTR) {
                    continue;
                }
                if (ret < 0) {
                    break;
                }
                char buffer[32];
                const char *code;
                switch (info.si_code) {
                case CLD_EXITED:
                    code = "CLD_EXITED";
                    break;
                case CLD_KILLED:
                    code = "CLD_KILLED";
                    break;
                case CLD_DUMPED:
                    code = "CLD_DUMPED";
                    break;
                case CLD_STOPPED:
                    code = "CLD_STOPPED";
                    break;
                case CLD_TRAPPED:
                    code = "CLD_TRAPPED";
                    break;
                case CLD_CONTINUED:
                    code = "CLD_CONTINUED";
                    break;
                default:
                    snprintf(buffer, sizeof(buffer), "unknown (%d)", info.si_code);
                    code = buffer;
                    break;
                }
                struct rusage usage;
                getrusage(RUSAGE_CHILDREN, &usage);
                ALOG(LOG_ERROR, "media.log", "pid %d status %d code %s user %ld.%03lds sys %ld.%03lds",
                        info.si_pid, info.si_status, code,
                        usage.ru_utime.tv_sec, usage.ru_utime.tv_usec / 1000,
                        usage.ru_stime.tv_sec, usage.ru_stime.tv_usec / 1000);
                sp<IServiceManager> sm = defaultServiceManager();
                sp<IBinder> binder = sm->getService(String16("media.log"));
                if (binder != 0) {
                    Vector<String16> args;
                    binder->dump(-1, args);
                }
                switch (info.si_code) {
                case CLD_EXITED:
                case CLD_KILLED:
                case CLD_DUMPED: {
                    ALOG(LOG_INFO, "media.log", "exiting");
                    _exit(0);
                    // not reached
                    }
                default:
                    break;
                }
            }
        } else {
            // all other services
            if (doLog) {
                prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
                setpgid(0, 0);                      // but if I die first, don't kill my parent
            }
            sp<ProcessState> proc(ProcessState::self());
            sp<IServiceManager> sm = defaultServiceManager();
            ALOGI("ServiceManager: %p", sm.get());
            AudioFlinger::instantiate();
            MediaPlayerService::instantiate();
            CameraService::instantiate();
    #ifdef AUDIO_LISTEN_ENABLED
            ALOGI("ListenService instantiated");
            ListenService::instantiate();
    #endif
            AudioPolicyService::instantiate();
            SoundTriggerHwService::instantiate();
            registerExtensions();
            ProcessState::self()->startThreadPool();
            IPCThreadState::self()->joinThreadPool();
        }
    }
    
    ```
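
    The parent/child supervision pattern in main() can be boiled down to a small self-contained sketch (plain POSIX, no Android types; the names here are illustrative): fork a worker, block in waitid(), and decode si_code the same way the switch above does.

    ```cpp
    // Minimal model of the media.log parent's supervision loop:
    // fork a child, wait for it with waitid(), decode si_code.
    #include <cassert>
    #include <csignal>
    #include <cstdio>
    #include <cstring>
    #include <sys/wait.h>
    #include <unistd.h>

    // Decode si_code the same way mediaserver's main() does.
    const char *childEventName(int siCode) {
        switch (siCode) {
        case CLD_EXITED:    return "CLD_EXITED";
        case CLD_KILLED:    return "CLD_KILLED";
        case CLD_DUMPED:    return "CLD_DUMPED";
        case CLD_STOPPED:   return "CLD_STOPPED";
        case CLD_TRAPPED:   return "CLD_TRAPPED";
        case CLD_CONTINUED: return "CLD_CONTINUED";
        default:            return "unknown";
        }
    }

    int main() {
        pid_t childPid = fork();
        if (childPid == 0) {
            // Child: stands in for the process hosting the real services.
            _exit(7);
        }
        // Parent: stands in for the media.log process.
        siginfo_t info;
        memset(&info, 0, sizeof(info));
        int ret = waitid(P_PID, childPid, &info, WEXITED);
        assert(ret == 0);
        printf("pid %d status %d code %s\n",
               (int)info.si_pid, info.si_status, childEventName(info.si_code));
        return 0;
    }
    ```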
    

    We can see that main() registers MediaPlayerService and CameraService, as well as the two audio-related services:

    AudioFlinger::instantiate();
    AudioPolicyService::instantiate();

    The last two lines start the binder thread pool and wait for IPC requests in an endless loop, sleeping whenever there is nothing to handle:

    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();

    The key call here is AudioPolicyService::instantiate(). instantiate() is no different from the others: it creates the service and registers it with servicemanager (if that is unclear, take it on faith for now; I cover binder IPC in a later post). What matters is AudioPolicyService's onFirstRef(), which creates an AudioPolicyManager. The AudioPolicyManager in turn loads the HAL through AudioPolicyClient's loadHwModule(), which itself goes through AudioFlinger's loadHwModule(). That is quite a detour, so next we analyze AudioFlinger's loadHwModule().
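
    As a rough sketch of that registration pattern, the model below shows the two steps that matter: the service is published under a name, and onFirstRef() runs to build its internals, which for AudioPolicyService means creating the AudioPolicyManager. All names here (Service, instantiate, gRegistry) are hypothetical stand-ins, not the real BinderService<T>/IServiceManager API.

    ```cpp
    // Self-contained model of the instantiate()/addService() pattern.
    #include <cassert>
    #include <map>
    #include <memory>
    #include <string>

    struct Service {
        virtual ~Service() = default;
        virtual void onFirstRef() {}  // hook where a service builds its internals
    };

    // Stand-in for servicemanager's name -> binder table.
    std::map<std::string, std::shared_ptr<Service>> gRegistry;

    struct AudioPolicyServiceModel : Service {
        bool managerCreated = false;
        void onFirstRef() override {
            // The real onFirstRef() news an AudioPolicyManager, which then
            // loads the HAL through its client interface.
            managerCreated = true;
        }
    };

    // Model of instantiate(): create the service, register it, trigger onFirstRef().
    template <typename T>
    std::shared_ptr<T> instantiate(const std::string &name) {
        auto svc = std::make_shared<T>();
        gRegistry[name] = svc;   // addService(name, svc)
        svc->onFirstRef();       // first strong reference in the real code
        return svc;
    }

    int main() {
        auto aps = instantiate<AudioPolicyServiceModel>("media.audio_policy");
        assert(gRegistry.count("media.audio_policy") == 1);
        assert(aps->managerCreated);
        return 0;
    }
    ```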

    AudioPolicyManager

    ```cpp
    AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    // ...
    {
        // ...
        for (size_t i = 0; i < mHwModules.size(); i++)
        {
            // Indirectly calls AudioFlinger's loadHwModule() over binder.
            mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
            if (mHwModules[i]->mHandle == 0) {
                ALOGW("could not open HW module %s", mHwModules[i]->mName);
                continue;
            }

            for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
            {
                // ...
                // Like loadHwModule(), this calls through to
                // AudioFlinger's openOutput().
                status_t status = mpClientInterface->openOutput(outProfile->mModule->mHandle,
                                                                &output,
                                                                &config,
                                                                &outputDesc->mDevice,
                                                                String8(""),
                                                                &outputDesc->mLatency,
                                                                outputDesc->mFlags);
                // ...
            }

            for (size_t j = 0; j < mHwModules[i]->mInputProfiles.size(); j++)
            {
                // ...
                // Like loadHwModule(), this calls through to
                // AudioFlinger's openInput().
                status_t status = mpClientInterface->openInput(inProfile->mModule->mHandle,
                                                               &input,
                                                               &config,
                                                               &inputDesc->mDevice,
                                                               String8(""),
                                                               AUDIO_SOURCE_MIC,
                                                               AUDIO_INPUT_FLAG_NONE);
                // ...
            }
        }
    }
    ```
    

    LoadHwModule

    ```cpp
    audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
    {
        if (name == NULL) {
            return 0;
        }
        if (!settingsAllowed()) {
            return 0;
        }
        Mutex::Autolock _l(mLock);
        return loadHwModule_l(name);
    }

    // loadHwModule_l() must be called with AudioFlinger::mLock held
    audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
    {
        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
            if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
                ALOGW("loadHwModule() module %s already loaded", name);
                return mAudioHwDevs.keyAt(i);
            }
        }

        audio_hw_device_t *dev;
        // Load the HAL module here.
        int rc = load_audio_interface(name, &dev);
        if (rc) {
            ALOGI("loadHwModule() error %d loading module %s ", rc, name);
            return 0;
        }
        // ...

        // Wrap the freshly loaded HAL device in an AudioHwDevice
        // and add it to mAudioHwDevs.
        mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));
        return handle;
    }
    ```
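
    The interesting control flow in loadHwModule_l() — return the existing handle if the module is already loaded, otherwise load it and cache it — can be modeled in a few self-contained lines. The names below are illustrative; the real code keys mAudioHwDevs by audio_module_handle_t and does the actual loading through load_audio_interface().

    ```cpp
    // Self-contained model of loadHwModule_l()'s "already loaded" check
    // and handle allocation.
    #include <cassert>
    #include <map>
    #include <string>

    using ModuleHandle = int;  // stands in for audio_module_handle_t

    std::map<std::string, ModuleHandle> gLoadedModules;  // stands in for mAudioHwDevs
    ModuleHandle gNextHandle = 1;

    ModuleHandle loadHwModule(const std::string &name) {
        // Return the existing handle if this module was loaded before,
        // mirroring the strncmp() scan over mAudioHwDevs.
        auto it = gLoadedModules.find(name);
        if (it != gLoadedModules.end()) {
            return it->second;
        }
        // In the real code: load_audio_interface(name, &dev) opens the HAL,
        // and the device is wrapped in an AudioHwDevice before being cached.
        ModuleHandle handle = gNextHandle++;
        gLoadedModules[name] = handle;
        return handle;
    }

    int main() {
        ModuleHandle primary = loadHwModule("primary");
        assert(loadHwModule("primary") == primary);  // second load returns the cached handle
        assert(loadHwModule("usb") != primary);      // distinct module gets a new handle
        return 0;
    }
    ```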
    

    The subsequent openInput() and openOutput() calls work much like the HAL loading above: they create a RecordThread and a PlaybackThread respectively and add them to mRecordThreads and mPlaybackThreads for applications to use later.
    Note that PlaybackThread has three subclasses: OffloadThread, MixerThread and DirectOutputThread. It is these three kinds of thread that get added to mPlaybackThreads, and at playback time the framework looks up whichever kind suits the stream.
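
    A minimal sketch of that thread selection, under the simplifying assumption that only the output flags matter (the real openOutput_l() also weighs format, sample rate and the offload info):

    ```cpp
    // Model of how AudioFlinger picks a PlaybackThread subclass from flags.
    #include <cassert>
    #include <cstdint>
    #include <string>

    // Subset of audio_output_flags_t values, for illustration only.
    enum OutputFlags : uint32_t {
        OUTPUT_FLAG_NONE             = 0,
        OUTPUT_FLAG_DIRECT           = 1u << 0,
        OUTPUT_FLAG_COMPRESS_OFFLOAD = 1u << 4,
    };

    std::string choosePlaybackThread(uint32_t flags) {
        if (flags & OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            return "OffloadThread";        // compressed data decoded by the DSP
        }
        if (flags & OUTPUT_FLAG_DIRECT) {
            return "DirectOutputThread";   // PCM straight to the HAL, no mixing
        }
        return "MixerThread";              // default: software-mixed PCM
    }

    int main() {
        assert(choosePlaybackThread(OUTPUT_FLAG_COMPRESS_OFFLOAD) == "OffloadThread");
        assert(choosePlaybackThread(OUTPUT_FLAG_DIRECT) == "DirectOutputThread");
        assert(choosePlaybackThread(OUTPUT_FLAG_NONE) == "MixerThread");
        return 0;
    }
    ```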

    AudioFlinger class view (diagram omitted): it lists only the members just discussed, mAudioHwDevs, mPlaybackThreads and mRecordThreads.

    That completes mediaserver startup, and with it the native side of Audio.
    5. systemserver registers AudioService
    Although AudioService is a Java-layer service in name, in practice it obtains AudioFlinger's binder proxy through AudioSystem and lets AudioFlinger do the actual work.

    SystemServer's startOtherServices()

    ```java
    private void startOtherServices()
    {
        // ...
        if (!disableMedia && !"0".equals(SystemProperties.get("system_init.startaudioservice")))
        {
            try {
                Slog.i(TAG, "Audio Service");
                // Create AudioService and register it with servicemanager,
                // after which it can be called over binder.
                audioService = new AudioService(context);
                ServiceManager.addService(Context.AUDIO_SERVICE, audioService);
            } catch (Throwable e) {
                reportWtf("starting Audio Service", e);
            }
        }
        // ...
    }
    ```
    

    At this point the whole Audio stack is initialized.
    6. Applications
    In applications we typically use MediaPlayer, AudioManager, AudioTrack and AudioRecord.
    If the codec hardware is low-end, hardware decoding is unavailable and we can only decode in software. As drawn in the software framework diagram, an application usually goes through MediaPlayer: the stream is first software-decoded by libstagefright and only then played through AudioTrack. If a format is not supported even by libstagefright, it is generally decoded with a third-party library first and then played.
    AudioManager usually requires permissions in an application; it works over binder by calling into AudioService.
    AudioRecord is similar to AudioTrack, only for recording.
    For how these pieces connect, see the software architecture diagram above; for reasons of space they are not discussed in detail here, nor are the other SDK interfaces.

    Finally, comments and corrections are welcome.
