  • Qualcomm camera framework: flow analysis (1)


    This document gives an overview of the camera framework; mechanism-related content will be added later:

    1.    Introduction

    This document mainly explains the overall Qualcomm Camera framework.

    Some of this material and further references are available in the Android development section of my blog: http://blog.sina.com.cn/betterforlife

    2.   Overview of the Qualcomm Camera framework

    The overall block diagram is as follows:

    Below is a brief walk through the flow, without going into detailed code:

    1. Initializing CameraService

    CameraService is initialized in frameworks/av/media/mediaserver/main_mediaserver.cpp:

    CameraService::instantiate();

    CameraService's parent class BinderService defines the instantiate() function:

    static void instantiate() { publish(); }
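    For reference, publish() is implemented by the BinderService template; a minimal sketch of its AOSP implementation (simplified from frameworks/native), which registers the new service instance with the ServiceManager:

    // Simplified sketch of BinderService<SERVICE>::publish(): it creates the
    // service instance and registers it under its well-known name
    // (for CameraService this is "media.camera").
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()),
                              new SERVICE(), allowIsolated);
    }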

    The mediaserver process that hosts CameraService is started from init.rc:

    service media /system/bin/mediaserver

    class main

    user media

    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc

    During CameraService initialization, some basic information is fetched from the HAL, such as the maximum number of cameras supported, as shown in the figure below:

    2. Connecting to CameraService

    As shown in the figure below:

    2.1 HAL 1.0 framework analysis

    Take setting the camera sharpness parameter as an example:

    Data flow: app parameter -> Java interface -> JNI -> CameraClient -> binder -> CameraService -> HAL -> daemon -> kernel

    As shown in the figure below:

     

    2.2 Differences between HAL v3 and v1, and the transition

    2.2.1 Overview

    In Android 5.0, Google officially shipped Camera HAL 3.0 as a standard configuration; Camera HAL V1 remains available as a compatibility option.

    The essential difference between HAL V3 and V1 is that V3 binds a frame's parameters to its image data. In V1, when a preview YUV frame comes up, the app has no way of knowing what gain and exposure time that frame actually used. In V3, every frame is described by a data structure containing both the frame's parameters and its data: when the app sends a request it specifies the parameters to use, and when the request returns, the returned data holds both the image data and the corresponding parameter settings.
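    As a rough illustration of this binding, here is a minimal sketch using the public CameraMetadata API (ANDROID_SENSOR_SENSITIVITY and ANDROID_SENSOR_EXPOSURE_TIME are standard tags; the surrounding request/result plumbing is omitted):

    // Sketch: in v3 each request carries its own settings, and the matching
    // per-frame result returns the parameters that were actually applied.
    CameraMetadata settings;
    int32_t sensitivity = 100;        // sensor gain (ISO-like)
    int64_t exposureNs  = 10000000;   // 10 ms exposure time
    settings.update(ANDROID_SENSOR_SENSITIVITY, &sensitivity, 1);
    settings.update(ANDROID_SENSOR_EXPOSURE_TIME, &exposureNs, 1);
    // ...submitted as part of a capture request; the capture result for this
    // frame then reports the sensitivity and exposure that were really used.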

    2.2.2 HAL 1.0 parameter setting

    A. Adding a setting in V1: add an OIS (optical image stabilization) parameter (OIS is normally not exposed as a settable parameter; it is used in this document purely as an experiment), for flow-analysis comparison only.

    1)  Add an interface function, modeled on the public void setSaturation(int saturation) setter

    In code/frameworks/base/core/java/android/hardware/Camera.java add the interface:

            public void setOis(int ois) {
                  …………
                  set(KEY_QC_OIS, String.valueOf(ois));
            }

    2)  The app sets the parameter; assume an OIS value of 1

    See packages/apps/SnapdragonCamera/src/com/android/camera/PhotoModule.java,

    following the pattern of the mParameters.setSaturation(saturation); call:

    mParameters.setOis(ois);

    Since HAL V1 passes parameters as a flattened string, the string that finally reaches the HAL contains "ois=1", which the HAL layer parses.
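    For comparison, a minimal sketch of the V1 flattened-string mechanism on the native side (android::CameraParameters; the "ois" key is the experimental one added above):

    // Sketch: HAL v1 parameters travel as one "key=value;key=value" string.
    android::CameraParameters params;
    params.set("ois", "1");                   // contributes "ois=1" to the string
    android::String8 flat = params.flatten(); // e.g. "ois=1;saturation=5;..."
    int ois = params.getInt("ois");           // the HAL parses the value back out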

    B. HAL-layer changes:

    1. Add the relevant definitions

    1.1 File: hardware/qcom/camera/QCamera2/HAL/QCameraParameters.h

         static const char KEY_QC_SCE_FACTOR_STEP[];
    +    static const char KEY_QC_OIS[];
         static const char KEY_QC_HISTOGRAM[];

         int32_t setSharpness(const QCameraParameters& );
    +    int32_t setOis(const QCameraParameters& );
         int32_t setSaturation(const QCameraParameters& );

         int32_t setSharpness(int sharpness);
    +    int32_t setOis(int ois);
         int32_t setSaturation(int saturation);

    1.2 File: hardware/qcom/camera/QCamera2/stack/common/cam_types.h

                typedef enum {
                    CAM_INTF_PARM_FLASH_BRACKETING,
                    CAM_INTF_PARM_GET_IMG_PROP,
    +               CAM_INTF_PARM_OIS,
                    CAM_INTF_PARM_MAX
                } cam_intf_parm_type_t;

    1.3 File: hardware/qcom/camera/QCamera2/stack/common/cam_intf.h

    typedef struct {
         cam_af_bracketing_t  mtf_af_bracketing_parm;
         /* Sensor type information */
         cam_sensor_type_t sensor_type;
    +    /* ois default value */
    +    int32_t ois_default_value;
     } cam_capability_t;

    2. Add the corresponding setter logic

    File: hardware/qcom/camera/QCamera2/HAL/QCameraParameters.cpp

    const char QCameraParameters::KEY_QC_SCE_FACTOR_STEP[] = "sce-factor-step";
    +const char QCameraParameters::KEY_QC_OIS[] = "ois";

    // default OIS value applied at camera open; it is populated in the vendor layer
    int32_t QCameraParameters::initDefaultParameters()
    {
           ………
    +   // Set Ois
    +   setOis(m_pCapability->ois_default_value);
    +   ALOGE("the default_ois = %d", m_pCapability->ois_default_value);
         // Set Contrast
        set(KEY_QC_MIN_CONTRAST, m_pCapability->contrast_ctrl.min_value);
        set(KEY_QC_MAX_CONTRAST, m_pCapability->contrast_ctrl.max_value);
        ………
    }

     

    +int32_t QCameraParameters::setOis(constQCameraParameters& params)

    +{

    +    int ois = params.getInt(KEY_QC_OIS);

    +    int prev_ois = getInt(KEY_QC_OIS);

    +    if(params.get(KEY_QC_OIS) == NULL) {

    +       CDBG_HIGH("%s: Ois not set by App",__func__);

    +       return NO_ERROR;

    +    }

    +    ALOGE("haljay ois=%dprev_ois=%d",ois, prev_ois);

    +    if (prev_ois !=  ois) {

    +        if((ois >= 0) && (ois <=2)) {

    +            CDBG(" new ois value : %d", ois);

    +            return setOis(ois);

    +        } else {

    +            ALOGE("%s: invalid value%d",__func__, ois);

    +            return BAD_VALUE;

    +        }

    +    } else {

    +        ALOGE("haljay no valuechange");

    +        CDBG("%s: No value change inois", __func__);

    +        return NO_ERROR;

    +    }

    +}

     

    +int32_t QCameraParameters::setOis(int ois)
    +{
    +    char val[16];
    +    sprintf(val, "%d", ois);
    +    updateParamEntry(KEY_QC_OIS, val);
    +    CDBG("%s: Setting ois %s", __func__, val);
    +    ALOGE("haljay %s set ois=%s OIS=%d", __func__, val, CAM_INTF_PARM_OIS);
    +    int32_t value = ois;
    +    return AddSetParmEntryToBatch(m_pParamBuf,
    +                                  CAM_INTF_PARM_OIS,
    +                                  sizeof(value),
    +                                  &value);
    +}

     

    In the function int32_t QCameraParameters::updateParameters add setOis:

         if ((rc = setBrightness(params)))                   final_rc = rc;
         if ((rc = setZoom(params)))                         final_rc = rc;
         if ((rc = setSharpness(params)))                    final_rc = rc;
    +    if ((rc = setOis(params)))                          final_rc = rc;
         if ((rc = setSaturation(params)))                   final_rc = rc;

    C. Vendor-layer changes:

    1. Add the relevant definitions

    1.1 File: kernel/include/media/msm_cam_sensor.h

    enum msm_actuator_cfg_type_t {
      CFG_SET_POSITION,
      CFG_ACTUATOR_POWERDOWN,
      CFG_ACTUATOR_POWERUP,
    + CFG_ACTUATOR_OIS,
     };

    struct msm_actuator_cfg_data {
      ...
      union {
          struct msm_actuator_get_info_t get_info;
          struct msm_actuator_set_position_t setpos;
          enum af_camera_name cam_name;
    +     void *setting;
      } cfg;
    };

    1.2 File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/mct/pipeline/mct_pipeline.c

    In the function boolean mct_pipeline_populate_query_cap_buffer(mct_pipeline_t *pipeline) add:

                hal_data->sharpness_ctrl.min_value = 0;
                hal_data->sharpness_ctrl.step = 6;

    +           hal_data->ois_default_value = 1;
                hal_data->contrast_ctrl.def_value = 5;
                hal_data->contrast_ctrl.max_value = 10;

    1.3 File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/module/sensor_common.h

    typedef enum {
       /* End of CSID enums */
       /* video hdr enums */
       SENSOR_SET_AWB_UPDATE, /* sensor_set_awb_data_t * */
    +  ACTUATOR_SET_OIS
     } sensor_submodule_event_type_t;

    2. Add the corresponding handling

    File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/module/module_sensor.c

    2.1 Fetch the HAL-layer parameter

    In the function static boolean module_sensor_event_control_set_parm add:

    +  case CAM_INTF_PARM_OIS: {
    +    if (!event_control->parm_data) {
    +        SERR("failed parm_data NULL");
    +        ret = FALSE;
    +        break;
    +    }
    +    module_sensor_params_t *ois_module_params = NULL;
    +    ois_module_params = s_bundle->module_sensor_params[SUB_MODULE_ACTUATOR];
    +    if (ois_module_params->func_tbl.process != NULL) {
    +      rc = ois_module_params->func_tbl.process(
    +        ois_module_params->sub_module_private,
    +        ACTUATOR_SET_OIS, event_control->parm_data);
    +    }
    +    if (rc < 0) {
    +      SERR("failed");
    +      ret = FALSE;
    +    }
    +    break;
    +  }

    File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/actuators/actuator.c

    2.2 In the function int32_t actuator_process add:

           case ACTUATOR_SET_POSITION:
                rc = actuator_set_position(actuator_ctrl, data);
                break;
    +      /* set ois */
    +      case ACTUATOR_SET_OIS:
    +        rc = actuator_set_ois(actuator_ctrl, data);
    +        break;

    2.3 Pass the parameter down to the kernel via ioctl

    +static int actuator_set_ois(void *ptr, void *data) {
    +  int rc = 0;
    +  int32_t *ois_level = (int32_t *)data;
    +  actuator_data_t *ois_actuator_ptr = (actuator_data_t *)ptr;
    +  struct msm_actuator_cfg_data cfg;
    +  if (ois_actuator_ptr->fd <= 0)
    +    return -EINVAL;
    +  cfg.cfgtype = CFG_ACTUATOR_OIS;
    +  cfg.cfg.setting = ois_level;
    +  /* Invoke the IOCTL to set the ois */
    +  rc = ioctl(ois_actuator_ptr->fd, VIDIOC_MSM_ACTUATOR_CFG, &cfg);
    +  if (rc < 0) {
    +    SERR("failed-errno:%s!!!", strerror(errno));
    +  }
    +  return rc;
    +}

    2.2.3 HAL 3.0 parameter setting

    Adding a setting in V3: in HAL V3, parameter passing from the framework to the HAL is done via metadata, i.e. every setting becomes a (tag, value) pair. For example, to set AE mode to auto, the V1 parameter might be the string "AE mode=auto"; in V3, if the AE-mode tag number is 10 and auto is encoded as 1, what reaches the HAL is roughly the pair (10, 1), and the HAL looks up tag 10 to obtain the value 1. The OIS setting done for V1 therefore needs new handling in V3, sketched below.
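    A small sketch of the pair-based style (VENDOR_TAG_OIS stands for whatever tag number the vendor defines, as in the steps below):

    // Sketch: a v3 setting is a (tag, value) pair rather than a substring.
    CameraMetadata request;
    int32_t oisMode = 1;
    request.update(VENDOR_TAG_OIS, &oisMode, 1);           // framework: set (tag, 1)
    camera_metadata_entry_t e = request.find(VENDOR_TAG_OIS);
    int32_t value = e.data.i32[0];                         // HAL: look up by tag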

    How to define your own vendor-specific parameter in V3 (e.g. the OIS setting): Google anticipated that vendors would need their own parameters, so the metadata defines a vendor tag range that lets a vendor add its own operations; the OIS setting can be implemented through a vendor tag.

    Steps

    1)  Define your own vendor tag number

    vim system/media/camera/include/system/camera_metadata_tags.h

              typedef enum camera_metadata_tag {
                 ANDROID_SYNC_START,
                 ANDROID_SYNC_MAX_LATENCY,
                 ANDROID_SYNC_END,
    +            VENDOR_TAG_OIS =
    +                VENDOR_SECTION_START,  // few parameters here, so no new section is defined; the default section 0x8000 is used
                    ......................
               } camera_metadata_tag_t;

    2)  Required supporting configuration

    Vendor tags must all be added after VENDOR_SECTION_START; VENDOR_TAG_OIS is added here. For the HAL to handle vendor tags, two things are needed: the camera module version must be 2.2 or later, since Google only supports vendor tags stably from that version on, and the vendor tag operations functions must be provided.

    vim ./hardware/libhardware/modules/camera/CameraHAL.cpp +186

    The version check and the operations functions are shown below:

    vim ./hardware/qcom/camera/QCamera2/HAL3/QCamera3VendorTags.cpp +184

     

    get_tag_count: returns the number of vendor tags;

    get_all_tags: writes all vendor tags, in order, into the uint32_t *tag_array passed down by the service, so the upper layer learns each tag's number;

    get_section_name: returns the name of the section a vendor tag belongs to; for instance a few vendor tags can be grouped into one section and the rest into others. The definitions in the metadata header make this easy to follow; to add your own section, append it after VENDOR_SECTION = 0x8000. Since only the OIS parameter is set here, there is no need to categorize, so the default VENDOR_SECTION is used.

    vim system/media/camera/include/system/camera_metadata_tags.h

    get_tag_name: returns the name of each vendor tag; here simply returning "VENDOR_TAG_OIS" is enough;

    get_tag_type: returns the data type of the vendor tag's value, which can be TYPE_INT32, TYPE_FLOAT, or several other formats depending on the need; for the OIS parameter INT32 suffices.
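    A minimal sketch of what these five operations could look like for the single OIS tag, assuming the vendor_tag_ops_t layout from system/camera_vendor_tags.h (the function and section names here are illustrative):

    // Sketch: vendor tag ops for one tag living in the default vendor section.
    static int ois_get_tag_count(const vendor_tag_ops_t *v) { return 1; }
    static void ois_get_all_tags(const vendor_tag_ops_t *v, uint32_t *tag_array) {
        tag_array[0] = VENDOR_TAG_OIS;
    }
    static const char *ois_get_section_name(const vendor_tag_ops_t *v, uint32_t tag) {
        return "com.vendor.default";      // illustrative section name
    }
    static const char *ois_get_tag_name(const vendor_tag_ops_t *v, uint32_t tag) {
        return (tag == VENDOR_TAG_OIS) ? "VENDOR_TAG_OIS" : NULL;
    }
    static int ois_get_tag_type(const vendor_tag_ops_t *v, uint32_t tag) {
        return (tag == VENDOR_TAG_OIS) ? TYPE_INT32 : -1;
    }

    void get_vendor_tag_ops(vendor_tag_ops_t *ops) {   // wired up by the camera module
        ops->get_tag_count    = ois_get_tag_count;
        ops->get_all_tags     = ois_get_all_tags;
        ops->get_section_name = ois_get_section_name;
        ops->get_tag_name     = ois_get_tag_name;
        ops->get_tag_type     = ois_get_tag_type;
    }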

    3)  Loading the vendor tags

    With this in place, CameraService.cpp will load the vendor tags we wrote at startup, via the following code in onFirstRef:

    if (mModule->common.module_api_version >= CAMERA_MODULE_API_VERSION_2_2) {

                           setUpVendorTags();

            }

    4)  Converting V1 parameters to V3

    Since this OIS setting is used from a V1 app, the V1 parameters must first be converted to V3 form. Google implements the conversion in services/camera/libcameraservice/api1/client2/Parameters.cpp, so we first fetch the OIS value passed down by the V1 app in the following function, where paramString is the V1 parameter string:

    status_t Parameters::set(const String8& paramString)
    {
        …………
        mOis = newParams.get(CameraParameters::KEY_OIS);
        …………
    }

    Since V3 parameters are all sent down together with the frame request, the mOis value must be pushed to the HAL in Parameters::updateRequest(CameraMetadata *request), i.e.

    +  res = request->update(VENDOR_TAG_OIS, &mOis, 1);

    This sends the OIS vendor tag and its value down to HAL V3.

    5)  HAL V3 reads the OIS setting

    Use the CameraMetadata::find(uint32_t tag) function to fetch the parameter:

    oisMapMode = frame_settings.find(VENDOR_TAG_OIS).data.i32[0];

    Then push the setting down to the vendor layer via ADD_SET_PARAM_ENTRY_TO_BATCH:

    ADD_SET_PARAM_ENTRY_TO_BATCH(hal_metadata, CAM_INTF_PARM_OIS, oisMapMode);

     

    2.3 HAL 3.0 framework analysis

    2.3.1 Overall frameworks-layer architecture

    The CameraService part of the frameworks architecture is shown in the figure below:

    v3 concentrates more of the work in the framework, keeping more control in its own hands; less data is exchanged with the HAL, which relieves the HAL of some of the work it had to do in older versions and makes the design more modular.

    The creation and initialization of Camera2Client proceed as shown in the figure below:

    As the figure shows, after Camera2Client is created, initialize() runs and creates each of the processing modules:

    Code path: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    status_t Camera2Client::initialize(CameraModule *module)
    {
        ………
        mStreamingProcessor = new StreamingProcessor(this); // preview and record
        threadName = String8::format("C2-%d-StreamProc", mCameraId);
        mStreamingProcessor->run(threadName.string()); // preview and recording

        mFrameProcessor = new FrameProcessor(mDevice, this); // 3A
        threadName = String8::format("C2-%d-FrameProc", mCameraId);
        mFrameProcessor->run(threadName.string()); // 3A

        mCaptureSequencer = new CaptureSequencer(this);
        threadName = String8::format("C2-%d-CaptureSeq", mCameraId);
        mCaptureSequencer->run(threadName.string()); // recording, still capture

        mJpegProcessor = new JpegProcessor(this, mCaptureSequencer);
        threadName = String8::format("C2-%d-JpegProc", mCameraId);
        mJpegProcessor->run(threadName.string());
    ………
        mCallbackProcessor = new CallbackProcessor(this); // callback handling
        threadName = String8::format("C2-%d-CallbkProc", mCameraId);
        mCallbackProcessor->run(threadName.string());
        ………
    }

    In turn these create:

    1. StreamingProcessor, plus a thread it owns: this module handles the preview and record video streams, fetching raw video data from the HAL.

    2. FrameProcessor, plus a thread: dedicated to the per-frame callback information such as 3A, i.e. each video frame carries additional data (e.g. 3A values) beyond the raw image.

    3. CaptureSequencer, plus a thread: works together with the other modules, mainly to notify the app layer of captured pictures.

    4. JpegProcessor, plus a thread: similar to StreamingProcessor, it starts a capture stream, generally used to fetch JPEG-encoded image data from the HAL.

    5. In addition, the ZslProcessor module implements "zero shutter lag": it takes the most recent few frames pre-buffered in the raw preview stream, encodes them directly, and returns them to the app, with no need to request JPEG data via take picture. ZSL is made possible by today's CSI2/MIPI throughput and sensors that can output full resolution at high frame rates in real time. Ordinary phone capture has a delay after pressing the shutter because the underlying camera and ISP must switch work modes, reapply parameters, refocus, and so on before a frame can be grabbed and encoded to JPEG.

    Together these five modules implement essentially all the basic functionality needed for camera application development.

    2.3.2 Control flow in Preview mode

    Taking Camera2Client::startPreview() as the entry point, let us analyze the Preview-related flow in the framework layer.

    1. Calling Camera2Client::startPreview

    Code path 1: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    status_t Camera2Client::startPreview() {
        ATRACE_CALL();
        ALOGV("%s: E", __FUNCTION__);
        Mutex::Autolock icl(mBinderSerializationLock);
        status_t res;
        if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
        SharedParameters::Lock l(mParameters);
        return startPreviewL(l.mParameters, false);
    }

    startPreview extracts the parameters and the real Preview control flow begins in startPreviewL. The function looks long, but it follows a single consistent pattern:

    2. Calling Camera2Client::startPreviewL

    Code path 1: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    The parts marked 2.1-2.6 below are detailed afterwards;

    status_tCamera2Client::startPreviewL(Parameters &params, bool restart){

    ......

    //获取上一层Preview stream id

    intlastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();

    //2.1创建camera3device stream, Camera3OutputStream

        res =mStreamingProcessor->updatePreviewStream(params);

    .....

    intlastJpegStreamId = mJpegProcessor->getStreamId();

    //2.2预览启动时就建立一个jpeg的outstream

    res= updateProcessorStream(mJpegProcessor,params);

    .....

    //2.3回调处理建立一个Camera3outputstream

    res= mCallbackProcessor->updateStream(params);

    ………

    //2.4

    outputStreams.push(getCallbackStreamId());

    ......

    outputStreams.push(getPreviewStreamId());//预览stream

    ......

    if(!params.recordingHint) {

       if (!restart) {

          //2.5 request处理,更新了mPreviewrequest

          res = mStreamingProcessor->updatePreviewRequest(params); 

    ......

        }

            //2.6

            res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,

                    outputStreams);//启动stream,传入outputStreams即stream 的id

        }

    ......

    }

    2.1、调用mStreamingProcessor->updatePreviewStream函数

       代码目录-2:

        frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

    status_t StreamingProcessor::updatePreviewStream (constParameters &params) {

    ......

        sp<cameradevicebase> device =mDevice.promote();//Camera3Device

    ......

        if (mPreviewStreamId != NO_STREAM) {

            // Check if stream parameters have tochange

           uint32_t currentWidth, currentHeight;

            res =device->getStreamInfo(mPreviewStreamId,

                    &tWidth, &tHeight, 0);

        ......

            if (currentWidth !=(uint32_t)params.previewWidth ||

                    currentHeight != (uint32_t)params.previewHeight){

            ......    

                res =device->waitUntilDrained();

            ......   

                res =device->deleteStream(mPreviewStreamId);

                ......

                mPreviewStreamId = NO_STREAM;

            }

        }

    if (mPreviewStreamId == NO_STREAM) {//首次create stream

            //创建一个Camera3OutputStream

            res = device->createStream(mPreviewWindow,

                    params.previewWidth,params.previewHeight,

                   CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, &mPreviewStreamId);

            ......

            }

        }

        res =device->setStreamTransform(mPreviewStreamId,

                params.previewTransform);

        ......

    }

    The function first checks whether a stream already exists under the StreamingProcessor module; if not, Camera3Device is asked to create one. Evidently one StreamingProcessor owns only one preview stream, while one Camera3Device controls all the streams.

    Note: in Camera2Client, data exchange among the five modules is all based on streams.

    Let us now focus on Camera3Device's createStream interface, the foundation on which the five modules create streams:

    Code path 3:
     frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    status_t Camera3Device::createStream(sp<ANativeWindow> consumer,
            uint32_t width, uint32_t height, int format, int *id) {
        ......
        assert(mStatus != STATUS_ACTIVE);
        sp<Camera3OutputStream> newStream;
        if (format == HAL_PIXEL_FORMAT_BLOB) { // still image
            ssize_t jpegBufferSize = getJpegBufferSize(width, height);
           ......
            newStream = new Camera3OutputStream(mNextStreamId, consumer,
                    width, height, jpegBufferSize, format); // jpeg buffer size
        } else {
            newStream = new Camera3OutputStream(mNextStreamId, consumer,
                    width, height, format); // Camera3OutputStream
        }
        newStream->setStatusTracker(mStatusTracker);
        // bind a stream id to the Camera3OutputStream
        res = mOutputStreams.add(mNextStreamId, newStream);
        ......
        *id = mNextStreamId++; // at least a preview stream, usually also a callback stream
        mNeedConfig = true;
        // Continue captures if active at start
        if (wasActive) {
            ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
            res = configureStreamsLocked();
           ......
            internalResumeLocked();
        }
        ALOGV("Camera %d: Created new stream", mId);
        return OK;
    }

    The key point is the new Camera3OutputStream. Camera3Device mainly holds two kinds of streams: Camera3OutputStream, which serves as the HAL's output and asks the HAL to fill its buffers, and Camera3InputStream, which the framework fills. Preview, record, and capture all obtain data from the HAL, so they all exist as output streams; they are our focus, and we will return to them when describing the Preview data flow.

    Each time an OutputStream is created, its information is pushed into the mOutputStreams KeyedVector, keyed by the ID assigned at creation inside Camera3Device, with the Camera3OutputStream's sp as the value; mNextStreamId records the ID for the next stream (see the sketch below).

    The above completes the creation of the StreamingProcessor's preview stream; the ID assigned when the Camera3OutputStream was created is recorded as mPreviewStreamId. In addition, every stream has a corresponding ANativeWindow, referred to here as the consumer.
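    Since this ID-to-stream bookkeeping is just a KeyedVector, here is a small sketch of the pattern with stand-in types (utils/KeyedVector.h); note that the key (the stream ID) and the index inside the vector are distinct, which is why indexOfKey() is needed before editValueAt():

    #include <utils/KeyedVector.h>
    #include <utils/String8.h>

    // Sketch: map stream IDs to stream objects, here stand-in strings.
    android::KeyedVector<int32_t, android::String8> streams;
    streams.add(7, android::String8("preview"));   // ID 7 -> preview stream
    streams.add(9, android::String8("callback"));  // ID 9 -> callback stream
    ssize_t idx = streams.indexOfKey(9);           // idx is an index, not the ID 9
    android::String8 &name = streams.editValueAt(idx);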

    2.2 Calling updateProcessorStream(mJpegProcessor, params)

    Code path 2:
     frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

           status_tCamera2Client::updateProcessorStream(sp<processort> processor,

                                                 camera2::Parameters params) {

                //No default template arguments until C++11, so we need this overload

                 return updateProcessorStream<processort,processort::updatestream="">(

                    processor,params);

    }

    template <typename const="" parameters=""status_t="">

    status_tCamera2Client::updateProcessorStream(sp<processort> processor,

                                                 Parameters params) {

                status_tres;

                //Get raw pointer since sp<t> doesn't have operator->*

                ProcessorT*processorPtr = processor.get();

                res= (processorPtr->*updateStreamF)(params);

    .......

    }

    Via this template the call ultimately reaches JpegProcessor::updateStream, whose logic basically matches the Callback module's: it creates an OutputStream bound to the capture window, and saves the stream's ID in mCaptureStreamId.

    One more point:

    a jpeg stream is created already in preview mode so that when takePicture starts, the capture can proceed faster; efficiency is gained at the cost of memory.

    2.3、调用mCallbackProcessor->updateStream函数

    代码目录-2:

        frameworks/av/services/camera/libcameraservice/api1/client2/CallbackProcessor.cpp

    对比StreamingProcessor模块创建previewstream的过程,很容易定位到Callback模块是需要建立一个 callback流,同样需要创建一个Camera3OutputStream来接收HAL返回的每一帧帧数据,是否需要callback可以通过 callbackenable来控制。一般但预览阶段可能不需要回调每一帧的数据到APP,但涉及到相应的其他业务如视频处理时,就需要进行 callback的enable。

    status_t CallbackProcessor::updateStream(constParameters &params) {

        ………

        sp<cameradevicebase> device =mDevice.promote();

        ………

        // If possible, use the flexible YUV format

        int32_t callbackFormat =params.previewFormat;

        if (mCallbackToApp) {

            // TODO: etalvala: This should use theflexible YUV format as well, but

            // need to reconcile HAL2/HAL3requirements.

            callbackFormat = HAL_PIXEL_FORMAT_YV12;

        } else if(params.fastInfo.useFlexibleYuv&&

                (params.previewFormat ==HAL_PIXEL_FORMAT_YCrCb_420_SP ||

                 params.previewFormat ==HAL_PIXEL_FORMAT_YV12) ) {

            callbackFormat =HAL_PIXEL_FORMAT_YCbCr_420_888;

        }

        if (!mCallbackToApp &&mCallbackConsumer == 0) {

            // Create CPU buffer queue endpoint,since app hasn't given us one

            // Make it async to avoid disconnectdeadlocks

            sp<igraphicbufferproducer>producer;

            sp<igraphicbufferconsumer>consumer;

           //BufferQueueProducer与BufferQueueConsumer

            BufferQueue::createBufferQueue(&producer, &consumer);

            mCallbackConsumer = new CpuConsumer(consumer,kCallbackHeapCount);

    //当前CallbackProcessor继承于CpuConsumer::FrameAvailableListener

            mCallbackConsumer->setFrameAvailableListener(this);

           mCallbackConsumer->setName(String8(Camera2Client::CallbackConsumer));

    //用于queue操作,这里直接进行本地的buffer操作

            mCallbackWindow = new Surface(producer);

        }

        if (mCallbackStreamId != NO_STREAM) {

            // Check if stream parameters have tochange

            uint32_t currentWidth, currentHeight,currentFormat;

            res =device->getStreamInfo(mCallbackStreamId,

                    &tWidth, &tHeight, &tFormat);

           ………

        }

        if (mCallbackStreamId == NO_STREAM) {

            ALOGV(Creating callback stream: %d x%d, format 0x%x, API format 0x%x,

                    params.previewWidth,params.previewHeight,

                    callbackFormat,params.previewFormat);

            res = device->createStream(mCallbackWindow,

                   params.previewWidth, params.previewHeight,

                    callbackFormat,&mCallbackStreamId);//Creating callback stream

            ………

        }

        return OK;

    }

    2.4 Collecting all the streams from startPreviewL into Vector outputStreams

    outputStreams.push(getPreviewStreamId());  // preview stream

    outputStreams.push(getCallbackStreamId()); // callback stream

    So one preview currently builds at least two streams.

    2.5、调用mStreamingProcessor->updatePreviewRequest函数

    代码目录-2:

        frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

    在创建好多路stream后,由StreamingProcessor模块来将所有的stream信息交由Camera3Device去打包成Request请求。

    注意:

    Camera HAL2/3的特点是:将所有stream的请求都转化为几个典型的Request请求,而这些Request需要由HAL去解析,进而处理所需的业务,这也是Camera3数据处理复杂化的原因所在。

    status_t StreamingProcessor::updatePreviewRequest(constParameters &params) {

        ………

        if (mPreviewRequest.entryCount()== 0) {

            sp<camera2client> client =mClient.promote();

            if (client == 0) {

                ALOGE(%s: Camera %d: Client doesnot exist, __FUNCTION__, mId);

                return INVALID_OPERATION;

            }

            // UseCAMERA3_TEMPLATE_ZERO_SHUTTER_LAG for ZSL streaming case.

            if (client->getCameraDeviceVersion()>= CAMERA_DEVICE_API_VERSION_3_0) {

                if (params.zslMode &&!params.recordingHint) {

                    res = device->createDefaultRequest(CAMERA3_TEMPLATE_ZERO_SHUTTER_LAG,

                            &mPreviewRequest);

                } else {

                    res = device->createDefaultRequest(CAMERA3_TEMPLATE_PREVIEW,

                            &mPreviewRequest);

                }

            } else {

              //创建一个Preview相关的request,由底层的hal来完成default创建

                res =device->createDefaultRequest(CAMERA2_TEMPLATE_PREVIEW,

                        &mPreviewRequest);

            ………

    }

    //根据参数来更新CameraMetadatarequest,用于app设置参数,如antibanding设置

    res= params.updateRequest(&mPreviewRequest);  

        ………

        res = mPreviewRequest.update(ANDROID_REQUEST_ID,

                &mPreviewRequestId,1);//mPreviewRequest的ANDROID_REQUEST_ID

        ………

    }

    a. mPreviewRequest is a CameraMetadata object encapsulating the current preview request;

    b. Calling device->createDefaultRequest(CAMERA3_TEMPLATE_PREVIEW, &mPreviewRequest)

    Code path 3:

    frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    status_t Camera3Device::createDefaultRequest(int templateId, CameraMetadata *request) {
        ………
     const camera_metadata_t *rawRequest;
     ATRACE_BEGIN("camera3->construct_default_request_settings");
     rawRequest = mHal3Device->ops->construct_default_request_settings(
        mHal3Device, templateId);
     ATRACE_END();
     if (rawRequest == NULL) {
        SET_ERR_L("HAL is unable to construct default settings for template %d",
                 templateId);
        return DEAD_OBJECT;
     }
     *request = rawRequest;
     mRequestTemplateCache[templateId] = rawRequest;
    ………
    }

    Ultimately the HAL builds the raw request; for preview this is a request of type CAMERA3_TEMPLATE_PREVIEW. To the HAL, the raw request is essentially a handle for operating on a camera_metadata_t structure:

    struct camera_metadata {
        metadata_size_t          size;
        uint32_t                 version;
        uint32_t                 flags;
        metadata_size_t          entry_count;
        metadata_size_t          entry_capacity;
        metadata_uptrdiff_t      entries_start; // Offset from camera_metadata
        metadata_size_t          data_count;
        metadata_size_t          data_capacity;
        metadata_uptrdiff_t      data_start; // Offset from camera_metadata
        uint8_t                  reserved[];
    };

    This structure can store several kinds of data, keyed by entry tag type, and the amount of storage adjusts automatically;
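    A minimal sketch using the C helpers declared alongside this struct in system/media/camera (camera_metadata.h), showing how entries are added and then found by tag:

    #include <system/camera_metadata.h>

    // Sketch: allocate a metadata buffer, add one tagged entry, read it back.
    camera_metadata_t *m = allocate_camera_metadata(/*entry_capacity*/ 8,
                                                    /*data_capacity*/ 64);
    int32_t requestId = 10000000;   // kPreviewRequestIdStart, see below
    add_camera_metadata_entry(m, ANDROID_REQUEST_ID, &requestId, 1);

    camera_metadata_entry_t entry;
    if (find_camera_metadata_entry(m, ANDROID_REQUEST_ID, &entry) == 0) {
        int32_t id = entry.data.i32[0];   // reads back 10000000
    }
    free_camera_metadata(m);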

    c mPreviewRequest.update(ANDROID_REQUEST_ID,&mPreviewRequestId,1)

    将当前的PreviewRequest相应的ID保存到camera metadata。

    2.6、调用mStreamingProcessor->startStream函数启动整个预览的stream流

    代码目录-2:

      frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

    该函数的处理过程较为复杂,可以说是整个Preview正常工作的核心控制:

    status_t StreamingProcessor::startStream(StreamType type,
            const Vector<int32_t> &outputStreams) {
    .....
    CameraMetadata &request = (type == PREVIEW) ?
                mPreviewRequest : mRecordingRequest; // pick the preview CameraMetadata request
    // add the outputStreams into the CameraMetadata
    res = request.update(ANDROID_REQUEST_OUTPUT_STREAMS, outputStreams);
    res = device->setStreamingRequest(request); // send the request to the HAL
    .....
    }

    The function first determines from the current working mode which request the StreamingProcessor must handle; this module is responsible for the Preview and Record requests.

    For the PreviewRequest built earlier by createDefaultRequest, the output streams the request operates on are first packed into an entry with the tag ANDROID_REQUEST_OUTPUT_STREAMS.

    a. Calling setStreamingRequest

    Code path:
     frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    This is the real request for Camera3Device to process the PreviewRequest carrying multiple streams.

    a.1 status_t Camera3Device::setStreamingRequest(constCameraMetadata &request,

                                               int64_t* /*lastFrameNumber*/) {

        ATRACE_CALL();

        List<constcamerametadata=""> requests;

        requests.push_back(request);

        return setStreamingRequestList(requests,/*lastFrameNumber*/NULL);

    }

    该函数将mPreviewRequest push到一个list,调用setStreamingRequestList

    a.2 status_t Camera3Device::setStreamingRequestList(
            const List<const CameraMetadata> &requests, int64_t *lastFrameNumber) {
            ATRACE_CALL();
            return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
    }

    a.3 status_t Camera3Device::submitRequestsHelper(
           const List<const CameraMetadata> &requests, bool repeating,
           /*out*/
           int64_t *lastFrameNumber) { // repeating = 1; lastFrameNumber = NULL
       ………
       status_t res = checkStatusOkToCaptureLocked();
       ………
        RequestList requestList;
    // returns a RequestList of CaptureRequests
    res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
    ………
       if (repeating) {
    // repeating requests are stored into the RequestThread
    res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    }  else {
    // capture mode: a single still capture
           res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
     }
       if (res == OK) {
           res = waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
           if (res != OK) {
                SET_ERR_L("Can't transition to active in %f seconds!",
                        kActiveTimeout/1e9);
           }
           ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
                 (*(requestList.begin()))->mResultExtras.requestId);
       } else {
           CLOGE("Cannot queue request. Impossible.");
           return BAD_VALUE;
       }
       return res;
    }

    a.4 convertMetadataListToRequestListLocked

    This function converts the CameraMetadata objects held in the request list into a list of CaptureRequests;

    status_t Camera3Device::convertMetadataListToRequestListLocked(
    const List<const CameraMetadata> &metadataList, RequestList *requestList) {
       ………
       for (List<const CameraMetadata>::const_iterator it = metadataList.begin(); // CameraMetadata, mPreviewRequest
                it != metadataList.end(); ++it) {
            // build a new CaptureRequest converted from the CameraMetadata
           sp<CaptureRequest> newRequest = setUpRequestLocked(*it);
            ………
           // Setup burst Id and request Id
           newRequest->mResultExtras.burstId = burstId++;
           if (it->exists(ANDROID_REQUEST_ID)) {
                if (it->find(ANDROID_REQUEST_ID).count == 0) {
                    CLOGE("RequestID entry exists; but must not be empty in metadata");
                    return BAD_VALUE;
                }
            // set the id corresponding to this request
            newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0];
           } else {
                CLOGE("RequestID does not exist in metadata");
                return BAD_VALUE;
           }
           requestList->push_back(newRequest);
            ………
       }
       return OK;
    }

    The list is iterated and parsed; in the current mode only the PreviewRequest CameraMetadata exists, and setUpRequestLocked converts it into a CaptureRequest.

            a.5 setUpRequestLocked

               sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
                    const CameraMetadata &request) { // mPreviewRequest
                    status_t res;
                    if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
                    res = configureStreamsLocked();
                    ......
        // convert the CameraMetadata into a CaptureRequest, including mOutputStreams
        sp<CaptureRequest> newRequest = createCaptureRequest(request);
                    return newRequest;
    }

    configureStreamsLocked hands all the streams built on the Camera3Device side, both output and input format, to the HAL3-layer device for processing; the core interfaces are configure_streams and register_stream_buffers.

    createCaptureRequest converts CameraMetadata-format data such as the PreviewRequest into a CaptureRequest:

               a.6 sp<camera3device::capturerequest>Camera3Device::createCaptureRequest(

                    constCameraMetadata &request) {//mPreviewRequest

                    ………

                    sp<capturerequest>newRequest = new CaptureRequest;

                    newRequest->mSettings= request;//CameraMetadata

                    camera_metadata_entry_tinputStreams =

                        newRequest->mSettings.find(ANDROID_REQUEST_INPUT_STREAMS);

                    if(inputStreams.count > 0) {

                        if(mInputStream == NULL ||

                            mInputStream->getId() != inputStreams.data.i32[0]) {

                            CLOGE(Requestreferences unknown input stream %d,

                            inputStreams.data.u8[0]);

                            returnNULL;

                        }

                    ………

                        newRequest->mInputStream= mInputStream;

                        newRequest->mSettings.erase(ANDROID_REQUEST_INPUT_STREAMS);

                    }

    //读取存储在CameraMetadata的stream id信息

                    camera_metadata_entry_tstreams =

                        newRequest->mSettings.find(ANDROID_REQUEST_OUTPUT_STREAMS);

                        ………

    for (size_t i = 0; i < streams.count; i++) {

                        //Camera3OutputStream的id在mOutputStreams中

                        intidx = mOutputStreams.indexOfKey(streams.data.i32[i]);

                        ………

                     }

                    //返回的是Camera3OutputStream,preview/callback等stream

                    sp<camera3outputstreaminterface>stream =

                         mOutputStreams.editValueAt(idx);

                    ………

    //Camera3OutputStream添加到CaptureRequest的mOutputStreams

                    newRequest->mOutputStreams.push(stream);

        }

                    newRequest->mSettings.erase(ANDROID_REQUEST_OUTPUT_STREAMS);

                    returnnewRequest;

    }

    The function mainly handles the output and input streams owned by the given CameraMetadata, mPreviewRequest. For preview there is at least one output stream, from the StreamProcessor, plus an optional one from the CallbackProcessor.

    When the PreviewRequest was built, the ANDROID_REQUEST_OUTPUT_STREAMS tag was initialized with Vector<int32_t> &outputStreams, containing the IDs of the output streams this request needs. Through those ID values one can reach the Camera3OutputStreams created by Camera3Device's createStream; in other words, Camera3Device holds many streams across the different request types, but in a given scenario each request needs only a few of them.

    idx = mOutputStreams.indexOfKey(streams.data.i32[i]) looks up, from a stream ID contained in the PreviewRequest, the corresponding index into the mOutputStreams KeyedVector. Note: the two values are not necessarily the same.

    mOutputStreams.editValueAt(idx) fetches the Camera3OutputStream corresponding to that ID (e.g. the preview stream ID, the callback stream ID, and so on).

    Once all the Camera3OutputStreams in the current request have been found, they are kept in the CaptureRequest:

    class CaptureRequest : public LightRefBase<CaptureRequest> {
          public:
            CameraMetadata                      mSettings;
            sp<camera3::Camera3Stream>          mInputStream;
            Vector<sp<camera3::Camera3OutputStreamInterface> >
                                                mOutputStreams;
            CaptureResultExtras                 mResultExtras;
        };

    mSettings holds the CameraMetadata PreviewRequest, and the vector mOutputStreams holds the Camera3OutputStreams extracted for the current request; with that, a CaptureRequest has been built.

    Back to a.4: convertMetadataListToRequestListLocked

    Returning to convertMetadataListToRequestListLocked, one CameraMetadata request has now been processed, producing a CaptureRequest. Its ANDROID_REQUEST_ID value is preserved in newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0].

    Across the whole Camera3 architecture there are only three major request ID ranges, showing that few request types are exchanged with the HAL:

    Preview request mPreviewRequest: mPreviewRequestId (Camera2Client::kPreviewRequestIdStart),

    Capture request mCaptureRequest: mCaptureId (Camera2Client::kCaptureRequestIdStart),

    Recording request mRecordingRequest: mRecordingRequestId (Camera2Client::kRecordingRequestIdStart);

    static const int32_t kPreviewRequestIdStart   = 10000000;
    static const int32_t kPreviewRequestIdEnd     = 20000000;
    static const int32_t kRecordingRequestIdStart = 20000000;
    static const int32_t kRecordingRequestIdEnd   = 30000000;
    static const int32_t kCaptureRequestIdStart   = 30000000;
    static const int32_t kCaptureRequestIdEnd     = 40000000;
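    Given these ranges, classifying a request ID is a simple range check; a hypothetical helper for illustration:

    // Sketch (hypothetical helper): map a request ID onto its request type
    // using the Camera2Client ranges quoted above.
    static const char *requestIdToType(int32_t id) {
        if (id >= kPreviewRequestIdStart   && id < kPreviewRequestIdEnd)   return "preview";
        if (id >= kRecordingRequestIdStart && id < kRecordingRequestIdEnd) return "recording";
        if (id >= kCaptureRequestIdStart   && id < kCaptureRequestIdEnd)   return "capture";
        return "unknown";
    }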

               回到a.3:mRequestThread->setRepeatingRequests(requestList)

    对于Preview来说,一次Preview后底层硬件就该可以连续的工作,而不需要进行过多的切换,故Framework每次向HAL发送的Request均是一种repeat的操作模式,故调用了一个重复的RequestQueue来循环处理每次的Request。

    status_t Camera3Device::RequestThread::setRepeatingRequests(
            const RequestList &requests,
            /*out*/
            int64_t *lastFrameNumber) {
        Mutex::Autolock l(mRequestLock);
        if (lastFrameNumber != NULL) { // NULL on first entry
            *lastFrameNumber = mRepeatingLastFrameNumber;
        }
        mRepeatingRequests.clear();
        mRepeatingRequests.insert(mRepeatingRequests.begin(),
                requests.begin(), requests.end());
        unpauseForNewRequests(); // signal the request thread in waitForNextRequest
        mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
        return OK;
    }

    After the requests submitted by the preview thread are added to mRepeatingRequests, the RequestThread is woken up to process the new requests.

    2.7 Step 2.6 starts the RequestThread request-processing loop

    RequestThread::threadLoop() responds to and processes requests newly added to the request queue.

    Code path 2:

    frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    bool Camera3Device::RequestThread::threadLoop() {
    ....
    // returns the next request, from mRepeatingRequests / mPreviewRequest
     sp<CaptureRequest> nextRequest = waitForNextRequest();
    ………
        // Create request to HAL
    // convert the CaptureRequest into a camera3_capture_request_t for HAL 3.0
        camera3_capture_request_t request = camera3_capture_request_t();
        request.frame_number = nextRequest->mResultExtras.frameNumber; // current frame number
        Vector<camera3_stream_buffer_t> outputBuffers;
        // Get the request ID, if any
        int requestId;
        camera_metadata_entry_t requestIdEntry =
                nextRequest->mSettings.find(ANDROID_REQUEST_ID);
        if (requestIdEntry.count > 0) {
    // fetch the request id, here mPreviewRequest's id
            requestId = requestIdEntry.data.i32[0];
        }
             .....
       for (size_t i = 0; i < nextRequest->mOutputStreams.size(); i++) {
             res = nextRequest->mOutputStreams.editItemAt(i)->
                     getBuffer(&outputBuffers.editItemAt(i));
    .....
        // Submit request and block until ready for next one
        ATRACE_ASYNC_BEGIN("frame capture", request.frame_number);
        ATRACE_BEGIN("camera3->process_capture_request");
       // call the HAL's process_capture_request, e.g. for the antibanding setting
        res = mHal3Device->ops->process_capture_request(mHal3Device, &request);
        ATRACE_END();
         .......
    }

    a.1 waitForNextRequest()

        sp<Camera3Device::CaptureRequest> Camera3Device::RequestThread::waitForNextRequest() {
       ………
        while (mRequestQueue.empty()) {
            if (!mRepeatingRequests.empty()) {
                // Always atomically enqueue all requests in a repeating request
                // list. Guarantees a complete in-sequence set of captures to
                // application.
                const RequestList &requests = mRepeatingRequests;
                RequestList::const_iterator firstRequest =
                        requests.begin();
                nextRequest = *firstRequest;
                // insert the current mRepeatingRequests into mRequestQueue
                mRequestQueue.insert(mRequestQueue.end(),
                        ++firstRequest,
                        requests.end());
                // No need to wait any longer
                mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;
                break;
            }
            // wait for the next request
            res = mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
            if ((mRequestQueue.empty() && mRepeatingRequests.empty()) ||
                    exitPending()) {
                Mutex::Autolock pl(mPauseLock);
                if (mPaused == false) {
                    ALOGV("%s: RequestThread: Going idle", __FUNCTION__);
                    mPaused = true;
                    // Let the tracker know
                    sp<StatusTracker> statusTracker = mStatusTracker.promote();
                    if (statusTracker != 0) {
                        statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
                    }
                }
                // Stop waiting for now and let thread management happen
                return NULL;
            }
        }
        if (nextRequest == NULL) {
            // Don't have a repeating request already in hand, so queue
            // must have an entry now.
            RequestList::iterator firstRequest =
                    mRequestQueue.begin();
            nextRequest = *firstRequest;
    // take one CaptureRequest from mRequestQueue, originating from mRepeatingRequests
            mRequestQueue.erase(firstRequest);
        }
        ………
        if (nextRequest != NULL) {
            // every non-null request gets an incremented frame number
            nextRequest->mResultExtras.frameNumber = mFrameNumber++;
            nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
            nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;
        }
        return nextRequest;
    }

    This function is the heart of servicing the RequestList: it polls and sleeps until mRepeatingRequests has requests to process, then enqueues all of its CaptureRequests into mRequestQueue. In principle each CaptureRequest corresponds to one frame's request, and a single wakeup may find several CaptureRequests in mRequestQueue.

    nextRequest->mResultExtras.frameNumber = mFrameNumber++ marks which image frame the current CaptureRequest is processing.

    As long as mRepeatingRequests is non-empty, entries are erased from mRequestQueue as the loop runs; once mRequestQueue becomes empty, the contents of mRepeatingRequests are loaded into it again, forming a repeating response cycle for the repeat request.

    a.2

    camera_metadata_entry_t requestIdEntry = nextRequest->mSettings.find(ANDROID_REQUEST_ID); extracts the request-type value corresponding to this CaptureRequest;

    a.3 the getBuffer operation

    a.4 mHal3Device->ops->process_capture_request(mHal3Device, &request)

    Here the request has already been converted from a CaptureRequest into the camera3_capture_request_t structure exchanged with HAL 3.0.
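    For reference, camera3_capture_request_t as declared in hardware/libhardware's camera3.h (abbreviated; the comments are mine):

    typedef struct camera3_capture_request {
        uint32_t frame_number;                         // monotonically increasing frame id
        const camera_metadata_t *settings;             // per-frame settings; NULL means reuse the previous ones
        camera3_stream_buffer_t *input_buffer;         // reprocess input, NULL for preview
        uint32_t num_output_buffers;
        const camera3_stream_buffer_t *output_buffers; // one buffer per requested output stream
    } camera3_capture_request_t;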

    3. Summary

    This completes one full request to the HAL 3.0 device. Preview startup first creates several output streams, then packs those streams into a single mPreviewRequest to start streaming; the request is turned into a CaptureRequest, then into a capture list finally handled by the RequestThread. Each request is, simply put, Camera3Device asking HAL 3.0 for one frame of data, though a request can also carry various control operations such as autofocus.

    2.3.3 The open-camera path calls device3's initialize

    The app-to-framework flow was briefly covered in the sections above; the frameworks -> HAL initialization diagram is as follows:

     

    2.3.4 Frameworks-layer parameter setting flow

    The setParameters flow is shown in the figure below:

    Frameworks layer:

     

    2.3.5 Flow of pushing parameters down to the HAL

    As section 2.3.2 showed, the request thread (Camera3Device::RequestThread::threadLoop) calls the HAL interface mHal3Device->ops->process_capture_request(mHal3Device, &request); this call carries out the parameter-setting work, e.g. setting antibanding.

    As described in section 2.3.6, antibanding and related parameters have already been updated into the request list; the HAL-layer parameter setting is shown in the figure below:
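    On the HAL side, reading such a setting back out of the incoming request is again a tag lookup; a sketch using the read-only helper from camera_metadata.h (ANDROID_CONTROL_AE_ANTIBANDING_MODE is the standard antibanding tag):

    // Sketch: inside process_capture_request(), extract the antibanding mode
    // that the framework packed into request->settings.
    camera_metadata_ro_entry_t entry;
    int err = find_camera_metadata_ro_entry(request->settings,
                                            ANDROID_CONTROL_AE_ANTIBANDING_MODE,
                                            &entry);
    if (err == 0 && entry.count > 0) {
        uint8_t mode = entry.data.u8[0];   // e.g. ..._ANTIBANDING_MODE_AUTO
        // hand the value down to the vendor/daemon layer here
    }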


    It has been a while since my last post; today I offer some modest notes for reference. Based on one small camera feature, this article explains in detail how a Java-layer interface call steps down to the hardware-layer interface, with brief introductions to some of the mechanisms involved; I hope it provides a useful reference and makes a small contribution to consolidating technical resources.


    2.1 Feature overview

    1)  Feature name:

    ObjectTracking;

    2)  Feature summary: after manually focusing on an object, the device automatically tracks it while moving; the feature mainly relies on the ArcSoft algorithm to process the returned coordinates and determine the object's position;

    2.2 Feature analysis

    1)  Processing flow:

    Selecting the object to track:

    application layer selects an object -> object coordinates -> ArcSoft algorithm fixes the object to track

    Locating the object:

    ArcSoft algorithm obtains the object's position -> object coordinates -> callback up to the application layer

    2)  Application layer:

    a. Draw rectangle on detected and tracking area;

    b. Add touch focus mode with object tracking or just replace current touch focus mode;

    (This part will be completed later.)

    4.1.1 Initialization call flow

    Focus mode: object tracking

    vim ./vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/controller/EventDispatcher.java

    static class ObjectTrackingEvent implements ControllerEventSender {
        @Override
        public void send(EventAction action, ControllerEventSource source, Rect position) {
            switch (action) {
                case DOWN:
                    break;
                case UP:
                    Executor.sendEvent(ControllerEvent.EV_OBJECT_TRACKING_START, source,
                           0, position);
                    break;
                case CANCEL:
                    Executor.sendEvent(ControllerEvent.EV_OBJECT_TRACKING_LOST, source);
                    break;
            }
        }
    }

    Next, ControllerMessageHandler.java dispatches the EV_OBJECT_TRACKING_START event:

    vim ./vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/controller/ControllerMessageHandler.java

    private synchronized void dispatch(ControllerMessage message) {
        ……
        switch (message.mEventId) {
            case EV_ABORT:
                mCurrentState.handleAbort(message);
                break;
            …...
            case EV_OBJECT_TRACKING_LOST:
                mCurrentState.handleObjectTrackingLost(message);
                break;
            case EV_OBJECT_TRACKING_START:
                mCurrentState.handleObjectTrackingStart(message);
                break;
            …...
        }
    }

    vim ./vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/controller/StateTransitionController.java

    public void handleObjectTrackingStart(ControllerMessage message) {

        mObjectTracking.start((Rect) message.mArg2);

    }

    vim ./vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/controller/ObjectTracking.java

    public void start(Rect position) {
        if (position != null) {
            mPosition = position;
            Rect rectIS = new Rect();
            if (mController.mCameraDevice.isObjectTrackingRunning() &&
                    !mController.getParams().getTouchCapture().getBooleanValue() &&
                    rectIS.intersects(mPosition,
                           PositionConverter.getInstance().
                           convertDeviceToFace(mTrackingPosition))) {
                Executor.sendEmptyEvent(ControllerEvent.EV_OBJECT_TRACKING_LOST);
                return;
            }
            if (mIsAlreadyLost) {
                startTracking(position);
            } else {
                // Wait till previous object is lost when restart.
                stop(false);
                mShouldWaitForLost = true;
            }
        }
    }

    private void startTracking(Rect position) {
        mCallback = new ObjectTrackingCallback();
        mController.mCameraWindow.startObjectTrackingAnimation(position);
        mController.mCameraDevice.startObjectTracking(
                PositionConverter.getInstance().convertFaceToDevice(position),
                mCallback);
    }

    vim ./vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/device/CameraDevice.java

    public void startObjectTracking(Rect position, ObjectTrackingCallback cb) {
        ……
        mCameraExtension.setObjectTrackingCallback(cb, // set the callback used to receive position info from ArcSoft
                CameraDeviceUtil.OBJECT_TRACKING_LOW_PASS_FILTER_STRENGTH,
                CameraDeviceUtil.OBJECT_TRACKING_MINIMAL_INTERVAL_MS);
        mCameraExtension.startObjectTracking();  // select the tracking position and hand it to ArcSoft
        mCameraExtension.selectObject(position.centerX(), position.centerY());
        mIsObjectTrackingRunning = true;
        ……
        new EachCameraStatusPublisher(mCameraActivity, mCameraId)
                .put(new ObjectTracking(ObjectTracking.Value.ON))
                .publish();
    }

    vim vendor/semc/frameworks/base/libs/camera-extension/api/src/com/sonyericsson/cameraextension/CameraExtension.java

    public final void startObjectTracking() {
        ……
        if (mCamera != null) {
            mCamera.startObjectTracking();
        }
    }

    vim frameworks/base/core/java/android/hardware/Camera.java

    private native final void _startObjectTracking();
    public void startObjectTracking() {
        _startObjectTracking();
    }

    vim frameworks/base/core/jni/android_hardware_Camera.cpp

    static void android_hardware_Camera_startObjectTracking(JNIEnv *env, jobject thiz)
    {
        JNICameraContext* context;
        sp<Camera> camera = get_native_camera(env, thiz, &context);
        if (camera == 0) return;
        bool isSuccess = context->setUpObjectTracking(env);
        ……
        if (camera->sendCommand(CAMERA_CMD_START_OBJECT_TRACKING, 0, 0) != NO_ERROR) {
            jniThrowRuntimeException(env, "start objecttracking failed");
        }
    }

     vim frameworks/av/camera/Camera.cpp

    status_t Camera::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2)
    {
        sp<ICamera> c = mCamera;
        if (c == 0) return NO_INIT;
        return c->sendCommand(cmd, arg1, arg2);
    }

    vim frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

    status_t CameraClient::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) {
        …...
        else if (cmd == CAMERA_CMD_START_OBJECT_TRACKING) {
            enableMsgType(CAMERA_MSG_OBJECT_TRACKING);
            mLowPassFilterObjectTracking->isStartObjectTracking = true;
        }
        …...
        /* MM-MC-SomcAddForSoMCAP-00+} */
        return mHardware->sendCommand(cmd, arg1, arg2);
    }

    void CameraClient::enableMsgType(int32_t msgType) {
        android_atomic_or(msgType, &mMsgEnabled);
        mHardware->enableMsgType(msgType);
    }

    vim frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

    void enableMsgType(int32_t msgType)
    {
        if (mDevice->ops->enable_msg_type)
            mDevice->ops->enable_msg_type(mDevice, msgType);
    }

    status_t sendCommand(int32_t cmd, int32_t arg1, int32_t arg2)
    {
        if (mDevice->ops->send_command)
            return mDevice->ops->send_command(mDevice, cmd, arg1, arg2);
        return INVALID_OPERATION;
    }

    vim hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp

    camera_device_ops_t QCamera2HardwareInterface::mCameraOps = {
        set_preview_window:        QCamera2HardwareInterface::set_preview_window,
        set_callbacks:             QCamera2HardwareInterface::set_CallBacks,
        enable_msg_type:           QCamera2HardwareInterface::enable_msg_type,
        disable_msg_type:          QCamera2HardwareInterface::disable_msg_type,
        ……
        get_parameters:            QCamera2HardwareInterface::get_parameters,
        put_parameters:            QCamera2HardwareInterface::put_parameters,
        send_command:              QCamera2HardwareInterface::send_command,
        release:                   QCamera2HardwareInterface::release,
        dump:                      QCamera2HardwareInterface::dump,
    };

    4.1.2 Data callback flow

    Starting from the HAL-layer interface, we analyze the data callback path up the stack:

    1) Callback functions provided by CameraHardwareInterface:

    vim frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

    The callback functions:

    static void __notify_cb(int32_t msg_type, int32_t ext1,
                            int32_t ext2, void *user)
    {
        CameraHardwareInterface *__this =
                static_cast<CameraHardwareInterface *>(user);
        __this->mNotifyCb(msg_type, ext1, ext2, __this->mCbUser);
    }

    static void __data_cb(int32_t msg_type,
                         const camera_memory_t *data, unsigned int index,
                         camera_frame_metadata_t *metadata,
                         void *user)
    {
        CameraHardwareInterface *__this =
                static_cast<CameraHardwareInterface *>(user);
        sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
        if (index >= mem->mNumBufs) {
            return;
        }
        __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
    }

    The set-callbacks function:

    void setCallbacks(notify_callback notify_cb,
                     data_callback data_cb,
                     data_callback_timestamp data_cb_timestamp,
                     void* user)
    {
        mNotifyCb = notify_cb;
        mDataCb = data_cb;
        mDataCbTimestamp = data_cb_timestamp;
        mCbUser = user;
        if (mDevice->ops->set_callbacks) {
            mDevice->ops->set_callbacks(mDevice, // calls the HAL's set_callbacks
                                  __notify_cb,
                                  __data_cb,
                                  __data_cb_timestamp,
                                  __get_memory,
                                  this);
        }
    }

    The set_callbacks call above ultimately invokes the HAL-layer QCamera2HardwareInterface::setCallBacks; a brief look at the HAL-side callback handling:

    vim hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp

    int QCamera2HardwareInterface::setCallBacks(camera_notify_callback notify_cb,
                                                camera_data_callback data_cb,
                                                camera_data_timestamp_callback data_cb_timestamp,
                                                camera_request_memory get_memory,
                                                void *user)
    {
        mNotifyCb        = notify_cb;
        mDataCb          = data_cb;
        mDataCbTimestamp = data_cb_timestamp;
        mGetMemory       = get_memory;
        mCallbackCookie  = user;
        m_cbNotifier.setCallbacks(notify_cb, data_cb, data_cb_timestamp, user);
        return NO_ERROR;
    }

    vim hardware/qcom/camera/QCamera2/HAL/QCamera2HWICallbacks.cpp

    void QCameraCbNotifier::setCallbacks(camera_notify_callback notifyCb,
                                         camera_data_callback dataCb,
                                         camera_data_timestamp_callback dataCbTimestamp,
                                         void *callbackCookie)
    {
        if ( (NULL == mNotifyCb ) &&
             ( NULL == mDataCb ) &&
             ( NULL == mDataCbTimestamp ) &&
             ( NULL == mCallbackCookie) ) {
            mNotifyCb = notifyCb;
            mDataCb = dataCb;
            mDataCbTimestamp = dataCbTimestamp;
            mCallbackCookie = callbackCookie;
            mActive = true;
            mProcTh.launch(cbNotifyRoutine, this); // launch the thread that services callback messages
        } else {
            ALOGE("%s: Camera callback notifier already initialized!",
                  __func__);
        }
    }

    void *QCameraCbNotifier::cbNotifyRoutine(void *data)
    {
        int running = 1;
        int ret;
        QCameraCbNotifier *pme = (QCameraCbNotifier *)data;
        QCameraCmdThread *cmdThread = &pme->mProcTh;
        cmdThread->setName("CAM_cbNotify");
        uint8_t isSnapshotActive = FALSE;
        bool longShotEnabled = false;
        uint32_t numOfSnapshotExpected = 0;
        uint32_t numOfSnapshotRcvd = 0;
        int32_t cbStatus = NO_ERROR;
        CDBG("%s: E", __func__);
        do {
            do {
                ret = cam_sem_wait(&cmdThread->cmd_sem);
            ……
            } while (ret != 0);
            camera_cmd_type_t cmd = cmdThread->getCmd();
            switch (cmd) {
            case CAMERA_CMD_TYPE_START_DATA_PROC:
                {
                    isSnapshotActive = TRUE;
                    numOfSnapshotExpected = pme->mParent->numOfSnapshotsExpected();
    /* MM-YW-Integrate Arcsoft Snapshot Fature-00+{ */
    #ifdef USE_ARCSOFT_FEATURE
                    if (NULL != pme->mParent->mArcSoft_Feature)
                        numOfSnapshotExpected += pme->mParent->mArcSoft_Feature->mSnapshotInfo.extra_burst_cnt;
    #endif
    /* MM-YW-Integrate Arcsoft Snapshot Fature-00+} */
                    longShotEnabled = pme->mParent->isLongshotEnabled();
                    numOfSnapshotRcvd = 0;
                }
                break;
            case CAMERA_CMD_TYPE_STOP_DATA_PROC:
                {
                    pme->mDataQ.flushNodes(matchSnapshotNotifications);
                    isSnapshotActive = FALSE;
                    numOfSnapshotExpected = 0;
                    numOfSnapshotRcvd = 0;
                }
                break;
            case CAMERA_CMD_TYPE_DO_NEXT_JOB:
                {
                    qcamera_callback_argm_t *cb = // dequeue a callback message
                        (qcamera_callback_argm_t *)pme->mDataQ.dequeue();
                    cbStatus = NO_ERROR;
                    if (NULL != cb) {
                       CDBG("%s: cb type %d received",
                             __func__,
                             cb->cb_type);
                       if (pme->mParent->msgTypeEnabledWithLock(cb->msg_type)) {
                           switch (cb->cb_type) {
                           case QCAMERA_NOTIFY_CALLBACK:
                               {
                                   if (cb->msg_type == CAMERA_MSG_FOCUS) {
                                       ATRACE_INT("Camera:AutoFocus", 0);
                                       CDBG_HIGH("[KPI Perf] %s : PROFILE_SENDING_FOCUS_EVT_TO APP",
                                               __func__);
                                   }
                                   if (pme->mNotifyCb) {
                                       pme->mNotifyCb(cb->msg_type,
                                                     cb->ext1,
                                                     cb->ext2,
                                                     pme->mCallbackCookie);
                                   } else {
                                       ALOGE("%s: notify callback not set!",
                                             __func__);
                                   }
                               }
                               break;
                           case QCAMERA_DATA_CALLBACK:
                               {
                                   if (pme->mDataCb) {
                                       pme->mDataCb(cb->msg_type,
                                                   cb->data,
                                                   cb->index,
                                                   cb->metadata,
                                                   pme->mCallbackCookie);
                                   } else {
                                       ALOGE("%s: data callback not set!",
                                             __func__);
                                   }
                               }
                               break;
                           case QCAMERA_DATA_TIMESTAMP_CALLBACK:
                               {
                                   if (pme->mDataCbTimestamp) {
                                       pme->mDataCbTimestamp(cb->timestamp,
                                                             cb->msg_type,
                                                             cb->data,
                                                             cb->index,
                                                             pme->mCallbackCookie);
                                   } else {
                                       ALOGE("%s: datacb with tmp not set!",
                                             __func__);
                                   }
                               }
                               break;
                           case QCAMERA_DATA_SNAPSHOT_CALLBACK:
                               {
                                   if (TRUE == isSnapshotActive && pme->mDataCb ) {
                                       if (!longShotEnabled) {
                                           numOfSnapshotRcvd++;
                                           /* MM-YW-Integrate Arcsoft Snapshot Fature-01+{ */
                                           #ifdef USE_ARCSOFT_FEATURE
                                           if ((NULL != pme->mParent->mArcSoft_Feature) && pme->mParent->mArcSoft_Feature->mSnapshotInfo.is_snapshot_done
                                            if((NULL !=pme->mParent->mArcSoft_Feature) &&pme->mParent->mArcSoft_Feature->mSnapshotInfo.is_snapshot_done)

                                           {

                                               pme->mParent->processSyncEvt(QCAMERA_SM_EVT_SNAPSHOT_DONE,NULL);

                                               pme->mParent->mArcSoft_Feature->ArcSoft_SendSnapshotEvt(ARCSOFT_S_EVT_DONE,FALSE,NULL);         

                                           }else

                                            #endif

                                           /*MM-YW-IntegrateArcsoft Snapshot Fature-01+}*/

                                           if (numOfSnapshotExpected>0 &&

                                               numOfSnapshotExpected== numOfSnapshotRcvd) {

                                               //notify HWI that snapshot is done

                                               pme->mParent->processSyncEvt(QCAMERA_SM_EVT_SNAPSHOT_DONE,

                                                                            NULL);

                                           }

                                       }

                                       pme->mDataCb(cb->msg_type,

                                                    cb->data,

                                                     cb->index,

                                                    cb->metadata,

                                                    pme->mCallbackCookie);

                                   }

                               }

                                break;

         ……

        }while (running);

        CDBG("%s:X",__func__);

        returnNULL;

    }
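The other half of this pattern is the producer side: the HAL-facing entry points package each event into a qcamera_callback_argm_t, enqueue it on mDataQ, and post CAMERA_CMD_TYPE_DO_NEXT_JOB so cbNotifyRoutine wakes up. Below is a minimal, self-contained sketch of that queue-plus-wakeup handoff; the types are simplified stand-ins, not the actual QCamera classes:

#include <deque>
#include <mutex>
#include <condition_variable>
#include <cstdio>

// Simplified stand-ins for qcamera_callback_argm_t / mDataQ / cmd_sem.
struct CallbackArg { int msg_type; const char *payload; };

class CbNotifierSketch {
public:
    // Producer side: what a data/notify entry point does.
    void enqueue(CallbackArg arg) {
        {
            std::lock_guard<std::mutex> lock(mLock);
            mQueue.push_back(arg);           // corresponds to mDataQ.enqueue()
        }
        mCond.notify_one();                  // corresponds to cam_sem_post(&cmd_sem)
    }
    // Consumer side: the CAMERA_CMD_TYPE_DO_NEXT_JOB branch above.
    void drainOne() {
        std::unique_lock<std::mutex> lock(mLock);
        mCond.wait(lock, [this] { return !mQueue.empty(); }); // cam_sem_wait()
        CallbackArg cb = mQueue.front();
        mQueue.pop_front();                  // mDataQ.dequeue()
        lock.unlock();
        std::printf("dispatch msg_type=%d payload=%s\n", cb.msg_type, cb.payload);
    }
private:
    std::mutex mLock;
    std::condition_variable mCond;
    std::deque<CallbackArg> mQueue;
};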

     2) CameraService-side functions that handle the HAL messages:
    vim frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp
    Setting the callbacks:

status_t CameraClient::initialize(CameraModule *module) {
    int callingPid = getCallingPid();
    status_t res;
    // Verify ops permissions
    res = startCameraOps();
    if (res != OK) {
        return res;
    }
    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);
    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(module);
    ……
    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)(uintptr_t)mCameraId);
    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);
    return OK;
}

         The callback functions:

void CameraClient::notifyCallback(int32_t msgType, int32_t ext1,
        int32_t ext2, void* user) {
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get());
    if (client.get() == nullptr) return;
    if (!client->lockIfMessageWanted(msgType)) return;
    switch (msgType) {
        case CAMERA_MSG_SHUTTER:
            // ext1 is the dimension of the yuv picture.
            client->handleShutter();
            break;
        default:
            client->handleGenericNotify(msgType, ext1, ext2);
            break;
    }
}

void CameraClient::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get());
    if (client.get() == nullptr) return;
    if (!client->lockIfMessageWanted(msgType)) return;
    ……
    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        /* MM-MC-SomcAddForSoMCAP-00+{ */
        case CAMERA_MSG_OBJECT_TRACKING:
            client->handleObjectTracking(dataPtr);
            break;
        /* MM-MC-SomcAddForSoMCAP-00+} */
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}

// handleObjectTracking
void CameraClient::handleObjectTracking(const sp<IMemory>& mem) {
    LOG2("%s:", __FUNCTION__);
    sp<ICameraClient> c = mRemoteCallback;
    mLock.unlock();
    clock_t nowMilliSec = 1000 * clock() / CLOCKS_PER_SEC;
    ……
        // reset isStartObjectTracking flag
        mLowPassFilterObjectTracking->isStartObjectTracking = false;
        // return callback
        if (c != NULL) { // invoke the client-side callback
            c->dataCallback(CAMERA_MSG_OBJECT_TRACKING, mem, NULL);
        }
        return;
    }
   ……
}

    3) Handling on the client side:

vim frameworks/av/camera/Camera.cpp
void Camera::notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2)
{
    return CameraBaseT::notifyCallback(msgType, ext1, ext2);
}
// callback from cameraservice when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}
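On the native side, a consumer of these events is simply a CameraListener subclass handed to Camera::setListener(). A rough sketch of such a listener follows; the method signatures are taken from the CameraListener interface in this tree's Camera.h and should be treated as illustrative:

#include <camera/Camera.h>   // CameraListener, sp<>, IMemory (AOSP headers assumed)
#include <utils/Log.h>

// Hypothetical listener: receives what Camera::notifyCallback/dataCallback forward.
class MyListener : public CameraListener {
public:
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2) {
        ALOGD("notify msgType=%d ext1=%d ext2=%d", msgType, ext1, ext2);
    }
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata) {
        ALOGD("data msgType=%d size=%zu", msgType, dataPtr->size());
    }
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType,
                                   const sp<IMemory>& dataPtr) {
        ALOGD("frame @%lld msgType=%d", (long long)timestamp, msgType);
    }
};

// Usage sketch: sp<Camera> camera = ...; camera->setListener(new MyListener());
// The JNICameraContext shown later in section 4) is exactly such a listener.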

    4) JNI: android_hardware_Camera.cpp

            vim frameworks/base/core/jni/android_hardware_Camera.cpp

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    ……
    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;
    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;
        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;
        /* MM-MC-SomcAddForSoMCAP-00+{ */
        case CAMERA_MSG_OBJECT_TRACKING:
        {
            ……
            ssize_t offset;
            size_t size;
            sp<IMemoryHeap> heap;
            heap = dataPtr->getMemory(&offset, &size);
            ALOGV("object tracking callback: mem off=%d, size=%d", (int) offset, (int) size);
            camera_ex_msg_object_tracking_t *cb = (camera_ex_msg_object_tracking_t *) heap->base();
            jobject object_tracking_result;

            if (cb != NULL) {
                object_tracking_result = convertObjectTrackingResult(env, cb);
            } else {
                ALOGE("object tracking callback: heap is null");
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, CAMERA_MSG_OBJECT_TRACKING, 0, 0, NULL);
                return;
            } // hand the data up to Java through JNI
            env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                    mCameraJObjectWeak, CAMERA_MSG_OBJECT_TRACKING, 0, 0, object_tracking_result);
        }
        break;
        /* MM-MC-SomcAddForSoMCAP-00+} */
        // There is no data.
        case 0:
            break;
        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }
    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

     5) Java: Camera.java
      vim frameworks/base/core/java/android/hardware/Camera.java
// receives the data posted via sendMessage and forwards it to the extension's CameraExtension.java

private class EventHandler extends Handler
{
    private final Camera mCamera;
    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
        case CAMERA_MSG_SHUTTER:
            if (mShutterCallback != null) {
                mShutterCallback.onShutter();
            }
            return;
        case CAMERA_MSG_RAW_IMAGE:
            if (mRawImageCallback != null) {
                mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;
        ……
        case CAMERA_MSG_OBJECT_TRACKING:
            if (mObjectTrackingFWCallback != null) {
                Log.e(TAG, "jay test call back");
                mObjectTrackingFWCallback.onObjectTrackingFWCallback((ObjectTrackingResult)msg.obj, mCamera);
            }
            return;
        /* MM-MC-SomcAddForSoMCAP-00+} */
        default:
            Log.e(TAG, "Unknown message type " + msg.what);
            return;
        }
    }
} // receives data from the JNI layer and posts it with sendMessage
private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;
    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}

6) Extension framework callback
vim vendor/semc/frameworks/base/libs/camera-extension/api/src/com/sonyericsson/cameraextension/CameraExtension.java

public interface ObjectTrackingCallback {
    void onObjectTracked(ObjectTrackingResult objectTrackingResult);
}
public final void setObjectTrackingCallback(
        final ObjectTrackingCallback cb,
        int lowPassFilterStrength,
        int minimumIntervalMilliSec) {
    if (mIsReleased) {
        return;
    }
    mObjectTrackingCallback = cb;
    if (Integer.MAX_VALUE < minimumIntervalMilliSec) {
        minimumIntervalMilliSec = Integer.MAX_VALUE; // clamp to the maximum
    }
    /* ++ Somc-integrate-CameraExtension-01 */
    //setObjectTrackingLowPassFilterPrameters(lowPassFilterStrength, minimumIntervalMilliSec);
    if (mCamera != null) {
        if (mObjectTrackingFWCallback == null) {
            mObjectTrackingFWCallback = new OTCallback();
        }
        // pass mObjectTrackingFWCallback down to the layer below
        mCamera.setObjectTrackingLowPassFilterPrameters(mObjectTrackingFWCallback,
                lowPassFilterStrength, minimumIntervalMilliSec);
    }
    /* -- Somc-integrate-CameraExtension-01 */
}
/* ++ Somc-integrate-CameraExtension-01 */
class OTCallback implements Camera.ObjectTrackingFWCallback {
    public void onObjectTrackingFWCallback(Camera.ObjectTrackingResult objectTrackingResult,
            Camera camera) {
        if (mObjectTrackingCallback != null && objectTrackingResult != null) {
        ……
            if (mObjectTrackingResult == null)
                mObjectTrackingResult = new ObjectTrackingResult();
            mObjectTrackingResult.mRectOfTrackedObject = new android.graphics.Rect(
                    objectTrackingResult.mRectOfTrackedObject.left, objectTrackingResult.mRectOfTrackedObject.top,
                    objectTrackingResult.mRectOfTrackedObject.right, objectTrackingResult.mRectOfTrackedObject.bottom);
            mObjectTrackingResult.mIsLost = objectTrackingResult.mIsLost;
            mObjectTrackingCallback.onObjectTracked(mObjectTrackingResult); // deliver to the upper-layer UI
        }
    }
}

     

     

5. Design idea
5.1 The callback design mechanism
   1) The application layer defines the callback:
    Interface definition:
    vim vendor/semc/frameworks/base/libs/camera-extension/api/src/com/sonyericsson/cameraextension/CameraExtension.java
public interface ObjectTrackingCallback {
    void onObjectTracked(ObjectTrackingResult objectTrackingResult);
}
     Define the callback, the function that actually processes the data coming up from the lower layers:
vim vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/controller/ObjectTracking.java

private class ObjectTrackingCallback implements CameraExtension.ObjectTrackingCallback {
    @Override
    public void onObjectTracked(ObjectTrackingResult result) {
       ……
        if (mShouldWaitForLost) {
            if (!result.mIsLost) {
                // Ignore detect object event for wait next lost event.
                if (CameraLogger.DEBUG) CameraLogger.d(TAG, "onObjectTracked: ignore detect.");
                return;
            } else {
                // Restart object tracking after lost event.
                if (CameraLogger.DEBUG) CameraLogger.d(TAG, "onObjectTracked: restart.");
                mController.mCameraDevice.stopObjectTrackingCallback();
                startTracking(mPosition);
                mShouldWaitForLost = false;
            }
        }
        // Ignore continuous lost event.
        if (mIsAlreadyLost && result.mIsLost) {
            if (CameraLogger.DEBUG) CameraLogger.d(TAG, "onObjectTracked: ignore lost");
            return;
        }
        mIsAlreadyLost = result.mIsLost;
        if (result.mIsLost) {
            mHandler.startTimeoutCount();
        } else {
            mHandler.stopTimeoutCount();
            Executor.postEvent(ControllerEvent.EV_OBJECT_TRACKING, 0,
                    result.mRectOfTrackedObject);
        }
    }
}

    Register the callback through the Java interface:
private void startTracking(Rect position) {
    if (CameraLogger.DEBUG) CameraLogger.d(TAG, "startTracking: " + position);
    mCallback = new ObjectTrackingCallback(); // the callback
    mController.mCameraWindow.startObjectTrackingAnimation(position);
    mController.mCameraDevice.startObjectTracking(
            PositionConverter.getInstance().convertFaceToDevice(position),
            mCallback); // hand the callback down
}
vim vendor/semc/packages/apps/camera-addons/CameraUI/src/com/sonyericsson/android/camera/device/CameraDevice.java
public void startObjectTracking(Rect position, ObjectTrackingCallback cb) {
    …… // interface function provided by the frameworks layer
    mCameraExtension.setObjectTrackingCallback(cb, // cb is the callback passed down from above
            CameraDeviceUtil.OBJECT_TRACKING_LOW_PASS_FILTER_STRENGTH,
            CameraDeviceUtil.OBJECT_TRACKING_MINIMAL_INTERVAL_MS);
    mCameraExtension.startObjectTracking();
    mCameraExtension.selectObject(position.centerX(), position.centerY());
    mIsObjectTrackingRunning = true;
    ……
    new EachCameraStatusPublisher(mCameraActivity, mCameraId)
            .put(new ObjectTracking(ObjectTracking.Value.ON))
            .publish();
}

      Implementation of the Java interface used to register the callback:
vim vendor/semc/frameworks/base/libs/camera-extension/api/src/com/sonyericsson/cameraextension/CameraExtension.java
public final void setObjectTrackingCallback(
        final ObjectTrackingCallback cb,
        int lowPassFilterStrength,
        int minimumIntervalMilliSec) {
    if (mIsReleased) {
        return;
    }
    mObjectTrackingCallback = cb; // store the callback handed down from the application layer
    if (Integer.MAX_VALUE < minimumIntervalMilliSec) {
        minimumIntervalMilliSec = Integer.MAX_VALUE; // clamp to the maximum
    }
    /* ++ Somc-integrate-CameraExtension-01 */
    //setObjectTrackingLowPassFilterPrameters(lowPassFilterStrength, minimumIntervalMilliSec);
    if (mCamera != null) {
        if (mObjectTrackingFWCallback == null) {
            mObjectTrackingFWCallback = new OTCallback(); // a second callback
        } // register callback #2 through another registration call (a double-callback chain)
        mCamera.setObjectTrackingLowPassFilterPrameters(mObjectTrackingFWCallback,
                lowPassFilterStrength, minimumIntervalMilliSec);
    }
    /* -- Somc-integrate-CameraExtension-01 */
}

    2) The interface layer defines its callback:
              Interface definition:
vim frameworks/base/core/java/android/hardware/Camera.java
public interface ObjectTrackingFWCallback {
    void onObjectTrackingFWCallback(ObjectTrackingResult objectTrackingResult, Camera camera);
};

     Define the callback, the function that actually processes the data from below:
vim vendor/semc/frameworks/base/libs/camera-extension/api/src/com/sonyericsson/cameraextension/CameraExtension.java
class OTCallback implements Camera.ObjectTrackingFWCallback {
    public void onObjectTrackingFWCallback(Camera.ObjectTrackingResult objectTrackingResult,
            Camera camera) {
        if (mObjectTrackingCallback != null && objectTrackingResult != null) {
        ……
            if (mObjectTrackingResult == null)
                mObjectTrackingResult = new ObjectTrackingResult();
            mObjectTrackingResult.mRectOfTrackedObject = new android.graphics.Rect(
                    objectTrackingResult.mRectOfTrackedObject.left, objectTrackingResult.mRectOfTrackedObject.top,
                    objectTrackingResult.mRectOfTrackedObject.right, objectTrackingResult.mRectOfTrackedObject.bottom);
            mObjectTrackingResult.mIsLost = objectTrackingResult.mIsLost;
            mObjectTrackingCallback.onObjectTracked(mObjectTrackingResult);
        } // as shown above, callback #2 forwards the data it receives to callback #1, which delivers it to the application
    }
}

      Implementation of the interface used to register the callback:
  vim frameworks/base/core/java/android/hardware/Camera.java
public void setObjectTrackingLowPassFilterPrameters(ObjectTrackingFWCallback cb, int lowPassFilterStrength, int minimumIntervalMilliSec) {
    mObjectTrackingFWCallback = cb;
    _setObjectTrackingLowPassFilterPrameters(lowPassFilterStrength, minimumIntervalMilliSec);
}
  From here, the handler mechanism delivers the data received on the client side to this callback (mObjectTrackingFWCallback).

5.2 The EventHandler design mechanism
  1) EventHandler initialization:
vim frameworks/base/core/java/android/hardware/Camera.java

private int cameraInitVersion(int cameraId, int halVersion) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;
    /* ### QC ADD-ONS: START */
    mCameraDataCallback = null;
    mCameraMetaDataCallback = null;
    /* ### QC ADD-ONS: END */
    Looper looper;
    if ((looper = Looper.myLooper()) != null) { // get the current thread's looper
        mEventHandler = new EventHandler(this, looper); // mEventHandler posts to and drains this looper
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }
    return native_setup(new WeakReference<Camera>(this), cameraId, halVersion,
            ActivityThread.currentOpPackageName());
}

  2) Posting data into the thread's message queue
private static void postEventFromNative(Object camera_ref, // called from JNI (C++ invoking a Java method)
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;
    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj); // build the message
        c.mEventHandler.sendMessage(m); // post it to the queue
    }
}

3) Receiving data from the queue
private class EventHandler extends Handler
{
    private final Camera mCamera;
    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
        ……
        case CAMERA_MSG_OBJECT_TRACKING:
            if (mObjectTrackingFWCallback != null) {
                Log.e(TAG, "jay test call back");  // hand the data to the callback
                mObjectTrackingFWCallback.onObjectTrackingFWCallback((ObjectTrackingResult)msg.obj, mCamera);
            }
            return;
         ……
        }
    }
}

5.3 JNI calling Java methods
vim frameworks/base/core/jni/android_hardware_Camera.cpp
fields.post_event = GetStaticMethodIDOrDie(env, clazz, "postEventFromNative", // name of the Java-layer method
                                           "(Ljava/lang/Object;IIILjava/lang/Object;)V");
postData() below is invoked on the client side; it calls up into the Java layer, handing the data to the Java interface:

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }
    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;
    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;
        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;
        /* MM-MC-SomcAddForSoMCAP-00+{ */
        case CAMERA_MSG_OBJECT_TRACKING:
        {
            ALOGV("object tracking callback");
            if (dataPtr == NULL) {
                ALOGE("%s: mem is null", __FUNCTION__);
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
                return;
            }
            ssize_t offset;
            size_t size;
            sp<IMemoryHeap> heap;
            heap = dataPtr->getMemory(&offset, &size);
            ALOGV("object tracking callback: mem off=%d, size=%d", (int) offset, (int) size);
            camera_ex_msg_object_tracking_t *cb = (camera_ex_msg_object_tracking_t *) heap->base();
            jobject object_tracking_result;

            if (cb != NULL) {
                object_tracking_result = convertObjectTrackingResult(env, cb);
            } else {
                ALOGE("object tracking callback: heap is null");
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, CAMERA_MSG_OBJECT_TRACKING, 0, 0, NULL);
                return;
            }
            env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                    mCameraJObjectWeak, CAMERA_MSG_OBJECT_TRACKING, 0, 0, object_tracking_result);  // use the VM interface to invoke the Java method
        }
        break;
        /* MM-MC-SomcAddForSoMCAP-00+} */
        // There is no data.
        case 0:
            break;
        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }
    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

   The following briefly analyzes convertObjectTrackingResult to show how JNI drives Java classes and methods.
    The Java-layer classes and methods that convertObjectTrackingResult needs:
       vim frameworks/base/core/java/android/hardware/Camera.java

public static class ObjectTrackingResult {
    public Rect mRectOfTrackedObject;
    public boolean mIsLost;
}
    vim frameworks/base/graphics/java/android/graphics/Rect.java
public final class Rect implements Parcelable {
    public int left;
    public int top;
    public int right;
    public int bottom;
    ……
  public void set(int left, int top, int right, int bottom) {
      this.left = left;
      this.top = top;
      this.right = right;
      this.bottom = bottom;
  }
   ……
}

   JNI obtains the Java jclass IDs (classes), jmethodIDs (methods) and jfieldIDs (fields):
   vim frameworks/base/core/jni/android_hardware_Camera.cpp

bool JNICameraContext::setUpObjectTracking(JNIEnv* env)
{
    Mutex::Autolock _l(mLock);
    objecttracking_callback_cookie *c = &objectTrackingCookie;
    ……
    // Get jclass ID.
    jclass class_results = env->FindClass( // look up the Java ObjectTrackingResult class
           "android/hardware/Camera$ObjectTrackingResult");
    jclass class_rect = env->FindClass( // look up the Java Rect class
           "android/graphics/Rect");
    c->results_clazz = (jclass) env->NewGlobalRef(class_results);
    c->rect_clazz = (jclass) env->NewGlobalRef(class_rect);
    // Get jmethod ID.
    c->rect_set_mid = env->GetMethodID(c->rect_clazz, "set", "(IIII)V"); // the set() method
    // Get jfield ID: the mRectOfTrackedObject field
    c->mRectOfTrackedObject_fid = env->GetFieldID(c->results_clazz, "mRectOfTrackedObject",
           "Landroid/graphics/Rect;"); // and the mIsLost field
    c->mIsLost_fid = env->GetFieldID(c->results_clazz, "mIsLost", "Z");
    env->DeleteLocalRef(class_results);
    env->DeleteLocalRef(class_rect);
    return true;
}

   Using those IDs, convertObjectTrackingResult builds the Java object:
jobject JNICameraContext::convertObjectTrackingResult(JNIEnv *env, camera_ex_msg_object_tracking_t* cb)
{
    ……
    objecttracking_callback_cookie *c = &objectTrackingCookie;
    if (NULL == c->results_clazz) {
        ALOGD("%s: c->results_clazz is NULL;", __FUNCTION__);
        return NULL;
    } // allocate an ObjectTrackingResult object
    jobject callbackObject = env->AllocObject(c->results_clazz); // create ObjectTrackingResult instance
    if (NULL == callbackObject) {
        ALOGW("%s: object is NULL;", __FUNCTION__);
        return NULL;
    }
    // Create android.graphics.Rect object.
    jobject rect_obj = env->AllocObject(c->rect_clazz);
    if (NULL == rect_obj) {
        ALOGW("%s Error rect_obj = %p", __FUNCTION__, rect_obj);
        return NULL;
    }
    // Set rect data to android.graphics.Rect object.
    env->CallVoidMethod(rect_obj, c->rect_set_mid, // call Rect.set()
           cb->rect[0], cb->rect[1], cb->rect[2], cb->rect[3]);
    // Set android.graphics.Rect object to ObjectTrackingResult.Rect.
    env->SetObjectField(callbackObject, c->mRectOfTrackedObject_fid, rect_obj);
    env->DeleteLocalRef(rect_obj); // the rect now lives in the Java-side mRectOfTrackedObject field
    // Set isLost boolean to ObjectTrackingResult.boolean.
    env->SetBooleanField(callbackObject, c->mIsLost_fid, cb->isLost); // isLost goes to the Java-side mIsLost field
    if (mObjectObjectTrackingResult != NULL) {
        env->DeleteGlobalRef(mObjectObjectTrackingResult);
        mObjectObjectTrackingResult = NULL;
    }
    mObjectObjectTrackingResult = env->NewGlobalRef(callbackObject); // keep a global ref to return
    env->DeleteLocalRef(callbackObject);
    return mObjectObjectTrackingResult;
}
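Stripped of the camera specifics, the recipe convertObjectTrackingResult follows is: look up class/method/field IDs once, then AllocObject / CallVoidMethod / SetXxxField per event. A generic, self-contained sketch of the same recipe follows; the com/example/Result class and its members are hypothetical, only the JNI calls are real:

#include <jni.h>

// Build a hypothetical Java com.example.Result { void set(int); boolean ok; }
// object from native code, using the same ID-lookup / alloc / set-field steps.
jobject buildResult(JNIEnv *env, int value, bool ok) {
    jclass clazz = env->FindClass("com/example/Result");
    if (clazz == NULL) return NULL;                  // a ClassNotFoundException is pending

    jmethodID setMid = env->GetMethodID(clazz, "set", "(I)V");
    jfieldID okFid   = env->GetFieldID(clazz, "ok", "Z");
    if (setMid == NULL || okFid == NULL) return NULL;

    jobject obj = env->AllocObject(clazz);           // allocate without running a constructor
    if (obj == NULL) return NULL;

    env->CallVoidMethod(obj, setMid, (jint)value);   // obj.set(value)
    env->SetBooleanField(obj, okFid, (jboolean)ok);  // obj.ok = ok
    env->DeleteLocalRef(clazz);
    return obj;                                      // local ref; promote with NewGlobalRef if it must outlive the call
}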

5.4 Data handling on the client side
    This function is invoked by the service:
    vim frameworks/av/camera/Camera.cpp

void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener; // the listener is registered from JNI, analyzed below
    }
    if (listener != NULL) { // call the JNI-side postData() to pass the data up
        listener->postData(msgType, dataPtr, metadata);
    }
}

    JNI calls the client interface to set the listener.
        The listener-setting interface in the client:
void Camera::setListener(const sp<CameraListener>& listener) // called from JNI
{
    Mutex::Autolock _l(mLock);
    mListener = listener;
}

        Calling that interface from JNI to set the listener:
    vim frameworks/base/core/jni/android_hardware_Camera.cpp
static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
{
    ……
    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong((void*)android_hardware_Camera_native_setup);
    camera->setListener(context);
    // save context in opaque field
    env->SetLongField(thiz, fields.context, (jlong)context.get());
    return NO_ERROR;
}

5.5 Data handling on the service side
    vim frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

void CameraClient::handleObjectTracking(const sp<IMemory>& mem) {
    ……
        if (c != NULL) { // forward the data through the client's dataCallback
            c->dataCallback(CAMERA_MSG_OBJECT_TRACKING, mem, NULL);
            LOG2("dataCallback left.top.right.bottom : %4d.%4d.%4d.%4d",
                    orgCb->rect[0], orgCb->rect[1],
                    orgCb->rect[2], orgCb->rect[3]);
        }
    }
}

    The callback mechanism appears once more.
       Declaring the callback:

void CameraClient::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    LOG2("dataCallback(%d)", msgType);
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get());
    if (client.get() == nullptr) return;
    if (!client->lockIfMessageWanted(msgType)) return;
    if (dataPtr == 0 && metadata == NULL) {
        ALOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }
    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        ……
        case CAMERA_MSG_OBJECT_TRACKING:
            client->handleObjectTracking(dataPtr);
            break;
        /* MM-MC-SomcAddForSoMCAP-00+} */
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}

    Registering the callbacks through the HAL interface:
status_t CameraClient::initialize(CameraModule *module) {
    int callingPid = getCallingPid();
    status_t res;
    ……
    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);
    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(module);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
               __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return res;
    }
    mHardware->setCallbacks(notifyCallback,
           dataCallback,
           dataCallbackTimestamp,
           (void *)(uintptr_t)mCameraId);
    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);
    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}

   The interface that registers the callbacks:
   vim frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

void setCallbacks(notify_callback notify_cb,
                  data_callback data_cb,
                  data_callback_timestamp data_cb_timestamp,
                  void* user)
{
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mCbUser = user;
    ALOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->set_callbacks) { // hand the callbacks to the hardware layer; the pattern there is analogous and not detailed here
        mDevice->ops->set_callbacks(mDevice,
                              __notify_cb,
                              __data_cb, // this trampoline eventually invokes mDataCb with the data
                              __data_cb_timestamp,
                              __get_memory,
                              this);
    }
}
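The __data_cb handed to set_callbacks above is the usual C-to-C++ trampoline: the HAL only understands plain function pointers plus a void* cookie, so a static function recovers the C++ object from the cookie and forwards to it. A condensed, self-contained sketch of that pattern follows (all types here are simplified stand-ins; the real trampoline in CameraHardwareInterface.h also translates the HAL's memory handles):

#include <cstdint>
#include <cstdio>

// What a C HAL understands: a function pointer plus an opaque cookie.
typedef void (*hal_data_callback)(int32_t msg_type, const void *data, void *user);

struct FakeHalDevice {                       // hypothetical stand-in for camera_device_t
    hal_data_callback data_cb = nullptr;
    void *user = nullptr;
    void set_callbacks(hal_data_callback cb, void *cookie) { data_cb = cb; user = cookie; }
    void fake_frame() { if (data_cb) data_cb(0x010 /* preview-frame-like id */, "frame", user); }
};

class HardwareInterfaceSketch {
public:
    explicit HardwareInterfaceSketch(FakeHalDevice *dev) {
        dev->set_callbacks(&HardwareInterfaceSketch::__data_cb, this); // 'this' is the cookie
    }
private:
    // Static trampoline: recover the object from the cookie, forward to a member.
    static void __data_cb(int32_t msg_type, const void *data, void *user) {
        static_cast<HardwareInterfaceSketch *>(user)->onData(msg_type, data);
    }
    void onData(int32_t msg_type, const void *data) {
        std::printf("msg=%d data=%s\n", msg_type, static_cast<const char *>(data));
    }
};

CameraClient::initialize() uses the same cookie idea one level up, except the cookie is the camera id and getClientFromCookie() maps it back to the CameraClient.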

     

  • The Qualcomm Camera framework: a brief look at the data flow, part 01


    Focus of this article: the call relationships among stagefrightRecorder.cpp, OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp.

    ===============================================================================

     When I first read this code, some things were still unclear, in particular how encoding relates to the file writing. I only knew that frames coming up from the lower layers pass through CameraSource.cpp, that encoding is done in OMXCodec.cpp, that MPEG4Writer.cpp runs writer and track threads, and that stagefrightRecorder.cpp wires OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp together. What I kept puzzling over was: if MPEG4Writer.cpp reads its data straight from CameraSource.cpp, where does encoding fit in?
     Part of it was my own limited background; my code-reading skills also needed work.
     This pass through the source finally cleared those doubts up.
    OMXCodec.cpp's read() pulls directly from CameraSource.cpp, while the track thread's mSource->read() in MPEG4Writer.cpp pulls from OMXCodec.cpp. In other words, frames coming up through CameraSource.cpp are encoded first, and only then written to the file.
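   So the recording path is a pull-model pipeline: every stage implements the same read() interface and pulls from the stage upstream of it. A toy, self-contained sketch of that structure (simplified stand-ins, not the real stagefright classes):

#include <memory>
#include <string>
#include <cstdio>

// Toy pull-model chain: writer loop -> EncoderLike::read() -> CameraSourceLike::read().
struct MediaSourceLike {
    virtual ~MediaSourceLike() = default;
    virtual bool read(std::string *buffer) = 0;   // stands in for MediaSource::read()
};

struct CameraSourceLike : MediaSourceLike {       // CameraSource role: produces raw frames
    int n = 0;
    bool read(std::string *buffer) override {
        if (n >= 3) return false;                 // pretend EOS after 3 frames
        *buffer = "yuv#" + std::to_string(n++);
        return true;
    }
};

struct EncoderLike : MediaSourceLike {            // OMXCodec role: pulls, then encodes
    MediaSourceLike *source;                      // == the CameraSource, set at Create()
    explicit EncoderLike(MediaSourceLike *s) : source(s) {}
    bool read(std::string *buffer) override {
        std::string raw;
        if (!source->read(&raw)) return false;    // pull a raw frame from upstream
        *buffer = "h264(" + raw + ")";            // "encode" it
        return true;
    }
};

int main() {
    CameraSourceLike cam;
    EncoderLike enc(&cam);                        // the writer's mSource is the encoder
    std::string buf;
    while (enc.read(&buf))                        // MPEG4Writer track-thread role
        std::printf("write %s\n", buf.c_str());
}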

   >>>>>>>> Start directly from stagefrightRecorder.cpp's start(), which calls startMPEG4Recording():

    stagefrightRecorder.cpp

    status_t StagefrightRecorder::start() {
          ......
        switch (mOutputFormat) {
            case OUTPUT_FORMAT_DEFAULT:
            case OUTPUT_FORMAT_THREE_GPP:
            case OUTPUT_FORMAT_MPEG_4:
                status = startMPEG4Recording();
               
         ......
    }

      >>>>>>>> In startMPEG4Recording(), the important calls are the ones highlighted below:

    status_t StagefrightRecorder::startMPEG4Recording() {
       ......

          status_t err = setupMPEG4Recording(
                mOutputFd, mVideoWidth, mVideoHeight,
                mVideoBitRate, &totalBitRate, &mWriter);

        sp<MetaData> meta = new MetaData;


        setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);


        err = mWriter->start(meta.get());
      ......
    }

     

     >>>>>>>> In setupMPEG4Recording() we see sp<MediaWriter> writer = new MPEG4Writer(outputFd); which initializes the writer, so we now know the writer is an MPEG4Writer; that matters later. The method calls setupMediaSource() to initialize the source, which is a CameraSource, and then setupVideoEncoder() to initialize the coder, which is an OMXCodec. Also note writer->addSource(encoder); the encoded stream is handed to the writer here, which is what links MPEG4Writer.cpp and OMXCodec.cpp.

     

    status_t StagefrightRecorder::setupMPEG4Recording(
          ......
            sp<MediaWriter> *mediaWriter) {
        mediaWriter->clear();

      
        sp<MediaWriter> writer = new MPEG4Writer(outputFd);


        if (mVideoSource < VIDEO_SOURCE_LIST_END) {


            sp<MediaSource> mediaSource;       
            err = setupMediaSource(&mediaSource);
            if (err != OK) {
                return err;
            }


            sp<MediaSource> encoder;
            err = setupVideoEncoder(mediaSource, videoBitRate, &encoder);
            if (err != OK) {
                return err;
            }


            writer->addSource(encoder);
            *totalBitRate += videoBitRate;
        }


        // Audio source is added at the end if it exists.
        // This help make sure that the "recoding" sound is suppressed for
        // camcorder applications in the recorded files.
        if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
            err = setupAudioEncoder(writer);
            if (err != OK) return err;
            *totalBitRate += mAudioBitRate;
        }


        if (mInterleaveDurationUs > 0) {
            reinterpret_cast<MPEG4Writer *>(writer.get())->
                setInterleaveDuration(mInterleaveDurationUs);
        }
        if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
            reinterpret_cast<MPEG4Writer *>(writer.get())->
                setGeoData(mLatitudex10000, mLongitudex10000);
        }
        if (mMaxFileDurationUs != 0) {
            writer->setMaxFileDuration(mMaxFileDurationUs);
        }
        if (mMaxFileSizeBytes != 0) {
            writer->setMaxFileSize(mMaxFileSizeBytes);
        }


        mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
        if (mStartTimeOffsetMs > 0) {
            reinterpret_cast<MPEG4Writer *>(writer.get())->
                setStartTimeOffsetMs(mStartTimeOffsetMs);
        }


        writer->setListener(mListener);
        *mediaWriter = writer;
        return OK;
    }
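     Condensed, the assembly setupMPEG4Recording() performs is: create writer, create source, wrap the source in an encoder, hand the encoder to the writer. A compressed sketch of that wiring, reusing the toy CameraSourceLike/EncoderLike classes from the pull-model sketch earlier (illustrative only; the real calls are named in the comments):

// Assumes CameraSourceLike / EncoderLike from the earlier pull-model sketch.
void assembleToyRecorder() {
    auto *cameraSource = new CameraSourceLike();    // setupMediaSource()
    auto *encoder = new EncoderLike(cameraSource);  // setupVideoEncoder() ->
                                                    //   OMXCodec::Create(..., cameraSource, ...)
    // writer side: new MPEG4Writer(outputFd); writer->addSource(encoder);
    // writer->start(meta) then spawns the track thread that loops on encoder->read().
    std::string buf;
    while (encoder->read(&buf)) { /* MPEG4Writer::Track::threadEntry() role */ }
    delete encoder;
    delete cameraSource;
}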

  >>>>>>>> setupMediaSource() completes the CameraSource initialization:

     

    status_t StagefrightRecorder::setupMediaSource(
                          sp<MediaSource> *mediaSource) {
        if (mVideoSource == VIDEO_SOURCE_DEFAULT
                || mVideoSource == VIDEO_SOURCE_CAMERA) {
            sp<CameraSource> cameraSource;
            status_t err = setupCameraSource(&cameraSource);

            if (err != OK) {
                return err;
            }
            *mediaSource = cameraSource;
        } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
            // If using GRAlloc buffers, setup surfacemediasource.
            // Later a handle to that will be passed
            // to the client side when queried
            status_t err = setupSurfaceMediaSource();
            if (err != OK) {
                return err;
            }
            *mediaSource = mSurfaceMediaSource;
        } else {
            return INVALID_OPERATION;
        }
        return OK;
    }

      >>>>>>> setupVideoEncoder() completes the OMXCodec initialization. Note OMXCodec::Create(..., cameraSource, ...): the source passed into Create() is the CameraSource, so when OMXCodec.cpp later calls mSource->read(), it is calling straight into CameraSource.cpp's read().

     

    status_t StagefrightRecorder::setupVideoEncoder(
            ......
        sp<MediaSource> encoder = OMXCodec::Create(
                client.interface(), enc_meta,
                true /* createEncoder */, cameraSource,
                NULL, encoder_flags);


        if (encoder == NULL) {
            ALOGW("Failed to create the encoder");
            // When the encoder fails to be created, we need
            // release the camera source due to the camera's lock
            // and unlock mechanism.
            cameraSource->stop();
            return UNKNOWN_ERROR;
        }


        mVideoSourceNode = cameraSource;
        mVideoEncoderOMX = encoder;


        *source = encoder;


        return OK;
    }

     

    -----------------------------

    >>>>> As noted above, stagefrightRecorder.cpp calls MPEG4Writer.cpp's addSource() [writer->addSource(encoder);], and the argument passed in is the encoded stream. That ties MPEG4Writer.cpp to OMXCodec.cpp: what MPEG4Writer.cpp reads and writes is OMXCodec.cpp's encoded output.

    MPEG4Writer.cpp 

     

    >>>>>> In MPEG4Writer.cpp's addSource(), look at Track *track = new Track(this, source, 1 + mTracks.size()); the source passed into new Track(..., source, ...) is, from the analysis above, the encoded data.

     

    status_t MPEG4Writer::addSource(const sp<MediaSource> &source) {
        Mutex::Autolock l(mLock);
        if (mStarted) {
            ALOGE("Attempt to add source AFTER recording is started");
            return UNKNOWN_ERROR;
        }


        // At most 2 tracks can be supported.
        if (mTracks.size() >= 2) {
            ALOGE("Too many tracks (%d) to add", mTracks.size());
            return ERROR_UNSUPPORTED;
        }


        CHECK(source.get() != NULL);


        // A track of type other than video or audio is not supported.
        const char *mime;
        sp<MetaData> meta = source->getFormat();
        CHECK(meta->findCString(kKeyMIMEType, &mime));
        bool isAudio = !strncasecmp(mime, "audio/", 6);
        bool isVideo = !strncasecmp(mime, "video/", 6);
        if (!isAudio && !isVideo) {
            ALOGE("Track (%s) other than video or audio is not supported",
                mime);
            return ERROR_UNSUPPORTED;
        }


        // At this point, we know the track to be added is either
        // video or audio. Thus, we only need to check whether it
        // is an audio track or not (if it is not, then it must be
        // a video track).


        // No more than one video or one audio track is supported.
        for (List<Track*>::iterator it = mTracks.begin();
             it != mTracks.end(); ++it) {
            if ((*it)->isAudio() == isAudio) {
                ALOGE("%s track already exists", isAudio? "Audio": "Video");
                return ERROR_UNSUPPORTED;
            }
        }


        // This is the first track of either audio or video.
        // Go ahead to add the track.
    Track *track = new Track(this, source, 1 + mTracks.size());  // <-- source flows into the Track below
    mTracks.push_back(track);

    mHFRRatio = ExtendedUtils::HFR::getHFRRatio(meta);

    return OK;
}

// new Track(..., source, ...) lands here, storing source in mSource:
MPEG4Writer::Track::Track(
            MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
        : mOwner(owner),
          mMeta(source->getFormat()),
          mSource(source),
          mDone(false),
          mPaused(false),
          mResumed(false),
          mStarted(false),
          mTrackId(trackId),
          mTrackDurationUs(0),
          mEstimatedTrackSizeBytes(0),
          mSamplesHaveSameSize(true),
          mStszTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
          mStcoTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
          mCo64TableEntries(new ListTableEntries<off64_t>(1000, 1)),
          mStscTableEntries(new ListTableEntries<uint32_t>(1000, 3)),
          mStssTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
          mSttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
          mCttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
          mCodecSpecificData(NULL),
          mCodecSpecificDataSize(0),
          mGotAllCodecSpecificData(false),
          mReachedEOS(false),
          mRotation(0),
          mHFRRatio(1) {
        getCodecSpecificDataFromInputFormatIfPossible();


        const char *mime;
        mMeta->findCString(kKeyMIMEType, &mime);
        mIsAvc = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_AVC);
        mIsAudio = !strncasecmp(mime, "audio/", 6);
        mIsMPEG4 = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_MPEG4) ||
                   !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC);


        setTimeScale();
    }

  >>>>>> threadEntry() is what the track thread actually executes. It loops reading data via mSource->read(&buffer). To know whose data it reads, we need to find where mSource is initialized; searching shows it is set in the Track constructor:

    MPEG4Writer::Track::Track(
            MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
        : mOwner(owner),
          ……
          mSource(source),

  That is where it is initialized. From the constructor shown above, this source is the encoded stream.

     

    status_t MPEG4Writer::Track::threadEntry() {

        while (!mDone && (err = mSource->read(&buffer)) == OK) {

           ......

        }
    }
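Fleshed out, the track thread's loop is a pull-copy-release cycle: read a MediaBuffer from the encoder, record its timestamp and size, append the payload to the file, then release the buffer back upstream. A trimmed sketch of the loop's shape, assuming the stagefright headers (MediaSource, MediaBuffer, MetaData, kKeyTime); the real threadEntry() additionally handles codec-config buffers, chunk management and EOS bookkeeping:

#include <cstdio>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MetaData.h>

// Sketch of the shape of MPEG4Writer::Track::threadEntry()'s main loop.
android::status_t trackLoopSketch(android::MediaSource *source, FILE *out, volatile bool *done) {
    using namespace android;
    status_t err;
    MediaBuffer *buffer = NULL;
    while (!*done && (err = source->read(&buffer)) == OK) {
        int64_t timeUs = 0;
        buffer->meta_data()->findInt64(kKeyTime, &timeUs);    // stts/ctts bookkeeping in the real code
        size_t length = buffer->range_length();               // stsz table entry in the real code
        // append the payload at the current chunk offset (addSample_l in the real code)
        fwrite((const uint8_t *)buffer->data() + buffer->range_offset(), 1, length, out);
        buffer->release();                                    // hand the buffer back upstream
        buffer = NULL;
        (void)timeUs;
    }
    return OK;
}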

    --------------------

     >>>>>> From the analysis above, we know stagefrightRecorder.cpp creates the OMXCodec and passes the CameraSource in at construction time; the point here is simply that the source below is the CameraSource, which is what connects CameraSource.cpp and OMXCodec.cpp.

    OMXCodec.cpp

    OMXCodec::OMXCodec(
            const sp<IOMX> &omx, IOMX::node_id node,
            uint32_t quirks, uint32_t flags,
            bool isEncoder,
            const char *mime,
            const char *componentName,
            const sp<MediaSource> &source,
            const sp<ANativeWindow> &nativeWindow)
        : mOMX(omx),
          mOMXLivesLocally(omx->livesLocally(node, getpid())),
          mNode(node),
          mQuirks(quirks),
          mFlags(flags),
          mIsEncoder(isEncoder),
          mIsVideo(!strncasecmp("video/", mime, 6)),
          mMIME(strdup(mime)),
          mComponentName(strdup(componentName)),
          mSource(source),
          mCodecSpecificDataIndex(0),
          mState(LOADED),
          mInitialBufferSubmit(true),
          mSignalledEOS(false),
          mNoMoreOutputData(false),
          mOutputPortSettingsHaveChanged(false),
          mSeekTimeUs(-1),
          mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
          mTargetTimeUs(-1),
          mOutputPortSettingsChangedPending(false),
          mSkipCutBuffer(NULL),
          mLeftOverBuffer(NULL),
          mPaused(false),
          mNativeWindow(
                  (!strncmp(componentName, "OMX.google.", 11))
                            ? NULL : nativeWindow),
          mNumBFrames(0),
          mInSmoothStreamingMode(false),
          mOutputCropChanged(false),
          mSignalledReadTryAgain(false),
          mReturnedRetry(false),
          mLastSeekTimeUs(-1),
          mLastSeekMode(ReadOptions::SEEK_CLOSEST) {
        mPortStatus[kPortIndexInput] = ENABLING;
        mPortStatus[kPortIndexOutput] = ENABLING;


        setComponentRole();
    }

    >>>>>> This read() is what MPEG4Writer.cpp's mSource->read() ends up calling; the encoding process itself is not examined in detail here.

    status_t OMXCodec::read(
            MediaBuffer **buffer, const ReadOptions *options) { 

          .......

    }

    ==============================================================================================

Feel free to follow my personal WeChat official account, where I record bits and pieces of my development work as well as daily life; I hope to trade notes with more of you~~

     

     

  • Qualcomm Camera overall framework (continued: the remaining OIS changes in QCameraParameters.cpp and the vendor layer)

// OIS default applied at camera open; the value itself is set in the vendor layer

    int32_t QCameraParameters::initDefaultParameters()

    {

           ………

+   // Set Ois
+   setOis(m_pCapability->ois_default_value);
+   ALOGE("the default_ois = %d", m_pCapability->ois_default_value);
     // Set Contrast
    set(KEY_QC_MIN_CONTRAST, m_pCapability->contrast_ctrl.min_value);
    set(KEY_QC_MAX_CONTRAST, m_pCapability->contrast_ctrl.max_value);
    ………
}

     

+int32_t QCameraParameters::setOis(const QCameraParameters& params)
+{
+    int ois = params.getInt(KEY_QC_OIS);
+    int prev_ois = getInt(KEY_QC_OIS);
+    if (params.get(KEY_QC_OIS) == NULL) {
+       CDBG_HIGH("%s: Ois not set by App", __func__);
+       return NO_ERROR;
+    }
+    ALOGE("haljay ois=%d prev_ois=%d", ois, prev_ois);
+    if (prev_ois != ois) {
+        if ((ois >= 0) && (ois <= 2)) {
+            CDBG(" new ois value : %d", ois);
+            return setOis(ois);
+        } else {
+            ALOGE("%s: invalid value %d", __func__, ois);
+            return BAD_VALUE;
+        }
+    } else {
+        ALOGE("haljay no value change");
+        CDBG("%s: No value change in ois", __func__);
+        return NO_ERROR;
+    }
+}

     

+int32_t QCameraParameters::setOis(int ois)
+{
+    char val[16];
+    sprintf(val, "%d", ois);
+    updateParamEntry(KEY_QC_OIS, val);
+    CDBG("%s: Setting ois %s", __func__, val);
+    ALOGE("haljay %s set ois=%s OIS=%d", __func__, val, CAM_INTF_PARM_OIS);
+    int32_t value = ois;
+    return AddSetParmEntryToBatch(m_pParamBuf,
+                                  CAM_INTF_PARM_OIS,
+                                  sizeof(value),
+                                  &value);
+}

     

Add setOis to the function int32_t QCameraParameters::updateParameters:

     if ((rc = setBrightness(params)))                   final_rc = rc;
     if ((rc = setZoom(params)))                         final_rc = rc;
     if ((rc = setSharpness(params)))                    final_rc = rc;
+    if ((rc = setOis(params)))                          final_rc = rc;
     if ((rc = setSaturation(params)))                   final_rc = rc;

C. Vendor-layer changes:

1. Add the definitions

1.1  File: kernel/include/media/msm_cam_sensor.h

    enum msm_actuator_cfg_type_t {
      CFG_SET_POSITION,
      CFG_ACTUATOR_POWERDOWN,
      CFG_ACTUATOR_POWERUP,
+     CFG_ACTUATOR_OIS,
    };

    struct msm_actuator_cfg_data {
      int cfgtype;
      union {
        struct msm_actuator_get_info_t get_info;
        struct msm_actuator_set_position_t setpos;
        enum af_camera_name cam_name;
+       void *setting;
      } cfg;
    };

1.2  File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/mct/pipeline/mct_pipeline.c

In the function boolean mct_pipeline_populate_query_cap_buffer(mct_pipeline_t *pipeline), add:

    hal_data->sharpness_ctrl.min_value = 0;
    hal_data->sharpness_ctrl.step = 6;

+   hal_data->ois_default_value = 1;
    hal_data->contrast_ctrl.def_value = 5;
    hal_data->contrast_ctrl.max_value = 10;

1.3  File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/module/sensor_common.h

    typedef enum {
       /* End of CSID enums */
       /* video hdr enums */
       SENSOR_SET_AWB_UPDATE, /* sensor_set_awb_data_t * */
+      ACTUATOR_SET_OIS,
    } sensor_submodule_event_type_t;

2. Add the settings

File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/module/module_sensor.c

2.1  Receive the parameter from the HAL layer

In the function static boolean module_sensor_event_control_set_parm, add:

+  case CAM_INTF_PARM_OIS: {
+    if (!event_control->parm_data) {
+      SERR("failed parm_data NULL");
+      ret = FALSE;
+      break;
+    }
+    module_sensor_params_t *ois_module_params = NULL;
+    ois_module_params = s_bundle->module_sensor_params[SUB_MODULE_ACTUATOR];
+    if (ois_module_params->func_tbl.process != NULL) {
+      rc = ois_module_params->func_tbl.process(
+        ois_module_params->sub_module_private,
+        ACTUATOR_SET_OIS, event_control->parm_data);
+    }
+    if (rc < 0) {
+      SERR("failed");
+      ret = FALSE;
+    }
+    break;
+  }

File: vendor/qcom/proprietary/mm-camera/mm-camera2/media-controller/modules/sensors/actuators/actuator.c

2.2  In the function int32_t actuator_process, add:

       case ACTUATOR_SET_POSITION:
            rc = actuator_set_position(actuator_ctrl, data);
            break;
+      /* set ois */
+      case ACTUATOR_SET_OIS:
+           rc = actuator_set_ois(actuator_ctrl, data);
+           break;

2.3  Send the parameter down to the kernel via ioctl

+static int actuator_set_ois(void *ptr, void *data) {
+  int rc = 0;
+  int32_t *ois_level = (int32_t *)data;
+  actuator_data_t *ois_actuator_ptr = (actuator_data_t *)ptr;
+  struct msm_actuator_cfg_data cfg;
+  if (ois_actuator_ptr->fd <= 0)
+    return -EINVAL;
+  cfg.cfgtype = CFG_ACTUATOR_OIS;
+  cfg.cfg.setting = ois_level;
+  /* Invoke the IOCTL to set the ois */
+  rc = ioctl(ois_actuator_ptr->fd, VIDIOC_MSM_ACTUATOR_CFG, &cfg);
+  if (rc < 0) {
+    SERR("failed-errno:%s!!!", strerror(errno));
+  }
+  return rc;
+}
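The kernel in turn has to handle the new CFG_ACTUATOR_OIS type where the actuator driver dispatches on cfgtype. The article does not show that side, so the following is only a hedged sketch: the function name, field names and the register address are assumptions modeled on the msm_actuator driver, not verified code.

    /* Sketch of a kernel-side handler for the new cfg type; it would be called
     * from the actuator driver's config dispatch. All names are assumptions. */
    static int32_t msm_actuator_set_ois(struct msm_actuator_ctrl_t *a_ctrl,
                                        struct msm_actuator_cfg_data *cdata)
    {
        int32_t ois_level;

        /* cfg.setting carries the user-space pointer set by actuator_set_ois() */
        if (copy_from_user(&ois_level, (void __user *)cdata->cfg.setting,
                           sizeof(ois_level)))
            return -EFAULT;

        /* program the OIS block through the actuator's I2C/CCI client;
         * the register address 0x847F is purely illustrative */
        return a_ctrl->i2c_client.i2c_func_tbl->i2c_write(&a_ctrl->i2c_client,
                0x847F, (uint16_t)ois_level, MSM_CAMERA_I2C_BYTE_DATA);
    }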

2.2.3 Setting parameters in HAL 3.0

Adding a setting in V3: in HAL V3, parameters travel from the framework to the HAL as metadata, i.e. every setting becomes a tag/value pair. For example, to set AE mode to auto, V1 would pass something like the string "AE mode=auto"; in V3, supposing the AE mode tag is number 10 and auto is encoded as 1, what reaches the HAL is the pair (10, 1), and the HAL looks up tag 10 to retrieve the value 1. The OIS setting done in V1 therefore needs new handling in V3.
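Through the CameraMetadata wrapper, a setting is written and read back purely by tag; a small illustrative sketch using a standard tag (values are examples):

    #include <camera/CameraMetadata.h>
    #include <system/camera_metadata_tags.h>

    // Each V3 setting is a (tag, value) entry in a metadata buffer,
    // not a "key=value" substring as in V1.
    void tagValuePair(android::CameraMetadata &request) {
        uint8_t aeAntibanding = ANDROID_CONTROL_AE_ANTIBANDING_MODE_AUTO;
        // producer side: write the pair (tag, value)
        request.update(ANDROID_CONTROL_AE_ANTIBANDING_MODE, &aeAntibanding, 1);

        // consumer side (e.g. the HAL): look the value up again by tag
        camera_metadata_entry_t e = request.find(ANDROID_CONTROL_AE_ANTIBANDING_MODE);
        if (e.count > 0) {
            uint8_t value = e.data.u8[0];  // == AUTO
            (void)value;
        }
    }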

How to define a vendor-specific parameter (such as the OIS setting) in V3: Google anticipated that vendors would need their own parameters, so the metadata reserves a vendor tag range in which a vendor can add its own operations; the OIS setting can be implemented through such a vendor tag.

Steps:

1)  Define your own vendor tag value

vim system/media/camera/include/system/camera_metadata_tags.h

    typedef enum camera_metadata_tag {
        ANDROID_SYNC_START,
        ANDROID_SYNC_MAX_LATENCY,
        ANDROID_SYNC_END,
+       VENDOR_TAG_OIS = VENDOR_SECTION_START,  // with so few parameters no new section is defined; the default section 0x8000 is used
        ......................
    } camera_metadata_tag_t;

2)  Required supporting configuration

Vendor tags must all be added after VENDOR_SECTION_START; here VENDOR_TAG_OIS was added. For the HAL to handle vendor tags, two things are required: the camera module version must be 2.2 or above, since Google only stabilized vendor tag support from that version on, and the vendor tag operations functions must be provided.

vim ./hardware/libhardware/modules/camera/CameraHAL.cpp +186

The version check and the operations functions are shown in the figure below:

vim ./hardware/qcom/camera/QCamera2/HAL3/QCamera3VendorTags.cpp +184

get_tag_count: returns the total number of vendor tags;

get_all_tags: writes every vendor tag, in order, into the uint32_t *tag_array passed down by the service, so that the upper layer learns the numeric value of each tag;

get_section_name: returns the name of the section a vendor tag belongs to; a few vendor tags can be grouped into one section and the rest into others. The definitions in metadata.h make this easy to follow; to add your own section, append it after VENDOR_SECTION = 0x8000. Since only the OIS parameter is being added here, there is no need to categorize, and the default VENDOR_SECTION is used.

vim system/media/camera/include/system/camera_metadata_tags.h

get_tag_name: returns the name of each vendor tag; here returning "VENDOR_TAG_OIS" is sufficient;

get_tag_type: returns the data type of the values carried by the vendor tag; it may be TYPE_INT32, TYPE_FLOAT or other types depending on the need, and INT32 is enough for the OIS parameter.
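As a hedged, self-contained sketch (not the actual QCamera3VendorTags.cpp; the section name and the _EX identifiers are arbitrary), the five operations for a single OIS tag could look like this:

    #include <hardware/camera_common.h>
    #include <system/camera_metadata.h>

    enum { VENDOR_TAG_OIS_EX = VENDOR_SECTION_START, VENDOR_TAG_COUNT_EX = 1 };

    static int get_tag_count(const vendor_tag_ops_t *) {
        return VENDOR_TAG_COUNT_EX;                  // total number of vendor tags
    }
    static void get_all_tags(const vendor_tag_ops_t *, uint32_t *tag_array) {
        tag_array[0] = VENDOR_TAG_OIS_EX;            // hand every tag to the service
    }
    static const char *get_section_name(const vendor_tag_ops_t *, uint32_t tag) {
        return (tag == VENDOR_TAG_OIS_EX) ? "vendor.example" : nullptr;
    }
    static const char *get_tag_name(const vendor_tag_ops_t *, uint32_t tag) {
        return (tag == VENDOR_TAG_OIS_EX) ? "VENDOR_TAG_OIS" : nullptr;
    }
    static int get_tag_type(const vendor_tag_ops_t *, uint32_t tag) {
        return (tag == VENDOR_TAG_OIS_EX) ? TYPE_INT32 : -1;
    }

    // Wired up from the camera module's get_vendor_tag_ops() callback.
    void fill_vendor_tag_ops(vendor_tag_ops_t *ops) {
        ops->get_tag_count    = get_tag_count;
        ops->get_all_tags     = get_all_tags;
        ops->get_section_name = get_section_name;
        ops->get_tag_name     = get_tag_name;
        ops->get_tag_type     = get_tag_type;
    }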

3)  Load the vendor tags

CameraService.cpp then loads the vendor tags at startup through this code in onFirstRef:

    if (mModule->common.module_api_version >= CAMERA_MODULE_API_VERSION_2_2) {
        setUpVendorTags();
    }

4)  Convert the V1 parameter to V3

Since this OIS setting is used from a V1 app, the V1 parameter first has to be converted to its V3 form. Google implements the conversion in services/camera/libcameraservice/api1/client2/Parameters.cpp, so the OIS value passed down by the V1 app must be extracted in the function below, where paramString is the V1 parameter string:

    status_t Parameters::set(const String8& paramString)
    {
        …………
        mOis = newParams.get(CameraParameters::KEY_OIS);
        …………
    }

Because V3 parameters are delivered together with each frame request, the mOis value must be pushed to the HAL in Parameters::updateRequest(CameraMetadata *request):

+   res = request->update(VENDOR_TAG_OIS, &mOis, 1);

This sends the OIS vendor tag and its value to HAL V3.

5)  Read the OIS setting in HAL V3

Use the CameraMetadata::find(uint32_t tag) function to fetch the parameter:

    oisMapMode = frame_settings.find(VENDOR_TAG_OIS).data.i32[0];

and hand the setting down to the vendor layer through ADD_SET_PARAM_ENTRY_TO_BATCH:

    ADD_SET_PARAM_ENTRY_TO_BATCH(hal_metadata, CAM_INTF_META_OIS, oisMapMode);

     

2.3 Analysis of the HAL 3.0 framework

2.3.1 Overall frameworks-layer architecture

The CameraService portion of the frameworks layer is shown in the figure below:

V3 concentrates much more of the work in the framework, keeping more control in its own hands; the data exchanged with the HAL shrinks accordingly, part of what the HAL had to do in older versions falls away, and the design becomes more modular.

Camera2Client creation and initialization proceed as shown in the figure below:

     

As the figure shows, once Camera2Client has been created, initialize() runs and creates each processing module:

Path: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    status_t Camera2Client::initialize(CameraModule *module)
    {
        ………
        mStreamingProcessor = new StreamingProcessor(this); // preview and recording
        threadName = String8::format("C2-%d-StreamProc", mCameraId);
        mStreamingProcessor->run(threadName.string()); // preview and recording

        mFrameProcessor = new FrameProcessor(mDevice, this); // 3A
        threadName = String8::format("C2-%d-FrameProc", mCameraId);
        mFrameProcessor->run(threadName.string()); // 3A

        mCaptureSequencer = new CaptureSequencer(this);
        threadName = String8::format("C2-%d-CaptureSeq", mCameraId);
        mCaptureSequencer->run(threadName.string()); // recording and still capture

        mJpegProcessor = new JpegProcessor(this, mCaptureSequencer);
        threadName = String8::format("C2-%d-JpegProc", mCameraId);
        mJpegProcessor->run(threadName.string());
        ………
        mCallbackProcessor = new CallbackProcessor(this); // callback handling
        threadName = String8::format("C2-%d-CallbkProc", mCameraId);
        mCallbackProcessor->run(threadName.string());
        ………
    }

In turn these are:

1. StreamingProcessor, started with its own thread: handles the preview and record video streams, pulling raw video data from the HAL.

2. FrameProcessor, with a thread: dedicated to the per-frame information called back with every frame, i.e. the data that accompanies each frame beyond the raw image, such as the 3A values.

3. CaptureSequencer, with a thread: works together with the other modules and is mainly responsible for notifying the app of captured pictures.

4. JpegProcessor, with a thread: similar to StreamingProcessor, it starts a capture stream and obtains JPEG-encoded picture data from the HAL.

5. In addition, the ZslProcessor module implements zero-shutter-lag capture: it takes the most recent frames buffered in the raw preview stream, encodes them and returns them to the app directly, without a takePicture request being needed to fetch JPEG data. ZSL is made possible by the performance of today's CSI2 MIPI interfaces and by sensors that can output full resolution at high frame rates in real time. Ordinarily a phone shows some latency after the shutter is pressed, because the camera, ISP and related blocks have to switch working modes, reconfigure parameters, refocus and so on before a frame can be grabbed for JPEG encoding.

Together these five modules implement the basic services needed for camera application development.

2.3.2 Control flow in preview mode

Taking Camera2Client::startPreview() as the entry point, we analyze the preview-related data flow through the framework layer.

1. Camera2Client::startPreview

Path 1: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    status_t Camera2Client::startPreview() {
        ATRACE_CALL();
        ALOGV("%s: E", __FUNCTION__);
        Mutex::Autolock icl(mBinderSerializationLock);
        status_t res;
        if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
        SharedParameters::Lock l(mParameters);
        return startPreviewL(l.mParameters, false);
    }

startPreview extracts the parameters and hands the real preview control flow to startPreviewL. The function below looks long, but follows one consistent pattern throughout:

2. Camera2Client::startPreviewL

Path 1: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

Steps 2.1 to 2.6 marked in the code are detailed afterwards;

    status_t Camera2Client::startPreviewL(Parameters &params, bool restart) {
    ......
        // get the previous preview stream id
        int lastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();
        // 2.1 create a camera3 device stream, a Camera3OutputStream
        res = mStreamingProcessor->updatePreviewStream(params);
    .....
        int lastJpegStreamId = mJpegProcessor->getStreamId();
        // 2.2 a JPEG output stream is created as soon as preview starts
        res = updateProcessorStream(mJpegProcessor, params);
    .....
        // 2.3 the callback module creates a Camera3OutputStream as well
        res = mCallbackProcessor->updateStream(params);
    ………
        // 2.4
        outputStreams.push(getCallbackStreamId());
    ......
        outputStreams.push(getPreviewStreamId()); // preview stream
    ......
        if (!params.recordingHint) {
            if (!restart) {
                // 2.5 request handling; updates mPreviewRequest
                res = mStreamingProcessor->updatePreviewRequest(params);
    ......
            }
            // 2.6
            res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,
                    outputStreams); // start the stream, passing the stream ids in outputStreams
        }
    ......
    }

2.1  mStreamingProcessor->updatePreviewStream

Path 2: frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

    status_t StreamingProcessor::updatePreviewStream(const Parameters &params) {
    ......
        sp<CameraDeviceBase> device = mDevice.promote(); // Camera3Device
    ......
        if (mPreviewStreamId != NO_STREAM) {
            // Check if stream parameters have to change
            uint32_t currentWidth, currentHeight;
            res = device->getStreamInfo(mPreviewStreamId,
                    &currentWidth, &currentHeight, 0);
        ......
            if (currentWidth != (uint32_t)params.previewWidth ||
                    currentHeight != (uint32_t)params.previewHeight) {
            ......
                res = device->waitUntilDrained();
            ......
                res = device->deleteStream(mPreviewStreamId);
                ......
                mPreviewStreamId = NO_STREAM;
            }
        }
        if (mPreviewStreamId == NO_STREAM) { // first stream creation
            // create a Camera3OutputStream
            res = device->createStream(mPreviewWindow,
                    params.previewWidth, params.previewHeight,
                    CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, &mPreviewStreamId);
            ......
        }
        res = device->setStreamTransform(mPreviewStreamId,
                params.previewTransform);
        ......
    }

The function first checks whether the StreamingProcessor module already owns a stream; if not, Camera3Device is asked to create one. Evidently a StreamingProcessor can own only one preview stream, while Camera3Device controls all of the streams.

Note: inside Camera2Client, all data exchange among the five modules is built on streams.

We now focus on the Camera3Device interface createStream, the foundation on which all five modules create their streams:

Path 3: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    status_t Camera3Device::createStream(sp<Surface> consumer,
            uint32_t width, uint32_t height, int format, int *id) {
        ......
        assert(mStatus != STATUS_ACTIVE);
        sp<Camera3OutputStream> newStream;
        if (format == HAL_PIXEL_FORMAT_BLOB) { // still picture
            ssize_t jpegBufferSize = getJpegBufferSize(width, height);
            ......
            newStream = new Camera3OutputStream(mNextStreamId, consumer,
                    width, height, jpegBufferSize, format); // size of the jpeg buffer
        } else {
            newStream = new Camera3OutputStream(mNextStreamId, consumer,
                    width, height, format); // Camera3OutputStream
        }
        newStream->setStatusTracker(mStatusTracker);
        // bind one stream id to the Camera3OutputStream
        res = mOutputStreams.add(mNextStreamId, newStream);
        ......
        *id = mNextStreamId++; // at least a preview stream, usually a callback stream too
        mNeedConfig = true;
        // Continue captures if active at start
        if (wasActive) {
            ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
            res = configureStreamsLocked();
            ......
            internalResumeLocked();
        }
        ALOGV("Camera %d: Created new stream", mId);
        return OK;
    }

The point of interest is the new Camera3OutputStream. Camera3Device mainly deals in two kinds of stream, Camera3OutputStream and Camera3InputStream. The former serves as the HAL's output, an output stream the HAL is asked to fill with data; the latter is filled by the framework. Preview, record and capture all obtain their data from the HAL, so all of them exist as output streams, which is why they are the focus here; the preview data flow below elaborates further.

Each time an output stream is created, its information is pushed into the mOutputStreams KeyedVector, keyed by the ID assigned at creation inside Camera3Device, with the sp of the Camera3OutputStream as the value; mNextStreamId records the ID of the next stream.

The above completes the creation of one preview stream for the StreamingProcessor module; the ID of the new Camera3OutputStream is returned and recorded as mPreviewStreamId. Besides this, every stream has an associated ANativeWindow, referred to here as the consumer.

2.2  updateProcessorStream(mJpegProcessor, params)

Path 2: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp

    template <typename ProcessorT>
    status_t Camera2Client::updateProcessorStream(sp<ProcessorT> processor,
                                                  camera2::Parameters params) {
        // No default template arguments until C++11, so we need this overload
        return updateProcessorStream<ProcessorT, &ProcessorT::updateStream>(
                processor, params);
    }

    template <typename ProcessorT,
              status_t (ProcessorT::*updateStreamF)(const camera2::Parameters &)>
    status_t Camera2Client::updateProcessorStream(sp<ProcessorT> processor,
                                                  Parameters params) {
        status_t res;
        // Get raw pointer since sp<T> doesn't have operator->*
        ProcessorT *processorPtr = processor.get();
        res = (processorPtr->*updateStreamF)(params);
    .......
    }

Through the template instantiation this ends up calling JpegProcessor::updateStream, whose logic is essentially the same as the callback module's: it creates an output stream bound to the capture window, and saves the stream's ID in mCaptureStreamId.

One more point worth noting: a JPEG-processing stream is created while still in preview mode so that the capture can start sooner when takePicture is issued; memory is traded for speed.

2.3  mCallbackProcessor->updateStream

Path 2: frameworks/av/services/camera/libcameraservice/api1/client2/CallbackProcessor.cpp

By analogy with the preview stream created in StreamingProcessor, the callback module needs to build a callback stream, again a Camera3OutputStream, to receive each frame returned by the HAL; whether a callback is needed is governed by the callback enable. During plain preview the app usually does not need every frame called back, but once other processing such as video effects is involved, callbacks are enabled.

    status_t CallbackProcessor::updateStream(const Parameters &params) {
        ………
        sp<CameraDeviceBase> device = mDevice.promote();
        ………
        // If possible, use the flexible YUV format
        int32_t callbackFormat = params.previewFormat;
        if (mCallbackToApp) {
            // TODO: etalvala: This should use the flexible YUV format as well, but
            // need to reconcile HAL2/HAL3 requirements.
            callbackFormat = HAL_PIXEL_FORMAT_YV12;
        } else if (params.fastInfo.useFlexibleYuv &&
                (params.previewFormat == HAL_PIXEL_FORMAT_YCrCb_420_SP ||
                 params.previewFormat == HAL_PIXEL_FORMAT_YV12) ) {
            callbackFormat = HAL_PIXEL_FORMAT_YCbCr_420_888;
        }
        if (!mCallbackToApp && mCallbackConsumer == 0) {
            // Create CPU buffer queue endpoint, since app hasn't given us one
            // Make it async to avoid disconnect deadlocks
            sp<IGraphicBufferProducer> producer;
            sp<IGraphicBufferConsumer> consumer;
            // BufferQueueProducer and BufferQueueConsumer
            BufferQueue::createBufferQueue(&producer, &consumer);
            mCallbackConsumer = new CpuConsumer(consumer, kCallbackHeapCount);
            // CallbackProcessor itself inherits from CpuConsumer::FrameAvailableListener
            mCallbackConsumer->setFrameAvailableListener(this);
            mCallbackConsumer->setName(String8("Camera2Client::CallbackConsumer"));
            // used for queue operations; buffers are handled locally here
            mCallbackWindow = new Surface(producer);
        }
        if (mCallbackStreamId != NO_STREAM) {
            // Check if stream parameters have to change
            uint32_t currentWidth, currentHeight, currentFormat;
            res = device->getStreamInfo(mCallbackStreamId,
                    &currentWidth, &currentHeight, &currentFormat);
            ………
        }
        if (mCallbackStreamId == NO_STREAM) {
            ALOGV("Creating callback stream: %d x %d, format 0x%x, API format 0x%x",
                    params.previewWidth, params.previewHeight,
                    callbackFormat, params.previewFormat);
            res = device->createStream(mCallbackWindow,
                    params.previewWidth, params.previewHeight,
                    callbackFormat, &mCallbackStreamId); // creating the callback stream
            ………
        }
        return OK;
    }

2.4  Collect all the streams created in startPreviewL into Vector outputStreams:

    outputStreams.push(getPreviewStreamId());  // preview stream
    outputStreams.push(getCallbackStreamId()); // callback stream

A preview session therefore builds at least two streams.

2.5  mStreamingProcessor->updatePreviewRequest

Path 2: frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

With the streams created, the StreamingProcessor module hands all the stream information to Camera3Device to be packed into a request.

Note: a hallmark of Camera HAL2/3 is that all stream requests are reduced to a few canonical request types, which the HAL must parse before acting on them; this is also why data handling in Camera3 is more complex.

    status_t StreamingProcessor::updatePreviewRequest(const Parameters &params) {
        ………
        if (mPreviewRequest.entryCount() == 0) {
            sp<Camera2Client> client = mClient.promote();
            if (client == 0) {
                ALOGE("%s: Camera %d: Client does not exist", __FUNCTION__, mId);
                return INVALID_OPERATION;
            }
            // Use CAMERA3_TEMPLATE_ZERO_SHUTTER_LAG for ZSL streaming case.
            if (client->getCameraDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_0) {
                if (params.zslMode && !params.recordingHint) {
                    res = device->createDefaultRequest(CAMERA3_TEMPLATE_ZERO_SHUTTER_LAG,
                            &mPreviewRequest);
                } else {
                    res = device->createDefaultRequest(CAMERA3_TEMPLATE_PREVIEW,
                            &mPreviewRequest);
                }
            } else {
                // create a preview request; the default is built by the underlying HAL
                res = device->createDefaultRequest(CAMERA2_TEMPLATE_PREVIEW,
                        &mPreviewRequest);
            ………
        }
        // update the CameraMetadata request from the parameters,
        // used for app settings such as antibanding
        res = params.updateRequest(&mPreviewRequest);
        ………
        res = mPreviewRequest.update(ANDROID_REQUEST_ID,
                &mPreviewRequestId, 1); // the ANDROID_REQUEST_ID of mPreviewRequest
        ………
    }

a  mPreviewRequest is a CameraMetadata object that encapsulates the current preview request;

b  device->createDefaultRequest(CAMERA3_TEMPLATE_PREVIEW, &mPreviewRequest)

Path 3: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    status_t Camera3Device::createDefaultRequest(int templateId, CameraMetadata *request) {
        ………
        const camera_metadata_t *rawRequest;
        ATRACE_BEGIN("camera3->construct_default_request_settings");
        rawRequest = mHal3Device->ops->construct_default_request_settings(
                mHal3Device, templateId);
        ATRACE_END();
        if (rawRequest == NULL) {
            SET_ERR_L("HAL is unable to construct default settings for template %d",
                    templateId);
            return DEAD_OBJECT;
        }
        *request = rawRequest;
        mRequestTemplateCache[templateId] = rawRequest;
        ………
    }

In the end it is the HAL that builds the raw request; for preview this is a request of type CAMERA3_TEMPLATE_PREVIEW. To the HAL, the raw request is essentially a camera_metadata_t structure to be operated on:

    struct camera_metadata {
        metadata_size_t          size;
        uint32_t                 version;
        uint32_t                 flags;
        metadata_size_t          entry_count;
        metadata_size_t          entry_capacity;
        metadata_uptrdiff_t      entries_start; // Offset from camera_metadata
        metadata_size_t          data_count;
        metadata_size_t          data_capacity;
        metadata_uptrdiff_t      data_start; // Offset from camera_metadata
        uint8_t                  reserved[];
    };

This structure can hold many kinds of data, stores values according to each entry's tag type, and grows its storage automatically as needed.
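A quick sketch of the companion C API in system/media/camera shows the entry mechanics the CameraMetadata wrapper builds on (the values are illustrative):

    #include <system/camera_metadata.h>

    // Build a metadata buffer, add one (tag, value) entry, and read it back;
    // this mirrors what CameraMetadata::update()/find() do underneath.
    void metadataRoundTrip() {
        camera_metadata_t *m = allocate_camera_metadata(/*entry_capacity*/ 8,
                                                        /*data_capacity*/ 64);
        int32_t requestId = 10000000;   // e.g. kPreviewRequestIdStart
        add_camera_metadata_entry(m, ANDROID_REQUEST_ID, &requestId, 1);

        camera_metadata_entry_t entry;
        if (find_camera_metadata_entry(m, ANDROID_REQUEST_ID, &entry) == 0) {
            // entry.data.i32[0] == 10000000
        }
        free_camera_metadata(m);
    }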

c  mPreviewRequest.update(ANDROID_REQUEST_ID, &mPreviewRequestId, 1)

saves the ID of the current preview request into the camera metadata.

2.6  mStreamingProcessor->startStream starts the whole preview stream flow

Path 2: frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp

This function is fairly involved and can be regarded as the core control for getting preview running:

    status_t StreamingProcessor::startStream(StreamType type,
            const Vector<int32_t> &outputStreams) {
    .....
        CameraMetadata &request = (type == PREVIEW) ?
                mPreviewRequest : mRecordingRequest; // take the preview CameraMetadata request
        // add the outputStreams into the CameraMetadata
        res = request.update(ANDROID_REQUEST_OUTPUT_STREAMS, outputStreams);
        res = device->setStreamingRequest(request); // send the request to the HAL
    .....
    }

The function first selects the request to handle based on the current working mode; this module owns the preview and record requests.

For preview, the request is the one built earlier by createDefaultRequest; the output streams this request must operate on are first packed into an entry whose tag is ANDROID_REQUEST_OUTPUT_STREAMS.

a  setStreamingRequest

Path: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

This is the point where Camera3Device is genuinely asked to process the preview request carrying multiple streams.

    a.1 status_t Camera3Device::setStreamingRequest(const CameraMetadata &request,
                                                    int64_t* /*lastFrameNumber*/) {
        ATRACE_CALL();
        List<const CameraMetadata> requests;
        requests.push_back(request);
        return setStreamingRequestList(requests, /*lastFrameNumber*/NULL);
    }

This pushes mPreviewRequest onto a list and calls setStreamingRequestList:

    a.2 status_t Camera3Device::setStreamingRequestList(
            const List<const CameraMetadata> &requests, int64_t *lastFrameNumber) {
        ATRACE_CALL();
        return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
    }

    a.3 status_t Camera3Device::submitRequestsHelper(
            const List<const CameraMetadata> &requests, bool repeating,
            /*out*/
            int64_t *lastFrameNumber) { // repeating = 1; lastFrameNumber = NULL
        ………
        status_t res = checkStatusOkToCaptureLocked();
        ………
        RequestList requestList;
        // returns a RequestList of CaptureRequest
        res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
        ………
        if (repeating) {
            // repeating requests are stored into the RequestThread
            res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
        } else {
            // capture mode: a single still capture
            res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
        }
        if (res == OK) {
            waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
            if (res != OK) {
                SET_ERR_L("Can't transition to active in %f seconds!",
                        kActiveTimeout/1e9);
            }
            ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
                  (*(requestList.begin()))->mResultExtras.requestId);
        } else {
            CLOGE("Cannot queue request. Impossible.");
            return BAD_VALUE;
        }
        return res;
    }

a.4 convertMetadataListToRequestListLocked

This function converts the CameraMetadata objects saved in the request list into a list of CaptureRequests:

    status_t Camera3Device::convertMetadataListToRequestListLocked(
            const List<const CameraMetadata> &metadataList, RequestList *requestList) {
        ………
        for (List<const CameraMetadata>::const_iterator it = metadataList.begin();
                it != metadataList.end(); ++it) { // CameraMetadata, mPreviewRequest
            // build a new CaptureRequest converted from the CameraMetadata
            sp<CaptureRequest> newRequest = setUpRequestLocked(*it);
            ………
            // Setup burst Id and request Id
            newRequest->mResultExtras.burstId = burstId++;
            if (it->exists(ANDROID_REQUEST_ID)) {
                if (it->find(ANDROID_REQUEST_ID).count == 0) {
                    CLOGE("RequestID entry exists; but must not be empty in metadata");
                    return BAD_VALUE;
                }
                // set the id this request corresponds to
                newRequest->mResultExtras.requestId =
                        it->find(ANDROID_REQUEST_ID).data.i32[0];
            } else {
                CLOGE("RequestID does not exist in metadata");
                return BAD_VALUE;
            }
            requestList->push_back(newRequest);
            ………
        }
        return OK;
    }

The list is iterated and each entry handled; in the current mode only the PreviewRequest CameraMetadata exists, and setUpRequestLocked converts it into a CaptureRequest.

a.5 setUpRequestLocked

    sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
            const CameraMetadata &request) { // mPreviewRequest
        status_t res;
        if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
            res = configureStreamsLocked();
        ......
        // convert the CameraMetadata into a CaptureRequest, including mOutputStreams
        sp<CaptureRequest> newRequest = createCaptureRequest(request);
        return newRequest;
    }

configureStreamsLocked hands all the streams built on the Camera3Device side, output and input alike, to the HAL3 device for processing; the core interfaces here are configure_streams and register_stream_buffers.

createCaptureRequest converts a CameraMetadata request such as mPreviewRequest into a CaptureRequest:

    a.6 sp<Camera3Device::CaptureRequest> Camera3Device::createCaptureRequest(
            const CameraMetadata &request) { // mPreviewRequest
        ………
        sp<CaptureRequest> newRequest = new CaptureRequest;
        newRequest->mSettings = request; // CameraMetadata
        camera_metadata_entry_t inputStreams =
                newRequest->mSettings.find(ANDROID_REQUEST_INPUT_STREAMS);
        if (inputStreams.count > 0) {
            if (mInputStream == NULL ||
                    mInputStream->getId() != inputStreams.data.i32[0]) {
                CLOGE("Request references unknown input stream %d",
                        inputStreams.data.u8[0]);
                return NULL;
            }
            ………
            newRequest->mInputStream = mInputStream;
            newRequest->mSettings.erase(ANDROID_REQUEST_INPUT_STREAMS);
        }
        // read the stream id information stored in the CameraMetadata
        camera_metadata_entry_t streams =
                newRequest->mSettings.find(ANDROID_REQUEST_OUTPUT_STREAMS);
        ………
        for (size_t i = 0; i < streams.count; i++) {
            // index of the Camera3OutputStream's id within mOutputStreams
            int idx = mOutputStreams.indexOfKey(streams.data.i32[i]);
            ………
            // returns the Camera3OutputStream: preview/callback etc.
            sp<Camera3OutputStreamInterface> stream =
                    mOutputStreams.editValueAt(idx);
            ………
            // add the Camera3OutputStream to the CaptureRequest's mOutputStreams
            newRequest->mOutputStreams.push(stream);
        }
        newRequest->mSettings.erase(ANDROID_REQUEST_OUTPUT_STREAMS);
        return newRequest;
    }

This function mainly handles the output and input streams owned by the given CameraMetadata, here mPreviewRequest. For preview there is at least one output stream for the StreamingProcessor plus an optional one for the CallbackProcessor.

When the PreviewRequest was built, the ANDROID_REQUEST_OUTPUT_STREAMS tag was initialized with Vector &outputStreams, which holds the IDs of the output streams this request needs. Through these index values every Camera3OutputStream created by createStream in Camera3Device can be reached; in other words, Camera3Device holds multiple streams across the different request types, while each individual request only ever needs a few of them.

idx = mOutputStreams.indexOfKey(streams.data.i32[i]) looks up, by a stream ID contained in the PreviewRequest (preview stream ID, callback stream ID, and so on), the corresponding index in the mOutputStreams KeyedVector. Note: the two values are not necessarily the same.

mOutputStreams.editValueAt(idx) fetches the Camera3OutputStream matching that ID (preview stream, callback stream, etc.).

Once all the Camera3OutputStreams of the current request have been found, they are kept inside the CaptureRequest:

    class CaptureRequest : public LightRefBase<CaptureRequest> {
      public:
        CameraMetadata                      mSettings;
        sp<camera3::Camera3Stream>          mInputStream;
        Vector<sp<camera3::Camera3OutputStreamInterface> >
                                            mOutputStreams;
        CaptureResultExtras                 mResultExtras;
    };

mSettings holds the CameraMetadata PreviewRequest and the vector mOutputStreams holds the Camera3OutputStreams extracted from the current request; this completes the construction of one CaptureRequest.

Back in a.4, convertMetadataListToRequestListLocked:

Returning to convertMetadataListToRequestListLocked, one CameraMetadata request has now been processed, producing a CaptureRequest. The ANDROID_REQUEST_ID value is preserved in newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0].

Across the whole Camera3 architecture only three major request ID ranges exist, which shows how few request types are exchanged with the HAL:

preview request mPreviewRequest: mPreviewRequestId (Camera2Client::kPreviewRequestIdStart),
capture request mCaptureRequest: mCaptureId (Camera2Client::kCaptureRequestIdStart),
recording request mRecordingRequest: mRecordingRequestId (Camera2Client::kRecordingRequestIdStart);

    static const int32_t kPreviewRequestIdStart   = 10000000;
    static const int32_t kPreviewRequestIdEnd     = 20000000;
    static const int32_t kRecordingRequestIdStart = 20000000;
    static const int32_t kRecordingRequestIdEnd   = 30000000;
    static const int32_t kCaptureRequestIdStart   = 30000000;
    static const int32_t kCaptureRequestIdEnd     = 40000000;
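A request's type can therefore be recovered from its ID alone; a trivial illustrative helper built on the ranges above (hypothetical, not framework code):

    enum class RequestKind { Preview, Recording, Capture, Unknown };

    // Hypothetical classifier; the boundaries are the constants above.
    RequestKind classifyRequestId(int32_t id) {
        if (id >= 10000000 && id < 20000000) return RequestKind::Preview;
        if (id >= 20000000 && id < 30000000) return RequestKind::Recording;
        if (id >= 30000000 && id < 40000000) return RequestKind::Capture;
        return RequestKind::Unknown;
    }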

Back in a.3, mRequestThread->setRepeatingRequests(requestList):

For preview, one start should keep the underlying hardware running continuously without excessive mode switching, so the framework submits every preview request to the HAL in a repeating mode, replaying a repeating request queue cyclically:

    status_t Camera3Device::RequestThread::setRepeatingRequests(
            const RequestList &requests,
            /*out*/
            int64_t *lastFrameNumber) {
        Mutex::Autolock l(mRequestLock);
        if (lastFrameNumber != NULL) { // NULL on first entry
            *lastFrameNumber = mRepeatingLastFrameNumber;
        }
        mRepeatingRequests.clear();
        mRepeatingRequests.insert(mRepeatingRequests.begin(),
                requests.begin(), requests.end());
        unpauseForNewRequests(); // signal request_thread in waitForNextRequest
        mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
        return OK;
    }

After the preview thread's request is inserted into mRepeatingRequests, the RequestThread is woken up to process the new request.

2.7  Step 2.6 starts the RequestThread request-processing thread

RequestThread::threadLoop() responds to and processes requests newly added to the request queue.

Path 2: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    bool Camera3Device::RequestThread::threadLoop() {
    ....
        // returns the next entry out of mRepeatingRequests: mPreviewRequest
        sp<CaptureRequest> nextRequest = waitForNextRequest();
        ………
        // Create request to HAL
        // convert the CaptureRequest into the camera3_capture_request_t handed to HAL3.0
        camera3_capture_request_t request = camera3_capture_request_t();
        request.frame_number = nextRequest->mResultExtras.frameNumber; // current frame number
        Vector<camera3_stream_buffer_t> outputBuffers;
        // Get the request ID, if any
        int requestId;
        camera_metadata_entry_t requestIdEntry =
                nextRequest->mSettings.find(ANDROID_REQUEST_ID);
        if (requestIdEntry.count > 0) {
            // fetch the request id; here the id of mPreviewRequest
            requestId = requestIdEntry.data.i32[0];
        }
        .....
        for (size_t i = 0; i < nextRequest->mOutputStreams.size(); i++) {
            res = nextRequest->mOutputStreams.editItemAt(i)->
                    getBuffer(&outputBuffers.editItemAt(i));
    .....
        // Submit request and block until ready for next one
        ATRACE_ASYNC_BEGIN("frame capture", request.frame_number);
        ATRACE_BEGIN("camera3->process_capture_request");
        // call the HAL's process_capture_request; settings such as antibanding are applied here
        res = mHal3Device->ops->process_capture_request(mHal3Device, &request);
        ATRACE_END();
        .......
    }

a.1 waitForNextRequest()

    sp<Camera3Device::CaptureRequest> Camera3Device::RequestThread::waitForNextRequest() {
        ………
        while (mRequestQueue.empty()) {
            if (!mRepeatingRequests.empty()) {
                // Always atomically enqueue all requests in a repeating request
                // list. Guarantees a complete in-sequence set of captures to
                // application.
                const RequestList &requests = mRepeatingRequests;
                RequestList::const_iterator firstRequest =
                        requests.begin();
                nextRequest = *firstRequest;
                // insert the current mRepeatingRequests into mRequestQueue
                mRequestQueue.insert(mRequestQueue.end(),
                        ++firstRequest,
                        requests.end());
                // No need to wait any longer
                mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;
                break;
            }
            // wait for the next request
            res = mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
            if ((mRequestQueue.empty() && mRepeatingRequests.empty()) ||
                    exitPending()) {
                Mutex::Autolock pl(mPauseLock);
                if (mPaused == false) {
                    ALOGV("%s: RequestThread: Going idle", __FUNCTION__);
                    mPaused = true;
                    // Let the tracker know
                    sp<StatusTracker> statusTracker = mStatusTracker.promote();
                    if (statusTracker != 0) {
                        statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
                    }
                }
                // Stop waiting for now and let thread management happen
                return NULL;
            }
        }
        if (nextRequest == NULL) {
            // Don't have a repeating request already in hand, so queue
            // must have an entry now.
            RequestList::iterator firstRequest =
                    mRequestQueue.begin();
            nextRequest = *firstRequest;
            // take one CaptureRequest from mRequestQueue,
            // coming from the remainder of mRepeatingRequests
            mRequestQueue.erase(firstRequest);
        }
        ………
        if (nextRequest != NULL) {
            // every non-NULL request advances the frame number
            nextRequest->mResultExtras.frameNumber = mFrameNumber++;
            nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
            nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;
        }
        return nextRequest;
    }

This function is the core of servicing the RequestList: it polls and sleeps until mRepeatingRequests has requests to process, then enqueues all of its CaptureRequests into mRequestQueue. In principle each CaptureRequest corresponds to the request for one frame, and a single wakeup may find several CaptureRequests in mRequestQueue.

nextRequest->mResultExtras.frameNumber = mFrameNumber++ marks which frame the current CaptureRequest is processing.

As long as mRepeatingRequests is non-empty, each pass erases entries from mRequestQueue as they are consumed, and once mRequestQueue becomes empty it is reloaded from the contents of mRepeatingRequests, forming the repeated servicing of a repeating request.
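Condensed to its essentials, the refill mechanism behaves like this simplified model (a sketch of the idea, not the real Camera3Device code):

    #include <deque>
    #include <vector>

    struct Request { int id; };

    std::vector<Request> repeating;  // installed once by setRepeatingRequests()
    std::deque<Request>  queue;      // consumed one entry per HAL submission

    // Assumes a repeating request has been installed.
    Request nextRequest() {
        if (queue.empty())
            queue.assign(repeating.begin(), repeating.end());  // refill
        Request r = queue.front();
        queue.pop_front();           // erased as it is consumed
        return r;                    // goes on to process_capture_request()
    }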

a.2  camera_metadata_entry_t requestIdEntry = nextRequest->mSettings.find(ANDROID_REQUEST_ID); extracts the request type ID of this CaptureRequest;

a.3  the getBuffer operations fetch an output buffer from each of the request's streams;

a.4  mHal3Device->ops->process_capture_request(mHal3Device, &request)

Here the request has already been converted from a CaptureRequest into the camera3_capture_request_t structure exchanged with HAL3.0.

3. Summary

This completes one full request to the HAL3.0 device. Starting preview first builds several output streams, then packs them into mPreviewRequest to start the stream; that request is converted into a CaptureRequest and then into a capture list, which the RequestThread finally services. Each request is, put simply, Camera3Device asking HAL3.0 for one frame of data, though a request can also carry control operations such as autofocus.

2.3.3 The device3 initialize call during open camera

The app-to-framework flow was outlined in the sections above; the frameworks-to-HAL initialization is diagrammed below:

2.3.4 Parameter-setting flow in the frameworks layer

The setParameters flow is shown in the figure below:

Frameworks layer:

2.3.5 Delivering parameters down to the HAL layer

As section 2.3.2 showed, the request thread Camera3Device::RequestThread::threadLoop calls the HAL interface mHal3Device->ops->process_capture_request(mHal3Device, &request), and parameter setting, such as antibanding, is completed inside that call.
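On the HAL side each incoming request carries its settings as metadata; a hedged sketch of how a HAL implementation might pick the antibanding mode out of a request (an illustrative handler, not the QCamera2 code):

    #include <hardware/camera3.h>
    #include <system/camera_metadata.h>

    int handle_capture_request(const camera3_capture_request_t *request) {
        // settings may be NULL when the request repeats the previous settings
        if (request->settings != NULL) {
            camera_metadata_ro_entry_t e;
            if (find_camera_metadata_ro_entry(request->settings,
                    ANDROID_CONTROL_AE_ANTIBANDING_MODE, &e) == 0 && e.count > 0) {
                uint8_t antibanding = e.data.u8[0];  // e.g. ..._AUTO / _50HZ / _60HZ
                (void)antibanding;  // would be pushed down to the vendor layer here
            }
        }
        // filling request->output_buffers and returning the result via
        // process_capture_result() are omitted in this sketch
        return 0;
    }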

The antibanding and related parameters have already been written into the request list as described in section 2.3.2; the HAL-layer parameter setting is shown in the figure below:

     


An excellent camera-related blog: http://www.cnblogs.com/whw19818/p/5766027.html