Android Camera Framework Study (4): Recording Flow Analysis


If you're interested, feel free to join QQ group 85486140 so we can discuss and learn from each other!
Note: this article is a set of Android 5.1 study notes, organized along the software start-up flow.
Step by step, this time we study camera video. Although the title says "recording flow analysis", much of the flow is similar to preview (updating streams, creating streams, creating requests). The focus here is the creation of the MediaRecorder object, the registration of the video frame listener, the frame-available event, and the associated callback chains.

I. The video (MediaRecorder) state machine

    Used to record audio and video. The recording control is based on a
    simple state machine (see below). (For the state machine, see the flow
    diagram given in the source code.)
    A common case of using MediaRecorder to record audio works as follows:
    1. MediaRecorder recorder = new MediaRecorder();
    2. recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    3. recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    4. recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    5. recorder.setOutputFile(PATH_NAME);
    6. recorder.prepare();
    7. recorder.start(); // Recording is now started
    8. …
    9. recorder.stop();
    10. recorder.reset(); // You can reuse the object by going back to setAudioSource() step
    11. recorder.release(); // Now the object cannot be reused
    Applications may want to register for informational and error
    events in order to be informed of some internal update and possible
    runtime errors during recording. Registration for such events is
    done by setting the appropriate listeners (via calls to
    setOnInfoListener(OnInfoListener) and/or
    setOnErrorListener(OnErrorListener)).
    In order to receive the respective callback associated with these listeners,
    applications are required to create MediaRecorder objects on threads with a
    Looper running (the main UI thread by default already has a Looper running).

The comment above was written by Google engineers and is the most authoritative reference. Roughly, it says: "MediaRecorder is used to record audio and video, and recording is controlled by a simple state machine." Steps 1, 2, 3… above are the order that must be followed. My translation skills are limited, so let's put the focus back on the camera side.

II. How the camera app starts recording

    // Source: pdk/apps/TestingCamera/src/com/android/testingcamera/TestingCamera.java
    private void startRecording() {
        log("Starting recording");
        logIndent(1);
        log("Configuring MediaRecorder");
        // Checks that recording is enabled are omitted here; straight to the point.
        // First a MediaRecorder Java object is created (like Camera.java, the Java
        // object wraps a native MediaRecorder object created through JNI; read on).
        mRecorder = new MediaRecorder();
        // Set up the callbacks.
        mRecorder.setOnErrorListener(mRecordingErrorListener);
        mRecorder.setOnInfoListener(mRecordingInfoListener);
        if (!mRecordHandoffCheckBox.isChecked()) {
            // Hand the current Camera Java object to the MediaRecorder Java object.
            // setCamera is a JNI method; its implementation is analyzed below.
            mRecorder.setCamera(mCamera);
        }
        // Hand the preview surface Java object to the MediaRecorder Java object;
        // a detailed explanation follows below.
        mRecorder.setPreviewDisplay(mPreviewHolder.getSurface());
        // Set the audio and video sources.
        mRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mRecorder.setProfile(mCamcorderProfiles.get(mCamcorderProfile));
        // Pick the recording frame size from the UI and pass it to MediaRecorder.
        Camera.Size videoRecordSize = mVideoRecordSizes.get(mVideoRecordSize);
        if (videoRecordSize.width > 0 && videoRecordSize.height > 0) {
            mRecorder.setVideoSize(videoRecordSize.width, videoRecordSize.height);
        }
        // Pick the recording frame rate from the UI and pass it to MediaRecorder.
        if (mVideoFrameRates.get(mVideoFrameRate) > 0) {
            mRecorder.setVideoFrameRate(mVideoFrameRates.get(mVideoFrameRate));
        }
        File outputFile = getOutputMediaFile(MEDIA_TYPE_VIDEO);
        log("File name:" + outputFile.toString());
        mRecorder.setOutputFile(outputFile.toString());

        boolean ready = false;
        log("Preparing MediaRecorder");
        try {
            // prepare() — see Google's standard MediaRecorder flow above.
            mRecorder.prepare();
            ready = true;
        } catch (Exception e) {
            // ... exception handling omitted
        }

        if (ready) {
            try {
                log("Starting MediaRecorder");
                mRecorder.start(); // recording starts here
                mState = CAMERA_RECORD;
                log("Recording active");
                mRecordingFile = outputFile;
            } catch (Exception e) {
                // ... exception handling omitted
            }
        }
    }


As you can see, the way the app starts recording follows the state-machine flow; application code should do the same:

• 1. Create the MediaRecorder Java object: mRecorder = new MediaRecorder();
• 2. Hand the Camera Java object to MediaRecorder: mRecorder.setCamera(mCamera);
• 3. Hand the preview surface object to MediaRecorder: mRecorder.setPreviewDisplay(mPreviewHolder.getSurface());
• 4. Set the audio source: mRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
• 5. Set the video source: mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
• 6. Set the recording frame size and frame rate, and call setOutputFile
• 7. Prepare: mRecorder.prepare();
• 8. Start MediaRecorder: mRecorder.start(); (a compact sketch of the whole sequence follows)
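As a compact illustration, here is a minimal sketch of the same sequence against the public (pre-Camera2) android.hardware.Camera / MediaRecorder API. It is a sketch under assumptions, not the TestingCamera code: holder (a SurfaceHolder) and outputPath are placeholders supplied by the app, and exception handling is omitted.

    // Minimal sketch of the recording sequence above (legacy Camera API).
    // 'holder' and 'outputPath' are assumed to be provided by the app;
    // prepare() throws IOException, handling omitted.
    Camera camera = Camera.open();
    camera.unlock();                                               // hand the camera over to MediaRecorder

    MediaRecorder recorder = new MediaRecorder();
    recorder.setCamera(camera);                                    // step 2
    recorder.setPreviewDisplay(holder.getSurface());               // step 3
    recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);  // step 4
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);     // step 5
    recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
    recorder.setOutputFile(outputPath);                            // step 6
    recorder.prepare();                                            // step 7
    recorder.start();                                              // step 8

    // ... later, when the user stops recording:
    recorder.stop();
    recorder.release();
    camera.lock();                                                 // take the camera back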

III. How the classes and interfaces around MediaPlayerService relate

1. When does MediaRecorder get involved with MediaPlayerService?

    MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
    {
        ALOGV("constructor");
        const sp<IMediaPlayerService>& service(getMediaPlayerService());
        if (service != NULL) {
            mMediaRecorder = service->createMediaRecorder();
        }
        if (mMediaRecorder != NULL) {
            mCurrentState = MEDIA_RECORDER_IDLE;
        }
        doCleanUp();
    }
    

When the native MediaRecorder object is created through JNI, its constructor quietly connects to MediaPlayerService — a trick Android uses all the time. After obtaining the MediaPlayerService proxy object, it obtains a MediaRecorder proxy object through an anonymous Binder.
frameworks/base/media/java/android/media/MediaRecorder.java

2. How the MediaPlayerService classes and interfaces relate

A quick look at the interfaces — the anonymous mediaRecorder and mediaPlayer objects are both obtained through the MediaPlayerService proxy object:

• virtual sp<IMediaRecorder> createMediaRecorder() = 0; — creates the mediaRecorder service object used for recording video.
• virtual sp<IMediaPlayer> create(const sp<IMediaPlayerClient>& client, int audioSessionId = 0) = 0; — creates the mediaPlayer service object used for playing music; all music playback goes through a mediaPlayer object.
• virtual status_t decode() = 0; — the audio decoder.

3. How the MediaRecorder classes and interfaces relate

MediaRecorder's job is recording. The MediaRecorder class holds a reference to the BpMediaRecorder proxy object, while the MediaRecorderClient local object lives inside MediaPlayerService. It has quite a few interfaces; only the ones we care about today are listed here — check the source for the rest.
For details see: frameworks/av/include/media/IMediaRecorder.h

• virtual status_t setCamera(const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy) = 0; — this one deserves close attention: it receives the local object that drives recording (BnCameraRecordingProxy) through anonymous Binder — the second parameter is that local object. Later, during startRecording, the frame listener object is registered into the local camera object.
• virtual status_t setPreviewSurface(const sp<IGraphicBufferProducer>& surface) = 0; — hands the preview surface object to mediaRecorder. Since mediaRecorder also owns a local camera client, this surface object is ultimately passed on to CameraService for display, while the recording frames go through a BufferQueue created locally inside CameraService, as detailed below.
• virtual status_t setListener(const sp<IMediaRecorderClient>& listener) = 0; — sets the listener object; in JNI this is the JNIMediaRecorderListener object, which calls back the postEventFromNative method of MediaRecorder.java to deliver events to the Java layer. MediaRecorder actually implements the BnMediaRecorderClient interface (i.e. the notify interface), so the local object is passed into mediaRecorder's local client (the other side then holds a proxy object); see code snippet 1.
• virtual status_t start() = 0; — starts recording; from here on it has little to do with Camera, so we won't chase it further.
1) Code snippet 1
Source: frameworks/base/media/jni/android_media_MediaRecorder.cpp
    // create new listener and give it to MediaRecorder
    sp<JNIMediaRecorderListener> listener = new JNIMediaRecorderListener(env, thiz, weak_this);
    mr->setListener(listener);
    

Through this listener the mediaRecorder JNI layer calls back into the Java method, delivering native events to the upper layer.

2) Code snippet 2
    static void android_media_MediaRecorder_setCamera(JNIEnv* env, jobject thiz, jobject camera)
    {
        // we should not pass a null camera to get_native_camera() call.
        // This checks whether camera is null (it obviously isn't here).
        // This spot is worth studying: 'camera' is the Java-layer camera object
        // (camera.java), and from it we obtain the app-side local Camera object.
        sp<Camera> c = get_native_camera(env, camera, NULL);
        if (c == NULL) {
            // get_native_camera will throw an exception in this case
            return;
        }
        // Get the local mediaRecorder object.
        sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
        // Note carefully: why is c->remote() passed rather than the Camera object?
        // At first this puzzled me, since camera.cpp implements no proxy-class
        // interface — but it turns out CameraBase overrides remote(), which returns
        // the ICamera proxy object. So a new ICamera proxy object gets created
        // inside mediaRecorder, and a local Camera object gets created in
        // mediaPlayerService.
        // c->getRecordingProxy(): obtains the recording local object implemented by
        // the local camera object; setCamera passes it into the local mediaRecorder
        // object (see code snippet 3).
        process_media_recorder_call(env, mr->setCamera(c->remote(), c->getRecordingProxy()),
                "java/lang/RuntimeException", "setCamera failed.");
    }
    // Camera side
    sp<ICameraRecordingProxy> Camera::getRecordingProxy() {
        ALOGV("getProxy");
        return new RecordingProxy(this);
    }
    // RecordingProxy below implements the BnCameraRecordingProxy interface,
    // i.e. it is a local object — mystery solved.
    class RecordingProxy : public BnCameraRecordingProxy
    {
    public:
        RecordingProxy(const sp<Camera>& camera);

        // ICameraRecordingProxy interface
        virtual status_t startRecording(const sp<ICameraRecordingProxyListener>& listener);
        virtual void stopRecording();
        virtual void releaseRecordingFrame(const sp<IMemory>& mem);
    private:
        // Note: this mCamera is no longer the local Camera object that preview
        // was started with; it is the camera local object re-created for
        // mediaRecorder.
        sp<Camera>         mCamera;
    };
    
3) Code snippet 3: the local setCamera implementation
    status_t MediaRecorderClient::setCamera(const sp<ICamera>& camera,
                                            const sp<ICameraRecordingProxy>& proxy)
    {
        ALOGV("setCamera");
        Mutex::Autolock lock(mLock);
        if (mRecorder == NULL) {
            ALOGE("recorder is not initialized");
            return NO_INIT;
        }
        return mRecorder->setCamera(camera, proxy);
    }
    // The constructor creates a StagefrightRecorder object; all subsequent
    // operations go through this mRecorder object.
    MediaRecorderClient::MediaRecorderClient(const sp<MediaPlayerService>& service, pid_t pid)
    {
        ALOGV("Client constructor");
        mPid = pid;
        mRecorder = new StagefrightRecorder;
        mMediaPlayerService = service;
    }
    // StagefrightRecorder::setCamera implementation
    struct StagefrightRecorder : public MediaRecorderBase {};
    status_t StagefrightRecorder::setCamera(const sp<ICamera> &camera,
                                            const sp<ICameraRecordingProxy> &proxy) {
        // error checks omitted
        mCamera = camera;
        mCameraProxy = proxy;
        return OK;
    }
    

In the end both the ICamera and ICameraRecordingProxy proxy objects are stored in StagefrightRecorder's member variables — so the real protagonist is this class.

4) Code snippet 4
    status_t CameraSource::isCameraAvailable(
        const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy,
        int32_t cameraId, const String16& clientName, uid_t clientUid) {

        if (camera == 0) {
            mCamera = Camera::connect(cameraId, clientName, clientUid);
            if (mCamera == 0) return -EBUSY;
            mCameraFlags &= ~FLAGS_HOT_CAMERA;
        } else {
            // We get the proxy from Camera, not ICamera. We need to get the proxy
            // to the remote Camera owned by the application. Here mCamera is a
            // local Camera object created by us. We cannot use the proxy from
            // mCamera here.
            // Re-create a local Camera object from the ICamera proxy object.
            mCamera = Camera::create(camera);
            if (mCamera == 0) return -EBUSY;
            mCameraRecordingProxy = proxy;
            // Not yet sure exactly what this flag means; tentatively read it
            // as a "hot (already opened) camera" flag.
            mCameraFlags |= FLAGS_HOT_CAMERA;
            // Bind a death notifier to the proxy object.
            mDeathNotifier = new DeathNotifier();
            // isBinderAlive needs linkToDeath to work.
            mCameraRecordingProxy->asBinder()->linkToDeath(mDeathNotifier);
        }
        mCamera->lock();
        return OK;
    }
    

Given the class relationships above, we know mediaRecorder indirectly contains a CameraSource object; to keep things simple we jump straight to the key code.

• 1. When the CameraSource object is created, it checks whether the Camera object is usable; if so, it re-creates a local Camera object from the proxy object passed in (note that at this point the Camera proxy object lives in mediaRecorder).
• 2. It then saves the RecordingProxy proxy object into the mCameraRecordingProxy member, and binds a death notifier to that proxy object.
5) Code snippet 5
    status_t CameraSource::startCameraRecording() {
        ALOGV("startCameraRecording");
        // Reset the identity to the current thread because media server owns the
        // camera and recording is started by the applications. The applications
        // will connect to the camera in ICameraRecordingProxy::startRecording.
        int64_t token = IPCThreadState::self()->clearCallingIdentity();
        status_t err;
        if (mNumInputBuffers > 0) {
            err = mCamera->sendCommand(
                CAMERA_CMD_SET_VIDEO_BUFFER_COUNT, mNumInputBuffers, 0);
        }
        err = OK;
        if (mCameraFlags & FLAGS_HOT_CAMERA) { // FLAGS_HOT_CAMERA was set above, so this holds
            mCamera->unlock();
            mCamera.clear();
            // Start recording on the camera's local side directly,
            // through the recording proxy object.
            if ((err = mCameraRecordingProxy->startRecording(
                    new ProxyListener(this))) != OK) {
            }
        } else {
        }
        IPCThreadState::self()->restoreCallingIdentity(token);
        return err;
    }
    

What matters in the code above is that startRecording() creates a listener object, new ProxyListener(this), which gets passed into the local Camera object. When a frame becomes available, it is used to notify mediaRecorder that a frame is ready — time to encode.

6) Code snippet 6: mediaRecorder registers the frame-available listener
    class ProxyListener: public BnCameraRecordingProxyListener {
        public:
            ProxyListener(const sp<CameraSource>& source);
            virtual void dataCallbackTimestamp(int64_t timestampUs, int32_t msgType,
                    const sp<IMemory> &data);
        private:
            sp<CameraSource> mSource;
        };
    //camera.cpp
    status_t Camera::RecordingProxy::startRecording(const sp<ICameraRecordingProxyListener>& listener)
    {
        ALOGV("RecordingProxy::startRecording");
        mCamera->setRecordingProxyListener(listener);
        mCamera->reconnect();
        return mCamera->startRecording();
    }
    

The frame listener is registered when recording starts; the main steps are:

• 1. Call setRecordingProxyListener to store the listener in the mRecordingProxyListener member.
• 2. Re-handshake with CameraService (the connection was dropped when preview stopped — broken at the instant of the switch).
• 3. Start recording through the ICamera proxy object.

IV. Interim summary

At this point the basic flow of how Camera records through MediaRecorder is clear. I drew a flow chart covering roughly the nine steps below.

• Step 1: when the user taps record (or enters recording-preview mode), a Java-layer MediaRecorder object is created.
• Step 2: the Java MediaRecorder object calls the native_setup JNI method to create a native MediaRecorder object. During creation it connects to mediaPlayerService and, via anonymous Binder, obtains a MediaRecorderClient proxy object, stored in the mMediaRecorder member of the MediaRecorder object.
• Step 3: when the Java-layer Camera object is handed to the native mediaRecorder layer, native methods yield the local Camera object and the ICamera proxy object — here we obtain the ICamera proxy object and the RecordingProxy local object.
• Step 4: the ICamera proxy object and the RecordingProxy local object are passed to the MediaRecorderClient object living on the media service side; at that point a fresh ICamera proxy object is created and a RecordingProxy proxy object is obtained.
• Step 5: from the new ICamera proxy object and RecordingProxy proxy object obtained in step 4, a new local Camera object (Camera2) is created, and the recording-frame listener object is registered into Camera2.
• Step 6: startRecording is invoked.
• Step 7: when a recording frame becomes available, the Camera2 local object inside MediaRecorderClient is notified to collect it; at the same time Camera2 informs the MediaRecorderClient object through the registered frame listener. MediaRecorderClient takes the frame and encodes it.
• Steps 8 and 9: messages are delivered back to the application through callbacks.

V. How Camera video creates its BufferQueue

    status_t StreamingProcessor::updateRecordingStream(const Parameters &params) {
        ATRACE_CALL();
        status_t res;
        Mutex::Autolock m(mMutex);
        sp<CameraDeviceBase> device = mDevice.promote();
        //----------------
        bool newConsumer = false;
        if (mRecordingConsumer == 0) {
            ALOGV("%s: Camera %d: Creating recording consumer with %zu + 1 "
                    "consumer-side buffers", __FUNCTION__, mId, mRecordingHeapCount);
            // Create CPU buffer queue endpoint. We need one more buffer here so that we can
            // always acquire and free a buffer when the heap is full; otherwise the consumer
            // will have buffers in flight we'll never clear out.
            sp<IGraphicBufferProducer> producer;
            sp<IGraphicBufferConsumer> consumer;
            // Create the BufferQueue, getting both the producer and consumer objects.
            BufferQueue::createBufferQueue(&producer, &consumer);
            // Note the buffer usage below is GRALLOC_USAGE_HW_VIDEO_ENCODER,
            // which mediaRecorder will rely on.
            mRecordingConsumer = new BufferItemConsumer(consumer,
                    GRALLOC_USAGE_HW_VIDEO_ENCODER,
                    mRecordingHeapCount + 1);
            mRecordingConsumer->setFrameAvailableListener(this);
            mRecordingConsumer->setName(String8("Camera2-RecordingConsumer"));
            mRecordingWindow = new Surface(producer);
            newConsumer = true;
            // Allocate memory later, since we don't know buffer size until receipt
        }
    // Stream-update code omitted ----
    // Note the pixel format of the video recording buffers below:
    // CAMERA2_HAL_PIXEL_FORMAT_OPAQUE.
        if (mRecordingStreamId == NO_STREAM) {
            mRecordingFrameCount = 0;
            res = device->createStream(mRecordingWindow,
                    params.videoWidth, params.videoHeight,
                    CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, &mRecordingStreamId);
        }

        return OK;
    }
    

It mainly handles the following:

• 1. Since recording needs no display, a BufferQueue local object is created inside CameraService; the producer and consumer obtained here are both local, and only the BufferQueue's IGraphicBufferAlloc proxy object mAllocator is remote, dedicated to allocating buffers.
• 2. StreamingProcessor.cpp implements the FrameAvailableListener interface method onFrameAvailable(), and registers itself into the BufferQueue via setFrameAvailableListener.
• 3. A Surface object is created from the producer and handed to Camera3Device to request recording buffers.
• 4. If the parameters differ, or a video stream was created before, the video stream is deleted or updated; if no video stream exists at all, it is created and the stream info is updated from the parameters.

VI. When does a recording frame become available?

    1.onFrameAvailable()

    void StreamingProcessor::onFrameAvailable(const BufferItem& /*item*/) {
        ATRACE_CALL();
        Mutex::Autolock l(mMutex);
        if (!mRecordingFrameAvailable) {
            mRecordingFrameAvailable = true;
            mRecordingFrameAvailableSignal.signal();
        }
    
    }
    

This function is called once a video buffer is enqueued. As the code shows, it wakes up the StreamingProcessor main thread.

2. The StreamingProcessor thread loop

    bool StreamingProcessor::threadLoop() {
        status_t res;
        {
            Mutex::Autolock l(mMutex);
            while (!mRecordingFrameAvailable) {
                // The thread parks here and is woken up once a frame is available.
                res = mRecordingFrameAvailableSignal.waitRelative(
                    mMutex, kWaitDuration);
                if (res == TIMED_OUT) return true;
            }
            mRecordingFrameAvailable = false;
        }
        do {
            res = processRecordingFrame(); // further processing
        } while (res == OK);

        return true;
    }
    

So it turns out the StreamingProcessor main thread serves recording only; the preview stream merely borrows a few of its methods.

3. Delivering the frame-available message to the local Camera object

    status_t StreamingProcessor::processRecordingFrame() {
        ATRACE_CALL();
        status_t res;
        sp<Camera2Heap> recordingHeap;
        size_t heapIdx = 0;
        nsecs_t timestamp;
        sp<Camera2Client> client = mClient.promote();

        BufferItemConsumer::BufferItem imgBuffer;
        // Acquire a buffer for consumption, i.e. hand it to mediaRecorder for encoding.
        res = mRecordingConsumer->acquireBuffer(&imgBuffer, 0);
        //----------------------------
        // Call outside locked parameters to allow re-entrancy from notification
        Camera2Client::SharedCameraCallbacks::Lock l(client->mSharedCameraCallbacks);
        if (l.mRemoteCallback != 0) {
            // Notify the local Camera object through the callback.
            l.mRemoteCallback->dataCallbackTimestamp(timestamp,
                    CAMERA_MSG_VIDEO_FRAME,
                    recordingHeap->mBuffers[heapIdx]);
        } else {
            ALOGW("%s: Camera %d: Remote callback gone", __FUNCTION__, mId);
        }
        return OK;
    }

We already know that while Camera runs there are two objects of type ICameraClient: the proxy object is kept in CameraService and the local object is kept in the local Camera object. Here the proxy notifies the local object to come fetch a frame. Note that the message sent is CAMERA_MSG_VIDEO_FRAME.

4. The local Camera object forwards the message to mediaRecorder

    void Camera::dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr)
    {
        // If recording proxy listener is registered, forward the frame and return.
        // The other listener (mListener) is ignored because the receiver needs to
        // call releaseRecordingFrame.
        sp<ICameraRecordingProxyListener> proxylistener;
        {
            // mRecordingProxyListener is the listener proxy object that
            // mediaRecorder registered earlier.
            Mutex::Autolock _l(mLock);
            proxylistener = mRecordingProxyListener;
        }
        if (proxylistener != NULL) {
            // This sends the buffer into mediaRecorder for encoding.
            proxylistener->dataCallbackTimestamp(timestamp, msgType, dataPtr);
            return;
        }
        // ... code omitted
    }
    

At this point the local Camera object invokes the frame listener that mediaRecorder registered. After all the groundwork above, this should now be easy to follow — mediaRecorder finally gets fed.

VII. Summary

• 1. At first I assumed preview and video share one local camera object; the code shows they are in fact different objects.
• 2. The preview BufferQueue is created inside CameraService and has nothing to do with SurfaceFlinger; only the IGraphicBufferAlloc proxy object mAllocator is kept, used to allocate buffers.
• 3. I hadn't fully understood anonymous Binder: I thought only a local object could be written with writeStrongBinder() and read back on the other end as a proxy with readStrongBinder(). In fact a proxy object can be passed too; the code simply takes a different path, and the kernel creates a new binder_ref entry for the other end. For example, when mediaRecorder sets the camera it passes the ICamera proxy object:
        status_t setCamera(const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy)
        {
            ALOGV("setCamera(%p,%p)", camera.get(), proxy.get());
            Parcel data, reply;
            data.writeInterfaceToken(IMediaRecorder::getInterfaceDescriptor());
        // camera->asBinder() is the ICamera proxy object
            data.writeStrongBinder(camera->asBinder());
            data.writeStrongBinder(proxy->asBinder());
            remote()->transact(SET_CAMERA, data, &reply);
            return reply.readInt32();
        }
    
Android 8.0 Camera System Architecture (1)


As Android has evolved, the camera subsystem framework has evolved with it — from the original API1 and HAL1 to today's API2 and HAL3, and from simple photo capture and recording to burst shooting and AI portrait modes; it is arguably the subsystem whose architecture has changed the most, and the most often. Many devices still rely on camera HAL1, so Android 7.0 continues to support that module. The Android camera service also supports implementing both HALs (1 and 3) simultaneously, which is useful if you want to expose a lower-performance front camera through camera HAL1 and a more advanced back camera through camera HAL3. The Android camera hardware abstraction layer (HAL) connects the higher-level camera framework APIs of Camera 2 to the underlying camera driver and hardware. The camera subsystem includes the implementations of the camera pipeline components, while the camera HAL provides the interfaces for implementing your versions of those components. As of Android 8.0, the camera HAL interface is part of Project Treble, with the corresponding HIDL interfaces defined in hardware/interfaces/camera; the implementation wraps legacy HALs that still use the old API. Starting with Android 8.0, camera HAL implementations must use the HIDL API; the legacy interface is no longer supported. The new camera architecture in Android 8.0 offers much more flexibility. It looks like this:

[Figure: Android 8.0 camera framework architecture]

The Android Camera API was redesigned to give applications far more control over the camera subsystem on Android devices, while reorganizing the API to make it more efficient and maintainable. With that extra control it becomes easier to build high-quality camera apps that run reliably across many products while still using device-specific algorithms wherever possible to maximize quality and performance. The version 3 camera subsystem folds several operating modes into one unified view, which can implement any of the earlier modes plus new ones such as burst mode. That improves the user's control over focus and exposure, and over post-processing such as noise reduction, contrast, and sharpening; the simplified view also makes it easier for app developers to use the camera's various features. The architecture diagram already lays out the relationships between the layers clearly; following the map, we start from the newest architecture, then look at the complete architecture, and finally return to the application layer to examine the camera subsystem's design.

    1. CameraService

CameraManager and CameraService communicate over Binder, forming a push/pull callback loop.

    frameworks\base\core\java\android\hardware\camera2\CameraManager.java
    frameworks\av\camera\aidl\android\hardware\ICameraService.aidl
    frameworks\av\camera\aidl\android\hardware\ICameraServiceListener.aidl

    private static final class CameraManagerGlobal extends ICameraServiceListener.Stub {
             ......
             public ICameraService getCameraService() {
                synchronized(mLock) {
                    connectCameraServiceLocked(); // connect to the service
                    if (mCameraService == null) {
                        Log.e(TAG, "Camera service is unavailable");
                    }
                    return mCameraService;
                }
            }
            ......
    }
    

CameraManager reaches the CameraService through CameraManagerGlobal and registers a listener; CameraService keeps a list of ICameraServiceListener objects and calls results back to CameraManager.

    private void connectCameraServiceLocked() {
            // look up the service binder
            IBinder cameraServiceBinder = ServiceManager.getService(CAMERA_SERVICE_BINDER_NAME);
            // cast it to the service interface
            ICameraService cameraService = ICameraService.Stub.asInterface(cameraServiceBinder);
            try {
               CameraStatus[] cameraStatuses = cameraService.addListener(this); // register the callback listener
               // keep a local copy
               mCameraService = cameraService;
           } catch (RemoteException e) {
               ......
           }
    }
    

    frameworks\av\services\camera\libcameraservice\CameraService.h

    class CameraService :
        public BinderService<CameraService>,
        public virtual ::android::hardware::BnCameraService, // Bn side (the server)
        public virtual IBinder::DeathRecipient {.....}
    

Registering the CameraService callback listener:

    Status CameraService::addListener(const sp<ICameraServiceListener>& listener,
            std::vector<hardware::CameraStatus> *cameraStatuses) {
        {
            Mutex::Autolock lock(mStatusListenerLock);
            for (auto& it : mListenerList) {
                if (IInterface::asBinder(it) == IInterface::asBinder(listener)) {
                    return STATUS_ERROR(ERROR_ALREADY_EXISTS, "Listener already registered");
                }
            }
            mListenerList.push_back(listener); // register
        }
    
        return Status::ok();
    }
    

CameraService initialization:

    void CameraService::onFirstRef()
    {
        BnCameraService::onFirstRef();
        res = enumerateProviders(); // enumerate the providers
        CameraService::pingCameraServiceProxy();
    }
    

CameraService calls CameraProviderManager to enumerate the devices:

    status_t CameraService::enumerateProviders() {

        if (nullptr == mCameraProviderManager.get()) {
            mCameraProviderManager = new CameraProviderManager();
            res = mCameraProviderManager->initialize(this); // initialize
        }

        mNumberOfCameras = mCameraProviderManager->getCameraCount(); // number of cameras
        mNumberOfNormalCameras =
                mCameraProviderManager->getAPI1CompatibleCameraCount(); // API1-compatible cameras

        mCameraProviderManager->setUpVendorTags(); // vendor tags

        if (nullptr == mFlashlight.get()) {
            mFlashlight = new CameraFlashlight(mCameraProviderManager, this); // flashlight
        }
        res = mFlashlight->findFlashUnits();
        return OK;
    }
    

    2. CameraProviderManager

    frameworks\av\services\camera\libcameraservice\common\CameraProviderManager.cpp

    status_t CameraProviderManager::initialize(wp<CameraProviderManager::StatusListener> listener,
            ServiceInteractionProxy* proxy) {
        std::lock_guard<std::mutex> lock(mInterfaceMutex);
        mListener = listener;
        mServiceProxy = proxy;

        // Registering will trigger notifications for all already-known providers
        bool success = mServiceProxy->registerForNotifications( // register for proxy notifications
            /* instance name, empty means no filter */ "",
            this);
        if (!success) {
            return INVALID_OPERATION;
        }

        // add the provider
        addProviderLocked(kLegacyProviderName, /*expected*/ false);
        return OK;
    }
    

Look up, initialize, and store the provider. mServiceProxy is a ServiceInteractionProxy*:

    status_t CameraProviderManager::addProviderLocked(const std::string& newProvider, bool expected) {

        sp<provider::V2_4::ICameraProvider> interface;
        interface = mServiceProxy->getService(newProvider); // fetch the service

        sp<ProviderInfo> providerInfo =
                new ProviderInfo(newProvider, interface, this);
        status_t res = providerInfo->initialize(); // run its initialization

        mProviders.push_back(providerInfo); // keep a copy

        return OK;
    }
    

    frameworks\av\services\camera\libcameraservice\common\CameraProviderManager.h

    struct ServiceInteractionProxy {
            virtual bool registerForNotifications(
                    const std::string &serviceName,
                    const sp<hidl::manager::V1_0::IServiceNotification>
                    &notification) = 0;
            virtual sp<hardware::camera::provider::V2_4::ICameraProvider> getService(
                    const std::string &serviceName) = 0;
            virtual ~ServiceInteractionProxy() {}
        };
    
        // Standard use case - call into the normal generated static methods which invoke
        // the real hardware service manager
        struct HardwareServiceInteractionProxy : public ServiceInteractionProxy {
            virtual bool registerForNotifications(
                    const std::string &serviceName,
                    const sp<hidl::manager::V1_0::IServiceNotification>
                    &notification) override {
                return hardware::camera::provider::V2_4::ICameraProvider::registerForNotifications(
                        serviceName, notification);
            }
            virtual sp<hardware::camera::provider::V2_4::ICameraProvider> getService(
                const std::string &serviceName) override { // call into the HAL
                return hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName);
            }
        };
    

3. The camera hardware abstraction layer

    hardware\interfaces\camera\provider\2.4\default\CameraProvider.h

    struct CameraProvider : public ICameraProvider, public camera_module_callbacks_t {......}
    

The initialization:

    bool CameraProvider::initialize() {
        camera_module_t *rawModule;
        int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID, // the familiar recipe, the familiar moves
                (const hw_module_t **)&rawModule);
        mModule = new CameraModule(rawModule); // wrapped in another layer
        err = mModule->init();
        // Setup callback now because we are going to try openLegacy next
        err = mModule->setCallbacks(this);
        mNumberOfLegacyCameras = mModule->getNumberOfCameras();
        for (int i = 0; i < mNumberOfLegacyCameras; i++) {
            struct camera_info info;
            auto rc = mModule->getCameraInfo(i, &info); // query camera info
            char cameraId[kMaxCameraIdLen];
            snprintf(cameraId, sizeof(cameraId), "%d", i);
            std::string cameraIdStr(cameraId);
            mCameraStatusMap[cameraIdStr] = CAMERA_DEVICE_STATUS_PRESENT;
            mCameraIds.add(cameraIdStr);
            ......
        }

        return false; // mInitFailed
    }
    

    hardware\interfaces\camera\common\1.0\default\CameraModule.cpp

    CameraModule::CameraModule(camera_module_t *module) {
        mModule = module; //save this ref
    }
    

It does some version-related handling:

    int CameraModule::init() {
        ATRACE_CALL();
        int res = OK;
        if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 &&
                mModule->init != NULL) {
            ATRACE_BEGIN("camera_module->init");
            res = mModule->init(); // initialize
            ATRACE_END();
        }
        mCameraInfoMap.setCapacity(getNumberOfCameras());
        return res;
    }
    

hardware\libhardware\include\hardware\camera_common.h
Ultimately it is the HAL that talks to the camera device driver:

    typedef struct camera_module {
        ......
        int (*init)();

        /* reserved for future use */
        void* reserved[5];
    } camera_module_t;
    

Let's rewind a bit, back to providerInfo->initialize():

    status_t CameraProviderManager::ProviderInfo::initialize() {
        status_t res = parseProviderName(mProviderName, &mType, &mId);
        hardware::Return<Status> status = mInterface->setCallback(this);
        hardware::Return<bool> linked = mInterface->linkToDeath(this, /*cookie*/ mId);
        // Initialize the camera devices.
        // Get initial list of camera devices, if any
        std::vector<std::string> devices;
        hardware::Return<void> ret = mInterface->getCameraIdList([&status, &devices]( // fetch the devices
                Status idStatus,
                const hardware::hidl_vec<hardware::hidl_string>& cameraDeviceNames) {
            status = idStatus;
            if (status == Status::OK) {
                for (auto& name : cameraDeviceNames) {
                    devices.push_back(name);
                }
            }
        });

        sp<StatusListener> listener = mManager->getStatusListener();
        for (auto& device : devices) {
            std::string id;
            status_t res = addDevice(device, // add the device
                    hardware::camera::common::V1_0::CameraDeviceStatus::PRESENT, &id);
        }

        for (auto& device : mDevices) {
            mUniqueCameraIds.insert(device->mId);
            if (device->isAPI1Compatible()) {
                mUniqueAPI1CompatibleCameraIds.insert(device->mId);
            }
        }
        mUniqueDeviceCount = mUniqueCameraIds.size();
        return OK;
    }
    
    status_t CameraProviderManager::ProviderInfo::addDevice(const std::string& name,
            CameraDeviceStatus initialStatus, /*out*/ std::string* parsedId) {

        uint16_t major, minor;
        std::string type, id;

        status_t res = parseDeviceName(name, &major, &minor, &type, &id); // parse the device name

        if (mManager->isValidDeviceLocked(id, major)) { // validate
            return BAD_VALUE;
        }

        std::unique_ptr<DeviceInfo> deviceInfo;
        switch (major) {
            case 1:
                deviceInfo = initializeDeviceInfo<DeviceInfo1>(name, mProviderTagid, // HAL1 device
                        id, minor);
                break;
            case 3:
                deviceInfo = initializeDeviceInfo<DeviceInfo3>(name, mProviderTagid, // HAL3 device
                        id, minor);
                break;
            default:
                return BAD_VALUE;
        }
        if (deviceInfo == nullptr) return BAD_VALUE;
        deviceInfo->mStatus = initialStatus; // initial status for callbacks

        mDevices.push_back(std::move(deviceInfo)); // store the reference

        if (parsedId != nullptr) {
            *parsedId = id;
        }
        return OK;
    }
    
    

Initializing the device info:

    template<class DeviceInfoT>
    std::unique_ptr<CameraProviderManager::ProviderInfo::DeviceInfo>
        CameraProviderManager::ProviderInfo::initializeDeviceInfo(
            const std::string &name, const metadata_vendor_id_t tagId,
            const std::string &id, uint16_t minorVersion) const {
        Status status;

        auto cameraInterface =
                getDeviceInterface<typename DeviceInfoT::InterfaceT>(name); // fetch the remote HAL device interface
        if (cameraInterface == nullptr) return nullptr;
        return std::unique_ptr<DeviceInfo>( // return the device info
            new DeviceInfoT(name, tagId, id, minorVersion, resourceCost,
                    cameraInterface));
    }
    
    

The hardware abstraction layer is reached through the ICameraDevice interface:

    template<>
    sp<device::V1_0::ICameraDevice>
    CameraProviderManager::ProviderInfo::getDeviceInterface
            <device::V1_0::ICameraDevice>(const std::string &name) const {
        Status status;
        sp<device::V1_0::ICameraDevice> cameraInterface;
        hardware::Return<void> ret;
        ret = mInterface->getCameraDeviceInterface_V1_x(name, [&status, &cameraInterface](
            Status s, sp<device::V1_0::ICameraDevice> interface) {
                    status = s;
                    cameraInterface = interface;
                });
        return cameraInterface;
    }
    

The HAL-side interface (Treble architecture):

    struct CameraDevice : public ICameraDevice {......}
    

4. CameraDeviceClient and CameraDevice

The right-hand branch of the new architecture diagram is now fully traced; let's turn back to the left-hand branch.
frameworks/base/core/java/android/hardware/camera2/CameraManager.java

    private CameraDevice openCameraDeviceUserAsync(String cameraId,
                CameraDevice.StateCallback callback, Handler handler, final int uid)
                throws CameraAccessException {
            CameraCharacteristics characteristics = getCameraCharacteristics(cameraId);
            CameraDevice device = null;

            synchronized (mLock) {

                ICameraDeviceUser cameraUser = null;
                // CameraDeviceImpl is a subclass of the abstract class CameraDevice
                android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                        new android.hardware.camera2.impl.CameraDeviceImpl(
                            cameraId,
                            callback,
                            handler,
                            characteristics,
                            mContext.getApplicationInfo().targetSdkVersion);

                ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();

                try {
                    if (supportsCamera2ApiLocked(cameraId)) {
                        // Use cameraservice's cameradeviceclient implementation for HAL3.2+ devices
                        ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
                        // connect to the camera device
                        cameraUser = cameraService.connectDevice(callbacks, cameraId,
                                mContext.getOpPackageName(), uid);
                    } else {
                        // Use legacy camera implementation for HAL1 devices
                        cameraUser = CameraDeviceUserShim.connectBinderShim(callbacks, id);
                    }
                } catch (ServiceSpecificException e) {
                   ......
                }

                // TODO: factor out callback to be non-nested, then move setter to constructor
                // For now, calling setRemoteDevice will fire initial
                // onOpened/onUnconfigured callbacks.
                // This function call may post onDisconnected and throw CAMERA_DISCONNECTED if
                // cameraUser dies during setup.
                deviceImpl.setRemoteDevice(cameraUser); // wire up the remote device
                device = deviceImpl;
            }

            return device;
        }
    

    frameworks/av/services/camera/libcameraservice/CameraService.cpp

    Status CameraService::connectDevice(
            const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
            const String16& cameraId,
            const String16& clientPackageName,
            int clientUid,
            /*out*/
            sp<hardware::camera2::ICameraDeviceUser>* device) {
    
        ATRACE_CALL();
        Status ret = Status::ok();
        String8 id = String8(cameraId);
        sp<CameraDeviceClient> client = nullptr;
    // call the connectHelper template function
        ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id,
                CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName,
                clientUid, USE_CALLING_PID, API_2,
                /*legacyMode*/ false, /*shimUpdateOnly*/ false,
                /*out*/client);
    
        if(!ret.isOk()) {
            logRejected(id, getCallingPid(), String8(clientPackageName),
                    ret.toString8());
            return ret;
        }
    
    *device = client; // return the device
        return ret;
    }
    

The template function:

    template<class CALLBACK, class CLIENT>
    Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
            int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
            apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
            /*out*/sp<CLIENT>& device) {
        binder::Status ret = binder::Status::ok();
    
        String8 clientName8(clientPackageName);
    
        int originalClientPid = 0;
        sp<CLIENT> client = nullptr;
        {
            ......
            // Enforce client permissions and do basic sanity checks
        if(!(ret = validateConnectLocked(cameraId, clientName8, // validate the connection
                    /*inout*/clientUid, /*inout*/clientPid, /*out*/originalClientPid)).isOk()) {
                return ret;
            }
    
            // Check the shim parameters after acquiring lock, if they have already been updated and
            // we were doing a shim update, return immediately
            if (shimUpdateOnly) {
                auto cameraState = getCameraState(cameraId);
                if (cameraState != nullptr) {
                    if (!cameraState->getShimParams().isEmpty()) return ret;
                }
            }
            
            ......
    
            sp<BasicClient> tmp = nullptr;
            if(!(ret = makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid,
                    clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
                    /*out*/&tmp)).isOk()) {
                return ret;
            }
            client = static_cast<CLIENT*>(tmp.get());
        // initialize
            err = client->initialize(mCameraProviderManager);
    
        // Update shim parameters for legacy clients
        if (effectiveApiLevel == API_1) { // legacy API1 clients
                // Assume we have always received a Client subclass for API1
                sp<Client> shimClient = reinterpret_cast<Client*>(client.get());
                String8 rawParams = shimClient->getParameters();
                CameraParameters params(rawParams);
    
                auto cameraState = getCameraState(cameraId);
                if (cameraState != nullptr) {
                    cameraState->setShimParams(params);
                }
            }
        } // lock is destroyed, allow further connect calls
        device = client;
        return ret;
    }
    

Creating the camera client that matches the HAL version:

    Status CameraService::makeClient(const sp<CameraService>& cameraService,
            const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
            int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
            int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
            /*out*/sp<BasicClient>* client) {
    
        if (halVersion < 0 || halVersion == deviceVersion) {
            // Default path: HAL version is unspecified by caller, create CameraClient
            // based on device version reported by the HAL.
            switch(deviceVersion) {
          // the HAL1 path
              case CAMERA_DEVICE_API_VERSION_1_0:
                if (effectiveApiLevel == API_1) {  // Camera1 API route
                    sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                    *client = new CameraClient(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                            facing, clientPid, clientUid, getpid(), legacyMode);
                } else { // Camera2 API route
                    ALOGW("Camera using old HAL version: %d", deviceVersion);
                    return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
                            "Camera device \"%s\" HAL version %d does not support camera2 API",
                            cameraId.string(), deviceVersion);
                }
                break;
          // the HAL3 path
              case CAMERA_DEVICE_API_VERSION_3_0:
              case CAMERA_DEVICE_API_VERSION_3_1:
              case CAMERA_DEVICE_API_VERSION_3_2:
              case CAMERA_DEVICE_API_VERSION_3_3:
              case CAMERA_DEVICE_API_VERSION_3_4:
                if (effectiveApiLevel == API_1) { // Camera1 API route
                    sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                    *client = new Camera2Client(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                            facing, clientPid, clientUid, servicePid, legacyMode);
                } else { // Camera2 API route
                    sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
                            static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
                    *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
                            facing, clientPid, clientUid, servicePid);
                }
                break;
              default:
                // Should not be reachable
                ALOGE("Unknown camera device HAL version: %d", deviceVersion);
                return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                        "Camera device \"%s\" has unknown HAL version %d",
                        cameraId.string(), deviceVersion);
            }
        } else {
            // A particular HAL version is requested by caller. Create CameraClient
            // based on the requested HAL version.
            if (deviceVersion > CAMERA_DEVICE_API_VERSION_1_0 &&
                halVersion == CAMERA_DEVICE_API_VERSION_1_0) {
                // Only support higher HAL version device opened as HAL1.0 device.
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new CameraClient(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                        facing, clientPid, clientUid, servicePid, legacyMode);
            } else {
                // Other combinations (e.g. HAL3.x open as HAL2.x) are not supported yet.
                ALOGE("Invalid camera HAL version %x: HAL %x device can only be"
                        " opened as HAL %x device", halVersion, deviceVersion,
                        CAMERA_DEVICE_API_VERSION_1_0);
                return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                        "Camera device \"%s\" (HAL version %d) cannot be opened as HAL version %d",
                        cameraId.string(), deviceVersion, halVersion);
            }
        }
        return Status::ok();
    }
    

    frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.h

    struct CameraDeviceClientBase :
             public CameraService::BasicClient,
             public hardware::camera2::BnCameraDeviceUser // server (Bn) side of ICameraDeviceUser
    {
        typedef hardware::camera2::ICameraDeviceCallbacks TCamCallbacks; // this callback lives in CameraDeviceImpl

        const sp<hardware::camera2::ICameraDeviceCallbacks>& getRemoteCallback() {
            return mRemoteCallback;
        }
        ......
    };
    

The CameraDeviceClient used for HAL3:

    class CameraDeviceClient :
            public Camera2ClientBase<CameraDeviceClientBase>,
            public camera2::FrameProcessorBase::FilteredListener
    {......}
    

CameraDeviceClient thus inherits CameraDeviceClientBase and, indirectly, BnCameraDeviceUser:

    template <typename TClientBase>
    class Camera2ClientBase :
            public TClientBase,
            public CameraDeviceBase::NotificationListener
    {......}
    

CameraDeviceImpl holds the Binder object of the remote mRemoteDevice:

        public void setRemoteDevice(ICameraDeviceUser remoteDevice) throws CameraAccessException {
            synchronized(mInterfaceLock) {
                mRemoteDevice = new ICameraDeviceUserWrapper(remoteDevice); // wrap it

                IBinder remoteDeviceBinder = remoteDevice.asBinder(); // the remote Binder service; under HAL3 this is CameraDeviceClient
                ......

                mDeviceHandler.post(mCallOnOpened);
                mDeviceHandler.post(mCallOnUnconfigured);
            }
        }
    

This remote callback ties CameraDevice to CameraDeviceClient and handles the messages coming from CameraDeviceClient:

    public class CameraDeviceCallbacks extends ICameraDeviceCallbacks.Stub {......}
    

5. CameraDeviceClient and Camera3Device

Back to the earlier err = client->initialize(mCameraProviderManager):

    status_t CameraDeviceClient::initialize(sp<CameraProviderManager> manager) {
        return initializeImpl(manager);
    }
    
    template<typename TProviderPtr>
    status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr) {
        ATRACE_CALL();
        status_t res;
    
        res = Camera2ClientBase::initialize(providerPtr); // delegate to the base-class initialization
        if (res != OK) {
            return res;
        }
        ......
        return OK;
    }
    

    frameworks/av/services/camera/libcameraservice/common/Camera2ClientBase.cpp

    template <typename TClientBase>
    status_t Camera2ClientBase<TClientBase>::initialize(sp<CameraProviderManager> manager) {
        return initializeImpl(manager);
    }
    
    template <typename TClientBase>
    template <typename TProviderPtr>
    status_t Camera2ClientBase<TClientBase>::initializeImpl(TProviderPtr providerPtr) {
        ATRACE_CALL();
        ALOGV("%s: Initializing client for camera %s", __FUNCTION__,
              TClientBase::mCameraIdStr.string());
        status_t res;
    
        // Verify ops permissions
        res = TClientBase::startCameraOps();
        if (res != OK) {
            return res;
        }
    
        if (mDevice == NULL) {
            ALOGE("%s: Camera %s: No device connected",
                    __FUNCTION__, TClientBase::mCameraIdStr.string());
            return NO_INIT;
        }
    
        res = mDevice->initialize(providerPtr); // initialize the concrete device
        if (res != OK) {
            ALOGE("%s: Camera %s: unable to initialize device: %s (%d)",
                    __FUNCTION__, TClientBase::mCameraIdStr.string(), strerror(-res), res);
            return res;
        }
    
        wp<CameraDeviceBase::NotificationListener> weakThis(this);
        res = mDevice->setNotifyCallback(weakThis);
    
        return OK;
    }
    

    frameworks/av/services/camera/libcameraservice/common/Camera2ClientBase.h

     sp<CameraDeviceBase>  mDevice;
    

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.h

    class Camera3Device :
                public CameraDeviceBase, // inherits CameraDeviceBase
                virtual public hardware::camera::device::V3_2::ICameraDeviceCallback,
                private camera3_callback_ops {......}
    

    frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

    status_t Camera3Device::initialize(sp<CameraProviderManager> manager) {
        ......
        sp<ICameraDeviceSession> session;
        ATRACE_BEGIN("CameraHal::openSession");
        status_t res = manager->openSession(mId.string(), this, // open a session to the device
                /*out*/ &session);
        res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo); // fetch the static camera info
        std::shared_ptr<RequestMetadataQueue> queue;
        auto requestQueueRet = session->getCaptureRequestMetadataQueue( // capture-request metadata queue
            [&queue](const auto& descriptor) {
                queue = std::make_shared<RequestMetadataQueue>(descriptor);
                if (!queue->isValid() || queue->availableToWrite() <= 0) {
                    ALOGE("HAL returns empty request metadata fmq, not use it");
                    queue = nullptr;
                    // don't use the queue onwards.
                }
            });

        std::unique_ptr<ResultMetadataQueue>& resQueue = mResultMetadataQueue;
        auto resultQueueRet = session->getCaptureResultMetadataQueue( // capture-result metadata queue
            [&resQueue](const auto& descriptor) {
                resQueue = std::make_unique<ResultMetadataQueue>(descriptor);
                if (!resQueue->isValid() || resQueue->availableToWrite() <= 0) {
                    ALOGE("HAL returns empty result metadata fmq, not use it");
                    resQueue = nullptr;
                    // Don't use the resQueue onwards.
                }
            });
        IF_ALOGV() {
            session->interfaceChain([](
                ::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
                    ALOGV("Session interface chain:");
                    for (auto iface : interfaceChain) {
                        ALOGV("  %s", iface.c_str());
                    }
                });
        }

        mInterface = new HalInterface(session, queue); // create the HAL interface wrapper
        std::string providerType;
        mVendorTagId = manager->getProviderTagIdLocked(mId.string());

        return initializeCommonLocked();
    }
    

The blueprint of the new Camera architecture is now complete. We only followed the map from the Java layer down to the HAL; in truth, most of Camera's important parts live in its driver and algorithm layers, which we won't dig into for now. In the next article we'll analyze the Camera system from the angle of the old versus the new architecture.

Android Camera2 API and the Photo and Video Capture Flow


[Figure: Camera2 API overview]

Introduction

Android 5.0 introduced the new Camera2 API to replace the old Camera API.

Camera2 not only improves the Android system's photo-taking performance; it also supports RAW output and lets you control the camera's focus mode, exposure mode, shutter, and more.

The main API classes in Camera2

• CameraManager: the camera manager class, used to detect and open system cameras; getCameraCharacteristics(cameraId) returns a camera's characteristics.

• CameraCharacteristics: the camera characteristics class — e.g. whether autofocus, zoom, or flash is supported.

• CameraDevice: the camera device, comparable to the old Camera class.

• CameraCaptureSession: the session class used for preview and capture. Its setRepeatingRequest() method drives the preview; its capture() method triggers a photo or recording action.

• CameraRequest: a single capture request carrying the parameters that control preview and capture: focus mode, exposure mode, zoom, and so on (a sketch of how these classes cooperate follows).
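As a hedged, minimal sketch (assumptions, not code from this article): roughly how the five classes cooperate in an open-then-preview flow. context and previewSurface are assumed to exist, and permission checks, imports, and most error handling are omitted.

    void startPreview(Context context, Surface previewSurface) throws CameraAccessException {
        // CameraManager: detect and open a camera
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // first camera, for brevity

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice device) {
                try {
                    // CameraDevice -> CaptureRequest.Builder
                    CaptureRequest.Builder builder =
                            device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    builder.addTarget(previewSurface);
                    // CameraDevice -> CameraCaptureSession
                    device.createCaptureSession(Arrays.asList(previewSurface),
                            new CameraCaptureSession.StateCallback() {
                                @Override
                                public void onConfigured(CameraCaptureSession session) {
                                    try {
                                        // the repeating CaptureRequest drives the preview
                                        session.setRepeatingRequest(builder.build(), null, null);
                                    } catch (CameraAccessException e) { /* omitted */ }
                                }
                                @Override
                                public void onConfigureFailed(CameraCaptureSession session) { }
                            }, null);
                } catch (CameraAccessException e) { /* omitted */ }
            }
            @Override
            public void onDisconnected(CameraDevice device) { device.close(); }
            @Override
            public void onError(CameraDevice device, int error) { device.close(); }
        }, null);
    }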

Next, a closer look at the common classes and abstract classes in the Camera2 API.

The CameraManager class


    CameraCharacteristics cameraCharacteristics =manager.getCameraCharacteristics(cameraId);

The code above retrieves the camera's characteristics object: front or back facing, supported resolutions, and so on.

The CameraCharacteristics class


The camera characteristics class.

CameraCharacteristics is an object holding the camera's parameters; you read the values out of it by key.

Some commonly used keys (a query sketch follows the list):

• LENS_FACING: which way the camera faces. LENS_FACING_FRONT is the front camera; LENS_FACING_BACK is the back camera.

• SENSOR_ORIENTATION: the orientation the camera captures in.

• FLASH_INFO_AVAILABLE: whether a flash is available.

• SCALER_AVAILABLE_MAX_DIGITAL_ZOOM: the maximum digital zoom factor.

• LENS_INFO_MINIMUM_FOCUS_DISTANCE: the minimum focus distance; on some phones this value is null or 0.0. Most front cameras have a fixed focus that cannot be adjusted.

• INFO_SUPPORTED_HARDWARE_LEVEL: the degree to which the camera supports the API's features.

  The levels seen on phones:

  • INFO_SUPPORTED_HARDWARE_LEVEL_FULL: full hardware support — manual control of full-HD recording, burst mode, and other new features.

  • INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED: limited support; capabilities must be queried individually.

  • INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY: supported by every device — the feature set matches the deprecated Camera API.
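A small hedged sketch of querying these keys (manager and cameraId are assumed to come from the surrounding code):

    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

    Integer facing      = chars.get(CameraCharacteristics.LENS_FACING);
    Integer orientation = chars.get(CameraCharacteristics.SENSOR_ORIENTATION);
    Boolean hasFlash    = chars.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
    Float   maxZoom     = chars.get(CameraCharacteristics.SCALER_AVAILABLE_MAX_DIGITAL_ZOOM);
    Float   minFocus    = chars.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE);
    Integer hwLevel     = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);

    boolean isBack = facing != null && facing == CameraCharacteristics.LENS_FACING_BACK;
    // LENS_INFO_MINIMUM_FOCUS_DISTANCE may be null or 0.0 (fixed-focus lens)
    boolean fixedFocus = (minFocus == null || minFocus == 0.0f);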

The CameraDevice class


CameraDevice's createCaptureRequest(int templateType) method creates a CaptureRequest.Builder.

templateType can be one of the following (a short example follows the list):

• TEMPLATE_PREVIEW: preview.

• TEMPLATE_RECORD: video recording.

• TEMPLATE_STILL_CAPTURE: still photo capture.

• TEMPLATE_VIDEO_SNAPSHOT: a request for grabbing a still image while recording video.

• TEMPLATE_ZERO_SHUTTER_LAG: a request suited for zero shutter lag, maximizing image quality without degrading the preview frame rate.

• TEMPLATE_MANUAL: a basic capture request with all automatic controls disabled (auto-exposure, auto-white-balance, auto-focus).
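A short hedged sketch of building requests from two of these templates (device is an opened CameraDevice; previewSurface and jpegSurface are assumed to be among the session's configured outputs):

    // Preview request: targets the preview surface.
    CaptureRequest.Builder preview =
            device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    preview.addTarget(previewSurface);

    // Still-capture request: targets the JPEG surface, with one example parameter.
    CaptureRequest.Builder still =
            device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    still.addTarget(jpegSurface);
    still.set(CaptureRequest.CONTROL_AF_MODE,
            CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_PICTURE);

    CaptureRequest previewRequest = preview.build();
    CaptureRequest stillRequest   = still.build();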

The CameraDevice.StateCallback abstract class


This abstract class receives callbacks as the CameraDevice's state changes.

    /**
     * Called back when the camera device's state changes.
     */
    protected final CameraDevice.StateCallback stateCallback = new CameraDevice.StateCallback() {
        /**
         * Called when the camera has been opened.
         * @param cameraDevice
         */
        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {

            mCameraDevice = cameraDevice;
            startPreView();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {

            cameraDevice.close();
            mCameraDevice = null;
        }

        /**
         * Called when an error occurs.
         *
         * Release resources here, then close the UI.
         * @param cameraDevice
         * @param error
         */
        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            cameraDevice.close();
            mCameraDevice = null;

        }
        /**
         * Called when the camera has been closed.
         */
        @Override
        public void onClosed(@NonNull CameraDevice camera) {
            super.onClosed(camera);
        }
    };

The CameraCaptureSession.StateCallback abstract class


This abstract class receives state callbacks over the session's lifetime.

    public static abstract class StateCallback {

            // The camera has finished configuring and can handle capture requests.
            public abstract void onConfigured(@NonNull CameraCaptureSession session);

            // The camera configuration failed.
            public abstract void onConfigureFailed(@NonNull CameraCaptureSession session);

            // The camera is ready; no requests are currently pending.
            public void onReady(@NonNull CameraCaptureSession session) {}

            // The camera is busy processing requests.
            public void onActive(@NonNull CameraCaptureSession session) {}

            // The request queue is empty, ready to accept the next request.
            public void onCaptureQueueEmpty(@NonNull CameraCaptureSession session) {}

            // The session has been closed.
            public void onClosed(@NonNull CameraCaptureSession session) {}

            // The Surface is prepared.
            public void onSurfacePrepared(@NonNull CameraCaptureSession session,@NonNull Surface surface) {}

    }

Next, the step-by-step flow of taking a photo and recording video.


Usage flow:

1. Open the camera facing the desired direction

First obtain the CameraManager object; its getCameraIdList() returns the list of camera ids.

Loop over them and match the facing you want, e.g. the back camera.

     CameraManager manager = (CameraManager)getSystemService(Context.CAMERA_SERVICE);

    // iterate over the available cameras
    for (String cameraId : manager.getCameraIdList()) {

          // the parameter object of each camera: facing, resolutions, etc.
         CameraCharacteristics  cameraCharacteristics = manager.getCameraCharacteristics(cameraId);
         // which way this camera faces
         Integer facing = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING);

         if(facing==null){
             continue;
         }
         // match the facing: we want the back camera
         if(facing!=CameraCharacteristics.LENS_FACING_BACK){
              continue;
         }
         mCameraId = cameraId; // remember the matching id

         // open the selected camera
        manager.openCamera(mCameraId, stateCallback, workThreadManager.getBackgroundHandler());

        return;
    }

Of course, real-world code also needs to query what the camera supports (flash, zoom, manual focus, and so on) and configure parameters such as the preview Size; a sketch of choosing a preview size follows.
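A hedged sketch of one way to pick a preview size (cameraCharacteristics comes from the loop above; textureWidth and textureHeight are assumed to be the dimensions of the view being rendered into):

    StreamConfigurationMap map = cameraCharacteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size[] choices = map.getOutputSizes(SurfaceTexture.class);

    // Simplistic heuristic: the largest size that still fits the view.
    Size best = choices[0];
    for (Size s : choices) {
        if (s.getWidth() <= textureWidth && s.getHeight() <= textureHeight
                && s.getWidth() * s.getHeight() > best.getWidth() * best.getHeight()) {
            best = s;
        }
    }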

2. Create the preview

Create a CameraDevice.StateCallback object and open the camera; once the camera opens, the preview appears.

The CameraDevice.StateCallback object is passed as the second argument of CameraManager's openCamera(mCameraId, stateCallback, workThreadManager.getBackgroundHandler()) and listens for the camera's state.

    /**
      * The camera device.
      */
     protected CameraDevice mCameraDevice;


     /**
       * Called back when the camera device's state changes.
       */
     protected final CameraDevice.StateCallback stateCallback = new CameraDevice.StateCallback() {
            /**
             * Called when the camera has been opened.
             * @param cameraDevice
             */
            @Override
            public void onOpened(@NonNull CameraDevice cameraDevice) {
                mCameraDevice = cameraDevice;
                createCameraPreviewSession();
            }

           // some methods of this callback omitted
           ...............

     };

     /**
      * Builder for the preview request.
      */
     private CaptureRequest.Builder mPreviewRequestBuilder;


     /**
     * Start the camera preview by creating a CameraCaptureSession object.
     */
     private void createCameraPreviewSession() {

          // Tie the CaptureRequest builder to the Surface objects.
          mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);

          // Create a CameraCaptureSession for the camera preview.
          mCameraDevice.createCaptureSession(Arrays.asList(surface, imageReader.getSurface()), stateCallback, null);
     }

    With the preview session created, the preview frames need to be refreshed continuously.

    3. Keep refreshing the preview

    The preview is rendered into a TextureView. Create a CameraCaptureSession and hand it a repeating preview CaptureRequest; the session reuses that request to keep the preview updated.

     private CameraCaptureSession mCaptureSession;
    
     CameraCaptureSession.StateCallback stateCallback=new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
    
                    //the session is configured; start showing the preview
                        mCaptureSession = cameraCaptureSession;
                        setCameraCaptureSession();
                    }
    
                    //remaining methods of this interface omitted
                    .......
     }
    
     /**
      * Configure the CameraCaptureSession:
      * <p>
      * auto-focus, flash
      */
     private void setCameraCaptureSession() {
    
         //configure the preview via mPreviewRequestBuilder.set(), e.g. flash, zoom
         ..........
    
          //hand the session a repeating CaptureRequest to keep the preview refreshed
         mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, workThreadManager.getBackgroundHandler()); 
     }

    As long as no photo or recording action has started, this repeating CaptureRequest keeps refreshing the preview.

    The app then waits for the user to press the photo button or the record button.

    4. Taking a picture

    First lock the focus by updating the CaptureRequest used for the preview. Then, in a similar way, run a precapture (exposure metering) sequence. At that point the camera is ready: create a new CaptureRequest and take the picture. (A sketch of the lock-focus and precapture steps follows the code below.)

      /**
         * Capture a still picture.
         * Called from the CaptureCallback once focus and exposure are ready.
         * <p>
         * If digital zoom is in use, the same crop must be applied when writing the image.
         */
    private void captureStillPicture() {
        try {
    
                // build a still-capture CaptureRequest.Builder
                final CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    
                 captureBuilder.addTarget(imageReader.getSurface());
    
                //set the various capture parameters (omitted here)
                ...........
    
                //stop the repeating preview first
                mCaptureSession.stopRepeating();
                mCaptureSession.abortCaptures();
    
                //issue the capture
                mCaptureSession.capture(captureBuilder.build(), captureCallback, null);
        } catch (CameraAccessException e) {
                e.printStackTrace();
        }
    }
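
    The lock-focus and precapture steps mentioned above are usually driven by a small state machine inside the CaptureCallback. A minimal sketch (mState and the STATE_* constants are assumptions for illustration):

    private void lockFocus() {
        try {
            // trigger an auto-focus lock on the preview request
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                    CameraMetadata.CONTROL_AF_TRIGGER_START);
            mState = STATE_WAITING_LOCK; // mCaptureCallback watches for AF convergence
            mCaptureSession.capture(mPreviewRequestBuilder.build(),
                    mCaptureCallback, workThreadManager.getBackgroundHandler());
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void runPrecaptureSequence() {
        try {
            // trigger auto-exposure precapture metering
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                    CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
            mState = STATE_WAITING_PRECAPTURE;
            mCaptureSession.capture(mPreviewRequestBuilder.build(),
                    mCaptureCallback, workThreadManager.getBackgroundHandler());
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    When the callback observes that focus is locked and exposure has converged, it calls captureStillPicture() shown above.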

    At this point the captured data only lives in memory; to get a picture file on disk, an ImageReader receives the data and writes it out.

    First create the ImageReader and set its listener and parameters:

        /**
         * Handles still-image output
         */
        private ImageReader imageReader;
    
          //for stills, use the largest available size (map is the StreamConfigurationMap from CameraCharacteristics)
          Size largest = Collections.max(Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)), new CompareSizeByArea());
          //configure the ImageReader with that size and the JPEG format
          imageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, /*maxImages*/2);
          imageReader.setOnImageAvailableListener(onImageAvailableListener, workThreadManager.getBackgroundHandler());    
    

    Next, add the ImageReader's surface as a target of the capture request: captureBuilder.addTarget(imageReader.getSurface());

    Finally, when the capture completes, this listener is invoked:

       /**
         * Callback of the ImageReader.
         * <p>
         * When onImageAvailable fires, the capture is done and the image is ready
         * to be saved, typically to a file on disk.
         */
        protected final ImageReader.OnImageAvailableListener onImageAvailableListener = (ImageReader reader)
                -> writePictureData(reader.acquireNextImage());
    
    
        public void writePictureData(Image image) {
            if (camera2ResultCallBack != null) {
                camera2ResultCallBack.callBack(ObservableBuilder.createWriteCaptureImage(appContext, image));
            }
        }        
    
        /**
         * Write the JPEG data to a file on disk
         *
         * @param context
         * @param mImage
         * @return
         */
        public static Observable<String> createWriteCaptureImage(final Context context, final Image mImage) {
            Observable<String> observable = Observable.create(subscriber -> {
                File file = FileUtils.createPictureDiskFile(context, FileUtils.createBitmapFileName());
                ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                FileOutputStream output = null;
                try {
                    output = new FileOutputStream(file);
                    output.write(bytes);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    mImage.close();
                    if (null != output) {
                        try {
                            output.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }
                subscriber.onNext(file.getAbsolutePath());
            });
            return observable;
        }
    

    RxJava + RxAndroid is used here for asynchronous delivery, which avoids piling up callback interfaces.

    5. Recording video

    Recording is a long-running action, and the recording surface must be refreshed repeatedly while it runs. The remaining steps closely mirror the photo flow.

     /**
       * Start video recording.
       */
    private void startRecordingVideo() {
        try {
                //build the capture request for the recording session
                mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
    
                //set the recording parameters (omitted here)
                .........
    
                // Start a capture session
                // Once the session starts, we can update the UI and start recording
                mCameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
    
                         mPreviewSession = cameraCaptureSession;
                         Log.i(TAG, " startRecordingVideo: recording started ");
                         updatePreview();
                    }
                    //remaining methods of this callback omitted
                    .............
    
                }, workThreadManager.getBackgroundHandler());
            } catch (CameraAccessException | IOException e) {
                e.printStackTrace();
            }
    }
    
    //keep refreshing the recording surface while recording
    private void updatePreview() {
    
            try {
                mPreviewSession.setRepeatingRequest(mPreviewBuilder.build(), null, workThreadManager.getBackgroundHandler());
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
    }

    As with stills, writing the video data to a disk file is handled by a dedicated class, MediaRecorder.

    First create it and configure its parameters:

        /**
         * MediaRecorder
         */
    private MediaRecorder mMediaRecorder;
    
     /**
         * Configure the media recorder:
         * <p>
         * audio/video sources, output format, file path, bit rate, encoders, ...
         *
         * @throws IOException
         */
        private void setUpMediaRecorder() throws IOException {
    
            mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
            mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            mNextVideoAbsolutePath = FileUtils.createVideoDiskFile(appContext, FileUtils.createVideoFileName()).getAbsolutePath();
            mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
            mMediaRecorder.setVideoEncodingBitRate(10000000);
            //30 frames per second
            mMediaRecorder.setVideoFrameRate(30);
            mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
            mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            switch (mSensorOrientation) {
                case SENSOR_ORIENTATION_DEFAULT_DEGREES:
                    mMediaRecorder.setOrientationHint(DEFAULT_ORIENTATIONS.get(rotation));
                    break;
                case SENSOR_ORIENTATION_INVERSE_DEGREES:
                    mMediaRecorder.setOrientationHint(ORIENTATIONS.get(rotation));
                    break;
                default:
                    break;
            }
            mMediaRecorder.prepare();
        }

    The recorder then writes data to the file continuously as the video is captured.

    // give the MediaRecorder's Surface to the capture request
    Surface recorderSurface = mMediaRecorder.getSurface();
    surfaces.add(recorderSurface);
    mPreviewBuilder.addTarget(recorderSurface);

    Finally, when recording ends, stop the output:

     // stop recording
     mMediaRecorder.stop();
     mMediaRecorder.reset();

    6. Returning to the preview

    After the photo or recording action completes, go back to the preview.

    /**
      * Unlock the focus after a photo or recording action completes.
      */
    private void unlockFocus() {
       try {
             //re-issue the repeating preview request so the preview comes back
             mCaptureSession.setRepeatingRequest(mPreviewRequest, mCaptureCallback, workThreadManager.getBackgroundHandler());
        } catch (CameraAccessException e) {
                e.printStackTrace();
        }
    }
    

    There are also the camera close path and the binding to the Activity lifecycle, which are not covered in detail here; a minimal sketch of the close path follows.
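
    A minimal sketch of releasing the camera resources, using the field names from above (calling it from onPause is an assumption that ties the camera to the Activity lifecycle):

    private void closeCamera() {
        if (mCaptureSession != null) {
            mCaptureSession.close();
            mCaptureSession = null;
        }
        if (mCameraDevice != null) {
            mCameraDevice.close();
            mCameraDevice = null;
        }
        if (imageReader != null) {
            imageReader.close();
            imageReader = null;
        }
        if (mMediaRecorder != null) {
            mMediaRecorder.release();
            mMediaRecorder = null;
        }
    }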

    The code above comes from the Camera2App project.

    The next article will show how to build a complete camera application.


  • The Camera System in Android

    The Camera System in Android

    I. A brief introduction to the camera in Android

    1. Camera

    1.1 First look at the camera

    • The camera module, in full Camera Compact Module (CCM below), is the electronic component at the heart of image capture.

    [Figure: a camera compact module (CCM)]

    1.2 Camera hardware

    • CCM components

    [Figure: the components of a CCM]

    1.3 How the camera works

    • Light gathered by the lens falls on a CMOS or CCD sensor, which converts the optical signal into an electrical one. An internal image signal processor (ISP) turns that into a digital image signal, which a digital signal processor (DSP) then processes into standard image formats such as RGB or YUV.

    1.4 Camera image formats

    • RGB: each pixel is encoded as three components giving the intensity of red (R), green (G) and blue (B).
    • YUV: "Y" is the luminance (luma), i.e. the grayscale value; "U" and "V" are the chrominance (chroma), describing the color and saturation of the pixel.
    • RAW data: the raw sensor readout, i.e. the digitized electrical levels from the CCD/CMOS with no processing applied at all.
    • YCbCr: Y is the luma component, Cb the blue-difference chroma and Cr the red-difference chroma. The eye is far more sensitive to Y than to chroma, so the chroma components can be subsampled with barely visible loss. The common subsampling layouts are YCbCr 4:2:0, YCbCr 4:2:2 and YCbCr 4:4:4. (A small size calculation for these formats follows this list.)
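
    To make the bytes-per-pixel differences concrete, a small worked example (a sketch; the numbers follow directly from each format's layout):

    #include <stdio.h>
    
    /* Bytes needed for one 640x480 frame in a few common formats. */
    int main(void)
    {
        int w = 640, h = 480;
        printf("RGB24  : %d bytes\n", w * h * 3);      /* 3 bytes per pixel         */
        printf("YUYV   : %d bytes\n", w * h * 2);      /* 4:2:2 packed, 2 B/pixel   */
        printf("YUV420 : %d bytes\n", w * h * 3 / 2);  /* 4:2:0 planar, 1.5 B/pixel */
        return 0;
    }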

    1.5 Camera resolution

    • Resolution is the number of pixels displayed. Common resolutions are shown below:

    [Figure: table of common resolutions]

    1.6 Camera data rate

    • The data coming off the sensor is usually processed by a dedicated chip into a video stream, for which many formats exist: MPEG (Motion Picture Experts Group), AVI (Audio Video Interleaved), MOV (the QuickTime movie format), ASF (Advanced Streaming Format), WMV (Windows Media Video), 3GP (a 3G streaming format), FLV (Flash Video), RM and RMVB, among others.
    • The speed at which the video stream is delivered is the frame rate; it mainly affects burst shooting and video capture. The higher the rate, the smoother the video. Common rates are 15 fps, 30 fps, 60 fps and 120 fps.
    • The achievable frame rate is tied to the image resolution: the lower the resolution, the higher the rate. For example, a camera that manages 30 fps at CIF (352x288) may only reach about 10 fps at VGA (640x480), so the frame rate is chosen together with the resolution. For a raw feel of the bandwidth involved: 640x480 YUYV at 30 fps is 640*480*2*30 ≈ 18.4 MB/s. For phone use, 30 fps is generally smooth enough.

    2. V4L2

    2.1 The V4L2 framework

    • V4L2, short for "video for Linux two", is the Linux kernel API for video devices: opening and closing such devices, and capturing and processing the audio/video data they produce.

    • A few V4L2 documents are essential reading:

      V4L2-framework.txt and the videobuf notes under Documentation/video4linux, together with the official V4L2 API Specification;

      vivi.c under drivers/media/video (a virtual video driver whose code emulates a real video device against the V4L2 API).

    2.2 V4L2 interfaces

    V4L2 supports several kinds of devices through the following interfaces:

    • Video capture interface: the device is a tuner or a camera; this is what V4L2 was originally designed for.
    • Video output interface: drives peripheral video output hardware, such as devices that emit a TV-format signal.
    • Video overlay interface: routes the signal from a capture device directly to an output device, without going through the CPU.
    • Vertical blanking interval (VBI) interface: lets applications access data carried during the blanking interval of the video signal.
    • Radio interface: handles audio streams received from AM or FM tuner hardware.

    II. Analysis of a virtual camera driver

    1. The virtual camera driver vivi.c

    • Build /drivers/media/video/vivi.c to produce vivi.ko.

    • Load the driver with modprobe vivi (modprobe also loads the modules that vivi.ko depends on).

    • Test the resulting virtual video0 device with the xawtv tool.

    [Figure: xawtv previewing the virtual /dev/video0 device]

    2. The vivi source code

    /*
     * Virtual Video driver - This code emulates a real video device with v4l2 api
     *
     * Copyright (c) 2006 by:
     *      Mauro Carvalho Chehab <mchehab--a.t--infradead.org>
     *      Ted Walther <ted--a.t--enumera.com>
     *      John Sokol <sokol--a.t--videotechnology.com>
     *      http://v4l.videotechnology.com/
     *
     *      Conversion to videobuf2 by Pawel Osciak & Marek Szyprowski
     *      Copyright (c) 2010 Samsung Electronics
     *
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the BSD Licence, GNU General Public License
     * as published by the Free Software Foundation; either version 2 of the
     * License, or (at your option) any later version
     */
    #include <linux/module.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/sched.h>
    #include <linux/slab.h>
    #include <linux/font.h>
    #include <linux/mutex.h>
    #include <linux/videodev2.h>
    #include <linux/kthread.h>
    #include <linux/freezer.h>
    #include <media/videobuf2-vmalloc.h>
    #include <media/v4l2-device.h>
    #include <media/v4l2-ioctl.h>
    #include <media/v4l2-ctrls.h>
    #include <media/v4l2-fh.h>
    #include <media/v4l2-event.h>
    #include <media/v4l2-common.h>
    
    #define VIVI_MODULE_NAME "vivi"
    
    /* Wake up at about 30 fps */
    #define WAKE_NUMERATOR 30
    #define WAKE_DENOMINATOR 1001
    #define BUFFER_TIMEOUT     msecs_to_jiffies(500)  /* 0.5 seconds */
    
    #define MAX_WIDTH 1920
    #define MAX_HEIGHT 1200
    
    #define VIVI_VERSION "0.8.1"
    
    MODULE_DESCRIPTION("Video Technology Magazine Virtual Video Capture Board");
    MODULE_AUTHOR("Mauro Carvalho Chehab, Ted Walther and John Sokol");
    MODULE_LICENSE("Dual BSD/GPL");
    MODULE_VERSION(VIVI_VERSION);
    
    static unsigned video_nr = -1;
    module_param(video_nr, uint, 0644);
    MODULE_PARM_DESC(video_nr, "videoX start number, -1 is autodetect");
    
    static unsigned n_devs = 1;
    module_param(n_devs, uint, 0644);
    MODULE_PARM_DESC(n_devs, "number of video devices to create");
    
    static unsigned debug;
    module_param(debug, uint, 0644);
    MODULE_PARM_DESC(debug, "activates debug info");
    
    static unsigned int vid_limit = 16;
    module_param(vid_limit, uint, 0644);
    MODULE_PARM_DESC(vid_limit, "capture memory limit in megabytes");
    
    /* Global font descriptor */
    static const u8 *font8x16;
    
    #define dprintk(dev, level, fmt, arg...) \
    	v4l2_dbg(level, debug, &dev->v4l2_dev, fmt, ## arg)
    
    /* ------------------------------------------------------------------
    	Basic structures
       ------------------------------------------------------------------*/
    
    struct vivi_fmt {
    	char  *name;
    	u32   fourcc;          /* v4l2 format id */
    	int   depth;
    };
    
    static struct vivi_fmt formats[] = {
    	{
    		.name     = "4:2:2, packed, YUYV",
    		.fourcc   = V4L2_PIX_FMT_YUYV,
    		.depth    = 16,
    	},
    	{
    		.name     = "4:2:2, packed, UYVY",
    		.fourcc   = V4L2_PIX_FMT_UYVY,
    		.depth    = 16,
    	},
    	{
    		.name     = "4:2:2, packed, YVYU",
    		.fourcc   = V4L2_PIX_FMT_YVYU,
    		.depth    = 16,
    	},
    	{
    		.name     = "4:2:2, packed, VYUY",
    		.fourcc   = V4L2_PIX_FMT_VYUY,
    		.depth    = 16,
    	},
    	{
    		.name     = "RGB565 (LE)",
    		.fourcc   = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
    		.depth    = 16,
    	},
    	{
    		.name     = "RGB565 (BE)",
    		.fourcc   = V4L2_PIX_FMT_RGB565X, /* rrrrrggg gggbbbbb */
    		.depth    = 16,
    	},
    	{
    		.name     = "RGB555 (LE)",
    		.fourcc   = V4L2_PIX_FMT_RGB555, /* gggbbbbb arrrrrgg */
    		.depth    = 16,
    	},
    	{
    		.name     = "RGB555 (BE)",
    		.fourcc   = V4L2_PIX_FMT_RGB555X, /* arrrrrgg gggbbbbb */
    		.depth    = 16,
    	},
    	{
    		.name     = "RGB24 (LE)",
    		.fourcc   = V4L2_PIX_FMT_RGB24, /* rgb */
    		.depth    = 24,
    	},
    	{
    		.name     = "RGB24 (BE)",
    		.fourcc   = V4L2_PIX_FMT_BGR24, /* bgr */
    		.depth    = 24,
    	},
    	{
    		.name     = "RGB32 (LE)",
    		.fourcc   = V4L2_PIX_FMT_RGB32, /* argb */
    		.depth    = 32,
    	},
    	{
    		.name     = "RGB32 (BE)",
    		.fourcc   = V4L2_PIX_FMT_BGR32, /* bgra */
    		.depth    = 32,
    	},
    };
    
    static struct vivi_fmt *get_format(struct v4l2_format *f)
    {
    	struct vivi_fmt *fmt;
    	unsigned int k;
    
    	for (k = 0; k < ARRAY_SIZE(formats); k++) {
    		fmt = &formats[k];
    		if (fmt->fourcc == f->fmt.pix.pixelformat)
    			break;
    	}
    
    	if (k == ARRAY_SIZE(formats))
    		return NULL;
    
    	return &formats[k];
    }
    
    /* buffer for one video frame */
    struct vivi_buffer {
    	/* common v4l buffer stuff -- must be first */
    	struct vb2_buffer	vb;
    	struct list_head	list;
    	struct vivi_fmt        *fmt;
    };
    
    struct vivi_dmaqueue {
    	struct list_head       active;
    
    	/* thread for generating video stream*/
    	struct task_struct         *kthread;
    	wait_queue_head_t          wq;
    	/* Counters to control fps rate */
    	int                        frame;
    	int                        ini_jiffies;
    };
    
    static LIST_HEAD(vivi_devlist);
    
    struct vivi_dev {
    	struct list_head           vivi_devlist;
    	struct v4l2_device 	   v4l2_dev;
    	struct v4l2_ctrl_handler   ctrl_handler;
    
    	/* controls */
    	struct v4l2_ctrl	   *brightness;
    	struct v4l2_ctrl	   *contrast;
    	struct v4l2_ctrl	   *saturation;
    	struct v4l2_ctrl	   *hue;
    	struct {
    		/* autogain/gain cluster */
    		struct v4l2_ctrl	   *autogain;
    		struct v4l2_ctrl	   *gain;
    	};
    	struct v4l2_ctrl	   *volume;
    	struct v4l2_ctrl	   *alpha;
    	struct v4l2_ctrl	   *button;
    	struct v4l2_ctrl	   *boolean;
    	struct v4l2_ctrl	   *int32;
    	struct v4l2_ctrl	   *int64;
    	struct v4l2_ctrl	   *menu;
    	struct v4l2_ctrl	   *string;
    	struct v4l2_ctrl	   *bitmask;
    	struct v4l2_ctrl	   *int_menu;
    
    	spinlock_t                 slock;
    	struct mutex		   mutex;
    
    	/* various device info */
    	struct video_device        *vfd;
    
    	struct vivi_dmaqueue       vidq;
    
    	/* Several counters */
    	unsigned 		   ms;
    	unsigned long              jiffies;
    	unsigned		   button_pressed;
    
    	int			   mv_count;	/* Controls bars movement */
    
    	/* Input Number */
    	int			   input;
    
    	/* video capture */
    	struct vivi_fmt            *fmt;
    	unsigned int               width, height;
    	struct vb2_queue	   vb_vidq;
    	enum v4l2_field		   field;
    	unsigned int		   field_count;
    
    	u8			   bars[9][3];
    	u8			   line[MAX_WIDTH * 8];
    	unsigned int		   pixelsize;
    	u8			   alpha_component;
    };
    
    /* ------------------------------------------------------------------
    	DMA and thread functions
       ------------------------------------------------------------------*/
    
    /* Bars and Colors should match positions */
    
    enum colors {
    	WHITE,
    	AMBER,
    	CYAN,
    	GREEN,
    	MAGENTA,
    	RED,
    	BLUE,
    	BLACK,
    	TEXT_BLACK,
    };
    
    /* R   G   B */
    #define COLOR_WHITE	{204, 204, 204}
    #define COLOR_AMBER	{208, 208,   0}
    #define COLOR_CYAN	{  0, 206, 206}
    #define	COLOR_GREEN	{  0, 239,   0}
    #define COLOR_MAGENTA	{239,   0, 239}
    #define COLOR_RED	{205,   0,   0}
    #define COLOR_BLUE	{  0,   0, 255}
    #define COLOR_BLACK	{  0,   0,   0}
    
    struct bar_std {
    	u8 bar[9][3];
    };
    
    /* Maximum number of bars are 10 - otherwise, the input print code
       should be modified */
    static struct bar_std bars[] = {
    	{	/* Standard ITU-R color bar sequence */
    		{ COLOR_WHITE, COLOR_AMBER, COLOR_CYAN, COLOR_GREEN,
    		  COLOR_MAGENTA, COLOR_RED, COLOR_BLUE, COLOR_BLACK, COLOR_BLACK }
    	}, {
    		{ COLOR_WHITE, COLOR_AMBER, COLOR_BLACK, COLOR_WHITE,
    		  COLOR_AMBER, COLOR_BLACK, COLOR_WHITE, COLOR_AMBER, COLOR_BLACK }
    	}, {
    		{ COLOR_WHITE, COLOR_CYAN, COLOR_BLACK, COLOR_WHITE,
    		  COLOR_CYAN, COLOR_BLACK, COLOR_WHITE, COLOR_CYAN, COLOR_BLACK }
    	}, {
    		{ COLOR_WHITE, COLOR_GREEN, COLOR_BLACK, COLOR_WHITE,
    		  COLOR_GREEN, COLOR_BLACK, COLOR_WHITE, COLOR_GREEN, COLOR_BLACK }
    	},
    };
    
    #define NUM_INPUTS ARRAY_SIZE(bars)
    
    #define TO_Y(r, g, b) \
    	(((16829 * r + 33039 * g + 6416 * b  + 32768) >> 16) + 16)
    /* RGB to  V(Cr) Color transform */
    #define TO_V(r, g, b) \
    	(((28784 * r - 24103 * g - 4681 * b  + 32768) >> 16) + 128)
    /* RGB to  U(Cb) Color transform */
    #define TO_U(r, g, b) \
    	(((-9714 * r - 19070 * g + 28784 * b + 32768) >> 16) + 128)
    
    /* precalculate color bar values to speed up rendering */
    static void precalculate_bars(struct vivi_dev *dev)
    {
    	u8 r, g, b;
    	int k, is_yuv;
    
    	for (k = 0; k < 9; k++) {
    		r = bars[dev->input].bar[k][0];
    		g = bars[dev->input].bar[k][1];
    		b = bars[dev->input].bar[k][2];
    		is_yuv = 0;
    
    		switch (dev->fmt->fourcc) {
    		case V4L2_PIX_FMT_YUYV:
    		case V4L2_PIX_FMT_UYVY:
    		case V4L2_PIX_FMT_YVYU:
    		case V4L2_PIX_FMT_VYUY:
    			is_yuv = 1;
    			break;
    		case V4L2_PIX_FMT_RGB565:
    		case V4L2_PIX_FMT_RGB565X:
    			r >>= 3;
    			g >>= 2;
    			b >>= 3;
    			break;
    		case V4L2_PIX_FMT_RGB555:
    		case V4L2_PIX_FMT_RGB555X:
    			r >>= 3;
    			g >>= 3;
    			b >>= 3;
    			break;
    		case V4L2_PIX_FMT_RGB24:
    		case V4L2_PIX_FMT_BGR24:
    		case V4L2_PIX_FMT_RGB32:
    		case V4L2_PIX_FMT_BGR32:
    			break;
    		}
    
    		if (is_yuv) {
    			dev->bars[k][0] = TO_Y(r, g, b);	/* Luma */
    			dev->bars[k][1] = TO_U(r, g, b);	/* Cb */
    			dev->bars[k][2] = TO_V(r, g, b);	/* Cr */
    		} else {
    			dev->bars[k][0] = r;
    			dev->bars[k][1] = g;
    			dev->bars[k][2] = b;
    		}
    	}
    }
    
    #define TSTAMP_MIN_Y	24
    #define TSTAMP_MAX_Y	(TSTAMP_MIN_Y + 15)
    #define TSTAMP_INPUT_X	10
    #define TSTAMP_MIN_X	(54 + TSTAMP_INPUT_X)
    
    /* 'odd' is true for pixels 1, 3, 5, etc. and false for pixels 0, 2, 4, etc. */
    static void gen_twopix(struct vivi_dev *dev, u8 *buf, int colorpos, bool odd)
    {
    	u8 r_y, g_u, b_v;
    	u8 alpha = dev->alpha_component;
    	int color;
    	u8 *p;
    
    	r_y = dev->bars[colorpos][0]; /* R or precalculated Y */
    	g_u = dev->bars[colorpos][1]; /* G or precalculated U */
    	b_v = dev->bars[colorpos][2]; /* B or precalculated V */
    
    	for (color = 0; color < dev->pixelsize; color++) {
    		p = buf + color;
    
    		switch (dev->fmt->fourcc) {
    		case V4L2_PIX_FMT_YUYV:
    			switch (color) {
    			case 0:
    				*p = r_y;
    				break;
    			case 1:
    				*p = odd ? b_v : g_u;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_UYVY:
    			switch (color) {
    			case 0:
    				*p = odd ? b_v : g_u;
    				break;
    			case 1:
    				*p = r_y;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_YVYU:
    			switch (color) {
    			case 0:
    				*p = r_y;
    				break;
    			case 1:
    				*p = odd ? g_u : b_v;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_VYUY:
    			switch (color) {
    			case 0:
    				*p = odd ? g_u : b_v;
    				break;
    			case 1:
    				*p = r_y;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB565:
    			switch (color) {
    			case 0:
    				*p = (g_u << 5) | b_v;
    				break;
    			case 1:
    				*p = (r_y << 3) | (g_u >> 3);
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB565X:
    			switch (color) {
    			case 0:
    				*p = (r_y << 3) | (g_u >> 3);
    				break;
    			case 1:
    				*p = (g_u << 5) | b_v;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB555:
    			switch (color) {
    			case 0:
    				*p = (g_u << 5) | b_v;
    				break;
    			case 1:
    				*p = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB555X:
    			switch (color) {
    			case 0:
    				*p = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
    				break;
    			case 1:
    				*p = (g_u << 5) | b_v;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB24:
    			switch (color) {
    			case 0:
    				*p = r_y;
    				break;
    			case 1:
    				*p = g_u;
    				break;
    			case 2:
    				*p = b_v;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_BGR24:
    			switch (color) {
    			case 0:
    				*p = b_v;
    				break;
    			case 1:
    				*p = g_u;
    				break;
    			case 2:
    				*p = r_y;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_RGB32:
    			switch (color) {
    			case 0:
    				*p = alpha;
    				break;
    			case 1:
    				*p = r_y;
    				break;
    			case 2:
    				*p = g_u;
    				break;
    			case 3:
    				*p = b_v;
    				break;
    			}
    			break;
    		case V4L2_PIX_FMT_BGR32:
    			switch (color) {
    			case 0:
    				*p = b_v;
    				break;
    			case 1:
    				*p = g_u;
    				break;
    			case 2:
    				*p = r_y;
    				break;
    			case 3:
    				*p = alpha;
    				break;
    			}
    			break;
    		}
    	}
    }
    
    static void precalculate_line(struct vivi_dev *dev)
    {
    	int w;
    
    	for (w = 0; w < dev->width * 2; w++) {
    		int colorpos = w / (dev->width / 8) % 8;
    
    		gen_twopix(dev, dev->line + w * dev->pixelsize, colorpos, w & 1);
    	}
    }
    
    static void gen_text(struct vivi_dev *dev, char *basep,
    					int y, int x, char *text)
    {
    	int line;
    
    	/* Checks if it is possible to show string */
    	if (y + 16 >= dev->height || x + strlen(text) * 8 >= dev->width)
    		return;
    
    	/* Print stream time */
    	for (line = y; line < y + 16; line++) {
    		int j = 0;
    		char *pos = basep + line * dev->width * dev->pixelsize + x * dev->pixelsize;
    		char *s;
    
    		for (s = text; *s; s++) {
    			u8 chr = font8x16[*s * 16 + line - y];
    			int i;
    
    			for (i = 0; i < 7; i++, j++) {
    				/* Draw white font on black background */
    				if (chr & (1 << (7 - i)))
    					gen_twopix(dev, pos + j * dev->pixelsize, WHITE, (x+y) & 1);
    				else
    					gen_twopix(dev, pos + j * dev->pixelsize, TEXT_BLACK, (x+y) & 1);
    			}
    		}
    	}
    }
    
    static void vivi_fillbuff(struct vivi_dev *dev, struct vivi_buffer *buf)
    {
    	int wmax = dev->width;
    	int hmax = dev->height;
    	struct timeval ts;
    	void *vbuf = vb2_plane_vaddr(&buf->vb, 0);
    	unsigned ms;
    	char str[100];
    	int h, line = 1;
    	s32 gain;
    
    	if (!vbuf)
    		return;
    
    	for (h = 0; h < hmax; h++)
    		memcpy(vbuf + h * wmax * dev->pixelsize,
    		       dev->line + (dev->mv_count % wmax) * dev->pixelsize,
    		       wmax * dev->pixelsize);
    
    	/* Updates stream time */
    
    	dev->ms += jiffies_to_msecs(jiffies - dev->jiffies);
    	dev->jiffies = jiffies;
    	ms = dev->ms;
    	snprintf(str, sizeof(str), " %02d:%02d:%02d:%03d ",
    			(ms / (60 * 60 * 1000)) % 24,
    			(ms / (60 * 1000)) % 60,
    			(ms / 1000) % 60,
    			ms % 1000);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	snprintf(str, sizeof(str), " %dx%d, input %d ",
    			dev->width, dev->height, dev->input);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    
    	gain = v4l2_ctrl_g_ctrl(dev->gain);
    	mutex_lock(dev->ctrl_handler.lock);
    	snprintf(str, sizeof(str), " brightness %3d, contrast %3d, saturation %3d, hue %d ",
    			dev->brightness->cur.val,
    			dev->contrast->cur.val,
    			dev->saturation->cur.val,
    			dev->hue->cur.val);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	snprintf(str, sizeof(str), " autogain %d, gain %3d, volume %3d, alpha 0x%02x ",
    			dev->autogain->cur.val, gain, dev->volume->cur.val,
    			dev->alpha->cur.val);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	snprintf(str, sizeof(str), " int32 %d, int64 %lld, bitmask %08x ",
    			dev->int32->cur.val,
    			dev->int64->cur.val64,
    			dev->bitmask->cur.val);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	snprintf(str, sizeof(str), " boolean %d, menu %s, string \"%s\" ",
    			dev->boolean->cur.val,
    			dev->menu->qmenu[dev->menu->cur.val],
    			dev->string->cur.string);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	snprintf(str, sizeof(str), " integer_menu %lld, value %d ",
    			dev->int_menu->qmenu_int[dev->int_menu->cur.val],
    			dev->int_menu->cur.val);
    	gen_text(dev, vbuf, line++ * 16, 16, str);
    	mutex_unlock(dev->ctrl_handler.lock);
    	if (dev->button_pressed) {
    		dev->button_pressed--;
    		snprintf(str, sizeof(str), " button pressed!");
    		gen_text(dev, vbuf, line++ * 16, 16, str);
    	}
    
    	dev->mv_count += 2;
    
    	buf->vb.v4l2_buf.field = dev->field;
    	dev->field_count++;
    	buf->vb.v4l2_buf.sequence = dev->field_count >> 1;
    	do_gettimeofday(&ts);
    	buf->vb.v4l2_buf.timestamp = ts;
    }
    
    static void vivi_thread_tick(struct vivi_dev *dev)
    {
    	struct vivi_dmaqueue *dma_q = &dev->vidq;
    	struct vivi_buffer *buf;
    	unsigned long flags = 0;
    
    	dprintk(dev, 1, "Thread tick\n");
    
    	spin_lock_irqsave(&dev->slock, flags);
    	if (list_empty(&dma_q->active)) {
    		dprintk(dev, 1, "No active queue to serve\n");
    		spin_unlock_irqrestore(&dev->slock, flags);
    		return;
    	}
    
    	buf = list_entry(dma_q->active.next, struct vivi_buffer, list);
    	list_del(&buf->list);
    	spin_unlock_irqrestore(&dev->slock, flags);
    
    	do_gettimeofday(&buf->vb.v4l2_buf.timestamp);
    
    	/* Fill buffer */
    	vivi_fillbuff(dev, buf);
    	dprintk(dev, 1, "filled buffer %p\n", buf);
    
    	vb2_buffer_done(&buf->vb, VB2_BUF_STATE_DONE);
    	dprintk(dev, 2, "[%p/%d] done\n", buf, buf->vb.v4l2_buf.index);
    }
    
    #define frames_to_ms(frames)					\
    	((frames * WAKE_NUMERATOR * 1000) / WAKE_DENOMINATOR)
    
    static void vivi_sleep(struct vivi_dev *dev)
    {
    	struct vivi_dmaqueue *dma_q = &dev->vidq;
    	int timeout;
    	DECLARE_WAITQUEUE(wait, current);
    
    	dprintk(dev, 1, "%s dma_q=0x%08lx\n", __func__,
    		(unsigned long)dma_q);
    
    	add_wait_queue(&dma_q->wq, &wait);
    	if (kthread_should_stop())
    		goto stop_task;
    
    	/* Calculate time to wake up */
    	timeout = msecs_to_jiffies(frames_to_ms(1));
    
    	vivi_thread_tick(dev);
    
    	schedule_timeout_interruptible(timeout);
    
    stop_task:
    	remove_wait_queue(&dma_q->wq, &wait);
    	try_to_freeze();
    }
    
    static int vivi_thread(void *data)
    {
    	struct vivi_dev *dev = data;
    
    	dprintk(dev, 1, "thread started\n");
    
    	set_freezable();
    
    	for (;;) {
    		vivi_sleep(dev);
    
    		if (kthread_should_stop())
    			break;
    	}
    	dprintk(dev, 1, "thread: exit\n");
    	return 0;
    }
    
    static int vivi_start_generating(struct vivi_dev *dev)
    {
    	struct vivi_dmaqueue *dma_q = &dev->vidq;
    
    	dprintk(dev, 1, "%s\n", __func__);
    
    	/* Resets frame counters */
    	dev->ms = 0;
    	dev->mv_count = 0;
    	dev->jiffies = jiffies;
    
    	dma_q->frame = 0;
    	dma_q->ini_jiffies = jiffies;
    	dma_q->kthread = kthread_run(vivi_thread, dev, dev->v4l2_dev.name);
    
    	if (IS_ERR(dma_q->kthread)) {
    		v4l2_err(&dev->v4l2_dev, "kernel_thread() failed\n");
    		return PTR_ERR(dma_q->kthread);
    	}
    	/* Wakes thread */
    	wake_up_interruptible(&dma_q->wq);
    
    	dprintk(dev, 1, "returning from %s\n", __func__);
    	return 0;
    }
    
    static void vivi_stop_generating(struct vivi_dev *dev)
    {
    	struct vivi_dmaqueue *dma_q = &dev->vidq;
    
    	dprintk(dev, 1, "%s\n", __func__);
    
    	/* shutdown control thread */
    	if (dma_q->kthread) {
    		kthread_stop(dma_q->kthread);
    		dma_q->kthread = NULL;
    	}
    
    	/*
    	 * Typical driver might need to wait here until dma engine stops.
    	 * In this case we can abort immediately, so it's just a noop.
    	 */
    
    	/* Release all active buffers */
    	while (!list_empty(&dma_q->active)) {
    		struct vivi_buffer *buf;
    		buf = list_entry(dma_q->active.next, struct vivi_buffer, list);
    		list_del(&buf->list);
    		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
    		dprintk(dev, 2, "[%p/%d] done\n", buf, buf->vb.v4l2_buf.index);
    	}
    }
    /* ------------------------------------------------------------------
    	Videobuf operations
       ------------------------------------------------------------------*/
    static int queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
    				unsigned int *nbuffers, unsigned int *nplanes,
    				unsigned int sizes[], void *alloc_ctxs[])
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vq);
    	unsigned long size;
    
    	size = dev->width * dev->height * dev->pixelsize;
    
    	if (0 == *nbuffers)
    		*nbuffers = 32;
    
    	while (size * *nbuffers > vid_limit * 1024 * 1024)
    		(*nbuffers)--;
    
    	*nplanes = 1;
    
    	sizes[0] = size;
    
    	/*
    	 * videobuf2-vmalloc allocator is context-less so no need to set
    	 * alloc_ctxs array.
    	 */
    
    	dprintk(dev, 1, "%s, count=%d, size=%ld\n", __func__,
    		*nbuffers, size);
    
    	return 0;
    }
    
    static int buffer_init(struct vb2_buffer *vb)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
    
    	BUG_ON(NULL == dev->fmt);
    
    	/*
    	 * This callback is called once per buffer, after its allocation.
    	 *
    	 * Vivi does not allow changing format during streaming, but it is
    	 * possible to do so when streaming is paused (i.e. in streamoff state).
    	 * Buffers however are not freed when going into streamoff and so
    	 * buffer size verification has to be done in buffer_prepare, on each
    	 * qbuf.
    	 * It would be best to move verification code here to buf_init and
    	 * s_fmt though.
    	 */
    
    	return 0;
    }
    
    static int buffer_prepare(struct vb2_buffer *vb)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
    	struct vivi_buffer *buf = container_of(vb, struct vivi_buffer, vb);
    	unsigned long size;
    
    	dprintk(dev, 1, "%s, field=%d\n", __func__, vb->v4l2_buf.field);
    
    	BUG_ON(NULL == dev->fmt);
    
    	/*
    	 * These properties only change when queue is idle, see s_fmt.
    	 * The below checks should not be performed here, on each
    	 * buffer_prepare (i.e. on each qbuf). Most of the code in this function
    	 * should thus be moved to buffer_init and s_fmt.
    	 */
    	if (dev->width  < 48 || dev->width  > MAX_WIDTH ||
    	    dev->height < 32 || dev->height > MAX_HEIGHT)
    		return -EINVAL;
    
    	size = dev->width * dev->height * dev->pixelsize;
    	if (vb2_plane_size(vb, 0) < size) {
    		dprintk(dev, 1, "%s data will not fit into plane (%lu < %lu)\n",
    				__func__, vb2_plane_size(vb, 0), size);
    		return -EINVAL;
    	}
    
    	vb2_set_plane_payload(&buf->vb, 0, size);
    
    	buf->fmt = dev->fmt;
    
    	precalculate_bars(dev);
    	precalculate_line(dev);
    
    	return 0;
    }
    
    static int buffer_finish(struct vb2_buffer *vb)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
    	dprintk(dev, 1, "%s\n", __func__);
    	return 0;
    }
    
    static void buffer_cleanup(struct vb2_buffer *vb)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
    	dprintk(dev, 1, "%s\n", __func__);
    
    }
    
    static void buffer_queue(struct vb2_buffer *vb)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
    	struct vivi_buffer *buf = container_of(vb, struct vivi_buffer, vb);
    	struct vivi_dmaqueue *vidq = &dev->vidq;
    	unsigned long flags = 0;
    
    	dprintk(dev, 1, "%s\n", __func__);
    
    	spin_lock_irqsave(&dev->slock, flags);
    	list_add_tail(&buf->list, &vidq->active);
    	spin_unlock_irqrestore(&dev->slock, flags);
    }
    
    static int start_streaming(struct vb2_queue *vq, unsigned int count)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vq);
    	dprintk(dev, 1, "%s\n", __func__);
    	return vivi_start_generating(dev);
    }
    
    /* abort streaming and wait for last buffer */
    static int stop_streaming(struct vb2_queue *vq)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vq);
    	dprintk(dev, 1, "%s\n", __func__);
    	vivi_stop_generating(dev);
    	return 0;
    }
    
    static void vivi_lock(struct vb2_queue *vq)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vq);
    	mutex_lock(&dev->mutex);
    }
    
    static void vivi_unlock(struct vb2_queue *vq)
    {
    	struct vivi_dev *dev = vb2_get_drv_priv(vq);
    	mutex_unlock(&dev->mutex);
    }
    
    
    static struct vb2_ops vivi_video_qops = {
    	.queue_setup		= queue_setup,
    	.buf_init		= buffer_init,
    	.buf_prepare		= buffer_prepare,
    	.buf_finish		= buffer_finish,
    	.buf_cleanup		= buffer_cleanup,
    	.buf_queue		= buffer_queue,
    	.start_streaming	= start_streaming,
    	.stop_streaming		= stop_streaming,
    	.wait_prepare		= vivi_unlock,
    	.wait_finish		= vivi_lock,
    };
    
    /* ------------------------------------------------------------------
    	IOCTL vidioc handling
       ------------------------------------------------------------------*/
    static int vidioc_querycap(struct file *file, void  *priv,
    					struct v4l2_capability *cap)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    
    	strcpy(cap->driver, "vivi");
    	strcpy(cap->card, "vivi");
    	strlcpy(cap->bus_info, dev->v4l2_dev.name, sizeof(cap->bus_info));
    	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING |
    			    V4L2_CAP_READWRITE;
    	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
    	return 0;
    }
    
    static int vidioc_enum_fmt_vid_cap(struct file *file, void  *priv,
    					struct v4l2_fmtdesc *f)
    {
    	struct vivi_fmt *fmt;
    
    	if (f->index >= ARRAY_SIZE(formats))
    		return -EINVAL;
    
    	fmt = &formats[f->index];
    
    	strlcpy(f->description, fmt->name, sizeof(f->description));
    	f->pixelformat = fmt->fourcc;
    	return 0;
    }
    
    static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
    					struct v4l2_format *f)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    
    	f->fmt.pix.width        = dev->width;
    	f->fmt.pix.height       = dev->height;
    	f->fmt.pix.field        = dev->field;
    	f->fmt.pix.pixelformat  = dev->fmt->fourcc;
    	f->fmt.pix.bytesperline =
    		(f->fmt.pix.width * dev->fmt->depth) >> 3;
    	f->fmt.pix.sizeimage =
    		f->fmt.pix.height * f->fmt.pix.bytesperline;
    	if (dev->fmt->fourcc == V4L2_PIX_FMT_YUYV ||
    	    dev->fmt->fourcc == V4L2_PIX_FMT_UYVY)
    		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
    	else
    		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
    	return 0;
    }
    
    static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
    			struct v4l2_format *f)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	struct vivi_fmt *fmt;
    	enum v4l2_field field;
    
    	fmt = get_format(f);
    	if (!fmt) {
    		dprintk(dev, 1, "Fourcc format (0x%08x) invalid.\n",
    			f->fmt.pix.pixelformat);
    		return -EINVAL;
    	}
    
    	field = f->fmt.pix.field;
    
    	if (field == V4L2_FIELD_ANY) {
    		field = V4L2_FIELD_INTERLACED;
    	} else if (V4L2_FIELD_INTERLACED != field) {
    		dprintk(dev, 1, "Field type invalid.\n");
    		return -EINVAL;
    	}
    
    	f->fmt.pix.field = field;
    	v4l_bound_align_image(&f->fmt.pix.width, 48, MAX_WIDTH, 2,
    			      &f->fmt.pix.height, 32, MAX_HEIGHT, 0, 0);
    	f->fmt.pix.bytesperline =
    		(f->fmt.pix.width * fmt->depth) >> 3;
    	f->fmt.pix.sizeimage =
    		f->fmt.pix.height * f->fmt.pix.bytesperline;
    	if (fmt->fourcc == V4L2_PIX_FMT_YUYV ||
    	    fmt->fourcc == V4L2_PIX_FMT_UYVY)
    		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
    	else
    		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
    	return 0;
    }
    
    static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
    					struct v4l2_format *f)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	struct vb2_queue *q = &dev->vb_vidq;
    
    	int ret = vidioc_try_fmt_vid_cap(file, priv, f);
    	if (ret < 0)
    		return ret;
    
    	if (vb2_is_streaming(q)) {
    		dprintk(dev, 1, "%s device busy\n", __func__);
    		return -EBUSY;
    	}
    
    	dev->fmt = get_format(f);
    	dev->pixelsize = dev->fmt->depth / 8;
    	dev->width = f->fmt.pix.width;
    	dev->height = f->fmt.pix.height;
    	dev->field = f->fmt.pix.field;
    
    	return 0;
    }
    
    static int vidioc_reqbufs(struct file *file, void *priv,
    			  struct v4l2_requestbuffers *p)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_reqbufs(&dev->vb_vidq, p);
    }
    
    static int vidioc_querybuf(struct file *file, void *priv, struct v4l2_buffer *p)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_querybuf(&dev->vb_vidq, p);
    }
    
    static int vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *p)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_qbuf(&dev->vb_vidq, p);
    }
    
    static int vidioc_dqbuf(struct file *file, void *priv, struct v4l2_buffer *p)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_dqbuf(&dev->vb_vidq, p, file->f_flags & O_NONBLOCK);
    }
    
    static int vidioc_streamon(struct file *file, void *priv, enum v4l2_buf_type i)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_streamon(&dev->vb_vidq, i);
    }
    
    static int vidioc_streamoff(struct file *file, void *priv, enum v4l2_buf_type i)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	return vb2_streamoff(&dev->vb_vidq, i);
    }
    
    static int vidioc_s_std(struct file *file, void *priv, v4l2_std_id *i)
    {
    	return 0;
    }
    
    /* only one input in this sample driver */
    static int vidioc_enum_input(struct file *file, void *priv,
    				struct v4l2_input *inp)
    {
    	if (inp->index >= NUM_INPUTS)
    		return -EINVAL;
    
    	inp->type = V4L2_INPUT_TYPE_CAMERA;
    	inp->std = V4L2_STD_525_60;
    	sprintf(inp->name, "Camera %u", inp->index);
    	return 0;
    }
    
    static int vidioc_g_input(struct file *file, void *priv, unsigned int *i)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    
    	*i = dev->input;
    	return 0;
    }
    
    static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    
    	if (i >= NUM_INPUTS)
    		return -EINVAL;
    
    	if (i == dev->input)
    		return 0;
    
    	dev->input = i;
    	precalculate_bars(dev);
    	precalculate_line(dev);
    	return 0;
    }
    
    /* --- controls ---------------------------------------------- */
    
    static int vivi_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
    {
    	struct vivi_dev *dev = container_of(ctrl->handler, struct vivi_dev, ctrl_handler);
    
    	if (ctrl == dev->autogain)
    		dev->gain->val = jiffies & 0xff;
    	return 0;
    }
    
    static int vivi_s_ctrl(struct v4l2_ctrl *ctrl)
    {
    	struct vivi_dev *dev = container_of(ctrl->handler, struct vivi_dev, ctrl_handler);
    
    	switch (ctrl->id) {
    	case V4L2_CID_ALPHA_COMPONENT:
    		dev->alpha_component = ctrl->val;
    		break;
    	default:
    		if (ctrl == dev->button)
    			dev->button_pressed = 30;
    		break;
    	}
    	return 0;
    }
    
    /* ------------------------------------------------------------------
    	File operations for the device
       ------------------------------------------------------------------*/
    
    static ssize_t
    vivi_read(struct file *file, char __user *data, size_t count, loff_t *ppos)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	int err;
    
    	dprintk(dev, 1, "read called\n");
    	mutex_lock(&dev->mutex);
    	err = vb2_read(&dev->vb_vidq, data, count, ppos,
    		       file->f_flags & O_NONBLOCK);
    	mutex_unlock(&dev->mutex);
    	return err;
    }
    
    static unsigned int
    vivi_poll(struct file *file, struct poll_table_struct *wait)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	struct vb2_queue *q = &dev->vb_vidq;
    
    	dprintk(dev, 1, "%s\n", __func__);
    	return vb2_poll(q, file, wait);
    }
    
    static int vivi_close(struct file *file)
    {
    	struct video_device  *vdev = video_devdata(file);
    	struct vivi_dev *dev = video_drvdata(file);
    
    	dprintk(dev, 1, "close called (dev=%s), file %p\n",
    		video_device_node_name(vdev), file);
    
    	if (v4l2_fh_is_singular_file(file))
    		vb2_queue_release(&dev->vb_vidq);
    	return v4l2_fh_release(file);
    }
    
    static int vivi_mmap(struct file *file, struct vm_area_struct *vma)
    {
    	struct vivi_dev *dev = video_drvdata(file);
    	int ret;
    
    	dprintk(dev, 1, "mmap called, vma=0x%08lx\n", (unsigned long)vma);
    
    	ret = vb2_mmap(&dev->vb_vidq, vma);
    	dprintk(dev, 1, "vma start=0x%08lx, size=%ld, ret=%d\n",
    		(unsigned long)vma->vm_start,
    		(unsigned long)vma->vm_end - (unsigned long)vma->vm_start,
    		ret);
    	return ret;
    }
    
    static const struct v4l2_ctrl_ops vivi_ctrl_ops = {
    	.g_volatile_ctrl = vivi_g_volatile_ctrl,
    	.s_ctrl = vivi_s_ctrl,
    };
    
    #define VIVI_CID_CUSTOM_BASE	(V4L2_CID_USER_BASE | 0xf000)
    
    static const struct v4l2_ctrl_config vivi_ctrl_button = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 0,
    	.name = "Button",
    	.type = V4L2_CTRL_TYPE_BUTTON,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_boolean = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 1,
    	.name = "Boolean",
    	.type = V4L2_CTRL_TYPE_BOOLEAN,
    	.min = 0,
    	.max = 1,
    	.step = 1,
    	.def = 1,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_int32 = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 2,
    	.name = "Integer 32 Bits",
    	.type = V4L2_CTRL_TYPE_INTEGER,
    	.min = 0x80000000,
    	.max = 0x7fffffff,
    	.step = 1,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_int64 = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 3,
    	.name = "Integer 64 Bits",
    	.type = V4L2_CTRL_TYPE_INTEGER64,
    };
    
    static const char * const vivi_ctrl_menu_strings[] = {
    	"Menu Item 0 (Skipped)",
    	"Menu Item 1",
    	"Menu Item 2 (Skipped)",
    	"Menu Item 3",
    	"Menu Item 4",
    	"Menu Item 5 (Skipped)",
    	NULL,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_menu = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 4,
    	.name = "Menu",
    	.type = V4L2_CTRL_TYPE_MENU,
    	.min = 1,
    	.max = 4,
    	.def = 3,
    	.menu_skip_mask = 0x04,
    	.qmenu = vivi_ctrl_menu_strings,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_string = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 5,
    	.name = "String",
    	.type = V4L2_CTRL_TYPE_STRING,
    	.min = 2,
    	.max = 4,
    	.step = 1,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_bitmask = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 6,
    	.name = "Bitmask",
    	.type = V4L2_CTRL_TYPE_BITMASK,
    	.def = 0x80002000,
    	.min = 0,
    	.max = 0x80402010,
    	.step = 0,
    };
    
    static const s64 vivi_ctrl_int_menu_values[] = {
    	1, 1, 2, 3, 5, 8, 13, 21, 42,
    };
    
    static const struct v4l2_ctrl_config vivi_ctrl_int_menu = {
    	.ops = &vivi_ctrl_ops,
    	.id = VIVI_CID_CUSTOM_BASE + 7,
    	.name = "Integer menu",
    	.type = V4L2_CTRL_TYPE_INTEGER_MENU,
    	.min = 1,
    	.max = 8,
    	.def = 4,
    	.menu_skip_mask = 0x02,
    	.qmenu_int = vivi_ctrl_int_menu_values,
    };
    
    static const struct v4l2_file_operations vivi_fops = {
    	.owner		= THIS_MODULE,
    	.open           = v4l2_fh_open,
    	.release        = vivi_close,
    	.read           = vivi_read,
    	.poll		= vivi_poll,
    	.unlocked_ioctl = video_ioctl2, /* V4L2 ioctl handler */
    	.mmap           = vivi_mmap,
    };
    
    static const struct v4l2_ioctl_ops vivi_ioctl_ops = {
    	.vidioc_querycap      = vidioc_querycap,
    	.vidioc_enum_fmt_vid_cap  = vidioc_enum_fmt_vid_cap,
    	.vidioc_g_fmt_vid_cap     = vidioc_g_fmt_vid_cap,
    	.vidioc_try_fmt_vid_cap   = vidioc_try_fmt_vid_cap,
    	.vidioc_s_fmt_vid_cap     = vidioc_s_fmt_vid_cap,
    	.vidioc_reqbufs       = vidioc_reqbufs,
    	.vidioc_querybuf      = vidioc_querybuf,
    	.vidioc_qbuf          = vidioc_qbuf,
    	.vidioc_dqbuf         = vidioc_dqbuf,
    	.vidioc_s_std         = vidioc_s_std,
    	.vidioc_enum_input    = vidioc_enum_input,
    	.vidioc_g_input       = vidioc_g_input,
    	.vidioc_s_input       = vidioc_s_input,
    	.vidioc_streamon      = vidioc_streamon,
    	.vidioc_streamoff     = vidioc_streamoff,
    	.vidioc_log_status    = v4l2_ctrl_log_status,
    	.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
    	.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
    };
    
    static struct video_device vivi_template = {
    	.name		= "vivi",
    	.fops           = &vivi_fops,
    	.ioctl_ops 	= &vivi_ioctl_ops,
    	.release	= video_device_release,
    
    	.tvnorms              = V4L2_STD_525_60,
    	.current_norm         = V4L2_STD_NTSC_M,
    };
    
    /* -----------------------------------------------------------------
    	Initialization and module stuff
       ------------------------------------------------------------------*/
    
    static int vivi_release(void)
    {
    	struct vivi_dev *dev;
    	struct list_head *list;
    
    	while (!list_empty(&vivi_devlist)) {
    		list = vivi_devlist.next;
    		list_del(list);
    		dev = list_entry(list, struct vivi_dev, vivi_devlist);
    
    		v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
    			video_device_node_name(dev->vfd));
    		video_unregister_device(dev->vfd);
    		v4l2_device_unregister(&dev->v4l2_dev);
    		v4l2_ctrl_handler_free(&dev->ctrl_handler);
    		kfree(dev);
    	}
    
    	return 0;
    }
    
    static int __init vivi_create_instance(int inst)
    {
    	struct vivi_dev *dev;
    	struct video_device *vfd;
    	struct v4l2_ctrl_handler *hdl;
    	struct vb2_queue *q;
    	int ret;
    
    	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
    	if (!dev)
    		return -ENOMEM;
    
    	snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
    			"%s-%03d", VIVI_MODULE_NAME, inst);
    	ret = v4l2_device_register(NULL, &dev->v4l2_dev);
    	if (ret)
    		goto free_dev;
    
    	dev->fmt = &formats[0];
    	dev->width = 640;
    	dev->height = 480;
    	dev->pixelsize = dev->fmt->depth / 8;
    	hdl = &dev->ctrl_handler;
    	v4l2_ctrl_handler_init(hdl, 11);
    	dev->volume = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_AUDIO_VOLUME, 0, 255, 1, 200);
    	dev->brightness = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_BRIGHTNESS, 0, 255, 1, 127);
    	dev->contrast = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_CONTRAST, 0, 255, 1, 16);
    	dev->saturation = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_SATURATION, 0, 255, 1, 127);
    	dev->hue = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_HUE, -128, 127, 1, 0);
    	dev->autogain = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_AUTOGAIN, 0, 1, 1, 1);
    	dev->gain = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_GAIN, 0, 255, 1, 100);
    	dev->alpha = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
    			V4L2_CID_ALPHA_COMPONENT, 0, 255, 1, 0);
    	dev->button = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_button, NULL);
    	dev->int32 = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int32, NULL);
    	dev->int64 = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int64, NULL);
    	dev->boolean = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_boolean, NULL);
    	dev->menu = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_menu, NULL);
    	dev->string = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_string, NULL);
    	dev->bitmask = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_bitmask, NULL);
    	dev->int_menu = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int_menu, NULL);
    	if (hdl->error) {
    		ret = hdl->error;
    		goto unreg_dev;
    	}
    	v4l2_ctrl_auto_cluster(2, &dev->autogain, 0, true);
    	dev->v4l2_dev.ctrl_handler = hdl;
    
    	/* initialize locks */
    	spin_lock_init(&dev->slock);
    
    	/* initialize queue */
    	q = &dev->vb_vidq;
    	memset(q, 0, sizeof(dev->vb_vidq));
    	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_READ;
    	q->drv_priv = dev;
    	q->buf_struct_size = sizeof(struct vivi_buffer);
    	q->ops = &vivi_video_qops;
    	q->mem_ops = &vb2_vmalloc_memops;
    
    	vb2_queue_init(q);
    
    	mutex_init(&dev->mutex);
    
    	/* init video dma queues */
    	INIT_LIST_HEAD(&dev->vidq.active);
    	init_waitqueue_head(&dev->vidq.wq);
    
    	ret = -ENOMEM;
    	vfd = video_device_alloc();
    	if (!vfd)
    		goto unreg_dev;
    
    	*vfd = vivi_template;
    	vfd->debug = debug;
    	vfd->v4l2_dev = &dev->v4l2_dev;
    	set_bit(V4L2_FL_USE_FH_PRIO, &vfd->flags);
    
    	/*
    	 * Provide a mutex to v4l2 core. It will be used to protect
    	 * all fops and v4l2 ioctls.
    	 */
    	vfd->lock = &dev->mutex;
    
    	ret = video_register_device(vfd, VFL_TYPE_GRABBER, video_nr);
    	if (ret < 0)
    		goto rel_vdev;
    
    	video_set_drvdata(vfd, dev);
    
    	/* Now that everything is fine, let's add it to device list */
    	list_add_tail(&dev->vivi_devlist, &vivi_devlist);
    
    	if (video_nr != -1)
    		video_nr++;
    
    	dev->vfd = vfd;
    	v4l2_info(&dev->v4l2_dev, "V4L2 device registered as %s\n",
    		  video_device_node_name(vfd));
    	return 0;
    
    rel_vdev:
    	video_device_release(vfd);
    unreg_dev:
    	v4l2_ctrl_handler_free(hdl);
    	v4l2_device_unregister(&dev->v4l2_dev);
    free_dev:
    	kfree(dev);
    	return ret;
    }
    
    /* This routine allocates from 1 to n_devs virtual drivers.
    
       The real maximum number of virtual drivers will depend on how many drivers
       will succeed. This is limited to the maximum number of devices that
       videodev supports, which is equal to VIDEO_NUM_DEVICES.
     */
    static int __init vivi_init(void)
    {
    	const struct font_desc *font = find_font("VGA8x16");
    	int ret = 0, i;
    
    	if (font == NULL) {
    		printk(KERN_ERR "vivi: could not find font\n");
    		return -ENODEV;
    	}
    	font8x16 = font->data;
    
    	if (n_devs <= 0)
    		n_devs = 1;
    
    	for (i = 0; i < n_devs; i++) {
    		ret = vivi_create_instance(i);
    		if (ret) {
    			/* If some instantiations succeeded, keep driver */
    			if (i)
    				ret = 0;
    			break;
    		}
    	}
    
    	if (ret < 0) {
    		printk(KERN_ERR "vivi: error %d while loading driver\n", ret);
    		return ret;
    	}
    
    	printk(KERN_INFO "Video Technology Magazine Virtual Video "
    			"Capture Board ver %s successfully loaded.\n",
    			VIVI_VERSION);
    
    	/* n_devs will reflect the actual number of allocated devices */
    	n_devs = i;
    
    	return ret;
    }
    
    static void __exit vivi_exit(void)
    {
    	vivi_release();
    }
    
    module_init(vivi_init);
    module_exit(vivi_exit);
    

    III. Writing a camera application

    1. Program flow

    [Figure: flow of the capture program — open device, query capabilities, set format, request/map/queue buffers, stream on, select() and dequeue a frame, save it, stream off, free resources]
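
    Since the flow figure did not survive, here is a hedged sketch of the call order it depicted, using the functions from the source listing below (the real file may differ in details):

    int main(void)
    {
    	int fd = open_camera();                /* open /dev/video0             */
    	if (fd < 0) return -1;
    	if (check_device(fd) < 0) return -1;   /* VIDIOC_QUERYCAP              */
    	if (set_camera_fmt(fd) < 0) return -1; /* VIDIOC_S_FMT                 */
    	if (init_buffer(fd) < 0) return -1;    /* request, mmap, queue buffers */
    	if (start_camera(fd) < 0) return -1;   /* VIDIOC_STREAMON              */
    	camera_read_data(fd);                  /* select() + grab one frame    */
    	stop_camera(fd);                       /* VIDIOC_STREAMOFF             */
    	data_free(fd);                         /* munmap the buffers           */
    	return 0;
    }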

    2. Application source code

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <linux/videodev2.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/select.h>
    #include <string.h>
    /* According to earlier standards */
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>
    
    
    struct v4l2_capability cap;
    struct v4l2_format fmt;
    struct v4l2_requestbuffers req_buf;
    struct v4l2_buffer kbuf;
    enum v4l2_buf_type buf_type;
    int i;
    struct video_buffer{
    	void * start;
    	int length;
    };
    struct video_buffer *buffer;
    
    int open_camera(void)
    {
    	int fd;
    	fd = open("/dev/video0",O_RDWR);
    	if(fd < 0){
    		perror("open /dev/video0 is fail");
    		return -1;
    	}
    	return fd;
    }
    
    int check_device(int fd)
    {
    	if(ioctl(fd,VIDIOC_QUERYCAP,&cap) == -1){
    		perror("VIDIOC_QUERYCAP failed");
    		return -1;
    	}
    	if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)){
    		fprintf(stderr, "not a video capture device\n");
    		return -1;
    	}
    
    	return 0;
    }
    
    int set_camera_fmt(int fd)
    {
    	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	fmt.fmt.pix.width = 720;
    	fmt.fmt.pix.height = 576;
    	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    	fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
    
    	if(ioctl(fd,VIDIOC_S_FMT,&fmt) == -1){
    		perror("set camera fmt is fail");
    		return -1;
    	}
    	
    	return 0;
    }
    
    int init_buffer(int fd)
    {
    	// 1. Request buffers from the driver
    	req_buf.count = 4;
    	req_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	req_buf.memory = V4L2_MEMORY_MMAP;
    	
    	if(ioctl(fd,VIDIOC_REQBUFS,&req_buf) == -1){
    		perror("VIDIOC_REQBUFS failed");
    		return -1;
    	}
    	
    	buffer = calloc(req_buf.count,sizeof(*buffer));
    	for(i=0; i<req_buf.count; i++){
    		kbuf.index = i;
    		kbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    		kbuf.memory = V4L2_MEMORY_MMAP;
    		if(ioctl(fd,VIDIOC_QUERYBUF,&kbuf) == -1){
    			perror("VIDIOC_QUERYBUF failed");
    			return -1;
    		}
    		buffer[i].length = kbuf.length;
    		buffer[i].start = mmap(NULL, kbuf.length, PROT_READ | PROT_WRITE,
    				       MAP_SHARED, fd, kbuf.m.offset);
    		if(buffer[i].start == MAP_FAILED){
    			perror("mmap failed");
    			return -1;
    		}
    
    		if(ioctl(fd,VIDIOC_QBUF,&kbuf) == -1){
    			perror("VIDIOC_QBUF failed");
    			return -1;
    		}
    	
    	}
    	return 0;
    }
    
    int start_camera(int fd)
    {
    	buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	if(ioctl(fd,VIDIOC_STREAMON,&buf_type) == -1){
    		perror("set VIDIOC_STREAMON is fail");
    		return -1;
    	}
    	return 0;
    
    }
    
    int stop_camera(int fd)
    {
    	buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	if(ioctl(fd,VIDIOC_STREAMOFF,&buf_type) == -1){
    		perror("set VIDIOC_STREAMON is fail");
    		return -1;
    	}
    	return 0;
    
    }
    
    
    int build_picture(int fd)
    {
    	FILE * fp;
    	memset(&kbuf,0,sizeof(kbuf));
    	kbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	kbuf.memory = V4L2_MEMORY_MMAP;
    	if(ioctl(fd,VIDIOC_DQBUF,&kbuf) == -1){
    		perror("VIDIOC_DQBUF failed");
    		return -1;
    	}
    
    	fp = fopen("picture.yuv","wb");
    	if(fp == NULL){
    		perror("fopen picture.yuv failed");
    		return -1;
    	}
    	fwrite(buffer[kbuf.index].start,1,buffer[kbuf.index].length,fp);
    	fclose(fp);
    
    	if(ioctl(fd,VIDIOC_QBUF,&kbuf) == -1){
    			perror("VIDIOC_QBUF failed");
    			return -1;
    	}
    	return 0;
    }
    
    int camera_read_data(int fd)
    {
    	int ret;
    	fd_set rfds;
    	struct timeval tim;
    	tim.tv_sec = 2;
    	tim.tv_usec = 0;
    
    	FD_ZERO(&rfds);
    	FD_SET(fd, &rfds);
    	ret = select(fd+1, &rfds,NULL,NULL, &tim);
    	if(ret == -1){
    		perror("select failed");
    		return -1;
    	}else if(ret == 0){
    		fprintf(stderr, "select timed out\n");
    		return -1;
    	}else{
    		build_picture(fd);
    	}
    	return 0;
    }
    
    void data_free(int fd)
    {
    	for(i=0; i<req_buf.count; i++){
    		munmap(buffer[i].start,buffer[i].length);
    	}
    	free(buffer);
    	close(fd);
    }
    
    int main(int argc, const char *argv[])
    {
    	int fd,ret;
    	// 1. Open the device node
    	fd = open_camera();
    	if(fd == -1){
    		return -1;
    	}
    
    	// 2. Verify that it is a video capture device
    	ret = check_device(fd);
    	if(ret == -1){
    		return -1;
    	}
    
    	// 3. Set the capture format
    	ret = set_camera_fmt(fd);
    	if(ret != 0){
    		return -1;
    	}
    
    	// 4. Request, map and queue the buffers
    	init_buffer(fd);
    	
    	// 5. Start streaming
    	start_camera(fd);
    
    	// 6. Read one frame
    	camera_read_data(fd);
    
    	// 7. Stop streaming
    	stop_camera(fd);
    	// 8. Unmap the buffers and close the device
    	data_free(fd);
    
    	return 0;
    
    }
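
    The program above captures a single frame and exits. Continuous capture just repeats the select()/VIDIOC_DQBUF/VIDIOC_QBUF cycle. Below is a minimal sketch on top of the helpers above; capture_frames() is a hypothetical name not in the original source, and note that build_picture() always overwrites picture.yuv, so a real loop would also vary the output file name.

    	/* Hypothetical helper built on the functions above: fetch and
    	 * requeue `count` frames in a row. */
    	int capture_frames(int fd, int count)
    	{
    		int n;
    
    		for (n = 0; n < count; n++) {
    			if (camera_read_data(fd) == -1)	/* select + DQBUF + QBUF */
    				return -1;
    		}
    		return 0;
    	}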
    

    4. Camera service registration, service lookup, and the HAL implementation

    1./frameworks/av/camera/cameraserver/main_cameraserver.cpp

    /*
     * Copyright (C) 2015 The Android Open Source Project
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *      http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    #define LOG_TAG "cameraserver"
    //#define LOG_NDEBUG 0
    
    #include "CameraService.h"
    #include <hidl/HidlTransportSupport.h>
    
    using namespace android;
    
    int main(int argc __unused, char** argv __unused)
    {
        signal(SIGPIPE, SIG_IGN);
    
        // Set 3 threads for HIDL calls
        hardware::configureRpcThreadpool(3, /*willjoin*/ false);
    
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        CameraService::instantiate();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
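
    CameraService::instantiate() is inherited from the BinderService<> template (frameworks/native/include/binder/BinderService.h): it publishes the service to ServiceManager under its well-known name, which for CameraService is "media.camera". A simplified sketch of what the template does (the real code also takes an allowIsolated flag and handles error paths):

    	// Simplified sketch of BinderService<SERVICE>::publish().
    	#include <binder/IServiceManager.h>
    	#include <utils/String16.h>
    
    	template <typename SERVICE>
    	static android::status_t publish() {
    	    using namespace android;
    	    sp<IServiceManager> sm(defaultServiceManager());
    	    // For CameraService, getServiceName() returns "media.camera".
    	    return sm->addService(String16(SERVICE::getServiceName()), new SERVICE());
    	}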
    

    2./frameworks/base/core/java/android/hardware/Camera.java

       public static Camera open() {
            int numberOfCameras = getNumberOfCameras();
            CameraInfo cameraInfo = new CameraInfo();
            for (int i = 0; i < numberOfCameras; i++) {
                getCameraInfo(i, cameraInfo);
                if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
                    return new Camera(i);
                }
            }
            return null;
        }
    

    Camera.java source code

    3./frameworks/base/core/jni/android_hardware_Camera.cpp

    /*
    **
    ** Copyright 2008, The Android Open Source Project
    **
    ** Licensed under the Apache License, Version 2.0 (the "License");
    ** you may not use this file except in compliance with the License.
    ** You may obtain a copy of the License at
    **
    **     http://www.apache.org/licenses/LICENSE-2.0
    **
    ** Unless required by applicable law or agreed to in writing, software
    ** distributed under the License is distributed on an "AS IS" BASIS,
    ** WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    ** See the License for the specific language governing permissions and
    ** limitations under the License.
    */
    
    //#define LOG_NDEBUG 0
    #define LOG_TAG "Camera-JNI"
    #include <utils/Log.h>
    
    #include "jni.h"
    #include <nativehelper/JNIHelp.h>
    #include "core_jni_helpers.h"
    #include <android_runtime/android_graphics_SurfaceTexture.h>
    #include <android_runtime/android_view_Surface.h>
    
    #include <cutils/properties.h>
    #include <utils/Vector.h>
    #include <utils/Errors.h>
    
    #include <gui/GLConsumer.h>
    #include <gui/Surface.h>
    #include <camera/Camera.h>
    #include <binder/IMemory.h>
    
    using namespace android;
    
    enum {
        // Keep up to date with Camera.java
        CAMERA_HAL_API_VERSION_NORMAL_CONNECT = -2,
    };
    
    struct fields_t {
        jfieldID    context;
        jfieldID    facing;
        jfieldID    orientation;
        jfieldID    canDisableShutterSound;
        jfieldID    face_rect;
        jfieldID    face_score;
        jfieldID    face_id;
        jfieldID    face_left_eye;
        jfieldID    face_right_eye;
        jfieldID    face_mouth;
        jfieldID    rect_left;
        jfieldID    rect_top;
        jfieldID    rect_right;
        jfieldID    rect_bottom;
        jfieldID    point_x;
        jfieldID    point_y;
        jmethodID   post_event;
        jmethodID   rect_constructor;
        jmethodID   face_constructor;
        jmethodID   point_constructor;
    };
    
    static fields_t fields;
    static Mutex sLock;
    
    // provides persistent context for calls from native code to Java
    class JNICameraContext: public CameraListener
    {
    public:
        JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera);
        ~JNICameraContext() { release(); }
        virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
        virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                              camera_frame_metadata_t *metadata);
        virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
        virtual void postRecordingFrameHandleTimestamp(nsecs_t timestamp, native_handle_t* handle);
        virtual void postRecordingFrameHandleTimestampBatch(
                const std::vector<nsecs_t>& timestamps,
                const std::vector<native_handle_t*>& handles);
        void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
        void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
        void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
        sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
        bool isRawImageCallbackBufferAvailable() const;
        void release();
    
    private:
        void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
        void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
        void clearCallbackBuffers_l(JNIEnv *env);
        jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers, size_t bufferSize);
    
        jobject     mCameraJObjectWeak;     // weak reference to java object
        jclass      mCameraJClass;          // strong reference to java class
        sp<Camera>  mCamera;                // strong reference to native object
        jclass      mFaceClass;  // strong reference to Face class
        jclass      mRectClass;  // strong reference to Rect class
        jclass      mPointClass;  // strong reference to Point class
        Mutex       mLock;
    
        /*
         * Global reference application-managed raw image buffer queue.
         *
         * Manual-only mode is supported for raw image callbacks, which is
         * set whenever method addCallbackBuffer() with msgType =
         * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
         * with raw image callbacks.
         */
        Vector<jbyteArray> mRawImageCallbackBuffers;
    
        /*
         * Application-managed preview buffer queue and the flags
         * associated with the usage of the preview buffer callback.
         */
        Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
        bool mManualBufferMode;              // Whether to use application managed buffers.
        bool mManualCameraCallbackSet;       // Whether the callback has been set, used to
                                             // reduce unnecessary calls to set the callback.
    };
    
    bool JNICameraContext::isRawImageCallbackBufferAvailable() const
    {
        return !mRawImageCallbackBuffers.isEmpty();
    }
    
    sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
    {
        sp<Camera> camera;
        Mutex::Autolock _l(sLock);
        JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
        if (context != NULL) {
            camera = context->getCamera();
        }
        ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
        if (camera == 0) {
            jniThrowRuntimeException(env,
                    "Camera is being used after Camera.release() was called");
        }
    
        if (pContext != NULL) *pContext = context;
        return camera;
    }
    
    JNICameraContext::JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera)
    {
        mCameraJObjectWeak = env->NewGlobalRef(weak_this);
        mCameraJClass = (jclass)env->NewGlobalRef(clazz);
        mCamera = camera;
    
        jclass faceClazz = env->FindClass("android/hardware/Camera$Face");
        mFaceClass = (jclass) env->NewGlobalRef(faceClazz);
    
        jclass rectClazz = env->FindClass("android/graphics/Rect");
        mRectClass = (jclass) env->NewGlobalRef(rectClazz);
    
        jclass pointClazz = env->FindClass("android/graphics/Point");
        mPointClass = (jclass) env->NewGlobalRef(pointClazz);
    
        mManualBufferMode = false;
        mManualCameraCallbackSet = false;
    }
    
    void JNICameraContext::release()
    {
        ALOGV("release");
        Mutex::Autolock _l(mLock);
        JNIEnv *env = AndroidRuntime::getJNIEnv();
    
        if (mCameraJObjectWeak != NULL) {
            env->DeleteGlobalRef(mCameraJObjectWeak);
            mCameraJObjectWeak = NULL;
        }
        if (mCameraJClass != NULL) {
            env->DeleteGlobalRef(mCameraJClass);
            mCameraJClass = NULL;
        }
        if (mFaceClass != NULL) {
            env->DeleteGlobalRef(mFaceClass);
            mFaceClass = NULL;
        }
        if (mRectClass != NULL) {
            env->DeleteGlobalRef(mRectClass);
            mRectClass = NULL;
        }
        if (mPointClass != NULL) {
            env->DeleteGlobalRef(mPointClass);
            mPointClass = NULL;
        }
        clearCallbackBuffers_l(env);
        mCamera.clear();
    }
    
    void JNICameraContext::notify(int32_t msgType, int32_t ext1, int32_t ext2)
    {
        ALOGV("notify");
    
        // VM pointer will be NULL if object is released
        Mutex::Autolock _l(mLock);
        if (mCameraJObjectWeak == NULL) {
            ALOGW("callback on dead camera object");
            return;
        }
        JNIEnv *env = AndroidRuntime::getJNIEnv();
    
        /*
         * If the notification or msgType is CAMERA_MSG_RAW_IMAGE_NOTIFY, change it
         * to CAMERA_MSG_RAW_IMAGE since CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed
         * to the Java app.
         */
        if (msgType == CAMERA_MSG_RAW_IMAGE_NOTIFY) {
            msgType = CAMERA_MSG_RAW_IMAGE;
        }
    
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, ext1, ext2, NULL);
    }
    
    jbyteArray JNICameraContext::getCallbackBuffer(
            JNIEnv* env, Vector<jbyteArray>* buffers, size_t bufferSize)
    {
        jbyteArray obj = NULL;
    
        // Vector access should be protected by lock in postData()
        if (!buffers->isEmpty()) {
            ALOGV("Using callback buffer from queue of length %zu", buffers->size());
            jbyteArray globalBuffer = buffers->itemAt(0);
            buffers->removeAt(0);
    
            obj = (jbyteArray)env->NewLocalRef(globalBuffer);
            env->DeleteGlobalRef(globalBuffer);
    
            if (obj != NULL) {
                jsize bufferLength = env->GetArrayLength(obj);
                if ((int)bufferLength < (int)bufferSize) {
                    ALOGE("Callback buffer was too small! Expected %zu bytes, but got %d bytes!",
                        bufferSize, bufferLength);
                    env->DeleteLocalRef(obj);
                    return NULL;
                }
            }
        }
    
        return obj;
    }
    
    void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
    {
        jbyteArray obj = NULL;
    
        // allocate Java byte array and copy data
        if (dataPtr != NULL) {
            ssize_t offset;
            size_t size;
            sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
            ALOGV("copyAndPost: off=%zd, size=%zu", offset, size);
            uint8_t *heapBase = (uint8_t*)heap->base();
    
            if (heapBase != NULL) {
                const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
    
                if (msgType == CAMERA_MSG_RAW_IMAGE) {
                    obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
                } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                    obj = getCallbackBuffer(env, &mCallbackBuffers, size);
    
                    if (mCallbackBuffers.isEmpty()) {
                        ALOGV("Out of buffers, clearing callback!");
                        mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                        mManualCameraCallbackSet = false;
    
                        if (obj == NULL) {
                            return;
                        }
                    }
                } else {
                    ALOGV("Allocating callback buffer");
                    obj = env->NewByteArray(size);
                }
    
                if (obj == NULL) {
                    ALOGE("Couldn't allocate byte array for JPEG data");
                    env->ExceptionClear();
                } else {
                    env->SetByteArrayRegion(obj, 0, size, data);
                }
            } else {
                ALOGE("image heap is NULL");
            }
        }
    
        // post image data to Java
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, obj);
        if (obj) {
            env->DeleteLocalRef(obj);
        }
    }
    
    void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                    camera_frame_metadata_t *metadata)
    {
        // VM pointer will be NULL if object is released
        Mutex::Autolock _l(mLock);
        JNIEnv *env = AndroidRuntime::getJNIEnv();
        if (mCameraJObjectWeak == NULL) {
            ALOGW("callback on dead camera object");
            return;
        }
    
        int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;
    
        // return data based on callback type
        switch (dataMsgType) {
            case CAMERA_MSG_VIDEO_FRAME:
                // should never happen
                break;
    
            // For backward-compatibility purpose, if there is no callback
            // buffer for raw image, the callback returns null.
            case CAMERA_MSG_RAW_IMAGE:
                ALOGV("rawCallback");
                if (mRawImageCallbackBuffers.isEmpty()) {
                    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                            mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
                } else {
                    copyAndPost(env, dataPtr, dataMsgType);
                }
                break;
    
            // There is no data.
            case 0:
                break;
    
            default:
                ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
                copyAndPost(env, dataPtr, dataMsgType);
                break;
        }
    
        // post frame metadata to Java
        if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
            postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
        }
    }
    
    void JNICameraContext::postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr)
    {
        // TODO: plumb up to Java. For now, just drop the timestamp
        postData(msgType, dataPtr, NULL);
    }
    
    void JNICameraContext::postRecordingFrameHandleTimestamp(nsecs_t, native_handle_t* handle) {
        // Video buffers are not needed at app layer so just return the video buffers here.
        // This may be called when stagefright just releases camera but there are still outstanding
        // video buffers.
        if (mCamera != nullptr) {
            mCamera->releaseRecordingFrameHandle(handle);
        } else {
            native_handle_close(handle);
            native_handle_delete(handle);
        }
    }
    
    void JNICameraContext::postRecordingFrameHandleTimestampBatch(
            const std::vector<nsecs_t>&,
            const std::vector<native_handle_t*>& handles) {
        // Video buffers are not needed at app layer so just return the video buffers here.
        // This may be called when stagefright just releases camera but there are still outstanding
        // video buffers.
        if (mCamera != nullptr) {
            mCamera->releaseRecordingFrameHandleBatch(handles);
        } else {
            for (auto& handle : handles) {
                native_handle_close(handle);
                native_handle_delete(handle);
            }
        }
    }
    
    void JNICameraContext::postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata)
    {
        jobjectArray obj = NULL;
        obj = (jobjectArray) env->NewObjectArray(metadata->number_of_faces,
                                                 mFaceClass, NULL);
        if (obj == NULL) {
            ALOGE("Couldn't allocate face metadata array");
            return;
        }
    
        for (int i = 0; i < metadata->number_of_faces; i++) {
            jobject face = env->NewObject(mFaceClass, fields.face_constructor);
            env->SetObjectArrayElement(obj, i, face);
    
            jobject rect = env->NewObject(mRectClass, fields.rect_constructor);
            env->SetIntField(rect, fields.rect_left, metadata->faces[i].rect[0]);
            env->SetIntField(rect, fields.rect_top, metadata->faces[i].rect[1]);
            env->SetIntField(rect, fields.rect_right, metadata->faces[i].rect[2]);
            env->SetIntField(rect, fields.rect_bottom, metadata->faces[i].rect[3]);
    
            env->SetObjectField(face, fields.face_rect, rect);
            env->SetIntField(face, fields.face_score, metadata->faces[i].score);
    
            bool optionalFields = metadata->faces[i].id != 0
                && metadata->faces[i].left_eye[0] != -2000 && metadata->faces[i].left_eye[1] != -2000
                && metadata->faces[i].right_eye[0] != -2000 && metadata->faces[i].right_eye[1] != -2000
                && metadata->faces[i].mouth[0] != -2000 && metadata->faces[i].mouth[1] != -2000;
            if (optionalFields) {
                int32_t id = metadata->faces[i].id;
                env->SetIntField(face, fields.face_id, id);
    
                jobject leftEye = env->NewObject(mPointClass, fields.point_constructor);
                env->SetIntField(leftEye, fields.point_x, metadata->faces[i].left_eye[0]);
                env->SetIntField(leftEye, fields.point_y, metadata->faces[i].left_eye[1]);
                env->SetObjectField(face, fields.face_left_eye, leftEye);
                env->DeleteLocalRef(leftEye);
    
                jobject rightEye = env->NewObject(mPointClass, fields.point_constructor);
                env->SetIntField(rightEye, fields.point_x, metadata->faces[i].right_eye[0]);
                env->SetIntField(rightEye, fields.point_y, metadata->faces[i].right_eye[1]);
                env->SetObjectField(face, fields.face_right_eye, rightEye);
                env->DeleteLocalRef(rightEye);
    
                jobject mouth = env->NewObject(mPointClass, fields.point_constructor);
                env->SetIntField(mouth, fields.point_x, metadata->faces[i].mouth[0]);
                env->SetIntField(mouth, fields.point_y, metadata->faces[i].mouth[1]);
                env->SetObjectField(face, fields.face_mouth, mouth);
                env->DeleteLocalRef(mouth);
            }
    
            env->DeleteLocalRef(face);
            env->DeleteLocalRef(rect);
        }
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, obj);
        env->DeleteLocalRef(obj);
    }
    
    void JNICameraContext::setCallbackMode(JNIEnv *env, bool installed, bool manualMode)
    {
        Mutex::Autolock _l(mLock);
        mManualBufferMode = manualMode;
        mManualCameraCallbackSet = false;
    
        // In order to limit the over usage of binder threads, all non-manual buffer
        // callbacks use CAMERA_FRAME_CALLBACK_FLAG_BARCODE_SCANNER mode now.
        //
        // Continuous callbacks will have the callback re-registered from handleMessage.
        // Manual buffer mode will operate as fast as possible, relying on the finite supply
        // of buffers for throttling.
    
        if (!installed) {
            mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
            clearCallbackBuffers_l(env, &mCallbackBuffers);
        } else if (mManualBufferMode) {
            if (!mCallbackBuffers.isEmpty()) {
                mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_CAMERA);
                mManualCameraCallbackSet = true;
            }
        } else {
            mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_BARCODE_SCANNER);
            clearCallbackBuffers_l(env, &mCallbackBuffers);
        }
    }
    
    void JNICameraContext::addCallbackBuffer(
            JNIEnv *env, jbyteArray cbb, int msgType)
    {
        ALOGV("addCallbackBuffer: 0x%x", msgType);
        if (cbb != NULL) {
            Mutex::Autolock _l(mLock);
            switch (msgType) {
                case CAMERA_MSG_PREVIEW_FRAME: {
                    jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
                    mCallbackBuffers.push(callbackBuffer);
    
                    ALOGV("Adding callback buffer to queue, %zu total",
                            mCallbackBuffers.size());
    
                    // We want to make sure the camera knows we're ready for the
                    // next frame. This may have come unset had we not had a
                    // callbackbuffer ready for it last time.
                    if (mManualBufferMode && !mManualCameraCallbackSet) {
                        mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_CAMERA);
                        mManualCameraCallbackSet = true;
                    }
                    break;
                }
                case CAMERA_MSG_RAW_IMAGE: {
                    jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
                    mRawImageCallbackBuffers.push(callbackBuffer);
                    break;
                }
                default: {
                    jniThrowException(env,
                            "java/lang/IllegalArgumentException",
                            "Unsupported message type");
                    return;
                }
            }
        } else {
           ALOGE("Null byte array!");
        }
    }
    
    void JNICameraContext::clearCallbackBuffers_l(JNIEnv *env)
    {
        clearCallbackBuffers_l(env, &mCallbackBuffers);
        clearCallbackBuffers_l(env, &mRawImageCallbackBuffers);
    }
    
    void JNICameraContext::clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers) {
        ALOGV("Clearing callback buffers, %zu remained", buffers->size());
        while (!buffers->isEmpty()) {
            env->DeleteGlobalRef(buffers->top());
            buffers->pop();
        }
    }
    
    static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
    {
        return Camera::getNumberOfCameras();
    }
    
    static void android_hardware_Camera_getCameraInfo(JNIEnv *env, jobject thiz,
        jint cameraId, jobject info_obj)
    {
        CameraInfo cameraInfo;
        if (cameraId >= Camera::getNumberOfCameras() || cameraId < 0) {
            ALOGE("%s: Unknown camera ID %d", __FUNCTION__, cameraId);
            jniThrowRuntimeException(env, "Unknown camera ID");
            return;
        }
    
        status_t rc = Camera::getCameraInfo(cameraId, &cameraInfo);
        if (rc != NO_ERROR) {
            jniThrowRuntimeException(env, "Fail to get camera info");
            return;
        }
        env->SetIntField(info_obj, fields.facing, cameraInfo.facing);
        env->SetIntField(info_obj, fields.orientation, cameraInfo.orientation);
    
        char value[PROPERTY_VALUE_MAX];
        property_get("ro.camera.sound.forced", value, "0");
        jboolean canDisableShutterSound = (strncmp(value, "0", 2) == 0);
        env->SetBooleanField(info_obj, fields.canDisableShutterSound,
                canDisableShutterSound);
    }
    
    // connect to camera service
    static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
        jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
    {
        // Convert jstring to String16
        const char16_t *rawClientName = reinterpret_cast<const char16_t*>(
            env->GetStringChars(clientPackageName, NULL));
        jsize rawClientNameLen = env->GetStringLength(clientPackageName);
        String16 clientName(rawClientName, rawClientNameLen);
        env->ReleaseStringChars(clientPackageName,
                                reinterpret_cast<const jchar*>(rawClientName));
    
        sp<Camera> camera;
        if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
            // Default path: hal version is don't care, do normal camera connect.
            camera = Camera::connect(cameraId, clientName,
                    Camera::USE_CALLING_UID, Camera::USE_CALLING_PID);
        } else {
            jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
                    Camera::USE_CALLING_UID, camera);
            if (status != NO_ERROR) {
                return status;
            }
        }
    
        if (camera == NULL) {
            return -EACCES;
        }
    
        // make sure camera hardware is alive
        if (camera->getStatus() != NO_ERROR) {
            return NO_INIT;
        }
    
        jclass clazz = env->GetObjectClass(thiz);
        if (clazz == NULL) {
            // This should never happen
            jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
            return INVALID_OPERATION;
        }
    
        // We use a weak reference so the Camera object can be garbage collected.
        // The reference is only used as a proxy for callbacks.
        sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
        context->incStrong((void*)android_hardware_Camera_native_setup);
        camera->setListener(context);
    
        // save context in opaque field
        env->SetLongField(thiz, fields.context, (jlong)context.get());
    
        // Update default display orientation in case the sensor is reverse-landscape
        CameraInfo cameraInfo;
        status_t rc = Camera::getCameraInfo(cameraId, &cameraInfo);
        if (rc != NO_ERROR) {
            ALOGE("%s: getCameraInfo error: %d", __FUNCTION__, rc);
            return rc;
        }
        int defaultOrientation = 0;
        switch (cameraInfo.orientation) {
            case 0:
                break;
            case 90:
                if (cameraInfo.facing == CAMERA_FACING_FRONT) {
                    defaultOrientation = 180;
                }
                break;
            case 180:
                defaultOrientation = 180;
                break;
            case 270:
                if (cameraInfo.facing != CAMERA_FACING_FRONT) {
                    defaultOrientation = 180;
                }
                break;
            default:
                ALOGE("Unexpected camera orientation %d!", cameraInfo.orientation);
                break;
        }
        if (defaultOrientation != 0) {
            ALOGV("Setting default display orientation to %d", defaultOrientation);
            rc = camera->sendCommand(CAMERA_CMD_SET_DISPLAY_ORIENTATION,
                    defaultOrientation, 0);
            if (rc != NO_ERROR) {
                ALOGE("Unable to update default orientation: %s (%d)",
                        strerror(-rc), rc);
                return rc;
            }
        }
    
        return NO_ERROR;
    }
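    
    // Aside: Camera::connect() used in native_setup above is implemented in
    // libcamera_client (frameworks/av/camera/CameraBase.cpp), not in this file.
    // A simplified sketch of what it does (exact signatures vary across
    // releases): it looks up the "media.camera" binder service and asks it to
    // create an ICamera instance, which the returned Camera object wraps.
    //
    //     sp<Camera> Camera::connect(int cameraId, const String16& clientPackageName,
    //                                int clientUid, int clientPid) {
    //         sp<Camera> c = new Camera(cameraId);
    //         sp<ICameraClient> cl = c;          // callbacks come back through this
    //         const sp<ICameraService> cs = getCameraService();
    //         if (cs != 0 && cs->connect(cl, cameraId, clientPackageName,
    //                                    clientUid, clientPid,
    //                                    /*out*/ c->mCamera) == NO_ERROR) {
    //             return c;
    //         }
    //         return NULL;
    //     }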
    
    // disconnect from camera service
    // It's okay to call this when the native camera context is already null.
    // This handles the case where the user has called release() and the
    // finalizer is invoked later.
    static void android_hardware_Camera_release(JNIEnv *env, jobject thiz)
    {
        ALOGV("release camera");
        JNICameraContext* context = NULL;
        sp<Camera> camera;
        {
            Mutex::Autolock _l(sLock);
            context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    
            // Make sure we do not attempt to callback on a deleted Java object.
            env->SetLongField(thiz, fields.context, 0);
        }
    
        // clean up if release has not been called before
        if (context != NULL) {
            camera = context->getCamera();
            context->release();
            ALOGV("native_release: context=%p camera=%p", context, camera.get());
    
            // clear callbacks
            if (camera != NULL) {
                camera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                camera->disconnect();
            }
    
            // remove context to prevent further Java access
            context->decStrong((void*)android_hardware_Camera_native_setup);
        }
    }
    
    static void android_hardware_Camera_setPreviewSurface(JNIEnv *env, jobject thiz, jobject jSurface)
    {
        ALOGV("setPreviewSurface");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        sp<IGraphicBufferProducer> gbp;
        sp<Surface> surface;
        if (jSurface) {
            surface = android_view_Surface_getSurface(env, jSurface);
            if (surface != NULL) {
                gbp = surface->getIGraphicBufferProducer();
            }
        }
    
        if (camera->setPreviewTarget(gbp) != NO_ERROR) {
            jniThrowException(env, "java/io/IOException", "setPreviewTexture failed");
        }
    }
    
    static void android_hardware_Camera_setPreviewTexture(JNIEnv *env,
            jobject thiz, jobject jSurfaceTexture)
    {
        ALOGV("setPreviewTexture");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        sp<IGraphicBufferProducer> producer = NULL;
        if (jSurfaceTexture != NULL) {
            producer = SurfaceTexture_getProducer(env, jSurfaceTexture);
            if (producer == NULL) {
                jniThrowException(env, "java/lang/IllegalArgumentException",
                        "SurfaceTexture already released in setPreviewTexture");
                return;
            }
    
        }
    
        if (camera->setPreviewTarget(producer) != NO_ERROR) {
            jniThrowException(env, "java/io/IOException",
                    "setPreviewTexture failed");
        }
    }
    
    static void android_hardware_Camera_setPreviewCallbackSurface(JNIEnv *env,
            jobject thiz, jobject jSurface)
    {
        ALOGV("setPreviewCallbackSurface");
        JNICameraContext* context;
        sp<Camera> camera = get_native_camera(env, thiz, &context);
        if (camera == 0) return;
    
        sp<IGraphicBufferProducer> gbp;
        sp<Surface> surface;
        if (jSurface) {
            surface = android_view_Surface_getSurface(env, jSurface);
            if (surface != NULL) {
                gbp = surface->getIGraphicBufferProducer();
            }
        }
        // Clear out normal preview callbacks
        context->setCallbackMode(env, false, false);
        // Then set up callback surface
        if (camera->setPreviewCallbackTarget(gbp) != NO_ERROR) {
            jniThrowException(env, "java/io/IOException", "setPreviewCallbackTarget failed");
        }
    }
    
    static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
    {
        ALOGV("startPreview");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->startPreview() != NO_ERROR) {
            jniThrowRuntimeException(env, "startPreview failed");
            return;
        }
    }
    
    static void android_hardware_Camera_stopPreview(JNIEnv *env, jobject thiz)
    {
        ALOGV("stopPreview");
        sp<Camera> c = get_native_camera(env, thiz, NULL);
        if (c == 0) return;
    
        c->stopPreview();
    }
    
    static jboolean android_hardware_Camera_previewEnabled(JNIEnv *env, jobject thiz)
    {
        ALOGV("previewEnabled");
        sp<Camera> c = get_native_camera(env, thiz, NULL);
        if (c == 0) return JNI_FALSE;
    
        return c->previewEnabled() ? JNI_TRUE : JNI_FALSE;
    }
    
    static void android_hardware_Camera_setHasPreviewCallback(JNIEnv *env, jobject thiz, jboolean installed, jboolean manualBuffer)
    {
        ALOGV("setHasPreviewCallback: installed:%d, manualBuffer:%d", (int)installed, (int)manualBuffer);
        // Important: Only install preview_callback if the Java code has called
        // setPreviewCallback() with a non-null value, otherwise we'd pay to memcpy
        // each preview frame for nothing.
        JNICameraContext* context;
        sp<Camera> camera = get_native_camera(env, thiz, &context);
        if (camera == 0) return;
    
        // setCallbackMode will take care of setting the context flags and calling
        // camera->setPreviewCallbackFlags within a mutex for us.
        context->setCallbackMode(env, installed, manualBuffer);
    }
    
    static void android_hardware_Camera_addCallbackBuffer(JNIEnv *env, jobject thiz, jbyteArray bytes, jint msgType) {
        ALOGV("addCallbackBuffer: 0x%x", msgType);
    
        JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    
        if (context != NULL) {
            context->addCallbackBuffer(env, bytes, msgType);
        }
    }
    
    static void android_hardware_Camera_autoFocus(JNIEnv *env, jobject thiz)
    {
        ALOGV("autoFocus");
        JNICameraContext* context;
        sp<Camera> c = get_native_camera(env, thiz, &context);
        if (c == 0) return;
    
        if (c->autoFocus() != NO_ERROR) {
            jniThrowRuntimeException(env, "autoFocus failed");
        }
    }
    
    static void android_hardware_Camera_cancelAutoFocus(JNIEnv *env, jobject thiz)
    {
        ALOGV("cancelAutoFocus");
        JNICameraContext* context;
        sp<Camera> c = get_native_camera(env, thiz, &context);
        if (c == 0) return;
    
        if (c->cancelAutoFocus() != NO_ERROR) {
            jniThrowRuntimeException(env, "cancelAutoFocus failed");
        }
    }
    
    static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, jint msgType)
    {
        ALOGV("takePicture");
        JNICameraContext* context;
        sp<Camera> camera = get_native_camera(env, thiz, &context);
        if (camera == 0) return;
    
        /*
         * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
         * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
         * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
         * is enabled to receive the callback notification but no data.
         *
         * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
         * Java application.
         */
        if (msgType & CAMERA_MSG_RAW_IMAGE) {
            ALOGV("Enable raw image callback buffer");
            if (!context->isRawImageCallbackBufferAvailable()) {
                ALOGV("Enable raw image notification, since no callback buffer exists");
                msgType &= ~CAMERA_MSG_RAW_IMAGE;
                msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
            }
        }
    
        if (camera->takePicture(msgType) != NO_ERROR) {
            jniThrowRuntimeException(env, "takePicture failed");
            return;
        }
    }
    
    static void android_hardware_Camera_setParameters(JNIEnv *env, jobject thiz, jstring params)
    {
        ALOGV("setParameters");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        const jchar* str = env->GetStringCritical(params, 0);
        String8 params8;
        if (params) {
            params8 = String8(reinterpret_cast<const char16_t*>(str),
                              env->GetStringLength(params));
            env->ReleaseStringCritical(params, str);
        }
        if (camera->setParameters(params8) != NO_ERROR) {
            jniThrowRuntimeException(env, "setParameters failed");
            return;
        }
    }
    
    static jstring android_hardware_Camera_getParameters(JNIEnv *env, jobject thiz)
    {
        ALOGV("getParameters");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return 0;
    
        String8 params8 = camera->getParameters();
        if (params8.isEmpty()) {
            jniThrowRuntimeException(env, "getParameters failed (empty parameters)");
            return 0;
        }
        return env->NewStringUTF(params8.string());
    }
    
    static void android_hardware_Camera_reconnect(JNIEnv *env, jobject thiz)
    {
        ALOGV("reconnect");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->reconnect() != NO_ERROR) {
            jniThrowException(env, "java/io/IOException", "reconnect failed");
            return;
        }
    }
    
    static void android_hardware_Camera_lock(JNIEnv *env, jobject thiz)
    {
        ALOGV("lock");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->lock() != NO_ERROR) {
            jniThrowRuntimeException(env, "lock failed");
        }
    }
    
    static void android_hardware_Camera_unlock(JNIEnv *env, jobject thiz)
    {
        ALOGV("unlock");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->unlock() != NO_ERROR) {
            jniThrowRuntimeException(env, "unlock failed");
        }
    }
    
    static void android_hardware_Camera_startSmoothZoom(JNIEnv *env, jobject thiz, jint value)
    {
        ALOGV("startSmoothZoom");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        status_t rc = camera->sendCommand(CAMERA_CMD_START_SMOOTH_ZOOM, value, 0);
        if (rc == BAD_VALUE) {
            char msg[64];
            sprintf(msg, "invalid zoom value=%d", value);
            jniThrowException(env, "java/lang/IllegalArgumentException", msg);
        } else if (rc != NO_ERROR) {
            jniThrowRuntimeException(env, "start smooth zoom failed");
        }
    }
    
    static void android_hardware_Camera_stopSmoothZoom(JNIEnv *env, jobject thiz)
    {
        ALOGV("stopSmoothZoom");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->sendCommand(CAMERA_CMD_STOP_SMOOTH_ZOOM, 0, 0) != NO_ERROR) {
            jniThrowRuntimeException(env, "stop smooth zoom failed");
        }
    }
    
    static void android_hardware_Camera_setDisplayOrientation(JNIEnv *env, jobject thiz,
            jint value)
    {
        ALOGV("setDisplayOrientation");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->sendCommand(CAMERA_CMD_SET_DISPLAY_ORIENTATION, value, 0) != NO_ERROR) {
            jniThrowRuntimeException(env, "set display orientation failed");
        }
    }
    
    static jboolean android_hardware_Camera_enableShutterSound(JNIEnv *env, jobject thiz,
            jboolean enabled)
    {
        ALOGV("enableShutterSound");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return JNI_FALSE;
    
        int32_t value = (enabled == JNI_TRUE) ? 1 : 0;
        status_t rc = camera->sendCommand(CAMERA_CMD_ENABLE_SHUTTER_SOUND, value, 0);
        if (rc == NO_ERROR) {
            return JNI_TRUE;
        } else if (rc == PERMISSION_DENIED) {
            return JNI_FALSE;
        } else {
            jniThrowRuntimeException(env, "enable shutter sound failed");
            return JNI_FALSE;
        }
    }
    
    static void android_hardware_Camera_startFaceDetection(JNIEnv *env, jobject thiz,
            jint type)
    {
        ALOGV("startFaceDetection");
        JNICameraContext* context;
        sp<Camera> camera = get_native_camera(env, thiz, &context);
        if (camera == 0) return;
    
        status_t rc = camera->sendCommand(CAMERA_CMD_START_FACE_DETECTION, type, 0);
        if (rc == BAD_VALUE) {
            char msg[64];
            snprintf(msg, sizeof(msg), "invalid face detection type=%d", type);
            jniThrowException(env, "java/lang/IllegalArgumentException", msg);
        } else if (rc != NO_ERROR) {
            jniThrowRuntimeException(env, "start face detection failed");
        }
    }
    
    static void android_hardware_Camera_stopFaceDetection(JNIEnv *env, jobject thiz)
    {
        ALOGV("stopFaceDetection");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->sendCommand(CAMERA_CMD_STOP_FACE_DETECTION, 0, 0) != NO_ERROR) {
            jniThrowRuntimeException(env, "stop face detection failed");
        }
    }
    
    static void android_hardware_Camera_enableFocusMoveCallback(JNIEnv *env, jobject thiz, jint enable)
    {
        ALOGV("enableFocusMoveCallback");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;
    
        if (camera->sendCommand(CAMERA_CMD_ENABLE_FOCUS_MOVE_MSG, enable, 0) != NO_ERROR) {
            jniThrowRuntimeException(env, "enable focus move callback failed");
        }
    }
    
    //-------------------------------------------------
    
    static const JNINativeMethod camMethods[] = {
      { "getNumberOfCameras",
        "()I",
        (void *)android_hardware_Camera_getNumberOfCameras },
      { "_getCameraInfo",
        "(ILandroid/hardware/Camera$CameraInfo;)V",
        (void*)android_hardware_Camera_getCameraInfo },
      { "native_setup",
        "(Ljava/lang/Object;IILjava/lang/String;)I",
        (void*)android_hardware_Camera_native_setup },
      { "native_release",
        "()V",
        (void*)android_hardware_Camera_release },
      { "setPreviewSurface",
        "(Landroid/view/Surface;)V",
        (void *)android_hardware_Camera_setPreviewSurface },
      { "setPreviewTexture",
        "(Landroid/graphics/SurfaceTexture;)V",
        (void *)android_hardware_Camera_setPreviewTexture },
      { "setPreviewCallbackSurface",
        "(Landroid/view/Surface;)V",
        (void *)android_hardware_Camera_setPreviewCallbackSurface },
      { "startPreview",
        "()V",
        (void *)android_hardware_Camera_startPreview },
      { "_stopPreview",
        "()V",
        (void *)android_hardware_Camera_stopPreview },
      { "previewEnabled",
        "()Z",
        (void *)android_hardware_Camera_previewEnabled },
      { "setHasPreviewCallback",
        "(ZZ)V",
        (void *)android_hardware_Camera_setHasPreviewCallback },
      { "_addCallbackBuffer",
        "([BI)V",
        (void *)android_hardware_Camera_addCallbackBuffer },
      { "native_autoFocus",
        "()V",
        (void *)android_hardware_Camera_autoFocus },
      { "native_cancelAutoFocus",
        "()V",
        (void *)android_hardware_Camera_cancelAutoFocus },
      { "native_takePicture",
        "(I)V",
        (void *)android_hardware_Camera_takePicture },
      { "native_setParameters",
        "(Ljava/lang/String;)V",
        (void *)android_hardware_Camera_setParameters },
      { "native_getParameters",
        "()Ljava/lang/String;",
        (void *)android_hardware_Camera_getParameters },
      { "reconnect",
        "()V",
        (void*)android_hardware_Camera_reconnect },
      { "lock",
        "()V",
        (void*)android_hardware_Camera_lock },
      { "unlock",
        "()V",
        (void*)android_hardware_Camera_unlock },
      { "startSmoothZoom",
        "(I)V",
        (void *)android_hardware_Camera_startSmoothZoom },
      { "stopSmoothZoom",
        "()V",
        (void *)android_hardware_Camera_stopSmoothZoom },
      { "setDisplayOrientation",
        "(I)V",
        (void *)android_hardware_Camera_setDisplayOrientation },
      { "_enableShutterSound",
        "(Z)Z",
        (void *)android_hardware_Camera_enableShutterSound },
      { "_startFaceDetection",
        "(I)V",
        (void *)android_hardware_Camera_startFaceDetection },
      { "_stopFaceDetection",
        "()V",
        (void *)android_hardware_Camera_stopFaceDetection},
      { "enableFocusMoveCallback",
        "(I)V",
        (void *)android_hardware_Camera_enableFocusMoveCallback},
    };
    
    struct field {
        const char *class_name;
        const char *field_name;
        const char *field_type;
        jfieldID   *jfield;
    };
    
    static void find_fields(JNIEnv *env, field *fields, int count)
    {
        for (int i = 0; i < count; i++) {
            field *f = &fields[i];
            jclass clazz = FindClassOrDie(env, f->class_name);
            jfieldID field = GetFieldIDOrDie(env, clazz, f->field_name, f->field_type);
            *(f->jfield) = field;
        }
    }
    
    // Get all the required offsets in java class and register native functions
    int register_android_hardware_Camera(JNIEnv *env)
    {
        field fields_to_find[] = {
            { "android/hardware/Camera", "mNativeContext",   "J", &fields.context },
            { "android/hardware/Camera$CameraInfo", "facing",   "I", &fields.facing },
            { "android/hardware/Camera$CameraInfo", "orientation",   "I", &fields.orientation },
            { "android/hardware/Camera$CameraInfo", "canDisableShutterSound",   "Z",
              &fields.canDisableShutterSound },
            { "android/hardware/Camera$Face", "rect", "Landroid/graphics/Rect;", &fields.face_rect },
            { "android/hardware/Camera$Face", "leftEye", "Landroid/graphics/Point;", &fields.face_left_eye},
            { "android/hardware/Camera$Face", "rightEye", "Landroid/graphics/Point;", &fields.face_right_eye},
            { "android/hardware/Camera$Face", "mouth", "Landroid/graphics/Point;", &fields.face_mouth},
            { "android/hardware/Camera$Face", "score", "I", &fields.face_score },
            { "android/hardware/Camera$Face", "id", "I", &fields.face_id},
            { "android/graphics/Rect", "left", "I", &fields.rect_left },
            { "android/graphics/Rect", "top", "I", &fields.rect_top },
            { "android/graphics/Rect", "right", "I", &fields.rect_right },
            { "android/graphics/Rect", "bottom", "I", &fields.rect_bottom },
            { "android/graphics/Point", "x", "I", &fields.point_x},
            { "android/graphics/Point", "y", "I", &fields.point_y},
        };
    
        find_fields(env, fields_to_find, NELEM(fields_to_find));
    
        jclass clazz = FindClassOrDie(env, "android/hardware/Camera");
        fields.post_event = GetStaticMethodIDOrDie(env, clazz, "postEventFromNative",
                                                   "(Ljava/lang/Object;IIILjava/lang/Object;)V");
    
        clazz = FindClassOrDie(env, "android/graphics/Rect");
        fields.rect_constructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    
        clazz = FindClassOrDie(env, "android/hardware/Camera$Face");
        fields.face_constructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    
        clazz = env->FindClass("android/graphics/Point");
        fields.point_constructor = env->GetMethodID(clazz, "<init>", "()V");
        if (fields.point_constructor == NULL) {
            ALOGE("Can't find android/graphics/Point()");
            return -1;
        }
    
        // Register native functions
        return RegisterMethodsOrDie(env, "android/hardware/Camera", camMethods, NELEM(camMethods));
    }
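
    register_android_hardware_Camera() itself runs once at runtime startup: AndroidRuntime (frameworks/base/core/jni/AndroidRuntime.cpp) keeps a table of such registration hooks and walks it when the VM comes up. A trimmed sketch of that mechanism (the real gRegJNI table lists dozens of entries besides the camera one):

    	// Trimmed sketch of the JNI registration table in AndroidRuntime.cpp.
    	#include "jni.h"
    
    	int register_android_hardware_Camera(JNIEnv* env);	// the hook above
    
    	struct RegJNIRec {
    	    int (*mProc)(JNIEnv*);
    	};
    	#define REG_JNI(name) { name }
    
    	static const RegJNIRec gRegJNI[] = {
    	    REG_JNI(register_android_hardware_Camera),
    	};
    
    	// startReg() iterates the table and aborts startup if any hook fails.
    	static int register_jni_procs(const RegJNIRec array[], size_t count, JNIEnv* env) {
    	    for (size_t i = 0; i < count; i++) {
    	        if (array[i].mProc(env) < 0)
    	            return -1;
    	    }
    	    return 0;
    	}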
    

    4./hardware/libhardware/modules/camera/CameraHAL.cpp

    /*
     * Copyright (C) 2012 The Android Open Source Project
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *      http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    #include <cstdlib>
    #include <hardware/camera_common.h>
    #include <hardware/hardware.h>
    #include "ExampleCamera.h"
    #include "VendorTags.h"
    
    //#define LOG_NDEBUG 0
    #define LOG_TAG "DefaultCameraHAL"
    #include <cutils/log.h>
    
    #define ATRACE_TAG (ATRACE_TAG_CAMERA | ATRACE_TAG_HAL)
    #include <cutils/trace.h>
    
    #include "CameraHAL.h"
    
    /*
     * This file serves as the entry point to the HAL.  It contains the module
     * structure and functions used by the framework to load and interface to this
     * HAL, as well as the handles to the individual camera devices.
     */
    
    namespace default_camera_hal {
    
    // Default Camera HAL has 2 cameras, front and rear.
    static CameraHAL gCameraHAL(2);
    // Handle containing vendor tag functionality
    static VendorTags gVendorTags;
    
    CameraHAL::CameraHAL(int num_cameras)
      : mNumberOfCameras(num_cameras),
        mCallbacks(NULL)
    {
        // Allocate camera array and instantiate camera devices
        mCameras = new Camera*[mNumberOfCameras];
        // Rear camera
        mCameras[0] = new ExampleCamera(0);
        // Front camera
        mCameras[1] = new ExampleCamera(1);
    }
    
    CameraHAL::~CameraHAL()
    {
        for (int i = 0; i < mNumberOfCameras; i++) {
            delete mCameras[i];
        }
        delete [] mCameras;
    }
    
    int CameraHAL::getNumberOfCameras()
    {
        ALOGV("%s: %d", __func__, mNumberOfCameras);
        return mNumberOfCameras;
    }
    
    int CameraHAL::getCameraInfo(int id, struct camera_info* info)
    {
        ALOGV("%s: camera id %d: info=%p", __func__, id, info);
        if (id < 0 || id >= mNumberOfCameras) {
            ALOGE("%s: Invalid camera id %d", __func__, id);
            return -ENODEV;
        }
        // TODO: return device-specific static metadata
        return mCameras[id]->getInfo(info);
    }
    
    int CameraHAL::setCallbacks(const camera_module_callbacks_t *callbacks)
    {
        ALOGV("%s : callbacks=%p", __func__, callbacks);
        mCallbacks = callbacks;
        return 0;
    }
    
    int CameraHAL::open(const hw_module_t* mod, const char* name, hw_device_t** dev)
    {
        int id;
        char *nameEnd;
    
        ALOGV("%s: module=%p, name=%s, device=%p", __func__, mod, name, dev);
        if (*name == '\0') {
            ALOGE("%s: Invalid camera id name is NULL", __func__);
            return -EINVAL;
        }
        id = strtol(name, &nameEnd, 10);
        if (*nameEnd != '\0') {
            ALOGE("%s: Invalid camera id name %s", __func__, name);
            return -EINVAL;
        } else if (id < 0 || id >= mNumberOfCameras) {
            ALOGE("%s: Invalid camera id %d", __func__, id);
            return -ENODEV;
        }
        return mCameras[id]->open(mod, dev);
    }
    
    extern "C" {
    
    static int get_number_of_cameras()
    {
        return gCameraHAL.getNumberOfCameras();
    }
    
    static int get_camera_info(int id, struct camera_info* info)
    {
        return gCameraHAL.getCameraInfo(id, info);
    }
    
    static int set_callbacks(const camera_module_callbacks_t *callbacks)
    {
        return gCameraHAL.setCallbacks(callbacks);
    }
    
    static int get_tag_count(const vendor_tag_ops_t* ops)
    {
        return gVendorTags.getTagCount(ops);
    }
    
    static void get_all_tags(const vendor_tag_ops_t* ops, uint32_t* tag_array)
    {
        gVendorTags.getAllTags(ops, tag_array);
    }
    
    static const char* get_section_name(const vendor_tag_ops_t* ops, uint32_t tag)
    {
        return gVendorTags.getSectionName(ops, tag);
    }
    
    static const char* get_tag_name(const vendor_tag_ops_t* ops, uint32_t tag)
    {
        return gVendorTags.getTagName(ops, tag);
    }
    
    static int get_tag_type(const vendor_tag_ops_t* ops, uint32_t tag)
    {
        return gVendorTags.getTagType(ops, tag);
    }
    
    static void get_vendor_tag_ops(vendor_tag_ops_t* ops)
    {
        ALOGV("%s : ops=%p", __func__, ops);
        ops->get_tag_count      = get_tag_count;
        ops->get_all_tags       = get_all_tags;
        ops->get_section_name   = get_section_name;
        ops->get_tag_name       = get_tag_name;
        ops->get_tag_type       = get_tag_type;
    }
    
    static int open_dev(const hw_module_t* mod, const char* name, hw_device_t** dev)
    {
        return gCameraHAL.open(mod, name, dev);
    }
    
    static hw_module_methods_t gCameraModuleMethods = {
        open : open_dev
    };
    
    // HAL_MODULE_INFO_SYM is the well-known symbol name ("HMI") that the HAL
    // loader resolves with dlsym() after dlopen()ing this library.
    camera_module_t HAL_MODULE_INFO_SYM __attribute__ ((visibility("default"))) = {
        common : {
            tag                : HARDWARE_MODULE_TAG,
            module_api_version : CAMERA_MODULE_API_VERSION_2_2,
            hal_api_version    : HARDWARE_HAL_API_VERSION,
            id                 : CAMERA_HARDWARE_MODULE_ID,
            name               : "Default Camera HAL",
            author             : "The Android Open Source Project",
            methods            : &gCameraModuleMethods,
            dso                : NULL,
            reserved           : {0},
        },
        get_number_of_cameras : get_number_of_cameras,
        get_camera_info       : get_camera_info,
        set_callbacks         : set_callbacks,
        get_vendor_tag_ops    : get_vendor_tag_ops,
        open_legacy           : NULL,
        set_torch_mode        : NULL,
        init                  : NULL,
        reserved              : {0},
    };
    } // extern "C"
    
    } // namespace default_camera_hal
    