ARGear SDK Documentation
Android

5. Camera

To use ARGear, you must provide camera configuration information and video frames to the SDK. Based on this information, ARGear applies the AR features enabled in ARGInferenceConfig.Feature, such as face tracking and segmentation, to the received frames.

The sample code below shows an example of camera configuration and how to feed video frames to ARGear, in Java followed by Kotlin.

// Java
ReferenceCamera.CameraListener cameraListener = new ReferenceCamera.CameraListener() {
    @Override
    public void setConfig(int previewWidth, 
                                      int previewHeight, 
                                      float verticalFov, 
                                      float horizontalFov, 
                                      int orientation, 
                                      boolean isFrontFacing, 
                                      float fps) {
        argsession.setCameraConfig(new ARGCameraConfig(previewWidth,
                previewHeight,
                verticalFov,
                horizontalFov,
                orientation,
                isFrontFacing,
                fps));
    }

    // region - for camera api 1
    @Override
    public void feedRawData(byte[] data) {
        // Send preview frame raw data from camera device to ARGear
        argsession.feedRawData(data);
    }
    // endregion

    // region - for camera api 2
    @Override
    public void feedRawData(Image data) {
        // Send preview frame image from camera device to ARGear
        argsession.feedRawData(data);
    }
    // endregion
};

// Kotlin
private var cameraListener: ReferenceCamera.CameraListener = object : ReferenceCamera.CameraListener {
    override fun setConfig(
        previewWidth: Int,
        previewHeight: Int,
        verticalFov: Float,
        horizontalFov: Float,
        orientation: Int,
        isFrontFacing: Boolean,
        fps: Float
    ) {
        argsession.setCameraConfig(
            ARGCameraConfig(
                previewWidth,
                previewHeight,
                verticalFov,
                horizontalFov,
                orientation,
                isFrontFacing,
                fps
            )
        )
    }

    // region - for camera api 1
    override fun feedRawData(data: ByteArray?) {
        argsession.feedRawData(data)
    }
    // endregion

    // region - for camera api 2
    override fun feedRawData(data: Image?) {
        argsession.feedRawData(data)
    }
    // endregion
}
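The setConfig callback above requires vertical and horizontal field-of-view values. On Camera2 these can be derived from CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE and LENS_INFO_AVAILABLE_FOCAL_LENGTHS using the standard pinhole relation. The helper below is a hypothetical sketch (FovUtil is not part of the SDK), shown to illustrate the computation only:

```java
// Hypothetical helper (not part of ARGear): computes a field-of-view value
// for one sensor dimension from its physical size and the lens focal length,
// using fov = 2 * atan(size / (2 * focalLength)).
public final class FovUtil {
    private FovUtil() {}

    /**
     * @param sensorSizeMm  physical sensor width or height in millimeters
     * @param focalLengthMm lens focal length in millimeters
     * @return field of view in degrees for that dimension
     */
    public static float fieldOfViewDegrees(float sensorSizeMm, float focalLengthMm) {
        return (float) Math.toDegrees(2.0 * Math.atan(sensorSizeMm / (2.0 * focalLengthMm)));
    }
}
```

Called once per dimension, this yields the verticalFov and horizontalFov arguments passed to ARGCameraConfig.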

Sample code for a camera class that calls the updateFaceRects and feedRawData functions is shown below.

Camera API 1 Sample Code

// Java
private class CameraPreviewCallback implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data is already a byte[]; pass it through directly
        listener.feedRawData(data);
    }
}

// Kotlin
private inner class CameraPreviewCallback : PreviewCallback {
    override fun onPreviewFrame(
        data: ByteArray,
        camera: Camera
    ) {
        // data is already a ByteArray; pass it through directly
        listener?.feedRawData(data)
    }
}

Camera API 2 Sample Code

// Java
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    @Override
    public void onCaptureProgressed(CameraCaptureSession session,
                                    CaptureRequest request,
                                    CaptureResult partialResult) {
    }

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
    }
};


private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
    = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(final ImageReader reader) {
        mHandler.post(new Runnable() {
            @Override
            public void run() {
                final Image image = reader.acquireLatestImage();
                if (image != null) {
                    listener.feedRawData(image);
                    image.close();
                }
            }
        });
    }
};
// Kotlin
private val mCaptureCallback: CaptureCallback = object : CaptureCallback() {

    override fun onCaptureProgressed(
        session: CameraCaptureSession,
        request: CaptureRequest,
        partialResult: CaptureResult
    ) {
    }

    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
    }
}


private val mOnImageAvailableListener =
    OnImageAvailableListener { reader ->
        handler?.post {
            val image = reader.acquireLatestImage()
            if (image != null) {
                listener.feedRawData(image)
                image.close()
            }
        }
    }
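Camera API 1 delivers each preview frame as a single NV21 byte array, while Camera API 2 delivers a YUV_420_888 Image. If you need to bridge the two representations, the hypothetical helper below (not part of the SDK) shows the NV21 repacking for the simple case where the chroma planes have a pixel stride of 1; extracting the plane buffers from android.media.Image is omitted and the inputs are plain arrays here:

```java
// Hypothetical sketch (not part of ARGear): repacks separate Y, U and V
// planes into a single NV21 byte array. Assumes chroma pixel stride 1 and
// already-extracted plane contents.
public final class Nv21Util {
    private Nv21Util() {}

    public static byte[] toNv21(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        // Luma plane comes first, unchanged
        System.arraycopy(y, 0, nv21, 0, y.length);
        // NV21 stores chroma interleaved as V, U, V, U, ...
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];
            nv21[y.length + 2 * i + 1] = u[i];
        }
        return nv21;
    }
}
```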
