Android CameraX

Sergio Belda
Published in ProAndroidDev

Jan 27, 2020


CameraX is a Jetpack support library, built to help you make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices.

It uses a simpler, use case-based approach that is lifecycle-aware. It also resolves device compatibility issues for you so that you don’t have to include device-specific code in your code base. These features reduce the amount of code you need to write when adding camera capabilities to your app.

Add the dependencies in the app build.gradle file:

dependencies {
    // CameraX core library using the camera2 implementation
    def camerax_version = "1.0.0-rc02"
    // The following line is optional, as the core library is included indirectly by camera-camera2
    implementation "androidx.camera:camera-core:${camerax_version}"
    implementation "androidx.camera:camera-camera2:${camerax_version}"
    // If you want to additionally use the CameraX Lifecycle library
    implementation "androidx.camera:camera-lifecycle:${camerax_version}"
    // If you want to additionally use the CameraX View class
    implementation "androidx.camera:camera-view:1.0.0-alpha21"
    // If you want to additionally use the CameraX Extensions library
    implementation "androidx.camera:camera-extensions:1.0.0-alpha21"
}

We need an instance of ProcessCameraProvider, which is obtained asynchronously using the static method ProcessCameraProvider.getInstance(). This returns a ListenableFuture, which provides the ProcessCameraProvider on completion.

Camera selection is done with a CameraSelector. A CameraSelector instance is created and passed to the bindToLifecycle() function. A listener can be added to the cameraProviderFuture object to retrieve the camera provider once it is available.
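A minimal sketch of this flow, assuming it runs inside an AppCompatActivity (which is a LifecycleOwner) and that a `preview` use case has already been built:

```kotlin
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        // The future has completed, so get() returns immediately
        val cameraProvider = cameraProviderFuture.get()
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        // Unbind any previously bound use cases before rebinding
        cameraProvider.unbindAll()
        camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview)
    }, ContextCompat.getMainExecutor(this))
}
```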

Preview use case

For this example, I’m using the PreviewView widget provided by the camera-view package, which is the recommended way to easily display the output of the Preview use case in an application. The activity layout looks as follows:
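A minimal layout sketch (the previewView id is an assumption used in the snippets below):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>
```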

Then, we build our Preview use case. We can specify some features such as the target aspect ratio, target rotation or target resolution.
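A sketch of building the use case, assuming the layout contains a PreviewView with id previewView:

```kotlin
val preview = Preview.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .setTargetRotation(previewView.display.rotation)
    .build()
    // Attach the PreviewView's surface provider so frames are rendered into it
    .also { it.setSurfaceProvider(previewView.surfaceProvider) }
```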

The implementationMode can be COMPATIBLE, which makes PreviewView display the preview with a TextureView, or PERFORMANCE, which uses a SurfaceView when possible (falling back to a TextureView otherwise). The default mode is PERFORMANCE.

Now, if we launch the application, we can display the preview from our camera.

Image Analysis use case

The image analysis use case provides your app with a CPU-accessible image to perform image processing, computer vision, or machine learning inference on. The application implements an analyze method that is run on each frame.

Following the Google Codelabs example, we will implement an average luminosity analyzer.

To retrieve the rotationDegrees, call image.imageInfo.rotationDegrees.
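The analyzer from the codelab can be sketched as follows; it averages the luminance (Y) plane of each frame:

```kotlin
class LuminosityAnalyzer(private val listener: (luma: Double) -> Unit) : ImageAnalysis.Analyzer {

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()                          // rewind the buffer to position zero
        val data = ByteArray(remaining())
        get(data)                         // copy the buffer contents into the array
        return data
    }

    override fun analyze(image: ImageProxy) {
        // For YUV_420_888 images, the first plane holds the luminance (Y) values
        val buffer = image.planes[0].buffer
        val data = buffer.toByteArray()
        val pixels = data.map { it.toInt() and 0xFF }
        listener(pixels.average())
        image.close() // required, otherwise new images may not be delivered
    }
}
```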

We can define two different back pressure strategies for retrieving images. With ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST, only one image is delivered for analysis at a time. If more images are produced while that image is being analyzed, they are dropped and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image is delivered.

Important: the Analyzer implementation must call image.close() on received images when finished using them. Otherwise, new images may not be received or the camera may stall, depending on the back pressure strategy.

With ImageAnalysis.STRATEGY_BLOCK_PRODUCER, once the producer has produced a number of images equal to the image queue depth and none of them have been closed, it stops producing images. Note that when the producer stops, it also stops producing images for other use cases, such as Preview. The image queue depth can be set by calling setImageQueueDepth().
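A sketch of wiring up the use case with a back pressure strategy, using the LuminosityAnalyzer from the codelab:

```kotlin
val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    // Alternatively:
    // .setBackpressureStrategy(ImageAnalysis.STRATEGY_BLOCK_PRODUCER)
    // .setImageQueueDepth(3)
    .build()
    .also { analysis ->
        // Run the analyzer off the main thread
        analysis.setAnalyzer(
            Executors.newSingleThreadExecutor(),
            LuminosityAnalyzer { luma ->
                Log.d(TAG, "Average luminosity: $luma")
            }
        )
    }
```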

Running the app now will produce a message every second:

D/CameraXApp: Average luminosity: 116.75151041666666
D/CameraXApp: Average luminosity: 118.029873046875
D/CameraXApp: Average luminosity: 117.33964518229166

Image Capture use case

The image capture use case is designed for capturing high-resolution, high-quality photos and provides auto-white-balance, auto-exposure, and auto-focus (3A) functionality, in addition to simple manual camera controls.

By setting the capture mode to minimize latency, setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY), we prioritize latency over quality: images are captured faster, but their quality may be reduced. If we instead set the capture mode to maximize quality, images may take longer to capture, but they are taken at the best possible quality.
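A sketch of building the use case with this capture mode:

```kotlin
val imageCapture = ImageCapture.Builder()
    // Trade image quality for capture speed
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
    .build()
```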

Now, we can take a photo:

The takePicture() function can receive different parameters to handle the captured image. Calling takePicture() with an OnImageCapturedCallback provides an in-memory buffer of the captured image. For example, we can take a picture and convert it into a Bitmap to process it (note that the ImageFormat of the captured image is JPEG).
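A sketch of capturing to memory and decoding the JPEG into a Bitmap:

```kotlin
imageCapture.takePicture(
    ContextCompat.getMainExecutor(this),
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            // The JPEG data lives in the first plane's buffer
            val buffer = image.planes[0].buffer
            val bytes = ByteArray(buffer.remaining()).also { buffer.get(it) }
            val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
            // ... process the bitmap here ...
            image.close()
        }

        override fun onError(exception: ImageCaptureException) {
            Log.e(TAG, "Capture failed", exception)
        }
    }
)
```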

Torch and Zoom

We can also control the torch state and the zoom. The bindToLifecycle() function returns a Camera instance. Its cameraInfo and cameraControl properties give us access to both objects.
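A minimal sketch of retrieving them at binding time:

```kotlin
val camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview)
// cameraInfo exposes observable state (torch, zoom); cameraControl issues commands
cameraInfo = camera.cameraInfo
cameraControl = camera.cameraControl
```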

We can enable and disable the torch by calling cameraControl.enableTorch():

private fun toggleTorch() {
    if (cameraInfo?.torchState?.value == TorchState.ON) {
        cameraControl?.enableTorch(false)
    } else {
        cameraControl?.enableTorch(true)
    }
}

The torch state LiveData can be observed to handle changes in its value.

private fun setTorchStateObserver() {
    cameraInfo?.torchState?.observe(this, { state ->
        if (state == TorchState.ON) {
            binding.cameraTorchButton.setImageDrawable(
                ContextCompat.getDrawable(this, R.drawable.ic_flash_on_24dp)
            )
        } else {
            binding.cameraTorchButton.setImageDrawable(
                ContextCompat.getDrawable(this, R.drawable.ic_flash_off_24dp)
            )
        }
    })
}
Torch control

Using setLinearZoom() we can set the current zoom with a linear zoom value ranging from 0f to 1.0f, where 0f represents the minimum zoom and 1.0f the maximum zoom.
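A minimal sketch, assuming the value comes from a hypothetical UI control delivering a Float:

```kotlin
private fun setZoom(linearZoom: Float) {
    // setLinearZoom() expects a value within 0f..1f, so clamp the input
    cameraControl?.setLinearZoom(linearZoom.coerceIn(0f, 1f))
}
```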

Zoom control

We can obtain the current state of Zoom by setting an Observer to the cameraInfo.zoomState LiveData object.

private fun setZoomStateObserver() {
    cameraInfo?.zoomState?.observe(this, { state ->
        // state.linearZoom
        // state.zoomRatio
        // state.maxZoomRatio
        // state.minZoomRatio
        Log.d(TAG, "${state.linearZoom}")
    })
}
