How to capture depth data from camera in iOS 11 and Swift 4?
Solution 1
First, you need to use a dual-camera device; otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
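Note that AVCaptureDevice.default(_:for:position:) returns nil on hardware without a back dual camera, so it is worth guarding that case. A minimal sketch under that assumption (the helper name is mine, not from the original answer):

```swift
import AVFoundation

// Hypothetical helper: returns a depth-capable camera input, or nil if
// this device has no back dual camera (e.g. a non-Plus iPhone 7).
func makeDepthCapableInput() throws -> AVCaptureDeviceInput? {
    guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                               for: .video,
                                               position: .back) else {
        return nil // no depth data available on this hardware
    }
    return try AVCaptureDeviceInput(device: device)
}
```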
Keep a reference to a dedicated queue for your data outputs:
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs, for example in your viewDidLoad() method:
if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true
    depthDataOutput?.connection(with: .depthData)?.isEnabled = true
    depthDataOutput?.isFilteringEnabled = true
    outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
    outputSynchronizer!.setDelegate(self, queue: dataOutputQueue)
}
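Once the synchronizer is set up, both streams arrive in a single AVCaptureDataOutputSynchronizerDelegate callback. A sketch of that method, assuming depthDataOutput is the depth output above and videoDataOutput is a hypothetical AVCaptureVideoDataOutput attached to the same synchronizer:

```swift
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Either stream may have been dropped for a given frame, so both lookups are optional.
    if let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!)
        as? AVCaptureSynchronizedDepthData, !syncedDepth.depthDataWasDropped {
        let depthData = syncedDepth.depthData // AVDepthData for this frame
        _ = depthData // process the depth map here
    }
    if let syncedVideo = synchronizedDataCollection.synchronizedData(for: videoDataOutput!)
        as? AVCaptureSynchronizedSampleBufferData, !syncedVideo.sampleBufferWasDropped {
        let sampleBuffer = syncedVideo.sampleBuffer // matching video frame
        _ = sampleBuffer
    }
}
```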
I would recommend watching WWDC 2017 session 507 - Apple also provides a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
Solution 2
To add more detail to @klinger's answer, here is what you need to do to get the depth data for each pixel. I wrote some comments, hope it helps!
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {

    //## Convert Disparity to Depth ##
    guard let depthData = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) else { return }
    let depthDataMap = depthData.depthDataMap // AVDepthData -> CVPixelBuffer

    //## Data Analysis ##
    // Useful dimensions
    let width = CVPixelBufferGetWidth(depthDataMap)   // 768 on an iPhone 7+
    let height = CVPixelBufferGetHeight(depthDataMap) // 576 on an iPhone 7+

    // Lock the buffer before reading it, and unlock when done
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
    defer { CVPixelBufferUnlockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0)) }

    // Convert the base address to a typed pointer
    let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)

    // Read the depth value (Float32) at pixel coordinates (x, y).
    // The buffer is row-major: valid indices run from 0 to width * height - 1,
    // and the pixel at (x, y) lives at index y * width + x.
    let distanceAtXYPoint = floatBuffer[y * width + x]
}
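To actually see the depth map as an image (which is what the question asks for), one option is to wrap the pixel buffer in a CIImage and render it. This is a hedged sketch of my own, not part of the original answer; it assumes Core Image's iOS 11 support for reading disparity pixel formats:

```swift
import UIKit
import AVFoundation

// Sketch: render an AVDepthData map as a UIImage for inspection.
func depthImage(from depthData: AVDepthData) -> UIImage? {
    // Disparity tends to visualize better (near = bright) than raw depth.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let ciImage = CIImage(cvPixelBuffer: converted.depthDataMap)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```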
Solution 3
There are two ways to do this, and you are trying to do both at once:
- Capture depth data along with the photo, using the photo.depthData property in photoOutput(_:didFinishProcessingPhoto:error:). I explain below why this did not work for you.
- Use an AVCaptureDepthDataOutput and implement depthDataOutput(_:didOutput:timestamp:connection:). I am not sure why this did not work for you, but implementing depthDataOutput(_:didDrop:timestamp:connection:reason:) might help you figure out why.
I think that #1 is a better option, because it pairs the depth data with the image. Here's how you would do that:
@IBAction func capture(_ sender: Any) {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    settings.isDepthDataDeliveryEnabled = true
    self.sessionOutput?.capturePhoto(with: settings, delegate: self)
}

// ...

override func viewDidLoad() {
    // ...
    self.sessionOutput = AVCapturePhotoOutput()
    // Note: this must be set after the output has been added to the session
    self.sessionOutput?.isDepthDataDeliveryEnabled = true
    // ...
}
Then, depth_map shouldn't be nil. Make sure to read Apple's depth-capture documentation for more information about obtaining depth data.
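One likely reason depth_map was nil in the original code: isDepthDataDeliveryEnabled can only be enabled once the photo output is attached to a session whose input supports depth, otherwise it raises a runtime exception. A sketch of the required order, reusing the session and output names from the question:

```swift
import AVFoundation

// Assumes captureSession: AVCaptureSession already has a dual-camera input.
let sessionOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(sessionOutput) {
    captureSession.addOutput(sessionOutput)
    // Only valid once the output belongs to a session whose input
    // (e.g. the back dual camera) supports depth delivery.
    if sessionOutput.isDepthDataDeliverySupported {
        sessionOutput.isDepthDataDeliveryEnabled = true
    }
}
```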
For #2, I'm not quite sure why depthDataOutput(_:didOutput:timestamp:connection:)
isn't being called, but you should implement depthDataOutput(_:didDrop:timestamp:connection:reason:)
to see if depth data is being dropped for some reason.
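A sketch of that companion callback; logging the drop reason often reveals why didOutput never fires:

```swift
func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                     didDrop depthData: AVDepthData,
                     timestamp: CMTime,
                     connection: AVCaptureConnection,
                     reason: AVCaptureOutput.DataDroppedReason) {
    // Possible reasons include .lateData, .outOfBuffers, and .discontinuity.
    print("depth data dropped, reason:", reason.rawValue)
}
```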
Updated on June 30, 2022

Comments
-
Heestand XYZ, almost 2 years ago
I'm trying to get depth data from the camera in iOS 11 with AVDepthData, though when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.
So I tried setting up the AVCaptureDepthDataOutputDelegate with an AVCaptureDepthDataOutput, though I don't know how to capture the depth photo.
Has anyone ever gotten an image from AVDepthData?
Edit:
Here's the code I tried:
// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate

@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!

var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

@IBAction func capture(_ sender: Any) {
    self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    self.previewLayer?.removeFromSuperlayer()
    self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)
    let depth_map = photo.depthData?.depthDataMap
    print("depth_map:", depth_map) // is nil
}

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
    print("depth data") // never called
}

override func viewDidLoad() {
    super.viewDidLoad()

    self.captureSession = AVCaptureSession()
    self.captureSession?.sessionPreset = .photo
    self.sessionOutput = AVCapturePhotoOutput()
    self.depthOutput = AVCaptureDepthDataOutput()
    self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))

    do {
        let device = AVCaptureDevice.default(for: .video)
        let input = try AVCaptureDeviceInput(device: device!)
        if (self.captureSession?.canAddInput(input))! {
            self.captureSession?.addInput(input)
            if (self.captureSession?.canAddOutput(self.sessionOutput!))! {
                self.captureSession?.addOutput(self.sessionOutput!)
                if (self.captureSession?.canAddOutput(self.depthOutput!))! {
                    self.captureSession?.addOutput(self.depthOutput!)
                    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
                    self.previewLayer?.frame = self.image_view.bounds
                    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
                    self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                    self.image_view.layer.addSublayer(self.previewLayer!)
                }
            }
        }
    } catch {}

    self.captureSession?.startRunning()
}
I'm trying two things: one where the depth data is nil, and one where I'm trying to call a depth delegate method.
Does anyone know what I'm missing?
-
Coder-256, almost 7 years ago: Could you please provide the code that you tried?