Scanning Barcode or QR code in Swift 3.0 using AVFoundation


Solution 1

The first step is to declare access to any user-private data types, which is a new requirement in iOS 10. You do that by adding a usage key to your app's Info.plist together with a purpose string.

If you are using one of the following frameworks and fail to declare the usage, your app will crash the first time it makes the access:

Contacts, Calendar, Reminders, Photos, Bluetooth Sharing, Microphone, Camera, Location, Health, HomeKit, Media Library, Motion, CallKit, Speech Recognition, SiriKit, TV Provider.

To avoid the crash you need to add the suggested key to your Info.plist.

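For camera access the key is NSCameraUsageDescription; a minimal entry looks like this (the purpose string below is only an example, use whatever fits your app):

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan barcodes and QR codes</string>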

The system then shows the purpose string when asking the user to allow access.



I have made a few modifications to your BarcodeViewController to make it work properly, as you can see below:

BarcodeViewController

import UIKit
import AVFoundation

protocol BarcodeDelegate {
   func barcodeReaded(barcode: String)
}

class BarcodeViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

   var delegate: BarcodeDelegate?

   var videoCaptureDevice: AVCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
   var output = AVCaptureMetadataOutput()
   var previewLayer: AVCaptureVideoPreviewLayer?

   var captureSession = AVCaptureSession()
   var code: String?

   override func viewDidLoad() {
      super.viewDidLoad()

      self.view.backgroundColor = UIColor.clear
      self.setupCamera()
   }

   private func setupCamera() {

      guard let input = try? AVCaptureDeviceInput(device: videoCaptureDevice) else {
          print("Could not create video input")
          return
      }

      if self.captureSession.canAddInput(input) {
          self.captureSession.addInput(input)
      }

      self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

      if let videoPreviewLayer = self.previewLayer {
          videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
          videoPreviewLayer.frame = self.view.bounds
          view.layer.addSublayer(videoPreviewLayer)
      }

      let metadataOutput = AVCaptureMetadataOutput()
      if self.captureSession.canAddOutput(metadataOutput) {
          self.captureSession.addOutput(metadataOutput)

          metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
          metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]
      } else {
          print("Could not add metadata output")
      }
   }

   override func viewWillAppear(_ animated: Bool) {
       super.viewWillAppear(animated)

       if (captureSession.isRunning == false) {
          captureSession.startRunning();
       }
   }

   override func viewWillDisappear(_ animated: Bool) {
      super.viewWillDisappear(animated)

      if (captureSession.isRunning == true) {
         captureSession.stopRunning();
      }
   }

   func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
       // This is the delegate's method that is called when a code is read
       for metadata in metadataObjects {
           let readableObject = metadata as! AVMetadataMachineReadableCodeObject
           let code = readableObject.stringValue


           self.dismiss(animated: true, completion: nil)
           self.delegate?.barcodeReaded(barcode: code!)
           print(code!)
       }
   }
}

One of the important points is to declare the capture device and the session as properties of the view controller and to start and stop the captureSession inside the viewWillAppear(_:) and viewWillDisappear(_:) methods. In your previous code I think the delegate method was never called, so the barcode was never processed.

I hope this helps you.

Solution 2

Here is Victor Sigler's answer updated to Swift 4, without force unwrapping, with a weak delegate protocol, with the expensive work moved to a background queue, and with a few other refinements.

Notice that AVCaptureMetadataOutputObjectsDelegate's method changed from

captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!)

to

metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)

import UIKit
import AVFoundation

protocol BarcodeDelegate: class {
    func barcodeRead(barcode: String)
}

class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    weak var delegate: BarcodeDelegate?

    var output = AVCaptureMetadataOutput()
    var previewLayer: AVCaptureVideoPreviewLayer!

    var captureSession = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        DispatchQueue.global(qos: .userInitiated).async {
            if !self.captureSession.isRunning {
                self.captureSession.startRunning()
            }
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        DispatchQueue.global(qos: .userInitiated).async {
            if self.captureSession.isRunning {
                self.captureSession.stopRunning()
            }
        }
    }

    fileprivate func setupCamera() {
        guard let device = AVCaptureDevice.default(for: .video),
            let input = try? AVCaptureDeviceInput(device: device) else {
            return
        }

        DispatchQueue.global(qos: .userInitiated).async {
            if self.captureSession.canAddInput(input) {
                self.captureSession.addInput(input)
            }

            let metadataOutput = AVCaptureMetadataOutput()

            if self.captureSession.canAddOutput(metadataOutput) {
                self.captureSession.addOutput(metadataOutput)

                metadataOutput.setMetadataObjectsDelegate(self, queue: .global(qos: .userInitiated))

                if Set([.qr, .ean13]).isSubset(of: metadataOutput.availableMetadataObjectTypes) {
                    metadataOutput.metadataObjectTypes = [.qr, .ean13]
                }
            } else {
                print("Could not add metadata output")
            }

            self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
            self.previewLayer.videoGravity = .resizeAspectFill

            DispatchQueue.main.async {
                self.previewLayer.frame = self.view.bounds
                self.view.layer.addSublayer(self.previewLayer)
            }
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        // This is the delegate's method that is called when a code is read
        for metadata in metadataObjects {
            if let readableObject = metadata as? AVMetadataMachineReadableCodeObject,
                let code = readableObject.stringValue {
                // The delegate runs on a background queue, so hop to the main queue before touching UIKit
                DispatchQueue.main.async {
                    self.dismiss(animated: true)
                }
                delegate?.barcodeRead(barcode: code)
                print(code)
            }
        }
    }
}
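
For completeness, a presenting view controller can adopt the delegate along these lines (a minimal sketch based on the original question; the codeLabel outlet is a hypothetical name):

import UIKit

class ViewController: UIViewController, BarcodeDelegate {

    @IBOutlet weak var codeLabel: UILabel!

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        // Hand ourselves over as the delegate before the scanner is shown
        if let scanner = segue.destination as? ScannerViewController {
            scanner.delegate = self
        }
    }

    func barcodeRead(barcode: String) {
        // The delegate is called on a background queue, so update the UI on the main queue
        DispatchQueue.main.async {
            self.codeLabel.text = barcode
        }
    }
}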

Solution 3

Barcode Scanner in Swift 4 for all code types

Below I would like to share a few ideas about barcode scanning in iOS:

  • separate the barcode scanner logic from the view logic
  • add the camera usage entry to the Info.plist file
  • set exposurePointOfInterest and focusPointOfInterest
  • set rectOfInterest with a properly converted CGRect
  • set focusMode and exposureMode
  • lock the captureDevice with lockForConfiguration while changing the camera capture settings

Add an entry to the Info.plist file
In the Info.plist file add the following entry to allow your application to access the iPhone's camera:

<key>NSCameraUsageDescription</key>
<string>Allow access to camera</string>

Set exposurePointOfInterest and focusPointOfInterest
exposurePointOfInterest and focusPointOfInterest improve the quality of scanning by focusing the camera more quickly on the central point of the screen.

Set rectOfInterest
This property lets the camera recognize codes in just a part of the screen. That way a code can be scanned faster and only codes presented in the center of the screen are read, which is useful when other codes are visible in the background (see the sketch below).
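
A minimal sketch of the conversion, assuming a previewLayer and metadataOutput like the ones in Solution 2 and a hypothetical scanAreaView that marks the region to scan (do this once the session is running and the layout is final):

let layerRect = scanAreaView.frame
// Convert from preview-layer coordinates to the metadata output's normalized (0...1) coordinate space
let rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: layerRect)
metadataOutput.rectOfInterest = rectOfInterest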

Set focusMode and exposureMode
These properties should be set as follows:

device.focusMode = .continuousAutoFocus
device.exposureMode = .continuousAutoExposure

This allows the camera to focus continuously and keep the exposure well adjusted to the code being scanned.
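
Putting the focus, exposure and locking points together, a configuration sketch could look like this (device is the AVCaptureDevice backing the session; the isSupported checks are defensive assumptions, not part of the original answer):

do {
    // Capture settings may only be changed while the device is locked for configuration
    try device.lockForConfiguration()

    // Focus and expose on the center of the screen, where the code is expected
    if device.isFocusPointOfInterestSupported {
        device.focusPointOfInterest = CGPoint(x: 0.5, y: 0.5)
    }
    if device.isExposurePointOfInterestSupported {
        device.exposurePointOfInterest = CGPoint(x: 0.5, y: 0.5)
    }

    // Keep focus and exposure continuously adjusted while scanning
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    }

    device.unlockForConfiguration()
} catch {
    print("Could not lock the capture device for configuration: \(error)")
}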

Demo

Here you can find ready project implementing this idea: https://github.com/lukszar/QuickScanner

Author: iOS.Lover

Updated on November 02, 2020

Comments

  • iOS.Lover
    iOS.Lover over 3 years

    I am following this tutorial and tried to convert codes form Swift 2.0 to 3.0. But when I launched the application, the app doesn't work! I mean, nothing happens! Here is my code:

    ViewController:

    class ViewController: UIViewController ,BarcodeDelegate {
    
        override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    
            let barcodeViewController: BarcodeViewController = segue.destination as! BarcodeViewController
            barcodeViewController.delegate = self
    
        }
    
    
    
        func barcodeReaded(barcode: String) {
            codeTextView.text = barcode
            print(barcode)
        }
    
    }
    

    BarcodeVC:

    import AVFoundation
    
    
    protocol BarcodeDelegate {
    
        func barcodeReaded(barcode: String)
    }
    
    class BarcodeViewController: UIViewController,AVCaptureMetadataOutputObjectsDelegate {
    
        var delegate: BarcodeDelegate?
        var captureSession: AVCaptureSession!
        var code: String?
    
    
        override func viewDidLoad() {
            super.viewDidLoad()
    
            // Do any additional setup after loading the view.
            print("works")
    
            self.captureSession = AVCaptureSession();
            let videoCaptureDevice: AVCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    
            do {
    
                let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    
                if self.captureSession.canAddInput(videoInput) {
                    self.captureSession.addInput(videoInput)
                } else {
                    print("Could not add video input")
                }
    
                let metadataOutput = AVCaptureMetadataOutput()
                if self.captureSession.canAddOutput(metadataOutput) {
                    self.captureSession.addOutput(metadataOutput)
    
                    metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
                    metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode,AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code]
                } else {
                    print("Could not add metadata output")
                }
    
                let previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
                previewLayer?.frame = self.view.layer.bounds
                self.view.layer .addSublayer(previewLayer!)
                self.captureSession.startRunning()
            } catch let error as NSError {
                print("Error while creating vide input device: \(error.localizedDescription)")
            }
    
    
    
        }
    
    
    
        //I THINK THIS METHOD NOT CALL !
        private func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
    
            // This is the delegate'smethod that is called when a code is readed
            for metadata in metadataObjects {
                let readableObject = metadata as! AVMetadataMachineReadableCodeObject
                let code = readableObject.stringValue
    
                // If the code is not empty the code is ready and we call out delegate to pass the code.
                if  code!.isEmpty {
                    print("is empty")
    
                }else {
    
                    self.captureSession.stopRunning()
                    self.dismiss(animated: true, completion: nil)
                    self.delegate?.barcodeReaded(barcode: code!)
    
    
                }
            }
    
        }
    

    Here is the output:

    2016-09-17 18:10:26.000919 BarcodeScaning[2610:674253] [MC] System group container for systemgroup.com.apple.configurationprofiles path is /private/var/containers/Shared/SystemGroup/systemgroup.com.apple.configurationprofiles 2016-09-17 18:10:26.007782 BarcodeScaning[2610:674253] [MC] Reading from public effective user settings.

  • iOS.Lover
    iOS.Lover over 7 years
    I added ! Still not working , camera opens but no qr or bar codes detecting
  • JTing
    JTing over 7 years
    Yep, got the same problem now! First I just converted the full project provided to Swift 3 syntax and added NSCameraUsageDescription - Worked like a charm. Now after following the tutorial I'm stuck at the same problem... Not sure yet what the deal is, but I'll keep looking.
  • iOS.Lover
    iOS.Lover over 7 years
    I will award the bounty in 1H (due to limitations )
  • Victor Sigler
    Victor Sigler over 7 years
    @Mc.Lover No problem, :)
  • Lê Khánh Vinh
    Lê Khánh Vinh over 7 years
    Do you know how to specify the region to scan (for example only a finite rectangle?
  • Pepeng Hapon
    Pepeng Hapon about 7 years
    Hi @VictorSigler. May I know where did you find the documentation for the implementation of barcode Scanner? I just want to understand each line of your code, thanks a lot.
  • MBH
    MBH about 7 years
    Perfect, but was not filling the screen, so that i added videoPreviewLayer.frame = self.view.bounds again to viewDidLayoutSubviews() to fill the view because i have the Viewcontroller set from nib. and it is working perfectly now. Thank you
  • Akhil Nair
    Akhil Nair about 7 years
    This works perfectly. But when I get the result and the QR code is still in the camera bound, its scans the same QR Code continuously in background. I need a one time scan. How it is possible?
  • Gary Wright
    Gary Wright over 6 years
    The note about the change to AVCaptureMetadataOutputObjectsDelegate really helped me, thank you
  • Dilip Tiwari
    Dilip Tiwari over 6 years
    hiii @VictorSigler, i used ur code it is working fine and shows scanned qrcode but only when user allows permission for using camera but if user not allows permission for using camera then app crashes could u help me with this and update ur code
  • lukszar
    lukszar about 6 years
    @LêKhánhVinh look on my answer here: stackoverflow.com/a/49408133/3840884 There you can find how to set specify region to scan
  • Admin
    Admin about 6 years
    Adding a late comment...but @Victor Sigler how can we add a frame to the camera for the barcode. As of now, just a plain camera is shown...so the user cannot know where to focus the barcode..
  • Nikunj Kumbhani
    Nikunj Kumbhani about 5 years
    @VictorSigler Can you help me? I also want to get barcode image as UIimage with code how can I do this?
  • Hardik Thakkar
    Hardik Thakkar over 4 years
    Perfect demo which i want.. Thank you so much
  • lukszar
    lukszar over 4 years
    @HardikThakkar happy to hear that
  • ASD Solutions
    ASD Solutions over 3 years
    Brilliant summary of the properties to adjust to get better performance out of the scanner.