CoreML: Real Time Camera Object Detection with Machine Learning – Swift 4

Learn how to put together a real-time object detection app using one of the newest frameworks announced at this year’s WWDC event. Core ML’s combined use of the CPU and GPU allows machine-learning inference to run efficiently on-device, which is what makes today’s application possible.
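As a rough sketch of the pipeline covered in the video (assuming the SqueezeNet .mlmodel file has been dragged into the Xcode project, which makes Xcode generate the `SqueezeNet` class), the core pieces look roughly like this:

```swift
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let captureSession = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Feed the back camera into the capture session
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        captureSession.addInput(input)

        // Show a live preview of the camera feed
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)

        // Deliver each camera frame to this class on a background queue
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(output)

        captureSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Wrap the Core ML model in a Vision request and run it on the frame
        guard let model = try? VNCoreMLModel(for: SqueezeNet().model) else { return }
        let request = VNCoreMLRequest(model: model) { finishedRequest, _ in
            guard let results = finishedRequest.results as? [VNClassificationObservation],
                  let best = results.first else { return }
            print(best.identifier, best.confidence)
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}
```

This is a sketch, not the complete project; the full version is in the completed source code linked below.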

The SqueezeNet and ResNet50 models can be downloaded near the bottom of: https://developer.apple.com/machine-learning/

More detailed lessons on showing the camera: https://www.letsbuildthatapp.com/course_video?id=1252

Instagram Firebase Course
https://www.letsbuildthatapp.com/course/instagram-firebase

Facebook Group
https://www.facebook.com/groups/1240636442694543/

iOS Basic Training Course
https://www.letsbuildthatapp.com/basic-training

Completed Source Code
https://www.letsbuildthatapp.com/course_video?id=1592

Follow me on Twitter: https://twitter.com/buildthatapp

Comments

Mathew Mozaffari says:

Thanks for the video! What do we do if we want to stop the camera from running when we, for example, tap a button?
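One way to do this, assuming the session is kept in a `captureSession` property as in the video, is to stop the session from the button handler (a hedged sketch, with `handleStop` as a hypothetical name):

```swift
@objc func handleStop() {
    // Stops frame delivery; call startRunning() later to resume
    if captureSession.isRunning {
        captureSession.stopRunning()
    }
}
```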

Ricky Avina says:

Is anyone else experiencing memory leaking issues?

UltimateX Armor Paulo says:

Can you do real-time face recognition with this technology?

Ohh BALR says:

Hi man, I have a question: is it possible to run this app on another device, like a (surveillance) camera, for instance?

Simon Christensen says:

please learn proper Swift; you shouldn’t do a guard let for every line

Александр Железняк says:

I have a question. I’ve tried to create a label and insert the “confidence” and “identifier” into it. It works instead of print, but the mobile application just stops after one minute. Why does it stop? I’ve used DispatchQueue.main.async for the output 🙂
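For anyone showing the result in a label: all UIKit work has to happen on the main queue, and freezes after a while often point to UI updates on the wrong queue or heavy per-frame work. A hedged sketch of the completion handler (assuming a `nameLabel` property exists; that name is not from the video):

```swift
let request = VNCoreMLRequest(model: model) { finishedRequest, _ in
    guard let results = finishedRequest.results as? [VNClassificationObservation],
          let first = results.first else { return }
    // UIKit must only be touched on the main queue
    DispatchQueue.main.async {
        self.nameLabel.text = "\(first.identifier) \(Int(first.confidence * 100))%"
    }
}
```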

Илья Стукалов says:

The video is great, but it is not object detection; it is object recognition.
Object detection means the neural network should return the position of the object.

Xavier Senior says:

Is it possible to, let’s say, touch the detected object on the screen, which then gives additional options for what I want to do with the selected image? I know it displays the name and the recognition percentage, but what if I want more? Example: now I know a “coffee cup” was detected; I touch the “coffee cup” on my phone screen, then the app asks me: are you planning on drinking from that “coffee cup”, putting it in the washer, or do you just want to juggle the “coffee cup” like a soccer ball?

Alphonso Sensley II says:

Great tutorial! Thanks Brian!

Kai Chi Tsao says:

results.first gets the first result~
how do I get the second result?
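Since `results` is just an array of `VNClassificationObservation`, the runner-up guesses are available by slicing or indexing. A sketch:

```swift
if let results = finishedRequest.results as? [VNClassificationObservation] {
    // Top three guesses, best first
    for observation in results.prefix(3) {
        print(observation.identifier, observation.confidence)
    }
    // The second-best guess, if there is one
    let second = results.dropFirst().first
}
```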

Iftikhar Arif says:

I could not find the ResNet50 model at the link.

ur boi says:

Does anyone know how to print the object’s name and confidence percentage in the app?

Mr.Shadow 125 says:

Hey Brian, I did everything step by step, but the print function doesn’t work as effectively as it should; it takes about 3 seconds per frame. What can I do?
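One common cause of slow frames (an assumption, not a confirmed diagnosis of this case) is rebuilding the `VNCoreMLModel` inside the capture delegate for every single frame. Creating it once as a property and reusing it per frame is usually much faster:

```swift
// Built once and reused for every frame
// (assumes the Xcode-generated SqueezeNet class from the .mlmodel file)
lazy var visionModel: VNCoreMLModel? = try? VNCoreMLModel(for: SqueezeNet().model)
```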

KARTHICK T M says:

can you please create a measurement application that measures a person’s height, and also creates a 3D model from the measured sets in ARKit?

dosary2000 says:

Thank you very much.
A distinctive and wonderful video.

Lan Le says:

Hi Brian, thank you for the tutorial. I just have one small issue: I’m following the exact tutorial, but the debug window only prints if I change the orientation of my view (horizontal to vertical and vice versa). Do you know what could have caused this?

Bike Vids says:

DOOOD!

Muhammad Ehsan Mirzaei says:

Hi, does anyone know which algorithm Core ML 2 uses? And what process happens from the moment an image is detected until it is converted to a string?

initialstrike says:

I was just listening to that remix, I thought my music was playing in the back lol

Ravi Chauhan says:

hey Brian, can anybody tell me how I can take a screenshot of the live camera …

Ion Mardari says:

That’s pretty cool, thx Brian!
I was wondering: can you connect an external camera and let that camera recognize objects, or is this only possible with the built-in camera?
Do you happen to know?
Thanks!

Roman Matovsky says:

Very nice! But how can I take a picture and put a static label on it? Is it possible to combine it with this? https://www.youtube.com/watch?v=LyFijrVmfvs

Ashish giri says:

Hey Brian!! This is a fantastic video. I created my own custom ML model to accurately guess the object, but I want to add one more thing: when the user scans an object, that object’s AR image should appear. Could you please make a video on this? Or maybe someone can help me out. Thanks in advance!!

bynne Zhang says:

Awesome!

Jaglika Perkova says:

Hello, thanks for your video. Do you think this is object detection? I would rather say what you have done is object classification, because the term object detection means the application is able to locate the object while doing real-time video streaming. Please make it clear for me; I got confused.

Adriano Lopez de onate says:

Hi guys, can someone tell me how I can remove the percentage number in the application? Please help me and tell me which part of the code I have to modify.

ios CMExpertise says:

I want to fetch the object name and the object image from this process, but the Core ML model only returns the object name as output. Is there a model that returns the detected object’s image?

Ho Kachon says:

can someone help me with this problem: I have been trying to build the same application, but the camera is not showing when I run it on an actual device (iPhone 6), and I don’t know what to add in Main.storyboard.

csrkn13 says:

Amazing as always. Thank you.

Eduardo Tavares says:

Hi Brian, I am using this code to add a button, but I am not seeing the button, although if I click the place where the button was supposed to be, I do get to take a picture. I guess it is transparent in some way… what is wrong?

let myView: UIView = {
    let view = UIView()
    view.backgroundColor = .lightGray
    view.translatesAutoresizingMaskIntoConstraints = false
    return view
}()

let myButton: UIButton = {
    let bt = UIButton()
    bt.layer.cornerRadius = 30
    bt.layer.masksToBounds = true
    bt.backgroundColor = .white
    bt.translatesAutoresizingMaskIntoConstraints = false
    bt.layer.borderColor = UIColor.white.cgColor
    bt.layer.borderWidth = 5
    bt.addTarget(self, action: #selector(handleClick), for: .touchUpInside)
    return bt
}()

and I have a function with this code:
view.addSubview(myView)
myView.addSubview(myButton)

I have the code for the constraints, but I am not going to post it, otherwise this would be a long post…
thank you

amine ech-cherif says:

What is the intro song??

Marius Acsinte says:

Hey, I had a quick question for you. I wanted to use this to detect objects in a saved video instead of in real time. I don’t really know how to go about it: I have the video saved at a dataPath, and I am having trouble making a Core ML request with it. Any help and/or tips are appreciated.

Thank you for your awesome videos!
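One hedged approach for a saved video (a sketch, assuming `videoURL` points at the file and `request` is the same `VNCoreMLRequest` used for the live camera): pull frames out with AVAssetImageGenerator and run the request on each CGImage via a different handler initializer.

```swift
let asset = AVAsset(url: videoURL)
let generator = AVAssetImageGenerator(asset: asset)

// Grab the frame at the 1-second mark; loop over times for more frames
let time = CMTime(seconds: 1.0, preferredTimescale: 600)
if let cgImage = try? generator.copyCGImage(at: time, actualTime: nil) {
    // Same Vision request as the live-camera version, CGImage-based handler
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```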

Phil Martin says:

Does the Core ML object detection only work on certain devices?
