Learn how to recognize handwriting on iOS with Core ML and Swift. You are going to use Core ML, the Vision framework, and the MNIST machine learning model to recognize handwriting directly from your users' touch drawings.
📲 Slack Community: https://goo.gl/UabCFi
🛒 GAME DEVELOPMENT COURSE – JUST $20 🛒
https://www.udemy.com/2d-games-with-spritekit/?couponCode=YOUTUBE20
👏 Support me on Patreon: http://patreon.com/brianadvent
🖥 http://www.swift-tutorial-conference.net
➡️ Additional Tutorial Files:
Core ML Models: https://coreml.store
CanvasView class: https://goo.gl/AMfuSD
➡️ Finished Project on Github:
https://github.com/brianadvent/CoreMLHandwritingRecognition
➡️ Web: http://www.brianadvent.com
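The pipeline from the video can be sketched roughly as below. This is a sketch, not the tutorial's exact code: `MNIST` is the class Xcode generates from the downloaded .mlmodel file (rename to match your model), and the canvas-capture step is assumed to hand you a `CGImage`.

```swift
import Vision
import CoreML

// Build a Vision request wrapping the Core ML MNIST model.
func makeClassificationRequest() -> VNCoreMLRequest? {
    // `MNIST` is the class Xcode generates from the .mlmodel file.
    guard let visionModel = try? VNCoreMLModel(for: MNIST().model) else {
        return nil
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation],
              let best = observations.first else { return }
        // Vision returns observations sorted by confidence.
        print("Recognized digit: \(best.identifier) (\(best.confidence))")
    }
    // The canvas drawing is scaled down to the model's 28×28 input.
    request.imageCropAndScaleOption = .centerCrop
    return request
}

// Run the request on the image captured from the drawing canvas.
func classify(image: CGImage, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```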
✉️ COMMENTS ✉️
If you have questions about the video or Cocoa programming, please comment below.
I’m getting really poor results, anyone having the same problem? i.e. I write a “1” but the system recognizes it as a “4” 🙁
Not very accurate results, and I am getting this error:
⚠️ ‘flatMap’ is deprecated: Please use compactMap(_:) for the case where the closure returns an optional value. Use ‘compactMap(_:)’ instead.
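That warning is just a rename: Swift 4.1 renamed the optional-returning overload of `flatMap` to `compactMap`. A one-line fix, assuming `request` is the completed Vision request from the tutorial:

```swift
// Before (Swift 4.0):
// let observations = request.results?.flatMap { $0 as? VNClassificationObservation }
// After (Swift 4.1+): compactMap maps and drops the nils in one pass.
let observations = request.results?.compactMap { $0 as? VNClassificationObservation } ?? []
```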
Is Core ML supported on the iPhone 6?
I drew a bird and got a 4. Reminds me of the good old back-propagation jokes.
When I opened the finished project it was just an Xcode project with no code. Where is the completed project?
How can you determine the name of a variable dynamically? Is that even possible? I mean creating instances like example1, example2, example3, etc.
Who else is watching this and has no clue what is going on?
Hey,
I have a project to recognize the numbers on license plates (number plate recognition).
Can you help me with this? Do you have any tutorial on recognizing numbers from number plates?
Awesome video. But the recognition is done by a Core ML model, not by the developer; a developer like me only writes a small amount of code. What does the developer do that’s special???
For those getting poor results:
1) Use this MNIST model instead https://github.com/r4ghu/iOS-CoreML-MNIST (make sure to replace MNIST() with mnistCNN() in your setupVision function)
2) Remove the .filter from the handleClassification function
3) Remember to write your digit large enough, as the 375×375 (or whatever) canvas area is being downsized to 28×28. If your digit is small, it’s just going to look like a 1 when downsized.
Not entirely sure why, but the model linked in the video is not outputting proper confidence levels – it would only output a 1.0 confidence for one number while the others would be 0.0000000000000001 (actually much, much lower lol).
The new model I linked does give proper confidence levels; however, because there’s a lot of crossover between the numbers, a confidence requirement of 0.8 (or whatever) is too high. By removing the confidence filter, the first result of the array is still the highest-confidence number. Unfortunately this model is not great at recognizing the number 0 from my testing, but it’s pretty good with all the others.
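Putting those fixes together, the adjusted handler might look like this. This is a sketch under the assumptions above: `mnistCNN` is the class generated from the alternative model in the linked repo, and `requests`/`digitLabel` stand in for whatever properties your view controller uses.

```swift
import Vision
import CoreML

func setupVision() {
    // Fix 1: swap MNIST() for the mnistCNN model from the linked repo.
    guard let visionModel = try? VNCoreMLModel(for: mnistCNN().model) else { return }
    let request = VNCoreMLRequest(model: visionModel,
                                  completionHandler: handleClassification)
    self.requests = [request]
}

func handleClassification(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNClassificationObservation] else { return }
    // Fix 2: no confidence filter. Vision sorts observations by
    // confidence, so the first element is always the best guess.
    guard let best = observations.first else { return }
    DispatchQueue.main.async {
        self.digitLabel.text = best.identifier
    }
}
```

Fix 3 has no code: just draw the digit large enough that it survives the downscale to 28×28.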
Great tutorials, man, thank you for keeping these coming!
OHHHHH AWESOME
You uploaded an empty project to GitHub.
Thank you so much! Your channel is awesome
Can it recognize colors?? That would be really cool.
Does it recognize letters too?
I downloaded the project as-is and did not get any accurate results. 🙁