Remember when machine learning was something only data scientists could do? Those days are long gone. Thanks to tools like Apple’s CreateML, anyone with a Mac can create their own machine learning models in minutes.
In this tutorial, we’ll build a hand gesture recognition model that can distinguish between thumbs up and peace signs — no coding required!
What We’ll Build
A simple machine learning model that can recognize two hand gestures:
- 👍 Thumbs up
- ✌️ Peace sign
What You’ll Need
- A Mac with Photo Booth (pre-installed and free)
- About 10 minutes of your time
- 👋 Your hands!
Step 1: Capturing Your Training Data
1. Open Photo Booth on your Mac
2. Position yourself where there’s good lighting
3. Take around 10 photos of each gesture:
   - 10 photos doing a thumbs up
   - 10 photos making a peace sign
TIP: Vary your hand position slightly in each photo for better results
Step 2: Setting Up Your Folders
Create this folder structure on your Mac:
```
Hand Gestures
├── training
│   ├── thumbs up
│   └── peace
└── testing
    ├── thumbs up
    └── peace
```
Quick way to do this:
1. Create a main folder called “Hand Gestures”
2. Inside it, create two folders: “training” and “testing”
3. Inside both “training” and “testing”, create two folders: “thumbs up” and “peace”
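If you’re comfortable with Terminal, the same structure can be created in a single command. This is just a convenience sketch; run it from wherever you want the project folder to live:

```shell
# -p creates intermediate folders as needed; the quotes handle the spaces.
mkdir -p "Hand Gestures/training/thumbs up" \
         "Hand Gestures/training/peace" \
         "Hand Gestures/testing/thumbs up" \
         "Hand Gestures/testing/peace"
```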
Step 3: Organizing Your Photos
1. From your Photo Booth library, select your photos
2. For each gesture, distribute your photos:
   - Put 5 thumbs up photos in training/thumbs up
   - Put 5 peace sign photos in training/peace
   - Put the remaining photos in the corresponding testing folders
Pro Tip: Choose your best photos for the training set!
Step 4: Train Your Model
1. Open Create ML (use Spotlight, or in Xcode choose Xcode > Open Developer Tool > Create ML)
2. Click “New Document”
3. Choose “Hand Pose Classification”
4. In the Training Data section:
   - Drag your “training” folder onto the Training Data box
   - Drag your “testing” folder onto the Testing Data box
5. Click the “Train” button and watch the magic happen!
Testing Your Model
Once training is complete:
- Look at the evaluation metrics on the right
- Try the Preview tab to test your model with your webcam
- Make some gestures and see how well it recognizes them!
Improving Your Results
Not getting perfect results? Try these tips:
- Take more photos with different hand positions, lighting conditions, backgrounds, and angles
- Play with the “Augmentations” options in the “Train” view
- Make sure your hands are clearly visible in all photos
- Remove any blurry or unclear photos
- Add more images to the dataset
- Add an “Empty” state: a classification without any gesture
Exporting Your Model
Happy with the results?
1. Click “File > Export” (or click the “Get” icon)
2. Save your model as “HandGestureClassifier.mlmodel”
3. You can now use this model in your iOS or macOS apps!
What’s Next?
Now that you’ve created your first model, here are some exciting ways to build upon this knowledge:
Try ML Shortcuts
Want to see your hand gesture model in action right away? Check out ML Shortcuts — an app I developed that lets you control your Mac using the models you create with CreateML. Just import your model, map gestures to keyboard shortcuts, and you’re ready to go! It’s especially useful for designers working in tools like Figma, where you can create interactive prototypes controlled by real gestures (think next-level Wizard of Oz prototyping!).
Other Things to Try
- Add more gestures to recognize
- Try different hand poses or other classifier types (object, sound, body, etc.)
- Try adding negative or empty classifications (“Not thumbs up”)
- Create a simple app that uses your model
You’ve just created a machine learning model that can recognize hand gestures — and you didn’t write a single line of code! A few years ago, this would have required extensive knowledge of machine learning algorithms and mathematics. Today, you can do it in minutes with CreateML.
Pretty easy, wasn’t it? This is just the beginning of what’s possible when we combine machine learning with design and development. The ability to recognize gestures, poses, and movements opens up entirely new ways to think about user interactions and interfaces.
While tools like CreateML have made machine learning more accessible, I believe we can push this even further. Some parts of the process — like creating folder structures, gathering training data, and identifying false classifications — could be even more streamlined. That’s why I’m currently working on developing tools to make this process even more accessible for designers and developers.
Want to stay updated on these tools and learn more about making machine learning more accessible? Subscribe to my Medium profile! I’ll be sharing:
- More tutorials like this one
- Updates on new tools and workflows
- Tips for integrating ML into your design process
- Real-world examples and case studies