Marco Luthi

Create Your First Machine Learning Model – No Coding Required.

Remember when machine learning was something only data scientists could do? Those days are long gone. Thanks to tools like Apple’s CreateML, anyone with a Mac can create their own machine learning models in minutes.

In this tutorial, we’ll build a hand gesture recognition model that can distinguish between thumbs up and peace signs — no coding required!

Let’s train a model in less than 10 minutes

What We’ll Build

A simple machine learning model that can recognize two hand gestures:

  • 👍 Thumbs up

  • ✌️ Peace sign

What You’ll Need

  • A Mac with Photo Booth (pre-installed | Free)

  • Xcode (includes CreateML | Free)

  • About 10 minutes of your time

  • 👋 Your hands!

Step 1: Capturing Your Training Data

Take at least 10 pictures of each gesture you want to classify.
  1. Open Photo Booth on your Mac

  2. Position yourself where there’s good lighting

  3. Take around 10 photos of each gesture:

  • 10 photos doing thumbs up

  • 10 photos making peace signs

TIP: Vary your hand position slightly in each photo for better results

Step 2: Setting Up Your Folders

The folder names define the classifications we want to train the model on.

Create this folder structure on your Mac:

Hand Gestures
├── training
│   ├── thumbs up
│   └── peace
└── testing
    ├── thumbs up
    └── peace

Quick way to do this:

  1. Create a main folder called “Hand Gestures”

  2. Inside it, create two folders: “training” and “testing”

  3. Inside both training and testing, create two folders: “thumbs up” and “peace”
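If you’re comfortable with a little scripting, the same folder structure can be created in one go. This is just an optional sketch; it assumes you run it from wherever you want the “Hand Gestures” folder to live:

```python
# Creates the Step 2 folder structure in the current directory.
# Folder names match the tutorial exactly.
from pathlib import Path

base = Path("Hand Gestures")
for split in ("training", "testing"):
    for label in ("thumbs up", "peace"):
        # parents=True creates intermediate folders; exist_ok avoids
        # errors if you run the script twice.
        (base / split / label).mkdir(parents=True, exist_ok=True)
```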

Step 3: Organizing Your Photos

  1. From your Photo Booth library, select your photos

  2. For each gesture, distribute your photos:

For this tutorial, we’re using 5 pictures each for training and testing.
  • Put 5 thumbs up photos in training/thumbs up

  • Put 5 peace sign photos in training/peace

  • Put the remaining photos in the corresponding testing folders

Pro Tip: Choose your best photos for the training set!
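If you’d rather not drag files around by hand, the split can be scripted too. The sketch below assumes you’ve already exported your Photo Booth photos into one folder per gesture; the folder names, the JPEG extension, and the helper name are my own assumptions, so adjust them to match your setup:

```python
# Splits one gesture's photos between training and testing folders.
# Assumes the photos were exported from Photo Booth as .jpg files
# into a single source folder per gesture (hypothetical layout).
import random
import shutil
from pathlib import Path

def split_photos(source, train_dir, test_dir, n_train=5, seed=42):
    """Move n_train randomly chosen photos to train_dir, the rest to test_dir."""
    train_dir.mkdir(parents=True, exist_ok=True)
    test_dir.mkdir(parents=True, exist_ok=True)
    photos = sorted(source.glob("*.jpg"))
    # Shuffle with a fixed seed so the split is reproducible.
    random.Random(seed).shuffle(photos)
    for photo in photos[:n_train]:
        shutil.move(str(photo), train_dir / photo.name)
    for photo in photos[n_train:]:
        shutil.move(str(photo), test_dir / photo.name)
```

You would run it once per gesture, e.g. `split_photos(Path("thumbs up export"), Path("Hand Gestures/training/thumbs up"), Path("Hand Gestures/testing/thumbs up"))`.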

Step 4: Train Your Model

Now add your training and testing folders to CreateML and click on train.
  1. Open CreateML (use Spotlight, or find it via Xcode > Xcode menu > Open Developer Tool > Create ML)

  2. Click “New Document”

  3. Choose “Hand Pose Classification”

  4. In the Training Data section:

  • Drag your “training” folder onto the Training Data box

  • Drag your “testing” folder onto the Testing Data box

  5. Click the “Train” button and watch the magic happen!

Testing Your Model

Click on the Preview tab to test your model in CreateML.

Once training is complete:

  1. Look at the evaluation metrics on the right

  2. Try the Preview tab to test your model with your webcam

  3. Make some gestures and see how well it recognizes them!

Improving Your Results

Not getting perfect results?
Try these tips:

  • Take more photos with different hand positions, lighting conditions, backgrounds, and/or angles

  • Try playing with the “Augmentations” in the “Train” view.

  • Make sure your hands are clearly visible in all photos

  • Remove any blurry or unclear photos

  • Add more images to the dataset

  • Add an “Empty” state: a classification without any gesture in it.
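Before retraining, it can help to see what your dataset actually contains. Here’s a small, optional Python sketch that counts the images in each class folder; the layout matches Step 2, and the minimum of 10 photos mirrors the tip from Step 1:

```python
# Counts images per class folder and warns about classes with too
# few photos. Expects a split folder like "Hand Gestures/training"
# containing one subfolder per classification.
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".heic"}

def audit(split_dir, min_photos=10):
    """Return {class name: image count} for split_dir, warning on small classes."""
    counts = {}
    for label_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
        images = [f for f in label_dir.iterdir()
                  if f.suffix.lower() in IMAGE_EXTS]
        counts[label_dir.name] = len(images)
        if len(images) < min_photos:
            print(f"Warning: '{label_dir.name}' has only {len(images)} photos")
    return counts
```

Running `audit(Path("Hand Gestures/training"))` gives you a quick per-class tally, which makes it obvious when one gesture is underrepresented.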

Exporting Your Model

You can also export the model by clicking the “Get” icon.

Happy with the results?

  1. Click “File > Export”

  2. Save your model as “HandGestureClassifier.mlmodel”

  3. You can now use this model in your iOS or macOS apps!

What’s Next?

Now that you’ve created your first model, here are some exciting ways to build upon this knowledge:

Try ML Shortcuts

Early demo of using the app to control a Figma prototype.

Want to see your hand gesture model in action right away? Check out ML Shortcuts — an app I developed that lets you control your Mac using the models you create with CreateML. Just import your model, map gestures to keyboard shortcuts, and you’re ready to go! It’s especially useful for designers working in tools like Figma, where you can create interactive prototypes controlled by real gestures (think next-level Wizard of Oz prototyping!).

[Download for free]

Other Things to Try

  • Add more gestures to recognize

  • Try different hand poses or other classifier types (object, sound, body, etc.)

  • Try adding negative or empty classifications (e.g., “not thumbs up”)

  • Create a simple app that uses your model

You’ve just created a machine learning model that can recognize hand gestures — and you didn’t write a single line of code! A few years ago, this would have required extensive knowledge of machine learning algorithms and mathematics. Today, you can do it in minutes with CreateML.

Pretty easy, wasn’t it? This is just the beginning of what’s possible when we combine machine learning with design and development. The ability to recognize gestures, poses, and movements opens up entirely new ways to think about user interactions and interfaces.

While tools like CreateML have made machine learning more accessible, I believe we can push this even further. Some parts of the process — like creating folder structures, gathering training data, and identifying false classifications — could be even more streamlined. That’s why I’m currently working on developing tools to make this process even more accessible for designers and developers.

Want to stay updated on these tools and learn more about making machine learning more accessible? Subscribe to my Medium profile! I’ll be sharing:

  • More tutorials like this one

  • Updates on new tools and workflows

  • Tips for integrating ML into your design process

  • Real-world examples and case studies
