Candy Dispenser

June 14, 2017

At Google I/O 2017, attendees who completed the IoT codelabs were given an awesome reward: a Pico Maker Kit for Android Things. It's essentially another hardware development platform, much like the Raspberry Pi and Intel Edison.

Pico Maker Kit

The kit includes the NXP development board, Rainbow HAT, camera module, 5-inch multi-touch display, Wi-Fi antenna, and other parts you need to build a prototype. The board comes pre-flashed with the latest Android Things firmware (0.4 dev preview as of this writing).

Android Things on NXP

Right after opening the box, I decided to whip up a quick demo app that uses the camera module and the Google Cloud Vision API to detect smiles.

Observations

Here are my observations so far:

  • I can’t use TextureView or SurfaceView to render the camera output, because OpenGL is not yet enabled in the 0.4 dev preview.
  • Developing an IoT solution feels very easy, especially for existing Android developers, because you can leverage all of your existing Android knowledge: Android Studio as the IDE, paired with a very familiar language.
  • I tried to use the Mobile Vision API, which depends on Google Play services, but I couldn’t find any information on whether it can be installed on Android Things. Ideally, the Mobile Vision API would be enough for my use case, since I only need to detect smiles in an image. In the end, I used the Cloud Vision API for the detection instead.
  • Coming from Arduino, I found it really exciting to set a breakpoint and step through the code, which isn’t possible at all on Arduino.


About The Prototype

I was inspired to build a smile-activated candy dispenser after seeing one of the demos at the IoT booth.

Back at the office, our team always brings snacks to share with others, and I thought it would be fun to build this dispenser for the team. Besides, I think it’s awesome to see photos of people smiling :)

Dispenser

I ended up buying this candy dispenser from Amazon, simply because it already has a motor assembly inside. All I had to do was remove the existing sensor circuitry and attach wires to the motor.

Candy Dispenser

Here’s what the inside of the motor assembly looks like. I desoldered the existing wires and attached my own set of wires.

Wired Motor

And here’s the bottom part, where the wire exits the dispenser.

Wired Motor

I also needed to build a motor driver consisting of a transistor, a resistor, and a flyback diode to protect the GPIO pin from being fried by the motor’s back EMF.

Motor Driver

Software

The Android Things application is very straightforward. We take a picture using the standard Camera2 API, then upload the photo to Google Cloud Storage using its Java client library.

Using the Camera2 API to capture an image:

// Route the captured frame to our ImageReader and enable auto-exposure
captureBuilder.addTarget(singlePhotoReader.getSurface());
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);

session.capture(captureBuilder.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        // We only need a single still, so close the session once the capture completes
        session.close();
    }
}, null);

We then upload the byte[] output of the camera to a Google Cloud Storage bucket:

// Store the JPEG under a timestamped object name, then make it publicly readable
Blob blob = bucket.create("image-" + formatter.format(new Date()) + ".jpg", content, "image/jpeg");
storage.createAcl(blob.getBlobId(), Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER));
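
The `formatter` referenced above isn’t shown in the snippet; here’s a plausible definition (an assumption on my part, not necessarily the exact one used) that produces unique, sortable, filename-safe object names:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class ObjectNames {
    // Assumed formatter: a sortable timestamp so each capture
    // gets a unique Cloud Storage object name.
    private static final SimpleDateFormat FORMATTER =
            new SimpleDateFormat("yyyyMMdd-HHmmss", Locale.US);

    // Builds names like "image-20170614-153045.jpg"
    public static String imageName(Date date) {
        return "image-" + FORMATTER.format(date) + ".jpg";
    }
}
```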

Next, we use the Google Cloud Vision API to detect faces in the image. We can tell the Cloud Vision API to fetch the image directly from Cloud Storage using the image’s Cloud Storage URI.

Image image = new Image();
// Point the Vision API at the uploaded object via its Cloud Storage URI
image.setSource(new ImageSource().setGcsImageUri("gs://image-$$$.jpg"));
annotateImageRequest.setImage(image);

// Ask for face detection only
annotateImageRequest.setFeatures(new ArrayList<Feature>() {{
    Feature faceDetection = new Feature();
    faceDetection.setType("FACE_DETECTION");
    add(faceDetection);
}});

The Cloud Vision API returns a list of detected faces and their corresponding expressions. If all the faces are smiling, we send a HIGH to the GPIO pin attached to the motor.
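
The “all faces are smiling” decision can be reduced to checking each face’s `joyLikelihood`, which the Vision API reports as one of `VERY_UNLIKELY`, `UNLIKELY`, `POSSIBLE`, `LIKELY`, or `VERY_LIKELY`. Here’s a small sketch of that check as a plain helper (the method name and the chosen threshold are my own assumptions, not the post’s actual code):

```java
import java.util.List;

public class SmileCheck {
    // Hypothetical helper: decide whether to dispense candy, given the
    // joyLikelihood string of each detected face. We treat LIKELY and
    // VERY_LIKELY as "smiling"; anything weaker blocks the dispense.
    public static boolean allSmiling(List<String> joyLikelihoods) {
        if (joyLikelihoods.isEmpty()) {
            return false; // no faces detected: nothing to reward
        }
        for (String likelihood : joyLikelihoods) {
            if (!likelihood.equals("LIKELY") && !likelihood.equals("VERY_LIKELY")) {
                return false;
            }
        }
        return true;
    }
}
```

In the app, the input list would come from the `joyLikelihood` field of each face annotation in the Vision API response.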

Declare a PeripheralManagerService that we can use to send signals to our GPIO pins:

private PeripheralManagerService manager;

Initialize our GPIO pin as an output, with an initial state of low (no voltage):

manager = new PeripheralManagerService();
motor = manager.openGpio(MOTOR_PIN);
// Configure the pin as an output, starting in the low state
motor.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
motor.setActiveType(Gpio.ACTIVE_LOW);

To dispense candies, all we need to do is send a pulse (a HIGH followed by a LOW) to the pin for a short period of time.

motor.setValue(true);   // HIGH: motor on
Thread.sleep(1000);     // pulse length; tune this for your dispenser
motor.setValue(false);  // LOW: motor off

This causes the DC motor to run and push candies out of the hole.
