Smart Garage – Part 1: Raspberry Pi Distance Measurer

As you get older, you start to forget things. For example, I once left the garage door open until a neighbor knocked on my door. I want to build something that can alert me when the garage door has been left open for a long time. I thought about it, and there are a few options:

  1. Use a distance measurer like ultrasound or laser.
  2. Use a camera with image recognition.
  3. Use a magnet contact sensor.

I decided to go with #1 first. For #2, I would need to collect a lot of sample images up front (different light conditions, with or without cars, etc.), and figure out where to run the image processing: locally with OpenCV, or in the cloud with AWS Rekognition. #3 is feasible, but I would need to work out where to mount the magnets and where to put the Raspberry Pi.

After buying the ultrasonic module HC-SR04, it only took a while to wire everything up and write the program. However, when I took it to the garage, I realized this solution did not work properly: sound travels as spreading waves, not in a straight line. Because of the power outlet position, I could not place the Pi in the middle of the garage close to the garage door, so it sat next to the wall, facing the center of the garage door. The sound would always bounce off the edge of the wall around the garage door and return a constant distance, regardless of whether the garage door was open.
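For reference, the HC-SR04 reports distance by raising its echo pin for the round-trip time of an ultrasonic ping; halving the round trip and multiplying by the speed of sound gives the distance. A minimal sketch of that conversion (the GPIO wiring and pulse timing are omitted; on a Pi you would typically time the echo pin with RPi.GPIO):

```python
# Convert an HC-SR04 echo pulse width to distance.
# The echo pin stays high for the sound's round-trip time,
# so distance = (pulse_duration * speed_of_sound) / 2.

SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at room temperature

def pulse_to_distance_cm(pulse_duration_s):
    """Distance in cm for a given echo pulse width in seconds."""
    return (pulse_duration_s * SPEED_OF_SOUND_CM_S) / 2

# A 0.01 s echo pulse corresponds to 171.5 cm:
print(pulse_to_distance_cm(0.01))
```

This is also why the wall-edge reflection fooled the sensor: whichever surface reflects the ping first determines the pulse width, not the surface the sensor is nominally pointed at.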

Although the result was disappointing, it was fun to pick up the Pi again after so long!



A Layman’s Approach to Connected Car

I have a 2008 Volvo: no remote start, no advanced infotainment system, no companion app. As my car gets older, it reveals more problems, mostly in the form of the “check engine” light. What does it really mean? After some research, I learned about a capability called On-Board Diagnostics (OBD), a standard in all modern cars. I bought a Bluetooth OBD scanner from Amazon like this one and installed Torque Pro, an app that translates OBD codes into human-readable descriptions.

This is cool and all, but the real fun begins when you build something on top of the OBD data and make your car connected, sort of. My idea is to upload real-time car data, such as speed and fuel level, to the cloud, then ask Alexa what the status is. To accomplish that, I need a Raspberry Pi 3 (which has a built-in Bluetooth adapter) and my smartphone (serving as a hotspot). The Pi reads car data from the OBD scanner via Bluetooth and uploads it to AWS DynamoDB over the hotspot. Then I build an Alexa skill that reads the data from DynamoDB and responds:

Me: “Alexa, ask My Volvo what is my current fuel level?”
Alexa: “Your fuel level is at 60%”
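Under the hood, the scanner answers standard OBD-II PID queries. Fuel level, for example, is PID 0x2F in mode 01, and the single data byte A maps to a percentage as 100 × A / 255. A hedged sketch of decoding such a raw response (the exact framing and whitespace depend on your scanner):

```python
# Decode a raw OBD-II mode 01 fuel level response such as "41 2F 99".
# "41" = response to a mode 01 request, "2F" = fuel level PID,
# the remaining byte A gives fuel level = 100 * A / 255.

def decode_fuel_level(response):
    parts = response.split()
    if parts[0] != "41" or parts[1] != "2F":
        raise ValueError("not a fuel level response")
    a = int(parts[2], 16)
    return 100.0 * a / 255.0

# 0x99 = 153, so this prints 60.0 (percent):
print(round(decode_fuel_level("41 2F 99"), 1))
```

In practice a library such as python-OBD handles the Bluetooth serial connection and this decoding for you; the sketch just shows what the bytes mean before they land in DynamoDB.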

The step-by-step instructions are available on my GitHub. I think having these little projects after a long week of work is super fun and rewarding!

Dashboard with Meteor.js

I have found myself following a routine on my phone every morning: weather, stocks, traffic, news, social media, etc. It is basically open app, close app, repeat. Why not have a dashboard with all the information I need?

The execution is straightforward: a web app with all those widgets, running 24/7 on a Raspberry Pi.

See my code here on GitHub.

Raspberry Pi Security Camera

This is Project #2 for my Raspberry Pi camera module. Thanks to many people’s hard work on the Motion detection library, it is fairly easy to turn a Pi into a security camera.

Step 1: Install Motion. apt-get install motion only gets you version 3.2, so I manually downloaded and installed the latest version, 4.0.1:

sudo apt-get install gdebi-core
sudo gdebi pi_jessie_motion_4.0.1-1_armhf.deb

Step 2: Copy motion.conf to the current project path and modify it based on your needs. Motion has an overwhelming list of parameters, but it is important to read through them:

sudo cp /etc/motion/motion.conf ./motion.conf
sudo chown pi motion.conf

I made the following changes:

  • daemon on
  • process_id_file /home/pi/Projects/PiCam/
  • width 1280
  • height 720
  • framerate 100
  • uncomment mmalcam_name because I am using the Raspberry Pi camera
  • auto_brightness on
  • output_pictures off
  • locate_motion_mode preview
  • text_changes on
  • target_dir /home/pi/Projects/PiCam/motion
  • stream_motion on
  • stream_maxrate 100
  • ffmpeg_output_movies on
  • ffmpeg_variable_bitrate 100
  • ffmpeg_video_codec mp4
  • stream_localhost off
  • webcontrol_localhost off
  • on_movie_start python /home/pi/Projects/PiCam/

Step 3: Run motion, then open a browser at IP:8081 and see the stream!

Step 4: Now for the fun part: how can I get a notification when motion is detected? That’s what the script configured in on_movie_start does. It sends an event to IFTTT, which delivers a push notification to my phone via the IFTTT app, and then uploads the video file to Dropbox using the Dropbox API, so I can replay it on my phone.
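The IFTTT side is just a request to the Webhooks (Maker) service: POST to maker.ifttt.com/trigger/{event}/with/key/{key}, optionally passing value1–value3 in a JSON body. A minimal stdlib sketch, where the event name and key are placeholders for whatever you configure in IFTTT:

```python
import json
import urllib.request

IFTTT_KEY = "YOUR_IFTTT_KEY"     # placeholder: your Webhooks service key
IFTTT_EVENT = "motion_detected"  # placeholder: the event name in your applet

def build_ifttt_url(event, key):
    """URL for the IFTTT Webhooks trigger endpoint."""
    return "https://maker.ifttt.com/trigger/%s/with/key/%s" % (event, key)

def notify(video_path):
    # Send the video path as value1 so it shows up in the notification.
    data = json.dumps({"value1": video_path}).encode("utf-8")
    req = urllib.request.Request(
        build_ifttt_url(IFTTT_EVENT, IFTTT_KEY),
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Motion passes the movie filename to the on_movie_start command, so the script can forward it straight into notify() and then hand the same file to the Dropbox upload.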

See code here on my GitHub.

Use Raspberry Pi Camera Module for Time-Lapse Video

On Pi Day, I bought a Raspberry Pi Camera Module V2 from Amazon so I could try out Amazon Rekognition, Google Video Intelligence API, and many other fun projects that use the camera. I finally opened the camera package yesterday and started playing with it. The first project was a simple one: take many still photos and create a time-lapse video. Here are the steps:

  1. Install the camera module.
  2. Set up the Pi and put the camera facing the window.
  3. Write the following Python program:
from picamera import PiCamera
import time

# Configs
SAVE_PATH = "/home/pi/Projects/PiCam/photos/"
LOCAL_TIME_UTC_OFFSET = -7   # the Pi's clock is on UTC; adjust to local time
SUNRISE = 7                  # start capturing at 7 AM local
SUNSET = 19                  # stop capturing at 7 PM local
INTERVAL_SECONDS = 60        # one photo per minute

def isDayTime():
    hour_local = int(time.strftime("%H")) + LOCAL_TIME_UTC_OFFSET
    if hour_local < 0:
        hour_local += 24
    return SUNRISE <= hour_local <= SUNSET

def start():
    camera = PiCamera()
    while True:
        if isDayTime():
            now = time.strftime("%m%d%Y-%H%M")
            file_path = SAVE_PATH + now + ".jpg"
            camera.capture(file_path)
            print("Captured: " + file_path)
        else:
            print("Do not capture")
        time.sleep(INTERVAL_SECONDS)

print("Program starts...")
start()

Then I just let it run in the background. Today, after sunset, I stopped the program, copied all the photos over to my MacBook, and used iMovie to create a time-lapse video.

That’s it! More camera projects to come!

Write your own version of Mint, part 2

Last time I demonstrated how to write a simple Java program to process your bank statements. Now let’s add more automation by using machine learning (ML) to automatically categorize each transaction: is it groceries or entertainment?

First, I need a training data set. With six months’ worth of bank statements, I was able to get 400+ transactions by running my program and manually tagging each category. This is the generated CSV file:

Id,Description,Amount,Category
1,NETFLIX.COM NETFLIX.COM CA,10.94,Entertainment
3,COSTCO GAS #0006 TUKWILA WA,21.51,Car

Note the following changes:

  • The Id column replaces the Date column so I can use it as the row id, a unique identifier for each record.
  • The Card column is removed because I do not think it helps with prediction. However, if a certain card is always used to pay a certain type of bill, it could be a valuable feature.

The ML model consumes Description and Amount as features to predict the label, Category. ML is all about finding patterns; skimming through my data set, I think the model will do a good job predicting Grocery, Health, Phone, and Entertainment, but poorly on Restaurant and Shopping, because restaurant names can be anything. Additionally, 400+ records is quite small for a multi-class model with ~10 labels, so I expect the model to land at around 50% accuracy.

I am going to use AWS Machine Learning to train the model because it is super easy to use. I uploaded the CSV file as the training data set with the following input schema:

  "version" : "1.0",
  "rowId" : "Id",
  "rowWeight" : null,
  "targetAttributeName" : "Category",
  "dataFormat" : "CSV",
  "dataFileContainsHeader" : true,
  "attributes" : [ {
    "attributeName" : "Id",
    "attributeType" : "CATEGORICAL"
  }, {
    "attributeName" : "Description",
    "attributeType" : "TEXT"
  }, {
    "attributeName" : "Amount",
    "attributeType" : "NUMERIC"
  }, {
    "attributeName" : "Category",
    "attributeType" : "CATEGORICAL"
  } ],
  "excludedAttributeNames" : [ ]

I will use this modified recipe:

  {
    "groups": {
      "NUMERIC_VARS_QB_500": "group('Amount')"
    },
    "assignments": {},
    "outputs": [
      …
    ]
  }


to configure the model settings. AWS ML automatically splits the training data set 70%/30%: a randomly selected 70% of the data is used for training, while the remaining 30% is used for evaluation. It takes the service a few minutes to build the model and run the evaluation. In the end, my model shows a 0.600 F1 score, not bad at all!
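For context on that 0.600: F1 is the harmonic mean of precision and recall, and for a multi-class model it is commonly computed per class and then averaged (the macro average). A quick stdlib illustration on toy labels (the transactions here are made up for the example):

```python
def macro_f1(actual, predicted):
    """Macro-averaged F1: compute F1 per class, then average over classes."""
    classes = set(actual) | set(predicted)
    f1s = []
    for c in classes:
        tp = sum(1 for a, p in zip(actual, predicted) if a == c and p == c)
        fp = sum(1 for a, p in zip(actual, predicted) if a != c and p == c)
        fn = sum(1 for a, p in zip(actual, predicted) if a == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1s.append(f1)
    return sum(f1s) / len(f1s)

actual    = ["Grocery", "Grocery", "Car", "Entertainment"]
predicted = ["Grocery", "Car",     "Car", "Entertainment"]
print(round(macro_f1(actual, predicted), 3))
```

One misclassified Grocery drags down both the Grocery and Car per-class scores, which is why a rare, hard class like Restaurant can hurt the overall F1 disproportionately.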

By the end of Q2, I will use this model to predict the categories! Why Q2? Because I release our financial report once every quarter 🙂