A Layman’s Approach to Connected Car

I have a 2008 Volvo: no remote start, no advanced infotainment system, no companion app. As my car gets older, it reveals more problems, most notably a “check engine” light that stays on. What does it really mean? After some research, I learned about a capability called On-Board Diagnostics (OBD), which is a standard in all modern cars. So I bought a Bluetooth OBD scanner from Amazon and installed Torque Pro, an app that translates OBD codes into human-readable descriptions.

This is cool and all. But the real fun begins when you can build something on top of the OBD data and make your car connected, sort of. My idea is to upload real-time car data, such as speed and fuel level, to the cloud, so I can ask Alexa what the status is. To accomplish that, I need a Raspberry Pi 3 (which has a built-in Bluetooth adapter) and my smartphone (which serves as a hotspot). The Pi reads car data from the OBD scanner via Bluetooth and uploads it to AWS DynamoDB over the hotspot. Finally, I build an Alexa skill that reads the data from DynamoDB and responds:

Me: “Alexa, ask My Volvo what is my current fuel level?”
Alexa: “Your fuel level is at 60%”

The step-by-step instructions are available on my GitHub. I think having these little projects after a long week of work is super fun and rewarding!
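As a rough sketch of the Pi-side loop described above: read speed and fuel level over OBD, then write each reading to DynamoDB. The table name CarStatus, the 30-second interval, and the item layout are my own placeholder choices; the sketch assumes the python-OBD and boto3 libraries and an already-paired ELM327-style scanner.

```python
# Sketch: read OBD data on the Pi, push it to DynamoDB.
# "CarStatus" table name and 30s interval are placeholder choices.
import time

def build_item(speed_kph, fuel_pct):
    """Shape one reading into a DynamoDB item (numbers stored as strings)."""
    return {
        "CarId": {"S": "volvo-2008"},
        "Timestamp": {"N": str(int(time.time()))},
        "SpeedKph": {"N": str(speed_kph)},
        "FuelPercent": {"N": str(fuel_pct)},
    }

def run():
    import obd    # third-party: pip install obd
    import boto3  # third-party: pip install boto3
    connection = obd.OBD()  # auto-detects the Bluetooth serial port
    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    while True:
        speed = connection.query(obd.commands.SPEED)
        fuel = connection.query(obd.commands.FUEL_LEVEL)
        if not speed.is_null() and not fuel.is_null():
            dynamodb.put_item(TableName="CarStatus",
                              Item=build_item(speed.value.magnitude,
                                              fuel.value.magnitude))
        time.sleep(30)  # one reading every 30 seconds

if __name__ == "__main__":
    run()
```

The Alexa skill then only needs to query the latest item for CarId and read FuelPercent back to you.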


Dashboard with Meteor.js

I have found myself following the same routine on my phone every morning: weather, stocks, traffic, news, social media, etc. It is basically open app, close app, repeat. Why not have a dashboard with all the information I need?

The execution is very straightforward: a web app with all those widgets, served from a Raspberry Pi that runs 24/7.

See my code here on GitHub.

Raspberry Pi Security Camera

This is Project #2 for my Raspberry Pi camera module. Thanks to many people’s hard work on the Motion detection library, it is fairly easy to turn the Pi into a security camera.

Step 1: Install Motion. apt-get install motion only provides version 3.2, so I manually download and install the latest version, 4.0.1:

wget https://github.com/Motion-Project/motion/releases/download/release-4.0.1/pi_jessie_motion_4.0.1-1_armhf.deb
sudo apt-get install gdebi-core
sudo gdebi pi_jessie_motion_4.0.1-1_armhf.deb

Step 2: Copy motion.conf to the current project path and modify it as needed. Motion has an overwhelming list of parameters, but it is worth reading through all of them:

sudo cp /etc/motion/motion.conf ./motion.conf
sudo chown pi motion.conf

I made the following changes:

  • daemon on
  • process_id_file /home/pi/Projects/PiCam/motion.pid
  • width 1280
  • height 720
  • framerate 100
  • uncomment mmalcam_name because I am using Raspberry Pi camera
  • auto_brightness on
  • output_pictures off
  • locate_motion_mode preview
  • text_changes on
  • target_dir /home/pi/Projects/PiCam/motion
  • stream_motion on
  • stream_maxrate 100
  • ffmpeg_output_movies on
  • ffmpeg_variable_bitrate 100
  • ffmpeg_video_codec mp4
  • stream_localhost off
  • webcontrol_localhost off
  • on_movie_start python /home/pi/Projects/PiCam/alert.py

Step 3: Run motion, open a browser at the Pi’s IP on port 8081, and watch the stream!

Step 4: Now for the fun part: how do I get a notification when motion is detected? That is what the alert.py script does. It sends an event to IFTTT, which triggers a push notification on my phone via the IFTTT app, and then uploads the video file to Dropbox via the Dropbox API so I can replay it on my phone.

See alert.py code here on my GitHub.
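For reference, a minimal script along those lines might look like this. The IFTTT event name, Webhooks key, and Dropbox token below are placeholders, and passing the movie path as an argument is my assumption (Motion can supply the file name to its `on_movie_*` hooks via conversion specifiers such as %f):

```python
# Sketch of a motion alert script: ping IFTTT Webhooks for a push
# notification, then upload the recorded clip to Dropbox.
# IFTTT_KEY and DROPBOX_TOKEN are placeholders you must fill in.
import sys

IFTTT_KEY = "YOUR_IFTTT_WEBHOOKS_KEY"   # placeholder
DROPBOX_TOKEN = "YOUR_DROPBOX_TOKEN"    # placeholder

def ifttt_url(event):
    """Build the IFTTT Webhooks trigger URL for a named event."""
    return "https://maker.ifttt.com/trigger/%s/with/key/%s" % (event, IFTTT_KEY)

def notify(event="motion_detected"):
    import requests  # third-party: pip install requests
    requests.post(ifttt_url(event))

def upload(path):
    """Upload one file to Dropbox via the /2/files/upload endpoint."""
    import requests
    headers = {
        "Authorization": "Bearer " + DROPBOX_TOKEN,
        "Dropbox-API-Arg": '{"path": "/PiCam/%s"}' % path.split("/")[-1],
        "Content-Type": "application/octet-stream",
    }
    with open(path, "rb") as f:
        requests.post("https://content.dropboxapi.com/2/files/upload",
                      headers=headers, data=f)

if __name__ == "__main__":
    notify()
    if len(sys.argv) > 1:
        upload(sys.argv[1])
```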

Use Raspberry Pi Camera Module for Time-Lapse Video

On Pi Day, I bought a Raspberry Pi Camera Module V2 from Amazon so I could try out Amazon Rekognition, the Google Video Intelligence API, and many other fun projects that use the camera. I finally opened the camera package yesterday and started playing with it. The first project was a simple one: take many still photos and create a time-lapse video. Here are the steps:

  1. Install the camera module.
  2. Set up the Pi and put the camera facing the window.
  3. Write the following Python program:
from picamera import PiCamera
import time

# Configs
SAVE_PATH = "/home/pi/Projects/PiCam/photos/"
LOCAL_TIME_UTC_OFFSET = -7  # example value: US Pacific (DST)
SUNRISE = 6                 # example local hour to start capturing
SUNSET = 20                 # example local hour to stop capturing
INTERVAL = 60               # seconds between captures

def isDayTime():
  # The Pi's clock runs on UTC, so convert to local time first
  hour = int(time.strftime("%H")) + LOCAL_TIME_UTC_OFFSET
  hour_local = hour
  if hour < 0:
    hour_local = hour + 24
  return (hour_local >= SUNRISE and hour_local <= SUNSET)

def start():
  camera = PiCamera()
  while True:
    if isDayTime():
      now = time.strftime("%m%d%Y-%H%M")
      file_path = SAVE_PATH + now + ".jpg"
      camera.capture(file_path)
      print("Captured: " + file_path)
    else:
      print("Do not capture")
    time.sleep(INTERVAL)

print("Program starts...")
start()

Then I just let it run in the background. Today, after sunset, I stopped the program, copied all the photos over to my MacBook, and used iMovie to create the time-lapse video.

That’s it! More camera projects to come!

Write your own version of Mint, part 2

Last time I demonstrated how to write a simple Java program to process your bank statements. Now let’s add more automation by using Machine Learning (ML) to automatically categorize each transaction: is this groceries or entertainment?

First, I need a training data set. With 6 months’ worth of bank statements, I was able to get 400+ transactions by running my program and manually tagging each category. This is the generated CSV file:

Id,Description,Amount,Category
1,NETFLIX.COM NETFLIX.COM CA,10.94,Entertainment
3,COSTCO GAS #0006 TUKWILA WA,21.51,Car

Note the following changes:

  • The Id column replaces the Date column so I can use it as the row id, a unique identifier for each record.
  • The Card column is removed because I do not think it helps with model prediction. However, if a certain card is always used to pay for a certain type of bill, it could be a valuable feature.

The ML model will consume Description and Amount as features to predict the label, Category. ML is all about finding patterns. Skimming through my data set, I think the model will do a good job predicting Grocery, Health, Phone, and Entertainment, but poorly on Restaurant and Shopping, because restaurant names can be anything. Additionally, 400+ records is too small for a multi-class model with ~10 labels. So I expect the model to land at around 50% accuracy.

I am going to use AWS Machine Learning to train the model because it is super easy to use. After uploading the CSV file as the training data set, I define the following input schema:

{
  "version" : "1.0",
  "rowId" : "Id",
  "rowWeight" : null,
  "targetAttributeName" : "Category",
  "dataFormat" : "CSV",
  "dataFileContainsHeader" : true,
  "attributes" : [ {
    "attributeName" : "Id",
    "attributeType" : "CATEGORICAL"
  }, {
    "attributeName" : "Description",
    "attributeType" : "TEXT"
  }, {
    "attributeName" : "Amount",
    "attributeType" : "NUMERIC"
  }, {
    "attributeName" : "Category",
    "attributeType" : "CATEGORICAL"
  } ],
  "excludedAttributeNames" : [ ]
}

I will use this modified recipe:

  {
    "groups": {
      "NUMERIC_VARS_QB_500": "group('Amount')"
    },
    "assignments": {},
    "outputs": [
      ...
    ]
  }
to configure the model settings. The AWS ML service automatically splits the training data set 70/30: a randomly selected 70% of the data is used for training, while the remaining 30% is used for evaluation. It takes a few minutes for the service to finish building the model and running the evaluation. At the end, my model shows a 0.600 F1 score, not bad at all!
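The 70/30 split the service performs is conceptually simple; here is a plain-Python sketch of the same idea (the helper and seed are mine, not AWS code):

```python
import random

def split_dataset(rows, train_fraction=0.7, seed=42):
    """Randomly shuffle the rows, then cut into training and evaluation sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded so the split is reproducible
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

transactions = list(range(400))  # stand-in for my 400+ tagged transactions
train, evaluate = split_dataset(transactions)
print(len(train), len(evaluate))  # 280 120
```

Every record lands in exactly one of the two sets, so the evaluation score is computed on data the model never saw during training.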

By the end of Q2, I will use this model to predict the categories! Why Q2? Because I release our financial report once every quarter 🙂
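When Q2 arrives, calling the model in real time could look roughly like this. It is a sketch using boto3’s machinelearning client; the model id and realtime endpoint are placeholders (the real values come from the AWS ML console or get_ml_model), and Amazon ML expects every Record value as a string:

```python
# Sketch: categorize one transaction with the trained Amazon ML model.
# MLModelId and PredictEndpoint below are placeholders.
def build_record(description, amount):
    """Shape one transaction into an Amazon ML Record (all values strings)."""
    return {"Description": description, "Amount": str(amount)}

def predict_category(description, amount):
    import boto3  # third-party: pip install boto3
    client = boto3.client("machinelearning", region_name="us-east-1")
    response = client.predict(
        MLModelId="ml-XXXXXXXXXXX",  # placeholder model id
        Record=build_record(description, amount),
        # placeholder; copy the real endpoint from get_ml_model's EndpointInfo
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )
    return response["Prediction"]["predictedLabel"]

if __name__ == "__main__":
    print(predict_category("NETFLIX.COM NETFLIX.COM CA", 10.94))
```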

My first mobile game

Long story short, last week I participated in a one-day Hackathon on mobile games. I knew nothing about programming games, and that is what got me interested in the first place, in addition to free food, a t-shirt, and possibly winning prizes, of course. I was a one-man team, so I decided to do something dead simple and stupid: a voice-volume-controlled game, similar to Pah!

I had to go native instead of HTML5 because mobile browsers do not yet support microphone audio input or a voice recognition API. The coding process was relatively straightforward; I was able to get the majority of the game done within 10 hours. I am too old to stay up past midnight anyway. There was a learning curve with SpriteKit and AudioKit, and then I spent quite some time connecting the game elements with the Hackathon organizer’s core business.

What I found most interesting was the game design thought process:

  • Purpose: Collect points? How?
  • Levels: Easy, medium, hard? How is each level introduced?
  • Engagement: What makes the game fun or ridiculous?
  • Dynamics: What are the dynamic elements in the game?
  • End game: When does the game end? Can the game last forever?

I ended up tweaking these game elements a lot by playing the game over and over again. Because the game is voice-triggered, I am sure I sounded crazy while testing.
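The core mechanic, mapping voice volume to upward force, is easy to sketch outside SpriteKit. In Python, with thresholds that are illustrative guesses rather than the values from my game:

```python
# Sketch of a volume-controlled lift mechanic: quiet does nothing,
# shouting applies maximum upward force. Thresholds are made up.
def rms(samples):
    """Root-mean-square amplitude of a chunk of audio samples (floats in -1..1)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def lift_force(samples, quiet=0.05, loud=0.6, max_force=10.0):
    """Map volume onto 0..max_force, clamped at the loud threshold."""
    volume = rms(samples)
    if volume <= quiet:
        return 0.0
    scale = min((volume - quiet) / (loud - quiet), 1.0)
    return max_force * scale

print(lift_force([0.0] * 100))  # silence -> 0.0
print(lift_force([1.0] * 100))  # shouting -> 10.0
```

In the real game the same number feeds a physics impulse on the player sprite each frame, which is exactly the kind of value I kept re-tuning by yelling at my phone.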

I cannot show the final game, but I extracted the Minimal Viable Playable part of the code and swapped all the UI elements for something else. It is available on my GitHub.

Happy coding, happy gaming!