Build a thermal camera with Raspberry Pi and Go

The spread of the COVID-19 virus has gripped many parts of the world (especially in Asia) in the past couple of months and has affected many aspects of a lot of people’s lives, including mine. I no longer commute to work every day and try to work from home as much as possible. Non-essential meetings are cancelled (which is a good thing) and other meetings are mostly done through video or audio conferencing. Most larger-scale events like conferences have been postponed to avoid gatherings of people, which increase the risk of COVID-19 spreading.

For business continuity, my team has been segregated into Team A and Team B; they take turns using the office on alternate weeks, and never the twain shall meet. Also, at almost every office building we enter, everyone’s body temperature is checked, and anyone with a fever is not allowed in and is instead advised to see a doctor.

In one of our management meetings recently, we were discussing how to deal with the flow of people (both employees and visitors) into our various offices around the island. Temperature checking is mostly done by security guards using non-contact thermometers. This, however, is a laborious and time-consuming method, and as people head to their offices it becomes a bottleneck that ironically causes people to gather.

One of the suggestions was to use thermal imagers to do mass screening, which was quickly agreed on. However, only the offices with heavier people flow will be equipped with them since they’re not cheap: per set, they can run into tens of thousands of dollars! One of my colleagues joked that he’d like to have one for his personal office to screen everyone who comes by.

That, of course, set me off immediately.

Raspberry Pi to the rescue

I wrote a story last December on how I used my iPad Pro as a development device by attaching a Raspberry Pi4 to it as a USB gadget. That was just the start, of course. The Pi4 is much more than a small computer. It will now also be the base of my new thermal screener.


My Raspberry Pi4 (with a bright new case)

For this project I will be using Go primarily and it will be compiled and run on the Pi4. It will:

  1. Read data from the AMG8833 thermal camera sensor.
  2. Convert the data to temperature readings.
  3. Generate thermal images based on the readings.

To display the readings, the software will also act as a web server and continually display the images on a web page. Also because the Pi4 is run headless, the thermal camera software will be started and run as a systemd service.

This is how it should turn out if all is well. This is a screenshot of me sitting down on a sofa around 1 meter away from the camera and raising my right hand.

Screenshot of thermal camera output from Safari on the iPhone

The thermal camera hardware

The idea is to build a really cheap thermal camera for temperature screening. For the hardware I’m simply re-using my existing Pi4 and connecting it to an AMG8833 thermal camera sensor.

The AMG8833 is one of the cheapest thermal camera sensors around (slightly more than S$60, or US$39.95). The sensor itself is an 8x8 array of infra-red sensors from Panasonic that returns an array of 64 individual infrared temperature readings over I2C. It measures temperatures ranging from 0°C to 80°C with an accuracy of ±2.5°C and can detect a human from a distance of up to 7 meters (detection means it can sense the heat differences). It can generate up to 10 frames per second (or a frame every 100 milliseconds).

The Adafruit AMG8833 thermal camera sensor
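
Each of the 64 readings comes off the sensor as a 12-bit register value with a resolution of 0.25°C per bit. The amg8833 library I use later takes care of this conversion, but as an illustrative sketch (not the library’s actual code), it boils down to something like this:

// rawToCelsius converts one 12-bit AMG8833 pixel register value into degrees Celsius.
// Each bit is 0.25°C, and negative readings are stored as 12-bit two's complement.
// (Illustrative sketch only; the amg8833 library does this conversion for us.)
func rawToCelsius(raw uint16) float64 {
	v := int(raw & 0x0FFF)
	if v&0x0800 != 0 { // the sign bit of the 12-bit value is set
		v -= 0x1000
	}
	return float64(v) * 0.25
}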

The pinout connections from the AMG8833 are quite straightforward. I’ll be using only 4 out of the 6 pins (there’s a quick wiring check right after the list).

  • Vin – this is the power pin. The sensor uses 3.3V so I connect it to the corresponding 3.3V pin (pin 1) on the Pi4.
  • 3Vo – this is the 3.3V output, I won’t be using it.
  • GND – this is the common ground for power and logic. I connect it to the ground pin (pin 9) on the Pi4. There is more than one ground pin on the Pi4; you can use any of them.
  • SCL – this is the I2C clock pin and we connect it to the corresponding SCL pin on the Pi4 (pin 5).
  • SDA – this is the I2C data pin, and we connect it to the corresponding SDA pin on the Pi4 (pin 3).
  • INT – this is the interrupt-output pin. It is 3V logic and is used to detect when something moves or changes in the sensor vision path. I’m not using it.
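
Before the software can talk to the sensor, I2C needs to be enabled on the Pi4 (through raspi-config). A quick way to verify the wiring is i2cdetect; the Adafruit AMG8833 board defaults to address 0x69:

$ sudo raspi-config              # enable I2C under the interfacing options
$ sudo apt install i2c-tools
$ sudo i2cdetect -y 1            # the sensor should show up at address 0x69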


This is how it looks after connecting the pins.
Attaching the AMG8833 to my Pi4

To stand up the thermal camera, I took some discarded foam package cushion and carved a scaffolding mini-tower to hold it.


Building a scaffolding tower with some discarded foam package cushion

And we’re done! It looks kind of scrappy but it’ll do.

The thermal camera software

Let’s see how the software works next.

For this project I used 2 external libraries. The first is the amg8833 project, which I took from https://github.com/jweissig/amg8833. The project itself is a port of Adafruit’s AMG88xx library. The second is a pure Go image resize library, https://github.com/nfnt/resize. The rest is all from the Go standard library.
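
If you want to follow along, both libraries can be pulled in the usual way:

$ go get github.com/jweissig/amg8833
$ go get github.com/nfnt/resize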

Variables and parameters

We start off with the variables used, as well as the list of parameters that we capture from the command line.

// used to interface with the sensor
var amg *amg8833.AMG88xx

// display frame
var frame string

// list of all colors used 1024 color hex integers
var colors []int

// temperature readings from the sensor 8x8 readings
var grid []float64

// refresh rate to capture and display the images
var refresh *int

// minimum and maximum temperature range for the sensor
var minTemp, maxTemp *float64

// new image size in pixel width
var newSize *int

// if true, will use the mock data (this can be used for testing)
var mock *bool

// directory where the public directory is in
var dir string

func init() {
	// capture the user parameters from the command-line
	refresh = flag.Int("f", 100, "refresh rate to capture and display the images")
	minTemp = flag.Float64("min", 26, "minimum temperature to measure from the sensor")
	maxTemp = flag.Float64("max", 32, "max temperature to measure from the sensor")
	newSize = flag.Int("s", 360, "new image size in pixel width")
	mock = flag.Bool("mock", false, "run using the mock data")
	flag.Parse()
	var err error
	dir, err = filepath.Abs(filepath.Dir(os.Args[0]))
	if err != nil {
		log.Fatal(err)
	}
}

parameters.go

Let’s look at the variables. amg is the interface to the sensor using the amg8833 library. I use frame to store the resized image captured from the sensor, which is then used by the web server to serve out to the web page. frame is a base64 encoded string.

The colors slice is a list of all the colors used in the image. This variable is declared here but populated in the heatmap.go file. grid is a slice of 64 floating point temperature readings that are read from the AMG8833 8x8 sensor.

I capture a few parameters from the user when I start the software from the command line. refresh is the refresh rate to capture and display the images. By default it’s 100 milliseconds. minTemp and maxTemp are the minimum and maximum temperatures we want to show on the image. newSize is the width of the final image that’s shown on the browser.

Finally, the mock parameter is used to determine if we are actually capturing from the sensor or using mock data. I used this when I was developing the software because it was easier to test.
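
For example, assuming the compiled binary is named thermalcam (the name is up to you), the software can be started with something like this, adding -mock if you want to run it against the mock data instead of the sensor:

$ ./thermalcam -min=27 -max=31 -f=50 -s=480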

Main

I start by checking if the user wants to use the mock data or the actual data captured from the sensor. If the user wants to capture from the sensor, I initialize the amg and then start a goroutine on startThermalCam.

The startThermalCam function is simple: it just grabs the temperature readings into grid and then waits for a period of time, as defined by refresh.

The rest of the main function is just setting up the web server. I only have 2 handlers for the web server. The first is for the web page, and the second returns the image captured from the thermal camera.

func main() {
	if *mock {
		// start populating the mock data into grid
		go startMock()
		fmt.Println("Using mock data.")
	} else {
		// start the thermal camera
		var err error
		amg, err = amg8833.NewAMG8833(&amg8833.Opts{
			Device: "/dev/i2c-1",
			Mode:   amg8833.AMG88xxNormalMode,
			Reset:  amg8833.AMG88xxInitialReset,
			FPS:    amg8833.AMG88xxFPS10,
		})
		if err != nil {
			panic(err)
		} else {
			fmt.Println("Connected to AMG8833 module.")
		}
		go startThermalCam()
	}

	// setting up the web server
	mux := http.NewServeMux()
	mux.Handle("/public/", http.StripPrefix("/public/", http.FileServer(http.Dir(dir+"/public"))))
	mux.HandleFunc("/", index)
	mux.HandleFunc("/frame", getFrame)
	server := &http.Server{
		Addr:    "0.0.0.0:12345",
		Handler: mux,
	}
	fmt.Println("Started AMG8833 Thermal Camera server at", server.Addr)
	server.ListenAndServe()
}

// start the thermal camera and start getting sensor data into the grid
func startThermalCam() {
	for {
		grid = amg.ReadPixels()
		time.Sleep(time.Duration(*refresh) * time.Millisecond)
	}
}

main.go
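
The startMock function launched in main isn’t listed here; a minimal stand-in could look something like this (a hypothetical sketch, assuming math/rand and time are imported):

// startMock continually fills grid with synthetic temperature readings
// between minTemp and maxTemp so the rest of the pipeline can be tested
// without the sensor attached (hypothetical sketch, not the original code)
func startMock() {
	for {
		mock := make([]float64, 64)
		for i := range mock {
			mock[i] = *minTemp + rand.Float64()*(*maxTemp-*minTemp)
		}
		grid = mock
		time.Sleep(time.Duration(*refresh) * time.Millisecond)
	}
}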

Web

The first handler, index, uses the public/index.html template, passing it the refresh value. It also triggers a goroutine that starts generating frames into the frame variable.

The getFrame handler takes this frame (which is a base64 encoded string) and pushes it out to the browser.

func index(w http.ResponseWriter, r *http.Request) {
	t, _ := template.ParseFiles(dir + "/public/index.html")
	// start generating frames in a new goroutine
	go generateFrames()
	t.Execute(w, *refresh)
}

// push the frame to the browser
func getFrame(w http.ResponseWriter, r *http.Request) {
	str := "data:image/png;base64," + frame
	w.Header().Set("Cache-Control", "no-cache")
	w.Write([]byte(str))
}

web.go

the HTTP handlers

The generateFrames function continually generates the image and places it into the frame variable. This image is encoded as a PNG file and then further encoded as a base64 string to be displayed as a data URL.

// continually generate frames at every period
func generateFrames() {
	for {
		img := createImage(8, 8) // from 8 x 8 sensor
		createFrame(img)         // create the frame from the sensor
		time.Sleep(time.Duration(*refresh) * time.Millisecond)
	}
}

// create a frame from the image
func createFrame(img image.Image) {
	var buf bytes.Buffer
	png.Encode(&buf, img)
	frame = base64.StdEncoding.EncodeToString(buf.Bytes())
}

generate-frame.go
generating the frames to populate the frame variable

Create images

The createImage function is where the main action is. Remember, the sensor captures data as an array of 64 temperature readings in the grid variable. Creating an image from this is simple.

First, I use the image standard library to create a new RGBA image. Then, for each temperature reading, I get the index of the corresponding color and use that to look up the hex color integer in the colors slice.
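
As a quick example, with the default range of 26°C to 32°C and 1024 palette entries, a reading of 29°C maps to index int((29 - 26) * 1023 / 6) = 511, right in the middle of the blue-to-red palette.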

package main

// this is the color heatmap used to display the image, from blue to red
// there are 1024 values

func init() {
	colors = []int{
		0x0000ff, 0x0001ff, 0x0002ff, 0x0003ff, 0x0004ff, 0x0005ff, 0x0006ff, 0x0007ff,
		0x0008ff, 0x0009ff, 0x000aff, 0x000bff, 0x000cff, 0x000dff, 0x000eff, 0x000fff,
		0x0010ff, 0x0011ff, 0x0012ff, 0x0013ff, 0x0014ff, 0x0015ff, 0x0016ff, 0x0017ff,
		...
	}
}

heatmap.go
initializing the color hex integer array

With that, I grab the red, green and blue values from the integer and set them into consecutive elements of the Pix attribute of the image. If you remember from the A gentle introduction to genetic algorithms story I wrote earlier, Pix is a byte slice with 4 bytes representing a pixel (R, G, B and A, each represented by a byte), so an 8x8 image takes up 8 × 8 × 4 = 256 bytes. The red, green and blue bytes fit nicely into them, and by the time the loop ends we have an 8 pixel by 8 pixel thermal image!

Of course, this is way too small to show on the screen, so we use the resize library to resize the image to a more respectable size. Notice that it’s not just making the pixels larger; we use an algorithm (specifically Lanczos resampling) to create a much smoother image when enlarged.

// create an enlarged image from the sensor
func createImage(w, h int) image.Image {
	// create a RGBA image from the sensor
	pixels := image.NewRGBA(image.Rect(0, 0, w, h))
	n := 0
	for _, i := range grid {
		color := colors[getColorIndex(i)]
		pixels.Pix[n] = getR(color)
		pixels.Pix[n+1] = getG(color)
		pixels.Pix[n+2] = getB(color)
		pixels.Pix[n+3] = 0xFF // alpha channel, fully opaque
		n = n + 4
	}
	dest := resize.Resize(uint(*newSize), 0, pixels, resize.Lanczos3)
	return dest
}

// get the index of the color to use
func getColorIndex(temp float64) int {
	if temp < *minTemp {
		return 0
	}
	if temp > *maxTemp {
		return len(colors) - 1
	}
	return int((temp - *minTemp) * float64(len(colors)-1) / (*maxTemp - *minTemp))
}

// get the red (R) from the color integer i
func getR(i int) uint8 {
	return uint8((i >> 16) & 0x0000FF)
}

// get the green (G) from the color integer i
func getG(i int) uint8 {
	return uint8((i >> 8) & 0x0000FF)
}

// get the blue (B) from the color integer i
func getB(i int) uint8 {
	return uint8(i & 0x0000FF)
}

create-image.go

Displaying on the browser

The final bit is to display it on the browser. Here’s the HTML template that displays the image.

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <script src="/public/jquery-3.3.1.min.js"></script>
        <script type="text/javascript">
            setInterval(function() {
                $.get('/frame', function(data) {
                    $('#image').attr('src', data);
                });
            }, {{ . }});

        </script>
    </head>

    <body>
        <img id="image" src="" style="display: block;"/>
    </body>
</html>

index.html

If you’re not familiar with Go templates, the {{ . }} is just the value that is replaced in the final HTML that is displayed by the browser. In this case, it’s the value (in milliseconds) of how often the image should be refreshed.
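
For example, with the default -f=100, the rendered JavaScript ends up as }, 100);, so the browser polls /frame every 100 milliseconds.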

That’s it, the software part is done!

Running the software

Let’s take a look at running the software. Remember this is going to be run on the Pi4.
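
One way to build it is directly on the Pi4 with go build; if you prefer to cross-compile from another machine and copy the binary over, something like this works (assuming a 32-bit Raspberry Pi OS; the binary and host names are just examples):

$ go build -o thermalcam                                 # on the Pi4 itself
$ GOOS=linux GOARCH=arm GOARM=7 go build -o thermalcam   # or cross-compile elsewhere
$ scp thermalcam pi@raspberrypi.local:~/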


running the software on the Pi4

The larger window in this screenshot is a VNC session into the Pi4, while the smaller browser window on the side is Safari running on my MacBook Pro. I was sitting on a chair and raising my right hand.

Making it a service

The software runs but we need to start it from the command-line. As an IoT device this is not ok. It should start when the device is powered on and we shouldn’t need to log into the device, start up the command line and type in the command to start it!

This means the thermal camera software should be run as a service on startup. To do this, I’m going to make it into a systemd service. Here are the steps:

  1. Go to the directory /etc/systemd/system
  2. Create a file named thermalcam.service with the following content (this is the unit file). Remember, you need sudo rights to do this. The important part is ExecStart, which specifies the command that will be executed:
[Unit]
Description=Thermal Camera
[Service]
Type=simple
ExecStart=/home/sausheong/go/src/github.com/sausheong/thermalcam/thermalcam -min=27.75 -max=30.75 -f=50 -s=480
[Install]
WantedBy=multi-user.target

3. Give the file the necessary permissions:

$ sudo chmod 644 /etc/systemd/system/thermalcam.service
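
If systemd was already running when you created the unit file, you may also need to tell it to reload its configuration before starting the service:

$ sudo systemctl daemon-reload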

4. Now you can start the service:

$ sudo systemctl start thermalcam

5. You can check the status here:

$ sudo systemctl status thermalcam

If it’s working, the status should show the service as active (running). You can start or stop the service using systemctl.


6. Finally to make sure the service starts whenever the Pi4 is powered on:

$ sudo systemctl enable thermalcam

Now you can place the thermal camera anywhere and the software will start as soon as the Pi4 is powered on! Here it is in action on a shelf, next to my TV. I used a battery pack to power the Pi4, but I can also use a USB power adapter to do the same.

The thermal camera in action, powered by a portable battery pack

Let’s see how this looks on the iPhone Safari browser.

Further thoughts

You might notice the picture quality is not that amazing. That’s to be expected; the sensor is, after all, only an 8x8 grid. With 64 data points, it’s not so easy to make out the details. There are definitely better thermal camera sensors out there. Here are some that I found from sniffing around:

  1. FLIR Lepton — https://www.flir.com/products/lepton/ (more than $100, so out of my price range)
  2. MLX90640 — https://www.adafruit.com/product/4407 (it was out of stock when I was looking around)

I have a feeling that the MLX90640 will be better; after all, it is a 32x24 pixel sensor, and with 768 data points, that’s 12x more than the AMG8833. Unfortunately I couldn’t get hold of one since it’s out of stock everywhere I looked.

The software can detect people but it can’t really be used for thermal screening because it needs to be tuned to the correct temperature to screen. Unfortunately (or fortunately) I don’t have any way of doing this.

So far I’m only using this for thermal imaging, but you can think of other things to do with it, like detecting people or detecting whether certain equipment is too hot, and so on.

Knock yourself out!

#raspberry-pi #pi #Go
