
Baking Pi’s 4: Working with Touch Pi

I have taken some time off from writing about my Raspberry Pi prototyping, but I have been busy building prototypes and a GUI program for the Raspberry Pi. I started with the wonderful Touch Pi design from the people at Adafruit, which adds a resistive touchscreen to the Pi. I made many changes to this system, which I will describe in future posts, but essentially this is what the prototype looks like:

[Photo: My version of Touch Pi]

For my particular setup, where I wanted to use a large HDMI screen to write my program and the touchscreen to test it, I needed to figure out a bunch of things that I will explain in this post. Please note that this is a fairly technical post aimed at helping others who are running into similar challenges. I will have more application-focused posts in the future (once I actually figure out the rest of my issues!).

For my implementation, I am using a Raspbian Jessie installation. For my particular GUI (which I will discuss in a future post), I used pygame to interact with both the touchscreen and the GPIO pins.
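To give a flavour of how this fits together, here is a minimal sketch of a pygame loop that reacts to both touch input (which the PiTFT delivers as ordinary mouse events once the touchscreen driver is configured) and a GPIO push button. This is not my actual program; the pin number and screen size are placeholder values that you would change to match your own wiring and display.

import pygame
import RPi.GPIO as GPIO

BUTTON_PIN = 23  # example pin only; change to match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

pygame.init()
screen = pygame.display.set_mode((320, 240))  # PiTFT resolution used as an example

running = True
while running:
    # Touches on the PiTFT show up as mouse events in pygame.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            print 'Touched at', event.pos
    # The button is wired with a pull-up, so a press reads LOW.
    if GPIO.input(BUTTON_PIN) == GPIO.LOW:
        print 'Button pressed'
    pygame.time.wait(50)

GPIO.cleanup()
pygame.quit()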

Detecting HDMI vs. Touchscreen

In any case, the first issue I had to figure out was how to automatically detect whether the Pi was connected to an HDMI screen and, if so, run the GUI there. If an HDMI cable was not detected, I wanted the GUI to run on the touchscreen instead. One thing to note: to the best of my knowledge, you can't run the GUI simultaneously on the touchscreen and the HDMI screen because the system can't handle input from two sources.

The solution I came up with involves modifying (or creating if it doesn’t exist) the rc.local file in the /etc/ folder:

cd /etc

sudo nano rc.local

Then add these lines to the script:

if (/usr/bin/tvservice -s | /bin/egrep 'HDMI|DVI'); then
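     # An HDMI or DVI display was detected: use the HDMI X11 config and map console 1 to framebuffer 0 (the HDMI output).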

     sudo cp /home/pi/displays/HDMI/99-fbdev.conf /etc/X11/xorg.conf.d

     con2fbmap 1 0

     echo "rc.local HDMI selected"

else
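     # No HDMI detected: use the TFT X11 config and map console 1 to framebuffer 1 (the PiTFT).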

     sudo cp /home/pi/displays/TFT/99-fbdev.conf /etc/X11/xorg.conf.d

     con2fbmap 1 1

     echo "rc.local TFT selected"

fi

exit 0

Thanks to these forum discussions!

Note: This might cause a problem if you plan to use a Raspberry Pi Zero. In case your RPi Zero cannot use the display, comment out the added lines above and also copy the 99-fbdev.conf file from the HDMI folder to the X11 folder, and everything should work on startup.

Starting programs on GUI startup

The next interesting challenge was to run my custom Python code (or the shell script that launches it) on startup. I particularly wanted the program to run once the GUI is loaded. If you want your script to run earlier in the boot process (with no graphical support), please see my previous post here.

There are many forum posts about where and how to change files to make this happen but it is important that you locate and change the right file for your particular setup. In Raspbian Jessie, you want to change the autostart file in the .config/lxsession/LXDE-pi directory:

cd .config/lxsession/LXDE-pi

sudo nano autostart

In this file you have to add your particular program path to the end of the list. So it will look something like this:

@lxpanel --profile LXDE-pi

@pcmanfm --desktop --profile LXDE-pi

@xscreensaver -no-splash

@lxterminal -e /home/pi/Foad/myProgramLaunch.sh

The last line specifies that I want my script to be launched from a terminal (lxterminal). Depending on your needs, you might not need a terminal at all, as shown below.
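For example, if you don't need to see terminal output, a minimal alternative (assuming the launch script itself starts your program) would be to list the script directly:

@/home/pi/Foad/myProgramLaunch.sh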

Making a Desktop Shortcut

A final bonus tip (thanks to the information here)! If you would like to make a desktop shortcut with an icon that you can double-click to start your program, you need to create a file in the Desktop directory:

cd Desktop

sudo nano mydesktop.desktop

In this file, you specify your icon, the program to launch, and other properties (the icon path below is just a placeholder; point it at a real image file on your Pi):

[Desktop Entry]

Name=reMixer

Comment=This is the reMixer program

# Icon should point to an image file (the .png path below is just a placeholder);
# Exec points to the script or program that runs when the shortcut is opened.
Icon=/home/pi/Foad/myProgramIcon.png

Exec=/home/pi/Foad/myProgramLaunch.sh

Type=Application

Encoding=UTF-8

Terminal=false

Categories=None;

You have to make sure that your script has execute permission (set it with chmod +x) and you are set!

Baking Pi’s 2: Touch HAT

In the “Baking Pi’s” series of posts, I describe my adventures with the Raspberry Pi embedded computer. Each segment describes a specific setup. This series is fairly technical and is meant to be used to create digital prototypes.

[Photo: Raspberry Pi with Touch HAT and fruit!]

In this post, I will describe my experience using a Raspberry Pi with a touch sensor to create an interactive sound box. This post assumes that you have a Raspberry Pi B+ set up and connected (either to a monitor or running in headless mode). See my previous post or the official Raspberry Pi website for how to set up your Pi.

Using Pi to create an interactive sound box

A really fun activity is playing music using fruit and other conductive things (even humans)! You can do this by connecting the fruit to a computer using a touch sensor that in effect treats each piece as a key on the keyboard, making it possible to trigger sounds when it is touched. Easy ways to do this include using a Makey Makey board or a Bare Conductive touch board. In the past, we connected an MPR121 capacitive touch sensor to a Raspberry Pi to create TalkBox, a DIY communication board for non-verbal users. One of the challenges of putting together TalkBoxes was connecting all the wires from the MPR121 breakout board to the Raspberry Pi. Recently, capacitive touch HATs designed specifically for the Raspberry Pi have become available that make it much easier to create an interactive sound box with the Raspberry Pi.


Preparing the Touch HAT

For this project, you need a Raspberry Pi (already set up with a Linux distribution and connected to the Internet; see above) and Adafruit's capacitive touch HAT. You have to solder the 2×20 header (included in the kit) onto the touch HAT, which then fits perfectly on top of the Raspberry Pi B+ (for older Raspberry Pis you need a different, taller header; see here).

[Photo: Pi wearing its new HAT!]

Once you have soldered the header and placed the HAT on the Pi, you need to install the software following the instructions here. The one thing that is missing from the tutorial is that you also have to enable the I2C protocol so that the Raspberry Pi can talk to the HAT. To do this, run sudo raspi-config and choose Advanced Options. Here, you can enable I2C support and set it to load by default. Once you reboot, the libraries should be loaded.

The following is an example of code that runs a simple version of TalkBox (with a small, fixed set of sounds):

# Copyright (c) 2016 Foad Hamidi
# Author: Foad Hamidi based on code originally written by Tony Dicola from Adafruit Industries
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

import sys
import time
import pygame.mixer

import Adafruit_MPR121.MPR121 as MPR121


print 'TalkBox Test'

# Create MPR121 instance.
cap = MPR121.MPR121()

# Initialize communication with MPR121 using default I2C bus of device, and 
# default I2C address (0x5A).  On BeagleBone Black will default to I2C bus 0.
if not cap.begin():
    print 'Error initializing MPR121.  Check your wiring!'
    sys.exit(1)

#Initialize the sound system, and create empty lists of channels and sounds
pygame.mixer.init(48000, -16, 1, 1024)
soundChannelList = [None] * 12
soundList = [None] * 12

#Populate your lists of sounds and channels. Only a handful of mixer channels
#are used, so some sounds share a channel. Channels are used so that sounds can
#be played simultaneously.
#I'm using some semi-random voice samples from a previous project.
soundNames = ["happy", "excited", "feeling", "proud", "sad", "sick", "tired",
              "good_morning_f", "goodbye_f", "need_break", "thank_you_f", "sunny"]
for i, name in enumerate(soundNames):
    soundList[i] = pygame.mixer.Sound("/home/pi/Music/Feeling/" + name + ".wav")
    # Sounds 1-7 get channels 1-7; the remaining sounds wrap around to channels 1-5.
    soundChannelList[i] = pygame.mixer.Channel((i % 7) + 1)
print "Soundboard Ready."

# Main loop to print a message and play a sound every time a pin is touched.
print 'Press Ctrl-C to quit.'
last_touched = cap.touched()
while True:
    current_touched = cap.touched()
    # Check each pin's last and current state to see if it was pressed or released.
    for i in range(12):
        # Each pin is represented by a bit in the touched value.  A value of 1
        # means the pin is being touched, and 0 means it is not being touched.
        pin_bit = 1 << i
        # First check if transitioned from not touched to touched.
        if current_touched & pin_bit and not last_touched & pin_bit:
            print '{0} touched!'.format(i)
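            # Pin i plays entry i-1 of the lists, so pin 1 plays the first sound and pin 0 wraps around (index -1) to the last sound.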
            soundChannel = soundChannelList[i-1] 
            sound = soundList[i-1]
            soundChannel.play(sound)
            print 'Sound played'
        # Next check if transitioned from touched to not touched.
        if not current_touched & pin_bit and last_touched & pin_bit:
            print '{0} released!'.format(i)
    # Update last state and wait a short period before repeating.
    last_touched = current_touched
    time.sleep(0.1)

    # Alternatively, if you only care about checking one or a few pins you can 
    # call the is_touched method with a pin number to directly check that pin.
    # This will be a little slower than the above code for checking a lot of pins.
    #if cap.is_touched(0):
    #    print 'Pin 0 is being touched!'
    

Finally, adjust the volume of the speakers using the following command at the terminal prompt:

amixer sset PCM,0 90%

which sets the volume to the indicated percentage of its maximum (e.g., 90% in the above example).
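Alternatively, if you would rather control loudness from within the Python program, pygame sounds have a set_volume method that takes a value between 0.0 and 1.0. For example, to play the first sound in soundList from the code above at 90% volume:

soundList[0].set_volume(0.9)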

Here’s a video of the setup used to make two pieces of fruit have a conversation:

Note: If you are getting weird static sounds out of the speaker connected to your analog audio output, it's probably because of the power source. Try different ones and hopefully you will find one that doesn't affect the audio performance too much. Another possibility is that you have not plugged in your speaker properly. Check it again before giving up on your speakers!

Finally, if you want this code to be loaded at startup, follow these instructions.


Baking Pi’s 1: Set Up

In the past few weeks, I’ve been playing around with the amazing Raspberry Pi embedded computer platform. In the “Baking Pi’s” series of posts, I will describe how to setup and use a Raspberry Pi for different applications.

Raspberry Pi is a powerful and affordable credit-card sized computer. With some basic programming and networking skills and a lot of patience, you can use it to set up customized physical computing projects. In this first post, I will describe how to set up a Raspberry Pi, and in future posts I will describe how to use it for fun and useful projects, including a networked Rafigh, a simple interactive sound box and other things. I will work with a variety of Pis including the Raspberry Pi 1, Pi Zero and Raspberry Pi B+. The setups are very similar and I will mention when there are differences. I will also use a Mac computer (although having access to a Linux machine will be useful).

Note: These posts are fairly technical, and if they are not your cup of tea, rest assured that I plan to get back to my travel and life posts as soon as I have fun adventures to report 🙂

[Photo: Raspberry Pi family (bottom to top): Raspberry Pi 1, Raspberry Pi B+ and Raspberry Pi Zero]

Setting up the Raspberry Pi Software

First, we need to set up the Raspberry Pi to work in headless mode, meaning that we can connect to it from a different computer and don't need to connect it to a monitor, keyboard and mouse. There are many advantages to setting up your Pi in this mode: you won't need extra peripherals and you can embed the Pi in physical computing projects. In the case of the networked Rafigh, this step is essential because you will need to be able to leave the Raspberry Pi next to growing mushrooms and debug and control it remotely.

Before I start with the software, a note on how to power the Pi. I've been using three ways of powering the Pi, with varying degrees of success. Surprisingly, many of the problems that arise with Raspberry Pis are related to power supplies that are not adequate or consistent, depending on what other peripherals (wifi dongle, lights, speakers) you connect to the Pi.

The usual way is to use a good quality 5V, 2A wall plug (like this one). The trick is that some wall plugs claim they can provide up to 2A of current but in reality are inconsistent, so investing in a better quality adapter is worth it. The Pi does work with adapters that provide less than 2A, though, and depending on your project needs a portable battery might be enough. The other two ways I have powered the Pi are 1) a consumer grade Duracell portable power supply and 2) a combination of a Lithium Ion Polymer battery and a PowerBoost 500 charger. To me, this last combination is very promising for wireless projects, but I am still experimenting with it and not sure how stable it is.

Once you have decided how to power the Pi, you need to prepare your SD card. You can usually get away with a 4GB MicroSD card, but I recommend using an 8GB one, as you might be tempted to use more space in the future. Setting up the SD card is pretty straightforward if you are comfortable using the command line (Terminal) on a Mac. First, you need to download a Linux distribution. I recommend getting the latest version of Raspbian Jessie from here. Also, if you are comfortable using torrents, I recommend getting the image file via torrent because it is faster and less likely to fail in the middle of the download.

Once you have the zipped image file, you need to unzip it into an .img file. After this, open up the Terminal on your Mac. If you have an SD card reader/writer on your Mac, plug your empty SD card in there. Alternatively, you can use an external SD card reader/writer. My on-board SD card reader/writer has become a bit faulty and sometimes doesn't let me write to an SD card. A trick I've used is to set the lock switch on the SD card to the middle position, which sometimes allows the write to happen. This method doesn't work well if you are trying to write to a MicroSD card, so I usually prefer using an external card reader.

Next, we want to copy the image to the SD card. Note that this process will erase all data on the card, so be careful not to put the wrong card in the reader! In Mac, open a Terminal window and (with the SD card in the reader) type:

diskutil list

This will list all the devices on your Mac and their partitions. You want to unmount the partitions on the SD card. Be careful about this step: you don't want to mix up your disks! First, you should identify your disk. Usually it is easy to do that by looking at the size of the disk. Another method is to run the diskutil list command before you insert the SD card, note what disks are there, and then insert the SD card and note which new disk was added. In any case, once you know that your SD card is, for example, disk3, you should unmount all of its partitions (you can identify these by the letter-number combinations that follow the disk name). For example, to unmount partition 1 (i.e., s1) on disk3 use the following command.

sudo diskutil unmount /dev/disk3s1

Once all the partitions are unmounted, you are ready to start the image copying process. You can use the following command:

sudo dd bs=1m if=~/Desktop/2015-11-21-raspbian-jessie.img of=/dev/disk3

Where “2015-11-21-raspbian-jessie.img” is the name of your image file that is stored on the desktop. If you have problems with the “bs=1m” parameter, use “bs=1M” or “bs=4M”. Also, note that the name of the destination drive refers to the whole drive, not to a single partition (e.g., “disk3” not “disk3s1”).

This operation will take a while. In Mac, you can use Ctrl+t to see the status of the write operation.

Update: Since writing this article, I came across a very handy free little program called ApplePi-Baker that I can highly recommend for preparing your SD card. A neat feature of this program is that it makes backing up and restoring copies of your SD card easy. Also, when I had trouble formatting my SD cards in the past (including encountering the notorious "bad superblock" problem!), this program allowed me to correctly reformat the SD card and saved me from throwing it away thinking it was broken beyond repair!

Connecting to the Raspberry Pi

Once you have the SD card ready, you are on your way to a working Raspberry Pi! Now we need to set up the Raspberry Pi to work in "headless" mode. Again, there are multiple ways to do this and I will briefly cover three options.

The first option is to plug your Pi directly into your router. If your router is set up to use DHCP (i.e., assign IP addresses dynamically), then you should see the Pi on your local network within a couple of minutes. You can use the following terminal command on your Mac to find out the Pi's IP address:

arp -a

A second method is to connect the Pi directly to your Mac via an Ethernet cable. The good thing about this method is that once you set it up, you don't need an internet connection to access the Pi. On the negative side, this method might significantly slow down your wifi connection while you are connected to the Pi. For this method to work, you have to set up your Mac first: you need to set Ethernet to DHCP. Go to Network Settings, click on Ethernet, and under Configure IPv4 select the DHCP option.

Next, go to System Preferences and under Internet and Wireless, select Sharing. Here, enable Internet Sharing through Ethernet. Now, if you plug in the Pi via ethernet, you should see it after a while on the network using:

arp -a

Finally, the third way to connect to the Pi is via wifi. You can connect the Pi to the internet wirelessly using a compatible wifi dongle. This is an excellent (although sometimes finicky) option and with some tweaking should work well. To do this, you will have to edit the interfaces file on your Pi, which means accessing the Pi file system either through one of the methods above or via a Linux machine. You cannot use a Mac without third-party software to manipulate files on the internal Pi file system. (You can access files on the visible partition of the Pi on a Mac using file sharing, but that doesn't give you access to the files we need for this step.) So, assuming you can ssh into your Pi, you will need to change the /etc/network/interfaces file:

sudo nano /etc/network/interfaces

You need to change the file contents to the following:

# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid "your-network-name"
wpa-psk "your-network-password"

Once you have done this, save the file, shut down the Pi, plug in the wifi dongle and power it up. You should see the Pi on your network in a few minutes.

Note: In case you want to connect to a wifi hotspot that doesn't have a password but requires you to use a login page, you can set a static IP address for your Pi and connect by putting the name of the network in the interfaces file. Your file would then look something like:

# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
address (your ip address here) 
netmask 255.224.0.0
network 10.224.0.0
broadcast 10.255.255.255
gateway 10.224.0.1
wireless-essid xfinitywifi

allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

After this, the Raspberry Pi will connect to the wifi network and you will need to use a browser to navigate to the login page and register your Pi. If you want to set up your Pi without a monitor and keyboard, a possible way (which I wasn't successful with!) is to register the MAC address of your Pi using another device (e.g., your laptop) by spoofing its MAC address. See here for more information on that method.

Once you know the IP address, you can ssh into the Pi using

ssh pi@192.168.2.18

where you should replace "192.168.2.18" with your Pi's IP address. You will be prompted for a password, which is "raspberry" by default.

Once you can SSH into the Raspberry Pi, the fun begins! A good first step at this point is to do some basic maintenance on the Pi. The first thing I usually do is expand the file system and set the date and time. You can do both of these from the configuration dialogue using:

sudo raspi-config

Next, you want to update the Pi using the following commands:

sudo apt-get update
sudo apt-get upgrade

When updating it’s good to keep an eye out on the disk size using:

df -h

To clean up after installing packages use

sudo apt-get clean

Finally, use the following commands to log out of ssh, reboot the Pi, or shut it down when you are done:

logout
sudo reboot
sudo shutdown -h now

It is important to shut down the Pi properly. Otherwise, the SD card might get damaged.

If you want to give your Pi a specific hostname (rather than an IP address) to log in with, you can use the Bonjour service described here. If you have multiple Pis, make sure you name them differently by following the instructions here.

If you would like to access the graphical interface of the Pi from your computer, you usually have to set up a VNC server. But if you are using a Mac, there's an easy shortcut to access the graphical interface: using the X11 app! Here's how to start it from the terminal:

ssh -X pi@<your Pi's ip address>

Once logged in you can start an Xsession with:

/etc/X11/Xsession

To end the session, press Ctrl + C in the original terminal window.

Note: When connecting to the Pi, if you get an error with the message: “Warning: Remote Host Identification Has Changed” you can use the following command to clean your stored keys and get rid of the message:

ssh-keygen -R "your server hostname or ip"


Toronto Mini Maker Faire 2014

As we put the final touches on our two tables, I can hear people entering the main hall of Toronto Reference Library on this November Saturday morning for the first day of Toronto Mini Maker Faire. Yes, people are up early and ready to explore and get the latest news on local Makers, most recent 3D printers and scanners, wearable computers, crafts augmented with electronics (or not) and many other fun and crazy ideas!

[Photo: Toronto Mini Maker Faire 2014]

This is the second time we are at the Toronto Mini Maker Faire and we have come a long way since last year when we were mainly spectators. Granted, on one of the days last year, I (with my collaborator, Natalie Comeau) demoed an early version of our wearable computing design, HugBug, at the Faire, but other than that we were observing and getting acquainted with the Toronto Maker scene. Last year, I visited with participants from an experimental MakeShop I was facilitating at York. This year, several of the participants from the MakeShop were also at the Faire but in a different capacity: as Makers presenting their work.

[Photo: Our tables at Maker Faire]

"Making" in this context refers to a specific way of fabricating custom prototypes and designs using hands-on methods that can range from embedded electronics and 3D printing to craft materials and techniques, usually with an emphasis on open-source and Do-it-Yourself (DIY) designs and with a view to sharing knowledge and experience with fellow "Makers". This attitude has become widely popular in the last decade through publications such as Make Magazine and events such as Maker Faire, as well as the proliferation of thousands of fabrication and Maker spaces, hack labs, and online forums and communities around the world. Making, with roots in older notions of "Hacking" and "Tinkering", has gained such recognition as to be dubbed the "new industrial revolution" and a "movement" that promises to democratize design and fabrication (see Chris Anderson's Makers: The New Industrial Revolution).

Our Making activities have grown a lot over the last year and our presence at the Faire reflects this: in addition to two tables, we are also running an ambitious workshop for Makers with disabilities and their parents and caregivers. We are showcasing several projects that are examples of cross-disciplinary collaborations. In this post, I will briefly describe these projects, but before that I would like to briefly describe my experience at Maker Faire in general.

This year, I mainly stay at our tables, describing and demoing our projects and meeting a lot of amazing people. While we have a great team of knowledgeable and energetic volunteers at the table who would cover for me if I wanted to take a break, I am having such a great time sharing experiences with visitors that it is difficult to leave! Also, I have worked on all the projects that we have on the table in some capacity and it is such a pleasure to present them to an open-minded audience. Because of this (and sadly!), I haven't seen many projects other than our own. However, my brief detours around the Faire made me appreciate the range of projects: from the usual embedded computing platforms (mainly Arduinos in many shapes and sizes, as well as Raspberry Pis), to many shades of 3D scanning and printing, to hacked toys (in the creative sense of the word!), to non-digital puppets and even a hitchhiking robot that has crossed Canada!

[Photo: Fun projects at Maker Faire (clockwise from top left): mixMotion, Puppets Cool, and a cyberpunk/homemade mechanical nut cracker]

The free and public location of the Faire at the Toronto Reference Library is fantastic: accessible and inviting to the general public (as Making should be)! Over the weekend, a whopping 10,000 people were estimated to have visited the Faire! I talked to so many interesting people that at the end of the Faire I had information overload and had to take a couple of days completely off to reflect on and digest all the input!

As I mentioned, this year we have two tables: one showcasing a range of projects from our lab, Graphics and Media at York (GaMaY), and the other showing the work of our colleague and collaborator Ray Feraday, with a special focus on our collaboration on DIY open-source assistive technology. On his table, Ray has set up a pong game developed in the Scratch programming language, running on a Raspberry Pi and controlled using home-made cardboard controllers built with Makey Makey, all put into a cut kitchen table top surface. It is a hit with children!

TalkBox

TalkBox is an open-source DIY communication board for children or adults with no voice. This is truly a classic Maker project: the idea was first conceived by Ray, an inspiring and inventive special education teacher working with the TCDSB, who I met last year at Maker Faire. He had created a customizable communication board using the Makey Makey and conductive tape and was interested in improving the design and making it more mobile and affordable by replacing the Makey Makey and attached computer with a touch sensor and a Raspberry Pi (an embedded computer). We formed a team with Toni Kunic, a great programmer and computer engineering student at York, and Melanie, my supervisor, and met over many weekends to work out and implement a new design for TalkBox. The result is a DIY communication board that can be used to store voice samples and associate them with touch buttons that play them back once touched. TalkBox costs less than $80 and is open-source (both hardware instructions and software code are shared freely with users). For more information please see this link.

[Photo: Ray demoing TalkBox at Maker Faire]

Unexpectedly, we won an award for TalkBox at Maker Faire! Bridgable, an innovative design firm in Toronto, awarded us the Bridging the Gap Award. This award was given to a team that demonstrated "a clear connection between a user need and product benefit; an extraordinary translation between research and design, concept and prototype or material/technology and execution; and an appropriately accessible and well communicated presentation". We are very happy to accept this award, as it provides further recognition of the idea of making "Making" accessible that we have had in mind with this project.

[Photo: The Bridging the Gap Award]

MakeTalk Workshop

Since the beginning of our collaboration on the TalkBox project, we had envisioned a process through which children with disabilities, their teachers and caregivers could put together the kits themselves. We wanted to make the design available and the process of Making accessible. The vision of making Making accessible has been discussed before by inspiring assistive technology advocates and digital activists Amy Hurst and Shaun Kane (see their paper here). Additionally, we wanted to create a process in which we explored alternative ways to fund and deploy the technology to the people who need it most. To these ends, we partnered with the Tetra Society of North America, a volunteer-based organization that brings together student volunteers, retired engineers and inventive users with disabilities and their families to modify, create and deploy assistive technology solutions. Tetra has a chapter at York University with many student volunteers. We decided to ask for their help in running a workshop at Maker Faire and also in getting funding to create a series of TalkBoxes. The president of the York chapter, Brandon, is also a graduate student at our lab and, along with another volunteer coordinator, Zareen, helped with recruiting, coordinating and training volunteers. (Brandon is also great at soldering, so he helped prepare the soldering part of the TalkBox kits too!)

[Photo: Pre-workshop preparations]

In preparation for the workshop, we ran a series of dry runs at our lab in the weeks preceding the Faire. We recruited the help of Yana Boeva, a graduate student in Science and Technology Studies at York who has experience in Making and creating instructables. Our amazing Tetra volunteers helped with every stage of the process, from preparing the material to running the workshops. Melanie designed a customized case for the Raspberry Pi which she miraculously (read: through many hours spent in the lab!) managed to print in time for the participants at the Faire.

[Photo: 3D printed modified Raspberry Pi cases]

At the Faire, we provided a kit consisting of all the parts and instructions to 7 groups of users with disabilities and their parents/guardians/caregivers. I decided to stay at the tables while most of the rest of the team were busy at the 2-hour workshop, so I could only tell from their elated faces and happy expressions how rewarding the process was.

[Photo: MakeTalk Workshop: kit, process, outcome]

Putting the kits together is not trivial and yet everyone at the workshop managed to put them together and get them working. I was reminded that one of the joys of working with people with disabilities is that they have a lot of patience and see value in process rather than just wanting quick outcomes.

HugBug

HugBug is a playful wearable interface designed to teach children about digital design. I developed it with Natalie Comeau. I used it in a workshop in Mexico (described here) and presented it at the Design and Emotion Conference in Bogota. I have since simplified the design and made it more robust and straightforward. HugBug is very successful at Maker Faire, with many children and adults trying it out.

[Photo: Colin trying out HugBug]

The next step for this project is to connect HugBug to biosignals (I am currently working on this project with our colleague, Manuel, from University of Seville) so that it would externalize various states onto its LED lights (e.g., stressed vs. relaxed).

[Photo: Manuel connecting HugBug to biosignals]

Rafigh

Rafigh is a living media interface to encourage children to engage in learning and therapeutic activities. It consists of a real mushroom colony connected to a microcontroller that controls the amount of water administered to it. At Maker Faire I have an early prototype of Rafigh with dried mushrooms for demonstration purposes. I am currently conducting a study with the working prototypes in situ at participants' homes. For more information on Rafigh please see this link.

Magic Wand

Magic Wand is a project of the Wandmakers, a team of three engineering students (Sonal Ranjit, Chitiiran Krishna Moorthy and Kajendra Seevananthan) who designed and fabricated their own Harry Potter-inspired wand after attending the MakeShop sessions I facilitated last year. The Wandmakers created a custom 3D model of the wand, which they printed and outfitted with a microcontroller, accelerometer and laser light, such that specific movements of the wand are translated into "spells" that cast a laser light.

[Photo: Magic Wand]

Magic Wand was first presented at the TEI'14 student design competition earlier this year, and Sonal joined us at Maker Faire to present the wand to many interested children (and adults). They had printed the instructions on beautiful rice paper that people took home as a souvenir. Their 3D model is open-source and available for free download on Thingiverse. It has been quite popular with fans and, since it was uploaded last February, has been downloaded more than 1,500 times!

[Photo: Children trying Magic Wand]

Synchrum

Synchrum is a tangible interface to facilitate audience participation through rhythmic collaboration. It was inspired by the Tibetan prayer wheel and aims to capture its physicality and performativity. A light source is attached to the top of the object and a weight rotates under the light. A sensor detects how often the light is blocked, which lets us calculate the rotation speed of the device and communicate it wirelessly to a computer that can then coordinate its speed with other Synchrum units. Using Synchrum, members of an audience can synch with each other and collaborate to bring about a change in a performance or environmental factor (such as ambient sound, music or light). In a previous performance, members of an audience collaborated to remove virtual chains from a performer. Synchrum was designed in collaboration with Alexander Moakler and Assaf Gadot and was presented at the TEI'12 and UIST'12 conferences. Please see this document for more information.

[Photo: Synchrum]

Our Team

This year our team consisted of Melanie Baljko (my supervisor and co-director of GaMaY), Ray Feraday (our community partner, a special education teacher and an inventor), Brandon Haworth (a PhD candidate at our lab with a focus on serious games and Making), Toni Kunic (software and design developer for TalkBox), Sonal Ranjit (co-designer of the Magic Wand), Yana Boeva (instructable designer for the MakeTalk workshop), Manuel Merino Monge (visiting researcher from the University of Seville, working on HugBug), Natalie Comeau (co-designer of HugBug), Catherine Duchastel de Montrouge (Science and Technology Studies), Colin Ruan (Computer Science) and myself. Additionally, we got a lot of help from the York University chapter of the Tetra Society, whose volunteers tirelessly helped run the MakeTalk workshop as well as helping present at the tables. These included: Zareen, Steven, Syed, Nina, Laura, Vassil, Greene and Nitzi. Additionally, Jeannette Da Luz from TCDSB helped create digital content during the Faire, and Glenn Barnes and Hamed Dar from the Tetra Society of North America helped with setting up and running the tables. Thanks so much everyone and great job!

TEI’14: A Report from the Frontiers of Interaction Research

The 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI’14) was in Munich this year and as usual it was inspiring, innovative and at times mind-blowing! This is a conference with a lot of vision (especially technological vision) and since finding it, I feel at home in a community of researchers/artists/designers/social activists that go beyond norms in their fields and are not afraid to be radically different. In this post, I’m going to give an overview of the projects and themes I encountered there. (I also had an art installation and two participants from the MakeShop I was teaching last semester presented a project in the student design challenge. I will write about these projects in future posts so I can include more detail).

For me the conference started a day early when I attended a full-day studio entitled "The Misbehaviour of Animated Objects", run by EnsadLab, MIT Medialab and Universite Paris 8. In the studio, we explored notions of "misbehaviours" and how to program physical objects to convey them. We worked with a robotics kit built from servo motors, velcro and Processing software. This allowed us to assemble and prototype objects quickly and think about their relationships to us and to each other.


I worked with two other participants to make a prototype that consisted of a disembodied tail that had a conversation with a physical pacman! Initially, we wanted to make a wearable interface out of them but decided instead to explore the objects' relationship to each other. The scenario was: "What happens when you encounter two objects that seem to be communicating with each other, but in a different language, so it is hard to know if they are friendly or unfriendly?" Other projects included a shy trash can and a rolling saucer!


The conference proper started with an opening keynote by Chris Harrison from CMU. He has some really neat data visualization projects, but his talk here was different. While it was interesting and focused on touch (especially multitouch in mobile contexts), I felt he was a little out of place at a conference that is going beyond mobile technology. The gist of his talk was that with the prevalence of touch-sensitive devices that can sense more than just a finger tap, we will have the possibility of using a lot of complex touch patterns (such as zooming with a physical camera).

The first day's talks were very diverse. An interesting project (Karlesky and Isbister, New York) explored the margins of the digital workspace by examining the possibilities of "mindless" activities such as doodling, fidgeting and fiddling for interaction design. In contrast, the Slow Floor project (Feltham et al., Australia), a pressure-sensitive sound-generating surface that was tested with a group of Butoh dancers performing a slow walk, aimed to develop an interface for meditation.

A central figure at TEI is Dr. Hiroshi Ishii from the MIT Media Lab. A pioneer of ambient and tangible interfaces, his idea of translating Bits to Atoms, that is, accessing, expressing and manipulating bits of information not only through desktop computing tools (i.e., keyboard and mouse) but through novel tangible interfaces, revolutionized the field. In the last few years, he has had another visionary idea, Radical Atoms, which involves shapeshifting and programmable material. Several presentations from his lab were about weight- and volume-changing material (through liquid metal injection) and stiffness-changing material. Here's a cool video example of his previous work.


Katia Vega and Hugo Fuks from Brazil have been doing very interesting work with conductive make-up and augmented nails (that can be used to control music through water, for example) that allow one’s skin and nails to become interfaces. Since last year, I have been obsessed with the idea of augmenting my body and turning it into a digital interface (going one step beyond wearables) and it was very inspiring seeing her work. I believe this project has great potential for users with disabilities as well.

Another artist from Brazil, Philippe Bertrand (who is currently based in Barcelona), showcased an empathetic interface, The Machine to be Another, which through an immersive and embodied setup allows one to experience being in another person's body through a head-mounted display. The system has been used by children and parents, people with disabilities, and men and women to experience seeing through another person's point of view.


In the evening of the first day, entrants in the Student Design Competition showed their work, which was interesting, complex and fun. (More about this in future posts!)

The second day was also very diverse. In one of the sessions, we saw presentations on an interface that creates art from skateboard movements (Pijnappel et al., Australia), an augmented frisbee that can be used to train novices (Cynthia Solomon et al., US), a stress-releasing beatable wall where you can beat the shadow of your opponents to get scores (Floyd Mueller et al., Australia), and an opera-singing interface that allows you to create sounds and modalities using facial gestures and hand movements (Feitsch et al., Germany)! I will not go into the details of these projects, but they were fantastic!


The installations and demos were very interesting and popular. In the Arts Track I particularly liked a project that translated Iaido sword movements into 3D objects that were then printed (Ueno et al., Germany); the YU system, which used artistic Chinese paintings of fish to convey biometric information about the user (Bin Zhu et al., China); and A Day in a Life, which was next to my installation and used an empty book and air flow to display the time of day (Ivan Petkov, Austria).


The second day ended in a traditional Bavarian beer hall which was excellently located because it was in a cellar deep in the earth and didn’t have cellphone reception!


The final sessions included one on public interfaces, which featured a shape-changing bench (Sofie Kinch et al., Denmark) that encouraged users to talk to each other and a paper on exploring strategies to breach barriers to collaboration in public spaces (Trine Heinemann and Robb Mitchell, Finland and Denmark). The author of the last paper showed many videos of his students approaching strangers with hidden cameras, asking for or offering favours, and finding patterns in the way they were rejected. I think this project would have been fascinating and uncomfortable to work on!

Finally, a citizen science project presented a low-tech sensing system for particulate pollution (Stacey Kyznetsov et al., US) where small paper sensors were handed out to a community. These were to be left outside for 24 hours and then sealed and mailed back, where the particles gathered on them would be examined to create a map of air quality around the city. I liked this project because of its focus on citizen science and sustainable design.

The closing keynote was by Eric Paulos from UC Berkeley. For his talk, he focused on the theme of "the Amateur", in the true sense of the word (i.e., one who does something for love), and mentioned that magic can happen when you dare to explore fields in which you are not the expert and combine your background with new experiences. He talked about the maker movement and how, through learning by doing, it allows the breaking of barriers to creativity. Finally, he talked about the relevance of art and art thinking to science, something that I agree is very present at TEI. He concluded by offering a stratagem that included ideas such as: question progress, embrace the noir, misuse technologies, blend disparate contexts and be tactfully contrarian. An excellent way to end an excellent conference!


Last few days in Bhutan

I am nearing the end of my trip here in Bhutan. This last week was very busy but also very rewarding. I was invited to teach a class of 25 managers and employees from various banks and government agencies on Effective Communication. This was a three-day course where we covered a lot of topics, including how to make effective multimedia presentations (including infographics) and how to conduct what I call "Poetic Presentations", where we use narrative, humour and metaphors to engage the audience.


It was a very interesting and rewarding experience, especially as I had to come up with all the material, which was great since I could include things like poetry and meditation too! I divided the participants into groups and they all did a presentation at the end. I am happy to say most of the feedback was positive. This training was conducted at the Bhutan Media and Communication Institute where I was provided with a lot of assistance, support and flexibility.

The past week also marked the inauguration of ScanCafe, a multinational company with offices in the USA and India, and now in Bhutan, that is moving part of its operations to the Thimphu TechPark. I met one of the co-founders of this company, Dr. Naren Dubey, and his family, which was great. After the ceremony, I set up a Makey Makey Pacman game which we played with the children and had fun. I have found some interesting and unusual fruit here (is it a mouse or a fruit?!) that I incorporated into my demo!



On Saturday, my friend Manny, who is working with youth in Thimphu, kindly invited me to give a presentation at his lab for young people who are interested in digital media and take photography and public speaking classes at his centre. I really enjoyed talking with this group, and after the talk we set up a collaborative Pacman game where each person became a "button" that had to be high-fived for the Pacman to change direction and avoid the ghosts!


Also, I brought a number of prototypes, including Synchrum, and we had a discussion about design and prototyping.


These last few days were also an occasion to meet friends and colleagues. On Saturday, after dinner and hanging out at my place, we went to see a live show by a young (and really good) Bhutanese band. The singer's name is Ugyen Panday and here's a video of him performing a Hindi song:

I am hoping to bring back some of the songs to Canada!