Tomdee

7 Nov 2016

Triggering Philips Hue Lights using Amazon Dash and OpenWRT

There are plenty of existing blog posts describing Amazon Dash hacks, but I don't think I've seen any that take this approach, so I thought I'd share.

I own a router running OpenWRT and I wanted a way to respond to Amazon Dash events without needing to sniff all my traffic for ARPs (the "normal" way people trigger actions from Dash buttons). My method instead listens for wireless events from the "iw" tool.

This line can be placed in /etc/rc.local (or added through the OpenWRT web interface under Admin->Startup, using the box at the bottom of the page):

nohup ash -c 'iw event |awk  "/new station a0:02:dc:c9:07:62/" | while read -r line; do if hue get 2 | grep "\"on\":true"; then hue set 2 --off ; else hue set 2 --on ; fi; done ' &

I run the script with "nohup" at the start and "&" at the end so it runs in the background.

"iw event" prints out all the wireless events. I pipe that through "awk" to filter out just the events I want (my dash button connecting).

The filtered events are then fed to a while loop, which blocks reading each line of input (one line per event, i.e. per button press).

The body of the while loop is the command that runs for each press: it checks the current state of a Hue bulb and toggles it between on and off.

This relies on the excellent "hue-shell" script, which was simple to install on OpenWRT - see http://josef-friedrich.github.io/Hue-shell/docs/
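
For readability, here's the same logic written out as a standalone script - just a sketch; the script path is hypothetical, and the button MAC address and bulb number are of course specific to my setup:

#!/bin/ash
# Hypothetical path: /root/dash-toggle.sh
# Wait for my Dash button (MAC a0:02:dc:c9:07:62) to associate with the
# router, then toggle Hue bulb 2 via hue-shell.
iw event | awk '/new station a0:02:dc:c9:07:62/' | while read -r line; do
    if hue get 2 | grep -q '"on":true'; then   # -q keeps the check quiet
        hue set 2 --off
    else
        hue set 2 --on
    fi
done

It could then be started from /etc/rc.local with "nohup /root/dash-toggle.sh &".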

12 Jan 2016

Preparing Github release notes from Github pull requests

When doing releases for Project Calico, I like to include the highlights of what's been merged since the previous release.
Release notes on Github are written in markdown and automagically create links when issues or pull requests are referenced with a "#", e.g. #662 will create a link to https://github.com/projectcalico/calico-containers/pull/662
But it doesn't fill in a title for the link, so I like to write my release notes with lines like "#705 pool should always come from client", which provide both a link and a title.

Rather than tediously copying and pasting all the text to create these links, I wrote a one-liner to do it for me.

PREVIOUS_RELEASE=v0.13.0
git log $PREVIOUS_RELEASE..master --merges --format="%s" |grep -P -o '#\d+' | grep -P -o '\d+' |xargs -I ^ -n 1 curl -s https://api.github.com/repos/projectcalico/calico-containers/pulls/^ | jq -r '"#" + (.number |tostring) + " " + .title'

 

git log $PREVIOUS_RELEASE..master --merges --format="%s"

  • Prints the commit message of each merge since the last release, e.g. "Merge pull request #662 from tomdee/kubernetes-versioning".

grep -P -o '#\d+' | grep -P -o '\d+'

  • Pulls out just the number part of the #XXX PR reference.
  • -o ensures that only the matched part of the line is output.

xargs -I ^ -n 1 curl -s https://api.github.com/repos/projectcalico/calico-containers/pulls/^

  • Run curl for each of the PR numbers that were merged. -s makes curl silent.
  • xargs is run with -I to set the replacement string, and -n 1 ensures that curl is called once per PR.

jq -r '"#" + (.number |tostring) + " " + .title'

  • Use jq to pull out the PR number and title and format it to get the desired output.
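
Running the full pipeline prints one line per merged PR, ready to paste straight into the release notes. Using the example PR from above, the output would include a line like:

#705 pool should always come from client

One caveat: unauthenticated requests to the Github API are rate limited, so for a release with many PRs you may need to pass credentials to curl (e.g. "curl -s -u user:token ...").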
20 Nov 2015

First experience of using Metal as a Service (MAAS) from Ubuntu

After coming into a number of servers for Project Calico, I needed some way to set them up and provision them. MAAS from Canonical seemed like a good place to start, so I had a play.

I had a number of issues along the way (detailed below) but ultimately I got where I needed to be.

I started off with an existing Ubuntu server and decided to just install the packages on there. After realising how out of date (and messy) that server was, I scrapped that idea and decided to do the MAAS install from an Ubuntu ISO instead. I just wanted this to work with minimal fuss, so I went for the latest and greatest - Ubuntu 15.10. The installation went smoothly, but I found it confusing to learn about MAAS from the docs:

  • Region controller, clusters and cluster controllers - how do these fit into the single-server install I just did?
  • What are the stages that a "node" goes through? Where is that documented?

When trying to configure my interfaces, I hit this bug which was fixed in the latest RC. (LP: #1439476 - Internal Server Error when creating/editing cluster)

So I upgraded, got an interface configured and went to configure my DHCP server. I followed the docs and got a server to PXE boot. Success!! Or not... the server immediately failed, unable to find a file. Eventually I tracked this down to a missing next-server parameter in my DHCP config. It really should be mentioned in the docs at https://maas.ubuntu.com/docs1.7/install.html#configure-dhcp
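
For reference, the manual dhcpd config needs a stanza along these lines (the addresses here are hypothetical; next-server must point at the MAAS/TFTP server):

subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    next-server 10.0.0.2;    # the MAAS server - the line I was missing
    filename "pxelinux.0";
}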

Once I got servers actually booting, I ran into endless problems during cloud-init with files and servers that couldn't be found. I was seeing various different errors, but generally it was timing out trying to connect to servers. I was at a bit of a loss. Changing my interface settings had some effect on the IPs that the nodes were trying to connect to, but still didn't resolve the problem.

I got another VLAN set up and added another interface to my MAAS server. I allowed this interface to manage DHCP and DNS and tried PXE booting servers again. They booted and got IP addresses on the new subnet, but they were still failing with strange errors while trying to connect to odd IPs.

I was now many hours into the process and I felt tantalizingly close but I was struggling to debug the problems I was seeing. MAAS was just too magic and the documentation didn't give me enough detail to diagnose what was going on.

I decided to take a step back and start again. This time I started from an Ubuntu 14.04.3 ISO, which took me back down to a 1.7.X release. The server I used had two interfaces - the "main" one on the main LAN and another on my new VLAN. After doing the initial set up I was getting errors about not being able to contact my cluster. This was my prompt to actually find the logs (under /var/log/maas), where I found that something was trying to use an old IP address. It would have been really useful to be able to get this detail from within the web UI, but now that I knew what was wrong I needed to work out how to fix it. After some googling I found that I needed to run

sudo dpkg-reconfigure maas-region-controller
sudo dpkg-reconfigure maas-cluster-controller

And that was enough to get things working. I still needed to spend some time getting power control working (some more guidance on this would be really nice too!).

MAAS seems like it's going to be a really useful tool. The initial set up was a challenge which would have been made much easier by a few simple docs improvements.

Summary of docs issues

  • Missing detail on manual DHCP config
  • General intro/orientation - different types of controller and node lifecycle.
  • IP configuration of servers - how to reconfigure IPs and how to tell if things aren't configured correctly.
  • Troubleshooting - linked to the orientation point above. What is the full flow, from start to end, of getting a server set up and a node provisioned? What packets should flow from where to where (e.g. DHCP, TFTP, cloud-init + metadata)? Where are the logs?
  • Overview of different power options - what to use when and how to configure.

 

23 Feb 2015

Simple benchmarking of etcd read and write performance

There's surprisingly little information on the web about the performance of CoreOS's distributed data store etcd. It's reasonable to assume that writes are slow (because they need to be replicated) and reads should be fast (because they can just come from RAM). But everything is being transported over HTTP and needs to be JSON encoded. I know that etcd hasn't been optimized for performance (yet) but it would be great to know what sort of ballpark performance is possible.

I ran a few simple tests against etcd version 2.0.0, on a single node "cluster" running on an Ubuntu 14.04 VM running on a slow Dell laptop. This isn't any kind of reliable benchmark - I'm just trying to get a ballpark estimate.

I started testing with boom (the python version) but it was way too slow, so I switched to ab.

Results

I tested 10000 requests, with 10 concurrent workers. I also turned on HTTP keepalives.

  • Read Performance - 6758 req/sec
  • Write Performance - 1388 writes/sec (but about 20% of the requests failed)

Not too shabby, but the numbers should be taken with a large pinch of salt. I'm testing against localhost and only reading/writing a tiny amount of data. The write number is pretty meaningless given that I have a single node in my cluster and a lot of the requests failed (though with a single concurrent connection I got 0 failures and roughly 350 writes/sec - still pretty respectable). Note that the failures reported below are all "Length" failures - ab flags any response whose body length differs from the first one it sees, and etcd write responses include changing index values - so they may not be real errors at all.
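
For anyone reproducing this: the key needs to exist before the read test, and ab's -u flag PUTs the contents of a local file, which I'm assuming held a small form-encoded body (the format etcd's v2 API expects):

# Seed the key so the read test has something to return
curl -XPUT http://127.0.0.1:4001/v2/keys/key -d value="hello"

# Body file for the write test (referenced by "ab -k -u data ...")
printf 'value=hello' > data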

Read details

ab -k -c 10 -n 10000 http://127.0.0.1:4001/v2/keys/key
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 4001

Document Path: /v2/keys/key
Document Length: 93 bytes

Concurrency Level: 10
Time taken for tests: 1.480 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 3220000 bytes
HTML transferred: 930000 bytes
Requests per second: 6758.10 [#/sec] (mean)
Time per request: 1.480 [ms] (mean)
Time per request: 0.148 [ms] (mean, across all concurrent requests)
Transfer rate: 2125.11 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 0 1 1.0 1 13
Waiting: 0 1 1.0 1 13
Total: 0 1 1.0 1 13

Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 3
98% 4
99% 5
100% 13 (longest request)

Write details

ab -k -u data -c 10 -n 10000 http://127.0.0.1:4001/v2/keys/key
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 4001

Document Path: /v2/keys/key
Document Length: 169 bytes

Concurrency Level: 10
Time taken for tests: 7.202 seconds
Complete requests: 10000
Failed requests: 2035
(Connect: 0, Receive: 0, Length: 2035, Exceptions: 0)
Keep-Alive requests: 10000
Total transferred: 4000173 bytes
Total body sent: 1650000
HTML transferred: 1698138 bytes
Requests per second: 1388.59 [#/sec] (mean)
Time per request: 7.202 [ms] (mean)
Time per request: 0.720 [ms] (mean, across all concurrent requests)
Transfer rate: 542.44 [Kbytes/sec] received
223.75 kb/s sent
766.19 kb/s total

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 2
Processing: 3 7 1.9 7 30
Waiting: 3 7 1.9 7 30
Total: 3 7 1.9 7 32

Percentage of the requests served within a certain time (ms)
50% 7
66% 7
75% 8
80% 8
90% 9
95% 10
98% 12
99% 14
100% 32 (longest request)

22 Apr 2014

Getting rid of the xxx@gmail.com on behalf of Tom Denham [xxx@tomdee.co.uk] message when using your own domain with Gmail.

If you have your own domain but use gmail.com for your email, you might have tried to use the "Send mail as:" feature in GMail to have your outgoing mail appear to be from your own domain without needing to pay Google.

This approach generally works well, with one exception. Some mail clients (the biggest example being Outlook) will display that the mail is from your Gmail address:

xxx@gmail.com on behalf of Tom Denham [xxx@tomdee.co.uk]

Google do offer a way around this (without paying), but you need an SMTP server. I didn't fancy setting up my own, and after hunting around I couldn't find any cheap SMTP servers that directly met this use case.

Finally, a few days ago I came across https://postmarkapp.com/. Although it's aimed at "transactional email for webapps", it supports authenticated SMTP and they even give you 10,000 email credits for free when you sign up. Since these credits never expire, and I'm unlikely to send 10,000 emails in the foreseeable future, I'm hoping that this service will be free for life!

I've been using it for a few days now and I've not had any problems. It was quick and easy to sign up (no credit card details required) and the Gmail config is straightforward.

Simply edit your Gmail settings and use the following information (get the username and password from your PostMark account - use a server API key for both, not the main account API key).
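
At the time of writing, the settings look something like this (check your PostMark account for the current server name and ports):

SMTP server: smtp.postmarkapp.com
Port: 587 (with TLS)
Username: your server API key
Password: your server API key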

[Screenshot: example Gmail "Send mail as" configuration]
20 May 2013

PlayRTP now supports real timestamps

I'm now grabbing the timestamp from the capture file and using that to pace the playback. This means that capture files can actually be played back through the jitter buffer like they would be on a real Jitsi client.

I've also tested this with capture files from tcpdump and it works fine.

18 May 2013

Testing a jitter buffer by presenting packet captures through a DatagramSocket interface

I've been working on the jitter buffer code in the FMJ project, which is used by the Jitsi softclient. To know that the code is good, it's been handy to be able to try it out under real-world conditions. To make this repeatable, I wanted to be able to play packet captures through the jitter buffer and hear the results, so I could then tweak the code and hear whether it improved things.

The easiest way to achieve this was to use the excellent libjitsi library from the Jitsi team. It lets me call LibJitsi.start(), then use the MediaService to create a MediaDevice for playing the audio, and a MediaStream connected to that device for dealing with the RTP. See here for the code.

The MediaStream is then connected to a new class I wrote which implements the StreamConnector interface. This class, PCapStreamConnector, is small; most of the interesting logic lives in another new class I wrote - PCapDatagramSocket. It presents a packet capture file through a DatagramSocket interface! It's fairly crude and assumes the media file is written in the exact format that Jitsi uses for its packet capture files, but that's all I need, so there's no point doing any more.

It's a bit rough and ready at the moment. The fact that it doesn't actually use the RTP timings makes this borderline useless (!), but I will be adding that feature soon. At the same time, I'll be cleaning up the interface to allow the payload type to be passed in, or even better just try to detect it from the capture file. With a little more work, this could be a handy generic tool for playing media from RTP streams in any format that libjitsi supports - at least g711, SILK, Opus, g722, g723, iLBC and speex.

To really make this feature useful, I also enhanced the error reporting module so that users would have the chance to report a few minutes of media if they've experienced bad voice quality on a call.

 

5 Aug 2012

Free Wifi on Chiltern Railways and Dropbox

The free Wifi on Chiltern Railways annoyingly uses opendns.com for its DNS. This breaks Dropbox with a rather confusing error:

Unable to make a secure connection to the Dropbox servers because your computer's date and time settings are incorrect. Please correct your computer's date and time to allow a connection to Dropbox

The fix is to change the DNS servers to something else after connecting. I've just used the Google DNS servers at 8.8.8.8.

The DNS servers can only be changed after connecting, since DNS redirection is used to present the Wifi sign-in page.

19 Feb 2012

JNI and Gluegen

I know I can develop software faster using Java, so I want to learn how to interface with native code using JNI. No solution seems particularly elegant, and going for a completely manual approach seems to involve a lot of boilerplate and work.

I had a quick look around Google and Wikipedia and found a couple of options to help with code generation - SWIG and Gluegen. The latter particularly caught my eye, especially the ability to have structs treated as Java classes.

It took me a little while to get a hello world app running, so instructions are below to remind me in future and for anyone else that might find it useful.

Obtaining and building Gluegen

  • git clone git://github.com/mbien/gluegen.git gluegen
  • cd gluegen/make
  • ant clean all

This gives the required jars in the build directory:

  • gluegen.jar for generating the Java version of C code and the bridging C code.
  • antlr.jar for parsing C code.
  • gluegen-rt.jar required at runtime.

Creating a sample application

The C code

int one_plus(int a) {
 return 1 + a;
}

Taken from the Gluegen website
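
Gluegen works from a header rather than the implementation, so alongside function.c there needs to be a function.h containing the prototype (assumed here - it's the file the build config below refers to):

/* function.h */
int one_plus(int a);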

The Java code

import testfunction.*;

class Test {
    static {
        System.loadLibrary("nativelib");
    }

    public static void main(String[] args) {
        System.out.println(TestFunction.one_plus(5));
    }
}

I'm jumping ahead a little here. This code assumes that the C code is in a package called testfunction and that there is a native lib called nativelib.

Using Gluegen

Running Gluegen creates binding C code and the Java code that defines the C methods.

Gluegen needs some configuration to guide its behaviour. It's here that the Java package and class names are defined.

Package testfunction
Style AllStatic
JavaClass TestFunction
JavaOutputDir gensrc/java
NativeOutputDir gensrc/native

To run Gluegen, I went for the ant approach. My build file is below

<?xml version="1.0"?>
<project name="sampleProject" basedir=".">
  <path id="gluegen.classpath">
    <pathelement location="gluegen.jar" />
    <pathelement location="antlr.jar" />
  </path>
  <taskdef name="gluegen"
           classname="com.jogamp.gluegen.ant.GlueGenTask"
           classpathref="gluegen.classpath" />
  <target name="build">
    <gluegen src="function.h"
             config="function.cfg"
             emitter="com.jogamp.gluegen.JavaEmitter">
      <classpath refid="gluegen.classpath" />
    </gluegen>
  </target>
</project>

Running "ant build" results in a new directory "gensrc" containing a "native" and a "java" directory.

The generated native code is the Java-to-C glue code. The only thing worth noting about it is that it pulls in the jni.h C header file.

The generated Java code declares the "native" method and pulls in the runtime lib.

package testfunction;

import com.jogamp.gluegen.runtime.*;
import com.jogamp.common.os.*;
import com.jogamp.common.nio.*;
import java.nio.*;

public class TestFunction {
    /** Interface to C language function: <br> <code> int one_plus(int a); </code> */
    public static native int one_plus(int a);
} // end of class TestFunction

Building the results

For me, this was the most challenging part. Most of the above is covered quite well on the Gluegen website. The following isn't.

Building the C code

 gcc -Wl,-soname,libnative.so -o libnativelib.so -fPIC --shared gensrc/native/TestFunction_JNI.c function.c -I/usr/lib/jvm/java-6-sun/include -I/usr/lib/jvm/java-6-sun/include/linux /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/libjvm.so -lc

Quite a lot here.

  • gcc is used to build the code
  • -Wl passes options to the linker. In this case, the shared object name (soname) is set to libnative.so
  • -o specifies the output filename
  • -fPIC was required to avoid an error message along the lines of
    • /usr/bin/ld: /tmp/ccI3vLJd.o: relocation R_X86_64_PC32 against symbol `one_plus' can not be used when making a shared object; recompile with -fPIC
  • --shared causes a shared object to be built
  • TestFunction_JNI.c is the Gluegen-generated code and function.c is the actual implementation. These are the two files that are actually being built.
  • the two -I options specify the locations of the JNI headers (jni.h)
  • libjvm.so also needs to be passed in.

This results in a file libnativelib.so in the current directory.

Building the Java Code

javac Test.java gensrc/java/testfunction/TestFunction.java -cp ../gluegen/build/gluegen-rt.jar

More straightforward. The Gluegen-generated code and the code I wrote to drive it are compiled. The Gluegen runtime jar needs to be on the classpath.

Running the Result

java -Djava.library.path=. -cp gensrc/java:. Test
6

Running the Java requires the native lib path to be specified. The program then prints 6 as expected. Phew!



18 Feb 2012

(Re)Learning C

I've spent the day skimming over http://c.learncodethehardway.org/. It's currently only half written, but it covers all the basics, including things like using valgrind, writing complete programs and obviously all the basic language constructs. It was a useful refresher and I would recommend it. I certainly think it has a lot more depth than I've had time to get out of it, but it has still served as a great refresher/overview.

I've started looking at the librailfare code with my newly refreshed C knowledge. My previous aspirations for improving my Vim skills have gone out the window. I've been happily using Vim for the learncodethehardway examples, but once I started wanting to poke round a multi-file project, I reached for trusty Eclipse. The CDT looks a lot more polished than when I last looked at it a few years ago, and coupled with MSYS and MinGW I'm happily navigating my way around the code using familiar key bindings!

I've got librailfare compiled and running on Linux (Ubuntu), though I had to install the C Minimal Perfect Hashing (CMPH) libs first. But I can't get it compiling on Windows yet - I need to work out how to build the CMPH libs on Windows first.
