Hack Jam Log Book is a log of progress made in and around a weekly hack session. Topics include natural language processing, high energy electronics, linguistics, interface design, &c. Enjoy.

18.11.08

 

Livescribe Pulse Hacking: The business already!

Blah, blah, blah. Those of you who were already up to date on your Fourier transforms are now bored out of your minds --- and those of you who weren't are still wondering why I'm talking so much about the transform rather than about the pen.

So now, it happens! Data is sent from the pen to my laptop, running Linux, via vibrating air! Doesn't it just put you on the edge of your seat?

The good news: "Hello, World!" did get from my pen to my desktop. The bad news: it did so at something like 8 bps --- that's 8 bits per second, mind you. So it took something like a quarter of a minute just to send those two words.
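For scale, the arithmetic checks out (assuming plain 8-bit ASCII with no framing or error-correction overhead):

```python
message = "Hello, World!"   # 13 characters
bits = len(message) * 8     # 8 bits per ASCII character = 104 bits
bitrate = 8                 # measured throughput, bits per second
seconds = bits / bitrate    # 13 s -- roughly a quarter of a minute
```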

Are improvements possible? Absolutely. I can increase my sampling rate (currently 8 kHz; it could easily be 16 kHz). I can increase the number of distinct frequencies, so that each tone carries more bits.
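As a sketch of the second idea: with M distinct tones, each tone carries log2(M) bits instead of one. The 16-frequency alphabet below is purely illustrative --- these are not the frequencies the pen actually uses:

```python
import math

# Hypothetical 16-tone alphabet: one tone per 4-bit nibble.
# Frequencies are made-up placeholders, spaced 250 Hz apart.
FREQS = [1000 + 250 * i for i in range(16)]

BITS_PER_TONE = int(math.log2(len(FREQS)))  # 4 bits per tone

def encode_nibbles(data: bytes):
    """Map each 4-bit nibble of the payload to a tone frequency."""
    tones = []
    for byte in data:
        tones.append(FREQS[byte >> 4])    # high nibble
        tones.append(FREQS[byte & 0x0F])  # low nibble
    return tones

def decode_nibbles(tones):
    """Invert the mapping: pairs of tone frequencies back to bytes."""
    nibbles = [FREQS.index(f) for f in tones]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
```

Everything else being equal, this alone would quadruple the throughput relative to a two-tone scheme.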

But what is currently killing me, I believe, is the period between tones, not the tones themselves. Two dangerous things happen during the transition. First, the Fourier analysis of any time bin that contains a frequency change shows two peaks; this makes classifying those bins far more challenging, even just classifying them as worthless and ignoring them.
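To see the two-peak problem concretely, here is a small NumPy sketch (the window size and frequencies are illustrative, not the post's actual parameters): a time bin whose second half switches frequency shows two comparable spectral peaks, which a decoder could use to flag the bin as worthless:

```python
import numpy as np

RATE = 8000  # sampling rate, Hz (the rate mentioned in the post)
N = 512      # samples per analysis window

t = np.arange(N) / RATE
# First half of the window at 1000 Hz, second half at 2000 Hz,
# simulating a tone transition landing mid-bin:
sig = np.where(t < t[N // 2],
               np.sin(2 * np.pi * 1000 * t),
               np.sin(2 * np.pi * 2000 * t))

spectrum = np.abs(np.fft.rfft(sig))
order = np.argsort(spectrum)[::-1]   # bin indices, strongest first
top, second = spectrum[order[0]], spectrum[order[1]]

# Crude ambiguity test: flag the bin when the runner-up peak is
# comparable in magnitude to the main one.
ambiguous = second > 0.5 * top
```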

Second, and a little more worrying, is on the pen side: the amount of time that passes between the current sound clip ending (with the next one already requested) and the next sound actually playing. It is, uh ... not zero. And that of course makes the whole in-between-tone period even HARDER to identify and cope with.

Short note. Hopefully have more time next week to discuss possible fixes for all these faults.



Comments:
You could probably write a custom subclass of OutputStream, which would feed the pen a custom on-the-fly generated "WAV file" for your data. The SDK docs claim you can do that. Each individual output message would then be a single playback event.
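For the audio-generation half of that idea, here is a hedged sketch in Python. The real thing would be a Java OutputStream subclass inside the Livescribe SDK, as the commenter describes; this only shows what rendering a message's tone sequence into an in-memory "WAV file" might look like, with the SDK plumbing omitted:

```python
import io
import math
import struct
import wave

RATE = 8000  # Hz; matches the sampling rate mentioned in the post

def tones_to_wav(freqs, tone_ms=100):
    """Render a sequence of tone frequencies into an in-memory WAV file.

    Each call produces one self-contained clip, so each outgoing
    message could be a single playback event.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 16-bit PCM
        w.setframerate(RATE)
        n = int(RATE * tone_ms / 1000)
        for f in freqs:
            samples = (int(32000 * math.sin(2 * math.pi * f * i / RATE))
                       for i in range(n))
            w.writeframes(b"".join(struct.pack("<h", s) for s in samples))
    return buf.getvalue()
```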

Add some forward error correction and other sorts of data integrity foo, et voila, you have a nice, reliable communications channel. I'm thinking of doing this to interface my Pulse to my iPhone.

One other area to look at for fun is the ham radio field. There are numerous FOSS digital audio processing apps out there. You could use one of them as a decoder and avoid writing your own.

I already did a quick hack, and recorded APRS 1200 bps sounds on my Pulse using its built-in microphone. The audio quality was sufficient for my ham radio to properly decode them when played back.

So, 1200 bps is easily possible with the audio quality of the system's hardware and software.
 
Good! I'm glad that other people are working on this -- and that you're getting better results. It seemed bizarre to me that my speed was so poor --- but you demonstrate that it must be an artifact of my methods. I'll look into the ham radio decoders.


And yes, an OutputStream class is very much what I was planning to do, when I can get the data to move reliably.

Thanks!
 