Ok, it's not a surprise, but I really wanted to feel it for myself.
I wrote a small C program to decode the PPM signal from an RC receiver, fed in via a GPIO pin on the Overo. I'm using Dave Hylands' gpio-event kernel module to get interrupts in user space. The driver lets a user-space program subscribe to GPIO events and read them in via /dev/gpio-event. When an interrupt is received, the driver spits out a timestamp for the event, using do_gettimeofday() under the hood.
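For flavor, here's roughly the shape of that decode loop. Fair warning: this is a sketch, not the program verbatim. In particular, the event-line format I'm parsing (GPIO number, edge, seconds.microseconds) is my recollection of gpio-event's ASCII output, so check the module's source for the real format; the sync-gap threshold and channel count are specific to my receiver.

```c
#include <stdio.h>

#define NUM_CHANNELS 8      /* my receiver; yours may differ */
#define SYNC_GAP_US  4000   /* gaps longer than this mark the frame boundary */

int main(void)
{
    FILE *f = fopen("/dev/gpio-event", "r");
    if (!f) { perror("gpio-event"); return 1; }

    char line[80];
    double last_t = -1.0;
    int ch = -1;                    /* -1 until we've seen a sync gap */
    long width_us[NUM_CHANNELS];

    while (fgets(line, sizeof(line), f)) {
        int gpio; char edge; double t;

        /* Assumed line format: "114 R 1234.567890" -- gpio, edge, timestamp */
        if (sscanf(line, "%d %c %lf", &gpio, &edge, &t) != 3 || edge != 'R')
            continue;               /* only time rising edges */

        if (last_t >= 0.0) {
            long us = (long)((t - last_t) * 1e6);
            if (us > SYNC_GAP_US) {
                if (ch == NUM_CHANNELS) {   /* got a full frame: report it */
                    for (int i = 0; i < NUM_CHANNELS; i++)
                        printf("%ld ", width_us[i]);
                    printf("\n");
                }
                ch = 0;
            } else if (ch >= 0 && ch < NUM_CHANNELS) {
                width_us[ch++] = us;    /* pulse-to-pulse interval = channel value */
            }
        }
        last_t = t;
    }
    fclose(f);
    return 0;
}
```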
To get an idea of how well this thing would keep up, I hooked up the 'Cal' output from my trusty Tek 454 o-scope (a 1 kHz square wave @ 1V) to GPIO 114 on the Overo, logged the timestamps, and then differenced consecutive samples to get a jitter plot:
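The differencing step is nothing fancy; something like this, assuming a log with one seconds.microseconds timestamp per line (my log format, nothing standard):

```c
/* Difference consecutive timestamps from stdin and report how far each
 * interval lands from the expected 1ms period. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double t, prev = -1.0, sum = 0.0, sumsq = 0.0, worst = 0.0;
    long n = 0;

    while (scanf("%lf", &t) == 1) {
        if (prev >= 0.0) {
            double err_us = (t - prev) * 1e6 - 1000.0; /* deviation from 1ms */
            sum   += err_us;
            sumsq += err_us * err_us;
            if (fabs(err_us) > fabs(worst)) worst = err_us;
            n++;
        }
        prev = t;
    }
    if (n > 1) {
        double mean = sum / n;
        double var  = sumsq / n - mean * mean;
        if (var < 0.0) var = 0.0;   /* guard against fp rounding */
        printf("n=%ld mean=%.1fus sd=%.1fus worst=%.1fus\n",
               n, mean, sqrt(var), worst);
    }
    return 0;
}
```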
Of note:
- The plot is mislabeled, but I'm too lazy to make another: read "timestamp jitter ms" as "pulse interval".
- The jitter occurs in ~32us increments. I learned from this discussion that this increment corresponds to the 32kHz timer clock used by the stock kernel on the OMAP3503: one tick of a 32768 Hz clock is 1/32768 s ≈ 30.5us, so timestamps can only move in steps of about that size.
- The standard deviation of the jitter is 25us, but we frequently miss the 1ms mark by as much as 190us. That's a lot for a PPM signal whose pulse intervals only vary between 0.5-1.5ms; see the sketch after this list for what an error that size means in stick terms.
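To put that 190us in perspective, here's a sketch of the mapping from pulse interval to stick position, using the nominal 0.5-1.5ms endpoints above (real gear varies a bit):

```c
#include <stdio.h>

/* Map a measured pulse interval onto a [-1, +1] stick deflection, using
 * the nominal 0.5-1.5ms endpoints: 1.0ms center, +/-0.5ms half-span. */
static double ppm_to_stick(long width_us)
{
    double x = (width_us - 1000.0) / 500.0;
    if (x < -1.0) x = -1.0;
    if (x >  1.0) x =  1.0;
    return x;
}

int main(void)
{
    printf("center stick:       %.2f\n", ppm_to_stick(1000)); /* 0.00 */
    printf("center + 190us err: %.2f\n", ppm_to_stick(1190)); /* 0.38 */
    /* 0.38 out of the 2.0 full-scale range is ~19% of full deflection. */
    return 0;
}
```

So at worst the timestamping error alone could swing a channel by nearly a fifth of full stick travel.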
So, what to do? I have a vague sense that a kernel with the PREEMPT_RT patches might help the situation, but I don't understand the internals well enough to know if that's true.
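For what it's worth, my understanding is that the kernel patch set is only half of the real-time story: the reading process would also need to ask for a real-time scheduling class and lock its pages in RAM. Something like this, untested on the Overo:

```c
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

/* Ask for a real-time scheduling class and pin our pages in RAM. */
static int go_realtime(void)
{
    struct sched_param sp = { .sched_priority = 50 }; /* arbitrary mid-range RT priority */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* needs root or CAP_SYS_NICE */
        return -1;
    }
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (go_realtime() != 0)
        return 1;
    /* ... then run the PPM read loop ... */
    return 0;
}
```

Though since the timestamps are taken inside the driver's interrupt handler, I suspect this would only improve how promptly I consume events, not the 32us clock granularity itself.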
I'm inclined to think that it's time to move my control code onto a dedicated microprocessor and stop trying to shoehorn it into a Linux-based system. I'd planned to do this someday anyway, and maybe this means it's time.