Jerk is defined as the time derivative of acceleration. KINARM Labs provides position, velocity and acceleration data for KINARM robots. Although velocity and acceleration can nominally be calculated by differentiating position, as pointed out in the KINARM User Guides that approach is not advisable, because the kinematic data from the robots is collected at ~1.129 kHz and then re-sampled to 1 kHz. Using the position data to accurately estimate velocity requires knowing the time between position samples, which is not saved as part of the c3d file. If one simply assumes that the position samples were acquired 1 ms apart, very large 'noise' will appear in the resulting velocity.

How, then, to calculate jerk? One possibility is to re-create the actual sample times from the position, velocity and acceleration data in the KINARM data files (sample code on how to do that is provided in another posting) and use those sample times. However, as outlined below, that approach is not required for jerk. One can comfortably calculate jerk as the time derivative of the saved acceleration while simply assuming 1 ms time bins (i.e. using the simple approach).
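As a sketch of the simple approach (assuming NumPy, and a hypothetical acceleration array already extracted from the c3d file; this is illustrative, not KINARM-provided code):

```python
import numpy as np

def jerk_from_acceleration(acc, dt=0.001):
    """Estimate jerk by finite-differencing the saved acceleration
    signal, assuming a fixed 1 ms interval between samples."""
    acc = np.asarray(acc, dtype=float)
    return np.diff(acc) / dt    # result is one sample shorter than the input

# Synthetic example (not real KINARM data): an acceleration ramping at
# 2 rad/s^2 per second should give a constant jerk of 2 rad/s^3.
t = np.arange(0, 0.01, 0.001)   # 10 samples, 1 ms apart
acc = 2.0 * t
jerk = jerk_from_acceleration(acc)
```

With real data the raw result will be dominated by quantization noise, as the rationale below explains, so it will need filtering before use.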

RATIONALE:

Using the standard difference formula, v = dx / dt. However, if you use the data saved in a c3d file, there is 'noise' in both x and t. Position on the Human KINARM Exoskeleton is quantized by the primary encoders to 80,000 counts/rev, with the PMAC providing up to an additional 32x resolution using 1/T interpolation, for a quantization of up to ~3 microradians. The actual time between samples is not recorded, but is either 0.885 ms or 2 × 0.885 ms = 1.77 ms; so the maximum sampling-time error relative to the assumed 1 ms is (1.77 – 1.0) = 0.77 ms.

If I calculate velocity using the 1 ms time base and the saved POSITION data in the c3d file, my velocity result includes two sources of error: one from the quantization of x and one from the re-sampling of t.

Vel_err_x = 3 microradians / 1 ms = 0.003 rad/s

Vel_err_t = V × (1.77 – 1) = 0.77 × V

For a typical speed of 1 rad/s, the error from re-sampling is ~0.8 rad/s. Calculating velocity this way therefore yields sampling-time errors ~100x larger than the position quantization error, so this approach is a bad idea; it is why you should always use the velocity saved in the c3d file rather than calculating velocity from position.
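The velocity error arithmetic above can be checked in a few lines (a sketch; the ~3 microradian quantization and 0.77 ms worst-case timing error are the figures from the rationale above):

```python
# Velocity error terms when differentiating saved position with an
# assumed 1 ms time base (numbers from the rationale above).
pos_quant = 3e-6        # rad, position quantization (~3 microradians)
dt_assumed = 1e-3       # s, assumed sample interval
timing_err = 0.77e-3    # s, worst-case re-sampling error (1.77 ms - 1.0 ms)

vel_err_quant = pos_quant / dt_assumed                    # 0.003 rad/s
typical_vel = 1.0                                         # rad/s
vel_err_timing = typical_vel * (timing_err / dt_assumed)  # ~0.77 rad/s

# The timing error dominates by roughly two orders of magnitude.
```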

If I calculate acceleration using the 1 ms time base and the saved VELOCITY data in the c3d file, my acceleration result again includes two sources of error: one from the quantization of velocity and one from the re-sampling of t. Note: because the saved VELOCITY signal is calculated over an extended time base of ~3.5 ms, the quantization error in the saved velocity signal is ~0.001 rad/s.

Acc_err_x = (0.001 rad/s) / 1 ms = 1 rad/s^2

Acc_err_t = A × (1.77 – 1) ≈ 0.77 × A

For a typical acceleration of 5 rad/s^2, the error from re-sampling is ~4 rad/s^2, i.e. the same order of magnitude as the quantization error. If you take a sample data file and compare the recorded acceleration to an estimate of acceleration from a difference calculation on velocity, you can see that both are very noisy (due to quantization noise) but quite similar to each other overall, which is different from the situation described above for velocity. There are still some differences, so it still makes sense to use the saved acceleration, but the errors from not doing so will be far less noticeable than with velocity.
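The same arithmetic for acceleration (again a sketch using the figures from the rationale above):

```python
# Acceleration error terms when differentiating saved velocity with an
# assumed 1 ms time base (numbers from the rationale above).
vel_quant = 0.001       # rad/s, quantization of the saved velocity signal
dt_assumed = 1e-3       # s, assumed sample interval

acc_err_quant = vel_quant / dt_assumed   # 1 rad/s^2
typical_acc = 5.0                        # rad/s^2
acc_err_timing = typical_acc * 0.77      # ~3.85 rad/s^2

# The two error sources are now the same order of magnitude.
```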

If I calculate jerk using the 1 ms time base and the saved ACCELERATION data in the c3d file, my jerk result includes two sources of error: one from the quantization of acceleration and one from the re-sampling of t. Because the saved ACCELERATION signal is calculated over an extended time base of ~3.5 ms, the quantization error in the saved acceleration signal is ~0.3 rad/s^2.

Jerk_err_x = (0.3 rad/s^2) / 1 ms = 300 rad/s^3

Jerk_err_t = J × (1.77 – 1) ≈ 0.77 × J

For a typical jerk of ~20 rad/s^3, the error from re-sampling is ~15 rad/s^3, or about 1/20 of the quantization error. In this case, the errors from re-sampling are much smaller than the quantization errors. Furthermore, the quantization noise will be so large that you will have to filter the signal very strongly anyway, so there is no real benefit to using the "correct" sampling times versus simply assuming 1 ms.
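Putting the jerk numbers together, and sketching the kind of heavy smoothing the noisy jerk estimate needs anyway (the moving-average window here is an arbitrary illustrative choice, not a KINARM recommendation):

```python
import numpy as np

# Jerk error terms (numbers from the rationale above).
acc_quant = 0.3         # rad/s^2, quantization of the saved acceleration
dt_assumed = 1e-3       # s, assumed sample interval

jerk_err_quant = acc_quant / dt_assumed   # 300 rad/s^3
typical_jerk = 20.0                       # rad/s^3
jerk_err_timing = typical_jerk * 0.77     # ~15 rad/s^3, ~1/20 of the above

def smoothed_jerk(acc, dt=1e-3, window=25):
    """Finite-difference jerk followed by a simple moving average;
    quantization noise dominates, so strong smoothing is needed anyway."""
    jerk = np.diff(np.asarray(acc, dtype=float)) / dt
    kernel = np.ones(window) / window
    return np.convolve(jerk, kernel, mode="same")
```

Since the quantization term swamps the timing term, any smoothing strong enough to tame the noise also makes the 1 ms assumption harmless.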