These spikes have a roughly constant magnitude of about 40 mA. They add significant error when the chip draw is ~500 µA, but much less when the draw is on the order of ~100 mA. I can take a look using the dual markers to get you a number.
Not sure I understand your second question. Yes the spikes add error, and yes it matters?
Yes, the spikes have consistent energy. However, we do not know how many spikes might occur, if any. That is why we are unable to subtract a known quantity of energy.
So, I have no idea how you are going to filter out spikes at random locations with no other knowledge. You would need to develop some nonlinear filtering approach (neural net?) that can detect these spikes. Can the hardware identify these spikes for you? Is there a way to drive a signal to one of the JS220’s general-purpose inputs when a spike is “active”?
My point for (1) and (2) is to quantify the spike energy and be able to state the amount of error that this adds to the measurement, for example in percent.
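To make (1) and (2) concrete, here is a back-of-envelope sketch of the percent error one spike adds to a windowed average-current measurement. The 40 mA magnitude and the two baseline draws come from the numbers above; the 1 ms spike width and 0.5 s window are assumptions for illustration only.

```python
# Hypothetical numbers: spike width is NOT known from the captures above.
spike_current = 40e-3      # A, spike magnitude (from the capture)
spike_width = 1e-3         # s, ASSUMED spike duration
window = 0.5               # s, averaging window

spike_charge = spike_current * spike_width  # charge added by one spike, C

for baseline in (500e-6, 100e-3):           # A, the two chip-draw regimes
    baseline_charge = baseline * window
    error_pct = 100.0 * spike_charge / baseline_charge
    print(f"baseline {baseline:.6g} A -> {error_pct:.3g} % error per spike")
```

Under these assumed numbers, one spike per 0.5 s window is a ~16% error at 500 µA but only ~0.08% at 100 mA, which matches the qualitative observation above.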
Gotcha. I was planning on capturing stats for 15 seconds at the 1000 Hz fs rate. Let me describe it using a 15 second example.
Instead of computing a giant 15 second average power and average current measurement, I would set the duration to 0.5 second.
So we would have 30 half-second measurements of average current and power.
We then calculate the overall average current and power by taking the mean of these values (current_mean).
deviation_1 = (current_mean - current_t1) / current_mean  # current_t1 is the average current computed for the first 0.5 second duration.
deviation_2 = (current_mean - current_t2) / current_mean
If deviation_1 exceeds a certain threshold (due to a spike), we would simply delete that window's value for average current and average power. But we can only delete so many windows before the validity of the measurement comes into question.
When the stats exceed a certain error percentage, discard that half second (or shorter period) worth of data. I would only discard up to a certain percentage of the data.
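The deviation-and-discard idea above could be sketched like this on synthetic data. The `window_means` array stands in for the 30 half-second average-current values; the 10% deviation threshold and 20% maximum-discard fraction are placeholder assumptions, not recommendations.

```python
import numpy as np

# Synthetic stand-in for 30 half-second average-current windows.
rng = np.random.default_rng(0)
window_means = 500e-6 + rng.normal(0, 2e-6, 30)   # ~500 uA baseline, A
window_means[7] += 80e-6                           # one window hit by a spike

current_mean = window_means.mean()
deviation = (current_mean - window_means) / current_mean  # per-window deviation

threshold = 0.10          # ASSUMED: discard windows deviating more than 10%
keep = np.abs(deviation) <= threshold

max_discard_fraction = 0.20  # ASSUMED cap before the measurement is suspect
if (~keep).sum() > max_discard_fraction * len(window_means):
    raise ValueError("too many windows discarded; measurement suspect")

filtered_mean = window_means[keep].mean()
print(f"raw mean      = {current_mean * 1e6:.1f} uA")
print(f"filtered mean = {filtered_mean * 1e6:.1f} uA")
```

Note the circularity this sketch inherits from the algorithm itself: `current_mean` is computed including the spike-contaminated windows, so large spikes shift the baseline the deviations are measured against.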
I can look into why these spikes are happening. Seems like a worthwhile approach to take in the future. Assuming we know when a spike is active, what then? Can the Joulescope do something with the information on the GPI?
Hi @s-nadella - I recommend just capturing all of the statistics data to RAM at 1000 Hz for 15 seconds = 15,000 statistics updates. You can then post-process however you would like. I can’t really comment on your deviation-based algorithm. I can say that this type of identification and filtering is very problematic and difficult to get right with high confidence.
The Joulescope does not do anything with the GPI values, but your script could use that to know where the spikes are. This would make a more robust identification method for the spikes.
I do not recommend deleting data, which ignores the actual energy consumption over that window. Instead, you likely want to fill it in based on the surrounding good data. The simplest method is linear interpolation.
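A minimal sketch of that fill-in step, using `np.interp` to replace flagged windows with values interpolated from the surrounding good windows (the data and the `bad` mask here are synthetic):

```python
import numpy as np

# Per-window average currents in uA; one window contaminated by a spike.
window_means = np.array([500.0, 501.0, 499.0, 580.0, 502.0, 500.0])
bad = np.array([False, False, False, True, False, False])  # flagged windows

idx = np.arange(len(window_means))
filled = window_means.copy()
# Linearly interpolate the bad windows from the good ones on either side.
filled[bad] = np.interp(idx[bad], idx[~bad], window_means[~bad])
print(filled)  # the 580 uA spike window becomes (499 + 502) / 2 = 500.5
```

This keeps the window count (and therefore the total duration) intact, unlike deletion, while replacing the spike energy with an estimate from the neighboring data.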
I can say that spike “filtering” could end up being a lot of work to get the spike identification good enough, especially since you have not yet been able to quantify your error & accuracy thresholds. Definitely consider figuring out how to isolate the subsystem you care about from the offending subsystem that causes these spikes.