In my typical use case I need to make oscilloscope measurements that run for at least 5 - 10 minutes, creating multi-gigabyte data files. In such a use case I am not interested in seeing fast transients; I want to see how the device's energy profile changes over time and how much energy the device consumes at different stages. So, I came up with this idea of decimating:
Currently the sample rate is 2 Ms/s, which limits the oscilloscope capture buffer to 30 seconds and creates very large data files very quickly.
Sometimes it may not be necessary to see/plot the captured data at the full 2 Ms/s, nor to store it to a file at 2 Ms/s. For example, it might be sufficient to plot/store the data at a 100 kHz rate, which would extend the 30-second limit by 20 times, up to 600 seconds, and reduce the file size by a factor of 20.
We could call this decimation by a factor of N. During decimation the data would still be sampled at 2 Ms/s, but its min, max, and average would be computed over each block of N samples, effectively reducing the sample rate by N.
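As a rough illustration of the idea (not an actual implementation for this device), the per-block min/max/average decimation could be sketched in Python with numpy; the function name and the factor of 20 are just examples:

```python
import numpy as np

def decimate_min_max_avg(samples, n):
    """Decimate by a factor of n: for each block of n raw samples,
    keep its min, max, and average (illustrative helper)."""
    usable = len(samples) - (len(samples) % n)  # drop a partial tail block
    blocks = np.asarray(samples[:usable], dtype=float).reshape(-1, n)
    return blocks.min(axis=1), blocks.max(axis=1), blocks.mean(axis=1)

# Example: one second of raw data at 2 Ms/s, decimated by N = 20
raw = np.sin(np.linspace(0.0, 2.0 * np.pi, 2_000_000))
mins, maxs, avgs = decimate_min_max_avg(raw, 20)
print(len(avgs))  # 100_000 decimated samples, i.e. a 100 kS/s effective rate
```

Keeping min and max alongside the average is what preserves the extremes that plain averaging would smooth away.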
With this decimation the data would still contain useful information for analysis. Some very quick transients would be smoothed out by the averaging, but the per-block min and max would preserve the statistics reasonably well.
Does this sound like a useful feature to add?