WinUsb_GetOverlappedResult error / device already removed


We are currently facing some issues with the Joulescope JS220 that we hope to get some input on. I wrote some code that makes a thread read data from the Joulescope, which we want to use as part of our pytest-based test framework. This code fails within approximately 1 hour with the attached error. Some of the error messages are unfortunately in Norwegian (thanks, Windows!), but they translate to:

WinUsb_GetOverlappedResult error 31: A device connected to the system does not work


WinUsb_GetOverlappedResult error 31: A device was listed that does not exist.

It seems to me like the Joulescope is suddenly powered off or something similar. Power to the DUT is also cut.

An excerpt from the code is here: Joulescope sample script

This is based on the simple example from the Read the Docs documentation. It instantiates the Joulescope, creates a callback function to receive the data, and hands that callback off to a separate thread for periodic logging using my DictLogger class.

I noticed from other posts in this forum that my approach here may be a bit overkill, as I'm probably using the full data rate of the Joulescope?

We need the Joulescope to do some long-running logging over approximately 30 hours, so this needs to be rock solid. We need samples of current, voltage, and power at 1 or 0.5 second intervals. Getting the mean over each interval is probably better than just 1 sample, so we don't accidentally miss huge spikes. The Joulescope is connected directly to a USB port on an HP EliteDesk running Windows 10 with Python 3.10.

As an intermediate solution, we are now manually running a modified version of the example script, where I have changed the CSV output format slightly to match the format previously used in the project.

I guess I'm asking both why this error is occurring and how I could have rewritten the code to avoid it. I'd rather put together some minimal code that does what we need for the project than use the full example scripts provided on GitHub. I also somewhat struggle to understand how and when I should use StreamProcess and DataBuffer.

Other than this error, the Joulescope is just what we need for monitoring our hardware! So I have bought three so far :grin:

Providing some more info:

Python 3.10.7

Device info from GUI:
hw 1
fw 1.0.7
fpga 1.0.4
serial number 001545

Is there any more info I can provide? Not sure if the Joulescope keeps a log anywhere if I run it without the GUI.

Hi @weierstrass92 and welcome to the Joulescope forum!

I think that there are two core questions here:

  1. What is the most reliable way to capture a 2 Hz or 1 Hz mean of current and voltage?

  2. What went wrong with the capture as written on this machine?

Recommended Approach

The Joulescope JS220 produces two types of data:
a. The 1 Msps sample data (downsampled on instrument from 2 Msps to 1 Msps).
b. The statistics data with a configurable rate, which defaults to 2 Hz.
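To make the rate configuration concrete, the statistics rate is set with a sample-count divider (scnt). Here is a minimal sketch of the mapping; the 1 Msps base rate for the divider is my assumption, so verify it against your driver version:

```python
STATISTICS_BASE_RATE = 1_000_000  # assumed base rate for the scnt divider, in Hz

def scnt_for_frequency(frequency):
    """Map a desired statistics rate in Hz to the scnt divider value."""
    return int(round(STATISTICS_BASE_RATE / frequency))

print(scnt_for_frequency(2.0))   # → 500000 (the 2 Hz default)
print(scnt_for_frequency(1.0))   # → 1000000 (1 Hz)
print(scnt_for_frequency(0.5))   # → 2000000 (one update every 2 s)
```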

For long-term captures, the statistics data is definitely more reliable. If the host drops samples, those samples are lost forever. However, the statistics data is computed on the instrument, so the energy and charge accumulators will always be right, even if the host drops a statistics update. The downsample_logging script with the JS220 uses the statistics data computed on the instrument.
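A statistics update arrives in the callback as a nested dict. As a reference for pulling the per-interval means and the on-instrument accumulators out of it, here is a sketch; the field layout below follows my reading of the pyjoulescope_driver statistics format, so treat it as an assumption and check it against an actual update:

```python
def extract_statistics(value):
    """Extract mean signal values and accumulators from a statistics dict.

    The nested field layout is assumed from the pyjoulescope_driver
    statistics format; verify against your installed driver version.
    """
    return {
        'current': value['signals']['current']['avg']['value'],
        'voltage': value['signals']['voltage']['avg']['value'],
        'power': value['signals']['power']['avg']['value'],
        'charge': value['accumulators']['charge']['value'],
        'energy': value['accumulators']['energy']['value'],
    }

# Exercise with a hand-constructed statistics dict:
stats = {
    'signals': {
        'current': {'avg': {'value': 0.010}},
        'voltage': {'avg': {'value': 3.3}},
        'power': {'avg': {'value': 0.033}},
    },
    'accumulators': {
        'charge': {'value': 0.5},
        'energy': {'value': 1.8},
    },
}
print(extract_statistics(stats))
```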

For a more minimal example using pyjoulescope_driver directly, see the statistics entry point.

Here is a script that does what you want. However, it displays to stdout rather than using DictLogger, as I am not familiar with DictLogger:

# Copyright 2023 Jetperch LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyjoulescope_driver import Driver
import time

_STATISTICS_BASE_RATE = 1_000_000  # base rate for the s/stats/scnt setting, in Hz
_SAMPLE_RATE = 2_000_000           # sample ID rate, in samples per second
_sample_id_offset = None

def _on_statistics_value(topic, value):
    # duration is guaranteed to be monotonic with ±25 ppm accuracy.
    global _sample_id_offset
    sample_id = value['time']['samples']['value'][0]
    if _sample_id_offset is None:
        _sample_id_offset = sample_id
    duration = (sample_id - _sample_id_offset) / _SAMPLE_RATE
    # localtime will have some variability due to host service interval variations
    # localtime is not guaranteed to be monotonic (daylight savings time, NTP)
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    i_avg = value['signals']['current']['avg']['value']
    v_avg = value['signals']['voltage']['avg']['value']
    p_avg = value['signals']['power']['avg']['value']
    print(f'{timestamp},{duration:.1f},{i_avg:.9g},{v_avg:.9g},{p_avg:.9g}')

def run():
    frequency = 1.0   # in Hz from 1 Hz to _STATISTICS_BASE_RATE Hz
    with Driver() as d:
        devices = d.device_paths()
        device_count = len(devices)
        if device_count != 1:
            print(f'Found {device_count} devices: {devices}')
            return 1
        device = devices[0]
        print("#timestamp,duration,current,voltage,power")  # column header
        d.publish(device + '/s/i/range/mode', 'auto')
        scnt = int(round(_STATISTICS_BASE_RATE / frequency))
        d.publish(device + '/s/stats/scnt', scnt)
        d.publish(device + '/s/stats/ctrl', 1)
        d.subscribe(device + '/s/stats/value', 'pub', _on_statistics_value)
        try:
            # do some testing for 30 hours
            while True:
                time.sleep(0.1)
        except KeyboardInterrupt:
            pass
        finally:
            d.publish(device + '/s/stats/ctrl', 0)
            d.unsubscribe(device + '/s/stats/value', _on_statistics_value)
    return 0


if __name__ == '__main__':
    run()

What went wrong

I am not entirely sure. However, for best reliability, I recommend using pyjoulescope_driver directly rather than the older joulescope package. The joulescope package wraps pyjoulescope_driver, and both approaches should work. We have done lots of testing on the pyjoulescope_driver implementation, but much less testing on the joulescope v1 backend wrapper and buffering.

The provided code ignores 0.9 seconds of data and then attempts to stream approximately 0.1 seconds of data. Unfortunately, starting and stopping the data stream is not very accurate relative to the 0.1 second duration. With this approach, it would be cleaner and more reliable to leave streaming on; the callback would then ignore 0.9 seconds of the received data and process 0.1 seconds. However, why do this when statistics are easier and more reliable?
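If you did want to keep the streaming approach, the cleaner variant described above can be sketched as a sample-ID filter: leave streaming enabled and have the callback discard 0.9 of every 1.0 second. The 1 Msps rate and the window placement below are illustrative assumptions:

```python
SAMPLE_RATE = 1_000_000  # assumed streaming sample rate, in samples per second
KEEP_FRACTION = 0.1      # process 0.1 s out of every 1.0 s

def keep_sample(sample_id):
    """Return True if this sample falls in the first 0.1 s of its second."""
    return (sample_id % SAMPLE_RATE) < int(SAMPLE_RATE * KEEP_FRACTION)

print(keep_sample(0))          # → True: first sample of a second
print(keep_sample(99_999))     # → True: last kept sample
print(keep_sample(100_000))    # → False: first discarded sample
```

Since the stream never stops, the start/stop timing inaccuracy no longer matters; but as noted above, on-instrument statistics remain the simpler and more reliable option.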

Does this make sense? Does the code above work for you?