NTP local server, Poll frequency, Hardware timestamping #65
Through the firmware, yes. This feature is not exposed as an endpoint; if you think it would be a good addition, feel free to open an issue requesting it.
Through the firmware, I believe so. A different NTP library may have to be used to get higher accuracy; the current NTP implementation uses a remote server and only gives second-level precision. If you want to, say, synchronize multiple WiFi Shields to a local server, I would recommend rolling your own NTP-style method using the TCP client/server socket that is used for streaming.
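As an illustration only, a host-side driver could estimate the shield's clock offset with a few round trips over that socket. The `TIME?` request and millisecond reply in the sketch below are hypothetical and would have to be added to the firmware; nothing like this exists in the stock code.

```python
import socket
import time

def estimate_offset(host, port, n_probes=8):
    """Roughly estimate (shield_clock - host_clock) with an NTP-style
    request/response exchange over the streaming TCP socket.
    Hypothetical: assumes the firmware answers a 'TIME?' request with its
    millisecond counter as ASCII."""
    offsets = []
    with socket.create_connection((host, port), timeout=2) as sock:
        for _ in range(n_probes):
            t0 = time.time()
            sock.sendall(b"TIME?\n")
            board_ms = int(sock.recv(64).strip())
            t1 = time.time()
            midpoint = (t0 + t1) / 2.0          # assume symmetric network delay
            offsets.append(board_ms / 1000.0 - midpoint)
    return sorted(offsets)[len(offsets) // 2]    # median is robust to outliers
```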
No. The OpenBCI GUI uses the timestamp of when the packet arrives at the GUI.
Thanks for adding this feature request. Using a local NTP server and allowing more frequent polls (e.g., 8 packets in the first 30 seconds after initiation, i.e., the "burst" option in the NTP configuration) can be essential for syncing time down to the 1-millisecond level, which is helpful for studies involving many precisely timed stimuli or social interactions. Unfortunately, I am not very savvy at coding for the firmware, but I would definitely appreciate such functionality in the WiFi Shield.
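For reference, on a client running ntpd, aggressive polling of a local server looks roughly like the line below in ntp.conf (the server address is only a placeholder): `iburst` sends the initial burst of 8 packets, `burst` does the same at every poll, and `minpoll 4` / `maxpoll 6` keep the poll interval between 16 and 64 seconds.

```
# ntp.conf on a machine that should track the local server closely
server 192.168.1.10 iburst burst minpoll 4 maxpoll 6
```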
Totally agree! This will need a driver on the other end to start the burst, but I think it's totally doable. We need to code a method that builds a map from system time on the driver to board time, or vice versa. There is already a feature on the Cyton for adding board time to every sample.
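One way such a map could be built on the driver side is to record a few (board time, system time) pairs while polling and fit a linear relation, so every sample carrying board time can be converted to system time. The helper names below are made up for illustration, not part of any existing driver.

```python
import numpy as np

def fit_clock_map(board_ms, system_s):
    """Fit system_time ~= slope * board_time + intercept from paired readings.
    board_ms: board millisecond counter sampled at known driver times.
    system_s: driver (system) times in seconds for the same instants.
    The slope also captures crystal drift, not just a fixed offset."""
    slope, intercept = np.polyfit(np.asarray(board_ms) / 1000.0, system_s, 1)
    return slope, intercept

def board_to_system(board_ms, slope, intercept):
    """Convert a board timestamp (ms) attached to a sample into driver time."""
    return slope * (board_ms / 1000.0) + intercept
```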
Just a few more notes:
Another person just asked for this feature.
@aj-ptw Of course, the problem is not severe at this point; I got about 1 to 5 seconds of time drift over an entire 90-minute recording.
@wliao229 Are you using UDP or TCP? And are you then streaming into LSL from the GUI?
@aj-ptw I am using TCP (should I use UDPx3?). I am using the SavedData from the GUI.
@wliao229 A 5-second time drift for a 90-minute recording is A LOT!
Might be worth trying!
Some inspiration on how it's done in LSL:
my time drift = (end_timestamp - start_timestamp) - number_of_samples / sampling_rate
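For concreteness, a small script applying that formula to a recorded session might look like the sketch below; the function name and the 1600 Hz default are only illustrative (1600 Hz is the Ganglion rate discussed later in this thread).

```python
def timing_summary(timestamps, nominal_rate_hz=1600):
    """Summarize drift for a recording, following the formula above.
    timestamps: per-sample arrival times in seconds (e.g. from the GUI file).
    nominal_rate_hz: the rate the board is configured for."""
    duration = timestamps[-1] - timestamps[0]
    n = len(timestamps)
    drift = duration - n / nominal_rate_hz   # seconds gained or lost
    effective_rate = n / duration            # Hz actually received
    return drift, effective_rate
```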
@wliao229 Is there a lot of drops? What is your effective sampling rate? |
@mesca I think a straightforward solution is: (a) hardware timestamping by the WiFi Shield, (b) millisecond-level time sync with a local NTP server. Both of them are sort of being implemented.
@mesca A major source of dropping in my case happens at the sample level (see #70): I customized the Ganglion firmware to read analog signals, and that causes the Ganglion to drop samples when running at 1600 Hz. My effective sampling rate is around 1400 Hz (roughly 8-10% dropped samples), which I am still okay with. The only problem is the unequal time intervals, due to a mixture of (a) sample dropping, (b) packet dropping, and (c) network delay.
Network delays, i.e., intermittent long gaps between sends, are a big problem with this RAM-starved chip. I was only able to statically allocate enough space for 200 raw packets, so if a delay longer than 100 ms happens (which it does), you're going to lose packets when streaming 16 channels at 1000 Hz. The next feature I'm trying to add is dynamic memory allocation on the board right as the request to start streaming comes in. That way, I estimate we could get up to a 250 ms ring buffer at 1000 Hz and 16 channels. I'm also working on a new iteration of the shield with a chip that has much more stack/heap space, which will allow us to store seconds' worth of data.
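To see roughly where the 100 ms figure comes from, here is a back-of-the-envelope sketch. It assumes a 16-channel sample is split across two 8-channel raw packets; that split is an assumption for illustration, not a statement about the firmware's exact format.

```python
def buffer_duration_ms(buffer_packets, sample_rate_hz, packets_per_sample=2):
    """How long (ms) a ring buffer of raw packets lasts before overflowing
    if the TCP send stalls."""
    packets_per_second = sample_rate_hz * packets_per_sample
    return 1000.0 * buffer_packets / packets_per_second

print(buffer_duration_ms(200, 1000))   # ~100 ms: the current static buffer
print(buffer_duration_ms(500, 1000))   # ~250 ms would need roughly 500 packets
```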
There is also a big potential speed-up for the Ganglion, where we can jam two samples per packet instead of sending empty space like we do now.
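A rough illustration of that idea, assuming 4-channel Ganglion samples stored as 3-byte big-endian two's-complement counts in a 24-byte channel area; the exact raw packet layout is not spelled out in this thread, so treat the layout here as an assumption.

```python
import struct

def pack_two_ganglion_samples(sample_a, sample_b):
    """Pack two 4-channel samples (signed 24-bit counts) into the 24-byte
    channel area that a single 8-channel raw packet would occupy, instead
    of padding the second half with zeros. Layout is assumed, not official."""
    payload = b""
    for sample in (sample_a, sample_b):
        for value in sample:                        # 4 channels per sample
            payload += struct.pack(">i", value)[1:]  # keep low 3 bytes, big-endian
    return payload                                   # 2 * 4 * 3 = 24 bytes
```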
Sounds good! I do feel network delays or packet drops would be less problematic if high-precision hardware timestamping (or even just a monotonically increasing ID) were available in each sample. In that case, missing samples/packets can be marked and the time series still has equal intervals; otherwise, there is always uncertainty in a networked/wireless environment.
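A minimal sketch of how a per-sample ID would let you mark gaps and rebuild an equally spaced series on the host side; the full monotonic ID is an assumption here (the existing packets only carry a small wrapping sample counter).

```python
import numpy as np

def mark_gaps(sample_ids, values, sample_rate_hz):
    """Rebuild an equally spaced series from samples that carry a
    monotonically increasing ID; dropped samples become NaN rows."""
    sample_ids = np.asarray(sample_ids)
    n_total = int(sample_ids[-1] - sample_ids[0]) + 1
    full = np.full(n_total, np.nan)
    full[sample_ids - sample_ids[0]] = values
    times = np.arange(n_total) / sample_rate_hz   # equal-interval time axis
    return times, full
```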
The document http://docs.openbci.com/OpenBCI%20Software/03-OpenBCI_Wifi_Server says
Three questions: