'Clip' is not a suitable structure for an OpenTrackIO sample #115
I am not sure I understand the "and vice versa" from the above. Or is the 'Sample' type more like what you would get from indexing a Clip? If the latter, I am not sure what 'remove unnecessary fields' means. Would it suffice to have the … Hmmm. Does "and vice versa" mean there would be a method on a Clip …?
But while we are speaking of clips: a freshly-created Clip …
I'd like @palemieux to comment on this to be sure we agree on the definitions. I see it like this: a Clip is a collection of metadata parameters that may be sampled at different rates, which is how CamDKit was originally designed. In this case each dynamic (or 'regular', as it is referred to in code) parameter has an array of values. A Clip is normally associated with a recording, e.g. on a card in a camera, and can contain static data and dynamic data of different lengths.
By contrast, a Sample (or perhaps 'Frame'?) is a sample of multiple metadata parameters at a particular time,
and so a sequence of samples (a 'Stream'?) would be an array of these objects. Currently data is stored in a Clip and then 'swizzled' out when the OpenTrackIO docs are generated. It would be better to refactor to a new Sample type that uses the same metadata model and framework but handles the conceptual difference between a Clip and a Sample.
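To make the Clip/Sample distinction concrete, here is a minimal sketch of the two shapes of data being discussed, including the 'swizzle' from per-parameter arrays into a per-time Sample. All class, field, and function names here are illustrative assumptions, not the actual camdkit API:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch: a Clip stores one array of values per dynamic
# parameter, plus static data. Names are illustrative, not camdkit's API.
@dataclass
class Clip:
    static: dict[str, Any] = field(default_factory=dict)
    # each dynamic parameter maps to its own array of values,
    # potentially sampled at a different rate per parameter
    dynamic: dict[str, list[Any]] = field(default_factory=dict)

# A Sample (or 'Frame') holds one value per parameter at a single time.
@dataclass
class Sample:
    parameters: dict[str, Any]

def sample_at(clip: Clip, index: int) -> Sample:
    """'Swizzle' a Clip's per-parameter arrays into one Sample at an
    index, assuming (for simplicity) all parameters share one rate."""
    return Sample(parameters={name: values[index]
                              for name, values in clip.dynamic.items()})

clip = Clip(static={"camera_make": "X"},
            dynamic={"pan": [0.0, 0.1, 0.2], "tilt": [1.0, 1.1, 1.2]})
print(sample_at(clip, 1).parameters)  # {'pan': 0.1, 'tilt': 1.1}
```

Note the simplifying assumption in `sample_at`: once parameters are sampled at different rates, a single integer index no longer lines them up, which is exactly the conceptual gap being raised in this thread.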
If the metadata parameters in the clip may be sampled at different rates, is there any way of determining a priori which sample of one parameter corresponds to which sample of another? If not, what is the advantage (beyond simplicity of implementation) of storing parameters from various samples as lists, one per parameter, stored in a clip object, vs. having the clip object hold a dictionary of samples indexed by timestamp (be it timecode or PTP or even a sequence number from the transport system)?
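The timestamp-indexed alternative suggested above might look roughly like this. This is a hypothetical sketch (names and float timestamps are assumptions for illustration; a real design might key on timecode or PTP time):

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical alternative: the clip holds a dict of per-time samples
# keyed by timestamp, so alignment across parameters sampled at
# different rates is explicit rather than positional.
@dataclass
class TimestampedClip:
    static: dict[str, Any] = field(default_factory=dict)
    samples: dict[float, dict[str, Any]] = field(default_factory=dict)

    def add(self, timestamp: float, **parameters: Any) -> None:
        # parameters sampled at different rates simply appear at
        # different timestamps; merge if the timestamp already exists
        self.samples.setdefault(timestamp, {}).update(parameters)

tclip = TimestampedClip()
tclip.add(0.00, pan=0.0, tilt=1.0)
tclip.add(0.02, pan=0.1)            # tilt sampled at a lower rate
tclip.add(0.04, pan=0.2, tilt=1.2)
print(sorted(tclip.samples))        # [0.0, 0.02, 0.04]
print(tclip.samples[0.02])          # {'pan': 0.1}
```

The trade-off is the one the question identifies: per-parameter lists are simpler to populate, while timestamp-keyed samples make cross-parameter correspondence unambiguous at the cost of sparser, more irregular storage.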
While we're discussing … incidentally, in response to @jamesmosys' comment above, I think …
It has been suggested that we have a new 'Sample' type, and this allows us to remove unnecessary fields from a Clip, and vice versa.