
'Clip' is not a suitable structure for an OpenTrackIO sample #115

Open
jamesmosys opened this issue Oct 20, 2024 · 5 comments

Comments

@jamesmosys
Collaborator

It has been suggested that we add a new 'Sample' type, which would allow us to remove unnecessary fields from a Clip, and vice versa.

@JGoldstone
Contributor

JGoldstone commented Dec 9, 2024

I am not sure I understand the "and vice versa" from the above.

For the "Sample is the result of removing unnecessary fields from a Clip" suggestion, is the idea that one has a Sample object, containing the Sampling.STATIC data from the Clip for such fields that have non-None values as well as the Sampling.REGULAR data from the Clip that is a tuple of non-None values?

Or is the 'Sample' type more like what you would get from indexing a Clip, where in the process of doing so, one would select the tuple values corresponding to that index, so that a 'Sample' was basically a single temporal sample?

If the latter, I am not sure what 'remove unnecessary fields' means. Would it suffice to have the Sample's to_json() method omit Sampling.STATIC fields that had None as a value, and Sampling.REGULAR fields that were empty tuples?

Hmmm. Does "and vice versa" mean there would be a method on a Sample that would take a JSON dict or JSON string and create a Clip with values of None for Sampling.STATIC attributes not in the Sample, and with empty tuples resulting in the Clip for Sampling.REGULAR attributes that, similarly, were not in the Sample?
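A minimal sketch of that round trip, using invented field names (`camera_make`, `focus_distance`) rather than camdkit's actual model: serialization omits `None` static fields and empty regular tuples, and reconstruction defaults missing keys back to `None` / `()`:

```python
# Hypothetical sketch (not camdkit's actual API) of the round trip
# described above. Field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional
import json

@dataclass
class Sample:
    # Sampling.STATIC-style field; None means "absent"
    camera_make: Optional[str] = None
    # Sampling.REGULAR-style field; empty tuple means "absent"
    focus_distance: tuple = ()

    def to_json(self) -> str:
        """Serialize, omitting None statics and empty regular tuples."""
        d = {}
        if self.camera_make is not None:
            d["camera_make"] = self.camera_make
        if self.focus_distance:
            d["focus_distance"] = list(self.focus_distance)
        return json.dumps(d)

@dataclass
class Clip:
    camera_make: Optional[str] = None
    focus_distance: tuple = ()

    @classmethod
    def from_sample_json(cls, s: str) -> "Clip":
        """Rebuild a Clip, defaulting absent fields to None / ()."""
        d = json.loads(s)
        return cls(
            camera_make=d.get("camera_make"),
            focus_distance=tuple(d.get("focus_distance", ())),
        )
```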

@JGoldstone
Contributor

But while we are speaking of clips: a freshly-created Clip has its various sampled fields set to None. Using Clip.append() or using accessors one can set the sampled fields to tuples of the appropriate type. Is it considered legitimate to have a sampled field have a zero-length tuple?

@jamesmosys
Collaborator Author

I'd like @palemieux to comment on this to be sure we agree on the definitions. I see it like this:

A Clip is a collection of metadata parameters that may be sampled at different rates. This is how CamDKit was originally designed. In this case each dynamic (or 'regular' as it is referred to in code) parameter has an array of values. A Clip is normally associated with a recording e.g. on a card in a camera and can contain static data and dynamic data of different lengths.

{
    "static_param1": 1,
    "dyn_param1": [1, 2, 3],
    "dyn_param2": [1, 2, 3, 4, 5, 6]
}

By contrast, a Sample (or perhaps 'Frame'?) is a sample of multiple metadata parameters at a particular time.

{
    "param1": 1,
    "param2": 2,
    "timestamp": 123456789
}

and so a sequence of samples (a 'Stream'?) would be an array of these objects [{...},{...},{...}].
A Stream is different to a Clip because it cannot have different frequency metadata inside as each Sample defines state at a particular time. (Sources requiring differing frequency data would send on separate Streams.)

Currently data is stored in a Clip and then 'swizzled' out when the OpenTrackIO docs are generated. It would be better to refactor, introducing a new Sample type that uses the same metadata model and framework but handles the conceptual difference between a Clip and a Sample.
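The 'swizzle' could be sketched like this; `clip_to_samples` and its handling of unequal array lengths are assumptions for illustration, not camdkit's actual code:

```python
# Hypothetical sketch: slicing a Clip-like dict (static values plus
# parallel per-parameter arrays) into per-time Sample dicts.
def clip_to_samples(clip: dict) -> list[dict]:
    """Split a Clip into Samples, one per index of the longest array."""
    static = {k: v for k, v in clip.items() if not isinstance(v, list)}
    dynamic = {k: v for k, v in clip.items() if isinstance(v, list)}
    if not dynamic:
        return [dict(static)]
    n = max(len(v) for v in dynamic.values())
    samples = []
    for i in range(n):
        s = dict(static)
        for k, v in dynamic.items():
            if i < len(v):  # arrays sampled at lower rates are shorter
                s[k] = v[i]
        samples.append(s)
    return samples
```

Note the open design question this exposes: when the arrays have different lengths, a Sample at a late index simply lacks the lower-rate parameters, which is one way of reading the interpolation problem raised below.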

@JGoldstone
Contributor

If the metadata parameters in the clip may be sampled at different rates, is there any way of determining a priori which sample of dyn_param2 "goes with" a particular sample of dyn_param1?

If not, what is the advantage (beyond simplicity of implementation) of storing parameters from various samples as lists, one per parameter, stored in a clip object, vs. having the clip object have a dictionary of samples indexed by timestamp (be it timecode or PTP or even a sequence number from the transport system)?
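A minimal sketch of that alternative layout, with invented names (`by_timestamp`, the `(timestamp, parameter, value)` triple): each parameter reading is filed under its own timestamp, so co-timed values group naturally regardless of rate.

```python
# Hypothetical sketch of a clip holding a dict of samples keyed by
# timestamp, instead of one parallel list per parameter.
from collections import defaultdict

def by_timestamp(readings: list[tuple[int, str, float]]) -> dict[int, dict[str, float]]:
    """Group (timestamp, parameter, value) readings into per-time samples."""
    samples: dict[int, dict[str, float]] = defaultdict(dict)
    for ts, param, value in readings:
        samples[ts][param] = value
    return dict(samples)
```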

@JGoldstone
Contributor

While we're discussing Clip, why do we need a duration parameter? I would think the duration would be the number of instances of a regular attribute divided by the Clip's capture_frame_rate (though that way, the duration of a clip could vary depending on which attribute one examines).
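A minimal sketch of that arithmetic, assuming the implied duration is the sample count divided by `capture_frame_rate`; the function name is invented:

```python
# Hypothetical: duration implied by one regular attribute's sample count.
from fractions import Fraction

def implied_duration(num_samples: int, capture_frame_rate: Fraction) -> Fraction:
    """Duration in seconds: sample count / capture rate (samples per second)."""
    return Fraction(num_samples) / capture_frame_rate
```

For example, 48 instances at 24 fps implies 2 seconds; a lens attribute sampled at a higher rate over the same span would imply the same duration only if its count and rate scale together, which is the ambiguity noted above.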

Incidentally, in response to @jamesmosys's comment above, I think Sample is to be preferred to Frame. Lens metadata comes in at many times the frame rate. Even if it comes in over a separate Stream with a higher sample rate, you still need to call the individual things you've gathered something, and Frame can't be it; multiple things would relate to the same frame of image or sound essence.
