
Merge branch 'main' into set_verbose_to_false
h-mayorquin authored Dec 20, 2024
2 parents d234021 + 74ca5b8 commit df5954d
Showing 23 changed files with 131 additions and 175 deletions.
7 changes: 6 additions & 1 deletion CHANGELOG.md
@@ -6,13 +6,17 @@
* Interfaces and converters now have `verbose=False` by default [PR #1153](https://github.com/catalystneuro/neuroconv/pull/1153)
* Completely removed compression settings from most places [PR #1126](https://github.com/catalystneuro/neuroconv/pull/1126)
* Soft deprecation for `file_path` as an argument of `SpikeGLXNIDQInterface` and `SpikeGLXRecordingInterface` [PR #1155](https://github.com/catalystneuro/neuroconv/pull/1155)
* `starting_time` in RecordingInterfaces has been given a soft deprecation in favor of time alignment methods (see the sketch below) [PR #1158](https://github.com/catalystneuro/neuroconv/pull/1158)
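A minimal sketch of the replacement pattern, assuming the `set_aligned_starting_time` time alignment method documented in the temporal alignment guide (the interface and path are illustrative):

```python
from neuroconv.datainterfaces import SpikeGLXRecordingInterface

# Instead of passing starting_time through the conversion options, shift the
# interface's times with a time alignment method before running the conversion.
interface = SpikeGLXRecordingInterface(folder_path="path/to/spikeglx_session")  # illustrative path
interface.set_aligned_starting_time(aligned_starting_time=5.0)  # shift all times by 5 s
```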


## Bug Fixes
* datetime objects can now be validated as conversion options [#1139](https://github.com/catalystneuro/neuroconv/pull/1126)
* Make `NWBMetaDataEncoder` public again [PR #1142](https://github.com/catalystneuro/neuroconv/pull/1142)
* Fix a bug where data in `DeepLabCutInterface` failed to write when `ndx-pose` was not imported. [#1144](https://github.com/catalystneuro/neuroconv/pull/1144)
* `SpikeGLXConverterPipe` converter now accepts multi-probe structures with multi-trigger and does not assume a specific folder structure [#1150](https://github.com/catalystneuro/neuroconv/pull/1150)
* `SpikeGLXNIDQInterface` is no longer written as an ElectricalSeries [#1152](https://github.com/catalystneuro/neuroconv/pull/1152)
* Fix a bug in ecephys interfaces where extra electrode groups and devices were written if the "group_name" property was set in the recording extractor [#1164](https://github.com/catalystneuro/neuroconv/pull/1164)


## Features
* Propagate the `unit_electrode_indices` argument from the spikeinterface tools to `BaseSortingExtractorInterface`. This allows users to map units to the electrode table when adding sorting data (see the sketch below) [PR #1124](https://github.com/catalystneuro/neuroconv/pull/1124)
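A hedged sketch of that mapping; `unit_electrode_indices` is the argument named in the entry, while `MockSortingInterface`, `mock_NWBFile`, and passing the argument through `add_to_nwbfile` are assumptions for illustration:

```python
from pynwb.testing.mock.file import mock_NWBFile
from neuroconv.tools.testing.mock_interfaces import MockSortingInterface

# Map each sorted unit to its rows in the electrode table: one list of
# electrode-table indices per unit. The call pattern is an assumption and
# presumes an electrode table has already been written to the file.
interface = MockSortingInterface(num_units=2)
nwbfile = mock_NWBFile()
interface.add_to_nwbfile(
    nwbfile=nwbfile,
    unit_electrode_indices=[[0, 1], [2, 3]],
)
```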
@@ -28,7 +32,8 @@
* Use mixin tests for ecephys mocks [PR #1136](https://github.com/catalystneuro/neuroconv/pull/1136)
* Use pytest format for dandi tests to avoid a Windows permission error on teardown [PR #1151](https://github.com/catalystneuro/neuroconv/pull/1151)
* Added many docstrings for public functions [PR #1063](https://github.com/catalystneuro/neuroconv/pull/1063)

* Clean up warnings and deprecations in the testing framework [PR #1158](https://github.com/catalystneuro/neuroconv/pull/1158)
* Enhance the typing of the `NWBConverter` signature by adding `zarr` as a literal option for the backend and backend configuration [PR #1160](https://github.com/catalystneuro/neuroconv/pull/1160)

# v0.6.5 (November 1, 2024)

6 changes: 2 additions & 4 deletions docs/conversion_examples_gallery/fiberphotometry/tdt_fp.rst
@@ -207,15 +207,13 @@ Convert TDT Fiber Photometry data to NWB using
>>> LOCAL_PATH = Path(".") # Path to neuroconv
>>> editable_metadata_path = LOCAL_PATH / "tests" / "test_on_data" / "ophys" / "fiber_photometry_metadata.yaml"
- >>> interface = TDTFiberPhotometryInterface(folder_path=folder_path, verbose=True)
- Source data is valid!
+ >>> interface = TDTFiberPhotometryInterface(folder_path=folder_path, verbose=False)
>>> metadata = interface.get_metadata()
>>> metadata["NWBFile"]["session_start_time"] = datetime.now(tz=ZoneInfo("US/Pacific"))
>>> editable_metadata = load_dict_from_file(editable_metadata_path)
>>> metadata = dict_deep_update(metadata, editable_metadata)
>>> # Choose a path for saving the nwb file and run the conversion
- >>> nwbfile_path = LOCAL_PATH / "example_tdtfp.nwb"
+ >>> nwbfile_path = f"{path_to_save_nwbfile}"
>>> # t1 and t2 are optional arguments to specify the start and end times for the conversion
>>> interface.run_conversion(nwbfile_path=nwbfile_path, metadata=metadata, t1=0.0, t2=1.0)
- NWB file saved at example_tdtfp.nwb!
2 changes: 1 addition & 1 deletion docs/conversion_examples_gallery/sorting/blackrock.rst
@@ -19,7 +19,7 @@ Convert Blackrock sorting data to NWB using
>>>
>>> file_path = f"{ECEPHY_DATA_PATH}/blackrock/FileSpec2.3001.nev"
>>> # Change the file_path to the location of the file in your system
- >>> interface = BlackrockSortingInterface(file_path=file_path, verbose=False)
+ >>> interface = BlackrockSortingInterface(file_path=file_path, sampling_frequency=30000.0, verbose=False)
>>>
>>> # Extract what metadata we can from the source files
>>> metadata = interface.get_metadata()
3 changes: 2 additions & 1 deletion docs/conversion_examples_gallery/sorting/neuralynx.rst
@@ -20,7 +20,8 @@ Convert Neuralynx data to NWB using
>>>
>>> folder_path = f"{ECEPHY_DATA_PATH}/neuralynx/Cheetah_v5.5.1/original_data"
>>> # Change the folder_path to the location of the data in your system
- >>> interface = NeuralynxSortingInterface(folder_path=folder_path, verbose=False)
+ >>> # The stream_id is optional and is used to specify the sampling frequency of the data
+ >>> interface = NeuralynxSortingInterface(folder_path=folder_path, verbose=False, stream_id="0")
>>>
>>> metadata = interface.get_metadata()
>>> session_start_time = datetime(2020, 1, 1, 12, 30, 0, tzinfo=ZoneInfo("US/Pacific")).isoformat()
@@ -88,7 +88,9 @@ def get_metadata_schema(self) -> dict:
def get_metadata(self) -> DeepDict:
metadata = super().get_metadata()

- channel_groups_array = self.recording_extractor.get_channel_groups()
+ from ...tools.spikeinterface.spikeinterface import _get_group_name
+
+ channel_groups_array = _get_group_name(recording=self.recording_extractor)
unique_channel_groups = set(channel_groups_array) if channel_groups_array is not None else ["ElectrodeGroup"]
electrode_metadata = [
dict(name=str(group_id), description="no description", location="unknown", device="DeviceEcephys")
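`_get_group_name` itself is outside this diff; a minimal sketch of the fallback it implies (prefer an explicit `"group_name"` property, otherwise fall back to channel groups; not the actual implementation):

```python
from typing import Optional

import numpy as np


def _get_group_name(recording) -> Optional[np.ndarray]:
    """Sketch: prefer the "group_name" property over raw channel groups."""
    group_names = recording.get_property("group_name")  # None when the property is unset
    if group_names is None:
        group_names = recording.get_channel_groups()
    if group_names is None:
        return None
    # Electrode group names are written as strings in the NWB electrode table
    return np.asarray(group_names).astype(str)
```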
@@ -98,8 +98,8 @@ def __init__(self, file_path: FilePath, sampling_frequency: Optional[float] = None,
The file path to the ``.nev`` data
sampling_frequency: float, optional
The sampling frequency for the sorting extractor. When the signal data is available (.ncs) those files will be
used to extract the frequency automatically. Otherwise, the sampling frequency needs to be specified for
this extractor to be initialized.
verbose : bool, default: False
Enables verbosity
"""
@@ -112,7 +112,14 @@ class NeuralynxSortingInterface(BaseSortingExtractorInterface):
associated_suffixes = (".nse", ".ntt", ".nse", ".nev")
info = "Interface for Neuralynx sorting data."

- def __init__(self, folder_path: DirectoryPath, sampling_frequency: Optional[float] = None, verbose: bool = False):
+
+ def __init__(
+     self,
+     folder_path: DirectoryPath,
+     sampling_frequency: Optional[float] = None,
+     verbose: bool = False,
+     stream_id: Optional[str] = None,
+ ):
"""_summary_
Parameters
@@ -123,9 +130,14 @@ def __init__(self, folder_path: DirectoryPath, sampling_frequency: Optional[float] = None,
If a specific sampling_frequency is desired it can be set with this argument.
verbose : bool, default: False
Enables verbosity
+ stream_id: str, optional
+     Used by SpikeInterface and neo to calculate t_start; if not provided and the stream is unique,
+     it will be chosen automatically.
"""

- super().__init__(folder_path=folder_path, sampling_frequency=sampling_frequency, verbose=verbose)
+ super().__init__(
+     folder_path=folder_path, sampling_frequency=sampling_frequency, stream_id=stream_id, verbose=verbose
+ )


def extract_neo_header_metadata(neo_reader) -> dict:
11 changes: 4 additions & 7 deletions src/neuroconv/nwbconverter.py
@@ -204,11 +204,8 @@ def run_conversion(
nwbfile: Optional[NWBFile] = None,
metadata: Optional[dict] = None,
overwrite: bool = False,
- # TODO: when all H5DataIO prewraps are gone, introduce Zarr safely
- # backend: Union[Literal["hdf5", "zarr"]],
- # backend_configuration: Optional[Union[HDF5BackendConfiguration, ZarrBackendConfiguration]] = None,
- backend: Optional[Literal["hdf5"]] = None,
- backend_configuration: Optional[HDF5BackendConfiguration] = None,
+ backend: Optional[Literal["hdf5", "zarr"]] = None,
+ backend_configuration: Optional[Union[HDF5BackendConfiguration, ZarrBackendConfiguration]] = None,
conversion_options: Optional[dict] = None,
) -> None:
"""
@@ -226,11 +223,11 @@
overwrite : bool, default: False
Whether to overwrite the NWBFile if one exists at the nwbfile_path.
The default is False (append mode).
backend : "hdf5", optional
backend : {"hdf5", "zarr"}, optional
The type of backend to use when writing the file.
If a `backend_configuration` is not specified, the default type will be "hdf5".
If a `backend_configuration` is specified, then the type will be auto-detected.
- backend_configuration : HDF5BackendConfiguration, optional
+ backend_configuration : HDF5BackendConfiguration or ZarrBackendConfiguration, optional
The configuration model to use when configuring the datasets for this backend.
To customize, call the `.get_default_backend_configuration(...)` method, modify the returned
BackendConfiguration object, and pass that instead.
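Following the updated docstring, a hedged usage sketch for the newly allowed `"zarr"` literal; `get_default_backend_configuration` is the method the docstring names, but its exact signature here is an assumption:

```python
# Get the default configuration for the chosen backend, customize it, and
# pass it back to run_conversion (signature assumed from the docstring).
backend_configuration = converter.get_default_backend_configuration(backend="zarr")
# ...modify chunking/compression on the returned BackendConfiguration here...
converter.run_conversion(
    nwbfile_path="session.nwb.zarr",  # illustrative path
    metadata=metadata,
    backend="zarr",
    backend_configuration=backend_configuration,
)
```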
10 changes: 10 additions & 0 deletions src/neuroconv/tools/spikeinterface/spikeinterface.py
@@ -862,6 +862,16 @@ def add_electrical_series_to_nwbfile(
whenever possible.
"""

+ if starting_time is not None:
+     warnings.warn(
+         "The 'starting_time' parameter is deprecated and will be removed in June 2025. "
+         "Use the time alignment methods or set the recording times directly to modify the starting time "
+         "or timestamps of the data if needed: "
+         "https://neuroconv.readthedocs.io/en/main/user_guide/temporal_alignment.html",
+         DeprecationWarning,
+         stacklevel=2,
+     )
+
assert write_as in [
"raw",
"processed",
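For the "set the recording times directly" alternative named in the warning, a sketch against spikeinterface's `get_times`/`set_times` recording API (the toy recording and 5 s offset are illustrative):

```python
from spikeinterface.core import generate_recording

# Shift a recording's times by a fixed offset instead of passing starting_time.
recording = generate_recording(durations=[1.0])  # toy recording for illustration
times = recording.get_times(segment_index=0) + 5.0
recording.set_times(times, segment_index=0)
```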
22 changes: 7 additions & 15 deletions src/neuroconv/tools/testing/data_interface_mixins.py
@@ -1,4 +1,3 @@
- import inspect
import json
import tempfile
from abc import abstractmethod
@@ -407,7 +406,7 @@ def check_read_nwb(self, nwbfile_path: str):
# Spikeinterface behavior is to load the electrode table channel_name property as a channel_id
self.nwb_recording = NwbRecordingExtractor(
file_path=nwbfile_path,
- electrical_series_name=electrical_series_name,
+ electrical_series_path=f"acquisition/{electrical_series_name}",
use_pynwb=True,
)

@@ -439,7 +438,7 @@ def check_read_nwb(self, nwbfile_path: str):
assert_array_equal(
recording.get_property(property_name), self.nwb_recording.get_property(property_name)
)
- if recording.has_scaled_traces() and self.nwb_recording.has_scaled_traces():
+ if recording.has_scaleable_traces() and self.nwb_recording.has_scaleable_traces():
check_recordings_equal(RX1=recording, RX2=self.nwb_recording, return_scaled=True)

# Compare channel groups
@@ -625,29 +624,22 @@ def check_read_nwb(self, nwbfile_path: str):

# NWBSortingExtractor on spikeinterface does not yet support loading data written from multiple segments.
if sorting.get_num_segments() == 1:
- # TODO after 0.100 release remove this if
- signature = inspect.signature(NwbSortingExtractor)
- if "t_start" in signature.parameters:
-     nwb_sorting = NwbSortingExtractor(file_path=nwbfile_path, sampling_frequency=sf, t_start=0.0)
- else:
-     nwb_sorting = NwbSortingExtractor(file_path=nwbfile_path, sampling_frequency=sf)
+ nwb_sorting = NwbSortingExtractor(file_path=nwbfile_path, sampling_frequency=sf, t_start=0.0)

# In the NWBSortingExtractor, since unit_names could be not unique,
# table "ids" are loaded as unit_ids. Here we rename the original sorting accordingly
if "unit_name" in sorting.get_property_keys():
renamed_unit_ids = sorting.get_property("unit_name")
- # sorting_renamed = sorting.rename_units(new_unit_ids=renamed_unit_ids)  # TODO after 0.100 release use this
- sorting_renamed = sorting.select_units(unit_ids=sorting.unit_ids, renamed_unit_ids=renamed_unit_ids)
+ sorting_renamed = sorting.rename_units(new_unit_ids=renamed_unit_ids)

else:
nwb_has_ids_as_strings = all(isinstance(id, str) for id in nwb_sorting.unit_ids)
if nwb_has_ids_as_strings:
- renamed_unit_ids = sorting.get_unit_ids()
- renamed_unit_ids = [str(id) for id in renamed_unit_ids]
+ renamed_unit_ids = [str(id) for id in sorting.get_unit_ids()]
else:
renamed_unit_ids = np.arange(len(sorting.unit_ids))

- # sorting_renamed = sorting.rename_units(new_unit_ids=sorting.unit_ids)  # TODO after 0.100 release use this
- sorting_renamed = sorting.select_units(unit_ids=sorting.unit_ids, renamed_unit_ids=renamed_unit_ids)
+ sorting_renamed = sorting.rename_units(new_unit_ids=renamed_unit_ids)
check_sortings_equal(SX1=sorting_renamed, SX2=nwb_sorting)

def check_interface_set_aligned_segment_timestamps(self):
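Condensed from the updated mixin above, a sketch of the round-trip read it now performs; the file path and series name are illustrative:

```python
from spikeinterface.extractors import NwbRecordingExtractor, NwbSortingExtractor

# Address the electrical series by its full path in the NWB hierarchy and
# pass t_start explicitly, as the updated mixin does.
nwb_recording = NwbRecordingExtractor(
    file_path="written_file.nwb",
    electrical_series_path="acquisition/ElectricalSeriesRaw",  # illustrative name
    use_pynwb=True,
)
nwb_sorting = NwbSortingExtractor(
    file_path="written_file.nwb",
    sampling_frequency=nwb_recording.get_sampling_frequency(),
    t_start=0.0,
)
```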
4 changes: 4 additions & 0 deletions src/neuroconv/tools/testing/mock_interfaces.py
@@ -271,6 +271,10 @@ def __init__(
verbose=verbose,
)

+ # Have the sorting extractor use string unit ids until this is changed in SpikeInterface
+ string_unit_ids = [str(id) for id in self.sorting_extractor.unit_ids]
+ self.sorting_extractor = self.sorting_extractor.rename_units(new_unit_ids=string_unit_ids)
+
def get_metadata(self) -> dict:
metadata = super().get_metadata()
session_start_time = datetime.now().astimezone()