# v0.3.1: Datasets, cache checksum, improvements for text and visualization
## Additions
- Added a dataset module (#949) containing the MNIST, SST-2, SST-5, REUTERS, OHSUMED, FEVER, and GoEmotions datasets
- Added a Ludwig model serve example (#947)
- Added a checksum mechanism for the HDF5 and meta JSON cache files (#1006)
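The idea behind a cache checksum can be illustrated with a minimal sketch: hash a canonical serialization of the dataset's fingerprint and compare it against the value stored alongside the cache, invalidating the cache on mismatch. The function name and hashing scheme below are assumptions for illustration, not Ludwig's actual implementation.

```python
import hashlib
import json


def cache_checksum(metadata: dict) -> str:
    # Serialize deterministically (sorted keys) so the same metadata
    # always produces the same digest, regardless of dict ordering.
    payload = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.md5(payload).hexdigest()


def cache_is_valid(metadata: dict, stored_checksum: str) -> bool:
    # A cached HDF5/meta JSON pair is reused only if the checksum matches.
    return cache_checksum(metadata) == stored_checksum
```

In practice the hashed metadata would include things like the source file's size and modification time, so any change to the raw data forces the cache to be rebuilt.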
## Improvements
- Updated `run_experiment` to use the new skip parameters and return values (#955)
- Several improvements to testing (more coverage, with faster tests)
- Changed the default value of the HF encoder `trainable` parameter to `True` (for performance reasons) (#996)
- Improved and slightly modified the visualization functions API
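Since the HF encoder's `trainable` parameter now defaults to `True`, users who relied on frozen pretrained weights need to opt out explicitly. The fragment below is a hedged sketch of such a config; the feature names are invented for illustration.

```python
# Illustrative Ludwig-style config dict; feature names are hypothetical.
config = {
    "input_features": [
        {
            "name": "review",
            "type": "text",
            "encoder": "bert",
            # trainable now defaults to True; set False to freeze
            # the pretrained encoder weights as before.
            "trainable": False,
        }
    ],
    "output_features": [
        {"name": "label", "type": "category"},
    ],
}
```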
## Bugfixes
- Changed `not` to `is None` in dataset checks in `hyperopt.run.hyperopt()` (#956)
- Fixed `LudwigModel.predict()` when `skip_save_predictions = False` (#962)
- Fixed #963: convert materialized tensors to NumPy arrays up front to avoid repeated conversion
- Fixed errors with DataFrame truth checks in hyperopt (#956)
- Added truncation to the HF tokenizer (#978)
- Reimplemented the Jaccard metric for the set feature (#979)
- Fixed learning rate computation with decay and warmup (#982)
- Fixed CLI logger typos (#998, #999)
- Fixed loading of the split from HDF5 (#1003)
- Fixed visualization unit tests (#981)
- Fixed `concatenate_csv` to work with arbitrary read functions and renamed it to `concatenate_datasets`
- Fixed a compatibility issue with matplotlib 3.3.3
- Limited the numpy and h5py maximum versions to those supported by TensorFlow 2.3.1 (#990)
- Fixed usage of `model_load_path` with Horovod (#1011)
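For context on the warmup-and-decay fix above, a schedule of this kind is typically computed as linear warmup followed by exponential decay. The sketch below is a generic illustration of that pattern under assumed parameter names, not Ludwig's exact code.

```python
def learning_rate(step: int,
                  base_lr: float = 0.001,
                  warmup_steps: int = 100,
                  decay_rate: float = 0.96,
                  decay_steps: int = 1000) -> float:
    # Linear warmup: ramp from base_lr / warmup_steps up to base_lr.
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    # Exponential decay after warmup completes.
    return base_lr * decay_rate ** ((step - warmup_steps) / decay_steps)
```

The subtlety the fix addresses is interaction between the two phases: decay should be applied relative to the post-warmup step count, otherwise the rate is decayed during warmup as well.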