As shown in the Jupyter notebook, I have run `r.run("split")`, and I assume the data processing started and returned the new DataFrame that I wanted to use in training. However, when I run `r.run("train")`, I get the following error: `mlflow.exceptions.MlflowException: Error has occurred during training of AutoML model using FLAML: AssertionError('Input data must not be empty.')`.
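For context, the notebook calls look roughly like this (a minimal sketch, following the `r = Recipe(profile="local")` pattern noted in the recipe.yaml comments):

```python
# Minimal sketch of the notebook calls (assumed from the description above).
from mlflow.recipes import Recipe

r = Recipe(profile="local")
r.run("split")  # runs ingest + split, including post_split_filter_method
r.run("train")  # raises the MlflowException quoted above
```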
Below you can see my YAML files.

local.yaml
```yaml
# FIXME::REQUIRED: Set an MLflow experiment name to track recipe executions and artifacts.
experiment:
  name: "test_food"
  tracking_uri: "sqlite:///metadata/mlflow/mlruns.db"
  artifact_location: "./metadata/mlflow/mlartifacts"

model_registry:
  # FIXME::OPTIONAL: Set the registry server URI. This property is especially useful if you
  # have a registry server that's different from the tracking server.
  # uri: "sqlite:///metadata/mlflow/registry.db"
  # FIXME::REQUIRED: Specifies the name of the Registered Model to use when registering a
  # trained model to the MLflow Model Registry.
  model_name: "random-forest"

INGEST_CONFIG:
  # FIXME::REQUIRED: Specify the format of the training and evaluation dataset. Natively
  # supported formats are: parquet, spark_sql, delta.
  using: "csv"
  # FIXME::OPTIONAL: Specify the training and evaluation data location.
  location: "./data/data.csv"
  loader_method: "load_file_as_dataframe"

# INGEST_SCORING_CONFIG:
#   For different options please read:
#   https://github.com/mlflow/recipes-classification-template#batch-scoring
#   FIXME::OPTIONAL: Specify the format of the scoring dataset. Natively supported formats
#   are: parquet, spark_sql, delta.
#   using: ""
#   FIXME::OPTIONAL: Specify the scoring data location.
#   location: ""

# PREDICT_OUTPUT_CONFIG:
#   For different options please read:
#   https://github.com/mlflow/recipes-classification-template#predict-step
#   FIXME::OPTIONAL: Specify the format of the scored dataset. Natively supported formats
#   are: parquet, delta, table.
#   using: ""
#   FIXME::OPTIONAL: Specify the output location of the batch scoring predict step.
#   location: ""
```
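Note that `using: "csv"` relies on the custom `loader_method` above, which lives in `steps/ingest.py` and is not shown in the issue. A minimal sketch of what it typically looks like, assuming the template's default loader signature:

```python
# Hypothetical steps/ingest.py -- not included in the issue; the signature
# follows the MLflow Recipes template convention for custom loaders.
import pandas as pd
from pandas import DataFrame


def load_file_as_dataframe(file_path: str, file_format: str) -> DataFrame:
    """Load the dataset at INGEST_CONFIG.location into a pandas DataFrame."""
    if file_format == "csv":
        return pd.read_csv(file_path)
    raise NotImplementedError(f"Unsupported file format: {file_format}")
```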
recipe.yaml
```yaml
# `recipe.yaml` is the main configuration file for an MLflow Recipe.
# Required recipe parameters should be defined in this file with either concrete values or
# variables such as {{ INGEST_DATA_LOCATION }}.
#
# Variables must be dereferenced in a profile YAML file, located under `profiles/`.
# See `profiles/local.yaml` for example usage. One may switch among profiles quickly by
# providing a profile name such as `local` in the Recipe object constructor:
# `r = Recipe(profile="local")`
#
# NOTE: All "FIXME::REQUIRED" fields in recipe.yaml and profiles/*.yaml must be set correctly
# to adapt this template to a specific classification problem. To find all required fields,
# under the root directory of this recipe, type on a unix-like command line:
# $> grep "# FIXME::REQUIRED:" recipe.yaml profiles/*.yaml
#
# NOTE: YAML does not support tabs for indentation. Please use spaces and ensure that all
# YAML files are properly formatted.

recipe: "classification/v1"
# FIXME::REQUIRED: Specifies the target column name for model training and evaluation.
target_col: "target"
# FIXME::REQUIRED: Specifies the value of `target_col` that is considered the positive class.
positive_class: "1"
# FIXME::REQUIRED: Sets the primary metric to use to evaluate model performance. This primary
# metric is used to select best performing models in the MLflow UI as well as in the train
# and evaluate steps. Built-in primary metrics are: recall_score, precision_score, f1_score,
# accuracy_score.
primary_metric: "f1_score"

steps:
  # Specifies the dataset to use for model development
  ingest: {{INGEST_CONFIG}}
  split:
    using: split_ratios
    # FIXME::OPTIONAL: Adjust the train/validation/test split ratios below.
    split_ratios: [0.75, 0.125, 0.125]
    # FIXME::OPTIONAL: Specifies the method to use to "post-process" the split datasets.
    # Note that arbitrary transformations should go into the transform step.
    post_split_filter_method: create_dataset_filter
  transform:
    using: "custom"
    # FIXME::OPTIONAL: Specifies the method that defines an sklearn-compatible transformer,
    # which applies input feature transformation during model training and inference.
    transformer_method: transformer_fn
  train:
    # FIXME::REQUIRED: Specifies the method to use for training. Options are "automl/flaml"
    # for AutoML training or "custom" for user-defined estimators.
    using: "automl/flaml"
    time_budget_secs: 3000
    predict_scores_for_all_classes: True
    predict_prefix: "predicted_"
  evaluate:
    # FIXME::OPTIONAL: Sets performance thresholds that a trained model must meet in order
    # to be eligible for registration to the MLflow Model Registry.
    validation_criteria:
      - metric: f1_score
        threshold: 0.9
  register:
    # Indicates whether or not a model that fails to meet performance thresholds should
    # still be registered to the MLflow Model Registry.
    allow_non_validated_model: false

  # FIXME::OPTIONAL: Specify the dataset to use for batch scoring. All params serve the
  # same function as in `data`.
  # ingest_scoring: {{INGEST_SCORING_CONFIG}}
  # predict:
  #   output: {{PREDICT_OUTPUT_CONFIG}}
  #   model_uri: "models/model.pkl"
  #   result_type: "double"
  #   save_mode: "default"

# custom_metrics:
#   FIXME::OPTIONAL: Defines custom performance metrics to compute during model development.
#   - name: ""
#     function: get_custom_metrics
#     greater_is_better: False
```
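The transform step above references `transformer_fn` in `steps/transform.py`, which is not shown in the issue. In the template it returns an unfitted sklearn-compatible transformer; a pass-through placeholder would look roughly like this (my sketch, not the issue author's code):

```python
# Hypothetical steps/transform.py -- not included in the issue. A pass-through
# transformer is shown here only as a placeholder.
from sklearn.preprocessing import FunctionTransformer


def transformer_fn():
    """Return an unfitted sklearn-compatible transformer."""
    return FunctionTransformer()  # identity transform by default
```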
split.py
"""This module defines the following routines used by the 'split' step:- ``create_dataset_filter``: Defines customizable logic for filtering the training, datasets produced by the data splitting procedure. Note that arbitrary transformations should go into the transform step."""frompandasimportDataFrame, Seriesimportpandasaspdimportnumpyasnpimportastfromsklearn.preprocessingimportLabelEncoderfromtqdmimporttqdmdefcreate_dataset_filter(dataset: DataFrame) ->Series:
""" Mark rows of the split datasets to be additionally filtered. This function will be called on the training datasets. :param dataset: The {train,validation,test} dataset produced by the data splitting procedure. :return: A Series indicating whether each row should be filtered """# Step 1: Process the datasetprocessed_data=start_preprocessing(dataset)
# Step 2: Check for NA values and log a warning if foundprint(processed_data.isna().any())
ifprocessed_data.empty:
print("Warning: Processed data is empty.")
returnSeries(False, index=dataset.index) # Return False for all rows if processed data is empty# Step 3: Create a filtering Series based on your conditions# Example: Keep rows that are not null in a specific column (e.g., 'target')filter_condition=processed_data['target'].notna() # Adjust this based on your target column or filtering criteria# Optional: Log the number of rows being filteredprint(f"Filtered rows: {filter_condition.sum()} out of {len(dataset)}")
returnfilter_conditiondeffill_null_values_with_average_values(df: pd.DataFrame) ->pd.DataFrame:
""" This method identifies null values in specific nutritional columns and fills them with the average of their respective categories. """# Specify the columns to check for null valuescolumns_to_fix_nulls= [
'nutritional_saturated_fat_100g',
'nutritional_carbohydrates_100g',
'nutritional_fat_100g',
'nutritional_sugars_100g',
'nutritional_proteins_100g',
'nutritional_fiber_100g',
'nutritional_energy_100g',
'nutritional_salt_100g'
]
forcolintqdm(columns_to_fix_nulls):
category_means=df.groupby('category')[col].mean().fillna(0)
df[col] =df.apply(
lambdarow: category_means[row['category']] ifpd.isnull(row[col]) elserow[col],
axis=1
)
returndfdefextract_number_of_ingredients_from_string(datum) ->int:
returnlen(ast.literal_eval(datum))
defconvert_string_to_list_size(df: DataFrame) ->DataFrame:
convert=lambdax: extract_number_of_ingredients_from_string(x)
df['ingredients_ordered'] =df['ingredients_ordered'].apply(convert)
returndfdefencode_category(df: DataFrame) ->DataFrame:
le=LabelEncoder()
df['category'] =le.fit_transform(df['category'])
returndfdefstart_preprocessing(df: DataFrame) ->DataFrame:
df_no_null=fill_null_values_with_average_values(df)
df_ingridients_list=convert_string_to_list_size(df_no_null)
df_encoded=encode_category(df_ingridients_list)
returndf_encoded.drop(columns=['id', 'category', 'is_liquid', 'nutritional_saturated_fat_100g', 'nutritional_fat_100g', 'nutritional_fiber_100g', 'nutritional_salt_100g'])
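Since `post_split_filter_method` determines which rows reach training, one way to narrow the problem down is to run the filter by hand on the ingested data and count the surviving rows. A minimal sketch, assuming the template's `steps/` package layout and MLflow's documented artifact names:

```python
# Sketch: exercise the split-step filter outside the recipe to see how many
# rows survive it. "ingested_data" is the artifact name used by MLflow Recipes;
# the steps.split import assumes the template's steps/ package layout.
from mlflow.recipes import Recipe
from steps.split import create_dataset_filter

r = Recipe(profile="local")
r.run("ingest")
raw = r.get_artifact("ingested_data")
mask = create_dataset_filter(raw.copy())  # copy: start_preprocessing mutates its input
print(f"{mask.sum()} of {len(raw)} rows would be kept")
```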
train.py
"""This module defines the following routines used by the 'train' step:- ``estimator_fn``: Defines the customizable estimator type and parameters that are used during training to produce a model recipe."""fromtypingimportDict, Anyfromsklearn.ensembleimportRandomForestClassifierdefestimator_fn(estimator_params: Dict[str, Any] =None) ->Any:
""" Returns an *unfitted* estimator that defines ``fit()`` and ``predict()`` methods. The estimator's input and output signatures should be compatible with scikit-learn estimators. """## FIXME::OPTIONAL: return a scikit-learn-compatible classification estimator with fine-tuned# hyperparameters.ifestimator_paramsisNone:
estimator_params= {
'n_estimators': 100,
'max_depth': None,
'class_weight': 'balanced',
'random_state': 42,
}
returnRandomForestClassifier(**estimator_params)
Additionally, when I checked my dataset with `r.get_artifact("training_data").isnull().any()`, I see there are no null values; that check (plus a row-count check) is sketched below. Can anyone help me with this case?
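A minimal sketch of those checks (the calls are assumed from the description above):

```python
# Sketch of the artifact checks described above (assumed notebook calls);
# "training_data" is the dataset artifact produced by the split step.
train_df = r.get_artifact("training_data")
print(train_df.isnull().any())  # reports no null values
print(train_df.shape)           # the row count is also worth checking, given
                                # the "Input data must not be empty" assertion
```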