Configuring the Forecasting Algorithm

Specifies the name of the algorithm used for training. Identifying the algorithm by name makes training runs easier to document and reproduce, and lets you trace model performance back to the algorithmic choices made during setup.

Defines the type of algorithm being implemented, categorized as either 'regression' or 'classification'. This distinction is crucial because it determines the kind of prediction the model performs: a continuous value (regression) or a categorical label (classification).
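As an illustration, the name and type settings described above might sit together in a configuration like the following minimal sketch; the key names `algorithm_name` and `algorithm_type` are hypothetical, since the exact schema is not shown here.

```python
# Hypothetical configuration snippet; the key names are illustrative,
# not taken from a documented schema.
algorithm_config = {
    "algorithm_name": "XGBoost",     # identifies the algorithm for documentation and replication
    "algorithm_type": "regression",  # 'regression' (continuous target) or 'classification' (categorical labels)
}
```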

Outlines the collection of hyperparameters specific to the chosen algorithm. These parameters control the algorithm's behavior and performance, such as the learning rate, the number of trees in tree-based models, or regularization terms. Proper tuning of these parameters is essential for the model's accuracy and efficiency.
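A minimal sketch of such a hyperparameter block, assuming a dictionary-style configuration; the parameter names shown are common gradient-boosting options, but the valid set depends on the algorithm selected above.

```python
# Hypothetical hyperparameter block for a tree-based algorithm; the
# available keys depend on the algorithm chosen above.
parameters = {
    "learning_rate": 0.05,  # step-size shrinkage applied to each boosting update
    "n_estimators": 300,    # number of trees in the ensemble
    "reg_lambda": 1.0,      # L2 regularization on leaf weights
}
```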

This configuration is only available with the XGBoost algorithm. It tailors the fitting process, combining the advantages of a global model with those of a local model. Typical settings include options for handling imbalanced data, specifying the number of boosting rounds, and other XGBoost-specific choices that influence how the model learns from the data.
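As a sketch of what two of the mentioned settings could correspond to, the snippet below expresses them through xgboost's scikit-learn wrapper; this wiring is an assumption for illustration, not the platform's documented mechanism, and the global/local combination itself is handled internally.

```python
# Minimal sketch, assuming the XGBoost-specific options are forwarded
# to xgboost's scikit-learn wrapper (an assumption for illustration).
from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=200,      # number of boosting rounds
    scale_pos_weight=4.0,  # upweights the positive class on imbalanced data
)
# model.fit(X_train, y_train)  # X_train and y_train are placeholders
```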

Specifies algorithm parameters that override the global settings for specific subgroups within the dataset, where subgroups are identified by the values in discriminant columns. By setting conditional parameters, users can tailor the algorithm's behavior to the distinct characteristics of different data segments, which can significantly improve model performance under specific conditions or scenarios.
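A sketch of how such conditional overrides might be represented and resolved; the schema (keys like `discriminant` and `overrides`) and the `resolve_parameters` helper are hypothetical, since the actual configuration format is not shown here.

```python
# Hypothetical representation of conditional parameter overrides.
conditional_parameters = [
    {
        # Rows whose 'region' discriminant column equals 'EU' train with
        # a smaller learning rate than the global setting.
        "discriminant": {"region": "EU"},
        "overrides": {"learning_rate": 0.01},
    },
]

def resolve_parameters(global_params, conditionals, row):
    """Merge global parameters with every override whose discriminant
    values all match the given row (a dict of column -> value)."""
    params = dict(global_params)
    for rule in conditionals:
        if all(row.get(col) == val for col, val in rule["discriminant"].items()):
            params.update(rule["overrides"])
    return params

# Example: a row from the 'EU' segment resolves to learning_rate=0.01.
resolved = resolve_parameters(
    {"learning_rate": 0.05}, conditional_parameters, {"region": "EU"}
)
```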
