Pipelines

Forecast Pipelines

When launching a Forecast Pipeline within a recipe, the following parameters must be provided to ensure the correct execution of the predictive model:

  • pipeline_name: Identifies the particular pipeline configuration to be used. Mandatory.

  • train_set: Specifies the dataset for model training. Mandatory.

  • predict_set: Indicates the dataset on which predictions will be made. Mandatory.

Example

launch_forecast_pipeline:
  type: forecast_pipeline
  params:
    pipeline_name: sales_forecast
    train_set: train_dataset
    predict_set: test_dataset

Optimization Pipelines

Initiating an Optimization Pipeline requires identifying the dataset that contains the optimization scenarios:

  • pipeline_name: The unique identifier for the optimization configuration. Mandatory.

  • optimization_set: The dataset containing the various options or scenarios for optimization. Mandatory.

Example

launch_optimization_pipeline:
  type: optimization_pipeline
  params:
    pipeline_name: Fleet_Optimization
    optimization_set: fleet_planning_system

Extract Pipeline Results

Different steps within the pipelines can be extracted as datasets using the import_from_pipeline method, providing flexibility in the output that you wish to analyze further. Learn more on extracting pipeline results.
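As a sketch of how such a recipe step might look, following the same structure as the forecast and optimization examples above: the step name, the pipeline_name value, and the params shown here are illustrative assumptions, and the exact parameter schema is documented on the page linked above.

Example

extract_results:
  type: import_from_pipeline
  params:
    pipeline_name: sales_forecast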