Generate constraints from a Dataset

Overview

Verteego can dynamically create multiple constraints directly from the data contained in a dataset. This functionality is instrumental in building scalable, adaptable optimization models that respond to varying data conditions.

Implementation

To generate constraints from a dataset automatically, specify the dataset to use as input and identify the columns that supply the constraint parameters. In Verteego, this is done by referencing the relevant column names prefixed with the '@' symbol, which indicates that, for each row, the value should be read from that column rather than taken as a literal.

Constraint Creation Process

For every row in the specified input dataset, Verteego examines the column values and generates a constraint from them. The resulting constraints reflect the conditions and relationships present in the dataset, keeping the optimization model aligned with real-world scenarios and data-specific requirements.
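
To make this expansion concrete, here is a minimal Python sketch of the process, assuming a hypothetical two-row dataset; the dataframe contents and the dictionary layout are illustrative only and do not represent Verteego's internal constraint representation:

import pandas as pd

# Hypothetical contents of 'my_dataset': each row describes one binary constraint.
my_dataset = pd.DataFrame({
    "left_column":  ["stock_store_1", "stock_store_2"],
    "operator":     ["less", "greater"],
    "right_column": ["capacity_store_1", "min_stock_store_2"],
})

# One constraint is generated per row, mirroring the '@'-prefixed column lookups.
constraints = [
    {
        "type": "binary",
        "left": row["left_column"],
        "operator": row["operator"],
        "right": row["right_column"],
        "delta": 0,
    }
    for _, row in my_dataset.iterrows()
]

for constraint in constraints:
    print(constraint)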

Example

Below is a YAML configuration snippet illustrating how to set up the system for generating binary constraints:

binary_constraint_generator:
  input_dataset: my_dataset
  constraint_type: binary
  left_column: "@left_column"    # Column in 'my_dataset' defining the left-hand side of the constraint
  operator: "@operator"          # Column in 'my_dataset' indicating the constraint operator (e.g., 'less', 'greater', 'equal')
  right_column: "@right_column"  # Column in 'my_dataset' defining the right-hand side of the constraint
  delta: 0                       # Numerical offset for the constraint (optional, defaults to 0)

With the provided configuration:

  • input_dataset: Identifies 'my_dataset' as the source from which constraints will be generated.

  • constraint_type: Specifies that the constraints being generated are of the 'binary' type.

  • left_column, operator, right_column: These fields are dynamically filled for each constraint based on the values found in the corresponding columns in 'my_dataset', signified by the '@' prefix.

  • delta: Sets a numerical offset for the constraints, here fixed at 0, meaning no additional value is added to or subtracted from the constraint equation.
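
For instance, if 'my_dataset' contained the two rows used in the sketch above, this configuration would yield two binary constraints: stock_store_1 less capacity_store_1 and stock_store_2 greater min_stock_store_2, each with a delta of 0.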
