Linear model of tree-based decision rules, based on the RuleFit algorithm from Friedman and Popescu.

The algorithm predicts an output vector y given an input matrix X. First, a tree ensemble is generated with gradient boosting. The trees are then used to form rules: the path to each node in each tree forms one rule. A rule is a binary indicator of whether an observation falls in a given node, and it depends on the input features used in the splits along that path. The ensemble of rules, together with the original input features, is then fed into an L1-regularized linear model (the Lasso), which estimates the effect of each rule on the output target while shrinking many of those effects to zero.
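
A minimal end-to-end sketch of this pipeline on synthetic data (assumes the imodels package is installed; the data and parameter values are illustrative):

import numpy as np
from imodels import RuleFitRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = X[:, 0] + (X[:, 1] > 0.5) + 0.1 * rng.randn(200)

model = RuleFitRegressor(max_rules=10, random_state=0)
model.fit(X, y)
preds = model.predict(X[:5])  # continuous output
print(model.visualize())      # rules with nonzero coefficients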

Expand source code
"""Linear model of tree-based decision rules based on the rulefit algorithm from Friedman and Popescu.

The algorithm can be used for predicting an output vector y given an input matrix X. In the first step a tree ensemble
is generated with gradient boosting. The trees are then used to form rules, where the paths to each node in each tree
form one rule. A rule is a binary decision if an observation is in a given node, which is dependent on the input features
that were used in the splits. The ensemble of rules together with the original input features are then being input in a
L1-regularized linear model, also called Lasso, which estimates the effects of each rule on the output target but at the
same time estimating many of those effects to zero.
"""
from typing import List, Tuple

import numpy as np
import pandas as pd
import scipy
from scipy.special import softmax
from sklearn.base import BaseEstimator, ClassifierMixin, RegressorMixin
from sklearn.base import TransformerMixin
from sklearn.utils.multiclass import unique_labels
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted

from imodels.rule_set.rule_set import RuleSet
from imodels.util.arguments import check_fit_arguments
from imodels.util.extract import extract_rulefit
from imodels.util.rule import get_feature_dict, replace_feature_name, Rule
from imodels.util.score import score_linear
from imodels.util.transforms import Winsorizer, FriedScale


class RuleFit(BaseEstimator, TransformerMixin, RuleSet):
    """Rulefit class. Rather than using this class directly, should use RuleFitRegressor or RuleFitClassifier


    Parameters
    ----------
    n_estimators:   number of trees in the boosted ensemble used to generate candidate rules
    tree_size:      Number of terminal nodes in generated trees. If exp_rand_tree_size=True, 
                    this will be the mean number of terminal nodes.
    sample_fract:   fraction of randomly chosen training observations used to produce each tree. 
                    FP 2004 (Sec. 2)
    max_rules:      maximum total number of terms in the final model (both linear terms and rules).
                    The approximate number of candidate rules generated for fitting is also based
                    on this; the actual number is usually lower due to duplicates.
    memory_par:     scale multiplier (shrinkage factor) applied to each new tree when 
                    sequentially induced. FP 2004 (Sec. 2)
    lin_standardise: If True, the linear terms will be standardised as per Friedman Sec 3.2
                    by multiplying the winsorised variable by 0.4/stdev.
    lin_trim_quantile: If lin_standardise is True, this quantile will be used to trim linear 
                    terms before standardisation.
    exp_rand_tree_size: If True, each boosted tree will have a different maximum number of 
                    terminal nodes based on an exponential distribution about tree_size. 
                    (Friedman Sec 3.3)
    include_linear: Include linear terms as opposed to only rules
    alpha:          Regularization strength; overrides the max_rules parameter
    cv:             Whether to use cross-validation scores to select the final regularization
                    value out of all values that satisfy max_rules. If False, the least
                    regularization possible is used.
    random_state:   Integer to initialise random objects and provide repeatability.
    tree_generator: Optional: this object will be used as provided to generate the rules. 
                    This will override almost all the other properties above. 
                    Must be GradientBoostingRegressor(), GradientBoostingClassifier(), or RandomForestRegressor()

    Attributes
    ----------
    rules_: list of Rule
        The fitted rules, with feature names substituted in

    complexity_: int
        Number of rule terms plus the number of nonzero linear terms

    feature_names: list of strings
        The names of the features (columns), set during fit
    """

    def __init__(self,
                 n_estimators=100,
                 tree_size=4,
                 sample_fract='default',
                 max_rules=30,
                 memory_par=0.01,
                 tree_generator=None,
                 lin_trim_quantile=0.025,
                 lin_standardise=True,
                 exp_rand_tree_size=True,
                 include_linear=True,
                 alpha=None,
                 cv=True,
                 random_state=None):
        self.n_estimators = n_estimators
        self.tree_size = tree_size
        self.sample_fract = sample_fract
        self.max_rules = max_rules
        self.memory_par = memory_par
        self.tree_generator = tree_generator
        self.lin_trim_quantile = lin_trim_quantile
        self.lin_standardise = lin_standardise
        self.exp_rand_tree_size = exp_rand_tree_size
        self.include_linear = include_linear
        self.alpha = alpha
        self.cv = cv
        self.random_state = random_state

        self.winsorizer = Winsorizer(trim_quantile=self.lin_trim_quantile)
        self.friedscale = FriedScale(self.winsorizer)
        self.stddev = None
        self.mean = None

    def fit(self, X, y=None, feature_names=None):
        """Fit and estimate linear combination of rule ensemble

        """
        X, y, feature_names = check_fit_arguments(self, X, y, feature_names)
        if isinstance(self, ClassifierMixin) and len(np.unique(y)) > 2:
            raise ValueError(
                "RuleFit does not yet support multiclass classification")

        self.n_features_ = X.shape[1]
        self.feature_dict_ = get_feature_dict(X.shape[1], feature_names)
        self.feature_placeholders = np.array(list(self.feature_dict_.keys()))
        self.feature_names = np.array(list(self.feature_dict_.values()))

        extracted_rules = self._extract_rules(X, y)
        self.rules_without_feature_names_, self.coef, self.intercept = self._score_rules(
            X, y, extracted_rules)
        self.rules_ = [
            replace_feature_name(rule, self.feature_dict_) for rule in self.rules_without_feature_names_
        ]

        # count total rule terms, plus nonzero linear terms
        self.complexity_ = self._get_complexity()
        if self.include_linear:
            self.complexity_ += np.sum(
                np.array(self.coef[:X.shape[1]]) != 0)

        return self

    def _predict_continuous_output(self, X):
        """Predict outcome of linear model for X
        """
        if isinstance(X, pd.DataFrame):
            X = X.values.astype(np.float32)

        y_pred = np.zeros(X.shape[0])
        y_pred += self._eval_weighted_rule_sum(X)

        if self.include_linear:
            if self.lin_standardise:
                X = self.friedscale.scale(X)
            y_pred += X @ self.coef[:X.shape[1]]
        return y_pred + self.intercept

    def predict(self, X):
        '''Predict. For regression returns continuous output.
        For classification, returns discrete output.
        '''
        check_is_fitted(self)
        if scipy.sparse.issparse(X):
            X = X.toarray()
        X = check_array(X)
        if isinstance(self, RegressorMixin):
            return self._predict_continuous_output(X)
        else:
            class_preds = np.argmax(self.predict_proba(X), axis=1)
            return np.array([self.classes_[i] for i in class_preds])

    def predict_proba(self, X):
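        '''Predict class probabilities as a softmax over the pair
        (1 - f(X), f(X)), where f is the continuous linear-model output.
        '''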
        check_is_fitted(self)
        if scipy.sparse.issparse(X):
            X = X.toarray()
        X = check_array(X)
        continuous_output = self._predict_continuous_output(X)
        logits = np.vstack(
            (1 - continuous_output, continuous_output)).transpose()
        return softmax(logits, axis=1)

    def transform(self, X=None, rules=None):
        """Transform dataset.

        Parameters
        ----------
        X : array-like matrix, shape=(n_samples, n_features)
            Input data to be transformed. Use ``dtype=np.float32`` for maximum
            efficiency.

        rules : list of strings
            Rule expressions to evaluate; each rule is a conjunction of feature
            conditions joined by ' and ', evaluated as a pandas query.

        Returns
        -------
        X_transformed: matrix, shape=(n_samples, n_out)
            Transformed data set
        """
        df = pd.DataFrame(X, columns=self.feature_placeholders)
        X_transformed = np.zeros((X.shape[0], len(rules)))

        for i, r in enumerate(rules):
            features_r_uses = list(
                set(term.split(' ')[0] for term in r.split(' and ')))
            X_transformed[df[features_r_uses].query(r).index.values, i] = 1
        return X_transformed

    def _get_rules(self, exclude_zero_coef=False, subregion=None):
        """Return the estimated rules

        Parameters
        ----------
        exclude_zero_coef: If True, returns only the rules with an estimated
                           coefficient not equal to zero.

        subregion: If None (default) returns global importances (FP 2004 eq. 28/29), else returns importance over 
                           subregion of inputs (FP 2004 eq. 30/31/32).

        Returns
        -------
        rules: pandas.DataFrame with the rules. Column 'rule' describes the rule, 'coef' holds
               the coefficients and 'support' the support of the rule in the training
               data set (X)
        """
        n_features = len(self.coef) - len(self.rules_)
        rule_ensemble = list(self.rules_without_feature_names_)
        output_rules = []
        # Add coefficients for linear effects
        for i in range(0, n_features):
            if self.lin_standardise:
                coef = self.coef[i] * self.friedscale.scale_multipliers[i]
            else:
                coef = self.coef[i]
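            # global importance of a linear term (FP 2004): |coef| * std of the winsorized feature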
            if subregion is None:
                importance = abs(coef) * self.stddev[i]
            else:
                subregion = np.array(subregion)
                trimmed_col = np.array(
                    [x[i] for x in self.winsorizer.trim(subregion)])
                importance = np.sum(
                    abs(coef) * np.abs(trimmed_col - self.mean[i])) / len(subregion)
            output_rules += [(self.feature_names[i],
                              'linear', coef, 1, importance)]

        # Add rules
        for i in range(0, len(self.rules_)):
            rule = rule_ensemble[i]
            coef = self.coef[i + n_features]
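            # global rule importance (FP 2004 eq. 28/29): |coef| * sqrt(s(1 - s)),
            # where s is the rule's support on the training data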

            if subregion is None:
                importance = abs(coef) * (rule.support *
                                          (1 - rule.support)) ** (1 / 2)
            else:
                rkx = self.transform(subregion, [rule])[:, -1]
                importance = sum(
                    abs(coef) * abs(rkx - rule.support)) / len(subregion)

            output_rules += [(self.rules_[i].rule, 'rule',
                              coef, rule.support, importance)]
        rules = pd.DataFrame(output_rules, columns=[
                             "rule", "type", "coef", "support", "importance"])
        if exclude_zero_coef:
            rules = rules.loc[rules.coef != 0]
        return rules

    def visualize(self, decimals=2):
        rules = self._get_rules()
        rules = rules[rules.coef != 0].sort_values("support", ascending=False)
        pd.set_option('display.max_colwidth', None)
        return rules[['rule', 'coef']].round(decimals)

    def __str__(self):
        if not hasattr(self, 'coef'):
            s = self.__class__.__name__
            s += "("
            s += "max_rules="
            s += repr(self.max_rules)
            s += ")"
            return s
        else:
            s = '> ------------------------------\n'
            s += '> RuleFit:\n'
            s += '> \tPredictions are made by summing the coefficients of each rule\n'
            s += '> ------------------------------\n'
            return s + self.visualize().to_string(index=False) + '\n'

    def _extract_rules(self, X, y) -> List[str]:
        return extract_rulefit(X, y,
                               feature_names=self.feature_placeholders,
                               n_estimators=self.n_estimators,
                               tree_size=self.tree_size,
                               memory_par=self.memory_par,
                               tree_generator=self.tree_generator,
                               exp_rand_tree_size=self.exp_rand_tree_size,
                               random_state=self.random_state)

    def _score_rules(self, X, y, rules) -> Tuple[List[Rule], List[float], float]:
        X_concat = np.zeros([X.shape[0], 0])

        # standardise linear variables if requested
        if self.include_linear:

            # standard deviation and mean of winsorized features
            self.winsorizer.train(X)
            winsorized_X = self.winsorizer.trim(X)
            self.stddev = np.std(winsorized_X, axis=0)
            self.mean = np.mean(winsorized_X, axis=0)

            if self.lin_standardise:
                self.friedscale.train(X)
                X_regn = self.friedscale.scale(X)
            else:
                X_regn = X.copy()
            X_concat = np.concatenate((X_concat, X_regn), axis=1)

        X_rules = self.transform(X, rules)
        if X_rules.shape[1] > 0:
            X_concat = np.concatenate((X_concat, X_rules), axis=1)

        # no rules fit and self.include_linear == False
        if X_concat.shape[1] == 0:
            return [], [], 0
        prediction_task = 'regression' if isinstance(
            self, RegressorMixin) else 'classification'
        return score_linear(X_concat, y, rules,
                            prediction_task=prediction_task,
                            max_rules=self.max_rules,
                            alpha=self.alpha,
                            cv=self.cv,
                            random_state=self.random_state)


class RuleFitRegressor(RuleFit, RegressorMixin):
    ...


class RuleFitClassifier(RuleFit, ClassifierMixin):
    ...

Classes

class RuleFit (n_estimators=100, tree_size=4, sample_fract='default', max_rules=30, memory_par=0.01, tree_generator=None, lin_trim_quantile=0.025, lin_standardise=True, exp_rand_tree_size=True, include_linear=True, alpha=None, cv=True, random_state=None)

RuleFit class. Rather than using this class directly, use RuleFitRegressor or RuleFitClassifier.

Parameters

n_estimators :  number of trees in the boosted ensemble used to generate candidate rules.
tree_size :  Number of terminal nodes in generated trees. If exp_rand_tree_size=True,
this will be the mean number of terminal nodes.
sample_fract :  fraction of randomly chosen training observations used to produce each tree.
FP 2004 (Sec. 2)
max_rules :  maximum total number of terms in the final model (both linear terms and rules).
The approximate number of candidate rules generated for fitting is also based on this; the actual number is usually lower due to duplicates.
memory_par :  scale multiplier (shrinkage factor) applied to each new tree when
sequentially induced. FP 2004 (Sec. 2)
lin_standardise : If True, the linear terms will be standardised as per Friedman Sec 3.2
by multiplying the winsorised variable by 0.4/stdev.
lin_trim_quantile : If lin_standardise is True, this quantile will be used to trim linear
terms before standardisation.
exp_rand_tree_size : If True, each boosted tree will have a different maximum number of
terminal nodes based on an exponential distribution about tree_size. (Friedman Sec 3.3)
include_linear : Include linear terms as opposed to only rules.
alpha :  Regularization strength; overrides the max_rules parameter.
cv : Whether to use cross-validation scores to select the final regularization value
out of all values that satisfy max_rules. If False, the least regularization possible is used.
random_state : Integer to initialise random objects and provide repeatability.
tree_generator : Optional: this object will be used as provided to generate the rules.
This will override almost all the other properties above. Must be GradientBoostingRegressor(), GradientBoostingClassifier(), or RandomForestRegressor()

Attributes

rules_ : list of Rule
The fitted rules, with feature names substituted in
complexity_ : int
Number of rule terms plus the number of nonzero linear terms
feature_names : list of strings
The names of the features (columns), set during fit
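
A construction sketch using the tree_generator override described above (the estimator settings are illustrative):

from sklearn.ensemble import GradientBoostingRegressor
from imodels import RuleFitRegressor

gb = GradientBoostingRegressor(n_estimators=50, max_depth=3, random_state=0)
model = RuleFitRegressor(tree_generator=gb, max_rules=20, random_state=0)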

Ancestors

  • sklearn.base.BaseEstimator
  • sklearn.utils._estimator_html_repr._HTMLDocumentationLinkMixin
  • sklearn.utils._metadata_requests._MetadataRequester
  • sklearn.base.TransformerMixin
  • sklearn.utils._set_output._SetOutputMixin
  • RuleSet

Subclasses

  • RuleFitRegressor
  • RuleFitClassifier

Methods

def fit(self, X, y=None, feature_names=None)

Fit and estimate linear combination of rule ensemble
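
A fit sketch showing feature_names, so the extracted rules are human-readable (the names and data here are hypothetical):

import numpy as np
from imodels import RuleFitRegressor

rng = np.random.RandomState(0)
X, y = rng.rand(100, 4), rng.rand(100)

model = RuleFitRegressor(max_rules=10, random_state=0)
model.fit(X, y, feature_names=['age', 'bmi', 'bp', 'chol'])
print(model.complexity_)  # rule terms plus nonzero linear terms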

def predict(self, X)

Predict. For regression returns continuous output. For classification, returns discrete output.

def predict_proba(self, X)
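
Predict class probabilities. The probabilities are a softmax over the pair (1 - f(X), f(X)), where f is the continuous linear-model output; a standalone sketch of that mapping:

import numpy as np
from scipy.special import softmax

f = np.array([0.1, 0.9])           # continuous outputs for two samples
logits = np.vstack((1 - f, f)).T   # shape (n_samples, 2)
proba = softmax(logits, axis=1)    # each row sums to 1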
def set_fit_request(self: RuleFit, *, feature_names: Union[bool, None, str] = '$UNCHANGED$') -> RuleFit

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on metadata routing for how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version: 1.3

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a sklearn.pipeline.Pipeline. Otherwise it has no effect.

Parameters

feature_names : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for feature_names parameter in fit.

Returns

self : object
The updated object.
def set_transform_request(self: RuleFit, *, rules: Union[bool, None, str] = '$UNCHANGED$') -> RuleFit

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on metadata routing for how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version: 1.3

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a sklearn.pipeline.Pipeline. Otherwise it has no effect.

Parameters

rules : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for rules parameter in transform.

Returns

self : object
The updated object.
def transform(self, X=None, rules=None)

Transform dataset.

Parameters

X : array-like matrix, shape=(n_samples, n_features)
Input data to be transformed. Use dtype=np.float32 for maximum efficiency.

rules : list of strings
Rule expressions to evaluate; each rule is a conjunction of feature conditions joined by ' and ', evaluated as a pandas query.

Returns

X_transformed : matrix, shape=(n_samples, n_out)
Transformed data set
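
A standalone sketch of how such rule strings are evaluated (the column names and rules here are hypothetical):

import numpy as np
import pandas as pd

X = np.array([[0.2, 3.0],
              [0.8, 1.0]])
rules = ['X_0 <= 0.5 and X_1 > 2.0', 'X_0 > 0.5']
df = pd.DataFrame(X, columns=['X_0', 'X_1'])

X_rules = np.zeros((X.shape[0], len(rules)))
for i, r in enumerate(rules):
    X_rules[df.query(r).index.values, i] = 1
# X_rules -> [[1., 0.], [0., 1.]]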
def visualize(self, decimals=2)
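
Return the rules with nonzero coefficients as a two-column DataFrame ('rule', 'coef'), sorted by support in descending order. A usage sketch, assuming a fitted model from the examples above:

print(model.visualize(decimals=3))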
class RuleFitClassifier (n_estimators=100, tree_size=4, sample_fract='default', max_rules=30, memory_par=0.01, tree_generator=None, lin_trim_quantile=0.025, lin_standardise=True, exp_rand_tree_size=True, include_linear=True, alpha=None, cv=True, random_state=None)

RuleFit for binary classification. A trivial subclass of RuleFit; see RuleFit above for parameters and attributes.

Ancestors

  • RuleFit
  • sklearn.base.BaseEstimator
  • sklearn.utils._estimator_html_repr._HTMLDocumentationLinkMixin
  • sklearn.utils._metadata_requests._MetadataRequester
  • sklearn.base.TransformerMixin
  • sklearn.utils._set_output._SetOutputMixin
  • RuleSet
  • sklearn.base.ClassifierMixin

Methods

def set_score_request(self: RuleFitClassifier, *, sample_weight: Union[bool, None, str] = '$UNCHANGED$') -> RuleFitClassifier

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on metadata routing for how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version: 1.3

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a sklearn.pipeline.Pipeline. Otherwise it has no effect.

Parameters

sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.

Returns

self : object
The updated object.


class RuleFitRegressor (n_estimators=100, tree_size=4, sample_fract='default', max_rules=30, memory_par=0.01, tree_generator=None, lin_trim_quantile=0.025, lin_standardise=True, exp_rand_tree_size=True, include_linear=True, alpha=None, cv=True, random_state=None)

RuleFit for regression. A trivial subclass of RuleFit; see RuleFit above for parameters and attributes.

Ancestors

  • RuleFit
  • sklearn.base.BaseEstimator
  • sklearn.utils._estimator_html_repr._HTMLDocumentationLinkMixin
  • sklearn.utils._metadata_requests._MetadataRequester
  • sklearn.base.TransformerMixin
  • sklearn.utils._set_output._SetOutputMixin
  • RuleSet
  • sklearn.base.RegressorMixin

Methods

def set_score_request(self: RuleFitRegressor, *, sample_weight: Union[bool, None, str] = '$UNCHANGED$') -> RuleFitRegressor

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on metadata routing for how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version: 1.3

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a sklearn.pipeline.Pipeline. Otherwise it has no effect.

Parameters

sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.

Returns

self : object
The updated object.
