Getting started
In this introductory section, you will learn about the main building blocks of SolePostHoc.jl, along with two important ideas for using post-hoc explanation algorithms. Further on in the documentation, the full potential of SolePostHoc.jl will become apparent: this package's primary purpose is to provide a uniform interface for knowledge extraction algorithms, enabling the comparison of different post-hoc interpretation methods while maintaining a coherent and intuitive user experience.
Fast introduction
Consider a machine learning model trained on a generic dataset. For example, let us consider a Random Forest Classifier learned on the Iris dataset to classify 3 different species of flowers. We are interested in extracting interpretable rules that explain the model's decision process. SolePostHoc.jl offers two primary methods for accomplishing this task.
The first approach is to directly call the specific algorithm function. For example:

```julia
# Extract rules using the LUMEN algorithm directly
extracted_rules = lumen(model, X_test, y_test, args...)
```

The second approach uses the unified interface through rule extractors:
```julia
# Extract rules using the unified interface
extractor = LumenRuleExtractor()
decision_set = extractrules(extractor, model, X_test, y_test, args...)
```

The key advantage of the second approach is that it not only executes the original algorithm (equivalent to calling lumen(...) directly) but also converts the output into a DecisionSet. A DecisionSet is a vector of propositional logical rules in Disjunctive Normal Form (DNF), with one rule per class/label.
Returning to the Iris example above, using SolePostHoc.jl we might extract the following decision set:
Class "Iris-setosa": IF (SepalLengthCm < -0.5) AND (SepalWidthCm < 8.2) THEN predict "Iris-setosa"
Class "Iris-versicolor": IF (SepalLengthCm > 0.5) AND (SepalWidthCm < 3.25) THEN predict "Iris-versicolor"
Class "Iris-virginica": IF (PetalWidthCm > 2.0) THEN predict "Iris-virginica"

Core definitions
The foundation of SolePostHoc.jl lies in providing interpretable explanations for complex machine learning models through rule extraction.
```julia
abstract type RuleExtractor end
```

A RuleExtractor is an abstract type that defines the interface for all post-hoc explanation algorithms. Each concrete implementation represents a specific knowledge extraction method.
A DecisionSet represents the extracted knowledge as a collection of logical rules, where each rule corresponds to a specific class or decision outcome in Disjunctive Normal Form.
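To make this structure concrete, here is a hypothetical sketch of inspecting an extracted DecisionSet. The accessor names `rules`, `antecedent`, and `consequent` are assumptions based on common Sole.jl conventions, not guaranteed API:

```julia
# Hypothetical sketch: inspect the rules of an extracted DecisionSet.
# `decision_set` is assumed to come from a previous `extractrules` call;
# the accessors `rules`, `antecedent`, and `consequent` are assumed names.
for rule in rules(decision_set)
    println("IF ", antecedent(rule), " THEN predict ", consequent(rule))
end
```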
The main entry point for rule extraction is:
```julia
extractrules(extractor::RuleExtractor, model, args...)
```

Algorithm Types
SolePostHoc.jl integrates a wide range of algorithms for knowledge extraction, categorized into three main types:
Surrogate Trees
Algorithms that approximate complex models such as neural networks or random forests with more interpretable decision trees.
```julia
struct REFNERuleExtractor <: RuleExtractor end
struct BATreesRuleExtractor <: RuleExtractor end
struct TREPANRuleExtractor <: RuleExtractor end
```

Knowledge Distillation
Techniques for transferring knowledge from complex models (teacher) to simpler and more transparent ones (student).
```julia
struct RuleCOSIPLUSRuleExtractor <: RuleExtractor end
struct InTreesRuleExtractor <: RuleExtractor end
```

Rule Extraction
Methods for deriving clear and understandable logical rules from any machine learning model.
```julia
struct LUMENRuleExtractor <: RuleExtractor end
```

Direct Algorithm Access
For users who prefer to use algorithms in their original form without the unified interface, SolePostHoc.jl provides direct access to each algorithm:
SolePostHoc.RuleExtraction.intrees — Function
```julia
intrees(model::Union{AbstractModel,DecisionForest}, X, y::AbstractVector{<:Label}; kwargs...)::DecisionList
```

Return a decision list which approximates the behavior of the input model on the specified supervised dataset. The set of relevant and non-redundant rules in the decision list is obtained by means of rule selection, rule pruning, and sequential covering (STEL).
References
- Deng, Houtao. "Interpreting tree ensembles with intrees." International Journal of Data Science and Analytics 7.4 (2019): 277-287.
Keyword Arguments
- prune_rules::Bool=true: whether to prune rules
- pruning_s::Union{Float64,Nothing}=nothing: parameter that limits the denominator in the pruning metric calculation
- pruning_decay_threshold::Union{Float64,Nothing}=nothing: threshold used during pruning to decide whether a conjunct is removed from a rule
- rule_selection_method::Symbol=:CBC: rule selection method (currently only :CBC is supported)
- rule_complexity_metric::Symbol=:natoms: metric used to estimate rule complexity
- max_rules::Int=-1: maximum number of rules in the final decision list (excluding the default rule); use -1 for unlimited rules
- min_coverage::Union{Float64,Nothing}=nothing: minimum rule coverage for STEL
- See extractrules keyword arguments...
Although the method was originally presented for forests, it is hereby extended to work with any symbolic model.
See also AbstractModel, DecisionList, listrules, rulemetrics.
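As a minimal usage sketch (the trained model and the supervised dataset `X`, `y` are assumptions from your own pipeline; only the keyword arguments documented above are used):

```julia
# Hypothetical sketch: extract a compact decision list from a trained forest.
# `model`, `X`, and `y` are assumed to come from a prior training step.
dl = intrees(model, X, y;
    prune_rules = true,   # enable rule pruning
    max_rules   = 10)     # keep at most 10 rules (plus the default rule)
```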
SolePostHoc.RuleExtraction.Lumen.lumen — Function
```julia
lumen(config::LumenConfig, model::SM.AbstractModel) -> SM.DecisionSet
```

Core single-model entry point for the LUMEN algorithm.
Extracts a minimized DecisionSet from model using the parameters encoded in config.
Pipeline
- Build ExtractRulesData from config and model (atom extraction, truth-table enumeration, per-class grouping).
- For each class, call run_minimization on the derived atom vectors.
- Filter out classes for which no formula could be produced.
- Wrap the minimized formulas in SM.Rule objects and return a DecisionSet.
Arguments
- config::LumenConfig: Algorithm configuration (minimization scheme, depth, etc.).
- model::SM.AbstractModel: A single decision-tree model.
Returns
SM.DecisionSet: The minimized rule set.
```julia
lumen(config::LumenConfig, model::Vector{SM.AbstractModel}) -> LumenResult
```

Batch variant: applies lumen(config, m) to every model in the vector and collects the results into a LumenResult.
```julia
lumen(model::SM.AbstractModel, args...; kwargs...) -> SM.DecisionSet
```

Convenience wrapper: constructs a LumenConfig from keyword arguments and delegates to lumen(config, model).
```julia
lumen(model::Vector{SM.AbstractModel}, args...; kwargs...) -> LumenResult
```

Convenience wrapper for a vector of models: constructs a LumenConfig from keyword arguments and maps over the vector.
Examples
```julia
# Single model with default settings
ds = lumen(my_tree)

# Single model with custom minimization scheme
ds = lumen(my_tree; minimization_scheme=:mitespresso, depth=0.8)

# Explicit config object
config = LumenConfig(minimization_scheme=:abc, depth=0.7)
ds = lumen(config, my_tree)

# Batch processing
results = lumen(config, [tree1, tree2, tree3])
```

See also: LumenConfig, LumenResult, ExtractRulesData
SolePostHoc.RuleExtraction.BATrees.batrees — Function
```julia
batrees(f; dataset_name="iris", num_trees=10, max_depth=10, dsOutput=true)
```

Builds and trains a set of binary decision trees using the specified model f.
Arguments
- f: A SoleForest model.
- dataset_name::String: The name of the dataset to be used. Default is "iris".
- num_trees::Int: The number of trees to be built. Default is 10.
- max_depth::Int: The maximum depth of each tree. Default is 10.
- dsOutput::Bool: Whether to return the DecisionSet output. Default is true; if false, the resulting single tree is returned.
Returns
- If dsOutput is true, returns the result as a DecisionSet ds.
- If dsOutput is false, returns the result as a single SoleTree t.
Example
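The original example was not preserved here; the following is a minimal hypothetical sketch (`my_forest` is assumed to be a trained SoleForest from your own pipeline):

```julia
# Hypothetical sketch: distill a trained forest into a DecisionSet via BATrees.
# `my_forest` is an assumption; only the documented keyword arguments are used.
ds = batrees(my_forest; num_trees=10, max_depth=10)
```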
SolePostHoc.RuleExtraction.REFNE.refne — Function
```julia
refne(m, Xmin, Xmax; L=100, perc=1.0, max_depth=-1, n_subfeatures=-1,
      partial_sampling=0.7, min_samples_leaf=5, min_samples_split=2,
      min_purity_increase=0.0, seed=3)
```

Extract interpretable rules from a trained neural network ensemble using decision tree approximation.
This implementation follows the REFNE-a (Rule Extraction From Neural Network Ensemble) algorithm, which approximates complex neural network behavior with an interpretable decision tree model.
Arguments
- m: Trained neural network model to extract rules from
- Xmin: Minimum values for each input feature
- Xmax: Maximum values for each input feature
- L: Number of samples to generate in the synthetic dataset (default: 100)
- perc: Percentage of generated samples to use (default: 1.0)
- max_depth: Maximum depth of the decision tree (default: -1, unlimited)
- n_subfeatures: Number of features to consider at each split (default: -1, all)
- partial_sampling: Fraction of samples used for each tree (default: 0.7)
- min_samples_leaf: Minimum number of samples required at a leaf node (default: 5)
- min_samples_split: Minimum number of samples required to split a node (default: 2)
- min_purity_increase: Minimum purity increase required for a split (default: 0.0)
- seed: Random seed for reproducibility (default: 3)
Returns
- A forest of decision trees representing the extracted rules
Description
The algorithm works by:
- Generating a synthetic dataset spanning the input space
- Using the neural network to label these samples
- Training a decision tree to approximate the neural network's behavior
References
- Zhou, Zhi-Hua, et al. "Extracting Symbolic Rules from Trained Neural Network Ensembles"
Example
```julia
model = load_decision_tree_model()
refne(model, Xmin, Xmax)
```

See also AbstractModel, DecisionList, listrules, rulemetrics.
SolePostHoc.RuleExtraction.TREPAN.trepan — Function
- Mark W. Craven, et al. "Extracting Tree-Structured Representations of Trained Networks"
SolePostHoc.RuleExtraction.RULECOSIPLUS.rulecosiplus — Function
```julia
rulecosiplus(ensemble::Any, X_train::Any, y_train::Any)
```

Extract interpretable rules from decision tree ensembles using the RuleCOSI+ algorithm.
This function implements the RuleCOSI+ methodology for rule extraction from trained ensemble classifiers, producing a simplified and interpretable rule-based model. The method combines and simplifies rules extracted from individual trees in the ensemble to create a more compact and understandable decision list.
Reference
Obregon, J. (2022). RuleCOSI+: Rule extraction for interpreting classification tree ensembles. Information Fusion, 89, 355-381. Available at: https://www.sciencedirect.com/science/article/pii/S1566253522001129
Arguments
- ensemble::Any: A trained ensemble classifier (e.g., Random Forest, Gradient Boosting) that will be serialized and converted to a compatible format for rule extraction.
- X_train::Any: Training feature data. Can be a DataFrame or Matrix. If a DataFrame, column names will be preserved in the extracted rules; otherwise, generic names (V1, V2, ...) will be generated.
- y_train::Any: Training target labels corresponding to X_train. Will be converted to string format for processing.
Returns
DecisionList: A simplified decision list containing the extracted and combined rules from the ensemble, suitable for interpretable classification.
Details
The function performs the following steps:
- Converts input data to appropriate matrix format
- Generates or extracts feature column names
- Serializes the Julia ensemble to a Python-compatible format
- Builds an sklearn-compatible model using the serialized ensemble
- Applies RuleCOSI+ algorithm with the following default parameters:
- metric="fi": Optimization metric for rule combination
- n_estimators=100: Number of estimators considered
- tree_max_depth=100: Maximum depth of trees
- conf_threshold=0.25 (α): Confidence threshold for rule filtering
- cov_threshold=0.1 (β): Coverage threshold for rule filtering
- verbose=2: Detailed output during processing
- Extracts and converts rules to a decision list format
Configuration
The algorithm uses fixed parameters optimized for interpretability:
- Confidence threshold (α) = 0.25: Rules below this confidence are discarded
- Coverage threshold (β) = 0.1: Rules covering fewer samples are excluded
- Maximum rules = max(20, n_classes × 5): Adaptive limit based on problem complexity
Example
```julia
# Assuming you have a trained ensemble and training data
ensemble = ... # your trained ensemble
X_train = ...  # training features
y_train = ...  # training labels

# Extract interpretable rules
decision_list = rulecosiplus(ensemble, X_train, y_train)
```

Notes
- The function prints diagnostic information including the number of trees and dataset statistics
- Raw rules are displayed before conversion to decision list format
- Requires Python interoperability and the RuleCOSI implementation
- The resulting decision list provides an interpretable alternative to the original ensemble
Rule Extraction, Simplification, and Optimization
One of the key features of SolePostHoc.jl is its ability to extract, simplify, and optimize rules while maintaining their expressiveness.
For example, consider this decision forest:
├[1/2]┐ (V3 < 2.45)
│ ├✔ Iris-setosa
│ └✘ (V4 < 1.75)
│ ├✔ (V3 < 4.65)
│ │ ├✔ Iris-versicolor
│ │ └✘ Iris-versicolor
│ └✘ Iris-virginica
└[2/2]┐ (V4 < 0.8)
├✔ Iris-setosa
└✘ (V1 < 5.65)
├✔ (V4 < 1.2)
│ ├✔ Iris-versicolor
│ └✘ Iris-versicolor
└✘ (V3 < 4.75)
├✔ Iris-versicolor
└✘ Iris-virginica

SolePostHoc.jl can leverage logical reasoning to obtain a more succinct and equally expressive theory:
▣
├[1/3] ((V3 ≥ 2.45) ∧ (V4 ≥ 1.75)) ∨ ((V1 ≥ 5.65) ∧ (V3 ≥ 4.75) ∧ (V4 ≥ 0.8)) ↣ Iris-virginica
├[2/3] ((V3 ≥ 2.45) ∧ (V3 < 4.65) ∧ (V4 < 1.75)) ∨ ((V3 ≥ 4.65) ∧ (V3 < 4.75) ∧ (V4 < 1.75)) ∨ ((V1 < 5.65) ∧ (V3 ≥ 4.65) ∧ (V4 < 1.75)) ∨ ((V3 ≥ 2.45) ∧ (V4 < 0.8)) ↣ Iris-versicolor
└[3/3] (V3 < 2.45) ↣ Iris-setosa
Customization and Extension
Users can implement their own rule extraction algorithms by extending the RuleExtractor interface:
```julia
# An ordinary algorithm function, returning output of any generic type
function algorithm(model, args...)
    # ...
    return output
end

struct MyCustomExtractor <: RuleExtractor
    # algorithm-specific parameters
end

function extractrules(extractor::MyCustomExtractor, model, args...)
    # run your algorithm and convert its generic output into a DecisionSet
    # return a DecisionSet
end
```

Integration with Sole.jl Ecosystem
SolePostHoc.jl seamlessly integrates with the broader Sole.jl ecosystem, particularly:
- SoleLogics.jl: For modal logic reasoning and formula manipulation
- SoleData.jl: For handling multivariate time series and relational data structures
- SoleModels.jl: For interpretable model training and symbolic learning