ConvexHullADChecker#
- class skfp.applicability_domain.ConvexHullADChecker(n_jobs: int | None = None, verbose: int | dict = 0)#
Convex hull method.
Defines applicability domain based on the convex hull spanned by the training data. New molecules should lie inside this space.
The problem is solved with a linear programming formulation [1]. It reduces to the question of whether a new point can be expressed as a convex combination of the training set points. Formally, for a training set of vectors \(X = \{x_1, x_2, ..., x_n\}\) and a query point \(q\), we check whether the following problem has any solution:
variables \(\lambda_i\) for \(i=1,...,n\)
we only check whether a feasible solution exists, setting the coefficients \(c = 0\) (an all-zeros vector of length \(n\))
convex combination conditions:
\(q = \lambda_1 x_1 + ... + \lambda_n x_n\)
\(\lambda_1 + ... + \lambda_n = 1\)
\(\lambda_i \geq 0\) for all \(i=1,...,n\)
linear programming formulation:
\[\begin{split}\min_\lambda \ & c^T \lambda \\ \mbox{such that} \ & X \lambda = q,\\ & 1^T \lambda = 1,\\ & \lambda_i \geq 0 \text{ for all } i=1,...,n\end{split}\]
Typically, physicochemical properties (continuous features) are used as inputs. Consider scaling, normalizing, or transforming them before computing AD to lessen the effect of outliers, e.g. with PowerTransformer or RobustScaler.
This method scales very badly with both the number of samples and the number of features. It has quadratic scaling \(O(n^2)\) in the number of samples and can realistically be run on at most 1000-3000 molecules. Its geometry also breaks down above ~10 features, marking everything as outside the AD.
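The feasibility check described above can be sketched directly with SciPy's `linprog`. This is an illustrative standalone version of the convex-combination test, not the library's internal implementation; the helper name `in_convex_hull` is chosen for this example:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X_train: np.ndarray, q: np.ndarray) -> bool:
    """Check if q is a convex combination of the rows of X_train."""
    n = X_train.shape[0]
    c = np.zeros(n)  # all-zeros objective: we only test feasibility
    # Equality constraints: X^T lambda = q, stacked with 1^T lambda = 1
    A_eq = np.vstack([X_train.T, np.ones(n)])
    b_eq = np.concatenate([q, [1.0]])
    # bounds=(0, None) enforces lambda_i >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0  # status 0 means a feasible (optimal) point was found

# Same points as the doctest example below
X_train = np.array([[0.0, 1.0], [0.0, 3.0], [3.0, 1.0]])
print(in_convex_hull(X_train, np.array([1.0, 1.0])))  # True (inside hull)
print(in_convex_hull(X_train, np.array([2.0, 3.0])))  # False (outside hull)
```

Each query point requires solving one LP over \(n\) variables, which is the reason for the poor scaling in the number of training samples noted above.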
- Parameters:
n_jobs (int, default=None) – The number of jobs to run in parallel. transform_x_y() and transform() are parallelized over the input molecules. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See scikit-learn documentation on n_jobs for more details.
verbose (int or dict, default=0) – Controls the verbosity when filtering molecules. If a dictionary is passed, it is treated as kwargs for tqdm(), and can be used to control the progress bar.
References
Examples
>>> import numpy as np
>>> from skfp.applicability_domain import ConvexHullADChecker
>>> X_train = np.array([[0.0, 1.0], [0.0, 3.0], [3.0, 1.0]])
>>> X_test = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
>>> cvx_hull_ad_checker = ConvexHullADChecker()
>>> cvx_hull_ad_checker
ConvexHullADChecker()
>>> cvx_hull_ad_checker.fit(X_train)
ConvexHullADChecker()
>>> cvx_hull_ad_checker.predict(X_test)
array([ True,  True, False])
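As noted above, scaling features before computing the AD can lessen the effect of outliers. The following sketch uses scikit-learn's RobustScaler on the same toy arrays; the scaled matrices would then be passed to fit() and predict() in place of the raw ones:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X_train = np.array([[0.0, 1.0], [0.0, 3.0], [3.0, 1.0]])
X_test = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 3.0]])

# Fit the scaler on the training data only, then apply it to both sets,
# so that no information from the test set leaks into the scaling.
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# RobustScaler centers each feature on its median and scales by the IQR,
# so the per-feature medians of the training data become zero.
print(np.median(X_train_scaled, axis=0))
```

PowerTransformer can be substituted in the same way when features are heavily skewed.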
Methods
fit(X[, y]) – Fit applicability domain estimator.
fit_predict(X[, y]) – Perform fit on X and return labels for X.
get_metadata_routing() – Get metadata routing of this object.
get_params([deep]) – Get parameters for this estimator.
predict(X) – Predict labels (1 inside AD, 0 outside AD) of X according to fitted model.
score_samples(X) – Calculate the applicability domain score of samples.
set_params(**params) – Set the parameters of this estimator.
- fit(X: ndarray, y: ndarray | None = None)#
Fit applicability domain estimator.
- Parameters:
X (array-like of shape (n_samples, n_features)) – The input samples.
y (any) – Unused, kept for scikit-learn compatibility.
- Returns:
self – Fitted estimator.
- Return type:
object
- fit_predict(X, y=None, **kwargs)#
Perform fit on X and return labels for X.
Returns -1 for outliers and 1 for inliers.
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
**kwargs (dict) – Arguments to be passed to fit.
Added in version 1.4.
- Returns:
y – 1 for inliers, -1 for outliers.
- Return type:
ndarray of shape (n_samples,)
- get_metadata_routing()#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
routing – A MetadataRequest encapsulating routing information.
- Return type:
MetadataRequest
- get_params(deep=True)#
Get parameters for this estimator.
- Parameters:
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
params – Parameter names mapped to their values.
- Return type:
dict
- predict(X: ndarray) ndarray #
Predict labels (1 inside AD, 0 outside AD) of X according to fitted model.
- Parameters:
X (array-like of shape (n_samples, n_features)) – The data matrix.
- Returns:
is_inside_applicability_domain – Returns 1 for molecules inside applicability domain, and 0 for those outside (outliers).
- Return type:
ndarray of shape (n_samples,)
- score_samples(X: ndarray) ndarray #
Calculate the applicability domain score of samples. It is simply a 0/1 decision equal to .predict().
- Parameters:
X (array-like of shape (n_samples, n_features)) – The data matrix.
- Returns:
scores – Applicability domain scores of samples.
- Return type:
ndarray of shape (n_samples,)
- set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
**params (dict) – Estimator parameters.
- Returns:
self – Estimator instance.
- Return type:
estimator instance
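The <component>__<parameter> form can be illustrated with a generic scikit-learn Pipeline (a sketch of the standard set_params/get_params convention, not specific to ConvexHullADChecker; the step names "scaler" and "clf" are chosen for this example):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

pipe = Pipeline([("scaler", RobustScaler()), ("clf", LogisticRegression())])

# Nested parameters are addressed as <component>__<parameter>
pipe.set_params(clf__C=0.5, scaler__with_centering=False)

# get_params(deep=True) exposes the nested parameters under the same keys
print(pipe.get_params()["clf__C"])
print(pipe.get_params()["scaler__with_centering"])
```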