Fingerprint types#

scikit-fingerprints implements many different molecular fingerprints. We can roughly divide them into three types: descriptors, substructure fingerprints, and hashed fingerprints.

Of course, not all fingerprints fit neatly into this categorization, but it gives you a rough idea of what is available. The following sections describe each type in more detail.

Most fingerprints operate on 2D, “flat” molecular graphs, i.e. on topological information. This is typically fast and suffices for most tasks. There are also 3D (conformational) fingerprints, which calculate values based on the spatial structure of a molecule, known as a conformer. While they add more information, computing conformers is expensive, hard, and can even fail for some molecules. They are described in a separate tutorial; here we focus only on topological fingerprints.

By default, fingerprints are returned as dense NumPy arrays, which explicitly store all values, including zeros. However, many fingerprints are very sparse, i.e. the overwhelming majority of their values are zeros. Storing those zeros explicitly is compatible with almost everything in the Python ML ecosystem, but wastes memory. scikit-fingerprints can instead compute SciPy sparse arrays when sparse=True is passed to the class constructor.
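For instance, a minimal sketch of requesting sparse output (shown here with the ECFP fingerprint, covered later in this tutorial):

from skfp.fingerprints import ECFPFingerprint

fp = ECFPFingerprint(sparse=True)
X_sparse = fp.transform(["CCO", "c1ccccc1"])  # SMILES strings work as input, as do RDKit Mol objects
print(type(X_sparse))  # a SciPy sparse array
X_dense = X_sparse.toarray()  # convert back to a dense NumPy array when needed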

Dataset#

We will use the well-known beta-secretase 1 (BACE) dataset, where we predict whether a drug inhibits the beta-secretase 1 enzyme, suspected to influence the development of Alzheimer’s disease. It is part of the popular MoleculeNet benchmark.

[1]:
from skfp.datasets.moleculenet import load_bace
from skfp.preprocessing import MolFromSmilesTransformer


smiles_list, y = load_bace()

mol_from_smiles = MolFromSmilesTransformer()
mols = mol_from_smiles.transform(smiles_list)

Descriptors#

Descriptors are sets of physicochemical properties of a molecule, e.g. the number of heavy atoms (non-hydrogens), the number of rings, estimated solubility, distributions of inter-atomic distances, and more. Values are typically floating point numbers or counts of simple topological (graph structure) features. They are often very interpretable, as each feature has a definite chemical meaning. Examples include Mordred and VSA.
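To get a feel for individual descriptors, here is a quick sketch computing a few of them with plain RDKit, independently of scikit-fingerprints:

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin

print(mol.GetNumHeavyAtoms())              # non-hydrogen atoms: 13
print(rdMolDescriptors.CalcNumRings(mol))  # rings: 1
print(Descriptors.MolWt(mol))              # molecular weight: ~180.16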

Pros:

  • well-correlated with many global properties of a molecule

  • typically good performance for regression

  • interpretable

Cons:

  • typically require feature selection

  • may have missing values

  • typically don’t benefit from sparse arrays

  • some are very slow to compute

Let’s compute two descriptor fingerprints: Mordred and RDKit2DDescriptorsFingerprint. Mordred is a set of descriptors proposed in the Mordred software publication, and the RDKit2DDescriptorsFingerprint is simply the collection of all topological descriptors available in RDKit.

We will set a few options: n_jobs=-1, batch_size=1, verbose=1. Fingerprints can be computed for all molecules independently, so parallelism across multiple cores is very efficient. n_jobs=-1 uses all available cores, say N, and by default divides the dataset into N equal-sized batches. batch_size gives us more fine-grained control, and combined with verbose=1 it displays a nice progress bar, letting us track progress molecule by molecule.

Parallelism is very beneficial for descriptors, as there are typically a lot of them, e.g. Mordred has 1613 features, and they have to be computed one after another for each molecule. Even with multiple cores, this will take a minute or more.

[2]:
from skfp.fingerprints import MordredFingerprint, RDKit2DDescriptorsFingerprint

fp_mordred = MordredFingerprint(n_jobs=-1, batch_size=1, verbose=1)
fp_rdkit_2d = RDKit2DDescriptorsFingerprint(n_jobs=-1, batch_size=1, verbose=1)

X_mordred = fp_mordred.transform(mols)
X_rdkit_2d = fp_rdkit_2d.transform(mols)

print(f"Mordred shape: {X_mordred.shape}")
print(f"Mordred example values: {X_mordred[0, :10]}")
print()
print(f"RDKit 2D descriptors shape: {X_rdkit_2d.shape}")
print(f"RDKit 2D descriptors example values: {X_rdkit_2d[0, :10]}")
Mordred shape: (1513, 1613)
Mordred example values: [25.301933  18.76322    0.         0.        40.36188    2.4430382
  4.8860765 40.36188    1.2613088  4.398988 ]

RDKit 2D descriptors shape: (1513, 200)
RDKit 2D descriptors example values: [   1.5215017 1138.9497      22.880104    19.44299     19.44299
   15.215147    11.412099    11.412099     9.666773     9.666773 ]

Some descriptor fingerprints also have built-in feature names, so we can inspect them and use them, e.g. with explainable AI approaches like SHAP. To understand the exact meaning of those features, see the source publications listed in the documentation.

[3]:
import pandas as pd

print(f"Mordred feature names: {fp_mordred.get_feature_names_out()}")
print()

df_mordred = pd.DataFrame(X_mordred, columns=fp_mordred.get_feature_names_out())
df_mordred
Mordred feature names: ['ABC' 'ABCGG' 'nAcid' ... 'Zagreb2' 'mZagreb1' 'mZagreb2']

[3]:
ABC ABCGG nAcid nBase SpAbs_A SpMax_A SpDiam_A SpAD_A SpMAD_A LogEE_A ... SRW10 TSRW10 MW AMW WPath WPol Zagreb1 Zagreb2 mZagreb1 mZagreb2
0 25.301933 18.763220 0.0 0.0 40.361881 2.443038 4.886076 40.361881 1.261309 4.398988 ... 10.445841 68.262878 431.257263 6.634727 3233.0 52.0 172.0 201.0 10.923611 6.805555
1 36.076038 29.331556 0.0 1.0 58.299816 2.512416 4.903850 58.299816 1.240422 4.759111 ... 10.673619 100.694229 657.382202 6.707982 8159.0 72.0 240.0 278.0 17.118055 10.430555
2 32.828228 24.672882 0.0 1.0 54.320873 2.567860 5.012524 54.320873 1.293354 4.667789 ... 10.692922 94.122177 591.263550 7.299550 6374.0 74.0 224.0 266.0 13.756945 9.236111
3 31.013023 24.251154 0.0 1.0 47.993851 2.443327 4.886654 47.993851 1.199846 4.599177 ... 10.655257 77.270859 591.251038 7.484190 5853.0 64.0 210.0 241.0 17.048611 8.444445
4 34.657589 26.046909 0.0 1.0 55.686096 2.567861 5.012539 55.686096 1.265593 4.715560 ... 10.787399 96.491135 629.240417 7.865505 7300.0 78.0 238.0 282.0 15.569445 9.402778
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1508 19.258291 15.801585 0.0 0.0 32.258286 2.484048 4.819666 32.258286 1.290331 4.141597 ... 10.035612 73.840050 364.166595 7.283332 1619.0 37.0 128.0 149.0 8.138889 5.611111
1509 19.258291 15.801585 0.0 0.0 32.258286 2.484048 4.819666 32.258286 1.290331 4.141597 ... 10.035612 73.840050 357.135651 7.936347 1619.0 37.0 128.0 149.0 8.138889 5.611111
1510 15.084601 13.327415 0.0 0.0 24.393719 2.519188 4.856795 24.393719 1.283880 3.924382 ... 10.022292 72.515038 319.032013 9.667637 730.0 29.0 104.0 125.0 6.638889 4.055555
1511 19.177412 15.661138 0.0 0.0 32.058102 2.524323 4.872796 32.058102 1.335754 4.155212 ... 10.236919 78.661003 317.152802 7.375647 1429.0 38.0 132.0 159.0 7.000000 5.166667
1512 16.470304 13.438284 0.0 0.0 26.983883 2.453935 4.742945 26.983883 1.284947 4.005924 ... 9.863030 74.256088 306.124725 7.653119 1074.0 28.0 110.0 128.0 6.527778 4.583333

1513 rows × 1613 columns
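The feature names also plug directly into explainability tools. A minimal sketch with SHAP (this assumes the optional shap package is installed; it is not a scikit-fingerprints dependency):

from sklearn.ensemble import RandomForestClassifier

import shap

# Mordred may contain missing values, so fill them before training
X = df_mordred.fillna(0.0)

clf = RandomForestClassifier(random_state=0, n_jobs=-1)
clf.fit(X, y)

# attribute predictions to named descriptor features
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X.iloc[:100])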

Substructure fingerprints#

Substructure fingerprints check for the existence of selected substructures (subgraphs, patterns) in a molecule, such as functional groups, rings of a given size, or counts of atoms of a particular element. They are often hand-crafted and selected by domain experts, reflecting which parts of a molecule are typically interesting, e.g. to medicinal chemists. Substructures are typically described using SMARTS patterns, which can be thought of as a kind of “regular expressions” for molecular structures. Examples include the MACCS fingerprint and the PubChem fingerprint.

Like descriptors, they have a constant length, but they result in integer-valued vectors. We can distinguish binary and count variants of those fingerprints. Binary ones only check for the existence of a given pattern, often formulated as if/else conditions, e.g. “number of oxygens >= 4” or “is there a ring of size 6?”. Count variants instead use the number of occurrences of a substructure, e.g. “number of oxygens” or “number of rings of size 6”. Due to this difference, the count version may have fewer features. Most of the count variants are novel propositions of scikit-fingerprints.
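To make this concrete, here is a small RDKit sketch contrasting the binary check with the count variant, using a SMARTS pattern matching any oxygen atom:

from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
oxygen = Chem.MolFromSmarts("[#8]")  # SMARTS: any oxygen atom

print(mol.HasSubstructMatch(oxygen))         # binary: True, oxygen is present
print(len(mol.GetSubstructMatches(oxygen)))  # count: 4 oxygen atoms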

Pros:

  • typically good substructure searching performance

  • quite interpretable (if you understand SMARTS patterns)

  • some are very sparse, and benefit a lot from sparse arrays

Cons:

  • don’t generalize well outside the chemical space they’ve been designed for

  • longer ones are quite slow

We’ll compute MACCS fingerprint and PubChem fingerprint, in both binary and count versions. SMARTS pattern matching can be quite slow, so those fingerprints often benefit from parallelism, similarly to descriptors.

[4]:
from skfp.fingerprints import MACCSFingerprint, PubChemFingerprint


fp_maccs = MACCSFingerprint(n_jobs=-1)
fp_maccs_count = MACCSFingerprint(count=True, n_jobs=-1)

fp_pubchem = PubChemFingerprint(n_jobs=-1)
fp_pubchem_count = PubChemFingerprint(count=True, n_jobs=-1)

X_maccs = fp_maccs.transform(mols)
X_maccs_count = fp_maccs_count.transform(mols)

X_pubchem = fp_pubchem.transform(mols)
X_pubchem_count = fp_pubchem_count.transform(mols)
[5]:
print("Binary MACCS:")
print(f"Shape: {X_maccs.shape}")
print(f"Example values: {X_maccs[0, -10:]}")
print()
print("Count MACCS:")
print(f"Shape: {X_maccs_count.shape}")
print(f"Example values: {X_maccs_count[0, -10:]}")
print()
print("Binary PubChem:")
print(f"Shape: {X_pubchem.shape}")
print(f"Example values: {X_pubchem[0, :10]}")
print()
print("Count PubChem:")
print(f"Shape: {X_pubchem_count.shape}")
print(f"Example values: {X_pubchem_count[0, :10]}")
print()
Binary MACCS:
Shape: (1513, 166)
Example values: [1 1 1 1 1 1 1 1 1 0]

Count MACCS:
Shape: (1513, 159)
Example values: [2 1 2 2 0 2 3 1 1 4]

Binary PubChem:
Shape: (1513, 881)
Example values: [1 1 1 0 0 0 0 0 0 1]

Count PubChem:
Shape: (1513, 757)
Example values: [31  0  0 27  3  2  0  0  0  0]

Hashed fingerprints#

Hashed fingerprints extract all subgraphs of a general shape, such as shortest paths between pairs of atoms (linear subgraphs) or neighborhoods of bonded atoms (circular subgraphs). An integer identifier is computed for each extracted subgraph. Then we use a hashing function (hence the name), which translates the identifier into an index in the final vector, where we record that the subgraph was detected. Examples include the ECFP fingerprint and the Atom Pair fingerprint.

First, each atom gets assigned a numerical identifier, called an atom invariant. It combines a few basic properties, e.g. atomic number, charge, and atomic mass, into a single integer value (typically with an XOR function). This allows us to distinguish, for example, carbons appearing in different contexts.
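A toy illustration of this idea (this is not RDKit’s exact invariant scheme, just the general mechanism of packing atom properties into one integer):

from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)O")  # acetic acid
for atom in mol.GetAtoms():
    # combine a few atomic properties into a single identifier
    invariant = hash(
        (atom.GetAtomicNum(), atom.GetFormalCharge(), atom.GetTotalNumHs(), atom.GetDegree())
    )
    print(atom.GetSymbol(), invariant)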

Then, subgraphs are extracted, with their shape depending on the fingerprint. For example, ECFP uses circular neighborhoods: an atom with its neighbors (bonded atoms), then also their neighbors (radius 2 neighborhood), and so on up to a given radius (by default 2). The identifiers of atoms and bonds in the subgraph are combined into a single identifier for the whole substructure.
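In scikit-fingerprints, the radius is a constructor parameter, as in this quick sketch (fingerprint names traditionally use the diameter, so ECFP4 corresponds to radius 2):

from skfp.fingerprints import ECFPFingerprint

fp_ecfp4 = ECFPFingerprint(radius=2)  # the default; diameter-4 neighborhoods
fp_ecfp6 = ECFPFingerprint(radius=3)  # larger, diameter-6 neighborhoods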

The output vector starts filled with zeros. It has a given length (also called the “number of bits”), which is a common hyperparameter of all hashed fingerprints. Subgraph identifiers are hashed into it, i.e. translated into vector indices, e.g. with a modulo function. Hashing collisions may occur, where two distinct substructures get the same index, but this is typically not a big problem. The binary variant ignores such collisions (it just marks 1 at a given position), while the count variant sums up all occurrences at each index.
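A toy sketch of this folding step (real implementations use proper hash functions, but modulo conveys the mechanism):

import numpy as np

fp_size = 16  # the "number of bits"
subgraph_ids = [908745, 13210, 908745, 4462]  # hypothetical subgraph identifiers

fp_binary = np.zeros(fp_size, dtype=np.uint8)
fp_count = np.zeros(fp_size, dtype=np.uint32)

for identifier in subgraph_ids:
    idx = identifier % fp_size  # fold identifier into a vector index
    fp_binary[idx] = 1          # binary: presence only, collisions ignored
    fp_count[idx] += 1          # count: occurrences are summed

print(fp_binary)
print(fp_count)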

Pros:

  • very flexible

  • typically good performance for classification

  • fast to compute

  • very sparse, and benefit a lot from sparse arrays

Cons:

  • not interpretable

Let’s check out the ECFP fingerprint and the Atom Pair fingerprint. We will create default versions with length 2048, and short ones with length 1024. These fingerprints are often so fast to compute that they don’t benefit from parallelism at all.

[6]:
from skfp.fingerprints import ECFPFingerprint, AtomPairFingerprint


fp_ecfp = ECFPFingerprint()
fp_ecfp_short = ECFPFingerprint(fp_size=1024)

fp_ap = AtomPairFingerprint()
fp_ap_short = AtomPairFingerprint(fp_size=1024)

X_ecfp = fp_ecfp.transform(mols)
X_ecfp_short = fp_ecfp_short.transform(mols)

X_ap = fp_ap.transform(mols)
X_ap_short = fp_ap_short.transform(mols)
[7]:
print("ECFP:")
print(f"Shape: {X_ecfp.shape}")
print(f"Example values: {X_ecfp[0, :25]}")
print()
print("Short ECFP:")
print(f"Shape: {X_ecfp_short.shape}")
print(f"Example values: {X_ecfp_short[0, :25]}")
print()
print("Atom Pair:")
print(f"Shape: {X_ap.shape}")
print(f"Example values: {X_ap[0, :25]}")
print()
print("Short Atom Pair:")
print(f"Shape: {X_ap_short.shape}")
print(f"Example values: {X_ap_short[0, :25]}")
print()
ECFP:
Shape: (1513, 2048)
Example values: [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Short ECFP:
Shape: (1513, 1024)
Example values: [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Atom Pair:
Shape: (1513, 2048)
Example values: [1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1]

Short Atom Pair:
Shape: (1513, 1024)
Example values: [1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 0 0 0 1]

Comparison#

Let’s compare the efficiency of those fingerprints: compute time, memory requirements, and downstream classifier performance. Of course, this is a comparison on a single dataset, but we’ll already be able to see some general trends.

For compute time, we’ll use parallel versions for descriptors and substructure fingerprints, since this is the typically recommended approach. For typically sized datasets, hashed fingerprints are so fast that spawning and managing processes costs more than any gain from multiple cores. Reported time will be the average of 3 runs. Running this takes a few minutes.

[8]:
from time import time

import numpy as np
from rdkit.Chem import Mol


def get_fp_compute_time(fp_obj, mols: list[Mol]) -> float:
    times = []
    for _ in range(3):
        start = time()
        fp_obj.transform(mols)
        end = time()
        times.append(end - start)

    return np.mean(times)


for fp_name, fp_obj in [
    ("Mordred", MordredFingerprint(n_jobs=-1)),
    ("RDKit 2D descriptors", RDKit2DDescriptorsFingerprint(n_jobs=-1)),
    ("MACCS", MACCSFingerprint(n_jobs=-1)),
    ("PubChem", PubChemFingerprint(n_jobs=-1)),
    ("ECFP", ECFPFingerprint()),
    ("Atom Pair", AtomPairFingerprint()),
]:
    fp_time = get_fp_compute_time(fp_obj, mols)
    print(f"{fp_name}: {fp_time:.3f}s")

Mordred: 65.202s
RDKit 2D descriptors: 3.350s
MACCS: 0.278s
PubChem: 3.699s
ECFP: 0.123s
Atom Pair: 0.221s

For measuring memory, we’ll check the size of the dense NumPy array, as well as the sparse SciPy array, in KB. Then we’ll be able to compare how much those fingerprints benefit from sparsity. Since computing sparse arrays has basically no impact on time, we can simply compute sparse arrays and then convert them to dense ones.

[31]:
def get_fp_memory_sizes(fp_obj, mols: list[Mol]) -> tuple[float, float]:
    X_sparse = fp_obj.transform(mols)
    X_dense = X_sparse.toarray()

    # convert bytes -> KB; for the sparse array we count only the stored
    # values (index arrays add a bit of overhead on top of this)
    memory_dense = X_dense.nbytes // 1024
    memory_sparse = X_sparse.data.nbytes // 1024

    return memory_dense, memory_sparse


for fp_name, fp_obj in [
    ("Mordred", MordredFingerprint(sparse=True, n_jobs=-1)),
    ("RDKit 2D descriptors", RDKit2DDescriptorsFingerprint(sparse=True, n_jobs=-1)),
    ("MACCS", MACCSFingerprint(sparse=True, n_jobs=-1)),
    ("PubChem", PubChemFingerprint(sparse=True, n_jobs=-1)),
    ("ECFP", ECFPFingerprint(sparse=True)),
    ("Atom Pair", AtomPairFingerprint(sparse=True)),
]:
    memory_dense, memory_sparse = get_fp_memory_sizes(fp_obj, mols)
    reduction = memory_dense / memory_sparse
    print(f"{fp_name}: dense {memory_dense} B, sparse {memory_sparse} B, {reduction:.1f}x reduction")

Mordred: dense 9533 KB, sparse 7788 KB, 1.2x reduction
RDKit 2D descriptors: dense 1182 KB, sparse 634 KB, 1.9x reduction
MACCS: dense 245 KB, sparse 90 KB, 2.7x reduction
PubChem: dense 1301 KB, sparse 250 KB, 5.2x reduction
ECFP: dense 3026 KB, sparse 89 KB, 34.0x reduction
Atom Pair: dense 3026 KB, sparse 596 KB, 5.1x reduction

For evaluating the efficiency of downstream ML models, we’ll train a Random Forest classifier and check its AUROC. Both this model and this metric are commonly used in chemoinformatics. For train-test splitting, we use scaffold split, which typically gives a better estimate of generalization than the overly optimistic random split.

[32]:
from skfp.model_selection import scaffold_train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score


def get_rf_auroc(
    fp_obj,
    mols_train: list[Mol],
    mols_test: list[Mol],
    y_train: np.ndarray,
    y_test: np.ndarray
) -> float:
    X_train = fp_obj.transform(mols_train)
    X_test = fp_obj.transform(mols_test)

    clf = RandomForestClassifier(n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)

    y_pred = clf.predict_proba(X_test)[:, 1]
    auroc = roc_auc_score(y_test, y_pred)

    return auroc


mols_train, mols_test, y_train, y_test = scaffold_train_test_split(mols, y, test_size=0.2)

for fp_name, fp_obj in [
    ("Mordred", MordredFingerprint(n_jobs=-1)),
    ("RDKit 2D descriptors", RDKit2DDescriptorsFingerprint(n_jobs=-1)),
    ("MACCS", MACCSFingerprint(n_jobs=-1)),
    ("PubChem", PubChemFingerprint(n_jobs=-1)),
    ("ECFP", ECFPFingerprint()),
    ("Atom Pair", AtomPairFingerprint()),
]:
    auroc = get_rf_auroc(fp_obj, mols_train, mols_test, y_train, y_test)
    print(f"{fp_name} AUROC {auroc:.2%}")

Mordred AUROC 74.48%
RDKit 2D descriptors AUROC 74.43%
MACCS AUROC 77.01%
PubChem AUROC 76.05%
ECFP AUROC 78.25%
Atom Pair AUROC 77.60%