
EvaluationMetric

This class represents the available evaluation metrics for training GNN models.

| Name | Type | Description | Optional |
|------|------|-------------|----------|
| `name` | `str` | The name of the evaluation metric to use. Supported metrics vary depending on the task; see the table below for all available options. | No |
| `eval_at_k` | `int` | The number of top predictions (k) to consider when computing the evaluation metric; applicable only to link prediction tasks. | Yes |
| Metric Name | Task | Documentation Link |
|-------------|------|--------------------|
| `average_precision` | `binary_classification` | link |
| `accuracy` | `binary_classification` or `multiclass_classification` | link |
| `f1` | `binary_classification` | link |
| `roc_auc` | `binary_classification` | link |
| `precision` | `binary_classification` | link |
| `recall` | `binary_classification` | link |
| `multilabel_auprc_micro` | `multilabel_classification` | link |
| `multilabel_auroc_micro` | `multilabel_classification` | link |
| `multilabel_precision_micro` | `multilabel_classification` | link |
| `multilabel_auprc_macro` | `multilabel_classification` | link |
| `multilabel_auroc_macro` | `multilabel_classification` | link |
| `multilabel_precision_macro` | `multilabel_classification` | link |
| `macro_f1` | `multiclass_classification` | link |
| `micro_f1` | `multiclass_classification` | link |
| `r2` | `regression` | link |
| `mae` | `regression` | link |
| `rmse` | `regression` | link |
| `mape` | `regression` | link |
| `link_prediction_precision` | `link_prediction` or `repeated_link_prediction` | link |
| `link_prediction_recall` | `link_prediction` or `repeated_link_prediction` | link |
| `link_prediction_map` | `link_prediction` or `repeated_link_prediction` | link |

The constructor returns an instance of the `EvaluationMetric` class.

For a binary classification task:

```python
from relationalai_gnns import EvaluationMetric

binary_clf_metric = EvaluationMetric(name="accuracy")
```

For a link prediction task:

```python
from relationalai_gnns import EvaluationMetric

# eval_at_k=12 computes mean average precision over the top 12 predictions.
link_pred_metric = EvaluationMetric(name="link_prediction_map", eval_at_k=12)
```
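To make the `eval_at_k` rule concrete, the sketch below shows the kind of validation the parameter table implies: `eval_at_k` is only meaningful for link prediction metrics. This is a hypothetical, self-contained illustration; the class name `EvaluationMetricSketch` and its validation logic are assumptions, not the actual `relationalai_gnns` implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Metrics that accept eval_at_k, per the table above (link prediction tasks).
LINK_PREDICTION_METRICS = {
    "link_prediction_precision",
    "link_prediction_recall",
    "link_prediction_map",
}


@dataclass
class EvaluationMetricSketch:
    """Hypothetical sketch of the EvaluationMetric constraints; not the real class."""

    name: str
    eval_at_k: Optional[int] = None

    def __post_init__(self):
        # eval_at_k only applies to link prediction tasks, so reject it elsewhere.
        if self.eval_at_k is not None and self.name not in LINK_PREDICTION_METRICS:
            raise ValueError(
                f"eval_at_k only applies to link prediction metrics, got {self.name!r}"
            )


# Accepted: a link prediction metric with a top-k cutoff.
metric = EvaluationMetricSketch(name="link_prediction_map", eval_at_k=10)
```

Passing `eval_at_k` together with a non-link-prediction metric such as `accuracy` would raise a `ValueError` under this sketch.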