# utils

Utils for evaluating model(s) on given data.
## get_reports(config)

Generate reports with the given evaluation config.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `EvalConfig` | Config for evaluation | *required* |

**Returns:**

| Name | Type | Description |
|---|---|---|
| `out` | `Dict[str, dict]` | Model names as keys and report dicts as values |
Source code in conftrainer/evaluation/utils.py
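The returned mapping can be consumed directly. A minimal sketch of that shape; the model names and report keys (`accuracy`, `f1`) below are illustrative assumptions, since the actual report contents depend on the configured metrics:

```python
# Hypothetical example of the Dict[str, dict] returned by get_reports:
# model names as keys, report dicts as values. The metric keys here
# ("accuracy", "f1") are assumptions for illustration.
reports = {
    "resnet50": {"accuracy": 0.91, "f1": 0.89},
    "mobilenet": {"accuracy": 0.87, "f1": 0.84},
}

# Rank models by a chosen metric from their reports.
best_model = max(reports, key=lambda name: reports[name]["accuracy"])
print(best_model)  # resnet50
```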
## get_multibranch_reports(config)

Generate reports with the given evaluation config.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `MultiBranchEvalConfig` | Config for evaluation | *required* |

**Returns:**

| Name | Type | Description |
|---|---|---|
| `out` | `Dict[str, dict]` | Model names as keys and report dicts as values |
Source code in conftrainer/evaluation/utils.py
## evaluate_single_model(model_path, datagen, metrics, batch_size, classes, del_network=False, input_shape=None)

Evaluate a single network and return a report.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `model_path` | `str` | Path to load the network from | *required* |
| `datagen` | `ImageDatagen` | Data to evaluate on | *required* |
| `metrics` | `dict` | Metric names as keys and metric objects as values | *required* |
| `batch_size` | `int` | Size of each batch passed to the model | *required* |
| `classes` | `Union[List[str], Dict[str, List[str]]]` | Class names corresponding to the models. Can be a list of class names or, for multibranch models, a dictionary mapping each branch to its classes. | *required* |
| `del_network` | `bool` | Whether to delete the network after evaluating to release memory | `False` |
| `input_shape` | `List[int]` | Input shape of the network. Used only if the input shape cannot be inferred from the network itself | `None` |

**Returns:**

| Name | Type | Description |
|---|---|---|
| `out` | `dict` | Report |
Source code in conftrainer/evaluation/utils.py
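The single-model evaluation pattern (iterate batches, collect predictions, aggregate named metrics into a report) can be sketched as follows. This is a simplified, self-contained illustration, not conftrainer's actual implementation; the model, batches, and metric callables are stand-ins:

```python
# Simplified sketch of a single-model evaluation loop: run the model
# over batches, then build a report keyed by metric name. All names
# here are stand-ins, not conftrainer APIs.
from typing import Callable, Dict, List, Tuple

def evaluate(model: Callable[[List[float]], List[int]],
             batches: List[Tuple[List[float], List[int]]],
             metrics: Dict[str, Callable[[List[int], List[int]], float]],
             classes: List[str]) -> dict:
    y_true: List[int] = []
    y_pred: List[int] = []
    for inputs, labels in batches:    # iterate over the data generator
        y_pred.extend(model(inputs))  # forward pass on one batch
        y_true.extend(labels)
    # One report entry per metric, plus the class names used.
    report = {name: fn(y_true, y_pred) for name, fn in metrics.items()}
    report["classes"] = classes
    return report

# Toy usage: a "model" that thresholds at 0.5, and an accuracy metric.
model = lambda xs: [int(x > 0.5) for x in xs]
batches = [([0.1, 0.9], [0, 1]), ([0.7, 0.2], [1, 0])]
accuracy = lambda t, p: sum(a == b for a, b in zip(t, p)) / len(t)
report = evaluate(model, batches, {"accuracy": accuracy}, ["neg", "pos"])
print(report["accuracy"])  # 1.0
```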
## evaluate_multiple_models(model_paths, datagen, metrics, batch_size, classes, del_network=False, input_shape=None)

Evaluate a list of models and return a dict of reports. For more information, see `evaluate_single_model`.
Source code in conftrainer/evaluation/utils.py
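The multi-model variant amounts to looping a single-model evaluator over the paths and keying each report by model name. A hedged sketch, where `evaluate_one` is a hypothetical stand-in for `evaluate_single_model`:

```python
# Sketch of the evaluate-multiple pattern: call a per-model evaluator
# for each path and collect reports keyed by model name. evaluate_one
# is a placeholder, not conftrainer's real function.
import os

def evaluate_multiple(model_paths, evaluate_one):
    reports = {}
    for path in model_paths:
        # Derive the report key from the model file name.
        name = os.path.splitext(os.path.basename(path))[0]
        reports[name] = evaluate_one(path)
    return reports

reports = evaluate_multiple(
    ["models/a.h5", "models/b.h5"],
    evaluate_one=lambda path: {"path": path},  # stub evaluator
)
print(sorted(reports))  # ['a', 'b']
```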
## save_report(config)

Generate evaluation reports for several models and save a single report.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `evaluation.config.EvalConfig` | Configuration for evaluation | *required* |
Source code in conftrainer/evaluation/utils.py
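The save step can be sketched as serializing the per-model reports into one file. The JSON layout below is an assumption for illustration, not conftrainer's actual output format:

```python
# Hedged sketch: write the Dict[str, dict] of reports to a single
# JSON file. The file format is an assumption, not conftrainer's.
import json
import os
import tempfile

def save_single_report(reports: dict, out_path: str) -> None:
    with open(out_path, "w") as fh:
        json.dump(reports, fh, indent=2)  # one file, all models

out = os.path.join(tempfile.mkdtemp(), "report.json")
save_single_report({"resnet50": {"accuracy": 0.91}}, out)
with open(out) as fh:
    loaded = json.load(fh)
print(loaded["resnet50"]["accuracy"])  # 0.91
```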
## save_multioutput_report(config)

Generate an evaluation report for one or several multioutput models and save a single report.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `evaluation.config.MultiBranchEvalConfig` | Configuration for evaluation | *required* |