l5kit.evaluation.extract_metrics module

l5kit.evaluation.extract_metrics.compute_metrics_csv(ground_truth_path: str, inference_output_path: str, metrics: List[Callable[[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray], numpy.ndarray]]) → dict

Compute a set of metrics between ground truth and prediction csv files.

Parameters
  • ground_truth_path (str) – Path to the ground truth csv file.

  • inference_output_path (str) – Path to the csv file containing network output.

  • metrics (List[Callable]) – a list of callables to be applied to the elements retrieved from the two csv files.

Returns

a dict whose keys are the metric names and whose values are the average of each metric computed over all elements

Return type

dict
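The aggregation this function performs can be sketched in plain numpy. The metric below (`mean_l2`) and the in-memory `elements` list are hypothetical stand-ins: `compute_metrics_csv` itself retrieves the (ground truth, prediction, confidences, availabilities) arrays from the two csv files, applies each metric to every matched element, and averages the results per metric.

```python
import numpy as np

def mean_l2(gt, pred, confidences, avails):
    # Hypothetical metric with the expected
    # (ground_truth, pred, confidences, avails) -> ndarray signature:
    # mean L2 distance over the available timesteps.
    return np.linalg.norm((gt - pred) * avails[..., None], axis=-1).mean()

# Two fake elements standing in for rows matched between the two csv files:
# trajectories of 50 timesteps in 2D, one mode, all timesteps available.
rng = np.random.default_rng(0)
elements = [
    (rng.normal(size=(50, 2)), rng.normal(size=(50, 2)),
     np.ones(1), np.ones(50))
    for _ in range(2)
]

metrics = [mean_l2]
# keys are the metric names, values the average over all elements
results = {
    metric.__name__: float(np.mean([metric(*el) for el in elements]))
    for metric in metrics
}
print(results)  # e.g. {'mean_l2': <average over the 2 elements>}
```

The real metrics shipped with l5kit (e.g. in l5kit.evaluation.metrics) follow the same four-array signature, which is why any callable matching it can be passed in the `metrics` list.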

l5kit.evaluation.extract_metrics.validate_dicts(ground_truth: dict, predicted: dict) bool

Validate the ground-truth and prediction dictionaries by comparing their keys.

Parameters
  • ground_truth (dict) – mapping from (track_id + timestamp) to an element returned from our csv utils

  • predicted (dict) – mapping from (track_id + timestamp) to an element returned from our csv utils

Returns

True if the two dicts match (i.e. have the same keys)

Return type

bool
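A minimal sketch of the comparison validate_dicts performs, assuming keys are (track_id, timestamp) pairs; the keys and the `same_keys` helper below are hypothetical illustrations, not the library's own code:

```python
# Dicts as returned by the csv utils: one entry per (track_id, timestamp) pair.
ground_truth = {(1, 1000): "gt_row_a", (2, 1000): "gt_row_b"}
predicted = {(1, 1000): "pred_row_a", (2, 1000): "pred_row_b"}

def same_keys(gt: dict, pred: dict) -> bool:
    # True only if both dicts were built from the same
    # (track_id, timestamp) pairs; values are not compared.
    return gt.keys() == pred.keys()

print(same_keys(ground_truth, predicted))  # True
```

This check matters before computing metrics: an element present in only one of the two csv files would otherwise silently skew the averages.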