l5kit.cle.closed_loop_evaluator module

class l5kit.cle.closed_loop_evaluator.ClosedLoopEvaluator(evaluation_plan: l5kit.cle.closed_loop_evaluator.EvaluationPlan)

Bases: object

The closed-loop evaluator executes an evaluation plan and keeps track of histograms, failed scenes, etc.

Parameters

evaluation_plan – the specified evaluation plan
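
A minimal usage sketch of the typical flow. The metric, validator, and threshold choices below are illustrative, and sim_outs stands in for the sequence of SimulationOutputCLE objects produced by the closed-loop simulator:

    from l5kit.cle.closed_loop_evaluator import ClosedLoopEvaluator, EvaluationPlan
    from l5kit.cle.metrics import DisplacementErrorL2Metric
    from l5kit.cle.validators import RangeValidator

    # A minimal plan: one metric and one validator built on top of it.
    plan = EvaluationPlan(
        metrics=[DisplacementErrorL2Metric()],
        validators=[RangeValidator("displacement_error_l2", DisplacementErrorL2Metric,
                                   max_value=30)],
        intervention_validators=["displacement_error_l2"],
    )
    evaluator = ClosedLoopEvaluator(plan)

    # sim_outs: Sequence[SimulationOutputCLE] from unrolling the scenes under test.
    evaluator.evaluate(sim_outs)
    validation = evaluator.validation_results()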

composite_metric_results() → Dict[int, Dict[str, float]]

Return the computed composite metric results.

Returns

a dictionary, indexed by scene ID, mapping composite metric names to their results.

evaluate(simulation_outputs: Sequence[l5kit.simulation.unroll.SimulationOutputCLE]) → None

Executes the evaluation plan on all outputs from the simulator.

Parameters

simulation_outputs – the outputs from the simulator

metric_results() → Dict[int, Dict[str, torch.Tensor]]

Return the computed metric results.

Returns

a dictionary, indexed by scene ID, mapping metric names to their results.
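
For example, the per-scene tensors can be reduced to scalar summaries. This sketch assumes the plan above, where "displacement_error_l2" is the name under which DisplacementErrorL2Metric is registered:

    # Average displacement error per scene.
    for scene_id, metrics in evaluator.metric_results().items():
        ade = metrics["displacement_error_l2"].mean().item()
        print(f"scene {scene_id}: ADE = {ade:.2f} m")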

reset() → None

Resets the computed stats.

scene_composite_metric_results: Dict[int, Dict[str, float]]

Results from the composite metrics indexed by the scene id

scene_metric_results: Dict[int, Dict[str, torch.Tensor]]

Results of the metrics indexed by the scene id

scene_validation_results: Dict[int, Dict[str, l5kit.cle.validators.ValidatorOutput]]

Results from the validators indexed by the scene id

validation_results() → Dict[int, Dict[str, l5kit.cle.validators.ValidatorOutput]]

Return the computed validator results.

Returns

a dictionary, indexed by scene ID, mapping validator names to their results.
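
A common use is flagging scenes where any validator failed. This sketch assumes ValidatorOutput exposes an is_valid_scene flag, as defined in l5kit.cle.validators:

    # A scene passes only if every validator reported it as valid.
    failed_scenes = [
        scene_id
        for scene_id, outputs in evaluator.validation_results().items()
        if not all(out.is_valid_scene for out in outputs.values())
    ]
    print(f"{len(failed_scenes)} scene(s) failed validation")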

class l5kit.cle.closed_loop_evaluator.EvaluationPlan(metrics: Iterable[l5kit.cle.metrics.SupportsMetricCompute], validators: Optional[Iterable[l5kit.cle.validators.SupportsMetricValidate]] = None, composite_metrics: Optional[Iterable[l5kit.cle.composite_metrics.SupportsCompositeMetricCompute]] = None, intervention_validators: Optional[List[str]] = None)

Bases: object

Evaluation plan describes a plan to evaluate metrics and run validators. It is composed of the list of metrics that should be computed as well as the list of validators that will run. It also checks the plan for consistency (validators depend on the metrics they build on).

Note

The intervention_validators argument specifies a list of validators that act as interventions: if any validator in this list is triggered, validation of all other validators is stopped (their results are reset).

Parameters
  • metrics – list of the metrics to compute

  • validators – list of validators to compute

  • composite_metrics – list of composite metrics to compute

  • intervention_validators – list of validators that are considered interventions.
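
A sketch of a plan where a collision acts as an intervention; the metric classes and thresholds below are illustrative choices, not defaults:

    from l5kit.cle.closed_loop_evaluator import EvaluationPlan
    from l5kit.cle.metrics import CollisionFrontMetric, DisplacementErrorL2Metric
    from l5kit.cle.validators import RangeValidator

    # If "collision_front" is triggered, the other validators are reset.
    plan = EvaluationPlan(
        metrics=[DisplacementErrorL2Metric(), CollisionFrontMetric()],
        validators=[
            RangeValidator("displacement_error_l2", DisplacementErrorL2Metric,
                           max_value=30),
            RangeValidator("collision_front", CollisionFrontMetric, max_value=0),
        ],
        intervention_validators=["collision_front"],
    )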

composite_metrics_dict() → Dict[str, l5kit.cle.composite_metrics.SupportsCompositeMetricCompute]

Get the composite metric names and composite metrics from the plan.

evaluate(simulation_output: l5kit.simulation.unroll.SimulationOutputCLE) → Dict[str, torch.Tensor]

Execute the evaluation (metric computation) on the scene.

Parameters

simulation_output – output from the closed-loop simulator

Returns

a dictionary with the results from all metrics, indexed by metric name

evaluate_composite(simulation_output: l5kit.simulation.unroll.SimulationOutputCLE, scene_metrics: Dict[str, torch.Tensor], scene_validation: Dict[str, l5kit.cle.validators.ValidatorOutput]) → Dict[str, float]

Execute the evaluation of the composite metrics on the scene.

Parameters
  • simulation_output – output from the closed-loop simulator

  • scene_metrics – metric results indexed by the metric name

  • scene_validation – validator outputs indexed by the validator name

Returns

results from the composite metrics indexed by the composite metric name

metrics_dict() → Dict[str, l5kit.cle.metrics.SupportsMetricCompute]

Get the metric names and metrics from the plan.

process_interventions(results: Dict[str, l5kit.cle.validators.ValidatorOutput]) → Dict[str, l5kit.cle.validators.ValidatorOutput]

This method processes the validator results according to the validators defined as interventions. If any validator that is also an intervention is triggered, it resets all other validators.

Parameters

results – results from the validation

Returns

the updated results
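
An illustrative sketch of the intervention semantics, using the plan from the sketch above (where "collision_front" is registered as an intervention); ValidatorOutput is assumed to be the (is_valid_scene, failed_frames) tuple from l5kit.cle.validators:

    from l5kit.cle.validators import ValidatorOutput

    # Both validators failed, but "collision_front" is an intervention.
    results = {
        "collision_front": ValidatorOutput(False, [12]),
        "displacement_error_l2": ValidatorOutput(False, [40, 41]),
    }
    updated = plan.process_interventions(results)
    # "collision_front" keeps its failed frames; the other validators are
    # reset to a passing state, since the intervention pre-empts them.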

validate(scene_metrics: Dict[str, torch.Tensor], simulation_output: l5kit.simulation.unroll.SimulationOutputCLE) → Dict[str, l5kit.cle.validators.ValidatorOutput]

Execute the validation (validators) on all metric results.

Parameters
  • scene_metrics – the result for the metrics computation

  • simulation_output – output from the closed-loop simulator

Returns

the results from all validators

validators_dict() → Dict[str, l5kit.cle.validators.SupportsMetricValidate]

Get the validator names and validators from the plan.