l5kit.planning.vectorized.local_graph module

class l5kit.planning.vectorized.local_graph.LocalMLP(dim_in: int, use_norm: bool = True)

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor) → torch.Tensor

Forward of the module

Parameters

x (torch.Tensor) – input tensor (…, dim_in)

Returns

output tensor (…, dim_in)

Return type

torch.Tensor

training: bool
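
A minimal usage sketch for LocalMLP, assuming only the signature and shapes documented above; the feature size and leading dimensions below are illustrative, not values required by the module.

>>> import torch
>>> from l5kit.planning.vectorized.local_graph import LocalMLP
>>> mlp = LocalMLP(dim_in=128, use_norm=True)
>>> x = torch.randn(2, 30, 128)  # any leading dimensions are allowed: (..., dim_in)
>>> out = mlp(x)                 # shape is preserved: (..., dim_in)
>>> out.shape
torch.Size([2, 30, 128])
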
class l5kit.planning.vectorized.local_graph.LocalSubGraph(num_layers: int, dim_in: int)

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, invalid_mask: torch.Tensor, pos_enc: torch.Tensor) → torch.Tensor

Forward of the module:

  • Add positional encoding

  • Forward to layers

  • Aggregate using max (calculates a feature descriptor per element, reducing over points)

Parameters
  • x (torch.Tensor) – input tensor (B,N,P,dim_in)

  • invalid_mask (torch.Tensor) – invalid mask for x (B,N,P)

  • pos_enc (torch.Tensor) – positional encoding for x

Returns

output tensor (B,N,dim_in)

Return type

torch.Tensor

training: bool
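
A minimal usage sketch for LocalSubGraph, based on the shapes documented above. The mask convention (True marks an invalid point) and the positional-encoding shape (anything broadcastable to x, e.g. produced by SinusoidalPositionalEmbedding) are assumptions; all sizes are illustrative.

>>> import torch
>>> from l5kit.planning.vectorized.local_graph import LocalSubGraph
>>> B, N, P, dim_in = 2, 5, 20, 128                        # batch, elements, points per element, feature size
>>> subgraph = LocalSubGraph(num_layers=3, dim_in=dim_in)
>>> x = torch.randn(B, N, P, dim_in)
>>> invalid_mask = torch.zeros(B, N, P, dtype=torch.bool)  # assumed: True marks padded/invalid points
>>> pos_enc = torch.zeros(1, 1, P, dim_in)                 # assumed broadcastable to x
>>> out = subgraph(x, invalid_mask, pos_enc)               # one descriptor per element after the max over points
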
class l5kit.planning.vectorized.local_graph.LocalSubGraphLayer(dim_in: int, dim_out: int)

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, invalid_mask: torch.Tensor) → torch.Tensor

Forward of the model

Parameters
  • x (torch.Tensor) – input tensor (B,N,P,dim_in)

  • invalid_mask (torch.Tensor) – invalid mask for x (B,N,P)

Returns

output tensor (B,N,P,dim_out)

Return type

torch.Tensor

training: bool
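
A construction-level sketch for LocalSubGraphLayer. The forward shapes are taken from the docstring above, but since this layer is presumably stacked inside LocalSubGraph (whose forward pass is described above as "Forward to layers") rather than called directly, the example stops at instantiation; the sizes are illustrative.

>>> import torch
>>> from l5kit.planning.vectorized.local_graph import LocalSubGraphLayer
>>> layer = LocalSubGraphLayer(dim_in=128, dim_out=128)  # keeps the feature size unchanged here
>>> isinstance(layer, torch.nn.Module)
True
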
class l5kit.planning.vectorized.local_graph.SinusoidalPositionalEmbedding(d_model: int, max_len: int = 5000)

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor) → torch.Tensor

Parameters

x (torch.Tensor) – Input tensor of shape batch_size x num_agents x sequence_length x d_model

training: bool
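
A minimal usage sketch for SinusoidalPositionalEmbedding, assuming only the documented signature; the return value is not documented above, so the example only shows the call. Sizes are illustrative.

>>> import torch
>>> from l5kit.planning.vectorized.local_graph import SinusoidalPositionalEmbedding
>>> pos_emb = SinusoidalPositionalEmbedding(d_model=128, max_len=5000)
>>> x = torch.randn(2, 5, 20, 128)  # batch_size x num_agents x sequence_length x d_model
>>> pos_enc = pos_emb(x)            # positional encoding derived from x (e.g. to pass to LocalSubGraph)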