This example was automatically generated from a Jupyter notebook in the RxInferExamples.jl repository.
We welcome and encourage contributions! You can help by:
- Improving this example
- Creating new examples
- Reporting issues or bugs
- Suggesting enhancements
Visit our GitHub repository to get started. Together we can make RxInfer.jl even better! 💪
These examples demonstrate the use of RxInfer for trajectory planning in multi-agent situations. The animations show trajectories obtained through probabilistic inference. The examples in this notebook are based on https://github.com/biaslab/MultiAgentTrajectoryPlanning/blob/main/door.jl, prepared by Michi-Tsubaki and extended by bvdmitri. The original code is part of the paper Multi-Agent Trajectory Planning with NUV Priors by Bart van Erp.
Introduction
This notebook demonstrates multi-agent trajectory planning using probabilistic inference with RxInfer.jl. In this example, we model multiple agents navigating through an environment with obstacles while trying to reach their respective goals. The planning problem is formulated as Bayesian inference, where:
- Agent states evolve according to linear dynamics
- Collision avoidance between agents and obstacles is encoded as probabilistic constraints
- Goal-seeking behavior is represented as prior distributions
By performing inference on this probabilistic model, we can compute optimal trajectories that balance goal-reaching with collision avoidance. The visualization shows how agents coordinate their movements to navigate efficiently through the environment.
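To make this formulation a bit more concrete, here is a minimal sketch of the kind of generative model involved, assuming linear-Gaussian dynamics and a Gaussian goal prior (the exact factors and constraint encodings used later in this notebook may differ):

$$x^{(i)}_{t+1} \sim \mathcal{N}\big(A\,x^{(i)}_t,\; Q\big), \qquad x^{(i)}_T \sim \mathcal{N}\big(g^{(i)},\; \Sigma_{\text{goal}}\big)$$

where $x^{(i)}_t$ is the state of agent $i$ at time step $t$ and $g^{(i)}$ is its goal position. Collision avoidance between agents and obstacles enters as additional soft probabilistic constraints on pairwise distances, and inference then yields posteriors over the trajectories $x^{(i)}_{1:T}$.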
using LinearAlgebra, RxInfer, Plots, LogExpFunctions, StableRNGs
Environment setup
To test our ideas, we need an environment to work with. We are going to create a simple environment consisting of a plane with boxes as obstacles. These boxes can be placed anywhere we want on the plane, allowing us to experiment with different configurations and scenarios. This flexible setup will help us evaluate how our multi-agent trajectory planning algorithms perform under various conditions and obstacle arrangements.
# A simple struct to represent a rectangle, which is defined by its center (x, y) and size (width, height)
Base.@kwdef struct Rectangle
    center::Tuple{Float64, Float64}
    size::Tuple{Float64, Float64}
end
function plot_rectangle!(p, rect::Rectangle)
    # Calculate the x-coordinates of the four corners
    x_coords = rect.center[1] .+ rect.size[1]/2 * [-1, 1, 1, -1, -1]
    # Calculate the y-coordinates of the four corners
    y_coords = rect.center[2] .+ rect.size[2]/2 * [-1, -1, 1, 1, -1]

    # Plot the rectangle with a black fill
    plot!(p, Shape(x_coords, y_coords),
        label = "",
        color = :black,
        alpha = 0.5,
        linewidth = 1.5,
        fillalpha = 0.3)
end
# A simple struct to represent an environment, which is defined by a list of obstacles,
# and in this demo the obstacles are just rectangles
Base.@kwdef struct Environment
    obstacles::Vector{Rectangle}
end
function plot_environment!(p, env::Environment)
    for obstacle in env.obstacles
        plot_rectangle!(p, obstacle)
    end
    return p
end
function plot_environment(env::Environment)
    p = plot(size = (800, 400), xlims = (-20, 20), ylims = (-20, 20), aspect_ratio = :equal)
    plot_environment!(p, env)
    return p
end
plot_environment (generic function with 1 method)
In the code above, we’ve defined two key structures for our environment:
- Rectangle: A simple structure representing rectangular obstacles, defined by:
  - center: The (x, y) coordinates of the rectangle's center
  - size: The (width, height) of the rectangle
- Environment: A structure that contains a collection of obstacles (rectangles)
We’ve also defined several plotting functions:
- plot_rectangle!: Adds a rectangle to an existing plot
- plot_environment!: Adds all obstacles in an environment to an existing plot
- plot_environment: Creates a new plot and displays the environment
These structures and functions provide the foundation for visualizing our 2D environment where multi-agent trajectory planning will take place.
Let’s create a couple of different environments to demonstrate multi-agent trajectory planning. You can experiment with different obstacle configurations by modifying the rectangle positions, sizes, and quantities. This will allow you to test how the agents navigate around various obstacle arrangements and interact with each other in different scenarios.
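As a quick illustration of that flexibility, here is a small custom environment; the obstacle positions and sizes below are arbitrary choices for this sketch and are not used elsewhere in the notebook:

# A hypothetical custom environment with a few scattered box obstacles
custom_environment = Environment(obstacles = [
    Rectangle(center = (-10, 5), size = (4, 4)),
    Rectangle(center = (0, -5), size = (6, 2)),
    Rectangle(center = (10, 5), size = (4, 4))
])

plot_environment(custom_environment)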
Door environment
In this environment, we’ll create a scenario resembling a doorway that agents must navigate through. The environment will consist of two wall-like obstacles with a narrow passage between them, simulating a door or gateway. This setup will test the agents’ ability to coordinate when passing through a constrained space, which is a common challenge in multi-agent path planning. The narrow passage will force agents to negotiate the right-of-way and potentially wait for each other to pass through, demonstrating emergent cooperative behaviors.
door_environment = Environment(obstacles = [
    Rectangle(center = (-40, 0), size = (70, 5)),
    Rectangle(center = (40, 0), size = (70, 5))
])
plot_environment(door_environment)
Wall environment
In this environment, we’ll create a scenario with a wall in the center that agents must navigate around. The environment will consist of a single elongated obstacle positioned in the middle of the space, forcing agents to choose whether to go above or below the wall. This setup will test the agents’ ability to find efficient paths around obstacles and coordinate with each other to avoid congestion on either side of the wall. It represents a common scenario in multi-agent navigation where agents must make decisions about which route to take when faced with a barrier.
wall_environment = Environment(obstacles = [
    Rectangle(center = (0, 0), size = (10, 5))
])
plot_environment(wall_environment)
Combined environment
In this environment, we’ll combine the door and wall scenarios to create a more complex navigation challenge. This environment will feature both a narrow doorway that agents must pass through and a wall obstacle they need to navigate around. This combined setup will test the agents’ ability to handle multiple types of obstacles in sequence, requiring more sophisticated path planning and coordination. Agents will need to negotiate the doorway and then decide which path to take around the wall, or vice versa depending on their starting and goal positions. This represents a more realistic scenario where environments often contain various types of obstacles that require different navigation strategies.
combined_environment = Environment(obstacles = [
    Rectangle(center = (-50, 0), size = (70, 2)),
    Rectangle(center = (50, 0), size = (70, 2)),
    Rectangle(center = (5, -1), size = (3, 10))
])
plot_environment(combined_environment)
In this section, we define states and goals for our agents. Each agent has an initial position and a target position. These states will be used to drive agent movement through the environment. The trajectory planning algorithm will use this information to generate paths from start to destination while avoiding obstacles. We start by defining the necessary structures and functions for the goals.
# Agent plan, encodes start and goal states
Base.@kwdef struct Agent
    radius::Float64
    initial_position::Tuple{Float64, Float64}
    target_position::Tuple{Float64, Float64}
end
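For example, an agent could be constructed as follows; the radius and positions here are illustrative placeholders, not the values used later in the notebook:

# A hypothetical agent starting on the left and targeting the right side of the plane
example_agent = Agent(
    radius = 2.5,
    initial_position = (-15.0, 10.0),
    target_position = (15.0, -10.0)
)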
function plot_marker_at_position!(p, radius, position; color="red", markersize=10.0, alpha=1.0, label="")
    # Draw the agent as a circle with the given radius
    θ = range(0, 2π, 100)
    x_coords = position[1] .+ radius .* cos.(θ)
    y_coords = position[2] .+ radius .* sin.(θ)

    plot!(p, Shape(x_coords, y_coords); color=color, label=label, alpha=alpha)
    return p
end