Bevy Version: 0.14 (outdated!)
As this page is outdated, please refer to Bevy's official migration guides while reading, to cover the differences: 0.14 to 0.15.
I apologize for the inconvenience. I will update the page as soon as I find the time.
Background Computation
Relevant official examples: async_compute, external_source_external_thread.
Sometimes you need to perform long-running background computations. You want to do that in a way that does not hold up Bevy's main frame update loop, so that your game can keep refreshing and feeling responsive with no lag spikes.
To do this, Bevy offers a special AsyncComputeTaskPool. You can spawn tasks there, and Bevy will run them on special CPU threads dedicated to running background computations.
When you initiate a task, you get a Task handle, which you can use to check for completion.
It is common to write two separate systems, one for initiating tasks and storing the handles, and one for handling the finished work when the tasks complete.
use bevy::prelude::*;
use bevy::utils::HashMap;
use bevy::tasks::futures_lite::future;
use bevy::tasks::{block_on, AsyncComputeTaskPool, Task};
// (`MyMapChunkData`, `decide_what_chunks_to_generate`, and `generate_map_chunk`
// are placeholders for your own chunk data type and generation logic)
#[derive(Resource)]
struct MyMapGenTasks {
generating_chunks: HashMap<UVec2, Task<MyMapChunkData>>,
}
fn begin_generating_map_chunks(
mut my_tasks: ResMut<MyMapGenTasks>,
) {
let task_pool = AsyncComputeTaskPool::get();
for chunk_coord in decide_what_chunks_to_generate(/* ... */) {
// we might have already spawned a task for this `chunk_coord`
if my_tasks.generating_chunks.contains_key(&chunk_coord) {
continue;
}
let task = task_pool.spawn(async move {
// TODO: do whatever you want here!
generate_map_chunk(chunk_coord)
});
my_tasks.generating_chunks.insert(chunk_coord, task);
}
}
fn receive_generated_map_chunks(
mut my_tasks: ResMut<MyMapGenTasks>
) {
my_tasks.generating_chunks.retain(|chunk_coord, task| {
// check on our task to see how it's doing :)
let status = block_on(future::poll_once(task));
// keep the entry in our HashMap only if the task is not done yet
let retain = status.is_none();
// if this task is done, handle the data it returned!
if let Some(mut chunk_data) = status {
// TODO: do something with the returned `chunk_data`
}
retain
});
}
// every frame, we might have some new chunks that are ready,
// or the need to start generating some new ones. :)
app.add_systems(Update, (
begin_generating_map_chunks, receive_generated_map_chunks
));
Internal Parallelism
Your tasks can also spawn additional independent tasks themselves, for extra parallelism, using the same API as shown above, from within the closure.
If you'd like your background computation tasks to process data in parallel, you can use scoped tasks. This allows you to create tasks that borrow data from the function that spawns them.
Using the scoped API can also be easier, even if you don't need to borrow data, because you don't have to worry about storing and awaiting the Task handles.
A common pattern is to have your main task (the one you initiate from your systems, as shown earlier) act as a "dispatcher", spawning a bunch of scoped tasks to do the actual work, as in the sketch below.
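Here is a minimal sketch of that pattern. `process_chunk` and the data layout are assumptions for illustration; `scope` lets the spawned tasks borrow `data` from the dispatcher, and waits for all of them to complete before returning:
use bevy::tasks::AsyncComputeTaskPool;

// hypothetical: process one slice of the data in place
fn process_chunk(chunk: &mut [f32]) {
    for x in chunk.iter_mut() {
        *x *= 2.0;
    }
}

fn begin_heavy_work() {
    let task_pool = AsyncComputeTaskPool::get();
    let task = task_pool.spawn(async move {
        let mut data = vec![1.0f32; 4096];
        // the "dispatcher": fan out scoped tasks that each borrow
        // a different part of `data`; `scope` blocks until all of
        // them have completed
        task_pool.scope(|scope| {
            for chunk in data.chunks_mut(1024) {
                scope.spawn(async move {
                    process_chunk(chunk);
                });
            }
        });
        data // the main task returns the processed data
    });
    // store or detach the `Task` handle as shown earlier
    task.detach();
}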
I/O-heavy Workloads
If your intention is to do background I/O (such as networking or accessing files) instead of heavy CPU work, you can use IoTaskPool instead of AsyncComputeTaskPool. The APIs are the same as shown above. The choice of task pool just helps Bevy schedule and manage your tasks appropriately.
For example, you could spawn tasks to run your game's multiplayer netcode, save/load game save files, etc. Bevy's asset loading infrastructure also makes use of the IoTaskPool.
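As a minimal sketch (assuming the smol-compatible async-fs crate, and a hypothetical save file name and format), saving a file in the background could look like this:
use bevy::tasks::IoTaskPool;

fn save_game() {
    // write the save file in the background, without
    // holding up the frame update loop
    IoTaskPool::get().spawn(async move {
        if let Err(e) = async_fs::write("my_savegame.ron", "...").await {
            eprintln!("Could not save game: {e}");
        }
    }).detach();
}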
Passing Data Around
The previous examples showcased a "spawn-join" programming pattern, where you start tasks to perform some work and then consume the values they return after they complete.
If you'd like to have some long-running tasks that send values back to you, instead of returning, you can use channels (from the async-channel crate). Channels can also be used to send data to your long-running background tasks.
Set up some channels and put the side you want to access from Bevy in a resource. To receive data from Bevy systems, you should poll the channels using a non-blocking method, like try_recv, to check if data is available.
use bevy::prelude::*;
use bevy::tasks::{AsyncComputeTaskPool, IoTaskPool};
use async_channel::{Sender, Receiver};
/// Messages we send to our netcode task
enum MyNetControlMsg {
DoSomething,
// ...
}
/// Messages we receive from our netcode task
enum MyNetUpdateMsg {
SomethingHappened,
// ...
}
/// Channels used for communicating with our game's netcode task.
/// (The side used from our Bevy systems)
#[derive(Resource)]
struct MyNetChannels {
tx_control: Sender<MyNetControlMsg>,
rx_updates: Receiver<MyNetUpdateMsg>,
}
fn setup_net_session(
mut commands: Commands,
) {
// create our channels:
let (tx_control, rx_control) = async_channel::unbounded();
let (tx_updates, rx_updates) = async_channel::unbounded();
// spawn our background i/o task for networking
// and give it its side of the channels:
IoTaskPool::get().spawn(async move {
my_netcode(rx_control, tx_updates).await
}).detach();
// NOTE: `.detach()` to let the task run
// without us storing the `Task` handle.
// Otherwise, the task will get canceled!
// (though in a real application, you probably want to
// store the `Task` handle and have a system to monitor
// your task and recreate it if necessary)
// put our side of the channels in a resource for later
commands.insert_resource(MyNetChannels {
tx_control, rx_updates,
});
}
fn handle_net_updates(
my_channels: Res<MyNetChannels>,
) {
// Non-blocking check for any new messages on the channel
while let Ok(msg) = my_channels.rx_updates.try_recv() {
// TODO: do something with `msg`
}
}
fn tell_the_net_task_what_to_do(
my_channels: Res<MyNetChannels>,
) {
if let Err(e) = my_channels.tx_control.try_send(MyNetControlMsg::DoSomething) {
// TODO: handle errors. Maybe our task has
// returned or panicked, and closed the channel?
}
}
/// This runs in the background I/O task
async fn my_netcode(
rx_control: Receiver<MyNetControlMsg>,
tx_updates: Sender<MyNetUpdateMsg>,
) {
// TODO: Here we can connect and talk to our multiplayer server,
// handle incoming `MyNetControlMsg`s, send `MyNetUpdateMsg`s, etc.
while let Ok(msg) = rx_control.recv().await {
// TODO: do something with `msg`
// Send data back, to be handled from Bevy systems:
tx_updates.send(MyNetUpdateMsg::SomethingHappened).await
.expect("Error sending updates over channel");
// We can also spawn additional parallel tasks
IoTaskPool::get().spawn(async move {
// ... some other I/O work ...
}).detach();
AsyncComputeTaskPool::get().spawn(async move {
// ... some heavy CPU work ...
}).detach();
}
}
app.add_systems(Startup, setup_net_session);
app.add_systems(FixedUpdate, (
tell_the_net_task_what_to_do,
handle_net_updates,
));
Make sure to add async-channel to your Cargo.toml:
[dependencies]
async-channel = "2.3.1"
Wider Async Ecosystem
Bevy's task pools are built on top of the smol runtime.
Feel free to use anything from its ecosystem of compatible crates:
- async-channel - Multi-producer multi-consumer channels
- async-fs - Async filesystem primitives
- async-net - Async networking primitives (TCP/UDP/Unix)
- async-process - Async interface for working with processes
- async-lock - Async locks (barrier, mutex, reader-writer lock, semaphore)
- async-io - Async adapter for I/O types, also timers
- futures-lite - Misc helper and extension APIs
- futures - More helper and extension APIs (notably the powerful select! and join! macros)
- Any Rust async library that supports smol.
Using Your Own Threads
While not typically recommended, sometimes you might want to manage an actual dedicated CPU thread of your own. For example, if you also want to run another framework's runtime (such as tokio) in parallel with Bevy. You might have to do this if you have to use crates built for another async ecosystem that are not compatible with smol.
To interoperate with your non-Bevy thread, you can move data between it and Bevy using channels. Do the equivalent of what was shown in the example earlier on this page, but instead of async-channel, use the channel types provided by your alternative runtime (such as tokio), or std/crossbeam for raw OS threads.