Name	Modified	Size
torchrl-0.11.1-cp313-cp313t-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp313-cp313-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp314-cp314-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp314-cp314t-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp311-cp311-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp311-cp311-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp313-cp313-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp310-cp310-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp313-cp313t-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp313-cp313t-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp312-cp312-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp314-cp314-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp310-cp310-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp313-cp313-macosx_12_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp314-cp314t-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp313-cp313-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp312-cp312-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp314-cp314t-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp310-cp310-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp314-cp314-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp314-cp314t-win_amd64.whl 2026-02-05 2.0 MB
torchrl-0.11.1-cp310-cp310-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp313-cp313t-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp314-cp314-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp312-cp312-manylinux_2_28_aarch64.whl 2026-02-05 2.1 MB
torchrl-0.11.1-cp311-cp311-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp312-cp312-macosx_11_0_arm64.whl 2026-02-05 2.4 MB
torchrl-0.11.1-cp311-cp311-manylinux_2_28_x86_64.whl 2026-02-05 2.1 MB
README.md 2026-02-05 4.3 kB
TorchRL v0.11.1 source code.tar.gz 2026-02-05 9.3 MB
TorchRL v0.11.1 source code.zip 2026-02-05 10.0 MB
Totals: 31 items, 79.3 MB

Highlights

This patch release includes several important bug fixes and performance improvements:

  • Fixed Composite.encode() to correctly set the batch size of the output TensorDict
  • Fixed StepCounter to properly track nested truncated and done states in multi-agent environments
  • Fixed shared memory weight updater to work correctly with collectors using multiple policies
  • Fixed _repr_html_ dispatch in parallel environments that was causing doc CI failures
  • Added scalar_output_mode to loss modules for proper handling of reduction='none'
  • Fixed torch.compile configuration for Dreamer
  • Performance: GPU Image Transforms for Dreamer (~5.5x faster sampling)
  • Performance: SliceSampler GPU acceleration for faster trajectory computation
  • Performance: Always enable prefetch for replay buffer

Breaking Changes

No breaking changes in this release.

Bug Fixes

  • Fixed batch size in Composite.encode: The Composite.encode() method now correctly sets the batch_size of the output TensorDict to match the shape of the tensor spec, rather than returning an empty batch size. (#3411) - @tobiabir

Previously, calling Composite.encode(raw_vals) would return a TensorDict with batch_size=torch.Size([]) regardless of the spec's shape. This is now fixed to return the correct batch size matching the spec shape.
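The semantics of the fix can be sketched with a dependency-free toy model (the class names below are illustrative stand-ins, not the torchrl API): a composite spec should stamp its own shape onto the container it builds instead of leaving the batch size empty.

```python
class ToyTensorDict:
    """Minimal stand-in for a TensorDict: a dict plus a batch_size."""
    def __init__(self, data, batch_size=()):
        self.data = data
        self.batch_size = tuple(batch_size)

class ToyComposite:
    """Minimal stand-in for Composite: named sub-specs plus a shape."""
    def __init__(self, shape, **specs):
        self.shape = tuple(shape)
        self.specs = specs

    def encode(self, raw_vals):
        encoded = {key: raw_vals[key] for key in self.specs}
        # Before the fix: batch_size=() regardless of self.shape.
        # After the fix: batch_size mirrors the spec's shape.
        return ToyTensorDict(encoded, batch_size=self.shape)

spec = ToyComposite(shape=(4,), obs=object(), action=object())
td = spec.encode({"obs": [0.0] * 4, "action": [1] * 4})
print(td.batch_size)  # (4,)
```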

  • Fixed StepCounter nested done/truncated tracking in multi-agent environments: StepCounter now properly updates nested truncated and done keys for multi-agent environments. (#3405) - @vmoens

When using StepCounter with multi-agent environments (e.g., PettingZoo), the transform now correctly propagates truncated/done signals to agent-specific keys (e.g., ("agent", "truncated")) in addition to the root-level keys.
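The propagation logic can be sketched with plain dicts (hypothetical helper, not the torchrl implementation): the root-level flags are mirrored onto each agent group's nested keys.

```python
def propagate_done_flags(step_output, agent_groups):
    """Copy root 'done'/'truncated' flags to each agent group's nested keys."""
    for flag in ("done", "truncated"):
        root_value = step_output.get(flag, False)
        for group in agent_groups:
            nested = step_output.setdefault(group, {})
            # Before the fix only the root keys were updated; nested
            # ("agent", "truncated")-style entries could keep stale values.
            nested[flag] = root_value
    return step_output

out = propagate_done_flags({"truncated": True, "done": False}, ["agent"])
print(out["agent"])  # {'done': False, 'truncated': True}
```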

  • Fixed shared memory weight updater with multiple policies: The shared memory weight updater now correctly handles collectors that use multiple policies. (#3442) - @vmoens

  • Fixed _repr_html_ dispatch in parallel environments: Parallel environments no longer incorrectly dispatch private/special attribute access (like _repr_html_) to worker processes. (#3441) - @vmoens

  • Added scalar_output_mode to loss modules: Loss modules (SAC, IQL, CQL, CrossQ, REDQ, DecisionTransformer) now support scalar_output_mode parameter for proper handling of reduction='none'. (#3426) - @vmoens
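The problem this addresses can be sketched generically (the function and parameter semantics below are illustrative assumptions, not the torchrl signatures): with `reduction='none'` a loss returns per-sample values, yet a single scalar is still needed for the optimizer and logging, so a separate reduction mode decides how that scalar is derived.

```python
def compute_loss(errors, reduction="mean", scalar_output_mode="mean"):
    """Toy loss: 'mean'/'sum' return a scalar directly; 'none' keeps the
    per-sample losses and derives a scalar via scalar_output_mode."""
    if reduction == "mean":
        return sum(errors) / len(errors)
    if reduction == "sum":
        return sum(errors)
    # reduction='none': keep per-sample terms, reduce separately.
    per_sample = list(errors)
    scalar = (sum(per_sample) / len(per_sample)
              if scalar_output_mode == "mean" else sum(per_sample))
    return per_sample, scalar

per_sample, scalar = compute_loss([1.0, 3.0], reduction="none")
print(per_sample, scalar)  # [1.0, 3.0] 2.0
```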

  • Fixed torch.compile configuration for Dreamer: Fixed compilation settings for Dreamer world model training. - @vmoens

Performance Improvements

  • GPU Image Transforms for Dreamer: ~5.5x faster sampling with GPU-accelerated image transforms. - @vmoens
  • SliceSampler GPU acceleration: Faster trajectory computation using GPU. - @vmoens
  • Always enable prefetch for replay buffer: Improved data loading performance. - @vmoens
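What prefetching buys can be sketched generically with a background thread that prepares the next batch while the trainer consumes the current one (illustrative only; torchrl's replay buffer handles this internally):

```python
import queue
import threading

def prefetching_sampler(sample_fn, num_batches, prefetch=2):
    """Yield batches produced ahead of time by a worker thread."""
    q = queue.Queue(maxsize=prefetch)

    def worker():
        for _ in range(num_batches):
            q.put(sample_fn())  # blocks once `prefetch` batches are queued
        q.put(None)             # sentinel: no more batches

    threading.Thread(target=worker, daemon=True).start()
    while (batch := q.get()) is not None:
        yield batch

batches = list(prefetching_sampler(lambda: [0, 1, 2], num_batches=3))
print(len(batches))  # 3
```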

Cleanup

  • Removed pin_memory from replay buffer: Simplified replay buffer configuration. - @vmoens

Internal / CI Improvements

  • Added PyTorch version check instructions to release prompt (#3443) - @vmoens
  • Added tutorials CI workflow for testing sphinx tutorials (#3441) - @vmoens
  • Upgraded meshgrid usage to address PyTorch deprecation warning (#3412) - @vmoens
  • Added flaky test tracking system for improved CI reliability (#3408) - @vmoens
  • Added file-based auto-labeling for PR components (#3402) - @vmoens
  • Improved LLM prompt for release workflow (#3399) - @vmoens

Contributors

Thanks to all contributors to this release:

  • @tobiabir (Tobias Birchler) - First-time contributor!
  • @vmoens (Vincent Moens)

Installation

```bash
pip install torchrl==0.11.1
```

Or with conda:

```bash
conda install -c pytorch torchrl=0.11.1
```

Source: README.md, updated 2026-02-05