The WELL (World Experience Learning Library) is a
multi-modal training dataset built from real-world
human physical activity: motion capture, spatial
orientation, video, audio, and task context
collected by TRACE mining partners during ordinary
daily work and leisure activities across diverse
environments and geographies.
The dataset is designed for training embodied AI
systems: humanoid robot control policies,
Vision-Language-Action models, and behavioral
foundation models that require grounded
understanding of how people actually move and
interact in unstructured settings.
TRACE is currently in private alpha. The WELL
dataset is under active development as our first
mining partners begin collecting verified task
data.
We are building toward commercial licensing of the
dataset for research and product development in
embodied AI. If you are a researcher, robotics
company, or AI lab interested in early access,
partnership, or licensing, we would like to hear
from you.
What we can discuss now:
- Data modalities, formats, and collection methodology
- Licensing structure and terms
- Research collaboration and early access programs
- Custom data collection for specific task verticals
For research inquiries and data licensing:
@exosequitur on Telegram
For general questions about TRACE, join our
community:
TRACE Telegram group | @TRCDynamics on X
In private alpha — public beta coming soon
Check the FAQ