title: Fall Prediction Dataset for Humanoid Robots
datasets:
- naos-fall-prediction
tags:
- humanoid-robotics
- fall-prediction
- machine-learning
- sensor-data
- robotics
- temporal-convolutional-networks
license:
- apache-2.0
Fall Prediction Dataset for Humanoid Robots
Dataset Summary
This dataset consists of 37.9 hours of real-world sensor data collected from 20 Nao humanoid robots over the course of one year in various test environments, including RoboCup soccer matches. The dataset includes 18.3 hours of walking data, featuring 2519 falls. It captures a wide range of activities such as omni-directional walking, collisions, standing up, and falls on various surfaces like artificial turf and carpets.
The dataset is primarily designed to support the development and evaluation of fall prediction algorithms for humanoid robots. It includes data from multiple sensors, such as gyroscopes, accelerometers, and force-sensing resistors (FSR), recorded at a high frequency to track robot movements and falls with precision.
This dataset was used to develop the RePro-TCN model, which outperforms existing fall prediction methods under real-world conditions. The model leverages temporal convolutional networks (TCNs) and incorporates advanced training techniques such as progressive forecasting and relaxed loss formulations.
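The RePro-TCN implementation is not part of this card, but as a rough illustration of the causal, dilated convolution block that TCN-based models rest on, here is a minimal PyTorch sketch. The channel count, layer sizes, and names are assumptions for illustration only, not the authors' architecture; progressive forecasting and the relaxed loss are not shown.

# Minimal illustration of a temporal convolutional (TCN) block.
# NOT the RePro-TCN architecture; sizes and names are assumptions.
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated, causal 1-D convolution block with a residual connection."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-pad so the convolution never looks into the future (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))  # pad on the left only
        y = self.act(self.conv(y))
        return x + y  # residual connection

# Example: a tiny stack with increasing dilation over 8 assumed sensor channels.
tcn = nn.Sequential(
    nn.Conv1d(8, 32, kernel_size=1),   # project sensor channels
    CausalConvBlock(32, dilation=1),
    CausalConvBlock(32, dilation=2),
    CausalConvBlock(32, dilation=4),
    nn.Conv1d(32, 1, kernel_size=1),   # per-timestep fall score
)
scores = tcn(torch.randn(4, 8, 200))   # (batch=4, 1, time=200)

The left-only padding means each output step depends only on past samples, which is what makes this kind of block suitable for online fall prediction.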
Dataset Structure
- Duration: 37.9 hours total, 18.3 hours of walking
- Falls: 2519 falls during walking scenarios
- Data Types: Gyroscope (roll, pitch), accelerometer (x, y, z), body angle, and force-sensing resistors (FSR) per foot.
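The exact on-disk layout is defined by the dataset files themselves; as a minimal sketch, assuming the recordings are exported as CSV with one column per sensor channel, loading and inspecting one recording with pandas could look like the following. The path and column names are hypothetical and should be checked against the actual files.

# Hedged sketch: load one recording and inspect the sensor channels.
# File path and column names are hypothetical; check the actual files.
import pandas as pd

SENSOR_COLUMNS = [
    "gyro_roll", "gyro_pitch",       # gyroscope
    "acc_x", "acc_y", "acc_z",       # accelerometer
    "angle_roll", "angle_pitch",     # body angle
    "fsr_left", "fsr_right",         # force-sensing resistors per foot
]

df = pd.read_csv("data/recording_001.csv")   # hypothetical path
print(df.columns.tolist())                   # compare against SENSOR_COLUMNS
print(df[[c for c in SENSOR_COLUMNS if c in df.columns]].describe())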
Use Cases
- Humanoid robot fall prediction and prevention
- Robot control algorithm benchmarking
- Temporal sequence modeling in robotics
Licensing
This dataset is shared under the Apache 2.0 license, which permits use, modification, and redistribution, provided the license text and attribution notices are preserved.
Citation
If you use this dataset in your research, please cite it as follows:
How to Use the Dataset
To get started with the Fall Prediction Dataset for Humanoid Robots, follow the steps below:
1. Set Up a Virtual Environment
It's recommended to create a virtual environment to isolate dependencies. You can do this with the following command:
python -m venv .venv
After creating the virtual environment, activate it:
On Windows:
.venv\Scripts\activate
On macOS/Linux:
source .venv/bin/activate
2. Install Dependencies
Once the virtual environment is active, install the necessary packages by running:
pip install -r requirements.txt
3. Run the Example Script
To load and use the dataset for training a simple LSTM model, run the usage_example.py script:
python usage_example.py
This script demonstrates how to:
- Load the dataset
- Select the relevant sensor columns
- Split the data into training and test sets
- Train a basic LSTM model to predict falls
- Evaluate the model on the test set
Make sure to check the script and adjust the dataset paths if necessary. For further details, see the comments within the script.
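usage_example.py itself is the authoritative reference; the sketch below only illustrates the listed steps end to end. The file path, column names, label column, window length, and hyperparameters are illustrative assumptions and will need to be adapted to the actual data layout.

# Hedged sketch of the steps in usage_example.py; not the actual script.
# Paths, column names, window length, and labels are illustrative assumptions.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

# 1. Load the dataset (hypothetical CSV with a binary "fall" label column).
df = pd.read_csv("data/walking_data.csv")

# 2. Select the relevant sensor columns (hypothetical names).
sensor_cols = ["gyro_roll", "gyro_pitch", "acc_x", "acc_y", "acc_z"]
X = df[sensor_cols].to_numpy(dtype=np.float32)
y = df["fall"].to_numpy(dtype=np.float32)

# Slice the stream into fixed-length windows; a window counts as a fall
# if any sample inside it is labelled as a fall.
WINDOW = 50
n = len(X) // WINDOW
X = X[: n * WINDOW].reshape(n, WINDOW, len(sensor_cols))
y = y[: n * WINDOW].reshape(n, WINDOW).max(axis=1)

# 3. Split the data into training and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# 4. Train a basic LSTM model to predict falls.
class FallLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # score from the last timestep

model = FallLSTM(len(sensor_cols))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

X_tr_t = torch.from_numpy(X_tr)
y_tr_t = torch.from_numpy(y_tr).unsqueeze(1)
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(X_tr_t), y_tr_t)
    loss.backward()
    opt.step()

# 5. Evaluate the model on the test set.
with torch.no_grad():
    preds = torch.sigmoid(model(torch.from_numpy(X_te))) > 0.5
accuracy = (preds.squeeze(1).numpy() == y_te.astype(bool)).mean()
print(f"test accuracy: {accuracy:.3f}")

Because the recordings are time series, the split above keeps temporal order (shuffle=False) so that test windows are not drawn from inside the training sequences.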