A comprehensive multi-view visual dataset for human activity recognition, designed for assistive robots and HRI scenarios
26,804 videos across 14 activities • 4 synchronized viewpoints • RGB + Skeleton data
The Robot House Multi-View (RHM) Dataset is a human activity recognition dataset designed for assistive robotics and ambient intelligence applications. Recorded in a typical British home environment, it addresses the critical need to understand human activities in the domestic settings where companion robots and smart home systems must operate effectively.
What sets RHM apart is its focus on essential daily living activities that are crucial for independent living and home care scenarios. The dataset features synchronized multi-view recordings from four strategically positioned cameras, including a unique mobile robot perspective that provides dynamic viewpoints not available in traditional fixed-camera setups.
The RHM dataset employs a sophisticated four-camera configuration designed to capture comprehensive activity information from complementary perspectives. This multi-view approach enables robust activity recognition that can handle occlusions, varying lighting conditions, and different spatial relationships.
The 14 activity classes in RHM were carefully selected based on their importance for independent living and their relevance to assistive robotics applications. Each activity represents a fundamental daily task that companion robots and ambient systems must recognize to provide meaningful assistance.
These activities span fundamental categories essential for home care: mobility (walking, stairs), object manipulation (lifting, carrying), self-care (drinking, stretching), and household tasks (cleaning, reaching). This comprehensive coverage ensures the dataset addresses real-world scenarios where assistive robots must demonstrate understanding and appropriate response capabilities.
The RHM dataset follows rigorous machine learning best practices with standardized train-validation-test splits that ensure reliable evaluation and comparison across different research approaches.
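RHM ships fixed, standardized splits, which should be used as-is for comparable results. Purely as an illustration of the underlying principle, a stratified split that preserves per-class proportions can be sketched like this (a generic example, not the dataset's official split procedure):

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.7, 0.15, 0.15), seed=0):
    """Split sample indices into train/val/test, preserving the
    per-class proportions. Illustrative only: RHM provides its own
    predefined splits."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, val, test = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_train = int(len(idxs) * ratios[0])
        n_val = int(len(idxs) * ratios[1])
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]
    return train, val, test

# Toy example: 14 activity classes, as in RHM
labels = [i % 14 for i in range(1400)]
train, val, test = stratified_split(labels)
```

Stratification matters here because class balance in each split keeps per-activity evaluation meaningful.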
Building upon the RGB foundation, the RHM Skeleton dataset provides skeleton-based pose data extracted using HRNet, a state-of-the-art human pose estimator. This multi-modal extension enables researchers to explore both appearance-based and pose-based approaches to activity recognition, offering complementary perspectives for robust system development.
Extensive quality assessment reveals critical insights about pose extraction reliability across different camera views and activity types, providing valuable guidance for multi-view system design.
The RHM dataset addresses critical gaps in human activity recognition research, particularly in domestic environments where assistive technologies must operate reliably and safely.
Enable home care robots to understand daily activities, predict user needs, and provide appropriate assistance while maintaining safety and privacy.
Develop smart home systems that adapt to user behavior patterns, optimize energy usage, and provide proactive support for independent living.
Create non-intrusive monitoring systems for elderly care, rehabilitation tracking, and early detection of health changes through activity pattern analysis.
Advance computer vision research in multi-perspective analysis, view fusion techniques, and robust activity recognition under varying conditions.
The RHM dataset evolution represents a comprehensive research trajectory spanning foundational work to advanced multi-modal analysis:
Please cite the relevant papers below if you are using the datasets in your research.
Comprehensive documentation of the multi-view RGB dataset, detailing collection methodology, synchronization techniques, and baseline performance evaluations.
Advanced extension incorporating skeleton pose data, quality analysis across multiple views, and applications to ambient assisted living scenarios.
Original dataset introduction presenting the Robot House platform and initial activity recognition capabilities, laying groundwork for multi-view expansion.
Foundational exploration of HAR challenges in domestic environments, establishing the conceptual framework that inspired the RHM dataset development.
Both the RHM RGB dataset and RHM Skeleton dataset are available for research purposes. Please cite the relevant publications when using these datasets in your research.
Multi-view RGB video dataset with 26,804 videos across 4 synchronized camera views covering 14 daily activities.
Skeleton pose dataset with 17 keypoints extracted from 6,700 synchronized videos using HRNet, stored in 5D tensor format.
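The exact axis ordering of RHM's 5D tensors is not spelled out here; a common convention for skeleton-based recognition is (N, C, T, V, M): samples, coordinate channels, frames, joints, persons. A minimal sketch under that assumption, with V = 17 matching the 17 HRNet keypoints (check the dataset documentation for the actual layout):

```python
import numpy as np

# Hypothetical axis order (N, C, T, V, M): samples, coordinate
# channels, frames, joints, persons. RHM's actual layout may differ.
N, C, T, V, M = 8, 2, 64, 17, 1          # small toy batch
skeletons = np.zeros((N, C, T, V, M), dtype=np.float32)

# x/y coordinates of every joint in frame 0 of the first clip:
frame0 = skeletons[0, :, 0, :, 0]        # shape (2, 17)

# Centre each frame on its mean joint position, a common
# normalisation step for skeleton-based activity recognition:
mean = skeletons.mean(axis=3, keepdims=True)   # (N, C, T, 1, M)
centred = skeletons - mean
```

Keeping the joint axis explicit like this makes per-joint normalisation and graph-based models straightforward to apply.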
Access the complete implementation code, processing tools, and analysis frameworks developed for the RHM dataset. Each repository contains specific components of the dataset creation and analysis pipeline.
Single-view activity recognition implementation and baseline experiments.
Tools for extracting features from video frames for activity analysis.
Dual-stream C3D implementation for multi-view activity recognition.
OpenCV-based tools for batch video editing and preprocessing.
Repository Collection: Complete toolkit for dataset processing, feature extraction, model training, and evaluation. Includes baseline experiments, preprocessing scripts, and analysis tools for multi-view human activity recognition research.
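Multi-view recognition ultimately requires combining evidence from the four camera streams. One simple and widely used approach is score-level (late) fusion, averaging each view's class probabilities before taking the final decision. The sketch below illustrates that idea with NumPy; it is a generic example, not the exact fusion method used in the repositories above:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(view_logits):
    """Score-level late fusion: average per-view class probabilities.

    view_logits: list of (num_clips, num_classes) arrays, one per camera.
    Returns fused class predictions, shape (num_clips,).
    """
    probs = np.mean([softmax(l) for l in view_logits], axis=0)
    return probs.argmax(axis=1)

# Toy example: 4 synchronized views, 3 clips, 14 activity classes
rng = np.random.default_rng(0)
view_logits = [rng.normal(size=(3, 14)) for _ in range(4)]
preds = fuse_views(view_logits)
```

Late fusion is attractive for multi-view setups like RHM because each view's classifier can be trained independently, and a view degraded by occlusion simply contributes a flatter probability distribution.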