Human activity recognition (HAR) can be used for a number of applications, such as health-care services and smart home applications. Many sensors have been utilized for human activity recognition, such as wearable sensors, smartphones, radio frequency (RF) sensors (WiFi, RFID), LED light sensors, cameras, etc. Owing to the rapid development of wireless sensor networks, a large amount of data has been collected for the recognition of human activities with different kinds of sensors. Conventional shallow learning algorithms, such as support vector machines and random forests, require manually extracting representative features from large and noisy sensory data. However, manual feature engineering requires expert knowledge and will inevitably miss implicit features.
Recently, deep learning has achieved great success in many challenging research areas, such as image recognition and natural language processing. The key merit of deep learning is that it automatically learns representative features from massive data. This technology is a good candidate for human activity recognition, and some initial attempts can be found in the literature. However, many challenging research problems in terms of accuracy, device heterogeneity, environmental changes, etc., remain unsolved.
This workshop intends to promote state-of-the-art approaches on deep learning for human activity recognition. The organizers invite researchers to participate and submit their research papers to the Deep Learning for Human Activity Recognition Workshop.
September 15, 2020 | Submission deadline |
September 30, 2020 | Acceptance notification |
January 4-10, 2021 | Conference date |
Due to COVID-19, the main conference has been postponed to January 2021; we will therefore also extend the submission deadline to give authors more time. Thanks.
The proceedings have been published in the Springer CCIS book series (CCIS, volume 1370) with the Link
Device-based HAR using deep learning
Device-free HAR using deep learning
Image based HAR using deep learning
Light sensor based HAR using deep learning
Sensor fusion for HAR using deep learning
Fusion of shallow models with deep networks for HAR
Device heterogeneity for device-based HAR
Environmental changes for device-free HAR
Transfer Learning for HAR
Online Learning for HAR
Semi-supervised Learning for HAR
Survey for deep learning based HAR
Submission Format: Authors should follow the IJCAI paper preparation instructions, including page length (e.g. 6 pages + 1 extra page for references). Submission Link
IJCAI-20 Registration is open. Registration Link
Time Zone: UTC
12:00AM--12:10AM Welcome from Organizers
12:10AM--12:30AM
Oral 1: Fully Convolutional Network Bootstrapped by Word Encoding and Embedding for Activity Recognition in Smart Homes
Damien Bouchabou, Sao Mai Nguyen, Christophe Lohr, Ioannis Kanellos and Benoit LeDuc
12:30AM--12:50AM
Oral 2: Single Run Action Detector over Video Stream - A Privacy Preserving Approach
Anbumalar Saravanan, Justin Sanchez, Hassan Ghasemzadeh, Aurelia Macabasco-O'Connell and Hamed Tabkhi
12:50AM--01:10AM
Oral 3: Personalization Models for Human Activity Recognition With Distribution Matching-Based Metrics
Huy Thong Nguyen, Hyeokhyen Kwon, Harish Haresamudram, Andrew Peterson and Thomas Ploetz
01:10AM--01:30AM
Oral 4: Efficacy of Model Fine-Tuning for Personalized Dynamic Gesture Recognition
Junyao Guo, Unmesh Kurup and Mohak Shah
01:30AM--01:50AM
Oral 5: Towards User Friendly Medication Mapping Using Entity-Boosted Two-Tower Neural Network
Shaoqing Yuan, Parminder Bhatia, Busra Celikkaya, Haiyang Liu and Kyunghwan Choi
01:50AM--02:20AM Tea Break
02:20AM--02:40AM
Oral 6: ARID: A New Dataset for Recognizing Action in the Dark
Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin and Simon See
02:40AM--03:00AM
Oral 7: Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility by Deep Neural Networks
Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka Matsuo and Ikuko Eguchi Yairi
03:00AM--03:20AM
Oral 8: Towards Data Augmentation and Interpretation on Sensor-based Fine-grained Hand Activity Recognition
Jinqi Luo, Xiang Li and Rabih Younes
03:20AM--03:40AM
Oral 9: Resource-Constrained Federated Learning with Heterogeneous Labels and Models for Human Activity Recognition
Gautham Krishna Gudur and Satheesh Kumar Perepu
03:40AM--04:00AM
Oral 10: Human Activity Recognition using Wearable Sensors: Review, Challenges, Evaluation Benchmark
Reem Abdel-Salam, Rana Mostafa and Mayada Hadhood
Nanyang Technological University/A*STAR, Singapore
xlli@i2r.a-star.edu.sg, A*STAR, Singapore
wumin@i2r.a-star.edu.sg, A*STAR, Singapore
chen_zhenghua@i2r.a-star.edu.sg, A*STAR, Singapore
zhangleuestc@gmail.com, Nankai University, P.R.C
Sichuan University, P.R.C
Advanced Digital Sciences Center, Singapore
Nanyang Technological University, Singapore
A*STAR, Singapore
Cornell University, USA
Purdue University, USA
University of California, Berkeley, USA
University of Oxford, UK
University of Amsterdam, The Netherlands
Tencent AI Lab, P.R.C
McLaren Applied Technologies, UK
Co-organizer