Human activity recognition (HAR) can be used for a number of applications, such as health-care services and smart home applications. Many sensors have been utilized for human activity recognition, such as wearable sensors, smartphones, radio frequency (RF) sensors (WiFi, RFID), LED light sensors, cameras, etc. Owing to the rapid development of wireless sensor networks, a large amount of data has been collected for the recognition of human activities with different kinds of sensors. Conventional shallow learning algorithms, such as support vector machines and random forests, require representative features to be manually extracted from large and noisy sensory data. However, manual feature engineering requires expert knowledge and will inevitably miss implicit features.
Recently, deep learning has achieved great success in many challenging research areas, such as image recognition and natural language processing. The key merit of deep learning is its ability to automatically learn representative features from massive data, which makes it a strong candidate for human activity recognition. Some initial attempts can be found in the literature. However, many challenging research problems in terms of accuracy, device heterogeneity, environment changes, etc. remain unsolved.
This workshop intends to promote state-of-the-art approaches to deep learning for human activity recognition. The organizers invite researchers to participate and submit their research papers to the Deep Learning for Human Activity Recognition Workshop.
June 30, 2021 | Submission deadline
July 15, 2021 | Acceptance notification
August 21-23, 2021 | Conference dates
As requested by authors, we have extended the submission deadline to June 30 to allow more time for preparation. Thanks.
Potential topics include but are not limited to
Device-based HAR using deep learning
Device-free HAR using deep learning
Image-based HAR using deep learning
Light-sensor-based HAR using deep learning
Sensor fusion for HAR using deep learning
Fusion of shallow models with deep networks for HAR
Device heterogeneity in device-based HAR
Transfer Learning for HAR
Federated Learning for HAR
Reinforcement Learning for HAR
Online Learning for HAR
Self-supervised Learning for HAR
Semi-supervised Learning for HAR
Surveys of deep learning based HAR
Submission Format: Authors should follow the IJCAI paper preparation instructions, including the page limit (6 pages + 1 extra page for references). Submission Link
Time: Montreal Time (UTC-4), August 19
20:00--20:10 Welcome from Organizers
20:10--20:40
Keynote Presentation by Prof Sinno Pan from Nanyang Technological University, Singapore
Title: Distribution-embedded Networks for Sensor-based Activity Recognition
20:40--21:00
Oral 1: Human Activity Recognition using Attribute-Based Neural Networks and Context Information
Stefan Lüdtke, Fernando Moya Rueda, Waqas Ahmed, Gernot A. Fink and Thomas Kirste
21:00--21:20
Oral 2: Device-free Multi-Location Human Activity Recognition using Deep Complex Network
Xue Ding, Ting Jiang, Zhiwei Li, Jianfei Yang, Sheng Wu and Yi Zhong
21:20--21:50
Keynote Presentation by Prof Zhang Juyong from University of Science and Technology of China (USTC), China
Title: Digitalizing Everyone in the World
21:50--22:10
Oral 3: Few Shot Activity Recognition Using Variational Inference
Neeraj Kumar and Siddhansh Narang
22:10--22:30
Invited Presentation by Dr. Xu Yuecong from A*STAR, Singapore
Title: Recognizing Actions in the Dark: Starting from the ARID Dataset
End of the Session
Advanced Digital Sciences Center, Singapore
Nanyang Technological University, Singapore
A*STAR, Singapore
A*STAR, Singapore
University of New South Wales, Australia
Zhejiang University, China
Nanyang Technological University, Singapore
Microsoft, USA
A*STAR, Singapore
University of Edinburgh, UK
University of Amsterdam, Netherlands
A*STAR, Singapore
Zimmer Biomet, UK
Co-organizer